Friday, 28 February 2014

Samsung slip up at MWC?

So it's MWC time in Barcelona, and the rumours about what could be released had me eagerly looking to see what might be my next mobile.

Obviously the expected big gun, the Samsung S5, was going to be announced. Well, that has to be the biggest tech disappointment for me. For some reason Samsung released a phone which is, for the most part, worse than their 6-month-old Galaxy Note 3. Each year the S range has, spec-wise, been the best phone for at least a few months, but now Sony have outdone them with the Z2 and are even planning to release it before the S5.

It seems strange that Sony are suddenly looking to win the spec race. Not only that but they are even talking about a new flagship every 6 months!

Meanwhile HTC have already mocked the S5 and claimed that their follow-up to the HTC One, which is to be unveiled March 25th, will be superior.

I was certainly considering the S5 prior to the announcement, but now it is off my list. Essentially, I would be surprised if its minor camera update allowed it to compete with the Sony Z2, Nokia 1020 or 1520. In fact HTC's rumoured new camera might be able to surprise, but with no OIS, a standard-sized sensor and just improved focus speed, there does not seem to be much to be excited about in the camera department.

Then you might consider the RAM: at 2GB it is great now, but this year will be the year of 3GB for Android and probably even for Windows Phone. The S5 looks worse than its rivals, and the only real highlight is a removable battery.

Now my current contract does not run out until well after the HTC announcement, and I would certainly be tempted by the rumoured new Lumia 1820. I do have a bit of a quandary though: if HTC's announcement disappoints me and Nokia bring nothing new, then the Sony Z2 will still be the best phone on the market for me...

But as I stated in an earlier post, I do not intend to purchase Sony products any more due to how their customer support treated me. As of now I would be choosing between the Lumia 1520/Icon or delaying for the Galaxy Note 4...can I really wait that long for new tech?!

Sunday, 23 February 2014

MongoDB vs Intersystems Cache Part 2: Query Performance

As I alluded to in my previous post, there are many reasons to pick a database; performance is only one of them, but as it is the easiest to assess objectively, that is where I am starting.

Here is a quick look at query performance, using our pair of databases, each 1,010,000 records strong.

How about a count of the number of editions which have a 1 in them (about 47,000)?

MongoDB - 0.906 seconds
Intersystems Cache - 2.247 seconds

Well, MongoDB takes a healthy lead here. It should be said that by default MongoDB has its own query language, whereas Intersystems Cache can be accessed through iteration or via SQL queries.
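As a rough illustration of the MongoDB side, a count like the one above could be issued from Python with the pymongo driver. The field name "edition" and the structure below are my assumptions, since the schema is not shown in the post:

```python
import re

# Client-side equivalent of the server-side filter, handy for sanity checks.
ONE_RE = re.compile("1")

def has_a_one(edition):
    # True when the edition string contains the character "1".
    return bool(ONE_RE.search(str(edition)))

def count_editions_with_one(collection):
    # Server-side count using the same regular expression; the field
    # name "edition" is an assumption based on the post.
    return collection.count_documents({"edition": {"$regex": "1"}})
```

On the Cache side the same count could be expressed through its SQL projection, something like `SELECT COUNT(*) FROM Book WHERE Edition LIKE '%1%'` (table and column names hypothetical).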

Now, I could have biased the query strategy. For example, Intersystems Cache is significantly faster at selecting whole phrases: a search on a specific property which appears multiple times can be returned faster than in MongoDB, but in both cases I was looking at 0.010 seconds vs 0.007 seconds. Cache was consistently faster, although the difference was on the order of milliseconds, which can be important in certain contexts.

Saturday, 22 February 2014

MongoDB vs Intersystems Cache Part 1: Insert Performance

While looking through Stack Overflow I noticed an interesting question about the differences between Intersystems Cache and MongoDB.

Now these databases are substantially different in many respects, but both allow for a more direct object-mapping approach than you will normally experience in a standard SQL environment. Having read that MongoDB had recently secured $150M in financing, I was curious as to what the platform was like.

Picking a database is never just about raw performance, but it is one of the most important factors, so I thought I would start there.

What I thought would be interesting was to compare the insert performance of objects in Intersystems Cache vs MongoDB.

I took a very simple example: inserting 10,000 objects, each with 3 properties, with no indexes defined. I performed both tests 3 times on my laptop and averaged the results. This is not exactly a rigorous benchmark, but as Cache was significantly faster at this than MSSQL and MySQL in previous tests, I thought it would be a good first look.
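For the MongoDB half, the timing harness might look something like this sketch. The database and collection names, the field names, and the connection string are all my own assumptions, and running the `__main__` part needs a local mongod plus the pymongo driver:

```python
import time

def make_docs(n):
    # Build n simple documents with 3 properties each, mirroring the
    # objects described above; the field names are hypothetical.
    return [{"a": i, "b": "object-%d" % i, "c": i % 100} for i in range(n)]

def time_bulk_insert(collection, docs):
    # Return the wall-clock seconds taken by a single bulk insert.
    start = time.perf_counter()
    collection.insert_many(docs)
    return time.perf_counter() - start

if __name__ == "__main__":
    from pymongo import MongoClient  # requires a running mongod
    coll = MongoClient("mongodb://localhost:27017")["bench"]["objects"]
    coll.drop()  # start from an empty, index-free collection
    print("10,000 inserts: %.3f seconds" % time_bulk_insert(coll, make_docs(10000)))
```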

MongoDB - 0.436 seconds
Intersystems - 0.445 seconds

I then followed this up with a larger test of 1,000,000 records.

MongoDB - 39.925 seconds
Intersystems - 43.445 seconds

It looks like they manage similar performance for database inserts.

In part 2 I will look into query performance and more fundamental differences between the two NoSQL databases.

Friday, 21 February 2014

1 step backwards cancels 10 steps forwards

Problems of Implementing Change

As a software developer you are always an agent of change. There is a problem with this role, however: many of your customers will not share your joy of change in quite the same light.

When Office 2007 came out I personally thought it was a substantially better interface. However, there was plenty of online criticism: features had been renamed and moved, Office experts suddenly didn't know where their obscure, several-levels-deep options were, and even though most options were easier to find, because some were harder the new interface was not undeniably better. People even went so far as to develop software to alter the new ribbon interface back to the 2003 design.

Windows 8 suffers from some of the same issues. Microsoft improved the boot-up time and the hibernate functionality, and they built in lots of new features; however, the tile interface change and the separation of desktop and tile-based apps has led to the entire product being criticised.

It can be disheartening that when you make a change it can be criticised so harshly. I have noticed that only undeniable improvements are universally accepted. Even if 10 steps forward have been made and only 1 step back, no matter how insignificant that 1 step may be, it will frequently override all the other changes.

The only improvements that tend to be universally accepted are performance changes with no obvious change in functionality, and minor changes to a single point within a program which remove a substantial number of repetitive steps.

I worked on a project where there was a process change: the time the customer took to complete a task went from 2 minutes down to 30 seconds to 1 minute. While the change objectively improved efficiency, it was not well received.

The original process had 8 distinct steps that were always required. The new process had 3 to 5 distinct steps depending on the situation. The customer objected to the fact that they could, in a minority of cases, need to carry out 5 steps.

Unfortunately, as they say, you only get to make a first impression once, and so even when it was agreed that the change was definitely an improvement, the first impression cast the project in a negative light.


While it would be great to always make universally loved changes, designs will always have an element of compromise. That does not mean you cannot help your users to love your changes, though.

Consult Your Users Early

If you tell your users about proposed changes before you even implement them, they will have a chance to offer their criticism, and you will be able to discuss the design and come up with a solution that they have approved. When it is released they will have a vested interest in it, as they have to some extent requested it, and no one likes being wrong.


Try to promote the change in an objective fashion. If the end user is aware of how the change objectively helps, they will be able to offset that against any criticism they feel is justified. If you cannot promote your change objectively to illustrate the improvement, then perhaps your customers are right and your change was wrong!

Opting Out

When major changes come about, they can be hard for your customers to stomach; altering their practices to fit your new paradigm can be difficult. It is the case that most users stick with the defaults, to the extent that it is almost pointless having options in applications. However, intelligent opt-outs can be developed to mitigate this.

I feel Windows 8 would have benefited massively if the new tile interface had been on by default only on touch-enabled devices. There was an improvement with Windows 8.1 and the option of going directly to the desktop, but this does not go quite far enough for most people who are not using touch devices and would prefer to remove tiles entirely. Allowing non-touch-screen users to try tiles and disable them would probably have removed the majority of the complaints about Windows 8 and allowed the new benefits to shine through.

Consider something similar: either intelligent defaults, or a tutorial introduction to the new version and the benefits it contains. Advising people of the benefits helps predispose them towards accepting any compromises that may have been made to existing functionality. At the very least you are preparing them for the change, so they have time to adopt strategies that best fit any new working practices.

Is it ever a good idea to re-write from scratch?

Joel Spolsky wrote many years ago that a re-write from scratch was the single worst strategic mistake a company could make.

He pointed to a number of notable failures that were re-writes from scratch. I believe his most salient points are:

  • You will have devoted a very large amount of development time to a product which essentially has the same value as the product you are currently selling.
  • You will not have the resources to improve the features of your existing product, and your competitors will overtake you.
  • Even if you develop using all the best practices available, such as TDD or BDD, and have a wonderful QA team, a live, field-tested product is actually likely to be more robust.

In essence he implies that refactoring legacy code is always the best path to adopt. The evidence he provides, albeit from pre-2000, is somewhat compelling, and in fact some very successful companies have stuck to his mantra of refactoring, sometimes in unique ways. By refactoring I mean the process of merging and separating existing classes, relatively minor reorganisation of code (i.e. no substantial architectural changes), and adjusting code to follow some form of coding standard to improve the readability and maintainability of the system.
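As a toy illustration of that kind of refactoring (the class and names here are invented for the example), extracting an inlined calculation into its own method changes no behaviour, but leaves a smaller, separately testable unit behind:

```python
class Invoice:
    def __init__(self, lines):
        self.lines = lines  # list of (quantity, unit_price) pairs

    def total(self):
        # Imagine this sum was previously inlined inside a long render()
        # function; extracting it is a behaviour-preserving reorganisation.
        return sum(qty * price for qty, price in self.lines)

def render(invoice):
    # The rendering code now delegates instead of duplicating the maths.
    return "Total: %.2f" % invoice.total()
```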

PHP is famous for its rapid development ability and for being easy to learn. It has also been derided for many years for its lack of full object support and, as a non-compiled language, for its performance. Facebook was developed in PHP, and while the number of users was small, rapid feature development was obviously extremely desirable. However, with such a massive user base it became clear that performance carries a significant cost. Rather than re-write from scratch, they took a different approach to the problem and developed HipHop, which was at first a PHP-to-C++ compiler but later evolved into a JIT-compiling virtual machine. This allowed them to keep all of their production-ready, tested code but greatly reduce their operating costs. While I do not have access to the figures involved, I would be surprised if anyone suggested this approach was not substantially cheaper than re-writing their code base in a compiled language and re-hiring or re-training their workforce in the new language.

The "never re-write" mantra is a little depressing for a developer who works on a legacy project, with spaghetti code greeting them on a daily basis. If the developer has worked with the legacy code for long enough, then they are not re-writing from scratch, but re-writing with years of business logic experience, knowledge of failed pathways, and ideas of how the system architecture can be improved.

Refactoring is always quoted as the best way to deal with spaghetti code; however, with a large code-base, where do you start? How aggressively do you refactor? And if in the end you have to refactor the entire code base, are you really saving time compared to reworking from the model upwards?

Additionally, refactoring does not help if you are:
  • using a dead or dying language
  • relying on unsupported technologies, software or hardware
  • running specialist systems with few specialist staff capable of filling the roles
  • paying 3rd-party licensing costs that have become a substantial proportion of your budget
If you are encountering those problems then something more aggressive than refactoring can be required. While a rewrite from scratch (12 months of intensive coding, developing a new "perfect" design, and building a platform you are sure you can reliably and rapidly develop on) may sound great, it is almost certainly not the best approach. Your sales staff won't be happy knowing that they cannot sell anything new for a year, and even when the rewrite arrives it will only do the same thing the last version did. The risk is huge, you are essentially giving a 1-year head start to your competitors, and that is assuming your rewrite does not end up overrunning.

There are two approaches that I believe allow you to adopt your glorious rewrite and yet also deliver added business value to your products.

New Minimum Viable Product

Trying to match a several-year-old legacy product feature for feature is likely to take a very long time before you can ship a release. It can also lead to a compromised design, where you match a feature's existing interaction design rather than applying a considered approach which could improve the functionality.

So why not use the new code base for a basic product offering? Instead of matching the old product feature for feature, develop a "basic" version which has the core functionality you need to actually sell, and then sell it as the cheaper version of your all-singing, all-dancing existing product.

This gives you a revenue stream relatively quickly, your new product gets field tested much earlier, and you have the base platform to build on until you can match and exceed your existing legacy product. 

Modular Plug-in

The modular approach is to replace smaller chunks of existing functionality with your new system. Plug your new architecture in side by side with your existing architecture and re-route your existing code through the new pathways in a modular fashion. There are several ways to achieve this, but essentially you take existing calls, put a new routing method at the front, and then send each call down either the existing route or the new route.

This approach has one major benefit: risk reduction. If your new routes encounter a problem, you should be able to switch back to the legacy routes straight away and mitigate the impact. You can target the areas that need new functionality first, or that are a nightmare to maintain, to achieve the greatest impact from your changes.
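A minimal sketch of such a routing method, with invented names: a front door that prefers the new implementation but can fall back to the battle-tested legacy route, either via a switch or automatically on failure:

```python
class Router:
    def __init__(self, legacy, replacement, use_new=True):
        self.legacy = legacy            # existing, field-tested code path
        self.replacement = replacement  # new architecture's code path
        self.use_new = use_new          # flip to False to roll back instantly

    def handle(self, request):
        if self.use_new:
            try:
                return self.replacement(request)
            except Exception:
                # Mitigate the impact of problems in the new route by
                # switching back to the legacy route straight away.
                return self.legacy(request)
        return self.legacy(request)
```

In practice the fallback would log the failure and feed a metric, so the team knows the new route is misbehaving rather than silently coasting on legacy code.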

There are, however, 4 major disadvantages to this approach:

  1. For the period before the legacy code is retired, you have essentially doubled the complexity of the paths an object can progress through. It is vital to plan a way to minimise this period, or the sanity of your support staff will be compromised.
  2. Replacing in a modular fashion makes it difficult to address fundamental architecture problems. It is certainly tempting to maintain the same structure and just swap out components, but if one of the issues is an unduly complex architecture then this approach will struggle to fix that.
  3. If you are switching language or technology and lack significant experience in it, then modular replacement may lead to serious problems. Your new system may not work well with your existing architecture, and it may have its own idiosyncrasies which end up reducing the stability of the system rather than improving it.
  4. Without a strong enough impetus, your legacy product may linger around for a very long time. The main problem with this is the support cost of the legacy system. This cost is often obfuscated at a management level and is difficult to place a monetary value on. Without a monetary value it becomes difficult to make a business case for the change, hence there is commonly a lack of impetus to change.
The modular approach's costs can often be minimised by combining it with the minimum viable product methodology. If you make your modules too small, you are likely to exacerbate the above costs, tempting as it is to do so to deliver quickly. Instead, it can be better to replace a larger chunk of functionality with a simpler solution: build the minimum viable version of it and re-route as much as possible through that. You will hopefully see a big support-cost saving from your initial change, and potentially performance and stability benefits as well. Build up the functionality over time, and then retire a large chunk of your legacy product in one go.