Wednesday, 29 June 2011

The future of books

At a dinner party recently, after I revealed that I am a software developer for a Library Management System, I was asked what I felt the future of books was.  The interest was primarily in what impact ebook readers like the Kindle might have on libraries and books.

Well, this is a pretty open-ended question and I would like to divide it into several topic areas.  Although this blog post will only scratch the surface, I am hopeful it will emphasise that the future is much more diverse than many "death of the library" posts are suggesting.

I feel the main areas for discussion in terms of libraries and ebooks are as follows: fiction versus non-fiction, public versus private libraries, technology change in respect to electronic reading devices, the first world versus the rest of the world, and reference versus study material.

Fiction versus non-fiction

If we look at current ebook sales versus physical copy sales we see an interesting fact: as of 2011, non-fiction literature saw a rise in US sales while fiction works saw a reduction.  I am not privy to all of the statistical analysis; however, looking at the headline figures it appears that the current short-term trend is for non-fiction to be purchased as physical books, whereas fiction works are increasingly being sold through the medium of the ebook.

This is perhaps backed up by studies that suggest the Kindle is not an appropriate learning aid, due to the difficulty of making notes in the text and of quickly referring back to passages.  Now, the Kindle's technical shortcomings versus a book I will discuss a little later, but currently the physical book is in many ways superior to the electronic version for non-fiction materials, which is reflected in the sales figures.

Public versus private

Public libraries have a split focus between non-fiction and fiction works, whereas private institutions focus on non-fiction works, so this electronic fiction trend is likely to have a larger impact initially on public libraries than on private libraries, which are likely to need to maintain physical stocks for longer and in larger amounts.

Technology change in respect to electronic reading devices

As mentioned, studies suggest that the Kindle is not yet good enough to replace the humble book.  The book technology has many advantages.  However, much like the scroll was replaced by the book, I suspect that technological innovation will make the electronic medium superior in practically all facets; once it is quicker to navigate, clearer to read and battery life is a non-issue, physical books will be displaced.  How long will this take?  Well, manufacturers are bringing folding tablets to the market in 2011/2012, affordable colour e-ink has been on the horizon for a while, and foldable / rollable OLED screens look like they are not too far away from production.  All of this suggests to me that in less than 5 years there will be devices which people will consider generally superior to books.  This to my mind will have a dramatic impact on physical book sales, and you will probably see a similar trend to music, with something like an 8% fall in sales year on year.

First world versus the rest of the world

OK, £150 for an ebook reader and a few books is "cheap" for a well-off first worlder, but libraries that lend books are the only way that the relatively impoverished will have access to good quality material for some time to come.  With public libraries in decline in the first world, I expect the reverse trend (at least for some time) in the rest of the world.

Reference versus study material

Now another trend from my own reading, which I feel will be felt across the non-fiction world, is reference versus study material.  The internet makes it very easy to find lots of diverse material and quickly look up facts, figures etc.  However, it does not yet provide great study guide material.  Books are still superior in this respect.  You can pick up an authoritative book and the author will have distilled a set of information relevant to the topic, giving you a good broad appreciation of the subject.  This is currently difficult to achieve on the internet, as it is difficult to search for information you do not know exists.  Once you have developed a broad appreciation for a topic you can readily use the internet for reference purposes, or to further explore topics you were made aware of in your study guide.

Now the internet's linked nature does mean that it can develop towards providing study guide information, but published works still have this arrow in their quiver in the hunt for public interest.

Actually, I feel this study guide advantage has already been exploited in publishing: IT reference materials have become much more study guide oriented and much less reference manual.  I feel this is a good thing, because once I am aware of a concept, if I need it again I can find the information on the internet much faster than in a reference manual; but again, if I am unaware of the concept in the first place, how do I search for it...

The other elephant in the room

Unfortunately publishers and their ebook distribution / subscription methods will affect the speed and nature of change over time.  They are currently adopting a model of treating an ebook much like a physical copy, with limited access, expiration ("wearing out") periods etc.  I am not sure that model will be sustainable and I suspect new platforms will evolve, altering the book/library landscape significantly over time.

Conclusion

Hopefully I have shown that this is a relatively complex arena with competing pressures.  I do suspect that technology will have a major impact on our learning structures; cognitive science research is constantly developing new teaching and learning techniques, and I suspect that eventually we will be able to download knowledge, with a vastly different society developing from that.

It's good to learn - Pretty Shadow effects with just one DIV!

Wow, the internet has come a long way; I remember when all the top designers were using 4-5 DIVs just to create rounded corners around a single page element.  Now with the magic of CSS3 you can create beautiful "up-turned corners" with only one DIV!

The basis of the up-turned corner is 3 angled shadow effects.  Now, doing this with an image can be fairly effective, but not only is that "cheating", it is not the cleanest, simplest, lowest-bandwidth method.  With just a minor CSS trick you can have the effect without images and all the extra bandwidth, late loading and other issues they can bring.

So how do you get 3 different shadow angles on a single DIV?

The answer is to apply two classes to the DIV.  This way you can use the :before and :after pseudo-elements along with the content property to create 3 distinct shadows.  You can then use the transform property to rotate the :before and :after pseudo-elements to create a lovely shadow on any CSS3-compliant browser.

So our HTML is incredibly simple


<div class="FancyCorners DropShadow">

  Wow aren't these up-turned corners pretty?</div>


Our CSS is a little more complex, but where would the sense of achievement be if it wasn't ;)


        .FancyCorners
        {
            /* Round the corners of the box itself (-moz- prefix for older Firefox) */
            -moz-border-radius: 4px;
            border-radius: 4px;
        }

        /* The two pseudo-elements that cast the angled corner shadows */
        .FancyCorners:before, .FancyCorners:after
        {
            bottom: 15px;
            left: 10px;
            width: 50%;
            height: 20%;
            -webkit-box-shadow: 0 15px 10px rgba(0, 0, 0, 0.7);
            -moz-box-shadow: 0 15px 10px rgba(0, 0, 0, 0.7);
            box-shadow: 0 15px 10px rgba(0, 0, 0, 0.7);
            -webkit-transform: rotate(-5deg);
            -moz-transform: rotate(-5deg);
            -ms-transform: rotate(-5deg);
            -o-transform: rotate(-5deg);
            transform: rotate(-5deg);
        }

        /* Mirror the second shadow into the right-hand corner */
        .FancyCorners:after
        {
            right: 10px;
            left: auto;
            -webkit-transform: rotate(5deg);
            -moz-transform: rotate(5deg);
            -ms-transform: rotate(5deg);
            -o-transform: rotate(5deg);
            transform: rotate(5deg);
        }

        /* The box itself: a subtle outer shadow plus a soft inset glow */
        .DropShadow
        {
            border: 1px solid rgb(236,224,0);
            position: relative;
            padding: 1em;
            width: 250px;
            background: rgb(255,242,0);
            -webkit-box-shadow: 0 1px 4px rgba(0, 0, 0, 0.3), 0 0 40px rgba(0, 0, 0, 0.1) inset;
            -moz-box-shadow: 0 1px 4px rgba(0, 0, 0, 0.3), 0 0 40px rgba(0, 0, 0, 0.1) inset;
            box-shadow: 0 1px 4px rgba(0, 0, 0, 0.3), 0 0 40px rgba(0, 0, 0, 0.1) inset;
        }

        /* Give the pseudo-elements a body and tuck them behind the box */
        .DropShadow:before, .DropShadow:after
        {
            content: "";
            position: absolute;
            z-index: -2;
        }
Do I still want Windows 8?

Further research has shown me that if you "jailbreak" an iPad and purchase a camera adapter you can link it to external USB storage, the lack of which, as I stated, was one of my main problems with the iPad.

However, this still does not endear me towards the iPad.  The fact that you have to "break" Apple's security restrictions just to perform what I would consider required functionality is a big strike against the product.  If Apple dropped their restrictions then perhaps I could be tempted, but I prefer being supplied a device over which I have more control, rather than one that operates in an Apple-only paradigm...

With Windows 8 rumoured to be available in 2012 Q1 I will find out if my wait was worth it, or if I should have been on the "Apple cart" this whole time.

Tuesday, 21 June 2011

I want Windows 8

I watched the Windows 8 preview again and I can't wait to pick up a Windows 8 tablet.  I know the iPad 2 is already out, looks beautiful and is a relatively reasonable £400, but it is not quite for me.  There are several aspects I do not like, but mainly it comes down to two.

Fixed storage size.  If I purchase a 32GB model then that is all I have.  OK, I could rip it open and put a new flash drive in, but that is not really the way I want to go...

iOS.  There are many plus points, but overall I like the existing applications I use, and the freedom of Windows and Android, too much.

Now, you might wonder why I would not look at the Galaxy 10.1 and 8.9.  Well, I have, and lovely they are too, but again no USB storage; storage is fixed.  At the Windows manufacturer conference the tablets all demonstrated the ability to connect to USB peripherals, and this for me is a very big deal!

I have seen suggestions that a Sept 2011 release date for tablets is not out of the question...I know that is really soon, but soon is always too far away ;)

Can't we make it a little more Google?

As I work for a Library Management software vendor it is perhaps not surprising that I have somewhat of an interest in search technologies.  However, what I believe is surprising is the lack of interest many software packages seem to show in this technology.  In the "Information Age", search technology is perhaps the king of all technologies.

Search technology has grown up a lot in the last few years; perhaps Google's dominance and wealth from being a market leader in this technology has elevated the lowly search's status, but quite simply I wonder why it was neglected for so long.

I remember a time of using multiple web search engines: Webcrawler was my early favourite, but Altavista soon became superior, then it was a 50/50 battle between Altavista and Yahoo, and then all of a sudden Google came from nowhere and won.  Anyway, early searches tended to follow the wildcard boolean search paradigm, i.e. you had to be very specific: a search for "child" would not return any results for "children", for example.

This exact-matching wildcard-based search is sometimes helpful if you know all of the data you are searching for and are trying to find data on something you know exists, but to some extent it requires an understanding of boolean logic, and I have found that even trained librarians can struggle with the difference between AND and OR.

Now, in the early days of the internet it was not too bad: you could do a simple wildcard search and, due to the limited material available, trawl through each and every link to find relevant information.  However, with the internet explosion even some of the most isolated niches simply have too much data to sift through to find the information you are attempting to retrieve.

This is where relevance ranking becomes of such great value.

Relevance ranking is what allows search engines such as Google to provide such simple search interfaces and return usefully ranked results.  What is more important is that it allows you to search through data and find items without needing to know whether the data exists, with a reasonable confidence that if you do not find anything then there is nothing there to find.

If I take a simple contrived example, the "old style" exact-matching search could lead you to believe a record is not there more easily than a relevance-ranked search.  If a person wanted to find the book "Harry Potter and the Deathly Hallows" with the older search methodology, they could mistakenly search for "harry potter and the dead hallows", not find a result, and perhaps assume that the item does not exist.  What the older methodology might then do is drop words to counteract this problem.  However, if we imagine that this search were to trigger a couple of drop words, then you could easily be swamped by all Harry Potter related material and have to sift through a lot of relatively irrelevant items.

Relevance ranking gets round this problem by essentially introducing ORs between each of the words.  This will return dramatically more results than the non-relevance-ranked search, but as the title "Harry Potter and the Deathly Hallows" will contain a higher percentage of the searched terms within its record, it will have a higher relevance rank than other records; so even if you end up returning every book in the library with your search, you will have the most important one at the top.
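
To make that concrete, a naive version of this ranking can be sketched in plain SQL (the book table and title column here are hypothetical, and the drop words "and" and "the" have already been removed from the search):

select title,
       (  case when title like '%harry%'   then 1 else 0 end
        + case when title like '%potter%'  then 1 else 0 end
        + case when title like '%dead%'    then 1 else 0 end
        + case when title like '%hallows%' then 1 else 0 end) as relevance
from book
order by relevance desc;

Real engines are far more sophisticated (term weighting, stemming, document length normalisation), but the principle is the same: the more of the searched terms a record contains, the higher it sorts.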

Another great aspect of relevance ranking is word stemming / breaking.  The introduction of full-text searching in MS SQL 2008, with word breakers in multiple languages, is in my opinion its most important new feature.  OK, you could point to Transparent Data Encryption, but for me the text searching means that any application based on SQL 2008 no longer has an excuse for a poor search mechanism.
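
As a minimal sketch of what this looks like in practice, assuming the same hypothetical book table with a full-text index on its title column, SQL 2008's FREETEXTTABLE function does the word breaking and stemming for you and hands back a rank to sort by:

select top 10 b.title, ft.[rank]
from freetexttable(book, title, 'harry potter and the dead hallows') as ft
    join book as b on b.book_id = ft.[key]
order by ft.[rank] desc;

The [key] and [rank] columns are what FREETEXTTABLE actually returns; the book_id join column is my assumption.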

I was recently dealing with a CRM solution that for some reason has a very poor search mechanism.  Data entry in CRMs is going to be relatively inaccurate: the people using them do so as a subsection of their job; they are not qualified catalogue librarians with OCD cataloguing skills.  A contact could be registered as Jon, John, Jonathan or Johnathan; Saint Francis Primary School could be St., St or Saint, and to make it even harder it could be referred to by some people as a Primary School and by others as a First School.  These inconsistencies can range from a minor irritation to, in the worst case, incorrect denial of customer service.

Now, in my situation I made a very simple change: I introduced wildcard searches on all terms and provided some limited training advice on how best to search.  So the SQL-driven searches went from

account.name like 'St Francis' 

to

account.name like 'St% Francis%'

Now, this change does introduce a longer processing time, but in the case of this application it moved the page request time from 0.128 secs to 0.188 secs.  Yes, significantly slower, but completely acceptable in the situation it was employed and unnoticeable to the staff using the application.

Additionally, it could be considered a problem that it now returns more results.  Again, this was completely acceptable: it significantly reduced the number of zero-result searches, it rarely increased the result count significantly, and most searches returned fewer than 10 results.

What it did allow, though, was some really fast searching.  Before, you had to type "John Smith" to return all the John Smiths, and as I stated this might not even include the person you were looking for.  Now you can search for "j smi" and return 12 results.  Much less typing and a better result set.
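
Under the hood there is nothing clever going on; the application simply splits the input on spaces and suffixes each term with a wildcard, so a search for "j smi" generates something like:

select *
from account
where account.name like 'j% smi%'

which happily matches "John Smith", "Jon Smithers" and so on.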

Too many applications do not even automatically insert wildcards, which under most circumstances would help the situation; and if you are upset about the string-handling inefficiency this introduces, then perhaps you should implement a proper full-text solution with word stemming / breaking, thesaurus terms, spell checking, phrase checking and relevance ranking.  There are plenty of tools to do this; yes, some are expensive, but as Lucene is open source you really have no excuse.
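
For what it is worth, the stemming and thesaurus pieces in SQL 2008 are a one-line predicate; a sketch against the same hypothetical book table:

select title
from book
where contains(title, 'FORMSOF(INFLECTIONAL, child)');

FORMSOF(INFLECTIONAL, child) matches "child" and "children" alike, and FORMSOF(THESAURUS, ...) does the same for synonyms defined in the thesaurus files.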

Introduce relevance ranking and so many UI problems go away; introduce faceting and you are even further down the road to providing your users with the information they want, easily and quickly.

Tuesday, 14 June 2011

Web typography - in a quandary

It is considered somewhat of a typographic rule that

“Anything from 45 to 75 characters is widely regarded as a satisfactory length of line for a single-column page set in a serifed text face in a text size. The 66-character line (counting both letters and spaces) is widely regarded as ideal. For multiple column work, a better average is 40 to 50 characters.”

What I am musing over is the appropriate behaviour for web applications with "liquid layouts" and columns which can contain multiple lines of text.  Essentially the choice boils down to either prescribing a value (or range) for the column width to fit to, or allowing the text to fill the remaining view port space available, given sensible element spacing.  Perhaps it is better to create a simple pros/cons list for the fill-the-view-port approach and attempt to weigh up the balance in favour of one approach or the other.

Pros

  • The end user is completely in control: they can adjust the view port to fit the text to a measure they are happy with, which may be very different from the majority's.
  • View port space is filled to its maximum potential
  • Vertical scrolling is minimised


Cons

  • Is the end user really in control?  Sure, they can in theory adjust their view port to mould the text to their choosing, but how many users have that level of competence, or can even be bothered to tweak their window size?
  • It can cause fixed elements to look disproportionate to the available space at the extreme width range, e.g. monitor-wide text columns on a 16:9 display mixed with "thin" tables and "thin" images.

Looking at the general case, the answer perhaps depends on the nature of the application; different websites adopt different approaches.  In fact some companies use both approaches on different parts of their site.  Check out Microsoft Support for a fixed column view and then check out MSDN with its ever-expanding columns.

In my specific case I believe that the weight is strongly on the Cons side: it is too common for users not to realise they can resize their wide screen view port; legibility and proportionality are improved for the majority of users by a prescribed width; and the minority case of the user who does not gain a significant benefit from the narrower column sizes on their 1920+ pixel width monitor should be considered a small loss against the larger gain.
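
In CSS terms, prescribing the measure is almost a one-liner.  A minimal sketch (the .Content class name is hypothetical, and 33em is my approximation of a 66-character line at an average glyph width of roughly half an em):

        .Content
        {
            max-width: 33em;   /* roughly the ideal 66-character measure */
            margin: 0 auto;    /* keep the column centred in wider view ports */
        }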

I have tried to research the proportion of users that run all applications full screen, but have not come up with anything.  When watching people use applications I generally see them run everything full screen unless they are developers.  Even with developers there is a tendency to have more than one monitor, so all apps are for the most part running full screen.  Hey, try to work with some applications in anything less than full screen and it is a bit of a struggle; see Visual Studio and SQL Management Studio for my favourite examples (auto-hiding the toolbars does not count ;) ).

Yes perhaps you could go on a moral crusade and not code for the less technical users, but that seems rather a poor separatist mentality that really will not fly within the business world.  Yes we all sometimes wish we could say, "I am sorry you are not competent to use this software", but really we should be admitting that we could have made it better in the first place to avoid any real confusion...

Thursday, 9 June 2011

The 5 most common UI mistakes

Design being a reflection of the data function rather than the user's goals

I mentioned this in a previous blog post, but my example was perhaps a little too far from home for most people, so I will give an example of a Microsoft Excel behaviour that has been generating support calls for a very long time; I truly do not know why it has not been changed.

For as long as I can remember, when entering a purely numeric value with leading zeros into Excel, the column type automatically changes to number and the zeros are removed.

Now, I am assuming that if the column type is number then the values in the column can be stored more efficiently (as integers represented by bits rather than a string) and that maths calculations can be performed more efficiently and accurately.  If this is the case then the conversion to numbers has a potential data-function benefit, but for the end user it is almost always in conflict with their goal.  Typing in a phone number or a barcode, surely the most common cases of leading-zero numbers, will simply lead to data loss and frustration.  The user is forced to learn about column types and re-import their data.

Lack of end user empathy

Well, this topic heading could encompass any of these types of failure, but I want to focus on the lack of contextualising of the user's situation when they are using the software.  You might hear a software developer saying "that is not the way the software is meant to be used".

A long time ago I worked as a first line support technician at a high volume call centre.  The development team created a new application for call logging.  Now, the application would only allow one person to update a call at a time, but multiple people could read from it.  When first line support were unable to resolve a case within a set time frame, we had to pass the case through to second line support.  What the application developers expected was that all notes would be completed by the first line team before the second line team received the call; but what actually happened was that first line support would continue editing the case while they were passing it over, and save the data at some point after the second line technician opened the case.  The second line technician would then edit the case, and on saving would lose all of their data because of the intervening update, and have to type it all out again.

This system reduced efficiency noticeably: first line technicians would keep the customer on hold for longer as they corrected their notes, second line technicians would generally type their notes in another application before copying them into the call logging application, and it was essentially a pain to manage as a user.  Yet we were specifically told by the head of software development to type up all notes and "launch" the call to the other person before they opened it.  This application design certainly lacked empathy with the role of the user it was aimed at.  It is a shame that so often software is written by people that never have to use it... perhaps having to use it would help empathy in software design?

Warnings rather than guidance

Too often applications carry dire warnings of potential data loss and that misuse can seriously damage data integrity or the application.  Really, this is a poor crutch; your application should do its best to avoid ever needing these warnings.  Google's Gmail undo when deleting an email is a great example: no "are you sure you really want to delete that?" OK-button extra; just accept the user's action as their intention, but have a failsafe should it have been accidental.

Designing for the 0.0001% case to the detriment of the 99.999% case

Too often I see features which initially sounded great but then fall apart because they are too complicated to use, focusing on the needs of one extremely technically competent individual with extremely specialist requirements.

Not making bold decisions and instead creating options

Yes, it can be frustrating not having the ability for an application to perform a task in exactly the right way, but it is equally frustrating not being able to work in the most common way because you have to search through a myriad of options.

There are many techniques for getting around this problem:

1. Create defaults as you go.  Essentially this is recording customer actions and trying to learn the best set of options.  This does have its drawbacks, but it can help a complex application be set up quickly and teach the user the available options.

2. Assume the "best" defaults and educate on how to change them afterwards.  Sort of the opposite of point one.  IE9 recently exhibited this kind of behaviour in respect to plugins: it installs them and runs happily along, but if one of them is taking a long time to process it suggests the option of disabling it.  It allows you to work as you want, but then educates you as to the options and continues on.

3. Deep thinking about the problem - if a solution does not seem elegant then you have not given it enough thought.  Spread the problem around the company; friends and family can even contribute.  They may not give you the answer, but even the smallest spark can move you towards a much more elegant solution.

4. Perhaps there is not a solution right now... Strong decision making, to not introduce unnecessary complexity, is important.  Apple prove it is possible to sell very locked-down products that "just work"; they follow a number of the techniques above, but they are one of the few companies strong enough to say "no, you can't do that".  Sometimes you need to say no, that will not work: it is only beneficial to a tiny minority and detrimental to everyone else.  Of course, even Apple is sometimes wrong...

Wednesday, 8 June 2011

QI lied to me, how could they

I watched an episode quite a long time ago and was fascinated to hear that apparently Florence Nightingale invented bar charts.  I found it interesting that it was in the early 1800s that bar charts were first used.

Well, apparently William Playfair preceded her significantly with a published bar chart in 1801, which, given that Florence Nightingale was born in 1820, definitely puts him ahead of her.

QI, how could you?  Actually, I have always wondered about some of the facts that make it onto QI; perhaps I should research more.

For anyone interested in early visual representations of information, I found this site particularly interesting.

Why America why?

You throw our tea away; well, that is upsetting but I will get over it.  But seriously, US to UK pricing is simply infuriating.

Sony announced their new handheld games console at $299.  A quick currency conversion puts that at £182.58.  So where does Amazon's £279.99 price come from?

Well I am not into handheld gaming consoles, but I am more upset by the principle of the matter...

UX No.1 enemy

From my experience, the number one enemy of good UI and UX is when the interface becomes a reflection of the underlying data rather than the user's perceptions and goals.

I once had to test a piece of software that was meant to be an emergency application for when server access failed.  It was designed to run standalone, with the data uploaded to the server once it was back online, so that work could continue temporarily through a network outage.

Now, the single biggest problem with this software was that the user interface was developed based on the steps the database required rather than the steps the user should perform.

Uploading the offline data should have been a relatively easy process, but instead it was extremely painful, demonstrating a huge lack of user empathy throughout.

Stage 1 was a form to define the offline data outage.  Then there was a second form where you imported the data files.  Then you had a third form to check the validity of the file; then you had to return to the definition to confirm everything was set OK; then select commit.  This is essentially because those are all of the underlying database processes that need to take place, and they are simply reflected with a data entry form for each of the steps.

The user interface should have been a much simpler process: just select the offline file, test its validity and, if there are no issues, commit the changes; if there are conflicts, alert the user.  This would take the process from 4 steps with lots of data entry to 1-2 steps with almost no data entry.  All the same processing would take place at the database level, but quite simply the end user should not care; they should be led towards their expected goal as simply as possible.

I see a lot of this type of design, where the user interface reflects the database.  Most of the time this is entirely wrong, and it can become embedded throughout products.

Tuesday, 7 June 2011

Convergent design

I am sure I am not the only person who has read about Apple's legal action against Samsung with some interest.  Interestingly, perhaps, I have purchased both a Galaxy S and an iPhone 4, but I was certainly not confused about my purchases and I was certainly aware of the differences between the two phones.  Actually, most people I know who know nothing about phones purchased an iPhone; those that did know something about phones made a conscious choice, primarily between HTC, Apple and Samsung, for the most appropriate phone for their purposes.

Now, I have no real knowledge of patent law, so I do not know how much of a case Apple have, but looking at the filing there are some minor technical inaccuracies, some design decisions that I consider significantly different, and a few similarities which, from my perspective, could easily be explained by convergent design.

Perhaps the most obvious "copy" is the green phone call icon.  Now, while you could easily claim they are so strikingly similar that there must be some copying, at the same time there are long-established design patterns at play.

The phone silhouette is considered a standard icon; it is in one of the ISO standards as a reference icon design.  Green is obviously a "go" metaphor derived from traffic signals.  I believe I first saw a handset silhouette in green on a Nokia phone perhaps 15 years ago, and I doubt that phone was the first to use the symbol.  The button metaphor in icon design is so standard I am almost shocked that Apple are trying to claim it is "trade dress copying".  Rounded corners look more advanced and stylish because they are harder to achieve; they require a higher colour depth and in essence are the natural progression.

Many of their claims feel like convergent design strategies: a phone is thin and has curved corners because being thin makes it easy to carry and being rounded prevents damage to carrying devices or people.  These appear to be obvious, good design strategies; if you can patent design at this base level, it appears to me that it would stifle design.

There is certainly an interesting element which I had not noticed before: Apple use a sunflower icon for their gallery, and Samsung use what appears to be one or more sunflowers with a play button over the top.  Now, this certainly appears a little strange, but if you look at Windows Live Photo Gallery and Windows Live Movie Maker they both have sunflowers too.  I find this a little strange, as my first thought for a gallery icon would be a landscape; but as the device can be put into landscape view, what do you use as an icon to symbolise that functionality if you are already using a landscape for the gallery?  Well, I guess convergent design appears to have led to sunflowers, although to my mind Microsoft and Samsung have done a better job of conveying the gallery message with the play button or the photo framing than the pure sunflower Apple have employed.  However, is this sunflower icon not a clear indication of convergent design?


I certainly can see a huge difference in design between the icons: Apple's top-down shine effect is distinctive and certainly not copied, and to my mind Samsung quite validly say they used square icons to emphasise the touch area of an icon, with the rounded corners little more than a standard design trend to show off the available colour depth of the device.

There was one element Apple were challenging Samsung on that I found interesting: the UI element of screen bounce when scrolling to the end of a page.  This is a great UI element, but I am not sure you can consider it infringed unless Samsung literally lifted the same code.


Additionally, if a great new UI element appears, and you achieve the same UI element using entirely new code, can you really claim breach of patent?  It strikes me as equivalent to authors of books claiming the right to certain types of plot twist: even though the language is different and the characters, pacing and setting have changed, the plot twist would be patentable and the original author could sue the new author.

Science and design progress through this competition and synthesis of ideas.  It would be very difficult to write iOS from the ground up, and anyone attempting to copy it would always be behind Apple.

Staying in the realms of convergent design: a few years ago I designed a button for a website.  The website had several built-in theme colours.  The design was quite glossy and the majority of visitors were running IE6 or IE7.  The XP theme buttons looked a little out of place and the Classic style buttons looked positively terrible; a custom button would definitely improve the site.  I did a little experimenting and decided that, as the site had several built-in theme colours and customers were able to further customise these to colours of their choosing, I had to have a button without colour: a shade from white to black.

After trying various greys and effects I settled on a subtle white button.  Plastic effects were popular at the time and my initial design was a glossy white button, but I softened the effect as I felt it stood out too much on the page.  I developed this in 2008 and the website was released in 2009.  I noticed when running Skype 4.0 that the login button they used and the button I had created were very similar.

My button (left) vs Skype's button (right)
Essentially we had both decided on the same design independently, but I am sure someone external to the process might have assumed that my button was simply a copy of the Skype button.  Now, they do have differences: the button I designed used larger text and had a stronger outline and darker shadow, but these are minor.  I do not think Skype are likely to sue me, but still, the thought that a design convergence could easily be misconstrued as copying trade dress does worry me a little.

It can’t be that difficult to do this, can it?

Thought I would just mention that there is a general lack of empathy in the world.  Take the title phrase:

It can't be that difficult to do this, can it?

This is often an upsetting phrase for a programmer to hear.  It borders on insulting the programmer's intelligence.  OK, it may be a few steps away from "you're an idiot and you really should be able to do this easily if you had an ounce of intelligence", but it is not far enough away that it cannot grate or offend.  A slightly more empathic phrase would be more along the lines of

is this possible, and how much work would it take?

I could put together a word processor that would make those of the 60s look appalling with little work at all, but just because a lot of the work of the 60s in my field has dated fast, I would not assume the same of other fields.  The first successful moon landing was more than 40 years ago; in my ignorance of their tasks, I would never say to a rocket scientist,

landing on the moon, it can't be difficult to do, can it?

Why do people think this of software so readily?

Oh well, luckily my empathy allows me to understand that they were not trying to cause offence, and so I should not take any; but it does make me realise how much nicer the world would be with a little more empathy.

Monday, 6 June 2011

User interface design inspiration

I recently read an article by Bret Victor titled Magic Ink: Information Software and the Graphical Interface.  It further reinforced to me several very important aspects of software design.
  • It is vital to focus on the user's goals when designing UI interactions
  • Failure to empathise with a user leads to significant design flaws
  • Team contribution can lead to a design which is greater than any one designer can achieve
A few pages into his article there is a critique of Amazon's results design.  There are several salient points that can be taken away from the criticism; however, I feel there are a couple of incorrect design decisions, and perhaps not a coherent focus on the most important point.

The main criticism was the inappropriate use of space and not providing the correct information at the correct point in time.  While this is a reasonable point to make, I feel a more advanced criticism would be to look at the user goals of someone viewing that information before moving on to the assumptions he is making.

There are many ways of assessing user goals, personas being one, but even a relatively simple analysis with a small amount of end user empathy can help solidify the basic user goals on visiting Amazon.

Bret focused on how difficult it was to identify the relevance of a book from a very general search.  This essentially jumps to the conclusion that the main goal of a visiting user is to compare numerous items returned from a single search to find a preferred item.  There are a lot of assumptions there; certainly there are plenty of customers with that goal and situation, but I am not sure that all of them will arrive via that approach, or that it is even best to optimise for that goal.

I feel that if you are to generalise to a very high level then there are two main goals.  The first is to find a specific single item that you are already aware of; perhaps you wish to read the Harry Potter series.  In this case you can probably quickly identify the item primarily from its cover image, and your search term is liable to exclude most irrelevant items.  Amazon's very simple interface makes this relatively easy, and in fact Bret's design is certainly not significantly better here; the small, dense text may to an extent distract from this goal.

For the second goal, finding and comparing items in an area of interest, his design becomes more compelling, but at the same time I still have some significant criticisms:


1. The text is too small to cover a full range of levels of vision and viewing device sizes.
2. The quick review headlines appear to be too brief to be useful the majority of the time.
3. I have mixed feelings over the related books section and would want A/B design testing to see how well it works; my hunch is that removing it and providing a larger font for the synopsis information would be a better design base, and that having this information at a lower level would not negatively impact the basic design.

The desire to prevent drilling down to a lower record has led to too much information being compressed into too small a space.

However, the use of a contents page as well as his star weighting indicator are both very good design features that I believe add well to the ability to compare and contrast.

It is a tough choice as to the most relevant details to present on brief and full record displays to achieve the best level of usability.  Additionally, elements external to those displayed in the illustrations help provide context for comparison and cannot be dismissed from the whole.  Amazon's search faceting helps you to know whether you are looking at fiction or non-fiction, among other things, so that you are aware you are accurately comparing like for like.  Bret's example is somewhat unusual, as cover images normally provide a level of guidance towards what an item is and a reflection of its contents.  Although reliance on cover images can be shown to fail, in most cases it is a good crutch for Amazon's lack of synopsis or reviews on their brief record displays.

This article is a really enjoyable read, and shows that software is an area of huge potential, but that in many cases even industry leaders fall down in providing great user interface designs.


PS I really love lots of Bret's work and think he is very gifted; the criticism should be taken in light of the fact that it is intended as a discourse in design, to achieve better software everywhere, and that it is always easier to criticise than create!

X500 HDMI connection

Well, it took quite some time to get a Toshiba X500 laptop to connect to a Pioneer PDP-4270XD plasma TV, so in case you are suffering the same issue of no display being output through the HDMI socket, you might, like me, find that the problem is an old BIOS...  Updating the BIOS was all that was required to get the system to output to the TV.

Also, to avoid vertical refresh issues it appears necessary to "extend the desktop" rather than mirror it to the TV.  I am not sure why that is the case; I might investigate at a later date.

During my investigation I updated the Nvidia drivers etc.  I have noticed that some of my HDMI cables do not work with the X500 laptop but do work with other devices successfully, so if you are still having problems, try another cable before you give up.