Friday, 28 October 2011

Don't let scannability destroy your website

People don't read, they scan - really?

Every web design book and countless online articles advise you that people do not read web pages, they scan them. They cite tons of studies with scary statistics: people will click the back button within 2 seconds, they will not read full sentences, and they will only quickly browse your material.

Because of this, all of the aforementioned articles recommend re-writing your body copy with scanning in mind and cutting the text down as much as possible.

Why do readers scan and not read?

Is it the medium of web browsing itself that means you cannot read and have to scan? Many people point to studies showing that reading the same text on a computer takes longer than it does on paper. Studies do suggest this is true, but does it really mean that people only scan?

One web design book I have read recommended a ruthless cut of the detail in an article's body copy. It is commonly stated that you should:
  • Start with the most important information and then leave less important information at the bottom
  • Use bullet point lists to emphasise key points
  • Ruthlessly cull your body copy
  • Assume that people will rarely read past the first paragraph
They took as an example a poorly written three-paragraph article on the history of a company. They stated that web users would see that block of text and immediately click away, but that if you culled and re-ordered the content it would keep users coming back for more.

I noticed that in their re-written version they had culled a couple of sentences about one of the founders claiming an important role in the creation of the company's first product, even though he was normally considered "just an investor". I personally found this information interesting, and its omission stuck out like a sore thumb. The re-written version was clearer and shorter, and let you quickly pick out the big pieces of information, but I preferred the original article as it simply had more information.

Is it the fault of advertising?

Perhaps website owners do not want people reading their pages; they just want them to quickly scan the page and then click on an advert? Perhaps that is part of it, but sadly even largely advert-free sites like the BBC follow this path too.

You get used to scanning because there is too much noise!

The low publication costs of the web may encourage this scanning. So few pages have professional editors, copywriters and the like (this is obviously also true of this blog :) ) that, unlike in print, there is a much weaker filter on low quality content. People become used to spam and poor content and so scan pages quickly. The amount of scanning drops significantly when the quality of the content is higher. Additionally, many companies have poor web strategies, relying on one untrained person to maintain their web presence on top of their other duties.

Scannability is changing

E-readers, iPads and mobile phones are all allowing the kind of instant-on experience that you currently have with a book. Furthermore, you can increasingly put them down and restart at the exact point you left off, much like a book and a bookmark. This makes in-depth reading much easier.

Currently, web pages over a certain length suffer from a difficulty that a physical book does not: bookmarking and re-finding the exact scroll position you were at. It will not be too long before that is much less of an issue.

Sometimes users want more information!

When I find an in-depth piece of information I will read it. Take, for example, when I recently wanted to find the UK release date of the Galaxy Note so that I could go to a local store and check it out. There were so many sites that were simply re-prints of the same press release, written in almost identical style. After the first few I started scanning them rather than reading them, as I could tell they had nothing more to offer than the information I already had.

However, I would stop for any in-depth information that did appear; unfortunately, with such information being so few and far between, I became a scanner rather than a reader.

Text is cheap

Yes, you should improve your information structure; subheadings, bullet points and similar constructs are great for getting your information across efficiently. However, ruthlessly culling potentially unique information does not help you stand out; it helps you become just another quickly scanned site.

Re-tweeting and copying articles verbatim is common on the internet. This overload of duplicated, simple information hides whatever unique information on the same topic may exist.

Text is a fantastic medium. In terms of network bandwidth and browser performance, adding pages of text does not have a dramatic effect: it compresses well and is normally a small fraction of the network payload that a user actually downloads.

Until people stop focusing on scannability and start focusing on unique and well-written content, the internet will continue to be a useful reference but a poor substitute for books when it comes to learning.

Please do not lose unique and interesting information just because you felt the article was too long for the web; link to it from a sub-page, or make it the last block of information, but only throw away repetition...

What about reviews?

I believe reviews are perhaps the strongest indicator that people are happy to read. There are numerous articles on how important user reviews are to various e-commerce websites; in fact, for some websites they are practically the bulk of the content. These reviews can be wordy, poorly edited and certainly not designed with scannability in mind, but they are still important and often vital in a purchase decision, or in revisiting a site.

In fact, my biggest pet hate is that reviews on mobile versions of websites can be hard to access, require multiple clicks and are often paged in such a way that you need to load each review in turn.

Thursday, 13 October 2011

Multi-select with shift on an HTML table

I was looking into adding shift-select to a table of checkboxes. After performing a quick search I found a jQuery plugin. This was a great start, but I found it would not unselect successfully. I made some minor modifications and now it appears to work in IE9 and Firefox...

(function($){
  // Check or uncheck a single checkbox and keep the "selected" class on its
  // parent row in sync with the new state.
  function toggleSelected(element, shouldSelect) {
    $(element).attr("checked", shouldSelect);
    if (shouldSelect) {
      $(element).parents("tr").addClass("selected");
    } else {
      $(element).parents("tr").removeClass("selected");
    }
  }

  $.fn.shiftSelectTable = function(){
    var $table = $(this);

    $table.find(":checkbox").click(function(event){
      // Remember the previously clicked checkbox so a shift-click can apply
      // to the whole range between it and the current one.
      var last = $table.data("jquery-shift-select-table.last");
      $table.data("jquery-shift-select-table.last", this);

      if (last == null || !event.shiftKey) {
        // Plain click: the browser toggles the checkbox itself, we just
        // toggle the row highlight to match.
        $(this).parents("tr").toggleClass("selected");
      }
      else {
        // Shift-click: copy the state of the checkbox just clicked to every
        // checkbox between it and the previously clicked one (inclusive).
        var shouldSelect = $(this).attr("checked");
        if (shouldSelect === undefined) {
          // jQuery 1.6+ returns undefined for an unchecked box; passing null
          // to attr() removes the attribute, i.e. unchecks it.
          shouldSelect = null;
        }
        var $checkboxes = $table.find(":checkbox");
        var currentIndex = $checkboxes.index(this);
        var lastIndex = $checkboxes.index(last);
        var $checkboxesToChange = (currentIndex >= lastIndex)
            ? $checkboxes.slice(lastIndex, currentIndex + 1)
            : $checkboxes.slice(currentIndex, lastIndex + 1);

        $checkboxesToChange.each(function(){
          toggleSelected(this, shouldSelect);
        });
      }
    });

    // Highlight any rows whose checkboxes are already checked when the
    // plugin is first applied.
    this.find(":checked").parents("tr").addClass("selected");

    return this;
  };
})(jQuery);
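
For completeness, here is how I wire it up. The table id and the CSS rule below are just illustrative assumptions; all the plugin itself expects is a table whose rows contain checkboxes.

// Hypothetical usage: a table with id "orders" whose rows each contain a
// checkbox, plus a CSS rule along the lines of  tr.selected { background: #ffc; }
$(function(){
  $("#orders").shiftSelectTable();
});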

Friday, 7 October 2011

IE9 Intranet compatibility mode in Intranet websites


So you have written an HTML5 site on your local intranet with some lovely CSS3, run it up in Firefox and feel smug: all your HTML and CSS is perfectly formed. But then you run it up in IE9 and all the CSS3 goodness has gone, leaving a lacklustre IE7 version of your site.


Why is IE9 running in IE7 compatibility mode?

IE9 has a setting, enabled by default, that forces it to run in compatibility mode when it encounters any intranet website. Microsoft have detailed this behaviour in a blog post about what they call Smart compatibility mode.

You can easily switch off compatibility mode for specific machines using the browser settings mentioned in the article above, but most of the time developers do not have the luxury of applying corporate-wide settings of this nature.

Avoiding Smart compatibility with X-UA-Compatible

Luckily there is a single line of HTML that you can use to override this behaviour:

<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
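
If you want to check which mode IE9 actually ended up rendering in, one quick sanity check is IE's document.documentMode property. This is just a sketch; run it from the F12 developer tools console or drop it temporarily into the page:

// IE8+ exposes document.documentMode; it is undefined in other browsers.
// 9 means IE9 standards mode, 7 means the page fell back to IE7 compatibility mode.
if (typeof document.documentMode !== "undefined" && window.console) {
  console.log("IE document mode: " + document.documentMode);
}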

OK I did this but it is still not working!

If, like me, you implemented this and IE9 tormented you by continuing to display the page in compatibility mode, then perhaps my investigation can help you. I discovered two quirks in IE9 that can cause compatibility mode to remain in effect:
  1. The X-UA-Compatible meta element must be the first meta element in the head section.
  2. You cannot have conditional IE comments before the X-UA-Compatible meta element.
This means that if you use Paul Irish's wonderful HTML5 Boilerplate then, on an intranet with default IE9 settings, your website will display in compatibility mode. You need to change the start of the boilerplate from the following:

<!doctype html>
<!--[if lt IE 7]> <html class="no-js ie6 oldie" lang="en"> <![endif]-->
<!--[if IE 7]> <html class="no-js ie7 oldie" lang="en"> <![endif]-->
<!--[if IE 8]> <html class="no-js ie8 oldie" lang="en"> <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="en"> <!--<![endif]-->
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">

to:

<!doctype html>
<html class="no-js" lang="en">
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<meta charset="utf-8">
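
One side effect of this change is that you lose the .oldie/.ie7/.ie8 classes the conditional comments used to add. If your own CSS hooks off them, remember that it is only conditional comments placed before the X-UA-Compatible meta element that trigger the quirk, so one possible sketch is to restore a single oldie hook from a conditional comment placed after the meta elements:

<!--[if lt IE 9]>
<script>
  // Hypothetical stand-in for the Boilerplate's oldie classes: only browsers
  // that identify as IE below version 9 evaluate this conditional comment.
  document.documentElement.className += " oldie";
</script>
<![endif]-->

The per-version ie7/ie8 classes could be restored the same way if you need them.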

Hope this has helped you get your IE9 intranet website up and running in shiny new HTML5 :)