FOWA Dublin 2009: Robin Christopherson (AbilityNet) – “Apps for All in a Web 2.0 World”

The second talk I attended at FOWA Dublin 2009 was given by Robin Christopherson from the UK organisation AbilityNet and focussed on the accessibility of web applications. AbilityNet is a UK charity that deals with disabilities and technology, carrying out accessibility audits, disabled user testing, and web design. As well as looking at the Web, they assess people at home or at work to make sure that they can use various computing devices, mobile phones, set-top boxes, and other consumer electronics.

Photo by Naomi Kelly.

Robin’s talk was refreshing in that he didn’t have any slides; rather he used a selection of websites with varying levels of text and multimedia content to give some demonstrations of good and bad practice in terms of accessibility compliance. Robin is himself visually impaired, and his talk certainly alerted many in the audience to pitfalls that they had not previously considered.

Robin started off by talking about the importance of accessibility. Web designers can get so excited when building an application that they often forget a substantial proportion of their customer base. Around 10% of people (possibly as many as 20%) have an impairment that will cause them problems with a particular web application. These impairments include age-related conditions, dyslexia, visual impairments and more, and people with impairments will often find that a particular user interface is not very intuitive to use.

One of the main issues in relation to accessibility is the humble CAPTCHA. A CAPTCHA is a visually corrupted piece of text, used as a verification step to prevent automated processes from signing up for web accounts (typically for spam purposes), that is supposed to be interpretable only by a human: OCR (optical character recognition) software should not be able to recognise it. Researchers in Newcastle recently cracked the CAPTCHAs of both Microsoft and Yahoo!, but the companies then made them more complex. The CAPTCHA can be very inconvenient for visually impaired people who don't have anyone else around to help them out.

On the Web, you can hardly do anything without setting up an account (the sites that allow you to “try before you buy” are few and far between, and for many services this would simply not be feasible). As Robin said, if you cannot get through the door you are absolutely stuck. Some sites have an audio version of the CAPTCHA, but by no means all of them do, and these audio renderings can be all but impossible to understand. Google is one of the few sites that has a third method of verification for people with screen readers: a hidden image takes you to a customer services page which allows you to request a manual subscription, meaning that a human must then check over your details and manually enable your account. The difficulty here is that sometimes they still won't set up your account for one reason or another, and they may forward you to another page for further verification.

Robin demonstrated a nice application for people in the UK called FixMyStreet that makes it very easy to report an issue in your local area. Putting in a postcode produces an interactive map that you can then click on to report a problem. For such applications, it is important to make sure that things don’t break with JavaScript disabled, e.g. for mobile devices. Even with JavaScript turned off, you can scroll a map in FixMyStreet and put a pin on a particular location. If you can’t see the map for placing a pin, you can still report a problem. This is handy for people with handheld devices.
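To make that progressive-enhancement idea concrete, here is a minimal sketch (not FixMyStreet's actual markup; the form fields and the initMap() function are invented for illustration). The report form works entirely on its own, and a script only layers an interactive map on top when JavaScript is available:

```html
<!-- A sketch only (not FixMyStreet's actual markup): the report form works
     on its own, and a script layers a map on top when JavaScript exists. -->
<form action="/report" method="post">
  <label for="postcode">Postcode</label>
  <input type="text" id="postcode" name="postcode" />

  <div id="location-entry">
    <label for="location">Where is the problem? (e.g. "outside 12 Main Street")</label>
    <input type="text" id="location" name="location" />
  </div>

  <label for="details">Describe the problem</label>
  <textarea id="details" name="details" rows="4" cols="40"></textarea>

  <input type="submit" value="Report problem" />
</form>

<script>
  // Runs only when JavaScript is available: replace the plain location field
  // with a clickable map. initMap() is a hypothetical map widget, so it is
  // left commented out here.
  var holder = document.getElementById('location-entry');
  if (holder) {
    // initMap(holder);
  }
</script>
```

Because the fallback is an ordinary form post, the same page degrades gracefully on handheld devices and for screen reader users.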

Robin gave another example with Google Maps. Sometimes there is a need for a text alternative if someone can't view a map. Google Maps has a good text-only rendering, which can be enabled by adding a switch to the end of the URL (?output=html).
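As a hypothetical illustration (the exact URL below is invented, but it uses the ?output=html switch just mentioned), a page embedding a map could offer the text-only rendering as an explicit alternative link:

```html
<!-- Hypothetical example: offer the text-only rendering of a map alongside
     the interactive one, using the ?output=html switch mentioned above. -->
<a href="http://maps.google.com/maps?q=Liberty+Hall,+Dublin">View the map</a>
<a href="http://maps.google.com/maps?q=Liberty+Hall,+Dublin&amp;output=html">
  Text-only version of this map
</a>
```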

ARIA (Accessible Rich Internet Applications) is very important from an accessibility point of view. A huge number of web applications use controls and pop-up menus that are extremely difficult for keyboard-only (i.e. no-mouse) users to operate. Radio New Zealand, for example, uses ARIA and has a slider control for moving through the audio, and various keyboard shortcuts can be used to control its functions instantly.
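This is not Radio New Zealand's actual code, just a minimal sketch of what an ARIA slider involves: a role, the aria-value* attributes that tell assistive technology what the control is and where it is set, and some keyboard handling so that no mouse is needed (the element id, the 5% step and the key codes are all assumptions):

```html
<!-- A minimal ARIA slider sketch: role and aria-value* attributes describe
     the control to assistive technology, and the arrow keys move through
     the audio. -->
<div id="seek" role="slider" tabindex="0"
     aria-label="Seek position"
     aria-valuemin="0" aria-valuemax="100" aria-valuenow="0">
</div>

<script>
  var slider = document.getElementById('seek');
  slider.onkeydown = function (e) {
    e = e || window.event;
    var now = parseInt(this.getAttribute('aria-valuenow'), 10);
    if (e.keyCode === 39) { now = Math.min(now + 5, 100); } // right arrow
    if (e.keyCode === 37) { now = Math.max(now - 5, 0);   } // left arrow
    // Updating aria-valuenow lets a screen reader announce the new position.
    this.setAttribute('aria-valuenow', now);
  };
</script>
```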

Robin then gave an example of ARIA Live Regions. He said that it would be fantastic if Twitter could tell someone (using audio) how many characters they have left. With ARIA Live Regions, you can add assertive reminders at certain intervals. You can also set it up so that each keystroke flushes the buffer: you don't need a complete history of what has been reported, just the last message (i.e. the number of characters left).
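Again, this is only a sketch of that idea and not Twitter's implementation (the 140-character limit and element names are assumptions): each keystroke rewrites the live region, and because the new message replaces the old one, a screen reader only ever announces the latest count.

```html
<!-- A live-region sketch: each keystroke updates the region, and the new
     message replaces the previous one, so only the latest count is spoken. -->
<textarea id="status" rows="3" cols="40"></textarea>
<div id="chars-left" aria-live="assertive" aria-atomic="true"></div>

<script>
  var MAX = 140; // assumed limit for the example
  var box = document.getElementById('status');
  var counter = document.getElementById('chars-left');
  box.onkeyup = function () {
    counter.innerHTML = (MAX - box.value.length) + ' characters left';
  };
</script>
```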

Another example of poor use of multimedia, from an accessibility point of view, is the autostarting of videos on YouTube. This makes it hard to find (or hear, over the video's sound) the buttons required to pause the video, since the audio has already started. It is always good to have user-driven animation rather than autostarting; if possible, let it loop a couple of times and then stop.

It is also very hard to get a liquid, scalable, flexible website design right. Number 10 relaunched their website recently; it is a nice site, but despite the provision of scalable text, things go horribly wrong when you increase the text size because of fixed-width areas. The site also has a lot of video clips and Flash activity.
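A tiny sketch of the underlying issue (the class name and sizes are invented, and none of this is taken from the Number 10 site): relative units and max-width let text reflow when the reader enlarges it, whereas a fixed pixel width forces it to overflow or be clipped.

```html
<!-- Relative units scale with the user's chosen text size; fixed pixel
     widths do not. -->
<style>
  /* Brittle: breaks when the reader increases the text size */
  /* .column { width: 560px; font-size: 12px; } */

  /* More forgiving: scales with the text size */
  .column {
    max-width: 45em;   /* grows and shrinks with the font size */
    width: 90%;
    font-size: 100%;   /* respect the browser/user default */
    line-height: 1.5;
  }
</style>
<div class="column">
  <p>Body text that reflows when the reader increases the text size.</p>
</div>
```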

Robin also talked about the 2012 London Olympics site, as it had flickering graphics that caused problems for people with photosensitive epilepsy. There is a tool from the Trace Center that can be used to check multimedia content for risks to people with epilepsy.

For multimedia, it is very important to have an audio description. It is also important to have alternative text on images, rather than having someone tab through all of the images hoping to find a link that may be relevant. Flash doesn't always work well with screen readers (depending on how it is designed), which can make it difficult for people to navigate, since they may need to use the arrow keys to move to a part of a grid (or subgrid) on the Flash graphic. Robin finished by referring us to the JK Rowling website, which has a very accessible Flash version.

(As an aside, the W3C recently published version 2 of their Web Content Accessibility Guidelines (WCAG) as a W3C Recommendation, and version 1 of its Accessible Rich Internet Applications (WAI-ARIA) document is available as a W3C Working Draft. The WCAG guidelines define four principles that websites should follow: Perceivable, Operable, Understandable and Robust. For example, one guideline is that web applications should “provide text alternatives for any non-text content so that it can be changed into other forms that people need, such as large print, braille, speech, symbols or simpler language”.)
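As a trivial illustration of that text-alternatives guideline (the filenames and link targets here are hypothetical), the alt attribute is what a screen reader announces in place of the image:

```html
<!-- Unhelpful: the screen reader has to fall back to reading the filename -->
<a href="/programme"><img src="btn0417.gif" /></a>

<!-- Better: a short, meaningful text alternative -->
<a href="/programme"><img src="btn0417.gif" alt="Conference programme" /></a>

<!-- Purely decorative images get empty alt text so they are skipped -->
<img src="divider.gif" alt="" />
```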


Nice video shows how hidden structured data from the Drupal content management system can lead to semantic search

(Cross-posted at johnbreslin.com/blog.)

Via Drupal creator Dries Buytaert‘s post entitled RDFa and Drupal and Stéphane Corlosquet‘s post about RDFa and Drupal examples and use cases, I came across a really cool video that demonstrates how the structured data available in many Drupal deployments (but difficult to leverage when it is only published as plain HTML) can be exposed and leveraged using RDFa. The video shows deep searches of Drupal data using Yahoo! SearchMonkey and also some visual navigation of this linked data. The possibilities are very exciting, as Dries says:

Google and Yahoo! are getting increasingly hungry for structured data. It is no surprise, because if they could build a global, vertical search engine that, say, searches all products online, or one that searches all job applications online, they could disintermediate many existing companies. […] Hundreds of thousands of Drupal sites contain vast amounts of structured data, covering an enormous range of topics [and these structures] can be associated with rich, semantic meta-data that Drupal could output in its XHTML as RDFa. For example, say we have an HTML textfield that captures a number, and that we assign it an RDF property of ‘price’. Semantic search engines then recognize it as a ‘price’ field. Add fields for ‘shipping cost’, ‘weight’, ‘color’ (and/or any number of others) and the possibilities become very exciting.
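To make the ‘price’ example concrete, here is a rough sketch of the kind of XHTML+RDFa output being described; the vocabulary (an imaginary “shop:” namespace) and the property names are purely illustrative, not necessarily what the Drupal modules actually emit.

```html
<!-- Illustrative only: a product description with its fields exposed as
     RDFa properties rather than plain, undifferentiated HTML text. -->
<div xmlns:shop="http://example.org/shop#"
     about="/product/acme-widget" typeof="shop:Product">
  <h2 property="shop:name">Acme Widget</h2>
  <span property="shop:price" content="19.99">€19.99</span>
  <span property="shop:shippingCost" content="4.50">€4.50 shipping</span>
  <span property="shop:color">red</span>
</div>
```

A crawler such as Yahoo! SearchMonkey can then treat the price, shipping cost and colour as distinct structured fields rather than as page text, which is what makes the deep searches shown in the video possible.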

The video is here.

This effort has been growing over the past year, since it was championed by Rasmus Lerdorf (the creator of PHP) and proposed by Dries himself at DrupalCon 2008. Based on Stéphane’s roadmap for RDFa in Drupal 7, the video shows some modules that have been developed for Drupal 6 to demonstrate the power of having embedded RDFa representations of Drupal structures. RDFa is currently being integrated into the core of Drupal 7.

There’s a nice line in the video about this embedded data:

It’s machine readable and now we have access to all of the machine-readable fields available to us before. Very quick, very simple, just what RDFa is supposed to be: human readable data [text], formatting data [HTML] and machine-readable data [RDFa] all in the same document, all inline, all describing the same thing.

(See also this great video and deck of slides about the “Practical Semantic Web and Why You Should Care” by Boris Mann from DrupalCon 2009.)

FOWA Dublin 2009: Eoghan McCabe, Des Traynor (Contrast) – “Unconventional Web Apps”

The first talk I attended at the Future of Web Apps 2009 event in Dublin was by Eoghan McCabe and Des Traynor from Contrast. As I mentioned in my previous post, many of the speakers (in particular in this presentation and in the closing one by David Heinemeier Hansson) urged attendees not to conform to conventional expectations. Contrast focussed on the design of web applications.

Photo by Naomi Kelly.

The main topic of their presentation was “conventions”. There are great opportunities for web designers, but you must first question everything you do when building web applications, including every aspect of how you expect an application to work, much of which is based on personal experience and familiarity. In terms of conventions, it is basically “survival of the fittest”, where the most used or most popular approach becomes “the” convention (e.g. hashtags on Twitter).

The benefits of using conventions are vast. There is less friction in the user experience, and conventions reduce the learning curve. The design process also becomes easier and more productive because of the assumptions that can be made. For example, the Rails community have adopted a number of conventions and best practices which make designing applications much easier.

However, Contrast see some problems with conventions. They restrict innovation, since relying solely on conventions is lazy: if all you are doing as a designer is what everyone else does, you're not really a designer.

When you are coming up with a design solution, you want to find one that is unique but at the same time optimal. Most people settle for a local maximum (i.e. the peak within the range of known approaches). But there is always a better way of doing things: look beyond the local maximum of what we use right now.

If all products look the same, people will just look at the price tag (e.g. most mobile phones are very similar in appearance). For commercial opportunities or wins, you must design innovatively, just as Dyson, Apple or Flip did. By doing this, you can change the world. Breaking conventions and innovating yields more meaningful ways in which you can also enjoy making money.

In this presentation, Contrast were talking about Web conventions specifically, and they listed several of these, beginning with the standard layout. It almost always consists of a header, a left navigation bar, a body area, and a footer. Superimposed on this there are a logo, a primary navigation bar, secondary navigation, and a search box. This convention gives you the same site again and again and again (e.g. Sky.com, Play.com, etc.). It works because it is an established convention that has been proven to make money (e.g. Amazon).

Next up was signups. For a new application (e.g. invoicing, one of Contrast’s areas of expertise), the start page is always a signup where you are asked for the same information. It is taken for granted that if you want to try out an application, you must register first.

Copy (or text phrases) was next on the list. Certain stock phrases are used because they almost guarantee that you will know what to expect next.

They then talked about the home screen. Since a user usually starts at a home page, that page contains a whole lot of content intended to cater for a wide range of users. It acts as a starting point that surfaces some content and deep links into the site.

Page-based conventions for websites evolved from traditional flyers and brochures that migrated to the Web. We often talk about pages as discrete units with hierarchical content, and sitemaps of these hierarchies have themselves made it onto the Web as pages in their own right.

The last item was branding. Branding is used to take ownership of content and to “give it a voice” by placing a massive header above everything that is published. “This is BBC or Sky or RTÉ News”, “we are an authority”, “our journalists are associated with this brand”. The header takes ownership, with the identity always in the top left. It's tried, trusted and true, and it works.

So what’s the problem? Conventions work and they are extremely useful. But they are also unremarkable. If you lean on them too heavily, it becomes difficult to sell or market something as a special or remarkable product.

The other problem is that many of these conventions evolved when people started building websites in the 90s. Contrast argue that these are the wrong conventions, and that we should break them, because nobody really knew what they were doing back then.

Why are they wrong now? Because we are not building sites any more, we are building software applications. Contrast don't advocate throwing everything out, but you should certainly question a particular convention when it becomes obsolete. The game is changing, and we need to adapt as well. There is an opportunity for us to change these conventions as the world moves forward with increased connectivity, the Web everywhere / on any device, and loads of APIs to move data back and forth.

The presenters returned to the six conventions of websites and gave some ideas on how these could be changed.

Website layouts are normally three columns, but there are alternative ways of navigating content, e.g. by zooming. The online bookstore Zoomii.com does things differently from conventional sites, allowing you to zoom in to content items (books) using a mouse wheel. It's easy to do the top-left branding thing, but such a navigation system can make you stand out even more. Another example is zoomism.com from BEVOdesign's Ben Voos: Ben's portfolio is linked all over the Web as a result of this innovative site. Such an interface couldn't have been created six or seven years ago, as the required technology was not there.

They came back to the topic of signups and an alternative to the model we have now. If you want to encourage users to try your service, you should let them try before they buy, where they don't necessarily buy with cash but rather with their time. We need to respect that time: soup.io, for example, allows you to click “Try it now” and publish content without a user account. With no signup, a user doesn't have to hand over personal information. drop.io is another example, a site for sharing files without registering.

For copy, there are two conventions: firstly, the text is usually pretty mundane, and secondly, it has to stay the same. However, these conventions have been ignored more and more recently. Flickr says hello to you in different languages, instead of just saying “You are logged in”. If you are as bland as everyone else, you will be ignored like everyone else.

Some examples of sites using cool copy are the Huffduffer podcasting application and the Threadless store. Instead of saying “0 items in cart”, Threadless says “I'm so, so hungry […] feed my carty belly with delicious Threadless products”. Fender is a counter-example: they should have cool copy with rocking language for guitarists, and where it currently says “Login failed” or some other robotic nonsense, it should say “Try again dude”! Contrast often use an accounting application called Freshbooks, and in that case interesting but more formal copy is appropriate. The copy should suit the application, so use formal language if it is a formal engagement.

On Lovestruck New York, every label has about nine or ten possible values (“I love you in that colour! Never wear anything else. Come on in.”). This keeps things interesting, creates engagement, and makes you like the site. We should build sites that people like, so there is a need to think about engaging people as you are building them.

Next under the microscope was the home page or dashboard. We need to question the idea that it has to be there; it is taken as a given. The Harvest high-level view uses one third of the screen space to promote Harvest news, but that isn't the purpose of the site for most users of the service (an application built for invoicing and time tracking). WordPress.com also suffers from this, promoting internal news and WordPress trivia on the dashboard of blog users (I laughed at this because I had thought of it just before they said it). The first thing you see when you go to write a post is that you have 14 pages and 35 categories and 165 tags and 2636 comments, all of which is unimportant. A nice example of where Harvest have done it right is their iPhone view, which allows people to easily create solutions from scratch and to track time. (Similarly, the WordPress iPhone application is clear of irrelevant clutter.)

The problem with page-based design, the next convention, is that we miss out on various opportunities when designing pages: you can't design a wireframe for a zooming application with the current set of page-based development tools. Prezi, the zooming presentation editor, lets you mix all kinds of content, where everything is purely interactive. We can now give up on the convention of having a static set of navigable pages for an application, which is a big win for the Web.

Branding was the final item. The convention is that you need to give your application a voice, to say who you are. Contrast say that we need to “stop the sell”: if you keep pushing your brand on someone, it gets tiring, and you don't necessarily have to do that (e.g. Keynote doesn't keep reminding you of its brand throughout a presentation). As Contrast's work is quite oriented towards invoicing and time plans, they cited the examples of Blinksale, invotrak, and Remember The Milk, which all keep reminding you of who owns the service you are using. You don't have to keep pushing the brand: let the brand be good software.

The inspiring companies (the ones to copy) are those who provide web applications where people believe they “own” the software. An example is the Basecamp service, used by many of Contrast's clients who are unaware that it even is Basecamp. There is no hard sell, no pushing required, if the service speaks for itself.

Finally, the Contrast guys spoke about the future. We have to build remarkable stuff, but we can't do it with the conventional templates currently available to us. You can throw together an application, but it won't stand out from the others no matter what flashy graphics you throw in there: standing out requires imagination.

Contrast are a small company of four: Dave, Paul, Des and Eoghan. In October 2008, they were effectively out of business, but they have built up significant client business since then by looking at breaking the rules (Qwitter, Exceptional, etc.), and they now have project work lined up for the next two years.

Eoghan finished up by saying: “Try it, if we can, anyone can. Break the rules, question them and have fun!”

There were a few questions. The first was about the required mix of skills; Contrast answered that you need a small team, all with a balance of skills. The next question was about layouts other than zoomable ones. Contrast believe that the hub-and-spokes style we have now isn't the right one for web applications; iPhone applications, for example, use back-and-forth navigation, allowing you to drill down and back up again. ZUIs (zoomable user interfaces) are something Des thinks will get huge, and layered views in web applications are another interesting option. His advice was not to design a navigation system before you have thought it through properly. Dan Saffer talks about functional cartography, i.e. how likely something is to be clicked, and how often. We should also take inspiration from non-Web platforms, e.g. Mac desktops.

The last question was about non-JavaScript interaction methods, e.g. Silverlight and Flash, and whether they are the way of the future. Using JavaScript restricts some options, e.g. camera or audio input are not possible. You may not need everything that Flash or Silverlight offers, but they open up many more possibilities. HTML/CSS/JS is a bit of a hack, but it works well for web pages, if not for richer interactions. A good guess would be that other technologies will get a look-in during the next few years (e.g. Adobe AIR).

More FOWA Dublin 2009 posts by others:

Future of Web Apps Dublin: great speakers, poorly organised

I attended my first Future of Web Apps conference yesterday when Carsonified’s FOWA troupe came to Dublin for a one-day event in Liberty Hall.

I was really looking forward to this event and the talks certainly fulfilled expectations. Blog reports on all of the presentations (apart from Ryan Carson's, which I missed) will be published next week after some editing.

The highlights were David H. Hansson from 37signals and Eoghan McCabe and Des Traynor from Contrast. A theme that ran through many of the presentations was not to conform to conventional expectations, echoing the memes of change and revolution prevalent in today’s world.

An unwelcome change that was noticed by many attendees was the lack of “extras” during the conference, especially tea or coffee. In fact, apart from a name badge that you had to write yourself (and I didn't even manage to get all of that, as they had run out of the necklace clip thingys by the time I got there), attendees received little more than the pleasure of seeing a varied lineup of great speakers and topics.

That’s fine, and indeed good speakers are what we came for in the main. But as many conference organizers will tell you, the secret to having a satisfied bunch of repeat attendees is good speakers, good food and drink, good wifi and good social activities – unfortunately FOWA Dublin failed on the wifi, food and drink, and there were precious few areas for socialising (the Liberty Hall is a limited venue with narrow walkways and one lecture theatre; I presented there last year so was surprised at the choice). I didn’t make the afters party so can’t comment on that.

Wifi was very, very poor, but this can be forgiven in part due to the fact that it probably rarely gets stress tested with 400 laptop- and iPhone-loving web professionals, and the organisers probably assumed “there’s wifi, great, another thing ticked off”. Toilets were abysmal: there was one working toilet for the male attendees in the afternoon (and the majority of attendees were male).

The main issue was no food or drink. This should have been included. Ticket prices were either 115 or 175 euro for early and regular purchasers, so 145 on average. As organiser of last year's BlogTalk, I know that 150 euro tickets (plus four sponsorships) for 125 people can get you a lecture theatre and at least three or four extra rooms in a top-class hotel, tea, coffees, biscuits or muffins twice a day, plus lunch for THREE days, a banquet dinner and a t-shirt, and expenses for four plenary speakers. Oh, we printed out badges too!

FOWA Dublin had about 10 speakers, two or three of them from Ireland, so about eight may have required expenses (though many were from the UK). FOWA Dublin had around three times the people and revenues of the conference I organised, so there should have been plenty of extra cash for those four extra guests AND teas or coffees. I can't imagine the fancy intro animations were more important than refreshments.

Overall, the fees were low, so for the high-quality speakers it was worth attending. But that little bit of extra devotion to attendees' needs would have made all the difference.