A few years ago I was sitting at my desk, staring blankly at the folder structure for yet another software project, when I suddenly noticed something I hadn’t noticed before. The folders were all prefixed with a number to keep them in the correct order, and I realised that Windows Explorer was correctly ordering the numbers as they transitioned from 9 to 10.
So why was that a surprise?
Well, I realised that if they were sorted alphabetically, as I had assumed they would be, they would be ordered 1, 10, 2, 3 … rather than 1, 2, 3 … 10. Now, I’m pretty sure that in previous versions of Windows, if you sorted by file name, the results would be 1, 10, 2, 3 … but somewhere along the line, someone somewhere did something really clever.
I started experimenting with different combinations of letters and characters to try and confuse it but found it was pretty resilient. For example, files Fred 1, Fred 2 and Fred 10 also appear in the correct order and putting more text after the number or including multiple numbers separated by dots also still works.
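For the curious, this style of ordering is usually called ‘natural sort’. I believe Explorer gets it from the Win32 `StrCmpLogicalW` comparison, but the idea itself can be sketched in a few lines of Python — the `natural_key` helper below is purely my own illustration, not the actual Windows implementation:

```python
import re

def natural_key(name):
    # Split into alternating text/number runs, e.g. "Fred 10" -> ["Fred ", 10, ""],
    # so that runs of digits compare as integers rather than character by character.
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", name)]

names = ["Fred 10", "Fred 2", "Fred 1", "10", "2", "1"]

print(sorted(names))
# ['1', '10', '2', 'Fred 1', 'Fred 10', 'Fred 2']  <- plain alphabetical

print(sorted(names, key=natural_key))
# ['1', '2', '10', 'Fred 1', 'Fred 2', 'Fred 10']  <- 'does the right thing'
```

The same key also copes with the dotted-number case, ordering 1.2, 1.9, 1.10 correctly, because each numeric run is compared on its own.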
I love stuff like that. This to me is exactly what technology should be, quietly and unobtrusively supporting the user in the background, doing the right thing without a fuss. This is where engineering goes from the skilled to the sublime.
When users are working with technology, the actions they take must have predictable outcomes, even if the detail of how something works is very complex; otherwise we risk frustrating our users. The way we help our users with this is to build the user experience around a metaphor that allows them to quickly build up a mental model of how the system works. This metaphor must relate to the user’s previous experience and understanding of a simpler or analogue equivalent system.
As an example, think about what happens when you’re driving a car and you put your foot on the brake pedal. What do you visualise in your head? Is it a digital signal sent to a computer that uses feedback data from the wheels to measure the difference in rotational speed of each one and distribute an optimal braking load to each wheel by pulsing the disc brakes up to 15 times per second?
Err, no. At least that’s not how I see it! In my head there’s a mechanical linkage between the pedal and the wheels that causes the brake pads to grip the discs, and I slow down, and that’s enough.
But what happens if brake design ever changes to the point where my mental model is broken? This is when the problems start. If, having used metaphor to reinforce a particular mental model, you then break it, the system suddenly becomes unpredictable again, and the user is immediately confused or, worse, thinks the product is broken.
Sometimes, though, mental models simply aren’t going to work, because the simple metaphor doesn’t hold in all cases. Take the ‘alphabetically sorted’ file system, for example. Lots of people want to put numbers into their file and folder names… but they also need the sort to stay predictable so that they know what the outcome of their sorting actions will be. So what do you do? Well, a skilled engineer can successfully subvert the user’s mental model by doing it so subtly that the user doesn’t even notice. The system just ‘does the right thing’.
As an example think about what happens when you plug headphones into a phone or a radio. What do you expect to happen? Well the sound stops coming out of the speaker and starts coming through the headphones instead right?
But is this always true?
Every weekday I use the alarm clock function on my iPhone to wake me up for work, but last week I had been listening to audiobooks in bed with my headphones. When the alarm went off in the morning, I reached over to turn it off and realised that the alarm was coming out of the speaker, even though I still had the headphones plugged in. Clearly the engineers at Apple realised that for the alarm clock to be any use, the alarm has to come from the speaker even when headphones are plugged in, despite this being different from the normal behaviour a user might expect. I hadn’t noticed because, even though the system isn’t behaving the way I might predict, it is just supporting me and ‘doing the right thing’.
This, for me, represents a skill that separates good engineering from sublime engineering. Those of us involved in the development of technology should understand the importance of predictability to a user’s experience and already be looking at how our users interact with our systems, providing the metaphors that help them build useful, easy-to-grasp mental models of how those systems behave. The next step, though, is to know when and how to break those mental models where the metaphors break down, and to do it so subtly that users never even notice it happened.