Breeding Faster Horses – Misunderstanding User Experience

“If I had asked people what they wanted, they would have said faster horses.”

This quote makes my teeth grind. It’s attributed to Henry Ford and supposedly relates to the origins of the Model-T Ford, but there is no evidence of him ever actually saying it… although that’s not why I have a problem with it.

It’s often used to justify ignoring user research, but all it does is demonstrate a misunderstanding of how feedback should be interpreted. It suggests that we should take whatever someone says about our product or service verbatim, and not look any deeper.

Don’t take things so literally

Being user-centered is not the same as being user-led. The latter takes any feedback literally, whereas being user-centered means reading between the lines to discover the root cause. Alan Cooper sums this up nicely:

“[Doctor], I broke my arm, the bone is sticking out, it hurts like hell, but I find that if I hold it in this position the pain is at a local minimum. So would you please duct tape it to my body in this position so it doesn’t hurt every time I take a step?”

“When they come to you […] with bones sticking out of their body it means there is a problem and you have to bring your expertise to bear and analyze the problem and come up with a solution.”
Alan Cooper, 2006

Putting Innovation in Context

The quote makes Ford out to be a ‘rockstar designer’, a gifted genius who gave birth to an idea without consultation or external input. But there would have been many contributing factors that led to its success at the time.

At the end of the 19th century many cities were in a state of emergency. By 1894 the over-reliance on horses had led to the Great Horse Manure Crisis, with The Times newspaper even predicting that “In 50 years, every street in London will be buried under nine feet of manure.”

The problem wasn’t limited to manure: each horse also produced over two pints of urine a day. Working horses lasted only about three years, and carcasses were often left in the streets to putrefy so that chopping them up for removal was easier. Add to all of this the plague of typhoid-carrying flies and you can start to appreciate some of the socio-economic motivations that contributed to the rise of the automobile.

Understanding Customers

I’d argue that Ford had a deeper understanding of his intended customers than the quote suggests. The Model-T wasn’t the first car to be manufactured; the top tier of society was already well catered for. Ford instead chose to focus on Middle America. He identified that his customers would be an average-sized family of moderate income, with limited technical ability and free time.

He would have appreciated the cost of owning horses and the restrictions on lower-income households unable to escape the cities they lived in. As the son of a farmer, Ford had first-hand experience of the hard work and scant reward associated with many common occupations, so would have felt empathy for such families. He wanted to help them avoid the grinding life of hard labour by developing the means for an easier, yet more productive, life.

“I will build a car for the great multitude. It will be large enough for the family, but small enough for the individual to run and care for. It will be constructed of the best materials, by the best men to be hired, after the simplest design that modern engineering can devise. But it will be so low in price that no man making a good salary will be unable to own one – and enjoy with his family the blessing of hours of pleasure in God’s great open spaces.”
‘My Life and Work’, Henry Ford (1922)

So, Ford identified a target audience with which he could empathise. He knew the limitations of the current mode of transport and the huge problems it was creating for customers and society in general. He’d have known that greater speed, or reduced journey time, would be a benefit but not the only factor to consider, and he was also aware of the financial and technical constraints he’d have to work within.

Disrupting the Market

The first Model-T launched in 1908. Regarded as the first affordable automobile, it opened up travel to the common middle-class American. By the time the 10 millionth unit rolled off the production line in 1924, 50% of all cars in the world were Fords.

A big part of the success of the Model-T is often attributed to the invention of assembly-line production. But even that wasn’t new. The concept of mass production already existed in Europe (as far back as 1802) and was introduced to Ford by William “Pa” Klann, after he visited a slaughterhouse in Chicago (Swift & Company) to study how animals were butchered along a conveyor, referred to as the “disassembly line”. So, through observation, Ford was able to identify the efficiency of one person doing the same small task over and over again, compared with the artisan approach of other car manufacturers at the time.

“Henry Ford is generally regarded as the father of mass production. He was not. He was the sponsor of it.”
‘My Forty Years with Ford’, Sorensen (1956)

As a start-up he was doing well; he understood his audience, had identified his constraints, and had looked beyond his own industry for innovation in production.

But as a ‘rockstar designer’ Ford took his eye off the ball: he believed that what he had created, in both the Model-T and the production process, was perfect first time. He refused to change either, even with the growing success of emerging competitors who were more than willing to adapt.

“Any customer can have a car painted any color that he wants as long as it is black.”
‘My Life and Work’, Henry Ford (1922)

Ford believed the Model-T was all the car a person would, or could, ever need, so why change it?

Competitors looked beyond the initial requirements, realising that people wanted more choice. They soon began offering greater comfort and more styling options, as well as evolving the production process to drive more competitive pricing.

…And What Would You Use a Faster Horse For?

“To paraphrase Confucius, when customers point to the moon, the naive product manager examines their finger.”
‘Mistakes We All Make With Product Feedback’, Des Traynor

The ‘faster horses’ anecdote is used to justify not listening to customers. But that completely misses the point. If people’s first reaction when questioned was to ask for faster horses, they’d really be telling you that speed, or journey time, was a major factor in travel. One response might be to ask “and what would you use a faster horse for?”. Beyond that, knowing the context in which customers travelled would indicate that faster horses would only solve one particular symptom of a much larger problem.

By taking into account only one factor, what the user tells us, we risk misdiagnosis. Considering all aspects gives us the ability to truly understand the problem.

“An innovator should have an understanding of one’s customers and their problems, via empirical, observational, anecdotal methods or even intuition.”
‘Henry Ford, Innovation, and That “Faster Horse” Quote’, Patrick Vlaskovits



An introduction to Lean UX

Back in June I flew up to Edinburgh for UX Scotland, a new User Experience conference. The highlight for me was a full-day workshop with Jeff Gothelf, author of ‘Lean UX’, the latest book in the ‘Lean Start-up’ series. Being new to Lean UX I wanted to write a post covering some of the basics to help me get to grips with it.

This article isn’t intended as an opinion piece, although hopefully that will follow once I’ve had a chance to put some of the principles into practice. If you’re already familiar with Lean UX please look away now!

What is Lean UX?

Inspired by Lean and Agile development theories, Lean UX helps us focus on the actual experience being designed, rather than deliverables. The workshop showed us how, as UX specialists, we could collaborate more closely with other members of a product team, and gather valuable feedback as early and often as possible. It taught us how to drive the design of a product in short, iterative cycles in order to help assess what works best for both the business and the user.

Making Assumptions

Traditionally, UX design projects are framed by requirements and deliverables; teams are given set requirements and expected to produce detailed deliverables. Lean UX re-frames the way we approach a project. The goal is not to create a deliverable (e.g. a design specification), but rather to change something for the better – to affect an ‘outcome’.

Lean UX replaces requirements with a ‘problem statement’ and a set of ‘assumptions’, which in turn form ‘hypotheses’.

An assumption is a high-level declaration of what we believe to be true but not to be taken as a literal fact. Assumptions enable a team to develop a shared understanding and to agree upon a common starting point.

By going through an ‘assumptions declaration’ exercise as a team, designers and non-designers alike are given the opportunity to voice their opinions on how best to solve the problem. Early assumptions respond to questions such as:

  • Who is our ‘customer’?
  • When and how is our product used?
  • What features are most important?
  • What is our biggest product risk?

The assumptions are then prioritised by the team: user A is more important than user B; feature X is of far greater value than Y or Z. Prioritisation should be based on the level of risk (i.e. how bad would it be if we were wrong about this?) and how well the issue is understood.

By prioritising the riskiest and least-understood assumptions, we are able to test the accuracy of those initial assumptions sooner.

Forming a hypothesis – from outputs to outcomes

A hypothesis statement uses the format:

We believe that [ doing this / building this feature / creating this experience],

For [these people/personas],

Will achieve [this outcome],

We will know this to be true when we see [this feedback / quantitative measure / qualitative insight ].

We transform assumptions into a series of hypothesis statements so they can be tested. These statements communicate a clear vision for the work and shift the conversation between the team members and their managers from outputs (e.g. “we will create an advanced search feature”) to outcomes (e.g. “we want to increase the accuracy of a user’s first search”).

The hypothesis is a way of expressing our initial assumptions. It should be defined in a way that is testable, in order to measure whether it has achieved the desired outcome.

Expressing assumptions in this way helps build a shared understanding and takes much of the subjective and political conversation out of the decision-making process, instead orientating the team towards the users.
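The template is essentially a fill-in-the-blanks structure. As a minimal sketch (a hypothetical helper for illustration, not part of any Lean UX tooling), it could be expressed as:

```python
def hypothesis(tactic: str, personas: str, outcome: str, signal: str) -> str:
    """Assemble a Lean UX hypothesis statement from its four parts."""
    return (
        f"We believe that {tactic}, "
        f"for {personas}, "
        f"will achieve {outcome}. "
        f"We will know this to be true when we see {signal}."
    )

# Example (the feature, persona and measure are all illustrative):
print(hypothesis(
    "building an advanced search feature",
    "first-time visitors",
    "an increase in the accuracy of a user's first search",
    "fewer repeated searches per session",
))
```

Keeping the four parts explicit makes it harder to skip the outcome or the measure, which is usually where assumptions go untested.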

MVPs and Experiments

Lean UX relies upon the notion of MVP (Minimum Viable Product) to help test assumptions – will this tactic achieve the desired outcome? – while keeping the time we spend on unproven ideas to an absolute minimum.

The sooner we can find which features are worth investing in, the sooner we can focus our limited resources on the best solutions for our business problems.

The MVP is then used to run an experiment, the outcome of which will show us whether our initial hypothesis was correct or not, and whether the direction explored should continue to be pursued, refined further or abandoned altogether.

When it comes to testing, Lean UX takes the basic research techniques from traditional UX processes and overlays three important ideas:

  • Research is continuous, built into every sprint (e.g. 3 users, every Thursday),
  • Continuous activities are bite-sized (quick to organise and ruthlessly focused),
  • All activities are collaborative, with responsibility for activities such as research spread evenly across the team, removing the bottleneck often created by a solitary researcher.

By eliminating the step between researcher and developer, the quality and depth of learning is (in theory) increased.


Hopefully this has helped to introduce some of the fundamental characteristics of Lean UX. Personally I liked the way many aspects were framed, and given my experience of Agile I was glad of an approach that puts greater emphasis on the user. Although there are a lot of challenges to adopting such an approach, I hope to be able to introduce some of the more valuable aspects into our current agile process.

I’d be interested to hear other people’s experiences of Lean UX, especially gaining buy-in from a development team. If you have any thoughts please feel free to share them.


What’s the point of UX certification?

Recently at work we’ve been discussing the merits of various UCD training courses, with particular focus on those offering certification.

Our UX team is in its infancy, and still relatively small, but we’ve managed to establish a solid base of enthusiastic advocates in various areas of the business, eager to learn more and develop new skills. Because of this, we’re in the process of defining a new training syllabus, to help support the wider team as much as possible. In order to gain more sponsorship, recognition and legitimacy for our advocates within their respective lines of business (LOB), a certified course could be a useful standard to introduce.

Our counterparts in North America have already adopted a certified course as their standard, encouraging all UX practitioners to take part. But, with the right resources, we could be better off using the training budget to develop our own curriculum, tailored specifically for our industry/LOB, and structured around our established processes, rather than something more generic. This would involve considerable effort and investment. You could also argue that keeping everything in-house puts a ceiling on our abilities and knowledge, which over time could deteriorate if the courses weren’t maintained.

Is UX certification worth it?

The issue we keep returning to is – how relevant would a certified course actually be? Would it provide significant ROI? And, although it may provide our practitioners with a greater degree of legitimacy, unless the certification is aligned with our processes its practical value would, most likely, be limited. In short, is it really worth it?

A few months ago a similar question was posed on UX Stack Exchange. Having had first-hand experience of such a course, gaining ‘certified’ status in 2008/9, I felt well placed to provide an answer. With the pros and cons still fresh in my mind I wanted to explore the topic further.

Why did I need Certifying?

I graduated in 2000, with a degree in Graphic and Interactive Design, and began my career at Deepend, a digital communications consultancy in London. While there I was exposed to a new breed of people, ‘Information Architects’, who were strong proponents of ‘user centered design’. The concept was new to me, but it made perfect sense; I wanted to know more. Over the following years I taught myself as much as I could, and tried to guide my career toward UX wherever possible. By 2008 I was working in a great agency that openly supported my desire to progress further. I’d gained knowledge and first-hand experience, and was growing in confidence. But as the agency grew (putting more emphasis on UX and, by association, me), I felt I needed to validate the things I’d learnt, and supplement my formal qualifications with an industry-recognised accreditation, or at least the nearest thing to it I could find.

The User Experience of getting Certified

The course I picked was divided into four parts (User Centered Analysis, Practical Usability Testing, Effective Design, and Research in Practice), followed by a final exam.

The four courses were certainly of use at the time. For the most part they reassured me that my level of knowledge was adequate, and helped to highlight and improve certain areas of weakness. Having formally studied design, and not coming from a particularly research-orientated background, I found the ‘Research in Practice’ module of particular interest. The Design module, though, didn’t seem all that great, and in hindsight it made me question the quality of the other courses. If, as an experienced designer, I didn’t rate the Design module that highly, who was to say the other topics were any better?

Being Certified

At the time, the certification helped me gain recognition as the UX specialist within my agency, as well as the confidence I needed to fulfil the role. It encouraged me to have faith in my abilities, and provided me with various levels of support; be it from the course handouts, my notes, or the attendees I met along the way. When pitching to new or existing clients, it helped to define me as an ‘expert’, and arguably gave a fledgling service offering slightly more structure and gravitas.

For me, the impact soon faded. Today I’m still glad I have it on my CV, but I no longer feel it bears much significance; it’s not something I consciously reference anymore.

Weighing up the benefits of certification


Pros:

  • Helped to position me as a specialist within a team
  • Gave me the confidence I needed to fulfil a specialist role
  • Indirectly contributed towards winning new business
  • Arguably strengthened my CV when looking to progress my career
  • The topics covered were varied and helped fill gaps in my knowledge
  • Would be good as a recognised benchmark across a team with mixed abilities and experience


Cons:

  • Expensive if undertaking the whole course + exam
  • Any difference it made, when applying for new jobs, was most likely superficial – although I’m sure it would be of greater help if practical experience was limited
  • Attitude, experience and knowledge counts for far more, in my view
  • Some certified courses are recognised within UX, but opinions are mixed – I’d question how much weight they’d carry outside of the industry
  • I had mixed opinions on the course material, and didn’t feel like my level of knowledge improved significantly

Certification vs. Bespoke training

For our advocates and aspirant UX practitioners, certification could be of use. But I believe the return on investment is short-lived. Although each person who completed the course would gain knowledge, some basic skills, and confidence, there would have to be a degree of ‘on-boarding’ on our part in order to take what they had learnt and help apply it to our ways of working. Our ‘customers’ are senior managers and internal product owners; our ‘users’, the people sat around us. The apps we work on are all internally facing and highly technical. We work in an agile way, often constrained heavily by business processes and system architecture. Finding externally run, certified courses that transfer easily to these challenges would be tricky.

Because of the environment we work in, I believe taking the time to develop a set of internal training courses would be of far greater value to us in the long run. Although this comes with its own set of challenges, the benefits would far outweigh the negatives. It’s down to us to ensure that standards remain high, that we keep a handle on what’s happening outside of our bubble, and that any courses we do develop can evolve easily.

We’re very much in the early stages of creating a training plan, so there’s still time to back out if it begins to feel like the wrong thing to do, and if it doesn’t work out at least it’ll be an interesting experience.

I’m really interested to hear other people’s opinions on specialist UX courses (certified or otherwise). Are you UX certified? Have you developed or taken part in in-house training? Would you do it again or recommend it to others? Please feel free to add a comment.

Teaching user experience with Lego

Whenever I have to communicate what it is I do, or the benefits of UX, I feel a bit like a salesman, having to convince someone (almost against their will) why they should care about user experience as much as I do. Because of this I’m often on the lookout for different ways of educating people. Since moving ‘client-side’, after years of working in agencies, I’ve found myself in a position where more time can be dedicated to educating people without necessarily worrying about being on the clock. In this new environment, away from pitches and ‘honeymoon periods’, I get the opportunity to find more engaging ways to communicate the relevance, benefits and importance of experience design.

A few years ago at UX London Jared Spool shared his workshop technique, ‘Making a peanut butter and jelly sandwich’. In the workshop he asks a group to write down, step by step, how to make the sandwich. He then takes the raw ingredients and makes the sandwich by following the instructions to the letter. If the directions failed to tell him to remove the bread from the bag, he’d make the sandwich with the bread still in the bag, and so on. I thought this was an interesting way of getting across the need to understand your audience and not take anything for granted.

There are no original ideas

I liked the idea, but wanted to come up with an activity that would encourage greater group participation. I wanted to similarly educate people on the importance of good navigation and clear user assistance, but at the same time communicate some of the fundamentals of user-centered design. I came up with an idea I liked… and then discovered Jared had beaten me to it!

‘Testing Lego Construction’ was the sort of approach I wanted to take, but I thought it would be worth developing my idea further, to see how I could evolve it to better suit my own needs.

Lego-centered design

In contrast to Jared’s approach of two observers and one assembler, I decided to take the role of observer myself and asked two volunteers to help me, one taking the role of ‘Instructor’, the other of ‘Maker’. I bought a basic Lego toy, gave the pieces to the Maker, and the manual to the Instructor, who sat with their back to the Maker. I made sure the Maker wasn’t aware of what the end result was meant to look like, or even be.

The Instructor was asked to follow the manual and guide the Maker through assembling the Lego model step by step. They could approach the activity in any way they wished. At first I limited the number of questions the Maker could ask, but it actually made it more interesting to allow them to ask questions of the Instructor and see how, in turn, they dealt with the queries.

Beta testing

It took my volunteers around 30-40 minutes to complete the build, which was longer than I thought it would take, but in the context of a half day or full day workshop it would probably be about right.

Neither of the volunteers were big fans of Lego, which made it all the more interesting to watch, and put them on a level playing field. Even though they had good instructions to follow, confusion and miscommunication started very early on. The main confusion was over the Lego pieces. Colour was used straight away as a descriptor, but even that proved problematic as the colours didn’t match exactly; for example, black bricks in the instructions looked grey. The terminology for describing size and shape was also an issue. The Instructor kept referring to the number of “nobbles”, but what did that mean for the Maker? 4 wide, 4×4, or 4 in total? Other words and phrases that caused problems were “prongs”, “pieces”, and describing something as going “away from” or “out from” something. All these small problems soon built up to the point where both volunteers were showing signs of frustration. Once mistakes started to creep in, the task became more difficult as the model no longer matched the instructions.

Losing perspective

One of the main issues was the inability to see the task from the other person’s point of view. The Instructor described things from their own perspective, giving instructions like “horizontally”, referring to the orientation of the page, which led the Maker (working in three dimensions) to ask “what are you seeing as ‘horizontal’?” Similarly, on another occasion the Maker asked “Would it be facing me or you?”, referring to the model in front of her; the Instructor, sat with her back to the Maker, responded “both!”, once again thinking only of the instructions she was reading.

Reviewing the activity

Afterwards I asked the participants for their feedback, and to discuss the activity with each other. It was great to see how frustration soon led to empathy and the realisation that they weren’t the only one getting annoyed. A prime example was the Maker explaining how she saw the physical bricks (2 by 4, 1 by 6, thin, thick, etc.), which led the other participant to the immediate realisation: “why didn’t I explain it like that!”

The activity seemed to work well and with some tinkering could be useful. It helps communicate the importance of understanding your users: their ability levels, the terminology they’re familiar with, and their knowledge of your product, service or subject matter. It reinforces the importance of clear instructions and navigational cues, use of language, and the need for a user-friendly interface. If the Observer role was taken by a third participant, it could also help to highlight the benefits of watching people interact with your product or service, experiencing the highs and lows first-hand, educating them on the importance of user testing.

Evolving the activity

If I was to run this activity as part of a workshop in the future I’d split the group into 3 separate teams. It would probably work best if there were 9 attendees.

Each of the teams would ideally consist of a Maker, an Instructor and 1 or more Observers. The groups would be given the same Lego building task, and a 30 minute time limit. But there would be different restrictions applied to each.

  • Group A (closed) – The Instructor is given the plans, the Maker the Lego pieces. Neither is able to see what the other is doing. No time is given to prepare. Before and during the task only the Instructor is allowed to speak. This group represents a company or team that doesn’t involve any sort of research or customer feedback into their process.
  • Group B (open) – As with the first group, the Instructor and Maker have defined roles. However, this group is given 2-3 minutes beforehand to discuss the task. They can ask questions of each other, agree terminology, understand each others abilities, and discuss an approach. Once the 2-3 minutes are up only the Instructor can speak.
  • Group C (collaborative) – The final group has the same set of tasks as group B, the difference being they can have open dialogue throughout the task, in order to ask and answer questions, clarify their approach, and change things around if needed.

Ideally each team would have at least one Observer, asked to remain silent and neutral; their job is to note down how the activity went, along with positives and negatives throughout.

Once the 30 minutes are up, each Observer could then share their experience with the wider group, allowing time for the other team members to contribute their experiences too. The workshop facilitator could then discuss the differences between the three approaches and highlight the level of maturity of each from a UX perspective.

Hopefully I’ll get the chance to refine the approach more over the coming months and start to think about including it within a workshop. I’d be interested to read what you think about this, if you’ve heard or been involved in something similar in the past, and how it went. Even if you don’t think it’s a good idea I’d be really interested to read your comments.

Considering the impact of regional variance when recruiting for user research

When carrying out research do we place enough importance on geography? Based on the scarcity of research recruitment agencies outside of London, I’m guessing we don’t. There’s certainly an appreciation for regional variation when designing on a global scale, but do we put as much rigour into our approach when the audience is limited to just one country, and do we need to?

Anecdotal evidence

Last year I was fortunate enough to work with a well-known charity (top 10, based on donations). With hundreds of locations across the UK they have a significant impact on local communities, and their volunteers and supporters are incredibly passionate about their cause. Because of this they’d often refer to themselves as a collection of small local charities rather than one large national organisation. Interestingly, during the pitching process we were the only agency to suggest carrying out research across the UK. Those who also discussed user research focused purely on London or the charity’s home town as possible locations.

The issue with both of these is that they represent extremes. London is where the charity has the least amount of public awareness, but does the most work. Whereas their hometown naturally has the greatest public awareness and positive perception. By carrying out qualitative face-to-face research in Scotland, Wales, Ireland and the Midlands in addition to both of these locations (supported by more quantitative national activities) we were able to build a much more rounded picture.

Our mini UK tour proved to be invaluable, helping to form the backbone of our eventual solution. If we had limited our research purely to London our solution would have been incorrectly weighted to educating people on the proposition rather than dealing with a more engaged and experienced audience.

Urban vs. rural

I’m not suggesting London should be ignored as a research destination, or any other city for that matter. It’s more about understanding the potential implications location can have on your research findings.

My background is not in applied research, but my understanding is that the basic principle when testing a sample of a larger population is that the sample has to be representative of the whole population; otherwise the conclusions drawn are limited and can easily be misconstrued.

Comparing urban and rural living, we can easily draw up a list of attributes that are diametrically opposed. For instance, you could argue that someone living in an urban environment is more likely to use public transport, and therefore has more dead time on their hands during a commute, allowing them to consume content via a mobile or tablet device more frequently. In comparison, someone living in a rural area may have to rely on a long car journey to get to work, therefore consuming more audio content such as podcasts. Someone living out of town may be affected by slower broadband speeds, be more likely to shop online, or have greater disposable income than a city dweller on a comparable salary.

I know where you live

In the case of the charity, visiting multiple locations and discovering notable regional variations didn’t lead us to create a site that relied on geography as a navigational aid; rather, the sense of place, of relevance, was intrinsic to the experience. In fact our research helped us to better understand the various audience types and led to a task-based navigation. What it did help us to understand was how important local context was to people, and how much more they would potentially engage with the charity if we could bring what they experienced offline to the online environment.

A local solution for local people

I’m not suggesting you should include multiple rounds of testing spread across several locations with every project you face; after all, it can be time-consuming and costly, and anyway, insight, however small, is better than nothing at all, right? I don’t believe regional variation will necessarily be relevant every time, but, as with many other factors such as literacy and education, cultural background, socio-economic status, gender or age, the location you choose to carry out your research in needs to be something you take the time to consider fully.

Setting a standard

UX research isn’t held to the same standard as scientific practice, but we should look to be more diligent in our approach. Regional variance isn’t an issue when it comes to usability testing, as small groups of people will flag a large number of issues, and whether or not they’d actually use the service is arguably of little importance. Returning to the example of the charity site, getting someone who has no interest in the charity to test the donation journey is still of benefit, as they’ll still indicate if the call-to-action is misplaced or whether the request for Gift Aid is unclear. However, if you were carrying out initial research for a travel company to help define a strategy for cruise holidays, and ran sessions only in the city the ships sail from, you’d struggle when the majority of sales come from outside the area.
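The claim that small groups flag a large number of issues is usually expressed with the Nielsen/Landauer problem-discovery model, where the share of problems found by n participants is 1 − (1 − p)ⁿ. A minimal sketch, assuming Nielsen’s published average detection rate of p ≈ 0.31 (real studies vary widely, so treat this as illustrative only):

```python
def problems_found(n_participants: int, p: float = 0.31) -> float:
    """Expected share of usability problems uncovered by n participants.

    Nielsen/Landauer model: found = 1 - (1 - p)^n, where p is the
    probability that a single participant encounters a given problem.
    p = 0.31 is Nielsen's reported average, not a universal constant.
    """
    return 1 - (1 - p) ** n_participants

for n in (1, 3, 5, 10):
    print(f"{n:>2} participants -> {problems_found(n):.0%} of problems")
```

With these assumptions, five participants surface roughly 85% of problems, which is why small usability rounds are so cost-effective even when the sample is in no way representative.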

By researching across multiple locations you are better informed to focus a solution on the right approach, understanding the problems people are trying to solve or the tasks they are looking to complete. So next time you think about gathering some initial insight, give a thought to where you’re gathering it: will it make a difference? Is the choice just down to convenience, time or cost? Or is it the right place to speak to the right people?

Making a mobile usability testing sled the MacGyver way

Last year I had an influx of mobile projects and needed to find a way to carry out usability testing on a mobile device. I’ve been meaning to share my solution for a while, but it’s taken me until now to get round to it.

When it comes to carrying out mobile usability testing there’s a variety of well-documented solutions, for example Harry Brignull’s usability testing sled made for a fiver, Nick Bowmast’s variation on a theme, and Lokion Interactive’s pimped sled, beautifully monogrammed and made by Ponoko. There’s also a great slide deck from this year’s IA Summit which summarises the different approaches that can be taken, including the pros and cons of each.

What would MacGyver do?

At first I considered copying an existing solution, especially as there are so many good ones already floating around. However, I had several requirements that I didn’t feel previous sleds had fully addressed. I wanted to create a testing sled that was:

  • Unobtrusive for the person using it – this isn’t easy by any means, but I wanted to try and stay out of the way as much as possible, meaning the sled and camera had to be small, lightweight (light enough to hold in one hand) and have minimal impact on the participant’s field of vision.
  • Of a professional standard – as I work in a commercial context the sled had to represent my client and agency in a professional way. I’ve seen some solutions that rely on Blu Tack or sticky-tape to hold them together, which is a perfectly fine solution but something I personally wanted to avoid.
  • Adjustable and interchangeable – I wanted the ability to alter the camera position (to allow for lefties and righties) as well as accommodate multiple devices without too much fuss.
  • Easily duplicated and disassembled – heading up a rapidly growing UX team at the time the rig had to be remade easily and consistently within a short time frame. To accommodate different devices and testing in multiple locations it had to be in kit form.
  • Made from widely available parts – to allow for future duplication I wanted the rig to contain no expensive, limited edition or bespoke parts.

The ingredients

I shopped around for affordable parts that were readily available so I could make more in the future if everything went to plan, and if something went horribly wrong I could source replacement parts easily. I managed to get everything, including the camera, for a little under £42 (including postage). I used:

* Initially I planned to use superglue, but instead opted for small adhesive Velcro patches so that it could be disassembled if necessary

The only tools used were a scalpel and a Sharpie (to mark where to cut), MacGyver would have been proud.

The ingredients for my mobile testing sled

Making the sled

Firstly I had to attach the webcam to the case in a way that wasn’t permanent but was secure and stable. The Hue HD webcam comes with a USB stand, but it wasn’t necessary as the USB plug on the camera could go directly into the extension lead.

The Hue webcam and iPhone case

Four small cuts were made in the back of the iPhone case, the space between them equal to the width of the USB extension lead. Two cable ties were passed through, horizontally to the case, and left untied.

Attaching the camera to the case

Then the USB lead was placed between the cable ties, which were tightened to secure the lead in place. A small square of adhesive Velcro was placed just above the USB port, with the other half stuck to the webcam’s USB plug. This meant that once the webcam was attached to the USB port the Velcro held it in place and stopped it from moving around or detaching under its own weight. Initially I was worried that the Velcro wouldn’t hold the weight of the camera, but it actually worked well and was pretty solid. Finally the iPhone was clipped into the case.

Attaching the camera

The end result

The camera was attached so that it curved up from the bottom of the phone and therefore didn’t obstruct the user’s view too much; it also meant that the camera’s built-in mic was close to their mouth. Once the camera was attached and the phone was in the holder, the USB lead could be connected to a laptop, which in this instance was equipped with Morae testing software. By using Morae, we could position a second webcam (we used the laptop’s built-in webcam) to capture the participant’s facial expressions and body language.

The finished mobile testing sled

It took a little bit of tweaking to get the camera positioned correctly so that it was in focus, and the webcam did add weight to the phone and unbalance things a little, but without hands-on experience of other testing sleds I can’t say whether this was better or worse than other solutions. We also found that, if we didn’t get the position perfect, the camera was occasionally susceptible to wilting to the left or right, but only very slightly, and not to the extent that it was noticeable to the participant or detrimental to the recording.

That said, for only £42 (not including the recording software license) and only 30 minutes to build from scratch, I was really pleased with the end result. It was straightforward to adapt for other devices (e.g. iPad and Android devices) and very convenient to transport. I’d recommend it as a solution, and it definitely worked for me, but without trying out alternatives I couldn’t say how it compares.

If you have a go at recreating this sled I’d be really interested to hear about it, whether the experience is good or bad.

The Devil is in the Detail – what does a default state say about you?

Last week I commented on Michael Wilson’s post about ‘sort by default’ as an option when customising search results or product listings. I shared my personal experience with a recent client and thought it was worth sharing here too.

Intelligent defaults

In short, the point Michael made was that sites providing the ability to sort content without setting a relevant default are missing a trick.

An example of how ASOS don't set a default 'sort' state

Michael used the example of ASOS. Although they include a sort feature with the options ‘what’s new?’, ‘Price High to Low’ and vice versa, they don’t explicitly set a default state. This raises the question of how the products are currently sorted: is it editor-defined, chronological, alphabetical, or something else entirely?

Although this is a valid issue, and something we, as user experience professionals, need to be aware of, it’s a relatively quick fix. The real complexity is in understanding what the sort says about the website in the first place.

What does a default sort say about a brand?

Recently I had a conversation with a travel destination client about this exact issue. During a workshop intended to cover off the finer points of a prototype, a heated debate started around what the default ‘sort’ state should be for holiday search results. Should it be an ‘editor’s choice’, or be sorted by more neutral means: location, accommodation type (lodge, chalet, etc.), availability, or price?

After much discussion everyone agreed that, based on our knowledge of the customer, price was the best option. But then came the question; should we sort high to low, or low to high?

By preselecting ‘high to low‘ you communicate that you are a higher-end brand and that quality, rather than cost, is a priority for your customers.

Conversely, by presenting items ‘low to high‘ you align your proposition with affordability, value, and competitive/budget pricing rather than the sense of exclusivity or luxury.
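To make the point concrete, here is a minimal sketch of how the default could be encoded as a single named constant, so that it is a deliberate brand decision rather than an accident of database order. The product data, field names and sort keys below are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Holiday:
    name: str
    price: float

# Hypothetical catalogue; in practice this would come from a backend.
HOLIDAYS = [
    Holiday("Alpine chalet", 1200.0),
    Holiday("Budget lodge", 250.0),
    Holiday("Luxury spa retreat", 3400.0),
]

SORTS = {
    "price_asc":  lambda hs: sorted(hs, key=lambda h: h.price),
    "price_desc": lambda hs: sorted(hs, key=lambda h: h.price, reverse=True),
}

# One explicit constant: "price_asc" signals a value proposition,
# "price_desc" signals a higher-end, quality-first one.
DEFAULT_SORT = "price_asc"

def search_results(holidays, sort=DEFAULT_SORT):
    """Return holidays in the requested order, falling back to the default."""
    return SORTS[sort](holidays)

print([h.name for h in search_results(HOLIDAYS)])
```

Keeping the default in one visible place means changing the brand’s answer to the value-versus-luxury question is a one-line change, and the decision is documented rather than implicit.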


For the travel brand it came down to making a fundamental decision about themselves that, surprisingly, they hadn’t openly discussed or defined before: are we a value/budget brand (such as Butlins or Easyjet), or is money not an issue for our customers, meaning we should focus on quality and align more closely with brands such as Mr & Mrs Smith or Kuoni? Once we posed this question to them it was an easy decision to make, and it helped drive other decisions across the site.

It’s safe to say, with hindsight, that this was an issue we should have had clearly defined at the start of the project as part of a wider strategy. In actual fact it was, but with so many stakeholders in the room it became apparent that it was not a shared view, and certainly not something that had been openly discussed.

The default state of a sort isn’t going to dramatically change people’s perceptions, but it’s this kind of little detail that, in my opinion, really matters, as it can help to provide a cohesive and consistent experience.

With a clearly defined experience strategy these sorts of decisions should be straightforward and not open for debate (e.g. “we’re a value brand appealing to families, therefore the only logical answer is to provide our customers with the cheapest options first”). Without this, the experience can end up feeling disjointed and can lead to conflict.

In short, do sweat the small stuff, but be clear on your strategy and proposition so that you keep the sweat to a minimum.