Issue № 293

Usability Testing Demystified

There seems to be this idea going around that usability testing is bad, or that the cool kids don’t do it. That it’s old skool. That designers don’t need to do it. What if I told you that usability testing is the hottest thing in experience design research? Every time a person has a great experience with a website, a web app, a gadget, or a service, it’s because a design team made excellent decisions about both design and implementation—decisions based on data about how people use designs. And how can you get that data? Usability testing.


Jared Spool will tell you for free that when his company researched the causes of failed designs, they found that lack of information was the root of all bad design decisions. The point of user research is to make good, solid, confident decisions about design. Why usability testing as opposed to using other methods? I contend that 80% of the value of testing comes from the magic of observing and listening as people use a design. The things you see and the things you hear are often surprising, illuminating, and unpredictable. This unpredictability is tough to capture in any other way.

The other 20% of the value comes from the pre-testing discussions team members have as they decide what their Big Questions are and the post-testing discussions about what to do with what they’ve learned.

One test doesn’t fit all

When I say “usability test,” you may imagine something that looks like a psych experiment: The “Subject” is in one room, with a stack of task cards and may even have biometric sensors attached. The “Researcher” is in another room, madly logging data and giving instruction over an intercom as the voice of god.

That image of a usability test is what I’d call “formal usability testing,” and is probably going to be summative and validating. It’s a way to verify whether the design does what you want it to do and works the way you want it to work.

This is often the kind of test done toward the end of a design cycle. What I’m interested in—and I think most of you are interested in—is how to explore and evaluate in the early and middle stages of a design.

The classic process

The process that Jeff Rubin and I present in the Handbook of Usability Testing, Second Edition could be used for a formal usability test, but it could also be used for less formal tests that can help you explore ideas and form concepts and designs. The steps are basically the same for either kind of test:

  • Develop a test plan
  • Choose a testing environment
  • Find and select participants
  • Prepare test materials
  • Conduct the sessions
  • Debrief with participants and observers
  • Analyze data and observations
  • Create findings and recommendations

Let’s walk through each of these steps.

Develop a test plan

Sit down with the team and agree on a test objective (something besides “determine whether users can use it”), the questions you’ll use, and characteristics of the people who will be trying out the design. (We call them participants, not subjects.) The plan also usually includes the methods and measures you’ll use to learn the answers to your research questions. It’s entirely possible to complete this discussion in under an hour. Write everything down and pick someone from the team to moderate the test sessions.
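
To make this concrete with a made-up example (borrowing the hotel reservation scenario that comes up later in this article), a lightweight plan might fit on a single page:

  • Objective: learn whether people can book a room on their own, without calling the hotel
  • Research questions: Where do people hesitate when choosing dates and room types? Do they notice the cancellation policy before they confirm?
  • Participants: people who book their own travel online
  • Methods and measures: individual moderated sessions; task completion, where people stall, and what they say as they work
  • Moderator: one named team member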

Choose a testing environment

Will you use a lab? If not, what’s the setup? Will you record the sessions? Again, the team should decide these things together. It’s good to include these logistics in the test plan.

Find and select participants

Focusing on the behavior you’re interested in observing is easier than trying to select for market segmentation or demographics. If you’re testing a web conferencing service, you want people who hold remote meetings. If you’re testing a hotel reservation process on a web site, you want people who do their own bookings. If you want to test a kiosk for checking people into and out of education programs, you want people who are attending those programs. Make sense?  Don’t make recruiting harder than it has to be.

Prepare test materials

You’re going to want some kind of guide or checklist to make sure that the moderator addresses all of the research questions. This doesn’t mean asking the research questions of the participants; it means translating the research questions into task scenarios that represent realistic user goals.

In the test materials, include any specific interview questions you might want to ask, prompts for follow-up questions, as well as closing, debriefing questions that you want to ask each participant.
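
To make that translation concrete with a hypothetical example: if one of your research questions is “Can people find a room that fits their dates and budget without help?”, the task scenario you hand the participant might read, “You’re taking a trip next month and need somewhere to stay for two nights. Find a room you’d actually book and get as far through the reservation as you can.” The scenario describes a goal the participant can own; it doesn’t mention the screens, labels, or features you hope they’ll use.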

Conduct the sessions

The moderator is the master of ceremonies during each session. This person sees to the safety and comfort of the participants, manages the team members observing, and handles the data collected.

Though only one person from the team moderates, as many people from the team as possible should observe usability test sessions. If you’re going to do multiple individual sessions, each team member should watch at least two sessions.

Debrief with participants and observers

At the end of each session, be sure to take a step back with the participant and ask, “How’d that go?” Also, invite the trained observers to pass follow-up questions to the moderator or to ask questions themselves. Thank the participant, compensate him or her, and say good-bye.

Now, the team observing should talk briefly about what they saw and what they heard. (This discussion is not about solving design problems, yet.)

Analyze data and write up findings

What you know at the end of a usability test is what you observed: what your team saw and heard. When you look at those observations together, the weight of evidence helps you examine why particular things happened. From that examination, you can develop theories about the causes of frustrations and problems. After you generate these theories, team members can use their expertise to determine how to fix design problems. Then, you can implement changes and test your theories in another usability test.

What you get

If you follow this process in a linear way, you’ll end up with thorough planning, solid controls, heaps of data, rigorous analysis, and—finally—results. (As well as a lot of documentation.) It can feel like a big deal, and sometimes it should be.

But most real-world usability tests need to be lighter and faster. Some of the best user experience teams do only a few hours of testing every month or so, and they may not even think of it as “usability testing.” They’re “getting input” or “gathering feedback.”

Whatever. As long as it involves observing real people using your design, it’s usability testing.

Someone, something, someplace

Really, all you need for a usability test is someone who is a user of your design (or who acts like a user), something to test (a design in any state of completion), and someplace where the user and the design can meet and you can observe. Someplace can even be remote, depending on the state of the design. You can do all that fancy lab stuff, but you don’t have to.

Once you get into a rhythm of doing user research and usability testing, you’ll learn shortcuts and boil the process down to a few steps that work for you. When we get down to the essential steps in the usability testing process, this is what it tends to look like:

Develop a test plan

In the classic process, a usability test plan can be several pages long. Teams in the swing of doing testing all the time can work with a minimalist structure with one or two lines on the elements of the plan.

Find participants

Again, this is about behavior. If, say, your design serves parents getting their kids into college, then the people you want in the study are parents going through that process. Just make sure you:

  • Know your users
  • Allow enough time
  • Learn and be flexible
  • Remember they’re human
  • Compensate lavishly

Conduct the sessions

If you’re the moderator, do your best to be impartial and unbiased. Just be present and see what happens. Even the designer can be the moderator, as long as you can step back and treat the test as an objective exercise.

Remember that this is not about teaching the participant how to use the interface. Give a task that realistically represents a user goal and let the rest happen. Just listen and watch. (Of course, if the task is something people are doing in real life and they’re having trouble in the session, show them the correct way to do the task with the current design after you’ve collected your data.)

As the session goes on, ask open-ended questions: Why? How? What?

Debrief with observers and come to consensus about design direction

Talk. Brainstorm. Agree. Unless the design was perfect going into the usability test (and that’s a rare thing), use the observations you made, even if the team has only done one or two sessions, to come up with theories about why things happened for participants the way they did. Make some changes and start the cycle again.

Where do great experience designs come from? Observing users

Getting input from users is great; knowing their requirements is important. Feedback from call centers and people doing support is also helpful in creating and improving designs. Whatever your team might call it—usability testing, design testing, getting feedback—the most effective input for informed design decisions is data about the behavior and performance of people using a design to reach their own goals.

Teams that have lots of data make better design decisions. Nine times out of ten, that data comes from some kind of usability testing.

If you’re interested in seeing examples and templates for test plans, recruitment, and session scripts, you can download them for free from the website that accompanies the Handbook of Usability Testing, Second Edition: www.wiley.com/go/usabilitytesting.

About the Author

Dana Chisnell

Dana has helped hundreds of people make better design decisions by giving them the skills to gain knowledge about users. She’s the co-author, with Jeff Rubin, of Handbook of Usability Testing, Second Edition (Wiley, 2008).

26 Reader Comments

  1. Great article, Dana. Thanks for clarifying the value, purpose, and flavors of usability testing. From what I’ve seen, the bad rap seems to stem from teams working with less experienced practitioners who ask the wrong questions, leading questions, or believe they must use the formal lab approach vs. doing quick or in-context testing. Hope this helps squelch the naysayers.

  2. Thanks, Dana. Very good information. I’m always hearing that some testing is better than none at all. That being said, what do you think about usertesting.com? I’m considering it as an option for quick and inexpensive testing for some of our firm’s clients.

  3. @ronansprake – there was a project to redesign one of Nielsen’s pages, you can see the results at http://www.designbyfire.com/000094.html. Nielsen does not pretend that his website is an example of good aesthetic design, he knows that it is not, but says that he does not have the skills to make it aesthetically pleasing and so he would rather leave it plain and simple than make it a mess.

    Despite its looks, it is one of the most popular websites out there for web authoring and usability research, which just proves his point – if you build it well, they will come … even if it looks cruddy.

  4. I’ve only done one session, but I found people didn’t like being asked to “test” something – they didn’t want to take a test. I learned a lot that time, mainly about what I should and shouldn’t say.

    It’s difficult to work behind a screen everyday and then go into such a public and more than likely humiliating situation. Through humiliation comes humility though. I guess.

    Next time I will try the “gathering feedback” approach.

  5. Recently while talking to a “Usability Expert” at one of our local utility companies, she asked if I had any experience with different usability software programs that spit out usability data. I must have had a blank look on my face as I thought to myself, “how many software programs are going to be using your website?”

  6. “I’ve only done one session, but I found people didn’t like being asked to ‘test’ something — they didn’t want to take a test. I learned a lot that time, mainly about what I should and shouldn’t say.”

    What you need people to understand is that they aren’t being tested. They are the examiners, your website is under the microscope. If they are having difficulty, that means that the website probably isn’t well-enough designed to meet their needs. Because even if they are clearly not demonstrating a lot of intelligence, logic or resourcefulness – and there are a lot of people out there who don’t! – they are still the people that your website is there to serve, and you’ve got a better chance of changing your website to fit what people can work with than you have of changing the way they work with your website.

    Yes, sometimes it’s hard. When you’ve created a design that you think is really easy to use, and some knucklehead comes along and can’t follow simple instructions, ignores the big highlighted links and buttons that you’ve put in prime spots and clicks around randomly for a while before giving up, it’s hard not to scream and cry and ask them why they can’t see the blindingly obvious when it’s right in front of them … but you need to be able to understand their thought processes and what they are looking for/at, so that you can come up with a design that they can follow.

  7. To @ronansprake and Stephen Down

    Stephen is right – Jakob has said this about useit.com. Visual design is not his forte.

    Okay, so useit.com looks very 1997, but so what? It’s about content. Look at craigslist. Not beautiful but tons of great content that is pretty easy to get to.

    What you can learn through getting feedback from users is which parts of a design matter and how much, so you can make an informed decision about where to spend resources. Jakob has decided that, based on his audience, it’s about content, not slick visual design.

  8. @kwalser Thanks much!

    It’s easy to do a bad job of moderating usability test sessions or user research, but I think most people are well-intentioned about it.

    I actually find it much easier to ask the right kinds of questions in quick, in-the-wild tests because they tend to be very focused. And the context offers so much to observe. In formal lab tests there’s a lot of pressure to do it the “right” way, and less opportunity to recover if you screw up.

    But mostly it just takes practice. So I take your point about people having bad experiences with usability testing because their facilitator is not experienced.

  9. @usersrule,

    Glad you liked the piece. I am all about observing people using designs to reach their own goals. It’s about being present to witness the crazy things they say and do that you could never have predicted — it’s that “ah ha!” moment that you get when a participant creates a workaround to some hindrance in the design that gives you what you need to make good design decisions. There’s no substitute for that.

    And more data isn’t necessarily better. It depends on the source.

    Two things offend me about usertesting.com.

    1. What I do is get people to help me evaluate a design. It is the design that is being tested, not the user. So I *hate* the term “user testing.” It’s usability testing.

    2. usertesting.com reduces “user testing” to a QA process. It isn’t QA. It’s a tool for the team to understand users. So, to do remote, unmoderated usability testing, you have to make sure that your test is *very* well designed, so there are no questions about the tasks. The other thing with QA is that it’s just running stuff through a test kit. Usability testing is about learning *why* users do what they do in addition to the *what*. So although with usertesting.com you get video of people, there’s no guarantee they’re going to think aloud, and you have no opportunity to ask follow-up questions.

    Short answer: I’m not a fan.

    Dana

  10. Many of us would like to do user testing, but how can resources which are typically thin be allocated? Resources meaning time and/or money.

    Specifically, how can you gain meaningful input on something that’s quite frankly not even at the “usable” stage in production yet, when getting it there would eat up some of the budget… only to learn from the testing that things have to be re-done? How is this reconciled?

  11. @Bill Leonard

    I come from a background in game design and have implemented some of the techniques that I learned from game development into my web development projects.

    I always create a design document upfront that not only describes in text exactly what the goal of the project is, but also provides well thought out visual diagrams of how the application will work. The diagrams do not have to be an exact replication of what the user-interface will look like. Think of it as the sketch that a painter lays on a canvas before actually painting. The main goal is to be able to quickly create and modify a tangible representation of the process that the user will have to go through in order to achieve the desired goal.

    This design procedure provides the ability to immediately identify unnecessary actions that the user will have to go through to navigate your application. You can also easily create several diagrams of different ways that the navigation could work. You can then have a simple pre-test by presenting the diagrams to multiple people from your user demographic and getting a consensus on which UI seems to be the most user friendly.

    This will allow you to focus your development from the start before spending too much of your budget. You should always do a usability test once you’ve got a working prototype, but a good design document goes a long way in making sure that the first prototype is not a waste of time, money or resources.

  12. The problem with testing the usability of early prototypes is that you have to find many more people to test them on. I’ve realized that most of the people who tested early versions of a design don’t want to spend time on it again. Enterprises with a considerable budget probably have no difficulty solving this problem, but for small projects good usability testing is challenging anyway.

  13. @Bill Leonard
    @timmcart
    @patrick_l

    Great discussion. Costs are always a concern; budget and time for gathering feedback from users must be built in to a project plan from the beginning.

    For help with costs, you might want to check out a book by Mayhew and Bias called *Cost-Justifying Usability, Second Edition*.

    The idea of testing early on prototypes is that the team must be testing something it is willing to experiment with, and if the design being experimented on doesn’t work, the team must be willing to throw it away (or part of it) and start over (or revise heavily). Prototypes are trials – they’re meant to be disposable but also to be cumulative. As you learn more, each successive one gets more usable and more detailed. For excellent help with prototyping software, web app, and web site UIs, see Carolyn Snyder’s *Paper Prototyping*. It’s brilliant and makes the whole process easy.

    To say that you need a lot of users to test a prototype isn’t quite right. You can test the first prototype with 1 or 2 or 3 users, and you’ll probably get data right away. That is, just within those few people, you will see where there are frustrations and hindrances. Then you can change the prototype and test with a few more people. Though there’s a lot of discussion about how many people to have test a design, my rule is to use as many as you need to feel confident in the design decisions you make. If you observe one person having a particular problem and feel like you know just what to do about it, go ahead and change the prototype UI and test again. With luck, you’ll find different problems, or be able to refine what you did in the last round.

    In any case, usability testing is never meant to be a one-shot thing. It’s part of an iterative process of improving a design. The idea is that you’re learning what you need to know from real people to help you make well-informed design decisions.

  14. Several commenters have expressed some concern about recruiting participants for user research and usability studies.

    This is the most important step to get right in user research. If you don’t have the right users, you don’t have the right data.

    So, that sounds hard and expensive, but it doesn’t have to be.

    When you’re just starting out with a design, you can use people close to you. People in your company, people in your family, or friends — but they still must be like your real users. And they shouldn’t know anything about what you’re trying to do. That is, they should not be people on your project. Make sense?

    Usually, you can just say, “Can you give me a few minutes to try something out?” And they almost always will. Buy a cup of coffee or soda for them and thank them profusely. Not costly at all.

    As the prototypes become more real and you want to do more formalized testing, you can keep costs lower by going to the participants rather than setting up a formal test in a lab (unless your company has a lab and doesn’t charge you back for using it). Visit them in their offices or wherever they would normally use the design.

    This is a less expensive option than asking participants to come to you because you’re using less of their time. When you ask them to come to you, though it isn’t obvious, you’re also compensating them for travel time and the hassle of getting to you. If you visit them, you also get the benefit of bonus data because you’ll learn things by being in their environment that you won’t pick up from their being in your space.

    I’ve got a lot of articles on my blog about recruiting. You might want to check those out. This link will get you to all the posts tagged with “recruiting”: http://usabilitytestinghowto.blogspot.com/search/label/recruiting

    You might also want to read my article on Boxes and Arrows about how to treat study participants: http://www.boxesandarrows.com/view/why-we-call-them

  15. @ Dana Chisnell

    Thanks for providing the great resources. The article on Boxes and Arrows is spot on. Many of the projects that I have been working on recently have extremely tight budgets. That being said, we are usually pinching pennies during development. Treating my testers well has really paid off for me.

    I’ve been able to establish a good relationship with 5-6 people who will participate in tests as often as I need them. These users are excited to see how their feedback affects our applications and are always willing to test the prototypes at no cost. Usually, we will provide lunch or other amenities before or after the test. That is enough to keep them happy.

    These guys are great at finding bugs in our systems and declaring what changes they would like to see. I always test on a larger group once my core group of testers seem comfortable with the app. This process saves us a good amount of time and money.

  16. Thanks for that article, very informative.
    Whenever I’m designing and considering usability, I always think of the classic example of the newly built student campus. Put the buildings in place, but not the footpaths. After a couple of months, the students will naturally choose the easiest path between buildings and wear a groove in the grass. Build the footpaths there!

  17. Interesting article.

    I’m a big believer in testing but sometimes I get lazy. I think the feedback idea rather than testing is a great idea.

    I think content is key, but visuals are important.

  18. Usability testing may be an arduous process and it could very well be a luxury for some websites, but no matter what, it will always provide some insight into your website and your users.

    It’s amazing to realize that you can spend weeks hammering out a quality sitemap, navigation structure, and labels for important links, only to find that the actual visitors you are targeting don’t use the same terminology you do, and therefore can’t find what they’re looking for (even if it’s right in front of them).

    Usability testing doesn’t have to be burdensome — hold a simple focus group and ask participants to complete a “homework assignment” you’ve come up with. In the end, you can see how many were successful and give them a survey to add in their thoughts and frustrations with the project.

  19. Thanks Dana, great article. Thought you might be interested in our new fast and inexpensive service.

    It can be used for any type of online property (websites, website prototypes, online adverts, search processes, etc).

    It has a very attractive delivery speed and price point (24-48 hours from order, $299 for a five-person user experience test). The features are as follows:

    • Clients define a target URL (their own, or that of their competition or a best practice)
    • Clients define a goal for the testers to perform (e.g., “find product x and take it through the checkout process”)
    • Clients define the demographics of the kind of testers they would like
    • Clients define survey questions

    Within 24-48 hours clients receive a report that includes, for a minimum of 5 testers:

    • Webcam recordings of the testers conducting their assigned goal/task
    • A synchronized recording of the entire screen session during the test
    • ClickFlow Analysis
    • Contextual written “bubble” commentary on screenshots
    • Survey results
    • Other quantitative data

    Visitors to the site (www.userlytics.com) can request a free 1 person sample test.

    Kind regards,

    Alejandro

  20. We use this, and it is, as Dana says, about evaluation, testing, and great design. I think that the method of Alpha to Beta to release is key to success in any web application. Usertesting benchmarks the experience, and I think you are right not to be a fan of this. I am for the before-launch and after-launch approach, which employs actual user information, market research, and post-launch click heat mapping.

  21. If you are a small company and don’t really have the resources for such a large-scale usability test, would you just carry out in-house usability testing for a new piece of software, or invite a select number of your regular customers to test the software online and give you feedback? As an online company it’s kind of difficult to set up a testing environment where you can visually see how people react; you can only get written feedback via a form, where I think asking the right questions will determine how good and useful that feedback will be.

  22. For those who are looking to get feedback on your websites, I would recommend trying http://www.usertesting.com. I do not own this site; however, I am one of the testers who review websites. The site shows website owners how visitors (such as myself) view and would use their website through the power of video. Site owners write out tasks that they want the testers to complete. The testers complete the tasks and send their videos to the site owner.
