Affordable Usability Testing

Yesterday I described the first two steps of usability testing that you can afford even on a shoestring budget. Now that you've come up with five tasks or scenarios to test, and know who your test subjects will be, it's time to get the testing underway.

First, make sure that everything you need to test is working. This may seem like a trivial step, but you’d be surprised how many people neglect it – and you can’t test the site if your script doesn’t work. Work out all the software bugs before going in front of your test participants. Run through each task yourself to make sure it can be completed properly.

While you're running through these tasks, time them. Make sure that each of the five tasks you want to test takes no more than 15 to 30 minutes to complete; if one runs longer, you may need to redesign the scenario. Why? Users get bored. You'll usually want each tester to run through three of your five tasks. Respect their time and yours, and "Keep the tests short and snappy," as Neeman notes.

Resist the urge to guide the user through the tasks. Once your site goes live, you're not going to be talking visitors through the process, so don't do it now. This is why, when you designed the tasks, it was important to use language that explained what needed doing without duplicating the wording used on the site. As Neeman states, "you are testing the terminology the site uses as much as the design of the site."

While you shouldn't guide the user through the site, you should feel free to ask questions. Users may not attempt the tasks you've given them in the ways you anticipate. If they veer off track, keep your questions open-ended and non-defensive. You may be tempted to ask "Why did you do that?" when they do something odd; instead, ask "What do you think this will do?" in a neutral tone of voice.

Make sure you let them talk. Some people, with just a little prompting, will tell you the most amazing things. They'll talk through their entire thought process while testing. Others will tell you things that turn out to be useful outside the context of the test. "I've had participants tell me their complete business process, including profit margins, during tests," Neeman observed.

While the test is going on, you need some way of capturing the results. A screen recorder such as CamStudio or WebEx can help with this. Eye-tracking software may be considered the gold standard, but it's expensive. If you're observant, you can get good results just from taking notes with pen and paper. Combine pen and paper with screen capture software, and you'll be pleased with how much information you can get.

It helps to sum up each task with some kind of grade. Neeman has used the pass/fail method on tasks before, but he prefers a scale of 0 (task accomplished) to 3 (could not find anything), calling it "a better approach to grading tasks on a step by step basis and in aggregate." The academically inclined might prefer to give a letter grade. Use a system that you'll understand and be able to explain later.
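If you record those grades in a structured way, aggregating them later is trivial. Here's a minimal sketch (participant names, task names, and scores are all made up for illustration) using Neeman's 0-to-3 scale, where a high average flags a trouble spot:

```python
# Sketch: per-participant task scores on a 0 (task accomplished)
# to 3 (could not find anything) scale. All data is hypothetical.
from statistics import mean

results = {
    "participant_1": {"checkout": 0, "search": 2, "edit_profile": 1},
    "participant_2": {"checkout": 1, "search": 3, "edit_profile": 0},
    "participant_3": {"checkout": 0, "search": 2, "edit_profile": 1},
}

# Aggregate by task: the higher the average, the bigger the problem.
tasks = sorted({task for scores in results.values() for task in scores})
for task in tasks:
    task_scores = [p[task] for p in results.values() if task in p]
    print(f"{task}: avg {mean(task_scores):.1f} over {len(task_scores)} testers")
```

Even a spreadsheet will do; the point is that each task ends up with a number you can compare across participants and across test rounds.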

Once you’ve completed your usability testing, it’s time to analyze the results. Start by looking for patterns. Remember that you need more than one incident to establish a pattern. If one person says a button is hard to find, it could just be his preference; if two or three people say it, maybe you need to look at why they’re having trouble finding it.
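A simple tally makes those patterns jump out. This sketch (with made-up note data) counts how many participants reported each issue, so anything mentioned by two or more people is flagged:

```python
# Sketch: tally issues across participants. Anything reported by two
# or more people is a pattern worth investigating. Data is hypothetical.
from collections import Counter

notes = {
    "participant_1": ["buy button hard to find", "quantity field confusing"],
    "participant_2": ["buy button hard to find"],
    "participant_3": ["shipping cost unclear", "buy button hard to find"],
}

issue_counts = Counter(issue for issues in notes.values() for issue in issues)
for issue, count in issue_counts.most_common():
    flag = "PATTERN" if count >= 2 else "one-off"
    print(f"{count}x {issue} ({flag})")
```

For this to work, you'll need to describe the same problem with the same wording in your notes; normalizing your phrasing as you transcribe them is worth the few extra minutes.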

Look through your notes for feedback you weren’t expecting. Neeman recommends highlighting comments that might be about something outside of the tasks assigned “but make great sound bites for describing issues with your site.” For example: “I already ticked off that I’m selling a one-of-a-kind item, why do I also have to fill in the quantity?”

Be prepared to discuss the results you've received with the website's team. Keep in mind that your results are subjective, but that doesn't mean they shouldn't be taken seriously. You might want to gather some quantitative results to bolster your arguments, such as stats from watching users at each stage of the conversion funnel and noting where they abandon the process.

Which leads to the next point: testing is not the be-all and end-all. It is most effective when combined with other data collection methods. Neeman notes that he sometimes uses Attention Wizard (http://www.attentionwizard.com/aw/). This tool predicts where users might click based on contrast of color values. Using other validation methods, in addition to testing, will give you better results.

Finally, realize that this won’t be the last time you’re testing. The more you test, the more you refine your offering. Neeman notes that he has tested as much as every two to three weeks when working on some projects. “I’ve had a couple of clients where we tested over 15 users with different tasks, creating a great amount of data that helped in product design,” he explained. Do not think of testing as a chore or a waste of time. Rather, think of it as a way to get valuable information that will help you improve your website (and your bottom line) – information that you can’t get any other way. Good luck!
