Last week we drafted a usability test and tested one user in order to get a real experience with the theories and abstractions we were researching and discussing. Our results were surprising.
What We Wanted
We wanted to figure out a repeatable process for conducting a user test that would improve upon the simple watercooler test – the type of test my friend at textWoo.com conducted. Additionally, we wanted to “user test the user test” by quickly getting feedback on an early draft of our test scheme, in the hopes of creating a more effective testing tool. And, we wanted to dive in.
What We Did
I downloaded and edited a script from the usability.gov site, which came from the SUS framework and a usability test from 3w.org. I customized it by looking at each page type on the JavaJack’s site and creating a task that would engage the user with that page. I didn’t make the tasks ‘hard’, but I didn’t want them to be too specific either – I didn’t want to test the user’s ability to read, for instance. Our friend, Anna, came by just in time to be the tester.
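For what it’s worth, the SUS framework mentioned above boils down to a simple scoring formula: ten Likert items on a 1–5 scale, where odd-numbered (positively worded) items contribute their score minus one, even-numbered (negatively worded) items contribute five minus their score, and the total is multiplied by 2.5 to land on a 0–100 scale. A minimal sketch:

```python
# Sketch of System Usability Scale (SUS) scoring. Assumes ten Likert
# responses on a 1-5 scale (1 = strongly disagree, 5 = strongly agree).
def sus_score(responses):
    """Return a 0-100 SUS score from ten 1-5 Likert responses."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses):
        # Odd-numbered items (index 0, 2, ...) are positively worded:
        # they contribute (response - 1). Even-numbered items are
        # negatively worded: they contribute (5 - response).
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# Example: a fairly positive set of answers
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # -> 85.0
```

The alternating wording is the clever part: it forces the tester to actually read each statement rather than circling the same number down the page.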
What We Got
The results of the very unscientific test were intriguing and beneficial to the overall design of the site. As Anna talked through using the site with Ben and me watching, we noticed some things that could be improved and changed. Time well spent.
After the test, our thoughts turned to the testing process itself. Ben’s post explains his thoughts on the critical path and the purpose, goals and objectives of the site, and how they relate to user testing. Here are some other thoughts about the user test and the testing process in general.
Users aren’t designers – don’t ask them to critique a site design
Most scripts I’ve seen ask the user this question at the beginning of the test:
Please give me your initial impressions about the layout of this page and what you think of the colors, graphics, photos, etc.
When you ask this question, the user seems to become unsettled and defensive. They have to form an opinion and defend it. They stop using the site and critique it instead. They start to look at the site as a designer would. The user is tainted after that simple little question. Ask the user about their own experience… and why not ask them AFTER they have the experience?
To fix this, we came up with a few ideas. Let them complete the tasks, then ask them about their experience. Record the session (the screen, webcam and audio) without you in the room. Or just don’t ask the question – there are better ways to get initial reactions from users, such as the user-testing sites in the sidebar. Or conduct separate tests for reaction and impression.
Hiring UX facilitators: Flies apply within. Humans need not apply
You want to be a ‘fly on the wall’ as much as possible. Humans suck at this. Evaluator bias is rampant, and I’m doubtful you can eliminate it. Simply put, it’s when the user being tested feels they should answer a certain way, or senses the agenda behind the test questions. Who doesn’t feel that in every survey! Human social norms get in the way here (perhaps not in New York City, granted). People tend to be polite to strangers and to people in authority, so I figure negative answers are rare. Users will try to figure out what you want and try to give you that. The act of testing will influence their use. I feel the designer is the least desirable person to facilitate the test. If the user feels that the facilitator has an agenda or prefers one outcome over another, then the test is compromised.
You can’t hire actual flies to do your testing (they don’t make lab coats and clipboards small enough). Here are a few ideas we had to correct the problem of evaluator bias and human factors. You could use a remote testing service – like the ‘Mouse Tracking Tools’ in the sidebar. Run face-to-face tests in a place familiar to the user – office, coffee shop, mall, home – so the user is more comfortable giving honest opinions. The facilitator should not be perceived as someone affiliated with the site – regardless of whether they are or not – nor as an authority of any kind. Perhaps you could test several other sites to hide the site you are actually testing (though this might be time/resource intensive).
Something is better than nothing.
Still, doing any type of test is better than doing nothing. The simple act of watching somebody go through the site is very valuable. Testing gives you insight into the flow of the site, whether the site is mechanically (or functionally) sound and working, and whether the user finds and stays on the critical path or primary site goal.
You sellin’ what I’m buying? Great, let’s do this.
Each user has their own goal when coming to the site. Each site owner has a goal when building a site. If the two goals match, great! Now get out of the way and let the site churn out money. The user clicks through the site and their goal is met. This click trail through the site is called the Critical Path. And it’s what you should test in the ‘Tasks’ portion of the standard usability test.
There can be many paths, but only one critical path on a site. For example, I go to Apple.com to watch movie trailers. Would Apple user-test my experience in getting to and watching movies? Perhaps, but I bet they measure how easy it is for me to ‘jump over’ and buy some music or a new computer. The purpose – the critical path – of the site is to sell (I’ll give you branding and customer service as additional paths), and all other features and functions of the site support that goal.
How well the site moves visitors along the path is the effectiveness of the site. We test for it by asking testers to assume they have the same goal as the site. Likewise, we test for satisfaction: did they complete the task, but were pissed because of something else? Did they expect one thing and get an unpleasant surprise? And we test for efficiency: did they complete the task, but it took 15 clicks and 20 minutes?
You can test all of this with a very simple user test. That’s the low-hanging fruit of user testing: site effectiveness, satisfaction, and efficiency.
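If you want to keep even a simple test honest, it helps to write down the same three measurements for every task. A minimal sketch of that bookkeeping, with illustrative task names and field layout of my own invention:

```python
# Sketch of per-task logging for the three low-hanging-fruit metrics:
# effectiveness (did they complete it?), satisfaction (a 1-5 post-task
# rating), and efficiency (clicks and seconds). The task names and
# field names here are hypothetical, not from any real test script.
from dataclasses import dataclass

@dataclass
class TaskResult:
    task: str
    completed: bool      # effectiveness
    satisfaction: int    # 1-5 rating, asked AFTER the task
    clicks: int          # efficiency: how many clicks it took
    seconds: float       # efficiency: how long it took

results = [
    TaskResult("find store hours", True, 4, 3, 25.0),
    TaskResult("place an order", True, 2, 15, 220.0),
]

# Summarize across tasks: completion rate and average satisfaction.
completion_rate = sum(r.completed for r in results) / len(results)
avg_satisfaction = sum(r.satisfaction for r in results) / len(results)
print(f"completion: {completion_rate:.0%}, satisfaction: {avg_satisfaction:.1f}/5")
```

Even this little table makes the second question above jump out: a task can be “completed” at 100% and still score a 2 on satisfaction because it took 15 clicks.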
In conclusion: don’t ask about impressions before your evaluation of the critical path. Do test, even if the conditions are not scientific. Let users use. Visitors visit. Don’t force them to have an opinion and then needle them about it – that’s a carryover from the designer’s perspective. Users aren’t lab rats. Your color palette isn’t of supreme importance. Listen closely and you can hear a user. They are probably saying, “Just give me the dang banana already”.
Is the user qualified to speak to design?
How do you get around the evaluator bias?
Can one site have multiple critical paths?