Tuesday, June 26, 2007

A List Apart: "Testability Costs Too Much"

Gian Sampson-Wild has penned a deliciously provocative article at A List Apart titled "Testability Costs Too Much." It is specifically about accessibility and the W3C, and asks the question: is it possible to have a good guideline that isn't testable? Sampson-Wild emphatically argues "yes." It's definitely worth a read.

But what got me thinking is how often this same question is brought up outside of the accessibility domain. Specifically, I've dealt with this in both setting usability objectives and when creating design guidelines.

It goes without saying that setting usability objectives for a product or a release is a good thing, right? I'm not convinced. Usability objectives can do as much harm as good. First, who decides whether the objectives are met? And how do they decide? Is it useful to have an objective that "The product will be easier to use than the previous release"? Probably not. But what if we operationally define the objective as "User satisfaction will increase from 3.2 to 4.0"? That seems testable. But if that is a release objective, then it needs to be tested before the release is shipped. That means a late-cycle design validation. That means creating meaningful, representative scenarios for users to perform (often in a lab setting). And do you test the new features, or do you use the same scenarios as the previous release to have an apples-to-apples comparison? If you just test the new stuff, is that valid? If you just test the previous scenarios, are you really testing the new release? What does it cost in time and resources to conduct the testing? If the objectives aren't met, do you delay the release? How much resource do you need to apply before you've got enough testing data to make a delay-the-release decision? Is it worth it?

Another option is to set objectives that don't require user testing. For example, you can use quantitative measures of user experience like step counts. An objective might be, "Completing this task will go from 27 steps to 10 steps." But this has a lot of problems as well. First, defining a "step" is harder than it sounds. Second, defining a "task" is harder than it sounds. Third, and I think most importantly, reducing the number of steps does NOT mean the task is more usable. You can improve usability by reducing steps, but it's not a guarantee. These quantitative measures of user experience are almost always secondary effects, which can lead to all kinds of problems. And in the end, these objectives still need to be "tested," usually by UXers, and although that is cheaper than user testing, there's still an ROI problem.

Compare this to just having a smart UXer on your team whom you trust, and asking her, "Do you think we've done what we intended to do in this release from a usability perspective?" Very high ROI.

Usability objective testing costs too much.

Now what about design guidelines? In this case, the "test" relates closely to Sampson-Wild's description: "reliably human testable—which means that eight out of ten human testers must agree on whether the site passes or fails each success criterion." Now I'm going to extend this slightly to make it harder... a design guideline is testable if eight out of ten developers agree on whether the guideline has been met. In many cases, guidelines are used because UXers don't have the resources to design everything in the product, and developers forced into design work need guidelines to help them make decisions. Of course, for UXers every guideline is "It depends." "It depends" is not a good guideline to give to developers. But damn it, design is HARD, and the Truth is that the right answer really does depend on a bunch of factors that don't lend themselves to pithy guidelines that a non-professional-designer can consume and understand at a shallow level.

But if your developers are doing design anyway, then holding out for that nuance is just tilting at windmills. Dumbing down your guidelines so that they are testable by developers is the right thing to do. In this case, I think the cost of making them testable is worth it. Coming up with guidelines that require expert interpretation when you know non-experts need to follow the guidelines is not useful; it's arrogance.


Joshua said...

Interesting discussion. (Actually I found the original article arcane and overly agenda-driven, but your discussion relating it to usability is interesting.)

Similarly, I've seen a temptation to reduce usability to a checklist, as if it's Six Sigma or Sarbanes-Oxley.

Instead of giving design guidelines, like in a style guide, I like sharing pattern libraries to help non-designers who are doing design. I think patterns are a helpful way for them to assemble interfaces from parts that work.
