Abstract: One of the main tenets of most company-sponsored quality programs is that the customer is always right. Designers frequently evaluate the goodness of their systems by simply asking users whether or not they like the interface. The fallacy of this approach is that users generally make judgements based on their "preferences" and tend to ignore the more important performance issues. System designers frequently use their own preferences to make decisions, and then make major inferences about how users will perform with their system.
Several past studies are reviewed to show that users can perform well and not like a system, or like a system and still not perform well. Two recent studies are reported showing a mismatch between designers' preferences for certain interface decisions and measured user performance with the resulting interfaces.
It is proposed that better user interfaces are possible if we clearly separate the concepts of performance and preference, recognize the limitations of each, and work to optimize one or the other (there is usually not sufficient time to optimize both). The only way to ensure that systems will elicit acceptable levels of performance is to conduct performance-oriented usability tests.
Keywords: Design; Empirical studies; Evaluation, subjective; Evaluation
Originally published: Proceedings of the Human Factors and Ergonomics Society 37th Annual Meeting, 1993, pp. 282-286
Republished: G. Perlman, G. K. Green, & M. S. Wogalter (Eds.) Human Factors Perspectives on Human-Computer Interaction: Selections from Proceedings of Human Factors and Ergonomics Society Annual Meetings, 1983-1994, Santa Monica, CA: HFES, 1995, pp. 316-320.