HF Perspectives on HCI

Preface

Document URL: http://garyperlman.com/hfeshci/preface.html
This book contains a collection of papers on Human-Computer Interaction (HCI) from the Proceedings of the Annual Meetings of the Human Factors and Ergonomics Society from 1983 to 1994. The collection includes some of the best, most useful articles selected from the papers most relevant to HCI. In this preface, we describe the process used to create the collection.

In the past dozen years, computer technology has advanced at a tremendous rate. Never has there been a tool (or, if you will, an appliance) so complex and flexible. In their infancy, computers were frightfully difficult to use. Today computers are easier to learn and use, and there is no doubt that this trend toward increased usability will continue and that computers will become even more closely integrated into people's daily lives. Part of the increase in usability, despite so many technological advances, is attributable to research on people's interaction with computers.

People from many disciplines have contributed to this growing body of technical literature on human-computer interaction (HCI). These scientists and practitioners have included computer scientists, psychologists, technical writers, designers, sociologists, and engineers. However, a substantial portion of the work has been carried out by researchers identified with the field of human factors. In the forefront of this research is the Computer Systems Technical Group (CSTG) of the Human Factors and Ergonomics Society (HFES). HFES is the principal professional group in the U.S. concerned with issues of person-machine interaction; helping to make technology easier to use is one of its foremost goals. The CSTG is the most active technical group in HFES, usually having the most sessions at the Annual Meeting. The membership includes human factors specialists in academia, government, and industry. The research deals principally with the interface between people and computer hardware and software, but part of it has concerned closely-allied areas such as workstation design and lighting, among others. A result of this emphasis within the discipline of human factors is that many very good research articles on human-computer interaction have appeared in the Proceedings of the Annual Meetings of the Human Factors (and Ergonomics) Society.

The fact that the Proceedings have served as a primary outlet for publishing research and technical literature on this topic has resulted in a problem: People outside the Society do not have good access to these articles. Because the Proceedings are not widely distributed, researchers and other potential consumers of this work either have difficulty obtaining the publications or are simply unaware of them. In part, this is because many of the on-line services used to access and retrieve references on various topics have not included the Proceedings in their databases. While it is possible to search for HCI publications in the HCI Bibliography (located at www.hcibib.org), that database contains abstracts, not full articles. Moreover, many libraries do not regularly receive the Proceedings as part of their collections, and due to cutbacks in many research libraries, the Proceedings might be even less available to potential users than in the past. There is a need to better disseminate this work.

As a consequence of this state of affairs and in an effort to make some of the articles that have appeared in the Proceedings more available, we have assembled this collection. It consists of papers dealing with HCI issues, reprinted in their entirety, selected from those published in the Proceedings over the last 12 years.

To include every paper in the collection that has any relationship to HCI would be unwieldy. Because of a desire to include papers that are still relevant and because of practical limits on time and budget, decisions had to be made about what to include. First, the editors filtered out papers that were not highly relevant to current HCI. Then, the highly-relevant papers were reviewed by over 50 independent CSTG members. Finally, the editors made inclusion decisions based on the reviews.

On the whole, we think the set of papers included here represents some of the finest work that has been done in the field. We believe that many professionals and researchers in areas related to HCI should have this book on their reference shelves. Even HFES professionals who have copies of the Proceedings will find that this book makes the articles more accessible than having them scattered across many volumes (and many HFES members may not have all the Proceedings volumes available). In addition, these articles can serve as a ready set of readings for undergraduate and graduate HCI classes. Lastly, and perhaps where this book will find its greatest utility, this collection will introduce newcomers to HCI research to the issues in the field and the methods that can be employed to investigate them. We hope you find this collection both useful and informative.

HOW THE PAPERS WERE SELECTED

We considered papers from all sessions, not just those of the CSTG, because so many technical groups (TGs) address HCI: relevant papers might appear in sessions sponsored by other TGs such as Aging, Communications, Industrial Ergonomics, Systems Development, Testing and Evaluation, Training, and Visual Performance. The plan was for independent reviewers to rate the relevant papers, after which the editors would decide which papers to include in the collection.

The editorial process began by deciding which papers would be reviewed for the collection. After some papers (e.g., abstract-only articles) were removed automatically from consideration, the editors judged the relevance to HCI of 3597 papers published in the Annual Meeting Proceedings from 1983 to 1994. The results of those ratings are shown in Figure 1, which gives the number of papers receiving each pattern of Y (relevant) and N (not relevant) judgments from the three editors (A, B, and C). The editors agreed unanimously on 2925 papers deemed not relevant to HCI, and on 150 deemed relevant and worthy of consideration. We called these the 150 YYY papers. There were 23+20+164=207 papers receiving the support of two editors, and 27+88+200=315 receiving the support of only one. Our schedule and the number of available reviewers led us to consider only the 150 YYY papers for this collection.

	A	B	C	Papers
	N	N	N	2925
	N	N	Y	 200
	N	Y	N	  88
	N	Y	Y	 164
	Y	N	N	  27
	Y	N	Y	  20
	Y	Y	N	  23
	Y	Y	Y	 150

Figure 1. Editors' 3597 HCI-Relevance Judgments
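
In effect, Figure 1 tallies the pattern of Y (relevant) and N (not relevant) judgments each paper received from the three editors. A minimal sketch of that bookkeeping, in Python with hypothetical data (the editors' actual tooling is not described here), might look like this:

	# Minimal sketch (hypothetical; not the editors' actual tooling):
	# tally each paper's pattern of Y/N relevance judgments from the
	# three editors into the counts shown in Figure 1.
	from collections import Counter

	# Hypothetical input: one (A, B, C) judgment triple per paper,
	# where 'Y' means that editor rated the paper relevant to HCI.
	judgments = [
	    ('Y', 'Y', 'Y'),
	    ('N', 'N', 'N'),
	    ('Y', 'N', 'Y'),
	    # ... one triple for each of the 3597 papers
	]

	pattern_counts = Counter(''.join(triple) for triple in judgments)
	for pattern in sorted(pattern_counts):
	    print(pattern, pattern_counts[pattern])

	# The 150 "YYY" papers are those all three editors rated relevant:
	yyy = pattern_counts['YYY']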

While the editorial judgments were taking place, reviewers were being recruited via email. To complete the editorial process quickly, we decided to assign papers for review and gather all reviews via electronic mail. Over 50 CSTG members answered our call for volunteer reviewers. Papers were assigned to reviewers randomly, except that the known affiliations of reviewers and authors were used to avoid possible conflicts of interest in the review process. Fourteen papers were reassigned because of the automatic detection of possible conflicts of interest; only two more conflicts were discovered after the papers were assigned. We mailed printed copies of all the papers and emailed the instructions.
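
A minimal sketch of such a conflict-avoiding random assignment, with assumed data shapes and a hypothetical assign_reviewers helper (the actual procedure is described only in prose above), might be:

	# Sketch of conflict-avoiding random assignment (assumed logic and
	# data shapes; the real procedure is documented only in prose).
	import random

	def assign_reviewers(papers, reviewers, per_paper=3):
	    """Randomly pick reviewers for each paper, skipping any reviewer
	    whose affiliation matches an author affiliation (a possible
	    conflict of interest)."""
	    assignments = {}
	    for paper in papers:
	        eligible = [r for r in reviewers
	                    if r['affiliation'] not in paper['affiliations']]
	        assignments[paper['title']] = random.sample(eligible, per_paper)
	    return assignments

	# Hypothetical example data:
	papers = [{'title': 'Example Paper', 'affiliations': {'IBM'}}]
	reviewers = [{'name': name, 'affiliation': org}
	             for name, org in [('R1', 'Bellcore'), ('R2', 'MITRE'),
	                               ('R3', 'GTE'), ('R4', 'IBM')]]
	print(assign_reviewers(papers, reviewers))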

The reviewers were sent electronic forms with 7-point Likert rating scales on which to indicate, against a set of review criteria, whether they thought a paper should be included in the collection.

Reviewers provided additional information for the editors: comments pertaining to the review criteria, which were used to help make inclusion decisions, and keywords chosen from a provided list to assist in indexing the papers. The provided keywords were based on the names of HCI-relevant Annual Meeting sessions and covered process lifecycle, methodology, input/output technology, and area of application. Reviewers were free to add their own keywords.

The reviewer ratings and comments were merged so that the editors could compare the three ratings/comments for each paper. Papers were ranked automatically in two ways: (1) by the number of positive ratings (above the neutral mid-point of 4), the number of negative ratings (below 4), and the number of neutral ratings (exactly 4); and (2) by the mean of the ratings. These rankings, however, were only used to get a sense of the overall rating of a paper, particularly with respect to the neutral mid-point of the rating scale. There were 54 papers whose ratings were all positive or neutral, of which 28 received all positive ratings; there were 28 papers with no positive ratings, of which 10 had all negative ratings. There were 54 papers with a mean negative rating, and 11 papers with a mean neutral rating. Many of these papers had low ratings because they were dated, not because the work was poorly done at the time.
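
A sketch of those two automatic summaries, under the assumption of three 7-point ratings per paper with a neutral mid-point of 4 (the summarize helper here is ours, not the editors'):

	# Sketch of the two automatic summaries described above (assumed
	# implementation): counts of positive/negative/neutral ratings,
	# plus the mean, for one paper's three 7-point ratings.
	def summarize(ratings, midpoint=4):
	    return {
	        'positive': sum(r > midpoint for r in ratings),
	        'negative': sum(r < midpoint for r in ratings),
	        'neutral':  sum(r == midpoint for r in ratings),
	        'mean':     sum(ratings) / len(ratings),
	    }

	# Hypothetical ratings from a paper's three reviewers:
	print(summarize([6, 4, 5]))
	# -> {'positive': 2, 'negative': 0, 'neutral': 1, 'mean': 5.0}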

	A	B	C	Papers
	N	N	N	54
	N	N	Y	 8
	N	Y	N	 3
	N	Y	Y	12
	Y	N	N	 5
	Y	N	Y	13
	Y	Y	N	 3
	Y	Y	Y	52

Figure 2. Editors' Initial Recommendations for Inclusion for the 150 Reviewed Papers

Each editor went through each set of ratings and comments and made independent recommendations for inclusion/exclusion. These were merged to create another list of Y/N (include/exclude) triples, summarized in Figure 2. After a single round of adjustments, in which virtually all papers with two votes for inclusion were added, the number of accepted papers rose to 79.
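
The effective rule approximated a two-of-three majority. A sketch (with the caveat that "virtually all", not all, two-vote papers were included, so the real decision was not purely mechanical):

	# Sketch of the approximate inclusion rule: accept a paper when at
	# least two of the three editors voted Y. (An approximation: the
	# text notes that "virtually all" two-vote papers were included.)
	def include(votes):
	    return votes.count('Y') >= 2

	print(include(['Y', 'N', 'Y']))  # True
	print(include(['N', 'N', 'Y']))  # False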

Table 1 shows that most of the included papers fell into the design and evaluation categories. (Reviewers could assign a paper to more than one lifecycle stage, which is why the percentages sum to more than 100.) Almost a quarter (18/79) of the papers were not classified as relevant to any lifecycle stage; however, the results from many of the uncategorized papers should be applicable to analysis and design.

Table 1. Reviewer Assignments of Included Papers to Process Lifecycle Stages

	Process Lifecycle Papers Percent
	Analysis          12     15
	Design            40     51
	Prototyping       12     15
	Implementation     5      6
	Evaluation        34     43
	None of the Above 18     23

Table 2 shows that most of the papers (62/79, or 78%) used an empirical methodology of some sort. This large percentage reflects both the nature of HCI work presented at the HFES annual meetings and the bias of the editors. Of the initial 150 papers selected for review, 102 (68%) were judged by the reviewers to use empirical methods. About a third of the papers used models/theories and about a third used a developmental method (e.g., a demonstration system); as in Table 1, a paper could be assigned more than one methodology.

Table 2. Reviewer Assignments of Included Papers to Methodology Used

	Methodology       Papers Percent
	Empirical         62     78
	Models/Theories   25     32
	Development       24     30
	Case Studies      13     16
	Survey             7      9
	Other              8     10

ACKNOWLEDGMENTS

Many people helped make the collection possible. At the HFES office, Lois Smith (Publications Manager) and Lynn Strother (Executive Director) both contributed to the proposal and production. We thank the Publications Committee and Executive Council for their support. At Ohio State, Dave Woods and Phil Smith served as backup reviewers in case there were many conflicts of interest or non-responding reviewers; thankfully, they were not needed. We owe the most thanks to the following 51 of 52 reviewers who returned their reviews on time: Edie M. Adams (Microsoft Corporation), Arlene F. Aucella (AFA Design Consultants), Barry J. Bassin (MECA Software, Inc.), Robert P. Bateman (Automation Research Systems, Ltd.), Randolph G. Bias (IBM), Harry E. Blanchard (AT&T Bell Laboratories), Susan J. Boyce (AT&T Bell Laboratories), John F. Brock (InterScience America, Inc.), Elizabeth A. Buie (Computer Sciences Corporation), Brenda J. Burkhart (Bellcore), James C. Byers (Idaho National Engineering Laboratory), Martha E. Crosby (University of Hawaii), James P. Cunningham (AT&T Bell Laboratories), Tom Dayton (Bellcore), Jerry R. Duncan (Deere & Company), Deb Galdes (Silicon Graphics, Inc.), Roger B. Garberg (AT&T Bell Laboratories), Douglas J. Gillan (New Mexico State University), Nancy C. Goodwin (MITRE Corporation), Paul Green (University of Michigan), Rich Halstead-Nussloch (Southern Tech), Robert A. Henning (University of Connecticut), Thomas T. Hewett (Drexel University), LaVerne L. Hoag (AT&T Bell Laboratories), Demetrios Karis (GTE Laboratories Incorporated), Thomas J. Laska (Loral Defense Systems), Lila F. Laux (US West Technologies), Steven Hart Lewis (AT&T Bell Laboratories), Thomas B. Malone (Carlow International), Monica A. Marics (U S WEST Advanced Technologies), Marshall R. McClintock (Microsoft Corporation), Peter R. Nolan (Vertical Research, Inc.), Richard W. Obermayer (Pacific Science & Engineering Group), Ronald Douglas Perkins (AT&T Interchange Network), Renate J. Roske-Hofstrand (NYMA Inc.), James B. Sampson (U.S. Army Natick RD&E Center), Mark W. Scerbo (Old Dominion University), Chris A. Shenefiel (Motorola), John B. Smelcer (American University), Sidney L. Smith (Smithwin Associates), Thomas Z. Strybel (California State University Long Beach), Spencer C. Thomason (Micro Analysis and Design, Inc.), Thomas S. Tullis (Fidelity Investments), Kevin C. Uliano (Cincinnati Bell Information Systems, Inc.), Kath Uyeda (ALLTEL Information Services), Robert A. Virzi (GTE Laboratories Incorporated), John T. Ward (Kohl Group), Robert Williges (Virginia Tech), Chauncey E. Wilson (Human Factors International, Inc.), Susan J. Wolfe (The Hiser Consulting Group), Elizabeth Zoltan (San Jose State University).