Anthony C. Salvador / Gunilla A. Sundstrom
Abstract: Creating useful dialogues between human and automated decision makers (i.e., intelligent agents) is a critical design aspect of any effective decision support environment. However, surprisingly few studies have examined the factors influencing how a human decision maker interacts with different types of intelligent agents. The present work examined one such factor, namely the confidence expressed by the agent about its own conclusions. Subjects were trained in a network management fault diagnosis task. They were then asked to accept or reject a fault diagnosis generated by the automated decision making agent, which presented its diagnosis with an associated confidence indication expressed as a probability. To reach an informed decision, subjects were able to examine various types of information related to network performance. The results indicated that the higher the confidence level presented by the automated decision maker, the more likely the human decision maker was to accept the automatically generated diagnosis — even when that diagnosis was wrong. Moreover, subjects examined fewer pieces of information when the automated decision maker expressed a high level of confidence.
Keywords: Intelligent/expert systems, Empirical studies, Complex systems, Case studies, Decision making
Note: Originally published in Proceedings of the Human Factors and Ergonomics Society 38th Annual Meeting, 1994, pp. 220-224.
Republished: G. Perlman, G. K. Green & M. S. Wogalter (Eds) Human Factors Perspectives on Human-Computer Interaction: Selections from Proceedings of Human Factors and Ergonomics Society Annual Meetings, 1983-1994, Santa Monica, California: HFES, 1995, pp. 344-348.