A FAILURE OF IMAGINATION: HOW AND WHY PEOPLE RESPOND DIFFERENTLY TO HUMAN AND COMPUTER TEAM-MATES

TIMOTHY ROBERT MERRITT
B.A. (Liberal Arts), Xavier University
M.A. (Digital Culture), University of Jyväskylä

A THESIS SUBMITTED FOR THE DEGREE OF DOCTOR OF PHILOSOPHY
NUS Graduate School for Integrative Sciences and Engineering
NATIONAL UNIVERSITY OF SINGAPORE
2012

Acknowledgements

I express my sincere thanks to all of the people who have helped me throughout the duration of this research, including my family and friends, colleagues, and anyone who listened to me talk about my research. In particular, I would like to express my gratitude to the following people for all that they have given to me.

My supervisor, Kevin McGee, has been tremendously patient and insightful throughout this journey and always knows how to provide the right amount of guidance when needed. I also thank the thesis advisory committee members Sun Sun Lim and Connor Graham, who spent considerable time guiding me and offering important viewpoints to strengthen this work. The members of the Partner Technologies Research Group, including Alex, Aswin, Chris, Joshua, Maryam, and Teong Leong, provided countless suggestions in our weekly lab meetings and provided moral support – you are the best! I also thank my friends outside of the lab who helped me with stimulating conversation or sharing coffee.

Most importantly, I thank my family for being my unwavering supporters who have helped me by listening to my struggles or just spending time together. I couldn't have done it without you.

This work was funded in part under a National University of Singapore Graduate School for Integrative Sciences and Engineering (NGS) scholarship.
Additional funding was provided by the National University of Singapore AcRF grant "Understanding Interactivity" R-124-000-024-112 and the Singapore-MIT GAMBIT Game Lab research grant "Designing Adaptive Team-mates for Games."

Contents

1 Introduction
  1.1 Social responses to technology
  1.2 Structure of this document

2 Related Work
  2.1 Conversational Interactions
    2.1.1 Differences in Perception
    2.1.2 Differences in Behavior
  2.2 Competitive Interactions
  2.3 Cooperative Interactions
  2.4 Summary

3 Research Problem
  3.1 Context of cooperative games
  3.2 Critique of previous work
  3.3 Originality of thesis contribution
    3.3.1 Empirical contribution
    3.3.2 Theoretical contribution
  3.4 Summary

4 Method
  4.1 Mapping Our Studies to Explore Cooperation
  4.2 Overview of user studies
  4.3 Game: Capture the Gunner
    4.3.1 Drawing Fire
    4.3.2 Gunner Behavior Algorithm
  4.4 Game: Defend the Pass
  4.5 Toward an explanatory framework
    4.5.1 Framework Development
    4.5.2 Framework Validation

5 Enjoyment & Preference
  5.1 Motivation
  5.2 Study Details
    5.2.1 Participants & Materials
    5.2.2 Study Session Protocol
    5.2.3 Measures
  5.3 Results
    5.3.1 Preliminary Analysis
    5.3.2 Perceived team-mate identity & enjoyment
    5.3.3 Perceived team-mate identity & preference
    5.3.4 Effects of identity on game events
  5.4 Discussion
    5.4.1 Possible limitations

6 Credit/Blame & Skill Assessment
  6.1 Motivation
  6.2 Study Details
    6.2.1 Participants & Materials
    6.2.2 Study Session Protocol
  6.3 Results
    6.3.1 Assigning blame unfairly
    6.3.2 Inaccurate skill assessment
  6.4 Implications
    6.4.1 Possible limitations

7 Cooperation & Risk-taking
  7.1 Motivation
  7.2 Study Details
    7.2.1 Participants & Materials
    7.2.2 Study Session Protocol
    7.2.3 Measures
  7.3 Results
    7.3.1 Preliminary Analysis
    7.3.2 Effects of team-mate identity on perception of risk
    7.3.3 Effects of team-mate identity on perception of cooperation
    7.3.4 Logged game events
  7.4 Discussion
    7.4.1 Possible limitations

8 Protecting Team-mates
  8.1 Motivation
  8.2 Study Details
    8.2.1 Participants & Materials
    8.2.2 Study Session Protocol
    8.2.3 Measures
  8.3 Results
    8.3.1 Preliminary Analysis
    8.3.2 Logged Data
    8.3.3 Self-evaluation of protective behavior
    8.3.4 Stereotypes
    8.3.5 Personal pressures
    8.3.6 Observed behaviors
  8.4 Discussion
    8.4.1 Possible limitations

9 Sacrificing Team-mates
  9.1 Motivation
  9.2 Study details
    9.2.1 Participants & Materials
    9.2.2 Study Session Protocol
    9.2.3 Measures
  9.3 Results
    9.3.1 Preliminary analysis
    9.3.2 Logged Data
    9.3.3 Self reported data
  9.4 Discussion
    9.4.1 Limitations

10 Explanatory Framework
  10.1 Requirements for an explanatory framework
  10.2 Cooperative Attribution Framework: Main Components
    10.2.1 Schemas and Person Perception
  10.3 Cooperative Attribution Framework: Self-centric concerns
    10.3.1 Social Motivations
    10.3.2 Personal Consequences
  10.4 Cooperative Attribution Framework: Inferring mental states
    10.4.1 Evidence-based: Behaviors in context
    10.4.2 Evidence-based: Emotional displays
    10.4.3 Extra-target: Projecting
    10.4.4 Extra-target: Stereotypes
  10.5 Cooperative Attribution Framework: Process flow
  10.6 Summary

11 Discussion
  11.1 Phases of User Studies
  11.2 Applying the Framework: Enjoyment/Preference
    11.2.1 Overview of differences
    11.2.2 Framework Process Flow: Enjoyment/Preference
  11.3 Applying the Framework: Credit/Blame/Skill Assessment
    11.3.1 Overview of differences
    11.3.2 Framework Process Flow: Credit/Blame/Skill
  11.4 Applying the Framework: Cooperation/Risk-taking
    11.4.1 Overview of differences
    11.4.2 Framework Process Flow: Cooperation/Risk-taking
  11.5 Applying the Framework: Protecting Team-mates
    11.5.1 Overview of differences
    11.5.2 Framework Process Flow: Protection
  11.6 Applying the Framework: Sacrificing Team-mates
    11.6.1 Overview of differences
    11.6.2 Framework Process Flow: Sacrifice
  11.7 Justifying the Framework
  11.8 Applying the Framework: Commitment to Cooperation
    11.8.1 Overview of differences
    11.8.2 Framework Process Flow: Commitment to Cooperate
  11.9 Applying the Framework: Arousal
    11.9.1 Overview of differences
    11.9.2 Framework Process Flow: Arousal
  11.10 Limitations of the CAF
  11.11 Summary

12 Conclusion
  12.1 Contribution of this work
  12.2 Limitations of this work
    12.2.1 Limitations: Game context
    12.2.2 Limitations: Research Method
  12.3 Future Research
Summary

Much attention in the development of artificial team-mates has focused on replicating human qualities and performance. However, do human players, all things being equal, respond in the same way to human and artificial team-mates – and if there are differences, what accounts for them? Related research has examined differences using direct comparisons of responses to human and AI partners in conversational interactions, competitive games, and cooperative games. However, the work to date examining the effects of team-mate identity has not been extensive, and previous attempts to explain the findings have not sufficiently examined player beliefs about their team-mate or the rationale and motivation for behavior. This thesis reports on research to understand differences in player experience, perception, and behavior when human players play with either human or AI team-mates in real-time cooperative games. A number of experiments were conducted in which the subjects played a computer game involving an unseen team-mate whom they were told was either a human or a computer program. Data gathered included performance logs, questionnaires, and in-depth interviews. Participants consistently rated their enjoyment higher with the "presumed human" (PH) team-mate and rated it more favorably: higher in cooperation and skill, and exhibiting more noticeable risk-taking. PH team-mates were given more credit for successes and less blame for failures compared to their AI counterparts. In terms of behavior, players protected the PH team-mate more in a game involving few decisions, yet protected AI team-mates more in a complex cooperative game involving sustained effort and constant decision-making. In order to explain why the identity of the team-mate results in different emotional, evaluative, and behavioral responses, an original Cooperative Attribution Framework was developed.
The framework proposes that the player considers the intentions and attributes of their team-mate, and also considers their own pressures and motivations in the larger social context of the interaction. Using the Cooperative Attribution Framework, this thesis argues that the differences observed are broadly the result of being unable to imagine that an AI team-mate could have certain attributes (e.g., emotional dispositions). One of the more surprising aspects of this insight is that the "inability to imagine" impacts decisions and judgments that seem quite unrelated (e.g., credit assignment for objectively equivalent events). This thesis contributes to the literature on artificial team-mates by revealing some of the differences in response to human and computer team-mates in cooperative games. In order to explain these differences, a framework is developed and applied to our studies, and justified through its application to the results of related research.

List of Figures

1.1 Threshold of social influence model by Blascovich [18]
4.1 Capture the Gunner game elements: a) human-controlled avatar b) computer-controlled agent c) gunner d) gunner's field of view (FOV)
4.2 Avatar blinking yellow to signal "draw fire"
4.3 Screenshot of the Defend the Pass (DTP) game screen
4.4 Positions that team-mates can be placed (Pos 1 & 2)
4.5 Screenshot of the score shown at the end of the Defend the Pass (DTP) game
9.1 Summary table indicating, on the Y axis, the number of participants placing the team-mate in the protected position for each game of the 5-game rounds
10.1 Communication-centric models focus on maintenance of the communication channel, relationship, and effectiveness of sharing messages.
10.2 Communication model in cooperative games involves more focus on the game goals in combination with the communication between team-mates
10.3 Basic components of the Cooperative Attribution Framework
10.4 Basic components of the Cooperative Attribution Framework
10.5 Mindreading strategies proposed by Ames 2004
10.6 Heider's attribution theory
10.7 The typical process flow applying the Cooperative Attribution Framework to the cooperative game context
11.1 Phases of the typical user studies. Chronological time runs left to right for the phases and top to bottom within the phases.
11.2 Process flow of CAF and the enjoyment/preference study results indicating stereotypes, social motivations, and personal consequences as highly dominant; behaviors in context, emotional displays, and perceiver's own mental states have a moderate influence.
11.3 Process flow of CAF and the credit/blame/skill study results indicating stereotypes, personal consequences, behaviors in context, and perceiver's own mental state as highly dominant.
11.4 Process flow of CAF and the cooperation/risk-taking study results indicating stereotypes, personal consequences, and behaviors in context as highly dominant; emotional displays and perceiver's own mental states have a moderate influence.
11.5 Process flow of CAF and the protection study results indicating stereotypes, social motivations, and personal consequences as highly dominant; emotional displays have a moderate influence.
11.6 Process flow of CAF and the sacrifice study results indicating social motivations and personal consequences as highly dominant; perceiver's own mental states have a moderate influence.
11.7 Process flow of CAF and the Prisoner's Dilemma study results indicating stereotypes, social motivations, emotional displays and personal consequences as highly dominant.
11.8 Process flow of CAF and the arousal study results indicating stereotypes, social motivations, and personal consequences as highly dominant; behaviors in context, emotional displays, and perceiver's own mental states have a moderate influence.

With the human team-mate, the player is engaged in reading emotions and intentions, whereas with the computer team-mate, there is no authentic social experience and no shared emotionally charged moments. Participants evaluate the computer team-mates simply for their performance in the game.

The following detailed analysis examines the six categories of the framework to identify which of the categories have a dominant influence on the differences noted in the study results. We propose that a comparison can be made for each of the six categories and how they relate to human and computer team-mates, as shown in Figure 10.4. We provide the side-by-side comparison in Table 11.7.

Category | Human team-mate | Computer team-mate | Dominance
Social Motivations | More arousal: feels like real social interaction | Less arousal: not social, playing alone | High
Personal Consequences | More arousal: pressure to cooperate | Less arousal: no pressure to cooperate | High
Behaviors in Context (Attribution) | More arousal: behavior carries intention | Less arousal: behavior follows algorithm | Med
Emotional Displays | More arousal: recognizing/acknowledging team-mate emotions | Less arousal: no team-mate emotions | Med
Personal Projections | More arousal: easy to imagine how team-mate feels | Less arousal: can't imagine that team-mate can feel at all | Med
Stereotypes | More arousal: social games should involve people | Less arousal: playing alone is not social | High

Table 11.7: Arousal: comparison of Human and AI team-mate. Dominance (High, Med, Low, N/A) refers to the proposed influence each category has on physiological arousal.

11.9.2 Framework Process Flow: Arousal

The framework is now applied to the arousal study results presented in [60]. We present a diagram to illustrate how the experience unfolds when interacting with either a human team-mate or a computer team-mate. We discuss the step-by-step process for the human team-mate, followed by the computer team-mate, to serve as a comparison.

Figure 11.8: Process flow of CAF and the arousal study results indicating stereotypes, social motivations, and personal consequences as highly dominant; behaviors in context, emotional displays, and perceiver's own mental states have a moderate influence.

HUMAN TEAM-MATE: Arousal

To summarize the higher levels of physiological arousal with the presumed human team-mate: the schema for a cooperative game is that of an activity enjoyed with another human. The player enters the game with this expectation and also expects that the team-mate adapts well to the situation at hand, is more understanding, and that interactions with them will be a valid form of social attention. There is also an obligation to recognize the efforts of the human and to appreciate their intention to cooperate. There is an overall higher level of attentiveness to the interaction with the human team-mate.

1. Identity: player is told their team-mate is human-controlled.

2. Stereotypes: stereotypes for typical human team-mates are brought to mind and include an image of a team-mate who is understanding and adapts to the situation.
Among the stereotypes, the player believes that cooperative games are fun because the team-mate is human.

(a) Social Motivations: player imagines that the interaction carries real social benefits and social attention.
(b) Personal Consequences: player feels social pressure to cooperate in the game.

3. Schemas, Scripts: the player expects that the human team-mate will give as much effort as possible, will try to adapt and coordinate, is motivated by the goal in the game, but is also motivated by the social interaction. In assessing all the behaviors of the team-mate, the player looks for evidence to confirm and support these expectations.

4. Perceptions of game events and team-mate behavior:
   - Behaviors in Context: more perceived intention and effort, more confidence in the partner's actions.
   - Emotional Displays: player easily imagines and attends to the emotions of the team-mate.
   - Personal Projections: player feels comfortable projecting their own emotions and making assumptions about how the team-mate is likely reacting emotionally to events in the game.

5. Schemas Evaluated: the player considers how well the schema fits their experience.

6. If a difference is not considered a Special Case for the schema, it is treated as an outlier or chance difference, and the player continues the process by reconfirming the stereotype, continuing at step (2). If the difference is considered a special case, adjustments are made to the schema for the team-mate in step (7).

7. Adjustments to Stereotypes and Schema are made and the process starts again at step (2).

COMPUTER TEAM-MATE: Arousal

To summarize the lower levels of physiological arousal with the computer team-mate: the schema for a cooperative game is that of an activity that is enjoyed less with a computer. The player enters the game with this expectation and also expects that the team-mate is rigid, may not adapt well, and is less understanding. Interactions with them are not expected to involve real social attention. There is no obligation to recognize the efforts of the computer, and it is difficult to imagine a computer having the intention to cooperate. There is an overall lower level of attentiveness to the interaction with the computer team-mate.

1. Identity: player is told their team-mate is computer-controlled.

2. Stereotypes: stereotypes for typical computer team-mates are brought to mind and include an image of a team-mate who is rigid and less able to adapt. Among the stereotypes, the player believes that cooperative games are less fun with a computer team-mate.

(a) Social Motivations: the interaction does not carry any social benefits or social attention.
(b) Personal Consequences: player feels no pressure to cooperate.

3. Schemas, Scripts: the player expects that the computer team-mate will follow the algorithm it is programmed to follow and therefore may not be able to adapt. The computer does not have intentions of its own.

4. Perceptions of game events and team-mate behavior:
   - Behaviors in Context: the computer simply follows its algorithm; difficulty in imagining a computer having intention and effort.
   - Emotional Displays: player cannot imagine a computer having emotions.
   - Personal Projections: player does not imagine the computer has feelings and therefore does not project.

5. Schemas Evaluated: the player considers how well the schema fits their experience.

6. If a difference is not considered a Special Case for the schema, it is treated as an outlier or chance difference, and the player continues the process by reconfirming the stereotype, continuing at step (2). If the difference is considered a special case, adjustments are made to the schema for the team-mate in step (7).

7. Adjustments to Stereotypes and Schema are made and the process starts again at step (2).
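The seven-step process flow is essentially a loop over observed game events: a schema is brought to mind from the framed identity, each event is checked against it, and special-case differences adjust the schema while other differences are dismissed as outliers. As a purely illustrative sketch (not part of the thesis), the loop can be rendered in Python; the function names, schema fields, and event encoding are all assumptions made for this example:

```python
# Illustrative sketch of the CAF process flow (steps 1-7 above).
# Schema fields and event encoding are invented for this example.

def default_stereotype(identity):
    """Step 2: bring identity-based stereotypes to mind."""
    if identity == "human":
        return {"adaptive": True, "has_emotions": True, "social": True}
    return {"adaptive": False, "has_emotions": False, "social": False}

def caf_process(identity, observed_events, is_special_case):
    """Run the schema-evaluation loop over a stream of game events.

    Each event is a dict of observed team-mate attributes (step 4).
    """
    schema = default_stereotype(identity)              # steps 1-3
    adjustments = 0
    for event in observed_events:                      # step 4: perceive behavior
        fits = all(schema.get(k) == v for k, v in event.items())  # step 5
        if fits:
            continue                                   # schema confirmed
        if is_special_case(event):                     # step 6
            schema.update(event)                       # step 7: adjust schema
            adjustments += 1
        # otherwise: treated as an outlier; the stereotype is reconfirmed
    return schema, adjustments
```

Under this sketch, a player framed with a computer team-mate who repeatedly observes adaptive behavior would only revise the "rigid" stereotype if the observations are judged a special case rather than chance.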
11.10 Limitations of the CAF

This section discusses some of the limitations of the framework, including concerns about the narrow focus on dyadic relationships, which makes it unclear how the framework applies to larger groups; concerns that richer modes of communication are not accounted for in the framework; and concerns that cultural issues are not specifically addressed in the framework. We now discuss these issues in more detail.

The studies conducted in this thesis, and the studies from the related work that were used to justify the framework, involved dyadic relationships. This raises the concern that it is unclear how the framework would scale to larger team relationships. Although the focus of the framework was on the individual acting in a dyad, focusing on the simplest team configuration avoids the problems and complexities of studying coalitions, which would be an interesting future expansion of this work.

The role of communication in team-mate relationships is very important in many situations. The studies conducted in this thesis did not involve communication aside from the avatar blinking yellow to signal the "draw fire" action. While minimal communication channels, such as those available in the games used in this research, have been shown to provide rich opportunities to share meaning [50], the framework does not address richer forms of communication. It was a deliberate choice to keep the framework general, yet expandable. While the development of the framework was focused on cooperative games and is useful for the game context, it could also be useful for explaining differences in other contexts such as competitive games or conversational interactions. Future refinement and progressive elaboration of its components remain important future work.

Another aspect of team-mate interaction that has not been represented in detail is the set of cultural issues that influence the response to team-mates.
The participants in the studies presented in this thesis were from Singapore, mostly ethnic Chinese, and of a narrow age range. It is possible that players from different cultures would respond differently, yet the CAF does not specifically account for cultural differences such as power distance and the degree of collectivism/individualism, among others. It is likely that the elements defined as "personal concerns" and "social motivations" are appropriate for exploring cultural differences. This suggests important future work in the development of the framework, yet it is beyond the scope of this thesis.

11.11 Summary

In this chapter we applied the Cooperative Attribution Framework to explain the results of our studies presented in Chapters 5-9. We justified the framework by applying it to the results of two studies discussed previously in the Related Work (Chapter 2). The main insight provided by this analysis is that players adopt different schemas for human and computer team-mates, which result in attending more to certain aspects of the experience. The differences are largely due to the players being unable to imagine that an AI team-mate could have certain attributes (e.g., emotional dispositions). This leads to different interpretations of the same events, and it also affects the strategies players use to evaluate and make sense of their team-mates. This chapter concluded with a discussion of the limitations of the framework. In the next chapter, we provide the overall conclusions of this thesis.

Chapter 12: Conclusion

This thesis set out to reveal some of the differences in response to human and computer team-mates, develop an explanatory framework to reveal the causes, motivation, and rationale behind the differences, and then test the framework by using it to analyze related research.
Various studies were conducted in which players cooperated with team-mates they believed to be controlled by either a human or a computer, and then answered questions about the experience. Logged data from player behaviors and in-game events supplemented findings from the self-reported data, revealing differences in perception, in behaviors toward team-mates, and even in the players' perceptions of their own actions. Through the development of the Cooperative Attribution Framework, it becomes clearer why these differences exist. It seems that players fail to imagine that artificial agents have certain attributes (adaptability, performance, feelings, intention, etc.), which affects the expectations players have for the cooperative experience with them. With differences in expectations, otherwise equivalent experiences are perceived very differently. In this chapter, we describe the contribution of this work in more detail and then discuss limitations and future research exploring the differences in response to human and computer team-mates.

12.1 Contribution of this work

This thesis is situated within the field of HCI research focused on cooperation with artificial partners. There are two main contributions: an empirical contribution, revealing differences in responses to human and computer team-mates, including differences in perception, judgment, and behavior; and a theoretical contribution, the development of an explanatory framework that reveals some of the causes of these differences. This thesis provides an additional contribution by suggesting implications for the design of artificial team-mates.
We conducted game-based user studies that examined direct comparisons between responses to humans and computers, and we claim that, in various situations, players spend considerable effort trying to understand the capabilities of the team-mate and considering the social context, which inevitably results in differences in perception, behavior, and evaluation. Significant differences were found in studies that examined the four main components of cooperation, which suggests that cooperation with human and computer team-mates is a substantially different experience, even in otherwise equivalent interactions.

To explain the differences in response to human and computer team-mates, the Cooperative Attribution Framework was developed. The framework builds on relevant theories from social psychology and cognitive science and focuses on the strategies players use to infer the mental state of their team-mates, and on the self-centric concerns of personal pressures and social motivations when cooperating with team-mates. The framework provides explanations for the results of the user studies presented in this thesis, and was justified by applying it to the results of other research studies discussed in the related work (Chapter 2).

Aside from the primary contributions of revealing and explaining differences in response to human and computer team-mates, this thesis also makes a contribution in the form of implications for the design of artificial team-mates. While it does not provide prescriptive guidance on how to compensate for differences in response to team-mates, this work suggests that designers can evaluate, debug, and refine their team-mate AI by taking a systematic approach. Developers can look for key events that are interpreted in very different ways; for example, when a team-mate pauses in the CTG game, the pause was interpreted in very different ways depending on the framed identity of the team-mate.
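As a hypothetical illustration of such a systematic approach (not from the thesis), a developer might log how players interpret key team-mate events under each framed identity, and then flag the events whose interpretations diverge between identities. All event and interpretation labels below are invented for this sketch:

```python
# Hypothetical audit sketch: flag team-mate events that players
# interpret differently depending on the framed identity.
from collections import defaultdict

def divergent_events(observations):
    """observations: (framed_identity, event, interpretation) triples."""
    seen = defaultdict(lambda: defaultdict(set))
    for identity, event, interpretation in observations:
        seen[event][identity].add(interpretation)
    flagged = []
    for event, by_identity in seen.items():
        human = by_identity.get("human", set())
        ai = by_identity.get("computer", set())
        if human and ai and human != ai:
            flagged.append(event)   # same event, different readings
    return flagged

obs = [
    ("human", "pause", "thinking/strategizing"),
    ("computer", "pause", "lag or broken AI"),
    ("human", "draw_fire", "helping"),
    ("computer", "draw_fire", "helping"),
]
# divergent_events(obs) flags "pause" but not "draw_fire"
```

Events flagged this way would be the candidates for further design exploration and evaluation with users.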
Developers can observe users and identify similar types of events. Once those events are identified, further design choices can be explored and evaluated with the users.

12.2 Limitations of this work

The work presented in this thesis provides evidence of differences in the response to team-mates and provides a framework that can be used to analyze other related studies. There are limitations, some of which were discussed in the chapters describing the user studies and in the discussion section; the most important, however, involve aspects of the game context and the research method. These are discussed now in more detail.

12.2.1 Limitations: Game context

In terms of the game context, the games were very simple, players did not know their human team-mates, and the scenarios only involved dyads. The games used in the studies were very simple: the graphics were flat and two-dimensional, and there were no sound effects. While this might at first seem to be a limitation, the simplicity of the games is a strength of the work. The games were fast-paced, rated as very enjoyable by the participants, allow for equivalent comparisons across team-mates, and are quite similar to typical casual games that are popular with mobile gamers. The studies were conducted with a single algorithm for the team-mate partner in order to keep the game experience consistent across all game sessions. Although the effects were significant using the team-mate at the current skill level, which was usually more skilled than the subject, it would be worthwhile to examine how team-mates of other skill levels would be perceived. Another possible limitation is that the studies did not allow the participants to meet or interact with their presumed human team-mates. Familiarity with the human participant would likely influence the ways human players would experience the game and respond.
Studying people who are already familiar with each other is a difficult context to study using a quantitative approach, and although the present findings are still important against the backdrop of the many anonymous gamers in CVEs, it is a logical next step to find out how the present research generalizes to other typical team-mate pairings. Furthermore, the studies focused only on dyadic interactions and did not study larger teams. While these various limitations of the game context do not undermine the findings of the studies, they signal substantial potential for future work, including studies of more complex games, larger teams, and artificial team-mates of different skill levels.

12.2.2 Limitations: Research Method

In terms of the research method, the interactions were very short, more questions could have been asked, biosignals were not recorded, and closer analysis could have been performed on video recordings of game play. The studies presented involved interactions of approximately five minutes per session. Although this may seem like a short time, it is typical for this domain of study [60], providing an adequate amount of time for the subjects to understand and play the game, yet not become bored by the end of the research session. It would be advantageous, however, to conduct studies of longer duration. In [71], researchers propose that impression development takes longer in contexts involving fewer social cues, and extended exposure may in fact result in a reduction in differences. Another possible limitation of the research method concerns the measurements used. Certainly, more questions could have been asked of the subjects, their movements in the game could have been video recorded for in-depth analysis, and biosignals of the participants could have been taken.
Although those all remain valuable measurements that could bring interesting results, the findings of the present studies are substantial, and the reduction in the number of measures was a deliberate choice made to ensure that the subjects did not tire from the inquiry and could focus on the game and their response to the cooperative experience. It is worthwhile to note that a focus on ethnographic methods, as used in [94, 72], could be helpful in future research to gain a better understanding of team-mate interactions "in the wild."

12.3 Future Research

While there are many possibilities for future exploration of the topic, perhaps one of the most intriguing possibilities for future work looks toward recent developments in CSCW in which conversational agent technologies are being designed to recognize the mental schemas adopted by the users and then adapt the system appropriately [59]. As proposed in recent research on the design of artificial agents, trying to build artificial agents that mimic life is not the most important focus. Instead, a more promising approach is to develop an active system that tries to determine and adjust to the human stance: to recognize whether the person is treating the agent like a machine or a human, and to react accordingly [46]. There is much exciting work ahead!

Bibliography

[1] R. P. Abelson. Psychological status of the script concept. American Psychologist, 36(7):715–729, 1981.
[2] A. T. Abraham and K. McGee. AI for dynamic team-mate adaptation in games. In Proceedings of the 2010 IEEE Conference on Computational Intelligence and Games, pages 419–426, Aug. 2010.
[3] J. C. Abric and J. P. Kahan. The effects of representations and behavior in experimental games. Eur. J. Soc. Psychol., 2(2):129–144, 1972.
[4] E. Aharoni and A. J. Fridlund. Social reactions toward people vs. computers: How mere labels shape interactions. Comput. Hum. Behav., 23:2175–2189, Sept. 2007.
[5] N. Ambady and R. Rosenthal.
Thin slices of expressive behavior as predictors of interpersonal consequences: A meta-analysis. Psychological Bulletin, 111(2):256–274, 1992.
[6] D. R. Ames. Inside the mind reader's tool kit: Projection and stereotyping in mental state inference. Journal of Personality and Social Psychology, 87(3):340–353, 2004.
[7] D. R. Ames. Everyday solutions to the problem of other minds: Which tools are used when? In B. F. Malle and S. D. Hodges, editors, Other Minds: How Humans Bridge the Divide between the Self and Others, pages 158–173. The Guilford Press, London, 2005.
[8] M. Argyle. Cooperation: The Basis of Sociability. Routledge, Jan. 1991.
[9] R. Axelrod. The Evolution of Cooperation. Basic Books, October 1985.
[10] J. N. Bailenson, K. Swinth, C. Hoyt, S. Persky, A. Dimov, and J. Blascovich. The independent and interactive effects of embodied-agent appearance and behavior on self-report, cognitive, and behavioral markers of copresence in immersive virtual environments. Presence: Teleoper. Virtual Environ., 14(4):379–393, Aug. 2005.
[11] J. A. Bargh. The cognitive monster: The case against the controllability of automatic stereotype effects. In S. Chaiken and Y. Trope, editors, Dual-process theories in social psychology, pages 361–382. The Guilford Press, New York, 1999.
[12] S. Baron-Cohen. Precursors to a theory of mind: Understanding attention in others. In A. Whiten, editor, Natural Theories of Mind: Evolution, Development and Simulation of Everyday Mindreading, pages 233–251. Blackwell Pub, 1991.
[13] S. G. Barsade. The ripple effect: Emotional contagion and its influence on group behavior. Administrative Science Quarterly, 47:644–675, 2002.
[14] F. C. Bartlett. Remembering: A Study in Experimental and Social Psychology (original publication 1932). Cambridge University Press, June 1995.
[15] A. L. Baylor. The design of motivational agents and avatars. Educational Technology Research and Development, 59(2):291–300, Apr. 2011.
[16] B. Beaton, S. Harrison, and D. Tatar. Digital drumming: a study of co-located, highly coordinated, dyadic collaboration. In Proceedings of the 28th international conference on Human factors in computing systems, CHI '10, pages 1417–1426. ACM, 2010.
[17] K. Bhatt, M. Evens, and S. Argamon. Hedged responses and expressions of affect in Human/Human and Human/Computer tutorial interactions. In Proc. Cognitive Science, 2004.
[18] J. Blascovich. Social influence within immersive virtual environments. In R. Schroeder, editor, The social life of avatars, pages 127–145. Springer-Verlag, 2002.
[19] J. Blascovich. A theoretical model of social influence for increasing the utility of collaborative virtual environments. In Proceedings of the 4th international conference on Collaborative virtual environments, CVE '02, pages 25–30. ACM, 2002.
[20] J. Blascovich and J. Bailenson. Infinite Reality: Avatars, Eternal Life, New Worlds, and the Dawn of the Virtual Revolution. William Morrow, Apr. 2011.
[21] C. Breazeal, J. Gray, and M. Berin. Mindreading as a foundational skill for socially intelligent robots. In M. Kaneko and Y. Nakamura, editors, Robotics Research, volume 66 of Springer Tracts in Advanced Robotics, chapter 32, pages 383–394. Springer Berlin / Heidelberg, Berlin, Heidelberg, 2011.
[22] S. E. Brennan. Conversation with and through computers. User Modeling and User-Adapted Interaction, 1(1):67–86, Mar. 1991.
[23] B. Brown and M. Bell. CSCW at play: 'there' as a collaborative virtual environment. In Proceedings of the 2004 ACM conference on Computer supported cooperative work, CSCW '04, pages 350–359. ACM, 2004.
[24] J. S. Bruner. Beyond the Information Given: Studies in the Psychology of Knowing. W W Norton & Co Inc, 1st edition, 1973.
[25] D. Carlston. Models of implicit and explicit mental representation. In B. Gawronski and B. K.
Payne, editors, Handbook of Implicit Social Cognition: Measurement, Theory, and Applications, pages 38–61. The Guilford Press, New York, 2010.
[26] J. Cassell. Embodied conversational interface agents. Commun. ACM, 43(4):70–78, Apr. 2000.
[27] T. Chaminade, M. Zecca, S.-J. J. Blakemore, A. Takanishi, C. D. Frith, S. Micera, P. Dario, G. Rizzolatti, V. Gallese, and M. A. A. Umiltà. Brain response to a humanoid robot in areas implicated in the perception of human emotional gestures. PLoS ONE, 5(7):e11577+, July 2010.
[28] G. B. Cross. Is an agent theory of mind (ToM) valuable for adaptive, intelligent systems? In Proceedings of the 9th Workshop on Performance Metrics for Intelligent Systems, PerMIS '09, pages 127–130. ACM, 2009.
[29] N. Dahlbäck, A. Jönsson, and L. Ahrenberg. Wizard of Oz studies. In IUI '93: Proceedings of the 1st international conference on Intelligent user interfaces, pages 193–200. ACM, 1993.
[30] Y. A. W. de Kort, W. A. IJsselsteijn, and K. Poels. Digital games as social presence technology: Development of the social presence in gaming questionnaire. In PRESENCE 2007, pages 195–203, Oct. 2007.
[31] U. Dimberg, M. Thunberg, and S. Grunedal. Facial reactions to emotional stimuli: Automatically controlled emotional responses. Cognition & Emotion, 16(4):449–471, July 2002.
[32] J. F. Dovidio, J. A. Piliavin, D. A. Schroeder, and L. A. Penner. The Social Psychology of Prosocial Behavior. Lawrence Erlbaum Associates, Apr. 2006.
[33] M. Dragone, B. R. Duffy, and G. M. P. O'Hare. Social interaction between robots, avatars & humans. In Robot and Human Interactive Communication, 2005. RO-MAN 2005. IEEE International Workshop on, pages 24–29. IEEE, Aug. 2005.
[34] N. Ducheneaut and R. J. Moore. The social side of gaming: a study of interaction patterns in a massively multiplayer online game. In Proceedings of the 2004 ACM conference on Computer supported cooperative work, CSCW '04, pages 360–369. ACM, 2004.
[35] N. Ducheneaut, N.
Yee, E. Nickell, and R. J. Moore. "Alone together?": exploring the social dynamics of massively multiplayer online games. In Proceedings of the SIGCHI conference on Human Factors in computing systems, CHI '06, pages 407–416. ACM, 2006.
[36] S. Fiske and S. Neuberg. A Continuum of Impression Formation, from Category-Based to Individuating Processes: Influences of Information and Motivation on Attention and Interpretation, volume 23 of Advances in Experimental Social Psychology, pages 1–74. Elsevier, 1990.
[37] S. Fiske and S. Taylor. Social Cognition, from Brains to Culture. McGraw-Hill Humanities/Social Sciences/Languages, Oct. 2007.
[38] B. A. Fox. The Human Tutorial Dialogue Project: Issues in the Design of Instructional Systems (Computers, Cognition, and Work Series). CRC Press, Sept. 1993.
[39] B. Friedman. "It's the computer's fault": reasoning about computers as moral agents. In Conference companion on Human factors in computing systems, CHI '95, pages 226–227. ACM, 1995.
[40] B. Gajadhar, Y. de Kort, and W. IJsselsteijn. Influence of social setting on player experience of digital games. In CHI '08 extended abstracts on Human factors in computing systems, CHI EA '08, pages 3099–3104. ACM, 2008.
[41] H. L. Gallagher, A. I. Jack, A. Roepstorff, and C. D. Frith. Imaging the intentional stance in a competitive game. NeuroImage, 16(3):814–821, July 2002.
[42] V. Gallese. Mirror neurons and the simulation theory of mind-reading. Trends in Cognitive Sciences, 2(12):493–501, Dec. 1998.
[43] V. Groom, J. Chen, T. Johnson, F. A. Kara, and C. Nass. Critic, compatriot, or chump?: responses to robot blame attribution. In Proceeding of the 5th ACM/IEEE international conference on Human-robot interaction, HRI '10, pages 211–218. ACM, 2010.
[44] Y. Hayashi and K. Miwa. Cognitive and emotional characteristics of communication in Human-Human/Human-Agent interaction.
In Proceedings of the 13th International Conference on Human-Computer Interaction. Part III: Ubiquitous and Intelligent Interaction, pages 267–274. Springer-Verlag, 2009.
[45] F. Heider. The psychology of interpersonal relations. Wiley, New York, 1958.
[46] D. Heylen, R. op den Akker, M. ter Maat, P. Petta, S. Rank, D. Reidsma, and J. Zwiers. On the nature of engineering social artificial companions. Applied Artificial Intelligence, 25(6):549–574, June 2011.
[47] L. M. Hiatt, A. M. Harrison, and J. G. Trafton. Accommodating human variability in Human-Robot teams through theory of mind. In Twenty-Second International Joint Conference on Artificial Intelligence, 2011.
[48] M. Hoogendoorn and J. Soumokil. Evaluation of virtual agents utilizing theory of mind in a real time action game. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, Volume 1, AAMAS '10, pages 59–66, Richland, SC, 2010. International Foundation for Autonomous Agents and Multiagent Systems.
[49] D. Johnson and J. Gardner. Exploring mindlessness as an explanation for the media equation: a study of stereotyping in computer tutorials. Personal and Ubiquitous Computing, 13(2):151–163, Feb. 2009.
[50] J. Kaye, M. K. Levitt, J. Nevins, J. Golden, and V. Schmidt. Communicating intimacy one bit at a time. In CHI '05: CHI '05 extended abstracts on Human factors in computing systems, pages 1529–1532. ACM, 2005.
[51] A. Kennedy. Dialogue with machines. Cognition, 30(1):37–72, Oct. 1988.
[52] S. Kiesler, L. Sproull, and K. Waters. A prisoner's dilemma experiment on cooperation with people and human-like computers. Journal of Personality and Social Psychology, 70(1):47–65, Jan. 1996.
[53] Y. Kim. Pedagogical Agents as Learning Companions: The Effects of Agent Affect and Gender on Learning, Interest, Self-Efficacy, and Agent Persona. PhD thesis, Florida State University, June 2004.
[54] T. Kircher, I. Blümel, D. Marjoram, T. Lataster, L.
Krabbendam, J. Weber, J. van Os, and S. Krach. Online mentalising investigated with functional MRI. Neuroscience Letters, 454(3):176–181, May 2009.
[55] G. Knoblich and N. Sebanz. The social nature of perception and action. Current Directions in Psychological Science, 15(3):99–104, June 2006.
[56] A. Kohn. No Contest: The Case Against Competition. Houghton Mifflin, 2nd revised edition, Nov. 1992.
[57] E.-J. Lee. What triggers social responses to flattering computers? Experimental tests of anthropomorphism and mindlessness explanations. Communication Research, 37(2):191–214, Apr. 2010.
[58] E. J. Lee and C. Nass. Does the ethnicity of a computer agent matter? An experimental comparison of human-computer interaction and computer-mediated communication. In Proceedings of the 1st Workshop on Embodied Conversational Characters (WECC'98), Oct. 1998.
[59] M. K. Lee, S. Kiesler, and J. Forlizzi. Receptionist or information kiosk: how do people talk with a robot? In Proceedings of the 2010 ACM conference on Computer supported cooperative work, CSCW '10, pages 31–40. ACM, 2010.
[60] S. Lim and B. Reeves. Computer agents versus avatars: Responses to interactive game characters controlled by a computer or other player. International Journal of Human-Computer Studies, 68(1-2):57–68, January 2010.
[61] R. L. Mandryk, K. M. Inkpen, and T. W. Calvert. Using psychophysiological techniques to measure user experience with entertainment technologies. Behaviour & Information Technology, 25(2):141–158, Apr. 2006.
[62] G. Mantovani. Social context in HCI: A new framework for mental models, cooperation, and communication. Cognitive Science, 20(2):237–269, June 1996.
[63] K. McGee, T. Merritt, and C. Ong. What we have here is a failure of companionship: communication in goal-oriented team-mate games. In Proceedings of OzCHI 2011, the 23rd Annual Conference of the Australian Computer-Human Interaction Special Interest Group: Design, pages 190–193. ACM, Nov. 2011.
[64] K. Meissner, U.
Bingel, L. Colloca, T. D. Wager, A. Watson, and M. A. Flaten. The placebo effect: Advances from different methodological approaches. The Journal of Neuroscience, 31(45):16117–16124, Nov. 2011.
[65] T. Merritt, T. L. Chuah, C. Ong, and K. McGee. Choosing human team-mates: perceived identity as a moderator of player preference and enjoyment. In Proceedings of the 2011 Foundations of Digital Games Conference. Society for the Advancement of the Science of Digital Games (SASDG), ACM Press, June 2011.
[66] T. Merritt and K. McGee. Protecting artificial team-mates: when doing more feels like less. In (accepted for publication) Proceedings of the SIGCHI conference on Human Factors in computing systems. ACM, May 2012.
[67] T. Merritt, C. Ong, T. Chuah, and K. McGee. Did you notice? Artificial team-mates take risks for players. In H. Vilhjálmsson, S. Kopp, S. Marsella, and K. Thórisson, editors, Intelligent Virtual Agents (IVA) 2011 conference, volume 6895 of Lecture Notes in Computer Science, chapter 37, pages 338–349. Springer Berlin / Heidelberg, 2011.
[68] T. R. Merritt, K. B. Tan, C. Ong, A. Thomas, T. L. Chuah, and K. McGee. Are artificial team-mates scapegoats in computer games? In Proceedings of the ACM 2011 conference on Computer supported cooperative work, CSCW '11, pages 685–688. ACM, 2011.
[69] K. Miwa and H. Terai. Analysis of human-human and human-computer agent interactions from the viewpoint of design of and attribution to a partner. In The 28th Annual Conference of the Cognitive Science Society, pages 597–602, 2006.
[70] P. R. Montague and P. H. Chiu. For goodness' sake. Nature Neuroscience, 10(2):137–138, Feb. 2007.
[71] J. Morkes, H. K. Kernal, and C. Nass. Effects of humor in task-oriented human-computer interaction and computer-mediated communication: a direct test of SRCT theory. Hum.-Comput. Interact., 14:395–435, December 1999.
[72] B. Nardi and J. Harris.
Strangers and friends: collaborative play in World of Warcraft. In Proceedings of the 2006 20th anniversary conference on Computer supported cooperative work, CSCW '06, pages 149–158. ACM, 2006.
[73] C. Nass, K. Isbister, and E. Lee. Truth is beauty: Researching embodied conversational agents. In J. Cassell, J. Sullivan, S. Prevost, and E. Churchill, editors, Embodied Conversational Agents, pages 374–402. MIT Press, Apr. 2000.
[74] C. Nass, J. Steuer, E. Tauber, and H. Reeder. Anthropomorphism, agency, and ethopoeia: computers as social actors. In INTERACT '93 and CHI '93 conference companion on Human factors in computing systems, CHI '93, pages 111–112. ACM, 1993.
[75] C. Nass, J. Steuer, and E. R. Tauber. Computers are social actors. In Proceedings of the SIGCHI conference on Human factors in computing systems: celebrating interdependence, CHI '94, pages 72–78. ACM, 1994.
[76] K. L. Nowak and F. Biocca. The effect of the agency and anthropomorphism on users' sense of telepresence, copresence, and social presence in virtual environments. Presence: Teleoper. Virtual Environ., 12:481–494, Oct. 2003.
[77] L. M. Oberman, J. P. McCleery, V. S. Ramachandran, and J. A. Pineda. EEG evidence for mirror neuron activity during the observation of human and robot actions: Toward an analysis of the human qualities of interactive robots. Neurocomput., 70(13-15):2194–2203, Aug. 2007.
[78] S. Oviatt and B. Adams. Designing and evaluating conversational interfaces with animated characters. In J. Cassell, J. Sullivan, S. Prevost, and E. Churchill, editors, Embodied Conversational Agents, pages 319–345. MIT Press, Apr. 2000.
[79] S. Park, A. D. Fisk, and W. A. Rogers. Human factors considerations for the design of collaborative machine assistants. In H. Nakashima, H. Aghajan, and J. C.
Augusto, editors, Handbook of Ambient Intelligence and Smart Environments, chapter 36, pages 961–984. Springer US, Boston, MA, 2010.
[80] J. Piaget. The Child's Conception of the World. Harcourt, Brace and Co., New York, 1929.
[81] H. Plassmann, J. O'Doherty, B. Shiv, and A. Rangel. Marketing actions can modulate neural representations of experienced pleasantness. Proceedings of the National Academy of Sciences, 105(3):1050–1054, Jan. 2008.
[82] D. Premack and G. Woodruff. Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(04):515–526, 1978.
[83] S. Radke, F. P. de Lange, M. Ullsperger, and E. R. de Bruijn. Mistakes that affect others: an fMRI study on processing of own errors in a social context. Experimental Brain Research, 211(3-4):405–413, June 2011.
[84] N. Ravaja, T. Saari, M. Turpeinen, J. Laarni, M. Salminen, and M. Kivikangas. Spatial presence and emotions during video game playing: Does it matter with whom you play? Presence: Teleoper. Virtual Environ., 15:381–392, August 2006.
[85] B. Reeves and C. Nass. The media equation: how people treat computers, television, and new media like real people and places. Cambridge University Press, 1996.
[86] K. Salen and E. Zimmerman. Rules of play: game design fundamentals. MIT Press, Oct. 2003.
[87] A. G. Sanfey, J. K. Rilling, J. A. Aronson, L. E. Nystrom, and J. D. Cohen. The neural basis of economic decision-making in the ultimatum game. Science, 300(5626):1755–1758, June 2003.
[88] A. Serenko. Are interface agents scapegoats?: attributions of responsibility in human-agent interaction. Interacting with Computers, 19(2):293–303, 2007.
[89] N. Shechtman and L. M. Horowitz. Media inequality in conversation: how people behave differently when interacting with computers and people. In Proceedings of the SIGCHI conference on Human factors in computing systems, CHI '03, pages 281–288. ACM, 2003.
[90] H. Shibata, T. Inui, and K. Ogawa.
Understanding interpersonal action coordination: an fMRI study. Experimental Brain Research, 211(3-4):569–579, June 2011.
[91] G. Si, S. Rethorst, and K. Willimczik. Causal attribution perception in sports achievement. Journal of Cross-Cultural Psychology, 26(5):537–553, Sept. 1995.
[92] S. S. Sundar and C. Nass. Source orientation in human-computer interaction: Programmer, networker, or independent social actor. Communication Research, 27(6):683–703, Dec. 2000.
[93] S. Turkle. Life on the Screen: Identity in the Age of the Internet. Simon & Schuster, Sept. 1997.
[94] S. Turkle. Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books, first edition, Jan. 2011.
[95] J. Uleman, L. Newman, and G. Moskowitz. People as Flexible Interpreters: Evidence and Issues from Spontaneous Trait Inference, volume 28 of Advances in Experimental Social Psychology, pages 211–279. Elsevier, 1996.
[96] A. M. von der Pütten, N. C. Krämer, J. Gratch, and S.-H. Kang. It doesn't matter what you are! Explaining social effects of agents and avatars. Computers in Human Behavior, 26(6):1641–1650, Nov. 2010.
[97] T. D. Wager, J. K. Rilling, E. E. Smith, A. Sokolik, K. L. Casey, R. J. Davidson, S. M. Kosslyn, R. M. Rose, and J. D. Cohen. Placebo-induced changes in fMRI in the anticipation and experience of pain. Science, 303(5661):1162–1167, Feb. 2004.
[98] A. Waytz, J. Cacioppo, and N. Epley. Who sees human? Perspectives on Psychological Science, 5(3):219–232, May 2010.
[99] D. Weibel, B. Wissmath, S. Habegger, Y. Steiner, and R. Groner. Playing online games against computer- vs. human-controlled opponents: Effects on presence, flow, and enjoyment. Computers in Human Behavior, 24(5):2274–2291, Sept. 2008.
[100] B. Weiner. An attributional theory of achievement motivation and emotion. Psychological Review, 92(4):548–573, Oct. 1985.
[101] B. Weiner.
Intrapersonal and interpersonal theories of motivation from an attributional perspective. Educational Psychology Review, 12(1):1–14, Mar. 2000.
[102] J. Weizenbaum. ELIZA: a computer program for the study of natural language communication between man and machine. Commun. ACM, 9(1):36–45, Jan. 1966.
[103] R. Williams. Aggression, competition and computer games: computer and human opponents. Computers in Human Behavior, 18(5):495–506, Sept. 2002.
[104] S. You, J. Nie, K. Suh, and S. S. Sundar. When the robot criticizes you...: self-serving bias in human-robot interaction. In Proceedings of the 6th international conference on Human-robot interaction, HRI '11, pages 295–296. ACM, 2011.