Who Needs Emotions? The Brain Meets the Robot (Fellous & Arbib), Part 15


The four major components of the TAME model operate on different time and activation scales. Emotions are high-activation and short-term, while moods are low-activation and relatively prolonged. Traits and attitudes determine the underlying disposition of the robot and are relatively time-invariant. The basis for each of these four components is discussed briefly below (see also Ortony et al., Chapter 7).

Traits serve as an adaptation mechanism to specialized tasks and environments, whereas emotions mobilize the organism to provide a fast response to significant environmental stimuli. The five-factor model of personality developed by McCrae and Costa (1996) serves as the basis for the trait components. Trait dimensions include openness (O), agreeableness (A), conscientiousness (C), extroversion (E), and neuroticism (N). Traits influence a wide range of behaviors and are not limited to emotionally charged situations.

Emotion, in the TAME context, is an organized reaction to an event that is relevant to the needs, goals, or survival of the organism (Watson, 2000). It is short in duration, noncyclical, and characterized by a high activation state and significant energy and bodily resource expenditure. A typical set of emotions to which we subscribe includes joy, interest, surprise, fear, anger, sadness, and disgust (Watson, 2000); these are continuously and dynamically generated as emotion-eliciting stimuli are detected.

[Figure 9.9. Integrated model of personality and affect (traits, attitudes, moods, and emotions [TAME] model). FFM, five-factor model. The figure shows perceptual input feeding a personality and affect module (trait, emotion, mood, and attitude/sentiment components) whose affective state sets the behavioral parameters used to coordinate the behavioral assemblage and produce the robot's motor vector.]

Moods bias behavior according to favorable/unfavorable environmental conditions and are defined by the two independent categories of positive and negative affect (Revelle, 1995; see also Chapter 7, Ortony et al.). They constitute a continuous affective state that represents low activation and low intensity and, thus, expends less energy and bodily resources than emotion. Moods are mainly stimulus-independent and exhibit cyclical (circadian) variation according to time of day, day of the week, and season.

An attitude is a "learned predisposition to respond in a consistently favorable or unfavorable manner with respect to a given object" (Breckler & Wiggins, 1989). Attitudes guide behavior toward desirable goals and away from aversive objects, and they facilitate decision making by reducing the complexity of the decision space. They are relatively time-invariant, object/situation-specific, and influenced by affect, and they result in a characteristic behavior toward the object.

To test the TAME model, the personality and affect module was partially integrated into the MissionLab system, a supplemented version of AuRA (Arkin & Balch, 1997; see also Moshkina & Arkin, 2003, for additional details). Research is now under way on the administration of formal usability studies to determine whether this form of affect can play a significant role in improving a user's experience with a robot.
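To make the separation of time scales concrete, here is a minimal sketch of how the four components might be combined in code. It is an illustration only, not the MissionLab implementation; the class names, constants, and the particular gain formula are all assumptions.

```python
import math

# Minimal sketch of a TAME-style affect module (hypothetical, not the
# MissionLab code): a fixed trait vector sets baselines, a slow circadian
# mood term adds a stimulus-independent bias, and emotions are injected
# by eliciting stimuli and decay within seconds.

TRAITS = {"O": 0.6, "C": 0.7, "E": 0.5, "A": 0.8, "N": 0.3}  # FFM, in [0, 1]

class Emotion:
    def __init__(self, name, intensity, decay_per_s=0.5):
        self.name, self.intensity, self.decay = name, intensity, decay_per_s

    def update(self, dt):
        # Short-term and high-activation: exponential decay toward zero.
        self.intensity *= math.exp(-self.decay * dt)

class AffectModule:
    def __init__(self, traits):
        self.traits = traits      # relatively time-invariant disposition
        self.emotions = []        # generated continuously as stimuli arrive

    def on_stimulus(self, name, salience):
        self.emotions.append(Emotion(name, salience))

    def mood(self, t_seconds):
        # Low-activation affect varying on a 24-hour (circadian) cycle.
        return 0.1 * math.sin(2 * math.pi * t_seconds / 86400.0)

    def behavioral_params(self, t_seconds, dt):
        for e in self.emotions:
            e.update(dt)
        self.emotions = [e for e in self.emotions if e.intensity > 0.01]
        fear = sum(e.intensity for e in self.emotions if e.name == "fear")
        # Illustrative output: an obstacle-avoidance gain raised by the
        # neuroticism baseline, by transient fear, and by negative mood.
        gain = 1.0 + 0.5 * self.traits["N"] + fear - self.mood(t_seconds)
        return {"avoid_gain": gain}
```

The only point of the sketch is the layering of time scales: the trait vector never changes, the mood term drifts over hours independently of stimuli, and each emotion rises with an eliciting event and decays within seconds.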
SUMMARY AND CONCLUSION

In the end, what can we learn from this journey through a broad range of motivations/emotions that span multiple species? I propose the following:

• Emotions, at least to a roboticist, consist of a subset of motivations that can be used to dynamically modulate ongoing behavioral control in a manner consistent with survival of the robotic agent (Arkin & Vachtsevanos, 1990). The nuances surrounding which species possess emotions versus motivations, and the terminological differences between these terms, are in my opinion best left to nonroboticists, as it is unclear whether resolving these semantic differences will have any impact whatsoever on our ability to build more responsive machines. Our community, however, sorely needs more and better computational models and processes of affect that effectively capture these components within a behavioral setting.

• Human–robot interaction can be significantly enhanced by the introduction of emotional models that benefit humans as much as robots.

• Motivational/emotional models can be employed that span many different organisms and that can match the requirements of an equally diverse robotic population, ranging from vacuum cleaners to military systems to entertainment robots and others. All of these systems need to survive within their ecological niche and must respond to a broad range of threats toward their extinction or obsolescence. The principle of biological economy would argue that emotions/motivations exist in biology to serve a useful purpose, and it is our belief that robots can only benefit by having a similar capability at their disposal.

• The diversity of emotional models is something to celebrate, not lament, as they all can potentially provide fodder for robotic system designers. As I have often said, I would use phlogiston as a model if it provided the basis for creating better and more intelligent robots, even if it does not explain natural phenomena accurately.

Finally, there is much more work to be done. This branch of robotics has been enabled by major computational and hardware advances that have come into existence only within the past few decades. As such, it is an exciting time to be studying these problems in the context of artificial entities.

Notes

The author is deeply indebted to the researchers whose work is reported in this chapter, including Yoichiro Endo, Khaled Ali, Francisco Cervantes-Perez, Alfredo Weitzenfeld, Maxim Likhachev, Masahiro Fujita, Tsubouchi Takagi, Rika Hasegawa, and Lilia Moshkina. Research related to this article has been supported by the National Science Foundation, the Defense Advanced Research Projects Agency (DARPA), and the Georgia Tech Graphics, Visualization and Usability (GVU) Center. The author also thanks Dr. Doi, the director of the Digital Creatures Laboratory, Sony, for his continuous support of our research activity.

1. eBug is available online (http://www.cc.gatech.edu/ai/robot-lab/research/ebug/).

2. Etymology: from the Nahuatl word chantli, which means shelter or refuge, and the Greek taxia, for attraction (Cervantes-Perez, personal communication, 2003).

3. MissionLab is freely available online (www.cc.gatech.edu/ai/robot-lab/research/MissionLab.html).

References

Ainsworth, M. D. S., & Bell, S. M. (1970). Attachment, exploration and separation: Illustrated by the behaviour of one-year-olds in a strange situation. Child Development, 41, 49–67.
Arbib, M. A. (1992). Schema theory. In S. Shapiro (Ed.), Encyclopedia of artificial intelligence (2nd ed., pp. 1427–1443). New York: Wiley.
Arbib, M. A., & Lieblich, I. (1977). Motivational learning of spatial behavior. In J. Metzler (Ed.), Systems neuroscience (pp. 221–240). London: Academic Press.
Arkin, R. C. (1989). Motor schema-based mobile robot navigation. International Journal of Robotics Research, 8, 92–112.
Arkin, R. C. (1990). The impact of cybernetics on the design of a mobile robot system: A case study. IEEE Transactions on Systems, Man, and Cybernetics, 20, 1245–1257.
Arkin, R. C. (1992). Homeostatic control for a mobile robot: Dynamic replanning in hazardous environments. Journal of Robotic Systems, 9, 197–214.
Arkin, R. C. (1998). Behavior-based robotics. Cambridge, MA: MIT Press.
Arkin, R. C., Ali, K., Weitzenfeld, A., & Cervantes-Perez, F. (2000). Behavioral models of the praying mantis as a basis for robotic behavior. Robotics and Autonomous Systems, 32, 39–60.
Arkin, R. C., & Balch, T. (1997). AuRA: Principles and practice in review. Journal of Experimental and Theoretical Artificial Intelligence, 9, 175–189.
Arkin, R. C., Cervantes-Perez, F., & Weitzenfeld, A. (1998). Ecological robotics: A schema-theoretic approach. In R. C. Bolles, H. Bunke, & H. Noltemeier (Eds.), Intelligent robots: Sensing, modelling and planning (pp. 377–393). Singapore: World Scientific.
Arkin, R. C., Fujita, M., Takagi, T., & Hasegawa, R. (2001). Ethological modeling and architecture for an entertainment robot. In 2001 IEEE International Conference on Robotics and Automation (pp. 453–458). Seoul, Korea: IEEE.
Arkin, R. C., Fujita, M., Takagi, T., & Hasegawa, R. (2003). An ethological and emotional basis for human–robot interaction. Robotics and Autonomous Systems, 42, 191–201.
Arkin, R. C., & Vachtsevanos, G. (1990). Techniques for robot survivability. In Proceedings of the 3rd International Symposium on Robotics and Manufacturing (pp. 383–388). Vancouver, BC.
Blumberg, B. (1994). Action-selection in Hamsterdam: Lessons from ethology. In D. Cliff et al. (Eds.), From Animals to Animats 3 (pp. 108–117). Cambridge, MA: MIT Press.
Blurton Jones, N. (1972). Ethological studies of child behavior. London: Cambridge University Press.
Bowlby, J. (1969). Attachment and loss: Vol. I. Attachment. London: Hogarth.
Braitenberg, V. (1984). Vehicles: Experiments in synthetic psychology. Cambridge, MA: MIT Press.
Breazeal, C. (2002). Designing sociable robots. Cambridge, MA: MIT Press.
Breazeal, C., & Scassellati, B. (1999). How to build robots that make friends and influence people. In Proceedings of the International Conference on Intelligent Robots and Systems 1999 (pp. 858–863).
Breckler, S. J., & Wiggins, E. C. (1989). On defining attitude and attitude theory: Once more with feeling. In A. R. Pratkanis, S. J. Breckler, & A. G. Greenwald (Eds.), Attitude structure and function (pp. 407–429). Mahwah, NJ: Erlbaum.
Brooks, R. (1986). A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, RA-2, 14–23.
Cervantes-Perez, F., Franco, A., Velazquez, S., & Lara, N. (1993). A schema-theoretic approach to study the "chantitlaxia" behavior in the praying mantis. In Proceedings of the First Workshop on Neural Architectures and Distributed AI: From Schema Assemblages to Neural Networks. USC.
Colin, V. L. (1996). Human attachment. New York: McGraw-Hill.
Darwin, C. (1965). The expression of the emotions in man and animals. Chicago: University of Chicago Press. (Original work published 1872)
Dautenhahn, K., & Billard, A. (1999). Bringing up robots or psychology of socially intelligent robots: From theory to implementation. In Proceedings of the 3rd International Conference on Autonomous Agents (pp. 366–367). Seattle, WA: Association for Computing Machinery.
Dolan, R. J. (2002). Emotion, cognition, and behavior. Science, 298, 1191–1194.
Dunn, J. (1977). Distress and comfort. Cambridge, MA: Harvard University Press.
Ekman, P., & Davidson, R. J. (1994). The nature of emotion. Oxford: Oxford University Press.
Endo, Y., & Arkin, R. C. (2001). Implementing Tolman's schematic sowbug: Behavior-based robotics in the 1930s. In 2001 IEEE International Conference on Robotics and Automation (pp. 477–484). Seoul, Korea: IEEE.
Feeney, J., & Noller, P. (1996). Adult attachment. London: Sage.
Fujita, M., Hasegawa, R., Costa, G., Takagi, T., Yokono, J., & Shimomura, H. (2001a). Physically and emotionally grounded symbol acquisition for autonomous robots. In Proceedings of the AAAI Fall Symposium: Emotional and Intelligent II (pp. 43–46).
Fujita, M., Hasegawa, R., Costa, G., Takagi, T., Yokono, J., & Shimomura, H. (2001b). An autonomous robot that eats information via interaction with human and environment. In Proceedings of IEEE ROMAN-01.
Gibson, J. J. (1977). The theory of affordances. In R. Shaw & J. Bransford (Eds.), Perceiving, acting, and knowing. Mahwah, NJ: Erlbaum.
Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335–346.
Hebb, D. O. (1946). On the nature of fear. Psychological Review, 53, 259–276.
Innis, N. K. (1999). Edward C. Tolman's purposive behaviorism. In Handbook of behaviorism (pp. 97–117). New York: Academic Press.
Likhachev, M., & Arkin, R. C. (2000). Robotic comfort zones. In Proceedings of SPIE: Sensor Fusion and Decentralized Control in Robotic Systems III (pp. 27–41).
MacKenzie, D., Arkin, R. C., & Cameron, R. (1997). Multiagent mission specification and execution. Autonomous Robots, 4, 29–52.
McCrae, R. R., & Costa, P. T. (1996). Toward a new generation of personality theories: Theoretical contexts for the five-factor model. In The five-factor model of personality (pp. 51–87). New York: Guilford.
McFarland, D. (Ed.). (1974). Motivational control systems analysis. London: Academic Press.
McFarland, D., & Bosser, T. (1993). Intelligent behavior in animals and robots. Cambridge, MA: MIT Press.
Menzel, P., & D'Aluisio, F. (2000). Robo sapiens. Cambridge, MA: MIT Press.
Minsky, M. (1986). The society of mind. New York: Simon & Schuster.
Moshkina, L., & Arkin, R. C. (2003). On TAMEing robots. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics. Piscataway, NJ: IEEE.
Neisser, U. (1976). Cognition and reality: Principles and implications of cognitive psychology. New York: Freeman.
Revelle, W. (1995). Personality processes. Annual Review of Psychology, 46, 295–328.
Sroufe, L. A., Waters, E., & Matas, L. (1974). Contextual determinants of infant affective response. In M. Lewis & L. A. Rosenblum (Eds.), The origins of fear. New York: Wiley.
Steels, L. (1994). A case study in the behavior-oriented design of autonomous agents. In D. Cliff et al. (Eds.), From Animals to Animats 3 (pp. 445–452). Cambridge, MA: MIT Press.
Takanishi, A. (1999). An anthropomorphic robot head having autonomous facial expression function for natural communication with human. In 9th International Symposium on Robotics Research (ISRR99) (pp. 197–304). London: Springer-Verlag.
Tolman, E. C. (1939). Prediction of vicarious trial and error by means of the schematic sowbug. Psychological Review, 46, 318–336.
Tolman, E. C. (1951). Behavior and psychological man. Berkeley: University of California Press.
Walter, G. (1953). The living brain. New York: Norton.
Watson, D. (2000). Mood and temperament. New York: Guilford.

10

Robot Emotion: A Functional Perspective

Cynthia Breazeal and Rodney Brooks

Robots are becoming more and more ubiquitous in human environments. The time has come when their ability to intelligently and effectively interact with us needs to match the level of technological sophistication they have already achieved, whether these robots are tools, avatars (human surrogates), partners (as in a team), or cyborg extensions (prostheses). Emotion-inspired mechanisms can improve the way autonomous robots operate in a human environment with people and can improve the ability of these robots to effectively achieve their own goals. Such goals may be related to accomplishing tasks or satisfying motivations, and they may be achieved either autonomously or in cooperation with a person. In order to do this in a way that is natural for humans, the robot needs to be designed with a social model in mind. We illustrate this concept by describing in detail the design of Kismet, an anthropomorphic robot that interacts with a human in a social way, focusing on its facial and vocal expressions and gaze direction.

Kismet's architecture includes a cognitive system that is tightly coupled to a separate emotive system. Each is designed as interacting networks of "specialists" that are activated when specific conditions are met. The cognitive component is responsible for perceiving and interpreting events and for selecting among a hierarchy of goal-achieving behaviors, in accordance with its current motivational drives. It is primarily concerned with homeostasis and "well-being." The emotive system implements eight basic emotions that are proposed to exist across species. It detects those internal and external events that have affective value and motivates either task-based or communicative behavior to pursue beneficial interactions and to avoid those that are not beneficial, by modulating the operation of the cognitive component. When placed in a realistic social setting, these two systems interact to achieve lifelike attention bias, flexible decision making, goal prioritization and persistence, and effective communication, where the robot interacts in a natural and intuitive way with the person to achieve its goals.
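A toy sketch can suggest how such a network of emotion "specialists" might be organized. This is a hypothetical rendering of the description above, not Kismet's actual code; the emotion labels, thresholds, and winner-take-all arbitration are illustrative assumptions.

```python
# Hypothetical miniature of the specialist-network idea: each emotion
# "specialist" accumulates activation from affectively tagged events and,
# once past its threshold, the most active one is allowed to modulate
# behavior selection on the cognitive side.

BASIC_EMOTIONS = ["joy", "sorrow", "anger", "fear",
                  "disgust", "surprise", "interest", "calm"]  # illustrative set

class Specialist:
    def __init__(self, name, threshold=0.5, decay=0.9):
        self.name, self.threshold, self.decay = name, threshold, decay
        self.activation = 0.0

    def evaluate(self, events):
        # Accumulate affectively tagged events, with leaky decay.
        self.activation = self.decay * self.activation + sum(
            value for tag, value in events if tag == self.name)

class EmotiveSystem:
    def __init__(self):
        self.specialists = [Specialist(n) for n in BASIC_EMOTIONS]

    def active_emotion(self, events):
        # A specialist becomes active only when its conditions are met
        # (here, when its accumulated activation crosses its threshold).
        for s in self.specialists:
            s.evaluate(events)
        winner = max(self.specialists, key=lambda s: s.activation)
        return winner.name if winner.activation > winner.threshold else None

# e.g., EmotiveSystem().active_emotion([("fear", 0.7)]) returns "fear";
# the cognitive system could then raise the priority of a retreat behavior
# while "interest" would instead bias attention toward the stimulus.
```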
Why should there be any serious research at all on the possibility of endowing robots with emotions? Surely this is the antithesis of engineering practice, which is concerned with making functional devices rather than ones that invoke emotions in people—the latter is the realm of art or, at best, design.

Over the last 100 years, the average home in the Western world has seen the adoption of new technologies that at first seemed esoteric and unnecessarily luxurious. These include electricity, refrigeration, running hot water, telephone service, and, most recently, wideband internet connections. Today, the first few home robots are starting to be sold in conventional retail stores. Imagine the world 50 years from now, when robots are common in everybody's home. What will they look like, and how will people interact with them?

Electricity, refrigeration, and running hot water are utilities that are simply present in our homes. The first robots that have appeared in homes are also largely a utility but have a presence that triggers in some people responses that are normally triggered only by living creatures. People do not name their electrical outlets and talk to them, but they do name their robots and, today at least, carry on largely one-sided social interactions with them. Will it make sense, in the future, to capitalize on the human tendency toward social interaction in order to make machines easier to use?

Today's home-cleaning robots are not aware of people as people. They are not even aware of the difference between a moving obstacle and a static obstacle. So, today, they treat people as they would any other obstacle, something to be skirted around while cleaning right up against them.

Imagine a future home-cleaning robot, 50 years from now, and the capabilities it might plausibly have. It should be aware of people as people because they have very different behaviors from static objects such as chairs or walls. When a person happens to be walking down the hallway where a robot is cleaning, it might determine that there is a person walking toward it. A reasonable behavior would then be to get out of the way. By estimating the person's mood and level of attention to his or her surroundings, the robot can determine just how quickly it should get out of the way. If a person points to the corner of a room and says "Clean more over here," the robot should understand that the person's focus of attention is the room corner, rather than the end of the finger used to point. If the person utters an angry "No" directed toward the robot, it should reevaluate its recent actions and adapt its future behavior in similar circumstances.

These are examples of the robot reading cues, some emotional, from a person; but a robot might also interact with people more easily if it provides social cues itself. As the robot notices a person approaching down a corridor, it might, like a dog, make brief eye contact with the person (here, the person has to estimate its gaze direction) and then give a bodily gesture showing that it is accommodating the person and the person's intent. When cleaning that dirty corner, it might express increased levels of both frustration and determination as it works on a particularly difficult food spill on the carpet, so that the person who directed its attention to that corner can rest assured that it has understood the importance of the command and that it will do what it takes to fulfill it. Upon hearing that angry "No," the robot may express its chagrin in an emotional display so that the person knows intuitively that the command has been heard and understood.

A robot with these sorts of capabilities would seem luxuriously out of place in our current homes, but such capabilities may come to be expected, as they will make interaction with robots more natural and simple. In order to get to these future levels of social functionality, we need to investigate how to endow our robots with emotions and how to enable them to read social and emotional cues from people.
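As one concrete reading of the hallway scenario, consider a sketch in which the robot scales its avoidance speed by its estimates of the person's mood and attentiveness. The state fields and constants here are hypothetical, not a published design.

```python
from dataclasses import dataclass

# Hypothetical sketch of the hallway scenario: the robot yields faster
# when the approaching person seems agitated or inattentive. All fields
# and constants are illustrative assumptions.

@dataclass
class PersonEstimate:
    approaching: bool
    arousal: float      # 0 = calm, 1 = agitated (estimated mood)
    attention: float    # 0 = oblivious to surroundings, 1 = fully attentive

def avoidance_speed(p: PersonEstimate, base_speed: float = 0.2) -> float:
    """Return the speed (m/s) at which to move out of the person's path."""
    if not p.approaching:
        return 0.0
    # An agitated or inattentive person warrants a faster, earlier retreat;
    # a calm, attentive person will route around the robot anyway.
    urgency = 0.5 * p.arousal + 0.5 * (1.0 - p.attention)
    return base_speed * (1.0 + 2.0 * urgency)

print(avoidance_speed(PersonEstimate(True, arousal=0.8, attention=0.2)))  # hurried
print(avoidance_speed(PersonEstimate(True, arousal=0.1, attention=0.9)))  # leisurely
```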
FUNCTIONAL ROLES OF EMOTIONS

All intelligent creatures that we know of have emotions. Humans, in particular, are the most expressive, emotionally complex, and socially sophisticated of all (Darwin, 1872). To function and survive in a complex and unpredictable environment, animals (including humans) are faced with applying their limited resources (e.g., muscles, limbs, perceptual systems, mental abilities) to realize multiple goals in an intelligent and flexible manner (Gould, 1982). Those [...]

[...] with a search-and-rescue robot to survey a disaster site. In both of these cases, the communication between the robot and the human is very limited (e.g., large delays in transmissions or limited bandwidth). As a result, the robot must be self-sufficient enough to perform a number of tasks in difficult environments where the human supervises the robot at the task level. Much like an animal, the robot must [...]
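The task-level supervision pattern in this fragment can be sketched as a control loop that treats operator commands as sparse, possibly stale advice and otherwise proceeds autonomously. The sketch is speculative; the timeout value and task representation are assumptions, not anything from the chapter.

```python
# Speculative sketch of task-level supervision under high-latency, limited
# communication: the robot never blocks on the operator; a command merely
# redirects it, and stale commands are ignored.

COMMAND_TIMEOUT_S = 30.0  # assumed: operator input older than this is ignored

class SupervisedRobot:
    def __init__(self, tasks):
        self.tasks = list(tasks)        # autonomous task queue
        self.last_command = None        # (timestamp, task) or None

    def receive(self, task, timestamp):
        # Operator input arrives rarely and possibly long after it was sent.
        self.last_command = (timestamp, task)

    def current_task(self, now):
        # Prefer a fresh operator directive; otherwise stay self-sufficient.
        if self.last_command and now - self.last_command[0] < COMMAND_TIMEOUT_S:
            return self.last_command[1]
        return self.tasks[0] if self.tasks else "explore"

robot = SupervisedRobot(["survey sector A", "survey sector B"])
robot.receive("inspect collapsed wall", timestamp=100.0)
print(robot.current_task(now=110.0))   # fresh command: follow the operator
print(robot.current_task(now=500.0))   # stale command: back to autonomy
```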
[...] etc.) where the environment and the audience's interaction with the robot are highly constrained. However, as the complexity of the environment and the interaction scenario increases, the social sophistication of the robot will clearly have to scale accordingly. Once the robot's behavior fails to support the social model a person has for it, the usefulness of the model breaks down. Ideally, the robot's observable [...]

[...] successively closer to the target.

Robot as Cyborg Extension

In the second paradigm, the robot is physically merged with the human to the extent that the person accepts it as an integral part of his or her body. For example, the person would view the removal of his or her robotic leg as an amputation that leaves him or her only partially whole. Consider a robot that has an intimate physical [...]

[...] augment the person's strength or speed.

Robot as Avatar

In the third paradigm, a person projects him- or herself through the robot to remotely communicate with others (the next best thing to being there). The robot provides a sense of physical presence to the person communicating through it and a sense of social presence to those interacting with it. Consider robot-mediated communication with others at [...]

[...] indistinguishable from their biological correlates in humans and other animals. Nonetheless, we argue that they are not "fake" because they serve a pragmatic purpose for the robot that mirrors their natural analogs in living creatures. Furthermore, the insights these emotion-based and affect-based mechanisms provide robot designers should not be glossed over as merely building "happy" or entertaining robots. To do [...]

[...] potentially hostile? Is the robot designed to perform a specific task, or must it satisfy multiple and potentially competing goals? Is the robot expected to function in complete isolation or in cooperation with others? Does the human use the robot to mediate his or her own actions (i.e., as a tool or prosthesis)? Does the human cooperate with the robot as a teammate? Does the robot provide some form [...]

[...] understand the emotions of others like a computer, memorizing and following rules to guide their behavior but lacking an intuitive understanding of others. They are socially handicapped, not able to understand or interpret the social cues of others to respond in a socially appropriate manner (Baron-Cohen, 1995).

Emotion Theory Applied to Robots

This chapter presents a pragmatic view of the role emotion-inspired [...]

[...] impaired people. In the best cases, they know what to do but often lack the social–emotional intelligence to do it in an appropriate manner. As a result, they frustrate us and we quickly dismiss them even though they can be useful. Given that many exciting applications for autonomous robots in the future place them in a long-term relationship with people, robot designers need to address these issues or people [...]

[...] accept robots into their daily lives.

WHY SOCIAL/SOCIABLE ROBOTS?

In order to interact with others (whether it is a device, a robot, or even another person), it is essential to have a good conceptual model of how the other operates (Norman, 2001). With such a model, it is possible to explain and predict what the other is about to do, its reasons for doing it, and how to elicit a desired behavior from it. The [...]

[...] prosthetic for an amputee. Emotions play a critical role in connecting the mind with the body and vice versa. Note that the performance of one's physical body changes depending on whether one is relaxed or exhilarated. Although a robotic leg would not "have emotions" itself, it is important that it adapt its characteristics to match those of the rest of the body in accordance with the emotional state of the [...]

[...] for the user. Hence, the robot would need to take high-level direction from the human and to be responsible for the performance of these physical and expressive abilities. To do so, the robot [...]
