Humanoid Robots, Part 2

25 128 0

Đang tải... (xem toàn văn)

Tài liệu hạn chế xem trước, để xem đầy đủ mời bạn chọn Tải xuống

THÔNG TIN TÀI LIỆU

Thông tin cơ bản

Định dạng
Số trang 25
Dung lượng 2,44 MB

Nội dung

2. Emotion Mimicry in Humanoid Robots Using Computational Theory of Perception

Mohsen Davoudi¹, Mehdi Davoudi² and Nima Seif Naraghi²
¹ Politecnico di Milano, Milan, Italy
² Eastern Mediterranean University, Gazimagusa, Cyprus

1. Introduction

One of the major issues in robotics research is the modalities through which humanoids interact with humans. Humans have developed advanced skills in expressing their emotions through bodily expression. If similar skills can be acquired by robots, they could generate behaviors that are familiar to us and thus increase their chances of being accepted as partners in our lives. It is well known that body movements convey a great deal of information, including the emotional state of the performer. Much of this perception-based behavior is visible in bodily expression: movements of the hands, legs, head, and so on subconsciously reveal a person's inner emotion in specific situations. Reflecting the bounded ability of the human brain to resolve details of motion, perceptions are intrinsically imprecise.

The main goal of this chapter is to introduce a novel approach to modeling an emotion-based motion generator for humanoids, in order to produce emotionally expressive styles of movement and emotion mimicry in different situations using the Computational Theory of Perception (CTP). Emotion mimicry is an area where expressing the proper emotion rests on a combination of quantitative as well as qualitative measurements. From this perspective, a major shortcoming of existing approaches is that, being based on bivalent logic, they provide no tools for dealing with perception-based information. Humans can perform a wide variety of physical motions without any measurement or computation; in performing such movements, they employ perceptions of time, situation, possibility, inner emotions, and other attributes of physical and mental objects. Motion characteristics in humans thus consist of measurement-based information and perception-based emotions, and a large share of human knowledge is perception-based.
CTP has the special capability to compute and reason with perception-based information. This chapter introduces a fuzzy logic-based analyzer that interprets the linguistic emotion terms common among people into what is called the Generalized Constraint Language (GCL). Fuzzy logic-based probability theory has the fundamental ability to operate on perception-based information, which bivalent logic-based probability theory does not possess. A particular GCL [1] is introduced for humanoid robots to categorize the data needed for trajectory tracking of each joint of the robot, which leads to emotion mimicry. An Artificial Neural Network (ANN) generates velocity and acceleration for each joint of the humanoid from the GCL values as well.

A concept that plays a key role in emotion mimicry with CTP is fuzzy logic-based GCL. An idea underlying the approach described in this work is that an emotion may be viewed as a proposition, which the fuzzy analyzer approximates by means of the intensity and type of emotion; a proposition plays the role of a carrier of information. In our design, the fuzzy analyzer receives two types of linguistic proposition detected from a sentence: 1) the type of emotion, such as happiness, anger, kindness, stress, surprise, and sadness; and 2) the intensity of emotion (very, more or less, extremely, ...). For example, for the command "move your hands very angrily", two words are detected: "angry" and "very". In the next step, the ANN generates velocity and acceleration for each joint of the humanoid using the GCL variables. GCL is an interface between the two parts of this analysis: a) the emotion interpreter (the fuzzy analyzer), and b) a data generator for the physical motions of the humanoid joints. The overall structure of the proposed robot motion system is shown in Figure 1.

The computational theory of perceptions enhances the ability of Artificial Intelligence (AI) to deal with real-world problems, such as humanoid interaction, in which decision-relevant information is a mixture of measurements and perceptions. Motions in an intelligent humanoid are a compromise between common-sense perceptions of emotions and measured data for trajectory tracking of the humanoid joints. To deal with perception-based analysis of motion, an array of tools centering on Computing with Words and Perceptions (CWP), the Computational Theory of Perceptions (CTP), and Precisiated Natural Language (PNL) is required [3].

Fig. 1. The overall structure of the intelligent robot motion system
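As a rough sketch of the linguistic front end described above, the fragment below extracts the two propositions (emotion type and intensity hedge) from a command. The vocabulary, the hedge exponents, and the function name parse_command are illustrative assumptions, not values from the chapter.

```python
# Minimal sketch of the linguistic front end; all names and values here
# are assumed for illustration, not taken from the chapter.

EMOTION_WORDS = {
    "happy": "happiness", "happily": "happiness",
    "angry": "anger", "angrily": "anger",
    "kind": "kindness", "kindly": "kindness",
    "stressed": "stress", "sad": "sadness", "surprised": "surprise",
}

# Zadeh-style hedge exponents: "very A" ~ mu_A^2, "extremely A" ~ mu_A^4.
# Multi-word hedges such as "more or less" would need phrase matching.
HEDGES = {"very": 2.0, "extremely": 4.0, "slightly": 0.5}

def parse_command(sentence):
    """Return the (emotion type, hedge exponent) pair detected in a command."""
    tokens = sentence.lower().strip(".!?").split()
    emotion = next((EMOTION_WORDS[t] for t in tokens if t in EMOTION_WORDS), None)
    hedge = next((HEDGES[t] for t in tokens if t in HEDGES), 1.0)
    return emotion, hedge

print(parse_command("move your hands very angrily"))  # ('anger', 2.0)
```

The resulting pair is exactly what the fuzzy analyzer would turn into GCL variables for the downstream motion generator.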
2. Generalized Constraints for Intelligent Robot Motion

CW (Computing with Words) serves two major purposes in intelligent robot motion generation: a) it provides machinery for dealing with emotions when precise information is not available, and b) it provides machinery for dealing with emotions when precise information is available but there is a tolerance for imprecision that can be exploited to achieve tractability, robustness, simplicity, and low solution cost.

Precisiated Natural Language (PNL) is a sublanguage of precisiable propositions in natural language (NL). It is equipped with two dictionaries, 1) NL to GCL and 2) GCL to PFL (Protoform Language), plus a set of rules of deduction (rules of generalized constraint propagation) expressed in PFL. Perceptions in emotion mimicry are f-granular in the sense that a) the boundaries of perceived classes are fuzzy, and b) the values of perceived attributes are granular, a granule being a clump of values drawn together by indistinguishability, similarity, proximity, or functionality of emotions.

p = proposition in a natural language (1)

A proposition is a perception if it contains a) fuzzy quantifiers such as many, most, and few; b) fuzzy qualifiers such as usually, probably, possibly, typically, and generally; c) fuzzy modifiers such as very, more or less, and extremely; or d) fuzzy nouns, adjectives, or adverbs, such as the emotion nouns prevalent among people in a culture. We consider four inner emotions in this chapter: anger, kindness, stress, and happiness. In general, the mathematics of perception-based information has a higher level of generality than the mathematics of measurement-based information.

The constrained variable X may assume a variety of forms. In particular:
- X is an n-ary variable, X = (X_1, ..., X_n)
- X is a proposition, e.g., X = "very angry"
- X is a function
- X is a function of another variable, X = f(Y)
- X is conditioned on another variable, X/Y
- X has a structure, e.g., X = Emotion(Stress)
- X is a group variable: there is a group G[A], with each member of the group, Name_i, i = 1, ..., n, associated with an attribute value A_i (possibly vector-valued); symbolically,

G[A]: Name_1/A_1 + ... + Name_n/A_n (2)

Here G[A] is a relation, r identifies the type of constraint, and X is a generalized constraint [4].

The possibilistic constraint of emotion mimicry (r = blank) is written as in equation (3):

X is R (3)

with R playing the role of the possibility distribution of X. For example,

X is [a, b] (4)

means that [a, b] is the set of possible values of X. Another example:

X is "happy" (5)

In this case the fuzzy set labeled "happy" is the possibility distribution of X. If μ_happy is the membership function of "happy", then the semantics of "X is happy" is defined by

Poss{X = u} = μ_happy(u) (6)

where u is a generic value of X [6].

The probabilistic constraint in emotion mimicry (r = p),

X isp R (7)

has R playing the role of the probability distribution of X. For example,

X isp N(m, σ²) (8)

means that X is a normally distributed random variable with mean m and variance σ². Likewise, if X is a random variable that takes values in a finite set of emotions {e_1, ..., e_n} with respective probabilities p_1, ..., p_n, then X may be expressed symbolically as

X isp (p_1\e_1 + ... + p_n\e_n) (9)

with the semantics

prob(X = e_i) = p_i,  i = 1, ..., n. (10)

For e_i = happy, with f the underlying probability density, the probability of X is given by

prob(X is happy) = ∫ μ_happy(u) f(u) du (11)

Hence the test score of this constraint on f is given by

ts(f) = μ_likely(∫ μ_happy(u) f(u) du) (12)

As a further consideration, the bimodal constraint (r = bm) is symbolized as in equation (13):

X isbm R (13)

where R is a bimodal distribution of the form

R: Σ_i k_i\A_i,  i = 1, ..., n (14)

meaning that

Prob(X is A_i) is k_i (15)

Example:

prob(Emotion(happy) is High) is likely (16)

which precisiates to

∫ μ_high(e) g(e) de is likely (17)

or, in annotated form,

GC(g) = X/∫ μ_high(e) g(e) de is R/likely (18)
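These constraints are straightforward to evaluate numerically once membership functions are fixed. The sketch below assumes ramp-shaped μ_happy and μ_likely on a normalized scale (the chapter gives no concrete shapes) and checks equations (6), (11), and (12) side by side:

```python
import numpy as np

# Assumed ramp membership functions on a normalized scale; the chapter
# does not specify concrete shapes, so these are illustrative only.
def mu_happy(u):
    return np.clip((u - 0.4) / 0.3, 0.0, 1.0)   # 0 below 0.4, 1 above 0.7

def mu_likely(p):
    return float(np.clip((p - 0.5) / 0.3, 0.0, 1.0))

u = np.linspace(0.0, 1.0, 1001)

# Possibilistic reading, eq. (6): Poss{X = u} = mu_happy(u).
print("Poss{X = 0.6} =", float(mu_happy(0.6)))

# Probabilistic reading, eq. (11): prob(X is happy) = integral of mu_happy * f.
f = np.exp(-((u - 0.65) ** 2) / (2 * 0.1 ** 2))  # assumed density of X
f /= np.trapz(f, u)                              # normalize to integrate to 1
prob_happy = float(np.trapz(mu_happy(u) * f, u))
print("prob(X is happy) =", round(prob_happy, 3))

# Test score, eq. (12): ts(f) = mu_likely(prob(X is happy)).
print("ts(f) =", mu_likely(prob_happy))
```

The same pattern extends directly to the bimodal case: each summand k_i\A_i contributes one such integral, constrained by the fuzzy probability k_i.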
The test scores of the constraint in equation (18) on f and g are given by

ts(f) = μ_likely(∫ μ_high(u) f(u) du) (19)

ts(g) = μ_likely(∫ μ_high(u) g(u) du) (20)

Basically, we use two forms of X: 1) the proposition form, expressed as p, and 2) the function form (measurement-based information), expressed as f.

Fig. 2. GCL form of a proposition expressed as p

Fig. 3. Fuzzy-graph constraint. f* is a fuzzy graph giving an approximate representation of the function form (measurement-based information) expressed as f

Considering Figure 3, we can express the probability of happiness as given in equation (21):

P(happiness) = P1\H1 + P2\H2 + P3\H3 (21)

3. Fuzzy Analyzer

We suppose four emotions that are common among people:

U = {e_1, e_2, e_3, e_4}
e_1: kindness (k)
e_2: stress (s)
e_3: anger (a)
e_4: happiness (h)

The decision pairs an offensiveness value with an intensity value:

Decision = { Offensiveness = f(e_i), Intensity = g(e_i) },  i = 1, ..., 4 (22)

The membership function of offensiveness is

μ_o(e_i) = sup_U ( μ_o(u) ∫ μ_ei(u) f(u) du ) (23)

and the membership function of intensity is

μ_i(e_i) = sup_U ( μ_i(u) ∫ μ_ei(u) g(u) du ) (24)

An example: John is very happy today. He wants to play the guitar. The combined propositions are (happy + act of playing guitar). Deduction with perceptions involves the use of protoformal rules of generalized constraint propagation. Playing the guitar is an action performed with the body, especially the hands; given his inner emotion, his movements are quick, and we can guess that inner emotion (happiness) from the motions of his hands. As shown in Tables 1 and 2, a phrase is converted to a protoform.

proposition in NL   | precisiation p* (GC-form)
John is very happy  | ΣCount(intensity.happiness / happiness) is much

Table 1. Proposition interpretation

precisiation p* (GC-form)                        | protoform PF(p*)
ΣCount(intensity.happiness / happiness) is much  | Q A's are B's

Table 2. Precisiation interpretation

After parsing, p is converted to a semantic parse, which may take one of these forms: 1) logical form, 2) semantic network, 3) conceptual graph, or 4) canonical form. A semantic parse can be abstracted to more than one protoform. To compute the GCL form of the emotion sets, we use the fuzzy analyzer as the inference motor, applying equations (23) and (24).

[...] ... design humanoid robots. So defining emotion-based rules for fuzzy analysis, and creating a deduction database of emotion definitions, is a basis for the design (see Figure 6).

Fig. 6. Database interaction

Information about the trajectory of each joint in the humanoid, the time at which motions occur, and the supposed age of the humanoid ...

... International Journal of Man-Machine Studies, 1-13.
V. Novak, I. Perfilieva, J. Mockor (1999). Mathematical Principles of Fuzzy Logic, Kluwer, Boston/Dordrecht.
H.T. Nguyen (1993). On Modeling of Linguistic Information Using Random Sets, Fuzzy Sets for Intelligent Systems, Morgan Kaufmann Publishers, San Mateo, CA, pp. 242-246.
T.J. Ross (2004). Fuzzy Logic with Engineering Applications, second ...

... 10, is so computed:

α R1 = θ R2 (1)

Δy = R1(1 − cos α) = R1(1 − cos(θ R2/R1)) (2)

If R1 is fixed and we vary R2, the maximum can be found quite straightforwardly given θs:

R2 = (π/θs) R1 (3)

θs is the angle of the bent knee at the instant the foot is closest to the ground. According to this simple analysis, the radius R2 should be longer than R1; ...
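A quick numeric check of the fragment's equations (1)-(3); R1 and θs are assumed values, since the accompanying figures are not part of this excerpt:

```python
import math

R1 = 0.30                   # m, assumed rolling radius of the stance arc
theta_s = math.radians(30)  # assumed knee-bend angle theta_s

R2 = (math.pi / theta_s) * R1        # eq. (3)
alpha = theta_s * R2 / R1            # from eq. (1): alpha * R1 = theta * R2
dy = R1 * (1 - math.cos(alpha))      # eq. (2)

print(f"R2 = {R2:.2f} m, alpha = {math.degrees(alpha):.0f} deg, dy = {dy:.2f} m")
# -> alpha = 180 deg, so dy = 2*R1: the R2 of eq. (3) indeed sits at the
#    maximum of the clearance term (1 - cos alpha).
```

The check confirms the algebra: substituting R2 = (π/θs) R1 into eq. (1) gives a rolling angle of exactly π, where the clearance Δy of eq. (2) peaks.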
... inspired many other works, such as the stability analysis (Garcia et al., 1998) and the physical implementation of several prototypes (Wisse et al., 2001) (Kuo, 1999) (Collins et al., 2001). In this paper we present LARP (Light Adaptive-Reactive biPed), our humanoid legged system, with the aim of explaining how the mechanical design makes the robot able to adapt to the real operating environment ...

... energy consumption, a more stable walk. This behavior was also underlined by simulations on a PDW, the robot Mike, of the Delft University of Technology (Wisse et al., 2002).

Fig. 4. During walking, the knee of the stance leg has to counteract the inertial loads due to the swinging motion

3.2 The design of the knee joint

Regarding the knee structure ...

... legged robot in order to improve efficiency and stability during walking. A pioneering contribution was made at the laboratories of Waseda University, Tokyo (Takanishi et al., 2004). Several other modern robots are designed to walk and behave like humans (Hashimoto et al., 2002), but until now the efficiency of the human gait is still far from being reached. In this sense, the work of McGeer (McGeer, ...

... running, the arc of the foot stores and returns 17% of the energy the body loses and regains at each footfall, while up to 35% of this energy is stored and returned by the Achilles tendon. Foot mobility of course has a big influence on the whole kinematics and dynamics of the motion, especially at the ankle. In particular, during the stance phase, the contact point moves from the ...

... (Saunders et al., 1953) (Alexander, 1992). For this reason it is important to have a foot that lets the position of the ankle, and thus of the other joints, adapt without losing grip. This aspect is particularly relevant at toe-off, when only a small region of the foot is in contact. Here too, the mobility and elasticity of the foot play a very important role (Doneland et al., 2002) (Kuo, 1998). Fig. 11 shows a simple ...
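Taking the fragment's 17% and 35% figures at face value, the passive energy return per footfall is a one-line calculation; the per-step loss E_step below is an assumed placeholder, not a value from the chapter:

```python
E_step = 100.0       # J lost and regained per footfall, assumed for illustration
arc = 0.17 * E_step  # returned by the arc of the foot
tendon = 0.35 * E_step  # returned by the Achilles tendon (upper bound)
print(f"arc: {arc:.0f} J, tendon: {tendon:.0f} J, "
      f"combined: {arc + tendon:.0f} J of {E_step:.0f} J per footfall")
```

Under these figures the two passive structures alone hand back roughly half of the energy cycled at each step, which is why the foot and tendon elasticity matter so much for gait efficiency.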
