Visual attention and perception in scene understanding for social robotics


VISUAL ATTENTION AND PERCEPTION IN SCENE UNDERSTANDING FOR SOCIAL ROBOTICS

HE HONGSHENG (M.Eng., Northeastern University)

A THESIS SUBMITTED FOR THE DEGREE OF DOCTOR OF PHILOSOPHY
DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING
NATIONAL UNIVERSITY OF SINGAPORE
2012

Acknowledgments

I would like to express my deepest gratitude to my supervisor, Professor Shuzhi Sam Ge, for his inspiration, guidance, and training, and especially for teaching, by precept and example, invaluable theories and philosophies in life and research. It was my great honor to join the research group under the supervision of Professor Ge, without whose enlightenment and motivation I would not have considered a research career in robotics. Professor Ge is the one mentor who made a considerable difference in my life by broadening my vision and insight, building up my confidence in work and scientific research, and training my leadership and supervision skills. There is nothing I could appreciate more than these priceless treasures he has granted me for my entire academic career and my whole life. I could never fully convey my gratitude to Professor Ge.

My deep appreciation also goes to my co-supervisor, Professor Chang Chieh Hang, for his constant support and assistance during my PhD study. His passion and experience greatly influenced my research work. I am indebted to the other committee members of my PhD program, Professor Cheng Xiang and Dr. John-John Cabibihan, for the assistance and advice they provided through all stages of my research. I am sincerely grateful to all the supervisors and committee advisers who have encouraged and supported me during my PhD journey.

In my research, I truly enjoyed and felt extremely blessed to know and work with brilliant people who are generous with their time and help. I am thankful to my senior, Dr. Yaozhang Pan, for her lead and discussions at the start of my research. Many thanks go to Mr. Zhengchen Zhang, Mr. Chengyao Shen, and Mr. Kun Yang, who worked closely with me and contributed many valuable solutions and experiments to the research work. I appreciate the generous help, encouragement, and friendship of Mr. Yanan Li, Mr. Qun Zhang, Ms. Xinyang Li, Dr. Wei He, Dr. Shuang Zhang, Mr. Hao Zhu, Mr. He Wei Lim, Dr. Chenguang Yang, Dr. Voon Ee How, Dr. Beibei Ren, Dr. Pey Yuen Tao, Mr. Ran Huang, Ms. Jie Zhang, Dr. Zhen Zhao, and many other fellow students and colleagues since the day I joined the research team. My heartfelt appreciation also goes to Professor Gang Wang, Professor Cai Meng, Professor Mou Chen, Professor Rongxin Cui, and Professor Jiaqiang Yang for the cooperation, brainstorming, philosophical debates, exchanges of knowledge, and sharing of rich experience. All these excellent fellows made my PhD marathon more fun, interesting, and fruitful.

I am aware that this research would not have been possible without the financial support of the National University of Singapore (NUS) and the Interactive Digital Media R&D Program of the Singapore National Research Foundation, and I would like to express my sincere gratitude to these organizations. I appreciate the wonderful opportunity, provided by Professor Ge, to participate in the project planning and management, manpower recruitment, intellectual property protection, and system integration for the translational research project "Social Robots: Breathing Life into Machine".

Last but not least, I express my deepest appreciation to my family for their consistent love, trust, and support throughout my life, without which I would not be who I am today.

Contents

1 Introduction
  1.1 Background and Objectives
  1.2 Related Works
    1.2.1 Visual Saliency and Attention
    1.2.2 Attention-driven Robotic Head
    1.2.3 Information Representation and Perception
  1.3 Motivation and Significance
  1.4 Structure of the Thesis

Part I: Visual Saliency and Attention

2 Visual Attention Prediction
  2.1 Introduction
  2.2 Saliency Determination
    2.2.1 Sensitivities to Colors
    2.2.2 Measure of Distributional Information
    2.2.3 Window Search
  2.3 Visual Attention Prediction
  2.4 Experimental Evaluation
    2.4.1 Visual Attention Prediction
    2.4.2 Quantitative Evaluation
    2.4.3 Common Attention
    2.4.4 Selective Parameters
    2.4.5 Influence of Lighting and Viewpoint Changes
    2.4.6 Discussion
  2.5 Summary

3 Bottom-up Saliency Determination
  3.1 Introduction
  3.2 Overview of Attention Determination
  3.3 Saliency Filter
  3.4 Saliency Refinement
    3.4.1 Saliency Energy
    3.4.2 Saliency Determination
  3.5 Experimental Evaluation
    3.5.1 General Performance
    3.5.2 Quantitative Evaluation
    3.5.3 Influence of Selective Parameters
    3.5.4 Performance to Variance
    3.5.5 Discussion
  3.6 Summary

4 Attention-driven Robotic Head
  4.1 Introduction
  4.2 Visual Attention Prediction
    4.2.1 Information Saliency
    4.2.2 Motion Saliency
    4.2.3 Saliency Prior Knowledge
    4.2.4 Saliency Fusion
  4.3 Modeling of the Robotic Head
    4.3.1 Mechanical Design and Modeling
  4.4 Head-eye Coordination
    4.4.1 Head-eye Trajectory
    4.4.2 Head-eye Coordination with Saccadic Eye Movements
  4.5 Experimental Evaluation
    4.5.1 Visual Attention Prediction
    4.5.2 Head-eye Coordination
  4.6 Summary

Part II: Information Representation and Perception

5 Geometrically Local Embedding
  5.1 Introduction
  5.2 Geometrically Linear Embedding
    5.2.1 Overview of GLE
    5.2.2 Neighbor Selection Using Geometry Distances
    5.2.3 Linear Embedding
    5.2.4 Outlier Data Filtering
  5.3 GLE Analysis
    5.3.1 Geometry Distance
    5.3.2 Classification
  5.4 Empirical Evaluation
    5.4.1 Experiments on Synthetic Data
      5.4.1.1 Linear Embedding
      5.4.1.2 Robustness to the Number of Neighbors
      5.4.1.3 Robustness to Outliers
    5.4.2 Experiments on Handwritten Digits
      5.4.2.1 Linear Embedding
      5.4.2.2 Clustering and Classification of Different Digits
    5.4.3 Computation Complexity
    5.4.4 Discussion
  5.5 Summary

6 Locally Geometrical Projection
  6.1 Introduction
  6.2 Locally Geometrical Projection
    6.2.1 Neighbor Reconstruction and Embedding
    6.2.2 Geometrical Linear Projection
  6.3 Experimental Evaluation
    6.3.1 Synthetic Data Visualization
    6.3.2 Projection of High Dimensional Data
    6.3.3 Classification
    6.3.4 Discussion
  6.4 Summary

7 Conclusion
  7.1 Conclusion and Contribution
  7.2 Limitations and Future Work

Abstract

Social robots are envisioned to weave a hybrid society with humans in the near future. Despite the development of computer vision and artificial intelligence techniques, social robots are still not acceptable in the sense of perceiving, understanding, and behaving in the complex world. The objective of this research was to endow social robots with the capabilities of visual attention, perception, and response in a biological manner for natural human-robot interaction. This thesis proposes methods to predict visual attention, to discover intrinsic visual information, and to guide the robotic head. Visual saliency is quantified by measuring color attraction, information scale, and object context. Together with the visual saliency, visual attention was predicted by fusing the motion saliency and common attention from prior knowledge. To discover and represent intrinsic information, the nonlinear dimension reduction algorithm named Geometrically Local Embedding (GLE) and its linearization, Locally Geometrical Projection (LGP), were proposed for information presentation and perception of social robots. Towards the predicted attention, the robotic head was designed to behave naturally by following the biological laws of head and eye coordination during saccade and gaze. The performance of the proposed techniques was evaluated both in simulation and in actual applications. Through comparison with eye fixation data,
the experimental results proved the effectiveness of the proposed techniques in discovering salient regions and predicting visual attention in different sorts of natural scenes. The experiments on both pure and noisy data prove the efficiency of GLE in dimension reduction, feature extraction, and data visualization, as well as in clustering and classification. As an optimization of GLE, LGP presents a good compromise between accuracy and computation speed. Targeting both virtual and actual focuses, the proposed robotic head can follow the desired trajectories precisely and rapidly to respond to visual stimuli in a human-like pattern. In conclusion, the proposed approaches can improve the social sense of social robots and the user experience by equipping them with the abilities to determine their attention autonomously, and to perceive and behave naturally in human-robot interaction.

Publications and Awards

The contents of this thesis are based on the following papers, which have been published, accepted, or submitted to peer-reviewed journals and conferences.

Journal papers
[1] Shuzhi Sam Ge, Hongsheng He, Zhengchen Zhang, Bottom-up saliency detection for attention determination, Machine Vision and Applications, 2011.
[2] Hongsheng He, Shuzhi Sam Ge, Zhengchen Zhang, Visual attention prediction using saliency determination of scene understanding for social robots, Special Issue on Towards an Effective Design of Social Robots, International Journal of Social Robotics, 2011.
[3] Shuzhi Sam Ge, Hongsheng He, Chengyao Shen, Geometrically local embedding in manifolds for dimension reduction, Pattern Recognition, 2012.
[4] Hongsheng He, Shuzhi Sam Ge, Zhengchen Zhang, An attention-driven robotic head with biological saccade behaviors for social robots, Autonomous Robots, under review.
[5] Kun Yang, Shuzhi Sam Ge, Hongsheng He, Robust line detection using two-orthogonal direction image scanning, Computer Vision and Image Understanding, 2011.
[6] Zhengchen Zhang, Shuzhi Sam Ge, Hongsheng He, Mutual-reinforcement document summarization using embedded graph based sentence clustering, Information Processing and Management, 2012.

Conference papers
[1] Hongsheng He, Shuzhi Sam Ge and Guofeng Tong, Task-based flocking algorithm for mobile robot cooperation, Lecture Notes in Computer Science: Progress in Robotics, 2009.
[2] Yaozhang Pan, Shuzhi Sam Ge and Hongsheng He, Face recognition using ALLE and SIFT for human robot interaction, Lecture Notes in Computer Science: Advances in Robotics, 2009.
[3] Yaozhang Pan, Shuzhi Sam Ge, Hongsheng He,
Lei Chen, Real-time face detection for human robot interaction, The 18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2009.
[4] Hongsheng He, Zhengchen Zhang and Shuzhi Sam Ge, Attention determination for social robots using salient region detection, International Conference on Social Robotics, 2010.
[5] Shuzhi Sam Ge, Zhengchen Zhang and Hongsheng He, Weighted graph model based sentence clustering and ranking for document summarization, The 4th International Conference on Interaction Sciences (ICIS), 2011.
[6] Shuzhi Sam Ge, Chengyao Shen and Hongsheng He, Visual cortex inspired junction detection, International Conference on Neural Information Processing, 2011.
[7] Cai Meng, Hongsheng He and Shuzhi Sam Ge, Composite X marker detection and recognition, IEEE Conference on Robotics, Automation and Mechatronics (RAM), 2011.
[8] Shuzhi Sam Ge, John-John Cabibihan, Zhengchen Zhang, Yanan Li, Cai Meng, Hongsheng He, Mohammadreza Safi Zadeh, Yinbei Li and Jiaqiang Yang, Design and development of Nancy, a social robot, The 8th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), 2011.
[9] Shuzhi Sam Ge, Yanan Li and Hongsheng He, Neural-network-based human intention estimation for physical human-robot interaction, The 8th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), 2011.

International awards
Hongsheng He and Shuzhi Sam Ge, Silver medal, Microsoft Wheeled-robot soccer simulation, RoboCup China Open 2008.

... In the context of scene understanding for social robots, the thesis investigates and researches the following scientific problems: visual saliency and attention, and information presentation and ...

... ◦ Behave and respond in a natural and biological manner; and
◦ Understand and reveal the underlying geometry information of a scene.
In addition to these possible research results in scene understanding, ...

... improving the capabilities of social robots in social perception, scene understanding, and human-robot interaction in order to
◦ Discover and perceive important and salient contents in a scene;
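As an illustration of the saliency-fusion idea summarized in the abstract (Chapter 4 fuses information saliency, motion saliency, and prior knowledge), the minimal sketch below combines per-cue saliency maps into a single attention map and selects the most salient location as the predicted fixation. The cue names, per-cue normalization, and weights are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

def fuse_saliency(maps, weights):
    """Weighted combination of per-cue saliency maps into one attention map."""
    fused = np.zeros_like(next(iter(maps.values())), dtype=float)
    for name, m in maps.items():
        m = m.astype(float)
        rng = m.max() - m.min()
        if rng > 0:  # normalize each cue to [0, 1] before weighting
            m = (m - m.min()) / rng
        fused += weights.get(name, 0.0) * m
    return fused

def attention_point(fused):
    """Predicted fixation: the pixel with the maximum fused saliency."""
    return tuple(int(i) for i in np.unravel_index(np.argmax(fused), fused.shape))

# Toy example: the more heavily weighted motion cue dominates the fused map.
color = np.zeros((4, 4)); color[0, 0] = 1.0
motion = np.zeros((4, 4)); motion[2, 3] = 1.0
fused = fuse_saliency({"color": color, "motion": motion},
                      {"color": 0.3, "motion": 0.7})
print(attention_point(fused))  # prints (2, 3)
```

In a fuller system the fused map would typically drive the head-eye controller toward the attention point rather than just report it.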

Posted: 09/09/2015, 17:58
