
Visual Perception and Robotic Manipulation - Taylor & Kleeman



DOCUMENT INFORMATION

Basic information

Format
Pages: 230
Size: 6.39 MB

Contents

[...] equivalent homogeneous (n + 1)-vector in the projective space P^n. The projective space P^n is defined as the space of linear subspaces of R^(n+1). Consider a point in R^3 with inhomogeneous coordinates X, Y, and Z; by introducing a [...]

(G. Taylor and L. Kleeman: Visual Perception and Robotic Manipulation, STAR 26, pp. 11–30, 2006. © Springer-Verlag Berlin Heidelberg 2006. Chapter 2: Foundations of Visual Perception and Control.)

[...] of a 3D robotic vision system, namely the imaging sensor and active stereo camera head. Finally, Sections 2.3 and 2.4 paint a broad outline of issues and methods in visual perception and control as they relate to robotic manipulation. In particular, we discuss how the choice of representation in perceptual models affects the capabilities of a visual system, and provide a survey of recent work in visual [...]

[...] found a place in the home.

Fig. 1.1. Service robots of the past, present and future. Elektro (left) was built by Westinghouse in 1937 and exhibited as the ultimate appliance, although it was little more than a complex automaton. Present-day service robots such [...]

[...] occlusions and the resulting loss of visual information. This can impact the ability to recognize objects, update the world model and control limbs using hand-eye coordination. Perception and control algorithms with high tolerance to occlusions are therefore desirable.

Robustness to operational wear and calibration errors. In the classical solution to robotic reaching and grasping, visual [...]
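The excerpt above introduces homogeneous coordinates: a point in R^3 is represented by an equivalent 4-vector in P^3, unique only up to a non-zero scale factor. A minimal sketch of this construction (illustrative helper names, not code from the book) could look like:

```python
import numpy as np

def to_homogeneous(point):
    """Append w = 1 to inhomogeneous coordinates: (X, Y, Z) -> (X, Y, Z, 1)."""
    return np.append(np.asarray(point, dtype=float), 1.0)

def from_homogeneous(hpoint):
    """Recover inhomogeneous coordinates by dividing through by the last entry."""
    hpoint = np.asarray(hpoint, dtype=float)
    return hpoint[:-1] / hpoint[-1]

p = np.array([2.0, 4.0, 6.0])   # point in R^3
hp = to_homogeneous(p)          # equivalent 4-vector in P^3
# Any non-zero scaling of hp represents the same projective point:
assert np.allclose(from_homogeneous(3.0 * hp), p)
```

The scale-invariance checked by the final assertion is exactly what makes P^n "the space of linear subspaces of R^(n+1)": each 1D subspace (line through the origin) is a single projective point.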
and serves as a road-map to the concepts and techniques developed in later chapters. In the proposed framework, perception of the world begins with the stereoscopic light stripe scanner and the object classification and modelling blocks working together to generate data-driven, textured, polygonal models of segmented objects. Object modelling and classification provides the link between low-level sensing and [...]

[...] Finally, the desired manipulation is performed by controlling the end-effector using visual feedback in a hybrid position-based visual servoing framework. During servoing, active vision controls the gaze direction to maximize the available visual information. The following chapters fill in the details of the framework sketched above, starting from low-level sensing through to high-level perception and control, the [...]

[...] autonomous behaviour and encompasses both low-level sensing and high-level abstraction of useful information about the world. For robotic vacuum cleaners and similar devices, perception may simply drive reactive behaviours without intervening intelligence. Complex tasks may require a more sophisticated approach involving the maintenance of a consistent world model and associated high-level interpretation [...]

[...] called hybrid position-based visual servoing. Kinematic measurements allow servoing to continue when the end-effector is obscured, while visual measurements minimize the uncertainty in kinematic calibration. A Kalman filter framework for estimating the visual scale factor and fusing visual and kinematic measurements is described, along with image processing, feature association and other implementation [...]
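The excerpt mentions a Kalman filter that fuses visual and kinematic measurements of the end-effector pose. The book's actual filter is not reproduced here, but the core fusion step can be sketched in one dimension (all variable names and numbers below are hypothetical, for illustration only): each new measurement pulls the estimate toward itself in proportion to its relative certainty.

```python
def kalman_fuse(x, P, z, R, H=1.0):
    """One scalar Kalman update: fuse prior estimate x (variance P)
    with measurement z (variance R) observed through gain H."""
    S = H * P * H + R            # innovation variance
    K = P * H / S                # Kalman gain
    x_new = x + K * (z - H * x)  # corrected estimate
    P_new = (1.0 - K * H) * P    # reduced uncertainty
    return x_new, P_new

# Hypothetical example: kinematic prediction of one end-effector coordinate,
# corrected by a less noisy visual measurement.
x, P = 0.50, 0.04            # prior from kinematics (metres, variance)
z_vis, R_vis = 0.46, 0.01    # visual measurement (lower noise)
x, P = kalman_fuse(x, P, z_vis, R_vis)
# Posterior lies between the two sources, weighted toward the visual one,
# and its variance is smaller than either input variance alone.
assert 0.46 < x < 0.50 and P < 0.01
```

This weighting is what lets servoing degrade gracefully as described in the excerpt: when the end-effector is occluded, no visual update arrives and the estimate coasts on kinematics; when vision returns, its low-noise updates shrink the accumulated calibration uncertainty.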
cleaners, and even humanoids. These machines will lift the burden of work in our homes and offices by taking the role of cleaner, courier, nurse, and security guard among others. Robots will recognize their owners by appearance and voice, and understand natural modes of communication including gestures and speech. We will give simple commands such as “Set the table” or “Bring the coffee to my desk”, and our [...]

[...] to off-board computing, such as the keyboard-playing Wabot-2 from Waseda University. However, in more recent years, the exponential increase in available computing power and the decrease in the cost of both computers and cameras have stimulated an acceleration in our understanding of robot vision. Compact, untethered robots that use real-time visual sensing to navigate complex environments, locate and grasp [...]

Posted: 12/06/2014, 12:08
