Hindawi Publishing Corporation
EURASIP Journal on Image and Video Processing
Volume 2007, Article ID 87646, 11 pages
doi:10.1155/2007/87646

Research Article

An Omnidirectional Stereo Vision-Based Smart Wheelchair

Yutaka Satoh and Katsuhiko Sakaue

Information Technology Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Central 2, Umezono 1-1-1, Tsukuba, Ibaraki 305-8568, Japan

Received 31 December 2006; Accepted 23 May 2007

Recommended by Dimitrios Tzovaras

To support safe self-movement of the disabled and the aged, we developed an electric wheelchair that detects both potential hazards in the moving environment and the postures and gestures of its user. This is achieved by equipping the wheelchair with the stereo omnidirectional system (SOS), which acquires omnidirectional color image sequences and range data simultaneously in real time. The first half of this paper introduces the SOS and the basic technology behind it. To use the multicamera SOS on an electric wheelchair, we developed a high-speed, high-quality image synthesizing method; a method of recovering SOS attitude changes using attitude sensors is also introduced, which allows the SOS to be used regardless of its mounting attitude. The second half of this paper introduces the prototype electric wheelchair that was manufactured and the experiments conducted with it. The usability of the electric wheelchair is also discussed.

Copyright © 2007 Y. Satoh and K. Sakaue. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. INTRODUCTION

The purpose of this work is to enhance the abilities of the disabled and the aged and to support their self-movement with smart electric wheelchairs. An electric wheelchair is a device used to
compensate for the reduced physical abilities of a person who has difficulty walking unassisted. Since the aged and the disabled often also have reduced sensing and judgment abilities, it is important to assist these abilities as well as their physical abilities. Even a person with normal sensory and judgment abilities may be unable to check the safety of the surroundings adequately, because an electric wheelchair can change direction quickly, turn freely, and move in reverse. There is therefore a great need to prevent the serious risks of collision, falls at differences in surface level, and similar hazards. In the automotive field, intelligent systems that predict the potential hazard of a collision and apply braking automatically have already been developed and put to practical use [1]. These functions are just as important for electric wheelchairs. Compared with the automotive field, the market is small, but the need may be even greater [2, 3]. Unlike an automobile, an electric wheelchair is used in various living spaces, including crowds, which necessitates technology that senses the ambient environment more actively. We therefore developed a smart electric wheelchair equipped with a stereo omnidirectional system (hereinafter, the SOS). The SOS can acquire high-resolution color images and range data in real time in all directions. Using this ability, we realized several user-support functions: (1) detection of potential hazards in the moving environment, (2) recognition of the user's gestures and riding posture, and (3) distribution of omnidirectional color image sequences to remote locations via a wireless network. To prevent collisions with obstacles, previously proposed electric wheelchairs are equipped with infrared sensors, laser range finders, or ultrasonic sensors [4–8]. Because obstacles are detected only from the reflected intensity of infrared rays or ultrasonic
waves, however, these wheelchairs have problems: (1) the detection area is limited, (2) objects of certain postures or materials may be difficult to detect, and (3) detected objects are difficult to distinguish from one another. By using the SOS, which captures detailed color images and range data in real time in all directions, we can realize not only simple collision avoidance but also the identification of the colliding object and the distribution of omnidirectional color image sequences to remote locations. Several past proposals have used hyperboloidal omnidirectional camera systems or fisheye camera systems to realize similar functions [9–11]. The SOS, however, has several advantages: (1) spatial information can be acquired uniformly with high resolution because the system consists of multiple cameras, (2) the system has a spherical field of view with no blind spots in any direction, and (3) range data can be acquired in real time together with color images. Because of these advantages, our proposed system outperforms conventional systems.

2. SYSTEM STRUCTURE

2.1. Outline of prototype

Figure 1 shows the appearance of the prototype.

Figure 1: The appearance of the prototype.

The SOS is installed diagonally in front of the user's head. This position has the following features: (1) the environment around the electric wheelchair can be observed over a wide range, (2) the user can get in and out easily and without great disturbance, and (3) enough clearance is secured in ordinary living spaces because the position is at about the height of a person (150 cm from the floor). Since our living spaces are naturally designed for a walking person to recognize potential hazards, a position close to the eye height of a walking person is rational for recognizing potential hazards in moving environments. Figure 2 shows the block diagram of the system.

Figure 2: Block diagram of the system.

The SOS and the electric wheelchair are controlled in an integrated manner by a small PC (Figure 3: Pentium D 3.2 GHz, Windows XP) mounted behind the seat. Since power is supplied from the onboard battery (shared between the PC and the electric wheelchair), the system requires no external cable and can operate independently for about three hours. The system can also stream omnidirectional image sequences to a remote location over a wireless LAN. If the wireless LAN were replaced with a high-speed mobile phone line or a similar link, the user could be monitored and supported from a remote place. In addition, viewing the distributed omnidirectional color image sequences through an immersive visualization system may produce a telepresence effect [12], as if the viewer were accompanying the user.

2.2. Stereo omnidirectional system (SOS)

The SOS (Figure 4) is an innovative camera system developed by the authors of [13, 14]. Despite its fist-sized, compact structure (11.6 cm in diameter and 615 g in weight), the system acquires high-resolution omnidirectional color images and range data in real time with no blind spots. Table 1 lists the main SOS specifications.

Figure 4: Stereo omnidirectional system (SOS).

The SOS has the basic shape of a regular dodecahedron with a trinocular stereo camera unit on each face (36 cameras in total). To ensure the accuracy of the range data, an appropriate intercamera distance (stereo baseline) must be secured within each stereo camera unit. In this system, a stereo baseline of 50 mm is allocated for measuring objects in a target range of about to meters around the electric wheelchair. The three cameras of each stereo camera unit lie on the same plane with their optical axes parallel to each other. The center camera is placed at right angles to the other two cameras so that the two 50-mm stereo baselines intersect at the center camera. Thus, each stereo camera unit satisfies both the horizontal and vertical epipolar constraints, which reduces the cost of the corresponding-point search. Stereo calibration is performed
for each stereo camera unit, and the influences of lens distortion and of misalignment between cameras are eliminated in software. Calibrations are then performed among the stereo camera units. For these calibrations, a known pattern must be presented to all stereo camera units at the same time, so we manufactured a calibration box of one cubic meter. As Figure 5 shows, the SOS is first placed at the bottom of the box, and the bottom is then covered with the main body (in the figure, the main body is shown next to the bottom); in other words, the SOS is placed inside the box. The box is made of semitransparent acrylic resin so that outside lighting provides enough illuminance inside for shooting.

Figure 5: The calibration box.

A problem is that the camera head becomes excessively large when the stereo camera units are arranged in a dodecahedral shape. To address this, we mounted the three cameras of each stereo camera unit on a T-shaped arm (see Figure 6), so that the base planes of the stereo camera units intersect each other. This reduces the camera head size while keeping the stereo baseline length.

Figure 6: Trinocular stereo camera unit.

Figure 7 shows images captured by a stereo camera unit. Each camera acquires intensity information (8 bits) at VGA size (640 × 480 pixels) for the disparity calculation; in addition, the center camera acquires color information. Figure 8 shows the SOS system configuration. The images acquired by the camera head are output as two 1.2-Gbps optical signals by an electro-optical conversion unit fitted in the main body. The memory unit and control unit are mounted on a single PCI board, so a single PC alone performs overall control and image acquisition (15 fps).

Table 1: The main SOS specifications.
Basic shape: dodecahedron
Image sensor: 1/4-inch CMOS image sensor
Resolution of individual cameras: 640 (H) × 480 (V) pixels
Focal length of individual cameras: 1.9 mm
FOV of individual cameras: 101 deg (H) × 76 deg (V)
Stereo baseline length: 50 mm
Frame rate: 15 fps
Diameter of camera head: 11.6 cm
Weight: 600 g
Power consumption: 9 W (15 V, 0.6 A)

3. HIGH-SPEED AND HIGH-QUALITY PANORAMIC IMAGE GENERATION

In this study, the moving speed of the electric wheelchair is assumed to be about to km/h. For potential-hazard detection without delay, a high frame rate is necessary; meanwhile, the images distributed for remote monitoring require high quality to give a feeling of presence. We discuss a method that satisfies these conflicting requirements. The SOS provides 12 VGA color images at a frame rate of 15 fps. Since the optical centers of the SOS's constituent cameras do not coincide, the range data would strictly be needed to generate center-projected images. However, because the camera head is small (about 5.8 cm in radius) and the parallax between cameras is therefore small, images intended mainly for viewing can be synthesized easily and at high speed by assuming that the observed object lies at a fixed distance r.

First, it is assumed that the SOS has been calibrated and that a global coordinate system with the system center (the camera head center) as the origin has been defined (in other words, the internal and external parameters of each camera are known). It is also assumed that the lens distortion of each camera has been rectified and that rectified input images are handled. Since the SOS covers a spherical field of view with 12 cameras, image synthesis can be treated as a problem of splitting a sphere. Let S be the sphere of radius r in a coordinate system with its origin at the center of the SOS (see Figure 9). Each vector p on S defines a camera set

C_p ⊆ {1, 2, ..., 12},  (1)

each member of which has p in its field of view (FOV). Once a (nonempty) C_p is given, the optimal camera ĉ to observe p is decided by the condition

p · n_ĉ = max{ p · n_c | c ∈ C_p }.  (2)

Note that this condition chooses the camera that observes p closest to the center of its FOV. Now consider the actual panoramic image synthesis. The homogeneous coordinates (x, y, z, 1)^T of the point p on sphere S corresponding to an arbitrary point q = (i, j) on the finally synthesized panoramic image (Mercator projection) Q of M × N pixels can be expressed as

(x, y, z, 1)^T = (−cos(b) cos(a) · r,  cos(b) sin(a) · r,  −sin(b) · r,  1)^T,  (3)

where a and b correspond to the longitude and latitude as follows:

a = 2jπ / M,  b = (2i − N)π / (2N).  (4)

Let us say we want to know the location w of p in the camera coordinate system of a camera c, assuming p is included in the FOV of camera c. The location w = (u, v, 0, 1)^T in the camera image W_c can be calculated as

w = A_c [R_c t_c] p,  (5)

where A_c is the matrix of internal parameters and [R_c t_c] is the matrix of external parameters, both for camera c. The size of the image of camera c determines whether w actually lies in W_c. If not, we conclude that w ∉ W_c, which means c ∉ C_p; in other words, camera c is not an appropriate camera for observing the point p. Starting from {1, ..., 12}, the iterated removal of inappropriate cameras gives us C_p. In the ideal design of the SOS, the camera to which p belongs could simply be selected as

p · n_ĉ = max{ p · n_c | 1 ≤ c ≤ 12 }.  (6)

However, the image centers of the cameras used in the SOS may deviate from their physical design centers (this is not a logical problem, because the internal parameters are known).
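The camera-selection and projection steps above (eqs. (2)–(5), as reconstructed here) can be sketched as a lookup-table builder. This is an illustrative sketch, not the authors' code: the function name, the use of a standard pinhole projection for eq. (5), and the input conventions (per-camera 3 × 3 intrinsic matrices, 3 × 4 extrinsic matrices, and unit optical-axis normals n_c) are all assumptions.

```python
import numpy as np

def build_lookup_table(M, N, r, normals, intrinsics, extrinsics, image_size):
    """Map each Mercator pixel q = (i, j) to (camera id, u, v).

    For each pixel, compute the point p on the sphere of radius r
    (eqs. (3)-(4)), then try cameras in decreasing order of alignment
    p . n_c (eq. (2)) and keep the first one whose projection of p
    (eq. (5)) lands inside its image, i.e. the first camera in C_p.
    """
    W, H = image_size
    table = np.full((N, M, 3), -1, dtype=np.int64)  # (camera, u, v) per pixel
    for i in range(N):
        b = (2 * i - N) * np.pi / (2 * N)               # latitude, eq. (4)
        for j in range(M):
            a = 2 * j * np.pi / M                       # longitude, eq. (4)
            p = np.array([-np.cos(b) * np.cos(a) * r,   # point on S, eq. (3)
                           np.cos(b) * np.sin(a) * r,
                          -np.sin(b) * r,
                           1.0])
            scores = np.asarray(normals) @ p[:3]        # p . n_c for every c
            for c in np.argsort(-scores):               # best aligned first, eq. (2)
                w = intrinsics[c] @ (extrinsics[c] @ p)  # pinhole form of eq. (5)
                if w[2] <= 0:                           # p is behind camera c
                    continue
                u, v = w[0] / w[2], w[1] / w[2]
                if 0 <= u < W and 0 <= v < H:           # w in W_c, so c in C_p
                    table[i, j] = (c, round(u), round(v))
                    break                               # cameras not in C_p were skipped
    return table
```

Because the table depends only on the SOS geometry and the camera parameters, it can be computed once and reused for every frame, which is what makes the paper's millisecond-scale panoramic rendering plausible.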
In such a case, the selected camera may be unable to cover the point p. The method described above lets us choose another camera that does cover p, since the SOS has overlapping regions near the boundaries between constituent cameras. As described above, we can thus construct the mapping F : Q → W_1 ∪ ··· ∪ W_12, so that each q in Q corresponds to the point w in the optimal camera. Figure 10 shows a panoramic image illustrating the resulting sphere division; each numbered region belongs to the camera of that number. The mapping F depends only on the geometric design of the SOS and the parameters of the constituent cameras. Therefore, F can be implemented as a lookup table, which enables high-speed rendering of the panoramic image. With these functions, a panoramic image of 512 × 256 pixels can be generated within 10 milliseconds on a PC with a 3.6-GHz CPU. Figure 11 shows a panoramic image (r = 1.5 m) generated from actual images, and Figure 12 shows images of the spherical mapping using OpenGL.

Figure 7: An example of images captured by a stereo camera unit: (a) top camera, (b) disparity image, (c) center camera, (d) right camera.

Figure 8: The SOS system configuration.

4. IMAGE GENERATION INDEPENDENT OF THE CAMERA ATTITUDE

Since the SOS has a spherical field of view, even an arbitrary rotation of the camera system can be completely recovered.

4.1. Estimation of camera attitude

The electric wheelchair proposed in this paper has the SOS located diagonally in front of the user's head, as shown in Figure 1. Since the height and angle of the SOS arm can be adjusted to the user, the attitude
of the SOS changes arbitrarily. Attitude sensors were therefore attached to detect the mounting attitude (see Figure 13). While the electric wheelchair is stationary, the direction of gravity (the vertical direction) is detected by the acceleration sensor to determine the SOS attitude. Figure 14 shows an example of rotation recovery using attitude parameters actually acquired by the acceleration sensor: before recovery, the z-axis is the direction of the camera support pole; after recovery, the z-axis is the vertical direction detected by the sensor. Clearly, the effect of the camera rotation has been cancelled after recovery.

Next, consider the arbitrary rotation (inclination) of the electric wheelchair while moving. During movement, the attitude of the electric wheelchair changes constantly because of slopes and ground irregularities. To keep generating stable images under such conditions, the attitude must be estimated in real time and the rotation recovered. We have already proposed a technique for estimating the attitude with high accuracy and at high speed by simultaneously using attitude sensors and omnidirectional image analysis [15]; this system uses the same basic technique (for details of the attitude estimation, see [15]). This section describes a method of generating rotation-recovered images in real time, which is important for real-time system operation.

Figure 9: Relationship between individual cameras and the panoramic coordinate system.

Figure 10: A panoramic image showing the sphere division.

Figure 11: An example of a panoramic image generated from actual images.

Figure 12: Spherical mapping images.

Figure 13: Attitude sensors (gyro-sensor and acceleration sensor).

Figure 14: An example of a rotation-corrected image: an original image (a) and a corrected image (b).

If the attitude does not change, preparing a lookup table based on fixed attitude parameters enables the high-speed integration of individual camera images into omnidirectional panoramic images, as described in Section 3. If the attitude changes, however, this kind of conversion table cannot be prepared because the attitude parameters are unknown, which would require extremely costly geometric conversion processing for each frame. To solve this problem, we use the method explained next.

4.2. High-speed recovery of arbitrary rotation

The method described here enables the recovery of arbitrary rotation using only table lookups and C/C++ memory-pointer manipulations. Figure 15 shows an outline of the procedure.

Figure 15: High-speed generation of a rotation-corrected image (R = R_Z(γ) R_Y(β) R_X(α); cylindrical images developed around the x-, y-, and z-axes).

We represent the rotation of the SOS by the rotation angles α, β, and γ around the axes x, y, and z, respectively. If a spherical image with a rotation change is expressed in a three-axis angular coordinate system (A, B, Γ), a spherical image with no rotation change can be obtained as (A − α, B − β, Γ − γ). From this relationship, the amounts of rotation around the x-, y-, and z-axes correspond to horizontal image shifts in the panoramic images developed on the respective axes (such a shift is possible by a memory-pointer manipulation alone).
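The shift property just described can be illustrated in a few lines. This is a hedged sketch rather than the authors' implementation: it uses a NumPy column roll in place of the C/C++ memory-pointer manipulation, and the function name and the assumption that the cylindrical panorama spans a full 2π horizontally are mine.

```python
import numpy as np

def roll_panorama(pano, angle):
    """Undo a rotation about a panorama's development axis.

    On a cylindrical panorama spanning 2*pi horizontally, rotating the
    camera by `angle` radians about the development axis merely shifts
    the image sideways, so the recovery is a circular column shift --
    the NumPy analogue of the paper's pointer manipulation. No
    per-pixel geometry is recomputed.
    """
    h, w = pano.shape[:2]
    shift = int(round(angle * w / (2 * np.pi)))  # radians -> whole columns
    return np.roll(pano, -shift, axis=1)
```

Recovering a full rotation (α, β, γ) then amounts to applying one such shift in each of the x-, y-, and z-axis developments, chained through precomputed index tables between the developments.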
By using this property, the rotation is recovered at high speed, as follows. First, the correspondence between a panoramic image (θ_x, φ_x) developed on the x-axis and the individual camera images is calculated. These correspondences are created by planar projection between the camera coordinate systems and the global coordinate system of the SOS, and are represented as c_x(θ_x, φ_x), i_x(θ_x, φ_x), and j_x(θ_x, φ_x), where c_x is the camera number and i_x and j_x are the coordinates in the image of camera c_x. Then, the relationship between the panoramic images (θ_y, φ_y) and (θ_x, φ_x), developed on the y-axis and the x-axis, respectively, is calculated and represented by x_θ(θ_y, φ_y) and x_φ(θ_y, φ_y). Likewise, the relationship between the panoramic image (θ_z, φ_z) developed on the z-axis and the panoramic image (θ_y, φ_y) developed on the y-axis is represented by y_θ(θ_z, φ_z) and y_φ(θ_z, φ_z). Using the SOS attitude parameters (α, β, γ), the relationship between the rotation-recovered panoramic image and the individual camera images can then be obtained from the above relationships by multiple indexing. For the camera number c_r(θ, φ):

c_y(θ_y, φ_y) = c_x( x_θ(θ_y, φ_y) − α, x_φ(θ_y, φ_y) ),
c_z(θ_z, φ_z) = c_y( y_θ(θ_z, φ_z) − β, y_φ(θ_z, φ_z) ),  (7)
c_r(θ, φ) = c_z(θ − γ, φ).

In the same way, multiplex index tables can be built for i_r(θ, φ) and j_r(θ, φ). Using this method, a PC with a 3.6-GHz CPU can generate a rotation-recovered image in about 15 milliseconds.

Figure 16: An example of an omnidirectional depth image. The higher the brightness of a pixel, the shorter the range.

5. DETECTION OF POTENTIAL HAZARDS IN MOVING ENVIRONMENTS

To support safe movement, potential hazards are detected during movement and braking is assisted. The potential hazards to be detected are: (1) collision with a pedestrian, wall, desk, or other object, (2) a level difference
or stairs, and (3) a rope or beam in the air. Since the SOS acquires omnidirectional range data in real time and handles the data in a coordinate system whose origin is the camera head center, obstacles can be detected directly from the range data. Figure 16 shows an example of a depth image mapped onto a panoramic coordinate system.

Figure 17: Mask patterns for limiting the obstacle detection area (patterns F−2 to F+2, B−2 to B+2, L, and R, selected by the tilt angle of the joystick; each pattern defines a stop area and a slowdown area).

A detected potential-hazard factor actually disturbs the electric wheelchair only depending on the direction of movement. As Figure 17 shows, mask patterns that change with the input direction of the joystick are therefore set to limit the detection area. In the figure, Fn denotes the forward directions, Bn the backward directions, and L and R in-place rotation; each pattern shows the electric wheelchair viewed from above (along the z-axis). With the floor height as z = 0, the detection range along the z-axis is −0.5 m < z < 1.6 m; the detection area is therefore cylindrical. To prevent fine floor irregularities or measurement errors from stopping the electric wheelchair frequently, the range −0.05 m < z < 0.05 m is excluded from the detection area (the prototype electric wheelchair can traverse a 10-cm level difference without problem). For straight forward and backward runs (F0 and B0), a square area is set to allow passing through a narrow aisle. For the other patterns (turns), a sector area is set that allows for probabilistic expansion, because the extent of a turn varies with user operation. The electric wheelchair slows down if any object is detected in the slowdown area and stops completely if any object is detected in the stop area. More specifically, if the number of 3D points observed at a position higher than the floor area (0.05 m < z
< 1.6 m) in each detection area exceeds a threshold, the electric wheelchair slows down or stops. The threshold is set to eliminate the influence of noise and observation errors, and its value was determined experimentally: testing under various environmental conditions showed that up to about 80 false-positive points could be detected, so the threshold was set to 100 for all areas. Since the detection range includes an area lower than the floor (−0.5 m < z < −0.05 m), the system can even detect descending stairs and stop the electric wheelchair, as shown in Figure 18. Figure 19 shows the transition of the number of 3D points in the F0 area for the case of Figure 18; the horizontal axis indicates the wheelchair position (with the beginning of the stairs at 0) and the vertical axis the number of 3D points.

Figure 18: Potential hazard detection.

Figure 19: Transition of the number of 3D points versus wheelchair position (m).

Figure 20: Gesture detection area and collision detection area.

Figure 17 shows the settings optimized for the electric wheelchair used in the present study and its control patterns. These settings depend on the shape and motion characteristics of the electric wheelchair and cannot always be stated as general rules. From the results of orbital simulation and several tests using several kinds of electric wheelchairs, however, we verified that the rules shown in Figure 17 are applicable in many cases. To evaluate the usability of the potential-hazard detection function, three users performed run tests for more than a month using the prototype, which clarified several advantages and limitations. First, the potential-hazard detection performance was evaluated as generally good. For an object such as a table, for example, conventional ultrasonic or infrared sensors with limited detection areas successfully detected the
legs of the table but not the tabletop, and this detection failure caused collisions in some cases. In contrast, the SOS has no dead angle in the detection area and stops the electric wheelchair safely even in this type of case. For an object of poor texture, such as a wall with a wide area painted evenly white, the reliability of the range data from stereo measurement becomes low, and partial false negatives have occurred. Even so, the system has only a small possibility of collision, because it reduces speed or stops if any object in the detection area is detected even partially.

6. GESTURE/POSTURE DETECTION

Users of electric wheelchairs often have impairments other than walking difficulties [16–18], so there is a great need for gesture-based operation. This system monitors the moving environment and the user's status simultaneously, so the user's gestures and postures can be detected easily. When an electric wheelchair is moving, there are basically no objects in the cubic area ahead of the user shown in Figure 20. By analyzing the range data of this area, changes in the user's posture and gestures can therefore be detected. More specifically, 3D points detected in the area are counted, as in the potential-hazard detection described in the previous section; if the count exceeds a specified value, a gesture is assumed to have been detected. Figure 21 shows a case of an emergency stop after a change of user posture is detected.

Figure 21: A case of an emergency stop after a change of user posture is detected.

Figure 22: Gesture control feature. If the user keeps extending his or her arm, for example to grab an object or press an elevator button during a stop, the wheelchair automatically starts moving forward; it stops when the arm is retracted or before the wheelchair interferes with obstacles ahead.

When the user stoops, the upper part of the user's body enters the area shown in Figure 20 and the
electric wheelchair stops. Such an emergency stop is necessary especially when a voice control interface is used, because otherwise the electric wheelchair keeps moving until the voice command for a stop is properly recognized. If this status continues beyond a specified time, an alarm can be sent to a remote location by mobile phone. In the example shown in Figure 22, gesture detection and potential-hazard detection are used simultaneously. If the user extends his or her arm to grab an object or press an elevator button during a stop, the user's arm enters the area shown in Figure 20. If this continues for seconds or more, the electric wheelchair starts moving at a very slow speed. Since the hazard detection described in the previous section runs simultaneously, the electric wheelchair stops automatically before making contact with obstacles ahead. Because fine positioning with the joystick normally requires a lot of training, this function is highly acceptable and important for making the electric wheelchair helpful to more users. At present, the 3D points in the detection area are simply counted for triggering; however, since the finger-pointing direction can in principle be estimated from the range data, we plan to extend the concept to the detection of finger-pointing gestures.

7. CONCLUSIONS

This paper described the development of a smart electric wheelchair equipped with the SOS, which captures omnidirectional color image sequences and range data in real time. In particular, this paper described the fundamental technology required to apply the SOS to electric wheelchairs, such as high-speed, high-quality integrated processing of omnidirectional images and high-speed generation of rotation-corrected images. We are now preparing an experiment with a laser range finder (LRF) that can scan 270 degrees horizontally. In the future, we will quantitatively evaluate the accuracy of the SOS potential
hazard detection function compared with the LRF, and enhance the detection accuracy with a hybrid configuration of the SOS and the LRF.

ACKNOWLEDGMENT

This work was supported by the "Special Coordination Funds for Promoting Science and Technology" of the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan.

REFERENCES

[1] H. H. Meinel, "Commercial applications of millimeterwaves: history, present status, and future trends," IEEE Transactions on Microwave Theory and Techniques, vol. 43, no. 7, part 1-2, pp. 1639–1653, 1995.
[2] Y. Kuno, N. Shimada, and Y. Shirai, "Look where you're going [robotic wheelchair]," IEEE Robotics & Automation Magazine, vol. 10, no. 1, pp. 26–34, 2003.
[3] G. Bourhis, O. Horn, O. Habert, and A. Pruski, "An autonomous vehicle for people with motor disabilities," IEEE Robotics and Automation Magazine, vol. 8, no. 1, pp. 20–28, 2001.
[4] U. Borgolte, H. Hoyer, C. Bühler, H. Heck, and R. Hoelper, "Architectural concepts of a semi-autonomous wheelchair," Journal of Intelligent and Robotic Systems, vol. 22, no. 3-4, pp. 233–253, 1998.
[5] J.-D. Yoder, E. T. Baumgartner, and S. B. Skaar, "Initial results in the development of a guidance system for a powered wheelchair," IEEE Transactions on Rehabilitation Engineering, vol. 4, no. 3, pp. 143–151, 1996.
[6] H. A. Yanco, "Wheelesley, a robotic wheelchair system: indoor navigation and user interface," in Assistive Technology and Artificial Intelligence, Lecture Notes in Artificial Intelligence, pp. 256–268, Springer, New York, NY, USA, 1998.
[7] R. Simpson, E. LoPresti, S. Hayashi, I. Nourbakhsh, and D. Miller, "The smart wheelchair component system," Journal of Rehabilitation Research and Development, vol. 41, no. 3B, pp. 429–442, 2004.
[8] R. C. Simpson, "Smart wheelchairs: a literature review," Journal of Rehabilitation Research and Development, vol. 42, no. 4, pp. 423–438, 2005.
[9] Y. Yagi, S. Kawato, and S. Tsuji, "Real-time omnidirectional image sensor (COPIS) for vision-guided navigation," IEEE Transactions on Robotics and Automation, vol. 10, no. 1, pp. 11–22, 1994.
[10] J. Kurata, K. T. V. Grattan, and H. Uchiyama, "Navigation system for a mobile robot with a visual sensor using a fish-eye lens," Review of Scientific Instruments, vol. 69, no. 2, pp. 585–590, 1998.
[11] C. Mandel, K. Huebner, and T. Vierhuff, "Towards an autonomous wheelchair: cognitive aspects in service robotics," in Proceedings of Towards Autonomous Robotic Systems (TAROS '05), pp. 165–172, London, UK, September 2005.
[12] S. Moezzi, Ed., "Immersive telepresence," IEEE Multimedia, vol. 4, no. 1, pp. 17–56, 1997.
[13] H. Tanahashi, D. Shimada, K. Yamamoto, and Y. Niwa, "Acquisition of three-dimensional information in a real environment by using the stereo omni-directional system (SOS)," in Proceedings of the 3rd International Conference on 3D Digital Imaging and Modeling (3DIM '01), pp. 365–371, Quebec City, Canada, May-June 2001.
[14] S. Shimizu, K. Yamamoto, C. Wang, Y. Satoh, H. Tanahashi, and Y. Niwa, "Moving object detection by mobile Stereo Omni-directional System (SOS) using spherical depth image," Pattern Analysis and Applications, vol. 9, no. 2-3, pp. 113–126, 2006.
[15] C. Wang, H. Tanahashi, Y. Satoh, et al., "Generation of rotation invariant image using Stereo Omni-directional System (SOS)," in Proceedings of the 10th International Conference on Virtual Systems and Multimedia (VSMM '04), pp. 269–272, Ogaki, Japan, November 2004.
[16] D. L. Jaffe, "An ultrasonic head position interface for wheelchair control," Journal of Medical Systems, vol. 6, no. 4, pp. 337–342, 1982.
[17] R. Barea, L. Boquete, L. M. Bergasa, E. López, and M. Mazo, "Electro-oculographic guidance of a wheelchair using eye movements codification," International Journal of Robotics Research, vol. 22, no. 7-8, pp. 641–652, 2003.
[18] L. Fehr, W. E. Langbein, and S. B. Skaar, "Adequacy of power wheelchair control interfaces for persons with severe disabilities: a clinical survey," Journal of Rehabilitation Research and Development, vol. 37, no. 3, pp. 353–360, 2000.