
Difficult Situations Recognition System for Visually Impaired Aid Using a Mobile Kinect





MINISTRY OF EDUCATION AND TRAINING
HANOI UNIVERSITY OF SCIENCE AND TECHNOLOGY

Hoang Van Nam

DIFFICULT SITUATIONS RECOGNITION SYSTEM FOR VISUALLY-IMPAIRED AID USING A MOBILE KINECT

Department: Computer Science (class 2014B)
Master Thesis of Science
Supervisor: Dr. Le Thi Lan

Hanoi – 2016

SOCIALIST REPUBLIC OF VIETNAM
Independence – Freedom – Happiness

CONFIRMATION OF MASTER THESIS REVISION

Full name of the thesis author: …
Thesis title: …
Major: …
Student ID: …

The author, the scientific supervisor and the thesis examination committee confirm that the author has revised and supplemented the thesis according to the minutes of the committee meeting dated …, with the following contents: …

Date: … / … / …

Supervisor          Chairman of the Committee          Thesis author

Declaration of Authorship

I, Hoang Van Nam, declare that this thesis titled 'Difficult situations recognition for visual-impaired aid using mobile Kinect' and the work presented in it are my own. I confirm that:

• This work was done wholly or mainly while in candidature for a research degree at this University.
• Where any part of this thesis has previously been submitted for a degree or any other qualification at this University or any other institution, this has been clearly stated.
• Where I have consulted the published work of others, this is always clearly attributed.
• Where I have quoted from the work of others, the source is always given. With the exception of such quotations, this thesis is entirely my own work.
• I have acknowledged all main sources of help.
• Where the thesis is based on work done by myself jointly with others, I have made clear exactly what was done by others and what I have contributed myself.

Signed:
Date:

HANOI UNIVERSITY OF SCIENCE AND TECHNOLOGY
International Research Institute MICA, Computer Vision Department

Abstract

Difficult situations recognition for visual-impaired aid using mobile Kinect
by Hoang Van Nam

By 2014, according to published figures, there were more than one million people in Vietnam living with sight loss, about 1.3% of the population. Despite the big impact on daily living, especially on the ability to move, read and communicate with others, only a small percentage of blind or visually impaired people use an assistive device or animal such as a guide dog. Motivated by the significant changes in technology that have taken place in the last decade, especially the introduction of various types of sensors and the developments in the field of computer vision, I present in this thesis a difficult situations recognition system for visually impaired aid using a mobile Kinect. The system is based on data captured from a Kinect and uses computer vision techniques to detect obstacles. In the current prototype, I focus on detecting obstacles in indoor environments such as public buildings, and two types of obstacles are considered: general obstacles in the moving path, and staircases, which pose a great danger to visually impaired people.
3D imaging techniques, including plane segmentation and 3D point clustering, are used to detect general obstacles, while a mixed strategy between depth and color images is used to detect staircases based on their edges and structure. The system is reliable, with a detection rate of about 82.9% and a processing time of 493 ms per frame.

Acknowledgements

I am honored to be here for the second time, in one of the finest universities in Vietnam, to write these grateful words to the people who have been supporting and guiding me from the very first moment I was a university student until now, as I write my master thesis.

I am grateful to my supervisor, Dr. Le Thi Lan, whose expertise, understanding, generous guidance and support made it possible for me to work on a topic that was of great interest to me. It was a pleasure to work with her. Special thanks to Dr. Tran Thi Thanh Hai, Dr. Vu Hai and Dr. Nguyen Thi Thuy (VNUA) and all the members of the Computer Vision Department, MICA Institute, for their sharp comments and guidance, which helped me a great deal in learning how to study and research the right way, and for the valuable advice and encouragement they gave me during my thesis. I would like to express my gratitude to Prof. Peter Veelaert, Dr. Luong Quang Hiep and Mr. Michiel Vlaminck at Ghent University, Belgium for their support. It has been a great honor to cooperate and work with them.

Finally, I would especially like to thank my family and friends for the continuous love and support they have given me throughout my life, helping me through all the frustration, struggle and confusion. Thanks for everything that helped me get to this day.

Hanoi, 19/02/2016
Hoang Van Nam

Contents

Declaration of Authorship
Abstract
Acknowledgements
Contents
List of Figures
List of Tables
Abbreviations

1 Introduction
  1.1 Motivation
  1.2 Definition
    1.2.1 Assistive systems for visually impaired people
    1.2.2 Difficult situations
    1.2.3 Mobile Kinect
    1.2.4 Environment Context
  1.3 Difficult Situations Recognition System
  1.4 Thesis Contributions

2 Related Works
  2.1 Assistive systems for visually impaired people
  2.2 RGB-D based assistive systems for visually impaired people
  2.3 Stair Detection

3 Obstacle Detection
  3.1 Overview
  3.2 Data Acquisition
  3.3 Point Cloud Registration
  3.4 Plane Segmentation
  3.5 Ground & Wall Plane Detection
  3.6 Obstacle Detection
  3.7 Stair Detection
    3.7.1 Stair definition
    3.7.2 Color-based stair detection
    3.7.3 Depth-based stair detection
    3.7.4 Result fusion
  3.8 Obstacle information representation

4 Experiments
  4.1 Dataset
  4.2 Difficult situation recognition evaluation
    4.2.1 Obstacle detection evaluation
    4.2.2 Stair detection evaluation

5 Conclusions and Future Works
  5.1 Conclusions
  5.2 Future Works

Publications
Bibliography

List of Figures

1.1 A Comprehensive Assistive Technology (CAT) Model provided by [12]
1.2 A model for activities, attributes and mobility provided by [12]
1.3 Distribution of frequencies of head-level accidents for blind people [18]
1.4 Distribution of frequencies of tripping resulting in a fall [18]
1.5 A typical example of a depth image: (A) raw depth image, (B) depth image visualized by a jet color map, with the colorbar showing the real distance for each color value, (C) reconstructed 3D scene
1.6 A stereo image pair taken from the OpenCV library and the calculated depth image: (A) left image, (B) right image, (C) depth image (disparity map)
1.7 Some existing stereo cameras. From left to right: Kodak stereo camera, View-Master Personal stereo camera, ZED, Duo 3D Sensor
1.8 Time-of-flight systems from [3]
1.9 Some ToF cameras. From left to right: DepthSense, Fotonic, Microsoft Kinect v2
1.10 Structured light cameras. From left to right: PrimeSense, Microsoft Kinect v1
1.11 Structured light systems from [3]
1.12 Figure from [16]: (A) raw IR image with pattern, (B) depth image
1.13 Figure from [16]: (A) errors for structured light cameras, (B) quantization errors at different distances of a door: 1 m, 3 m, 5 m
1.14 Prototype of the system using a mobile Kinect: (A) Kinect with battery and belt, (B) backpack with laptop, (C) mobile Kinect mounted on the human body
1.15 Two different environments that I tested with: (A) our office building, (B) Nguyen Dinh Chieu secondary school
1.16 Prototype of our obstacle detection and warning system
2.1 Robot-Assisted Navigation from [17]: (A) RFID tag, (B) robot, (C) navigation
2.2 NXT Robot System from [6]: (A) the system's block diagram, (B) NXT robot
2.3 Mobile robot from [22] [21]
2.4 BrainPort vision substitution device [32]
2.5 Obstacle detection process from [30]
2.6 Stair detection from [26]: (A) input image, (B)(C) frequency as an output of the Gabor filter, (D) stair detection result
2.7 A near-approach for stair detection in [13]: (A) input image with detected stair region, (B) texture energy, (C) input image with detected lines as stair candidates, (D) optical flow maps; in this image, there is a significant change in the lines at the edge of the stair
2.8 Example of segmentation and classification in [24]
2.9 Stair modeling (left) and features in each plane [24]
2.10 Stair detection algorithm proposed in [29]: (A) detected lines in the edge image (using color information), (B) depth profiles on each line (red line: pedestrian crosswalk, blue: down stair, green: up stair)
3.1 Obstacle Detection Flowchart
3.2 Kinect mounted on the body
3.3 Coordinate Transformation Process
3.4 Kinect Coordinate
3.5 Point cloud rotation using the normal vector of the ground plane (white arrow): left: before rotating, right: after rotating
3.6 Normal vector estimation algorithms [15]: (a) the normal vector of the center point can be calculated by a cross product of two vectors of four neighbor points (red), (b) normal vector estimation in a scene
3.7 Plane segmentation result using the algorithm proposed in [15]. Each plane is represented by a distinctive color
3.8 Detected ground and wall planes (ground: blue, wall: red)
3.9 Human segmentation data by the Microsoft Kinect SDK: (a) color image, (b) human mask
3.10 Detected obstacles: (a) color image, (b) detected obstacles
3.11 Model of a stair
3.12 Coordinate transformation models from [7]
3.13 Projective chirping: (a) a real-world object that generates a projection with "chirping" ("periodicity-in-perspective"), (b) center raster of image, (c) best-fit projective chirp
3.14 A pin-hole camera model with a stair
3.15 A vertical Gabor filter kernel
3.16 Gabor filter applied on a color image: (a) original, (b) filtered image
3.17 Thresholding the grayscale image: (a) original, (b) thresholded image
3.18 Example of thinning an image using morphological operations
3.19 Thresholding the grayscale image: (a) original, (b) thresholded image
3.20 Six points voting for a line make an intersection in Hough space; this intersection has a higher intensity than neighboring pixels
3.21 Hough space: (a) line in the original space, (b) three curves voting for this line in Hough space
3.22 Hough space on a stair image: (a) original image, (b) Hough space
3.23 Chirp pattern detection: (a) Hough space, (b) original image with detected chirp pattern
3.24 Point cloud of a stair: (a) original color image, (b) point cloud data created from the color and depth images
3.25 Detected steps
3.26 Detected planes
3.27 Detected stair on the point cloud

Chapter 4. Experiments

4.1 Dataset

In this section, I present in detail how the database was collected in real environments in order to develop and evaluate the system, as well as the characteristics of the dataset. The mobile Kinect and backpack were mounted on the visually impaired person's body, as presented in Chapter 1. To record the dataset, I wrote a separate program to get data from the Kinect and save it to the computer as video files. By default, the Kinect provides several types of data, but in my work I collected only the depth image, the color image and the accelerometer data. The data is organized as follows:

• Depth data: Because the Kinect returns depth data as a 16-bit single-channel image, we cannot write this data as a video stream in the normal way. To deal with this problem, I encoded the depth image as an 8-bit three-channel (RGB) image, where the blue channel holds the MSB (the most significant, high-order byte) and the green channel holds the LSB (the least significant, low-order byte), and saved it as uncompressed video (see the sketch after this list). This is very important: since this is a form of image encoding, saving the video with a lossy encoder such as H.264 or MJPG would destroy the encoding, and the decoded depth image would have wrong values. Fig. 4.1 illustrates a depth image after encoding.
• Color data: This kind of data can be written as a normal video.
• Accelerometer data: This data is written to a text file where each line holds the ground's normal vector (3 dimensions) in the Kinect coordinate system (in meters) for each frame of the video.
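The thesis does not include the packing code itself; a minimal numpy sketch of the byte packing described in the depth bullet above (function names are mine) could look like this:

```python
import numpy as np

def encode_depth(depth16):
    """Pack a 16-bit depth map into an 8-bit, 3-channel image.

    The blue channel holds the high-order byte and the green channel the
    low-order byte, as described above; red stays unused. Frames must be
    written with a lossless codec, otherwise the packed bytes are corrupted.
    """
    h, w = depth16.shape
    packed = np.zeros((h, w, 3), dtype=np.uint8)
    packed[:, :, 0] = (depth16 >> 8).astype(np.uint8)    # MSB -> blue
    packed[:, :, 1] = (depth16 & 0xFF).astype(np.uint8)  # LSB -> green
    return packed

def decode_depth(packed):
    """Recover the original 16-bit depth map from a packed frame."""
    return (packed[:, :, 0].astype(np.uint16) << 8) | packed[:, :, 1].astype(np.uint16)
```

The round trip is exact only when the writer is lossless, which is exactly why the video is saved uncompressed.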
Figure 4.1: Depth image encoding: (A) original, (B) visualized image, (C) encoded image

Table 4.1: Database specifications

                          MICA building    Nguyen Dinh Chieu school
  Number of videos        …                …
  Average video length    … minutes        … minutes
  Lighting condition      low lighting     high (sunny day)
  Total size              14 GB            34 GB

In my work, I collected the dataset in two places: our office building (MICA) and Nguyen Dinh Chieu secondary school for blind pupils. In each environment, a blind person (at Nguyen Dinh Chieu school) or a sighted person (in the MICA building) was asked to walk along the lobby of the building. In the MICA building, I placed some static obstacles such as trash bins, flower pots and a fire extinguisher before recording, and a few people walked back and forth around the subject. At Nguyen Dinh Chieu school, because it is hard to stage obstacles there, the obstacles in this dataset are natural: static objects such as brick columns and balconies, and moving obstacles, namely the students of the school. Table 4.1 shows the specifications of the collected database. However, due to the limitations of ground-truth preparation, I tested the system with only 243 images extracted from these databases.

4.2 Difficult situation recognition evaluation

4.2.1 Obstacle detection evaluation

To evaluate the obstacle recognition module, a mask of each object is segmented in each image to create the ground-truth data. To do that, I used an interactive segmentation tool from [2] to manually segment the images. Then, each object detected in the point cloud is projected back onto the 2D image to create a mask of the detected object; each object is assigned a different color in the final mask. Finally, the result is evaluated at two different levels, pixel level and object level, using the Jaccard index:

  J(H, O) = R_{H∩O} / R_{H∪O}

where:
  H: hypothesis (detected object's region)
  O: object (object's region in the ground truth)
  R_{H∩O}: area of the intersection between the hypothesis and the object in the image
  R_{H∪O}: area of the union of the hypothesis and the object in the image

At the object level, I defined true positives (TP), false positives (FP) and false negatives (FN) as follows:

  TP: J(H, O) > 0.5
  FP: J(H, O) < 0.5, or no ground-truth region exists at that location
  FN: no hypothesis matches the ground-truth region (i.e., no H with J(H, O) > 0.5)

and at the pixel level:

  TP: there exists P_O with P_H.x = P_O.x and P_H.y = P_O.y (a detected pixel matching a ground-truth pixel)
  FP: there exists no P_O with P_H.x = P_O.x and P_H.y = P_O.y (a detected pixel with no matching ground-truth pixel)
  FN: there exists no P_H with P_H.x = P_O.x and P_H.y = P_O.y (a ground-truth pixel with no matching detected pixel)

where P_H is an obstacle point in the hypothesis (detected pixel) and P_O is an obstacle point in the object (ground-truth pixel).

The detection rate is then measured using precision and recall:

  Precision = TP / (TP + FP)
  Recall = TP / (TP + FN)
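As a concrete illustration of the object-level criterion, here is a short self-contained sketch (my own code, not from the thesis, and one reasonable reading of the rules above) that scores detected boxes against ground-truth boxes with the 0.5 Jaccard threshold:

```python
def jaccard(a, b):
    """Jaccard index of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def score(hypotheses, objects, thr=0.5):
    """Object-level TP/FP/FN, precision and recall under the rules above."""
    matched = set()   # indices of ground-truth objects already claimed
    tp = fp = 0
    for h in hypotheses:
        overlaps = [(jaccard(h, o), i)
                    for i, o in enumerate(objects) if i not in matched]
        best, i = max(overlaps, default=(0.0, None))
        if best > thr:
            matched.add(i)
            tp += 1
        else:
            fp += 1   # overlap too small, or no ground truth at that location
    fn = len(objects) - len(matched)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return tp, fp, fn, precision, recall
```

For example, score([(0, 0, 10, 10)], [(1, 1, 9, 9)]) returns (1, 0, 0, 1.0, 1.0), since the overlap ratio 64/100 exceeds the 0.5 threshold.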
At the pixel level, I used the Watershed algorithm on the depth image to segment objects from the background and build the ground truth. Table 4.2 shows the results at the pixel level:

Table 4.2: Pixel level evaluation result (TP, FP, FN: millions of pixels)

  TP     FP     FN     Precision   Recall   F-Measure
  5.02   1.31   2.11   79%         70%      74.2%

At the object level, I annotated each object manually with a rectangle. Table 4.3 shows the results at the object level:

Table 4.3: Object level evaluation result (TP, FP, FN: objects)

  TP    FP    FN    Precision   Recall   F-Measure
  344   71    154   82.9%       69%      75.3%

The system operates at an average speed of about 2 Hz (493 ms/frame) with a 2×2 downsampling block (about 75,000 points in the point cloud), which is fast enough to be used in practice. Fig. 4.2 shows the average detection time of each step and of the whole process.

Figure 4.2: Detection time of each step in the proposed method (normal estimation, plane segmentation and obstacle detection take 127 ms, 165 ms and 201 ms respectively; total: 493 ms)

As shown in Table 4.2 and Table 4.3, the precision achieved by the obstacle detection is 82.9% (71 of 415 detected objects are false). The precision at the pixel level is slightly lower than at the object level because, in the pixel-level evaluation, an obstacle must be well segmented from the image, while at the object level only the obstacle's bounding box is used. Most missed objects are far away or occluded by other objects (e.g., the fire extinguisher). Another limitation is that depth data from the Kinect can be lost on certain materials or in strong light; in such cases the system does not have enough information to detect an obstacle, because building the point cloud depends heavily on the depth image. Although "obstacle" is an ambiguous concept that depends a lot on the user's needs, so that a given object may or may not be an obstacle, and false detections are annoying to the user, in general the proposed system can detect candidate obstacles as defined, and its accuracy is sufficient for obstacle detection and warning to the user in real time.

4.2.2 Stair detection evaluation

For stair detection, I evaluated on the dataset from [30], which includes stairs and stair-like objects (uniform table arrays, bookshelves) from Monash and from UGent (Ghent University, Belgium), and on a dataset collected at my institute (Fig. 4.3). Table 4.4 shows the specifications of the stair datasets and Table 4.5 shows the results of the stair detection algorithm. In these tables, Positive means there is a stair in the image and Negative means the image has no stair. In the Monash and UGent datasets, many objects exhibit concurrent parallel lines similar to stair edges, which can confuse the system into detecting a stair, so the number of false positives is higher than on the MICA dataset, where no object has concurrent parallel lines except the floor plane. As for the Positive rate, in the MICA dataset the images are taken from video sequences and the lighting condition is poor, so the Positive rate is lower than on the Monash and UGent datasets (40/50 in comparison with 27/27 and 80/90).

Table 4.4: Stair dataset for evaluation

  Dataset   Number of positive images   Number of negative images
  Monash    90                          58
  UGent     27                          0
  MICA      50                          50

Table 4.5: Stair detection result of the proposed method on different datasets

  Dataset   Positive   Negative   TP   FP   FN   Precision   Recall
  Monash    80/90      51/58      80   7    10   91.95%      88.89%
  UGent     27/27      0/0        27   0    0    100%        100%
  MICA      40/50      50/50      40   0    10   100%        80%

Figure 4.3: Example stair images for evaluation: (A) positive sample from the MICA dataset, (B) negative sample from the MICA dataset, (C) positive sample from the Monash dataset, (D) negative sample from the Monash dataset

To compare my proposed method with another method, I re-implemented the method proposed in [29], reproduced in the algorithm below, and tested it on the MICA dataset; Table 4.6 shows the results.

Algorithm: Stair detection using RGB-D from [29]
  1: Convert the color image to grayscale
  2: Edge detection (I tested with the Canny edge detection algorithm)
  3: Line detection using the probabilistic Hough transform integrated in the OpenCV library
  4: Merge nearby lines by the distance between lines and the difference in angles
  5: Find concurrent parallel lines using line length, line angle and the distance between two concurrent lines (more in Algorithm 1)
  6: Project the concurrent parallel lines onto the depth image
  7: Detect the stair on the depth image
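The first steps of this baseline are straightforward to reproduce with OpenCV. Below is a rough sketch of steps 1-3 and a simplified stand-in for the parallel-line grouping of steps 4-5; it is my own reconstruction, and all threshold values are illustrative guesses, since the thesis does not list the exact parameters:

```python
import cv2
import numpy as np

def candidate_stair_lines(bgr):
    """Steps 1-3: grayscale, Canny edges, probabilistic Hough transform.

    Returns line segments as (x1, y1, x2, y2) tuples.
    """
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=5)
    return [] if lines is None else [tuple(l[0]) for l in lines]

def dominant_parallel_group(lines, tol_deg=5.0):
    """Simplified steps 4-5: keep only segments whose orientation is close
    to the most common orientation in the image, where stair edges form a
    group of concurrent parallel lines.
    """
    if not lines:
        return []
    angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0
              for x1, y1, x2, y2 in lines]
    hist, bins = np.histogram(angles, bins=36, range=(0.0, 180.0))
    center = bins[int(np.argmax(hist))] + 2.5   # center of the 5-degree bin
    return [l for l, a in zip(lines, angles) if abs(a - center) <= tol_deg]
```

The surviving group of near-parallel segments would then be projected onto the depth image (steps 6-7) to confirm the stair structure.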
Table 4.6: Comparison of the proposed method and the method of Tian et al. [29] on the MICA dataset

  Method                      Positive   Negative   TP   FP   FN   Precision   Recall
  Based on Tian et al. [29]   20/50      50/58      20   0    30   100%        40%
  My proposed method          40/50      50/58      40   0    10   100%        80%

As can be seen in Fig. 4.4, Fig. 4.5 and Fig. 4.6, the Tian-based method uses a standard edge detection method, so the edge image contains multiple lines around each real edge (see Fig. 4.4 C). Therefore, when the Hough transform is applied, the result can be many line segments for a single stair edge (see Fig. 4.4 D), while in my proposed method (see Fig. 4.4 H) the edge image has only a thin line per stair edge. And because in my proposed method the line equation and stair model are calculated directly on the Hough map (see Fig. 4.4 I), the problems of line merging and duplicated lines are removed (Fig. 4.4 G).

Figure 4.4: Detected stair in the Tian-based method (A-F) and detected stair in my proposed method (G-I): (A) color image, (B) depth image, (C) edges, (D) line segments, (E) detected concurrent lines, (F) depth values on detected lines, (G) detected stair, with blue lines as false stair edges and green lines as stair edges, (H) edge image, (I) detected peaks in the Hough map corresponding to the lines in (G)

Figure 4.5: Missed detection in the Tian-based method because of missing depth on the stair (A-F) and detected stair in my proposed method (G-I)

Figure 4.6: Missed detection in the Tian-based method because of missing depth on the stair (A-F) and detected stair in my proposed method (G-I)

Chapter 5. Conclusions and Future Works

5.1 Conclusions

This thesis has presented a difficult situation recognition system in the context of moving along the lobby of a public building. The system contains an obstacle detection module that can detect normal objects in the moving path as well as staircases in front of the user. A prototype of the system has also been deployed. The proposed framework works on data taken from the Kinect, including the depth image, the color image and the accelerometer data, and builds a point cloud using the PCL library in order to detect obstacles.

For obstacle detection, my algorithm is based on point cloud data with a ground plane detection and clustering module. The benefit of using point cloud (or depth) data is that it provides additional distance information for each pixel, so the distance from the user to an object can be calculated easily, as can extracting the ground and wall planes and clustering the 3D data by distance. Another advantage is that this kind of data is very reliable, since it is measured by a physical depth sensor using IR light. The disadvantage of this module is that depth data is poor in outdoor environments, and the depth range is limited to a few meters, so the working space of this system is only inside a building, a small, closed space.
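The thesis implements this pipeline with PCL [25]. Purely as an illustration of the idea summarized above (RANSAC ground-plane removal followed by Euclidean clustering), a compact sketch with the Open3D Python bindings might look as follows; the function and all parameter values are mine, not the thesis's:

```python
import numpy as np
import open3d as o3d  # illustration only: the thesis itself builds on PCL [25]

def obstacle_candidates(points, plane_dist=0.03, cluster_eps=0.10, min_points=50):
    """Fit the dominant (ground) plane with RANSAC, drop its inliers, then
    cluster the remaining 3D points by Euclidean distance; each cluster is
    an obstacle candidate. Parameter values are illustrative.
    """
    cloud = o3d.geometry.PointCloud()
    cloud.points = o3d.utility.Vector3dVector(np.asarray(points, dtype=np.float64))
    # Dominant plane under a RANSAC distance threshold (in meters).
    _, ground_idx = cloud.segment_plane(distance_threshold=plane_dist,
                                        ransac_n=3, num_iterations=200)
    off_ground = cloud.select_by_index(ground_idx, invert=True)
    # DBSCAN groups nearby off-ground points into per-object clusters.
    labels = np.asarray(off_ground.cluster_dbscan(eps=cluster_eps,
                                                  min_points=min_points))
    pts = np.asarray(off_ground.points)
    return [pts[labels == k] for k in range(labels.max() + 1)] if labels.size else []
```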
For stair detection, the algorithm uses a mixture of depth and color data, exploiting the special structure of a stair: a line at the edge of each step. By using the color image, the limitations of the depth image (measurable range, incomplete depth data) are partially removed. By combining depth information, the algorithm also takes advantage of depth data to determine whether the image contains a stair or not.

Finally, the overall system has proved fast enough to run in real time, and can be deployed on smaller, cheaper devices such as embedded systems and micro computers.

5.2 Future Works

In this section, I propose some improvements for my system, summarized as follows:

• Concerning the obstacle detection evaluation, all obstacles must be segmented manually in each image, which requires a lot of time for annotation and segmentation since my data contains a large number of videos. Therefore, in this thesis the evaluation is still limited to a few datasets (MICA, Gent, Monash). In the near future, I will make a full evaluation of the obstacle detection algorithm.
• The next step I would like to take in the near future is to test the system with real visually impaired people under a full scenario and obtain a complete evaluation.
• In the long term, I will improve my system using the Kinect v2, which provides better depth data. I will also improve my algorithm to take temporal information into account (detecting objects in a sequence of frames rather than frame by frame), and add some common objects that are important to visually impaired people, such as doors and door states, text recognition, and a classification of each detected obstacle in order to give the obstacle's name to the visually impaired user.

Publications

• Conference(s)
[Published] Van-Nam Hoang, Thanh-Huong Nguyen, Thi-Lan Le, Thi-Thanh Hai Tran, Tan-Phu Vuong, and Nicolas Vuillerme. Obstacle detection and warning for visually impaired people based on electrode matrix and mobile Kinect. In 2015 2nd National Foundation for Science and Technology Development Conference on Information and Computer Science (NICS), pages 54-59. IEEE, Sep. 2015.
[Accepted] Michiel Vlaminck, Hiep Quang Luong, Hoang Van Nam, Hai Vu, Peter Veelaert, Wilfried Philips. Indoor assistance for visually impaired people using a RGB-D camera. In The Southwest Symposium on Image Analysis and Interpretation (SSIAI) 2016, New Mexico, USA.

• Journal(s)
[Extended version, accepted with major revision] Van-Nam Hoang, Thanh-Huong Nguyen, Thi-Lan Le, Thi-Thanh Hai Tran, Tan-Phu Vuong, and Nicolas Vuillerme. Obstacle detection and warning for visually impaired people based on electrode matrix and mobile Kinect. Vietnam Journal of Computer Science (VJCS).

Bibliography

[1] BBC - Visually impaired see the future - http://news.bbc.co.uk/2/hi/technology/4412283.stm
[2] Interactive Segmentation Tool - http://kspace.cdvp.dcu.ie/public/interactivesegmentation/
[3] Photonic Frontiers: Gesture Recognition: Lasers bring gesture recognition to the home - Laser Focus World - http://www.laserfocusworld.com/articles/2011/01/lasers-bring-gesture-recognition-to-the-home.html
[4] Sound Foresight Technology - http://www.ultracane.com/soundforesighttechnologyltd
[5] The vOICe - https://www.seeingwithsound.com/
[6] Alghasra, D. M. and Saeed, H. Y. (2013). Guiding Visually Impaired People with NXT Robot through an Android Mobile Application. International Journal of Computing and Digital Systems, 2(3):129–134.
[7] Barfield, W. and Caudell, T. (2001). Fundamentals of Wearable Computers and Augmented Reality. Taylor & Francis.
[8] Bernabei, D., Ganovelli, F., Benedetto, M., Dellepiane, M., and Scopigno, R. (2011). A low-cost time-critical obstacle avoidance system for the visually impaired. In International Conference on Indoor Positioning and Indoor Navigation.
[9] Burrus, N. Nicolas Burrus Homepage - http://nicolas.burrus.name
[10] Ceipidor, U. B., D'Atri, E., Medaglia, C. M., Mei, M., Serbanati, A., Azzalin, G., Rizzo, F., Sironi, M., Contenti, M., and D'Atri, A. (2007). A RFID system to help visually impaired people in mobility. In EU RFID Forum, Brussels, Belgium.
[11] Craven, J. (2003). Access to electronic resources by visually impaired people. University of Sheffield, Department of Information Studies.
[12] Hersh, M. A. and Johnson, M. A., editors (2008). Assistive Technology for Visually Impaired and Blind People. Springer London, London.
[13] Hesch, J. A., Mariottini, G. L., and Roumeliotis, S. I. (2010). Descending-stair detection, approach, and traversal with an autonomous tracked vehicle. In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5525–5531. IEEE.
[14] Hoang, V.-N., Nguyen, T.-H., Le, T.-L., Tran, T.-T. H., Vuong, T.-P., and Vuillerme, N. (2015). Obstacle detection and warning for visually impaired people based on electrode matrix and mobile Kinect. In 2015 2nd National Foundation for Science and Technology Development Conference on Information and Computer Science (NICS), pages 54–59. IEEE.
[15] Holz, D., Holzer, S., Rusu, R. B., and Behnke, S. (2011). Real-time plane segmentation using RGB-D cameras. In RoboCup 2011: Robot Soccer World Cup XV, pages 306–317. Springer.
[16] Khoshelham, K. and Elberink, S. O. (2012). Accuracy and resolution of Kinect depth data for indoor mapping applications. Sensors, 12(2):1437–1454.
[17] Kulyukin, V., Gharpure, C., Nicholson, J., and Pavithran, S. (2004). RFID in robot-assisted indoor navigation for the visually impaired. 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566), 2:1979–1984.
[18] Manduchi, R. and Kurniawan, S. (2011). Mobility-related accidents experienced by people with visual impairment. AER Journal: Research and Practice in Visual Impairment and Blindness, 4(2):44–54.
[19] Mayol-Cuevas, W., Tordoff, B., and Murray, D. (2009). On the Choice and Placement of Wearable Vision Sensors. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 39(2):414–425.
[20] Nguyen, B.-H. (2015). A haptic device for blind people - http://annualconf.shtp.hochiminhcity.gov.vn/
[21] Nguyen, Q. H., Vu, H., Tran, T. H., Hoang, V. N., and Nguyen, Q. H. (2015). Detection and estimation of the distance to obstacles: warning support for the visually impaired (in Vietnamese). In National Conference on Electronics, Communications and Information Technology - ECIT2015, pages 45–50. REV, IEEE.
[22] Nguyen, Q. H., Vu, H., Tran, T. H., Nguyen, D. V., Hoang, V. N., and Nguyen, Q. H. (2014). Navigational aids to visually impaired people in pervasive environments using robot (in Vietnamese). In National Conference on Electronics, Communications and Information Technology - ECIT2014, Nha Trang, Vietnam. IEEE, REV.
[23] Nguyen, T. H., Nguyen, T. H., Le, T. L., Tran, T. T. H., Vuillerme, N., and Vuong, T. P. (2013). A wireless assistive device for visually-impaired persons using tongue electrotactile system. In 2013 International Conference on Advanced Technologies for Communications (ATC 2013), pages 586–591. IEEE.
[24] Perez-Yus, A., Lopez-Nicolas, G., and Guerrero, J. J. (2014). Detection and Modelling of Staircases Using a Wearable Depth Sensor. Second Workshop on Assistive Computer Vision and Robotics (ACVR), held with ECCV 2014.
[25] Rusu, R. B. and Cousins, S. (2011). 3D is here: Point Cloud Library (PCL). In 2011 IEEE International Conference on Robotics and Automation, pages 1–4. IEEE.
[26] Se, S. and Brady, M. (2000). Vision-based detection of staircases. Fourth Asian Conference on Computer Vision (ACCV).
[27] Tang, H., Vincent, M., Ro, T., and Zhu, Z. (2013). From RGB-D to low-resolution tactile: Smart sampling and early testing. In 2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), pages 1–6. IEEE.
[28] Tang, T. J. J., Lui, W. L. D., and Li, W. H. (2012). Plane-based detection of staircases using inverse depth. Australasian Conference on Robotics and Automation, pages 3–5.
[29] Tian, Y. (2014). RGB-D Sensor-Based Computer Vision Assistive Technology for Visually Impaired Persons. Computer Vision and Machine Learning with RGB-D Sensors, pages 173–194.
[30] Vlaminck, M., Jovanov, L., Van Hese, P., Goossens, B., Philips, W., and Pizurica, A. (2013). Obstacle detection for pedestrians with a visual impairment based on 3D imaging. In 3D Imaging (IC3D), 2013 International Conference on, pages 1–7. IEEE.
[31] Weisstein, E. W. Sweep Signal - http://mathworld.wolfram.com/SweepSignal.html
[32] Wicab, I. BrainPort V100 Vision Aid - http://www.new.wicab.com/
[33] Zöllner, M., Huber, S., Jetter, H. C., and Reiterer, H. (2011). NAVI - A proof-of-concept of a mobile navigational aid for visually impaired based on the Microsoft Kinect. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 6949 LNCS:584–587.
