Vision-Based Localization (Định vị dựa trên thông tin hình ảnh)

MINISTRY OF EDUCATION AND TRAINING
HANOI UNIVERSITY OF SCIENCE AND TECHNOLOGY

NGUYEN VAN GIAP

VISION-BASED LOCALIZATION

ID: 2014BMTCT-KH03

MASTER THESIS OF SCIENCE IN COMPUTER SCIENCE

SUPERVISOR: Vu Hai, Ph.D.

Hanoi – 2016

SOCIALIST REPUBLIC OF VIETNAM
Independence – Freedom – Happiness

CONFIRMATION OF MASTER THESIS REVISIONS

Author: Nguyễn Văn Giáp
Thesis title: Định vị sử dụng thông tin hình ảnh (Vision-based localization)
Major: Computer Science
Student ID: CB140975

The author, the scientific supervisor and the thesis examination committee confirm that the author has revised and supplemented the thesis according to the minutes of the committee meeting on 21 October 2016, with the following contents:
- Added citations for the figures and added units
- Corrected typographical errors throughout the thesis
- Added a list of symbols with their meanings and units
- Added missing information to the bibliography
- Clarified the reuse of code: which parts were reused and where they come from

Hanoi, ……/11/2016
Supervisor: TS. Vũ Hải
Author of the thesis: Nguyễn Văn Giáp
Chairman of the examination committee: GS.TS. Eric Castelli

DECLARATION OF AUTHORSHIP

I, Nguyen Van Giap, declare that this thesis, titled "Vision-based localization", and the work presented in it are my own. I confirm that:
- This work was done wholly or mainly while in candidature for a research degree at this University.
- Where any part of this thesis has previously been submitted for a degree or any other qualification at this University or any other institution, this has been clearly stated.
- Where I have consulted the published work of others, this is always clearly attributed.
- Where I have quoted from the work of others, the source is always given. With the exception of such quotations, this thesis is entirely my own work.
- I have acknowledged all main sources of help.
- Where the thesis is based on work done by myself jointly with others, I have made clear exactly what was done by others and what I have contributed myself.

Signed:
Date:

HANOI UNIVERSITY OF SCIENCE AND TECHNOLOGY
International Research Institute MICA
Computer Vision Department
Master of Science

VISION-BASED LOCALIZATION
by Nguyen Van Giap

ABSTRACT

Nowadays, localization systems using vision sensors are widely deployed in public and crowded places. Positioning information extracted from the image streams of surveillance cameras can support monitoring services in different ways, for instance to detect people who are not allowed to enter a certain place, or to link human trajectories and then identify abnormal behaviors. Such services always require positioning information about the subjects of interest. Meanwhile, other localization techniques are still limited in range and accuracy (e.g., Wi-Fi, RFID) or in environment setup and usability (e.g., Bluetooth, GPS). Vision-based localization, particularly in indoor environments, has many advantages: it is scalable, highly accurate, and requires no additional equipment or devices attached to the subjects. Motivated by these advantages, this thesis aims to study and propose a high-accuracy vision-based localization system for indoor environments. We also take into account detailed implementations and developments, as well as testing of the proposed techniques. Note that the thesis focuses on humans moving in indoor environments monitored by a surveillance camera network. To achieve a high-accuracy positioning system, the thesis deals with the critical issues of vision-based localization. Observing that no human detector and tracker is perfect, we use a regression model to eliminate outlier detections, which improves the detection and tracking results. Throughout the thesis, we first give a brief overview of vision-based localization. We then present the proposed framework, which consists of the following steps: background subtraction for detecting the moving subject; shadow removal techniques for improving the detection result; a linear regression method for eliminating outliers; and finally object tracking using a Kalman filter. The most important result of the thesis is the demonstration of high accuracy and real-time computation for human positioning in indoor environments. The evaluations are carried out in several indoor environments.
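To make the steps listed in the abstract concrete, the following is a minimal, illustrative sketch of such a pipeline in Python with OpenCV. It is not the code used in the thesis: the video file name, the background-subtractor parameters, the constant expected height in height_model and the outlier threshold of 40 pixels are all assumptions introduced only to show how background subtraction, shadow suppression, regression-based outlier rejection and Kalman-filter tracking fit together.

```python
# Illustrative sketch only (not the thesis code): background subtraction with
# shadow suppression, a crude height-based outlier check, and Kalman tracking.
import cv2
import numpy as np

# MOG2 can mark shadow pixels (value 127) so they can be suppressed afterwards.
bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

# Constant-velocity Kalman filter over the tracked foot point (x, y).
kalman = cv2.KalmanFilter(4, 2)
kalman.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
kalman.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
kalman.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kalman.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def height_model(x, y):
    """Placeholder for the regression model that predicts the expected
    blob height (in pixels) at image position (x, y)."""
    return 180.0  # assumed constant; in the thesis this is fitted to ground truth

cap = cv2.VideoCapture("corridor.avi")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    state = kalman.predict()                  # a priori estimate for this frame
    mask = bg_subtractor.apply(frame)
    mask[mask == 127] = 0                     # drop pixels flagged as shadow
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        foot = np.array([[x + w / 2.0], [y + float(h)]], np.float32)  # foot point
        # Reject detections whose height disagrees strongly with the model.
        if abs(h - height_model(foot[0, 0], foot[1, 0])) < 40:  # assumed threshold
            state = kalman.correct(foot)      # a posteriori (corrected) estimate
    print(state[0, 0], state[1, 0])           # tracked foot point in the image
cap.release()
```

In the actual framework the constant returned by height_model would be replaced by the regression model described in Chapter 3, and the tracked image point would be mapped to a real-world position through the homography obtained from calibration.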
ACKNOWLEDGEMENTS

I am honored to be here for the second time, at one of the finest universities in Vietnam, to write these grateful words to the people who have supported and guided me from the very first moment, when I was an undergraduate student, until now, when I am writing my master thesis. I am grateful to my supervisor, Dr. Vu Hai, whose expertise, understanding, generous guidance and support made it possible for me to work on research topics that were of great interest to me. It is a pleasure to work with him. I would like to give special thanks to Dr. Le Thi Lan, Dr. Tran Thi Thanh Hai and all of the members of the Computer Vision Department, MICA Institute, for their sharp comments and guidance, which helped me a great deal to learn how to study and do research in the right way, and for the valuable advice and encouragement they gave me during my thesis. In particular, I would like to express my appreciation to Ph.D. student Pham Thi Thanh Thuy, who allowed me to use a valuable database of human tracking in a surveillance camera network. Without her permission, I could not have carried out extensive evaluations of the proposed method. Finally, I would especially like to thank my family and friends for the continuous love and support they have given me throughout my life, which helped me get through everything frustrating, struggling and confusing. Thanks for everything that helped me get to this day.

Hanoi, 10/2016
Nguyen Van Giap

CONTENTS

ACKNOWLEDGEMENTS
LIST OF FIGURES
LIST OF TABLES
ABBREVIATIONS
Chapter 1: INTRODUCTION
  1.1 Context and Motivation
  1.2 Objectives
  1.3 Vision-based localization and main contributions
  1.4 Scope and limitations of the research
  1.5 Thesis Outline
Chapter 2: RELATED WORK
  2.1 A brief survey on localization techniques
  2.2 A brief survey on vision-based localization systems
Chapter 3: PROPOSED FRAMEWORK
  3.1 Formulating the vision-based localization
  3.2 Background subtraction
  3.3 Post-processing procedure
  3.4 Shadow removal techniques
    3.4.1 Chromaticity-based feature extraction
    3.4.2 Shadow-matching score utilizing physical properties
    3.4.3 Existing issues after applying shadow removal
  3.5 Localization estimation using regression
    3.5.1 Linear regression
    3.5.2 Definition of Gaussian processes
    3.5.3 Regression model
    3.5.4 Estimating height with the regression model
    3.5.5 Outlier removal
  3.6 Object tracking
Chapter 4: EXPERIMENTAL EVALUATIONS
  4.1 Experimental setup
  4.2 Evaluation results of the BGS and shadow removal
  4.3 Evaluation results of the Gaussian process regression
    4.3.1 Evaluation results with GP
    4.3.2 Evaluation of a suitable method
    4.3.3 Discussion of 
  4.4 The final evaluation results of the proposed system
Chapter 5: CONCLUSION AND FUTURE WORKS
  5.1 Conclusion
  5.2 Future works
PUBLICATION
BIBLIOGRAPHY
APPENDIX

LIST OF FIGURES

Figure 1.1: Indoor localization techniques
Figure 1.2: Surveillance camera network
Figure 1.3: Positioning a human from a video stream
Figure 1.4: Cast shadow problem: (a) original image; (b)-(c) cast shadows; (d) mask of the object; (e) shadow pixels
Figure 1.5: Some examples of the experimental environments: (a) hallway; (b) lobby; and (c) in a room
Figure 2.1: The flow chart of a common vision-based localization technique
Figure 2.2: Kinds of object shadows ([9])
Figure 2.3: An illustration of the human tracking results in [9]
Figure 2.4: Different tracking approaches
Figure 3.1: Foot-point definition
Figure 3.2: Transformation of a 2-D image point into a 3-D point in the real world
Figure 3.3: Calibration procedure. Top row: the original image; bottom row: the corner points detected for calculating the homography matrix
Figure 3.4: A wrong tracked-point detection
Figure 3.5: The general flow chart of the proposed method
Figure 3.6: Result of BGS with an adaptive Gaussian mixture
Figure 3.7: Widespread object shadow: (a) original situation; (b) mask situation
Figure 3.8: An illustration of wrong object detection results due to shadow appearances
Figure 3.9: Noise caused by illumination changes
Figure 3.10: Results of preprocessing
Figure 3.11: Example of using chromaticity-based features for shadow removal
Figure 3.12: (a) Physical shadow model and examples of (b) the original image, (c) log-scale of the _(p) property, (d) log-scale of the _(p) property
Figure 3.13: Problems of shadow removal
Figure 3.14: Graphical model of the Gaussian regression
Figure 3.15: Position and height of the object
Figure 3.16: Position and height of the object
Figure 3.17: Ground-truth dataset used to train the GP model
Figure 3.18: Detected height of the object (H_det)
Figure 3.19: Estimated height of the object (H_est)
Figure 3.20: Tracking results for the object height with the GP model, scenario
Figure 3.21: Height of the object, tracking results, scenario
Figure 3.22: Outliers of the tracked object and the result after outlier removal
Figure 3.23: Consensus between H_det and H_est
Figure 3.24: Result of the Kalman filter
Figure 3.25: Tracking results with and without processing
Figure 4.1: Testing environment
Figure 4.2: Processing the dataset
Figure 4.3: Some frames of scenario
Figure 4.4: Some frames of scenario
Figure 4.5: Low-quality tracking results
Figure 4.6: Mapping of the moving-object result with BGS and shadow removal
Figure 4.7: Comparison of tracking with BGS: H_det versus H_gt
Figure 4.9: Comparison of tracking with BGS and shadow removal: H_det versus H_gt
Figure 4.10: Results with BGS, shadow removal and GP
Figure 4.11: Results with application of t
Figure 4.12: Calculation of  in scenario
Figure 4.13: Result with , scenario
Figure 4.14: Result with , scenario
Figure 4.15: Values of , scenario

[Figure 4.11: Results with application of t; (b) results of the tracked points of the object]

Fig. 4.11 and Fig. 4.10 show that applying  improves the tracking result: the mapped detections and the track of the moving object fit the ground-truth line closely.

4.3.2 Evaluation of a suitable method

We proposed to remove outlier tracking points by adding Gaussian process regression to the framework. The reason we suggest the GP for our regression model is the strong relation between the position of the moving object P(x,y) and the heights of the moving object H_det, H_gt and H_est. This is detailed below.
In our test processing, we gathered ground-truth data (examples are given in Appendix 1) that shows a strong relation between the position of the moving object P(x,y) and the height of the object. Testing the correlation between P(x,y) and the height of the moving object on the ground-truth data, we find the correlation coefficient of the model to be R = 0.91 (out of 1.0) and adjusted R² = 0.824, which means that 82.4% of the variance of the height of the moving object is explained by P(x,y) (see Table 4.5).

Table 4.4: Testing the correlation between position and height of the moving object (Variables Entered/Removed (a))
  Model   Variables Entered   Variables Removed   Method
  1       GT.y, GT.x (b)                          Enter
  a. Dependent Variable: H_gt
  b. All requested variables entered

Table 4.5: Relation of position and height of the moving object (Model Summary (b))
  Model   R          R Square   Adjusted R Square   Std. Error of the Estimate
  1       .910 (a)   .828       .824                26.753229
  a. Predictors: (Constant), GT.y, GT.x
  b. Dependent Variable: H_gt

In the ANOVA analysis (Table 4.6), Sig. = 0.000
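As a sanity check, the values reported in Table 4.5 can be reproduced with an ordinary least-squares fit of H_gt on GT.x and GT.y. The snippet below is an illustrative sketch rather than the tool used for the thesis analysis (the tables above follow an SPSS-style layout); the file name ground_truth.csv and its column names are assumptions that simply mirror the table headers.

```python
# Illustrative check of the R, R^2 and adjusted R^2 values in Table 4.5,
# using an ordinary least-squares fit of H_gt on (GT.x, GT.y).
import csv
import numpy as np

xs, ys = [], []
with open("ground_truth.csv", newline="") as f:                  # assumed file layout
    for row in csv.DictReader(f):
        xs.append([1.0, float(row["GT.x"]), float(row["GT.y"])])  # intercept + predictors
        ys.append(float(row["H_gt"]))

X = np.asarray(xs)
y = np.asarray(ys)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)                     # OLS coefficients

residuals = y - X @ beta
ss_res = float(residuals @ residuals)
ss_tot = float(((y - y.mean()) ** 2).sum())

n, k = len(y), 2                                                 # samples, predictors
r2 = 1.0 - ss_res / ss_tot
adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
std_err = np.sqrt(ss_res / (n - k - 1))                          # std. error of the estimate

print(f"R = {np.sqrt(r2):.3f}, R^2 = {r2:.3f}, "
      f"adjusted R^2 = {adj_r2:.3f}, std. error = {std_err:.3f}")
```

Run on the thesis ground-truth data, this should give approximately R = 0.910, R² = 0.828, adjusted R² = 0.824 and a standard error of about 26.75, matching Table 4.5.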
