State Estimation in Autonomous Unmanned Aerial Vehicle Landing

Submitted to the Department of Aerospace Engineering, Faculty of Transportation Engineering, in partial fulfillment of the requirements for the degree of Master of Science in Aerospace Engineering at Ho Chi Minh City University of Technology (HCMUT) - Vietnam National University HCMC (VNU-HCM).

Hoang Dinh Thinh
July 2020

Acknowledgements

I would like to express my deep appreciation of the valuable comments from my advisors, Dr. Ngo Dinh Tri (Viettel Aerospace) and Dr. Le Thi Hong Hieu (HCMUT), and of my father and mother for their endless care and nurturing. I am also greatly indebted to my significant other, Tran Nhat Vy, for providing me with a lot of support, and to my ophthalmologist, Huynh Vo Mai Quyen, MD, whose talents, cordiality and graciousness comforted me a lot during the difficult days of treatment. I am also grateful to my labmates Nguyen Xuan Thanh Dat, Phan Minh Cuong, Huynh The Hai Nam, Tran Manh Hung and Huynh Tan Phuoc for their help with the black box and the experiments. I would also like to express my sincere thanks to Dr. Ben M. Chen from the Chinese University of Hong Kong, Dr. Yan Wan from the University of Texas at Arlington, and other anonymous reviewers for providing me with various insights concerning the shortcomings of my work. Finally, I would like to express my appreciation for the work of the many scientists, medical professionals, police and military forces in their collective attempt to contain the coronavirus outbreak. My acknowledgement also goes to the many scientists working on ophthalmic and neurologic conditions, as their work brings hope and inspiration to many patients to keep carrying on.

Amidst the coronavirus outbreak, Ho Chi Minh City, July 2020
Hoang Dinh Thinh

Abstract

This thesis summarizes two papers of my work on the localization problem of a rotary-wing aircraft around a landing target, which involves estimating the aircraft's relative position with respect to the landing target by fusing data obtained from an Inertial Measurement Unit (IMU) and a visible-light RGB camera. This information is crucial to the development of a robust and consistently performing autonomous landing control algorithm - a key to autonomous UAV fleet operation, as well as to search and rescue. A unified approach comprising three algorithms is presented: the first is designed to run when the landing target is visible to the camera, and the second provides localization information to the controller when the landing target temporarily moves out of view. In this manner, the landing process can be performed continuously, yielding consistent performance without cancellation or restart when the aircraft is perturbed and moves away from the helipad. The last algorithm is proposed for calibration of the accelerometer.

For the first algorithm, a convolutional neural network was employed for recognition of the helipad in the image. Combined with attitude data obtained from the IMU, localization data can be inferred. This approach is indifferent to different helipad designs and landing targets, and is capable of performing in real time on a Raspberry Pi 3B computer. Experiments show that while the estimation is noisy, it is unbiased and shows decent agreement with estimates from fiducial marker approaches such as ARUCOTag.

For the second algorithm, we developed a Linear Time Varying (LTV) Kalman Filter for localizing the aircraft based on optical flow and IMU gyroscope and accelerometer data.
Through transformation via virtual measurements, the LTV measurement model allows a much simpler analytical treatment than the linearization of the camera's projection model in EKF approaches. We provide a proof of a theorem that extends the result of contraction analysis for general Hamiltonian systems to encompass second-order dynamics, resulting in stability of the filter when assimilating acceleration, rather than velocity, data directly through the process model. The algorithm demonstrated excellent localization performance over a time window of 30 seconds without loop closure techniques, light complexity compared to fiducial marker approaches, and agreement with monocular ORB-SLAM2, a state-of-the-art SLAM system. We also developed a prototyping device and detailed the accelerometer calibration process, including a novel calibration algorithm that operates with several ARUCOTags. Although rigorous testing of this calibration method was not performed, the calibrated accelerometer has very little residual gravity and contributed significantly to the estimation of an accurate scale in the aforementioned localization algorithms.

Keywords: localization, SLAM, estimation theory

Commitment

I hereby commit that this is my scientific work. The results are presented as-is and have never been published anywhere else, except in the papers that I authored. Under no circumstance did I intentionally claim facts from other people's work without attributing them properly in the bibliography.

Hoang Dinh Thinh

Chapter 5: Implementation and Calibration

Table 5.2: Calibration result

Parameter              Value
X axis misalignment    -0.00573419838339676
Y axis misalignment    -0.00186736890453765
Z axis misalignment    -0.00900500887137062
Scale X                 1.20767818666729
Scale Y                 1.16825035803752
Scale Z                 1.16585007950014
Bias X                 -0.214331157891075
Bias Y                 -0.116907377183635
Bias Z                  0.332252604436723

Note that if the $\Psi_i$ matrices are known in advance, the optimization problem is convex, and a unique solution can be found. Equation 5.3 can easily be transformed into a residual term, and the least squares method is then employed. In practice, we performed 50 runs, each lasting 2-3 seconds, moving the IMU in all directions. The calibration result was obtained with the Trust Region Reflective method. The implementation of this calibration algorithm is included in the repository listed at the end of this chapter, in the folder IMUCALIB. The calibration result is shown in Table 5.2. The $\alpha_{zx}$, $\alpha_{yz}$, $\alpha_{zy}$ terms are small, close to zero, which validates our small-angle approximation. The comparison of gravitational acceleration before and after calibration, when the IMU was left resting at different attitudes, is shown in Figure 5.3.

Figure 5.3: Gravitational acceleration before and after calibration
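To make the fitting step concrete, here is a minimal sketch of solving such a least-squares problem with the Trust Region Reflective method via SciPy. It uses the nine-parameter accelerometer model of [91] (misalignments, per-axis scales and biases, mirroring Table 5.2) with a gravity-norm residual rather than the thesis's $\Psi_i$-based residual of Equation 5.3; the file name and data layout are hypothetical, not the IMUCALIB code:

```python
import numpy as np
from scipy.optimize import least_squares

G = 9.80665  # reference gravity magnitude, m/s^2

def correct(theta, a_raw):
    """Nine-parameter accelerometer model of [91]: a = T @ K @ (a_raw - b)."""
    a_yz, a_zy, a_zx, sx, sy, sz, bx, by, bz = theta
    T = np.array([[1.0, -a_yz,  a_zy],   # small-angle axis misalignment
                  [0.0,  1.0,  -a_zx],
                  [0.0,  0.0,   1.0]])
    K = np.diag([sx, sy, sz])             # per-axis scale factors
    b = np.array([bx, by, bz])            # per-axis biases
    return (T @ K @ (a_raw - b).T).T

def residuals(theta, readings):
    """The norm of each corrected resting reading should equal |g|."""
    return np.linalg.norm(correct(theta, readings), axis=1) - G

# readings: (N, 3) array, one averaged accelerometer sample per resting
# attitude, collected over the calibration runs (hypothetical file)
readings = np.load('static_accel.npy')
theta0 = np.array([0, 0, 0, 1, 1, 1, 0, 0, 0], dtype=float)
sol = least_squares(residuals, theta0, args=(readings,), method='trf')
print(sol.x)  # misalignments, scales and biases, as reported in Table 5.2
```

Because the residual is smooth and the parameter count is small, the solver converges quickly; `method='trf'` also accepts box bounds on the parameters if they are needed.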
5.3 Data acquisition

The data acquisition module is programmed in Python 3, using the picamera library to communicate with the Pi camera and RTEMISLib to communicate with the IMU. Since RTEMISLib already includes compensation for the gyroscope and acceleration values, the calibrator can work with these values straight out of the box. Because the Pi camera uses significant buffer memory for storing the image, we encountered a significant lag when running RTEMISLib and picamera on the same thread. The decision was made to separate these two modules into different processes running on different CPU cores, as sketched below. The images and IMU readings were saved at average rates of 10 times/sec and 100 times/sec respectively. The sampling times of the two sensors are not constant but vary slightly; resampling of the data was nevertheless unnecessary, since all the expressions above were written in time-varying form. The DAQ code can be found in the daq_main.py file.
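A minimal sketch of this two-process arrangement follows, assuming the standard picamera and RTIMULib (RTIMU) Python bindings; RTEMISLib is the author's fork, so names, rates, and file paths here are illustrative rather than the actual daq_main.py:

```python
import time
from multiprocessing import Process

def camera_worker(period=0.1):
    """Capture frames at roughly 10 Hz in a dedicated process."""
    import picamera  # imported here so the camera buffer lives in this process
    with picamera.PiCamera() as cam:
        while True:
            t = time.time()
            cam.capture('frame_%.3f.jpg' % t, use_video_port=True)
            time.sleep(max(0.0, period - (time.time() - t)))

def imu_worker(period=0.01):
    """Log IMU readings at roughly 100 Hz, each with its own timestamp."""
    import RTIMU  # Python binding shipped with RTIMULib
    settings = RTIMU.Settings('RTIMULib')  # expects an RTIMULib.ini nearby
    imu = RTIMU.RTIMU(settings)
    imu.IMUInit()
    with open('imu_log.csv', 'w') as log:
        while True:
            if imu.IMURead():
                d = imu.getIMUData()
                log.write('%f,%s,%s\n' % (time.time(), d['accel'], d['gyro']))
            time.sleep(period)

if __name__ == '__main__':
    procs = [Process(target=camera_worker), Process(target=imu_worker)]
    for p in procs:
        p.start()  # separate processes, schedulable on separate CPU cores
    for p in procs:
        p.join()
```

Running the camera in its own process keeps its buffering from blocking the IMU loop, and per-sample timestamps preserve the time-varying formulation noted above.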
5.4 Camera calibration

The camera was calibrated following the tutorial in [92], which uses OpenCV's standard chessboard procedure; a condensed sketch of that procedure is shown below.
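This sketch assumes a 9x6 inner-corner chessboard and a folder of calibration images, both placeholders; it is a summary of the referenced tutorial, not the thesis's calibration script:

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner chessboard corners (placeholder)
# 3D corner coordinates in the board frame, on the z = 0 plane
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob('calib/*.jpg'):  # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K is the 3x3 intrinsic matrix; dist holds the distortion coefficients
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(K, dist)
```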
5.5 Algorithm implementation

The algorithms presented in Chapters 3 and 4 were implemented in Python with the numpy library. The calibration of the IMU, however, was done in MATLAB.

5.6 Open-source release

All the implementation code for this thesis can be found in the following repositories:

• https://github.com/thinhhoang95/RTIMULib2
• https://github.com/thinhhoang95/helipad-yolo
• https://github.com/thinhhoang95/panacea-astro
• https://github.com/thinhhoang95/panacea

Chapter 6: Conclusion

6.1 Conclusion

In this thesis, we have presented three approaches to the problem of helipad localization in autonomous landing, covering two circumstances. The first algorithm estimates the helipad's position when the helipad is visible to the camera, using deep learning and simple constraints derived from the projection of an object whose size is known in advance. The second algorithm provides localization information based on filtering, with a novel analytical proof of a necessary condition for stability. Finally, an accelerometer calibration algorithm was presented to help cancel out the residual acceleration due to gravity. Although the UAV control problem (that is, autonomously landing the aircraft on the helipad) was not studied in this thesis, it was addressed in Nam Huynh's graduation thesis, which I co-developed [76].

These results differ from available approaches in SLAM and autonomous landing in that they present a holistic solution to localization in both circumstances: landing target visible and landing target out of sight. This work also contributes to the understanding of the analytical behaviour of the LTV Kalman Filter, expanding the literature on contraction analysis, and paves the way for interesting engagements of drones in both economic and social missions.

6.2 Future Work

As these results arose from theoretical analysis and implementations are already available, the filtering and optimization code will be subject to substantial optimization, for example exploiting the sparsity of the matrices and performing parallel computing to reduce computation time. On the theoretical side, the behavior of the covariance matrix of the proposed LTV filter should be studied in the case where uniform observability cannot be established due to insufficient feature observations.

Bibliography

[1] Skybrary. (2019). Unmanned aerial systems (UAS). [Online]. Available: https://www.skybrary.aero/index.php/Unmanned_Aerial_Systems_(UAS)#Definitions
[2] C. A. Rokhmana, "The potential of UAV-based remote sensing for supporting precision agriculture in Indonesia," Procedia Environmental Sciences, vol. 24, pp. 245-253, 2015.
[3] E. Semsch, M. Jakob, D. Pavlicek, and M. Pechoucek, "Autonomous UAV surveillance in complex urban environments," in Proceedings of the 2009 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology - Volume 02, IEEE Computer Society, 2009, pp. 82-85.
[4] P. Doherty and P. Rudol, "A UAV search and rescue scenario with human body detection and geolocalization," in Australasian Joint Conference on Artificial Intelligence, Springer, 2007, pp. 1-13.
[5] J. Mo, A. Z. Chen, et al., "UAV delivery system design and analysis," in 17th Australian International Aerospace Congress: AIAC 2017, Engineers Australia, Royal Aeronautical Society, 2017, p. 146.
[6] K. Máthé and L. Buşoniu, "Vision and control for UAVs: A survey of general methods and of inexpensive platforms for infrastructure inspection," Sensors, vol. 15, no. 7, pp. 14887-14916, 2015.
[7] A. Zulu and S. John, "A review of control algorithms for autonomous quadrotors," arXiv preprint arXiv:1602.02622, 2016.
[8] L. Merino, F. Caballero, J. Martinez-de Dios, and A. Ollero, "Cooperative fire detection using unmanned aerial vehicles," in Proceedings of the 2005 IEEE International Conference on Robotics and Automation, IEEE, 2005, pp. 1884-1889.
[9] Brandesscence, "Last-mile delivery market," Brandesscence Research, Tech. Rep., 2018.
[10] Amazon. (2020). Amazon Prime Air. [Online]. Available: https://www.amazon.com/Amazon-Prime-Air
[11] DroneLife. (Sep. 2019). Droneii: The drone delivery market map. [Online]. Available: https://dronelife.com/2019/11/07/droneii-the-drone-delivery-market-map
[12] D. R. Dale et al., "Automated ground maintenance and health management for autonomous unmanned aerial vehicles," PhD thesis, Massachusetts Institute of Technology, 2007.
[13] K. A. Swieringa, C. B. Hanson, J. R. Richardson, J. D. White, Z. Hasan, E. Qian, and A. Girard, "Autonomous battery swapping system for small-scale helicopters," in 2010 IEEE International Conference on Robotics and Automation, IEEE, 2010, pp. 3335-3340.
[14] A. B. Junaid, Y. Lee, and Y. Kim, "Design and implementation of autonomous wireless charging station for rotary-wing UAVs," Aerospace Science and Technology, vol. 54, pp. 253-266, 2016.
[15] V. Grindheim, "Accurate drop of a GPS beacon using the X8 fixed-wing UAV," Master's thesis, NTNU, 2015.
[16] D. Boura, K. Strang, W. Semke, R. Schultz, and D. Hajicek, "Automated air drop system for search and rescue applications utilizing unmanned aircraft systems," in Infotech@Aerospace 2011, 2011, p. 1528.
[17] W. Lohmiller and J.-J. Slotine, "Contraction analysis of nonlinear Hamiltonian systems," in 52nd IEEE Conference on Decision and Control, IEEE, 2013, pp. 6586-6592.
[18] F. Tan, W. Lohmiller, and J.-J. Slotine, "Simultaneous localization and mapping without linearization," arXiv preprint arXiv:1512.08829, 2015.
[19] D. Forsyth and J. Ponce, Computer Vision: A Modern Approach. Pearson Education Limited, 2015, ISBN: 9781292014081. [Online]. Available: https://books.google.com.vn/books?id=pAWpBwAAQBAJ
[20] Wikipedia. (Aug. 2020). Pinhole camera model. [Online]. Available: https://en.wikipedia.org/wiki/Pinhole_camera_model
[21] R. E. Kalman, "A new approach to linear filtering and prediction problems," 1960.
[22] Y. Bar-Shalom, X. Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation: Theory, Algorithms and Software. Wiley, 2004, ISBN: 9780471465218. [Online]. Available: https://books.google.com.vn/books?id=xz9nQ4wdXG4C
[23] S. J. Julier and J. K. Uhlmann, "New extension of the Kalman filter to nonlinear systems," in Signal Processing, Sensor Fusion, and Target Recognition VI, International Society for Optics and Photonics, vol. 3068, 1997, pp. 182-193.
[24] S. Thrun, "Particle filters in robotics," in Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann Publishers Inc., 2002, pp. 511-518.
[25] M. Bocquet, K. S. Gurumoorthy, A. Apte, A. Carrassi, C. Grudzien, and C. K. Jones, "Degenerate Kalman filter error covariances and their convergence onto the unstable subspace," SIAM/ASA Journal on Uncertainty Quantification, vol. 5, no. 1, pp. 304-333, 2017.
[26] W. Li, G. Wei, D. Ding, Y. Liu, and F. E. Alsaadi, "A new look at boundedness of error covariance of Kalman filtering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 48, no. 2, pp. 309-314, 2016.
[27] N. Thacker and A. Lacey, "Tutorial: The Kalman filter."
[28] G. Welch, G. Bishop, et al., An Introduction to the Kalman Filter, 1995.
[29] M. B. Rhudy and Y. Gu, "Online stochastic convergence analysis of the Kalman filter," International Journal of Stochastic Analysis, 2013.
[30] S. Thrun, W. Burgard, D. Fox, and R. Arkin, Probabilistic Robotics, ser. Intelligent Robotics and Autonomous Agents. MIT Press, 2005, ISBN: 9780262201629. [Online]. Available: https://books.google.com.vn/books?id=2Zn6AQAAQBAJ
[31] A. I. Mourikis and S. I. Roumeliotis, "A multi-state constraint Kalman filter for vision-aided inertial navigation," in Proceedings 2007 IEEE International Conference on Robotics and Automation, IEEE, 2007, pp. 3565-3572.
[32] C. T. Kelley, Iterative Methods for Optimization. SIAM, 1999.
[33] F. J. Romero-Ramirez, R. Muñoz-Salinas, and R. Medina-Carnicer, "Speeded up detection of squared fiducial markers," Image and Vision Computing, vol. 76, pp. 38-47, 2018.
[34] VIB. (2017). Navigation and spatial memory: New brain region identified to be involved. [Online]. Available: https://www.sciencedaily.com/releases/2017/08/170816085537.htm
[35] M. J. Milford, G. F. Wyeth, and D. Prasser, "RatSLAM: A hippocampal model for simultaneous localization and mapping," in IEEE International Conference on Robotics and Automation, 2004 (ICRA '04), IEEE, vol. 1, 2004, pp. 403-408.
[36] L. Carlone and A. Censi, "From angular manifolds to the integer lattice: Guaranteed orientation estimation with application to pose graph optimization," IEEE Transactions on Robotics, vol. 30, no. 2, pp. 475-492, 2014.
[37] F. Lu and E. Milios, "Globally consistent range scan alignment for environment mapping," Autonomous Robots, vol. 4, no. 4, pp. 333-349, 1997.
[38] M. Montemerlo, S. Thrun, D. Koller, B. Wegbreit, et al., "FastSLAM: A factored solution to the simultaneous localization and mapping problem," AAAI/IAAI, pp. 593-598, 2002.
[39] G. Grisetti, C. Stachniss, and W. Burgard, "Nonlinear constraint network optimization for efficient map learning," IEEE Transactions on Intelligent Transportation Systems, vol. 10, no. 3, pp. 428-439, 2009.
[40] R. Kümmerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, "g2o: A general framework for graph optimization," in 2011 IEEE International Conference on Robotics and Automation, IEEE, 2011, pp. 3607-3613.
[41] M. Kaess, H. Johannsson, R. Roberts, V. Ila, J. Leonard, and F. Dellaert, "iSAM2: Incremental smoothing and mapping with fluid relinearization and incremental variable reordering," in 2011 IEEE International Conference on Robotics and Automation, IEEE, 2011, pp. 3281-3288.
[42] T. Venugopalan, T. Taher, and G. Barbastathis, "Autonomous landing of an unmanned aerial vehicle on an autonomous marine vehicle," in 2012 Oceans, IEEE, 2012, pp. 1-9.
[43] A. Junaid, A. Konoiko, Y. Zweiri, M. Sahinkaya, and L. Seneviratne, "Autonomous wireless self-charging for multi-rotor unmanned aerial vehicles," Energies, vol. 10, no. 6, p. 803, 2017.
[44] J. Kim, Y. S. Lee, S. S. Han, S. H. Kim, G. H. Lee, H. J. Ji, H. J. Choi, and K. N. Choi, "Autonomous flight system using marker recognition on drone," in 2015 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV), IEEE, 2015, pp. 1-4.
[45] R. Barták, A. Hraško, and D. Obdržálek, "A controller for autonomous landing of AR Drone," in The 26th Chinese Control and Decision Conference (2014 CCDC), IEEE, 2014, pp. 329-334.
[46] Y. Bi and H. Duan, "Implementation of autonomous visual tracking and landing for a low-cost quadrotor," Optik - International Journal for Light and Electron Optics, vol. 124, no. 18, pp. 3296-3300, 2013.
[47] P. Smyczyński, L. Starzec, and G. Granosik, "Autonomous drone control system for object tracking: Flexible system design with implementation example," in 2017 22nd International Conference on Methods and Models in Automation and Robotics (MMAR), IEEE, 2017, pp. 734-738.
[48] M. Skoczylas, "Vision analysis system for autonomous landing of micro drone," Acta Mechanica et Automatica, vol. 8, no. 4, pp. 199-203, 2014.
[49] V. Sudevan, A. Shukla, and H. Karki, "Vision based autonomous landing of an unmanned aerial vehicle on a stationary target," in 2017 17th International Conference on Control, Automation and Systems (ICCAS), IEEE, 2017, pp. 362-367.
[50] O. Araar, N. Aouf, and I. Vitanov, "Vision based autonomous landing of multirotor UAV on moving platform," Journal of Intelligent & Robotic Systems, vol. 85, no. 2, pp. 369-384, 2017.
[51] D. K. Kim and T. Chen, "Deep neural network for real-time autonomous indoor navigation," arXiv preprint arXiv:1511.04668, 2015.
[52] R. Polvara, M. Patacchiola, S. Sharma, J. Wan, A. Manning, R. Sutton, and A. Cangelosi, "Autonomous quadrotor landing using deep reinforcement learning," arXiv preprint arXiv:1709.03339, 2017.
[53] P. Nguyen, M. Arsalan, J. Koo, R. Naqvi, N. Truong, and K. Park, "LightDenseYOLO: A fast and accurate marker tracker for autonomous UAV landing by visible light camera sensor on drone," Sensors, vol. 18, no. 6, p. 1703, 2018.
[54] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
[55] T. Baca, P. Stepan, and M. Saska, "Autonomous landing on a moving car with unmanned aerial vehicle," in 2017 European Conference on Mobile Robots (ECMR), IEEE, 2017, pp. 1-6.
[56] M. Robinson, Topological Signal Processing. Springer, 2014.
[57] D. Nistér, O. Naroditsky, and J. Bergen, "Visual odometry," in Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), IEEE, vol. 1, 2004, pp. I-I.
[58] G. Klein and D. Murray, "Parallel tracking and mapping for small AR workspaces," in Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, IEEE Computer Society, 2007, pp. 1-10.
[59] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, "MonoSLAM: Real-time single camera SLAM," IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 6, pp. 1052-1067, 2007.
[60] R. A. Newcombe, S. J. Lovegrove, and A. J. Davison, "DTAM: Dense tracking and mapping in real-time," in 2011 International Conference on Computer Vision, IEEE, 2011, pp. 2320-2327.
[61] C. Kerl, J. Sturm, and D. Cremers, "Dense visual SLAM for RGB-D cameras," in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2013, pp. 2100-2106.
[62] R. A. Newcombe, S. Izadi, O. Hilliges, et al., "KinectFusion: Real-time dense surface mapping and tracking," in ISMAR, vol. 11, 2011, pp. 127-136.
[63] C. Forster, Z. Zhang, M. Gassner, M. Werlberger, and D. Scaramuzza, "SVO: Semidirect visual odometry for monocular and multicamera systems," IEEE Transactions on Robotics, vol. 33, no. 2, pp. 249-265, 2016.
[64] J. Engel, T. Schöps, and D. Cremers, "LSD-SLAM: Large-scale direct monocular SLAM," in European Conference on Computer Vision, Springer, 2014, pp. 834-849.
[65] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[66] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up robust features," in European Conference on Computer Vision, Springer, 2006, pp. 404-417.
[67] E. Rublee, V. Rabaud, K. Konolige, and G. R. Bradski, "ORB: An efficient alternative to SIFT or SURF," in ICCV, vol. 11, 2011.
[68] R. Mur-Artal and J. D. Tardós, "ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255-1262, 2017.
[69] S. Leutenegger, P. Furgale, V. Rabaud, M. Chli, K. Konolige, and R. Siegwart, "Keyframe-based visual-inertial SLAM using nonlinear optimization," Proceedings of Robotics: Science and Systems (RSS) 2013, 2013.
[70] C. Forster, L. Carlone, F. Dellaert, and D. Scaramuzza, IMU Preintegration on Manifold for Efficient Visual-Inertial Maximum-a-Posteriori Estimation, Georgia Institute of Technology, 2015.
[71] T. Qin, P. Li, and S. Shen, "VINS-Mono: A robust and versatile monocular visual-inertial state estimator," IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004-1020, 2018.
[72] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.
[73] Ultralytics. (Aug. 2020). YOLOv3. [Online]. Available: https://github.com/ultralytics/yolov3
[74] J. Redmon, Darknet: Open source neural networks in C, http://pjreddie.com/darknet/, 2013-2016.
[75] Y. Kwon. (Aug. 2020). Yolo Label. [Online]. Available: https://github.com/developer0hye/Yolo_Label
[76] H. T. H. Nam, "Autonomous UAV landing for drones," Graduation Thesis, Ho Chi Minh City University of Technology, 2020.
[77] E. Lefferts, F. L. Markley, and M. D. Shuster, "Kalman filtering for spacecraft attitude estimation," Journal of Guidance, Control, and Dynamics, vol. 5, no. 5, pp. 417-429, 1982.
[78] A. López-Cerón and J. M. Cañas, "Accuracy analysis of marker-based 3D visual localization," in XXXVII Jornadas de Automática Workshop, 2016.
[79] Richards-Tech. (2018). RTIMULib2. [Online]. Available: https://github.com/RTIMULib/RTIMULib2
[80] F. Tan, W. Lohmiller, and J.-J. Slotine, "Analytical SLAM without linearization," The International Journal of Robotics Research, vol. 36, no. 13-14, pp. 1554-1578, 2017.
[81] R. Kan, "From moments of sum to moments of product," Journal of Multivariate Analysis, vol. 99, no. 3, pp. 542-554, 2008.
[82] G. Cui, X. Yu, S. Iommelli, and L. Kong, "Exact distribution for the product of two correlated Gaussian random variables," IEEE Signal Processing Letters, vol. 23, no. 11, pp. 1662-1666, 2016.
[83] D. Katz, J. Baptista, S. Azen, and M. Pike, "Obtaining confidence intervals for the risk ratio in cohort studies," Biometrics, pp. 469-474, 1978.
[84] W. Lohmiller and J.-J. E. Slotine, "On contraction analysis for non-linear systems," Automatica, vol. 34, no. 6, pp. 683-696, 1998.
[85] W. Lohmiller and J.-J. Slotine, "Contraction analysis of non-linear distributed systems," International Journal of Control, vol. 78, no. 9, pp. 678-688, 2005.
[86] raspberrypi.org. (2019). Camera Module V2. [Online]. Available: https://www.raspberrypi.org/products/camera-module-v2/
[87] raspberrypi.org. (2018). Raspberry Pi 3 Model B. [Online]. Available: https://www.raspberrypi.org/products/raspberry-pi-3-model-b/
[88] O. J. Woodman, "An introduction to inertial navigation," University of Cambridge, Computer Laboratory, Tech. Rep., 2007.
[89] N. Antigny, H. Uchiyama, M. Servières, V. Renaudin, D. Thomas, and R.-i. Taniguchi, "Solving monocular visual odometry scale factor with adaptive step length estimates for pedestrians using handheld devices," Sensors, vol. 19, no. 4, p. 953, 2019.
[90] T. D. Hoang. (2020). RTEMISLib. [Online]. Available: https://github.com/thinhhoang95/RTIMULib2
[91] D. Tedaldi, A. Pretto, and E. Menegatti, "A robust and easy to implement method for IMU calibration without external equipments," in 2014 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2014, pp. 3042-3049.
[92] OpenCV. (2013). Camera calibration. [Online]. Available: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_calibration/py_calibration.html

...Gaussian. The Kalman Filter is often regarded as the analytical solution to the general class of Bayes filters under such assumptions, and it spurred a whole class of variants that apply to different... and N(0, R) respectively. Very often, the central limit theorem (CLT) serves as the ground for the choice of the normal distribution for the noise terms. Define the a priori state estimate at time...
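For context, this truncated excerpt is introducing the standard discrete-time Kalman filter: with process noise $w_k \sim N(0, Q)$ and measurement noise $v_k \sim N(0, R)$, the textbook predict-update recursion it refers to (see, e.g., [28]) reads as follows; this is the standard form, not a quotation of the missing pages:

```latex
% Linear model: x_k = A x_{k-1} + B u_k + w_k,   z_k = H x_k + v_k,
% with w_k ~ N(0, Q) and v_k ~ N(0, R).

% Predict: a priori state estimate and covariance
\hat{x}^-_k = A \hat{x}_{k-1} + B u_k, \qquad P^-_k = A P_{k-1} A^\top + Q

% Update: Kalman gain, a posteriori state estimate and covariance
K_k = P^-_k H^\top \left( H P^-_k H^\top + R \right)^{-1}
\hat{x}_k = \hat{x}^-_k + K_k \left( z_k - H \hat{x}^-_k \right), \qquad
P_k = (I - K_k H) P^-_k
```

Here $\hat{x}^-_k$ is the a priori state estimate whose definition the excerpt breaks off at, and $P^-_k$ is its error covariance.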
