
Master's thesis in Control Engineering and Automation: Quadcopter object tracking based on a deep-learning image-processing model

DOCUMENT INFORMATION

Basic information

Title: Quadcopter object tracking based on a deep-learning image-processing model
Author: Nguyễn Phúc Khôi
Supervisors: Assoc. Prof. Dr. Huỳnh Thái Hoàng, Dr. Nguyễn Trọng Tài, Dr. Ngô Thanh Quyền
Institution: Vietnam National University, Ho Chi Minh City
Major: Control Engineering and Automation
Type: Master's thesis
Year: 2020
City: Ho Chi Minh City
Pages: 79
Size: 1.88 MB

Structure

  • Chapter 1. INTRODUCTION
    • 1.1. Overview
    • 1.2. Objective of Study
    • 1.3. Outline of the thesis
  • Chapter 2. THEORETICAL BASIS
    • 2.1. Convolutional Neural Network (CNN) and YOLOv3 Algorithm
    • 2.2. Fuzzy Logic
  • Chapter 3. HARDWARE DESIGN
    • 3.1. Quadcopter system overview
    • 3.2. Quadcopter configuration
      • APM 2.8 Flight controller
  • Chapter 4. SOFTWARE DESIGN
    • 4.1. Quadcopter control diagram
    • 4.2. Object detection
    • 4.3. Fuzzy logic algorithm
  • Chapter 5. RESULT AND FUTURE DEVELOPMENT
    • 5.1. Result
    • 5.2. Future Development

Content

A Convolutional Neural Network (CNN) named YOLOv3 is used for object recognition, and a fuzzy processor is used for determining the PWM signals sent to the Ardupilot microcontroller.

INTRODUCTION

Overview

Unmanned aerial vehicles (UAVs) are also known as drones. These are variously sized aircraft that may be remotely controlled or can fly autonomously. Autonomous flight of drones is controlled by sensors, global positioning systems (GPS), and embedded systems.

History of Unmanned Aerial Vehicles

1849: The Austrian balloons – the earliest recorded use of a UAV was on August 22, 1849, when the Austrians attacked Venice using unmanned balloons loaded with explosives.

1915: Battle of Neuve Chapelle – the British military used aerial imagery to capture more than 1,500 sky-view maps of the German trench fortifications in the region.

1916: The first major American UAV was built during World War I. Shortly after, small biplanes such as the Kettering Bug were built.

1930: Aerial torpedoes – the U.S. Navy began experimenting with radio-controlled aircraft during the 1930s, resulting in the Curtiss N2C-2 drone in 1937.

1941: Reginald Denny and the Radioplane – in the early stages of World War II, the U.S. created the first remote-controlled aircraft, called the Radioplane OQ-2. The first large-scale production, purpose-built drone was the product of Reginald Denny.

1973: Mastiff UAV and IAA Scout – Israel developed the Mastiff UAV and IAA Scout, both unpiloted surveillance machines.

1982: Battlefield UAVs – the attitude towards UAVs, which were often seen as unreliable and expensive toys, changed dramatically with the Israeli Air Force's victory over the Syrian Air Force in 1982. Israel's coordinated use of UAVs alongside manned aircraft allowed the state to quickly destroy dozens of Syrian aircraft with minimal losses.

1985: Pioneer UAV Program – in the 1980s, U.S. military operations in Grenada, Lebanon, and Libya identified a need for an on-call, inexpensive, unmanned, over-the-horizon targeting, reconnaissance, and battle damage assessment (BDA) capability for local commanders. As a result, in July 1985, the Secretary of the Navy directed the expeditious acquisition of UAV systems for fleet operations using non-developmental technology.

More recently, in 1990 miniature and micro UAVs were introduced, and in 2000 the U.S. deployed the Predator drone in Afghanistan while searching for Osama bin Laden. Although many of the most notable drone flights have been for military purposes, the technology continues to advance and receive more attention.

In 2014, Amazon proposed using UAVs to deliver packages to customers, and some real-estate companies are using drones to shoot promotional videos. The uses of drones will continue to grow in many industries worldwide.

Classification According to Size

The very small UAV class applies to UAVs with dimensions ranging from the size of a large insect to 30-50 cm long. The insect-like UAVs, with flapping or rotary wings, are a popular micro design. They are extremely small, very lightweight, and can be used for spying and biological warfare. Larger ones use a conventional aircraft configuration. The choice between flapping and rotary wings is a matter of desired maneuverability.

Figure 1.1 Examples of very small UAVs

The small UAV class (sometimes also called mini-UAV) applies to UAVs that have at least one dimension greater than 50 cm and no larger than 2 meters. Many of the designs in this category are based on the fixed-wing model, and most are hand-launched by throwing them into the air.

Figure 1.2 Examples of small UAVs

The medium UAV class applies to UAVs that are too heavy to be carried by one person but are still smaller than a light aircraft. They usually have a wingspan of about 5-10 m and can carry payloads of 100 to 200 kg. Examples of medium fixed-wing UAVs (see Figure 1.3 below) are the Israeli-US Hunter and the UK Watchkeeper. Other models used in the past include the US Boeing Eagle Eye, the RQ-2 Pioneer, the BAE Systems Skyeye R4E, and the RQ-5A Hunter. The Hunter has a wingspan of 10.2 m and is 6.9 m long; it weighs about 885 kg at takeoff. The RS-20 by American Aerospace is another example of a crossover UAV that spans the specifications of small and medium-sized UAVs.

Figure 1.3 Example of medium UAVs

The large UAV class applies to the large UAVs used mainly for combat operations by the military. Examples of these large UAVs are the US General Atomics Predator A and B and the US Northrop Grumman Global Hawk (Figure 1.4).

Figure 1.4 Example of large UAVs

Classification According to Range and Endurance

Very low-cost, close-range UAVs: this class includes UAVs that have a range of 5 km, an endurance of 20 to 45 minutes, and a cost of about $10,000 (2012 estimate). Examples of UAVs in this class are the Raven and Dragon Eye. UAVs in this class are very close to model airplanes.

Close-range UAVs: this class includes UAVs that have a range of 50 km and an endurance of 1 to 6 hours. They are usually used for reconnaissance and surveillance tasks.

Short-range UAVs: this class includes UAVs that have a range of 150 km or longer and endurance of 8 to 12 hours. Like the close-range UAVs, they are mainly used for reconnaissance and surveillance purposes.

Mid-range UAVs: the mid-range class includes UAVs that have very high speed and a working radius of 650 km. They are also used for reconnaissance and surveillance purposes, in addition to gathering meteorological data.

Endurance UAVs: the endurance class includes UAVs that have an endurance of 36 hours and a working radius of 300 km. This class of UAVs can operate at altitudes of 30,000 feet. They are also used for reconnaissance and surveillance purposes.

Applications of UAVs

Aerial photography: drones are now being used to capture footage that would otherwise require expensive helicopters and cranes. Fast-paced action and sci-fi scenes are filmed by aerial drones, making cinematography easier. These autonomous flying devices are also used in real-estate and sports photography. Furthermore, journalists are considering the use of drones for collecting footage and information in live broadcasts.

Figure 1.5 Aerial photograph taken by quadcopter

Shipping and delivery: major companies like Amazon, UPS, and DHL are in favor of drone delivery. Drones could save a lot of manpower and shift unnecessary road traffic to the sky. Besides, they can be used over smaller distances to deliver small packages, food, letters, medicines, beverages, and the like.

Geographic mapping: available to amateurs and professionals, drones can acquire very high-resolution data and capture imagery in difficult-to-reach locations like coastlines, mountaintops, and islands. They are also used to create 3D maps and contribute to crowd-sourced mapping applications.

Figure 1.7 Aerial map with construction zones, existing and projected buildings

Objective of Study

With the development of science and technology, drones are increasingly being applied in many fields, and computer vision is an indispensable part of these applications. All kinds of detection, recognition, surveillance, mapping, and monitoring tasks need to deal with images. That is why developing a system to handle this processing is of primary importance.

In this project, the objective is to build a ground station system to control a quadcopter automatically. The reasons why a ground station is needed:

- Running the object recognition algorithm requires a lot of computing resources, especially GPU computation.

- Embedded computers with GPUs are expensive, bulky, and battery-consuming.

- Putting an embedded computer on a small quadcopter can cause overload and lack of space.

The ground station takes images from the FPV camera and uses fuzzy logic to output control signals to the drone via Bluetooth.

Outline of the thesis

The thesis is presented in chapters as follows:

Chapter 1: Presents the history of Unmanned Aerial Vehicles, applications in some typical domains, the thesis in comparison with other projects, and the objective of the thesis.

Chapter 2: Presents the theoretical basis: fuzzy logic, and the object recognition and tracking algorithm.

Chapter 3: Presents the hardware design: quadcopter system overview and configuration.

Chapter 4: Presents the software design: control diagram, fuzzy logic algorithm diagram, and image processing algorithm diagram.

Chapter 5: Presents the results and future developments.

THEORETICAL BASIS

Convolutional Neural Network (CNN) and YOLOv3 Algorithm

Among neural networks, convolutional neural networks (ConvNets or CNNs) are one of the main categories used for image recognition and image classification. Object detection, face recognition, etc., are some of the areas where CNNs are widely used.

Technically, in deep-learning CNN models, for both training and testing, each input image is passed through a series of convolution layers with filters (kernels), pooling layers, and fully connected (FC) layers; a softmax function is then applied to classify the object with probabilistic values between 0 and 1. The figure below shows the complete flow of a CNN processing an input image and classifying the objects based on these values.

Figure 2.1 Complete flow of CNN

The main building block of the CNN is the convolutional layer. Convolution is a mathematical operation that merges two sets of information. In our case, the convolution is applied to the input data using a convolution filter to produce a feature map.

Figure 2.2 Input and Filter of Convolutional Layer

We perform the convolution operation by sliding this filter over the input. At every location, we do an element-wise matrix multiplication and sum the result. This sum goes into the feature map. The green area where the convolution operation takes place is called the receptive field. Due to the size of the filter, the receptive field is also 3x3.
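The sliding-window operation just described can be sketched in a few lines of NumPy. This is a minimal illustration, not the thesis code; the name `conv2d`, the 5x5 input, and the averaging kernel are all invented for the example:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image`; at each location take the element-wise
    product with the receptive field and sum it into the feature map."""
    H, W = image.shape
    F, _ = kernel.shape
    out_h, out_w = H - F + 1, W - F + 1          # "valid" convolution, stride 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            receptive_field = image[i:i + F, j:j + F]   # same size as the filter
            feature_map[i, j] = np.sum(receptive_field * kernel)
    return feature_map

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0                   # a simple 3x3 averaging filter
print(conv2d(image, kernel).shape)               # (3, 3)
```

With a 3x3 filter and stride 1, each output cell summarizes one 3x3 receptive field, so a 5x5 input shrinks to a 3x3 feature map.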

Figure 2.3 Receptive field and Feature Map

Figure 2.5 Computation result in 3D input

Stride specifies how much we move the convolution filter at each step. By default, the value is 1.

Figure 2.6 Feature map with stride 2

To maintain the same dimensionality, padding is used to surround the input with zeros

Figure 2.7 Stride 1 with Padding

Calculating formula of the output dimension:

W' = (W - F + 2P) / S + 1

where W is the input width (the same formula applies to the height), F is the filter size, S is the stride, and P is the number of padding rows or columns of zero pixels.
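The standard convolution output-size rule, W' = (W - F + 2P)/S + 1 per spatial dimension, can be checked with a one-line helper (`conv_output_size` is a name introduced here for illustration):

```python
def conv_output_size(w, f, p, s):
    """W' = (W - F + 2P) / S + 1 for one spatial dimension,
    with input size w, filter size f, padding p, and stride s."""
    return (w - f + 2 * p) // s + 1

# 3x3 filter, stride 1, padding 1 keeps a 5x5 input at 5x5 ("same" padding)
print(conv_output_size(5, 3, 1, 1))   # 5
# stride 2, no padding: (5 - 3)/2 + 1
print(conv_output_size(5, 3, 0, 2))   # 2
```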

After a convolution operation, we usually perform pooling to reduce the dimensionality. This enables us to reduce the number of parameters, which both shortens training time and combats overfitting. The most common type of pooling is max pooling, which takes the maximum value in the pooling window; the second type is average pooling, which takes the average value in the pooling window.

Figure 2.8 Max Pooling vs Average Pooling with 2x2 window and stride 2

Calculating formula of the output dimension:

H' = (H - F) / S + 1, W' = (W - F) / S + 1, D' = D

where (H, W, D) are the dimensions of the input, (H', W', D') are the dimensions of the output, and F is the filter (window) size.
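Both pooling modes, with the 2x2 window and stride 2 used in the figure, can be sketched as follows (`pool2d` is an illustrative helper, not the thesis implementation):

```python
import numpy as np

def pool2d(x, size=2, stride=2, mode="max"):
    """2x2 pooling with stride 2: H' = (H - F)/S + 1 per spatial dimension."""
    out_h = (x.shape[0] - size) // stride + 1
    out_w = (x.shape[1] - size) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = x[i * stride:i * stride + size, j * stride:j * stride + size]
            out[i, j] = window.max() if mode == "max" else window.mean()
    return out

x = np.array([[1., 3., 2., 4.],
              [5., 7., 6., 8.],
              [9., 2., 1., 3.],
              [4., 6., 5., 7.]])
print(pool2d(x, mode="max"))   # [[7., 8.], [9., 7.]]
print(pool2d(x, mode="avg"))   # [[4., 5.], [5.25, 4.]]
```

Note that pooling halves each spatial dimension here but leaves the depth D unchanged.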

Fully connected layers are placed before the classification output of a CNN and are used to flatten the results before classification

After the feature-extraction phase, the final feature map is flattened into a column vector before being fed to a feed-forward neural network, with backpropagation applied at every iteration of training. Neurons in a fully connected layer have full connections to all activations in the previous layer, as in regular neural networks. Their activations can hence be computed with a matrix multiplication followed by a bias offset. Over a series of epochs, the model learns to distinguish between dominating and certain low-level features in images and to classify them using the softmax classification technique.
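The flatten, matrix-multiply-plus-bias, and softmax steps can be illustrated with made-up shapes (a 4x4x8 feature volume and 3 classes; the weights are random, so only the shapes and the normalization are meaningful):

```python
import numpy as np

def softmax(z):
    """Softmax with the max subtracted for numerical stability."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
features = rng.random((4, 4, 8))          # output of the feature-extraction phase
flat = features.reshape(-1)               # flatten into a vector of 128 activations

W = rng.standard_normal((3, flat.size)) * 0.01   # fully connected weights, 3 classes
b = np.zeros(3)                                  # bias offset
probs = softmax(W @ flat + b)             # class probabilities between 0 and 1
print(probs.sum())                        # sums to 1
```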

Network structure

The recognition accuracy is above 90%. The objects can be detected from different angles. The FPS is around 11, which is sufficient for observing the object in real time.

In the project, we choose Pikachu as the main object to be tracked; only the center of Pikachu's bounding box is displayed, using a green dot. The Cat and Rabbit bounding boxes do not show the center point.
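The green dot is simply the midpoint of the bounding-box corners. A sketch with hypothetical normalized coordinates (the numbers and the name `bbox_center` are invented for illustration):

```python
def bbox_center(top_left, bottom_right):
    """Center of a bounding box given (x, y) corner coordinates."""
    (x1, y1), (x2, y2) = top_left, bottom_right
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

# Hypothetical detection: the box spans (0.30, 0.40) to (0.50, 0.70),
# with coordinates normalized to [0, 1] as in the plots below
cx, cy = bbox_center((0.30, 0.40), (0.50, 0.70))
print(cx, cy)
```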

Figure 5.2 Pikachu detected in the front direction

Figure 5.3 Pikachu and Rabbit detected in the back direction

Figure 5.4 Pikachu and Rabbit detected in the front direction

The four figures below show the variation of the top-left and bottom-right coordinates over 35 seconds. From 0 to 25 s the figures describe the first tested position, and from 25 to 35 s the second tested position.

As we can see, the coordinate values oscillate with an amplitude of less than 0.1, which means the position of the bounding box is stable.

Figure 5.5 Line Plot of Top-Left x value over time

Figure 5.6 Line Plot of Top-Left y value over time

Figure 5.7 Line Plot of Bottom-Right x value over time

Figure 5.8 Line Plot of Bottom-Right y value over time

Using the three values dx, dy, and dS derived from the image processing, the fuzzy logic process calculates the output throttle, yaw angle, and pitch angle. These values are then sent to the flight controller via Bluetooth. The string format is:

Roll , Pitch , Throt , Yaw * Checksum \r\n

The checksum function returns a BCD value; the ord() function in Python returns an integer representing a Unicode character. The string is then received by the flight controller.

The roll value is always kept at a fixed value, as we do not control the roll angle.
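A sketch of how such a frame could be packed. The thesis only states that the checksum is built from Python's ord(); summing the ord() values modulo 256 is an assumption here, as is the exact field formatting, so treat `checksum` and `command_string` as hypothetical names:

```python
def checksum(payload):
    """Assumed reduction: sum of ord() over every character, modulo 256."""
    return sum(ord(c) for c in payload) % 256

def command_string(roll, pitch, throttle, yaw):
    """Pack the fuzzy outputs into a 'Roll,Pitch,Throt,Yaw*Checksum\\r\\n' frame."""
    payload = f"{roll},{pitch},{throttle},{yaw}"
    return f"{payload}*{checksum(payload)}\r\n"

# Example PWM-style values; the roll field stays fixed since roll is not controlled
msg = command_string(1500, 1520, 1400, 1480)
print(repr(msg))
```

The receiver can recompute the checksum over the part of the payload before the '*' and reject corrupted frames.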

Within the thesis, the student has completed the assembly of the quadcopter. Research and implementation issues include:

- Learning about the components (sensors, flight controller, motors, propellers, power distribution board, etc.) as a basis for designing the automation algorithm

- Learning about the process of developing a convolutional neural network: collecting the dataset, training the model, adjusting the configuration parameters, and running on a GPU

- Developing a real-time object detection application for the FPV camera

- Learning about fuzzy logic: membership functions, fuzzy rules, fuzzification, and defuzzification

- Developing a fuzzy logic processor to output the PWM values that control the motors

- Combining the object detection and fuzzy logic processes to build a ground station that controls the quadcopter to track the object automatically

Quadcopter stabilization is not perfectly complete; therefore, the object detection algorithm is affected.

The FPS of the object detection output is quite low, around 11 FPS. This is sufficient for slow tracking but should be around 20-30 FPS for other applications.

The application is able to detect only 3 classes of objects. Due to the low resolution of the camera, the objects can only be detected within 50 cm of the camera.

The small dataset (400 photos/class) has led to incorrect recognition in some cases

The areas of the bounding boxes differ between the front, back, and side directions. This affects the pitch angle output of the fuzzy processor.

Bluetooth has a low communication range and is affected by obstacles; it needs a large open space for best performance.

Future Development

In the future, we would like to build a complete application on a quadcopter with a gimbal to stabilize the camera and an obstacle avoidance feature. Currently, the quadcopter does not use the roll angle, as following an object in an open space without obstacles does not require roll movement. The camera is permanently attached to the quadcopter frame; therefore, it lacks flexibility, and the object can move out of vision. A gimbal plays an important role here: it helps stabilize the images and gives more flexibility to track and lock onto the object.

In addition, when we use the quadcopter in environments that contain many obstacles, we definitely need an avoidance algorithm, and the roll angle is an indispensable factor.

Taking off and landing automatically are also important features to be considered. For now, for safety, we manually control the quadcopter to take off and land, but in many situations the quadcopter must run without human command.

Furthermore, the object detection algorithm must be enhanced in terms of speed and accuracy.

REFERENCES

[1] M. Navabi, H. Mirzaei (2017), "Robust Optimal Adaptive Trajectory Tracking Control of Quadcopter Helicopter", Latin American Journal of Solids and Structures 14, pp. 4-16.

[2] Ramiro Ibarra Pérez (2017), "Attitude Control of a Quadcopter Using Adaptive Control Technique", UAM Reynosa Rodhe, Autonomous University of Tamaulipas, Tamaulipas, México, pp. 4-11.

[3] Axel Reizenstein (2017), "Position and Trajectory Control of a Quadcopter Using PID and LQ Controllers", Master of Science Thesis in Electrical Engineering, Department of Electrical Engineering, Linköping University, pp. 17-63.

[4] Ali Rohan, Mohammed Rabah, Sung-ho Kim (2019), "Convolutional Neural Network-Based Real-Time Object Detection and Tracking for Parrot AR Drone 2", School of Electronics and Information Engineering, Kunsan National University, pp. 2-8.

[5] Katsuya Ogura, Yuma Yamada, Shugo Kajita, Hirozumi Yamaguchi, Teruo Higashino, Mineo Takai (2019), "Ground object recognition and segmentation from aerial image-based 3D point cloud", Graduate School of Information Science and Technology, Osaka University, Osaka, Japan, pp. 3-9.

[6] Sabir Hossain, Deok-jin Lee (2019), "Deep-Learning Based Real-Time Multiple-Object Detection and Tracking from Aerial Imagery via Flying Robot with GPU-based Embedded Devices", School of Mechanical & Convergence System Engineering, Kunsan National University, pp. 1-20.

[7] Anjan Chakrabarty, Robert Morris (2016), "Autonomous Indoor Object Tracking with the Parrot AR.Drone", 2016 International Conference on Unmanned Aircraft Systems (ICUAS).

[8] Peng Chen, Yuanjie Dang, Ronghua Liang, Wei Zhu, Xiaofei He (2018), "Real-Time Object Tracking on a Drone With Multi-Inertial Sensing Data", IEEE Transactions on Intelligent Transportation Systems, Vol. 19.

[9] Daniel Bolya, Chong Zhou, Fanyi Xiao, Yong Jae Lee, "YOLACT: Real-time Instance Segmentation", arXiv preprint, pp. 1-9.

[10] Cheng-Yang Fu, Mykhailo Shvets, Alexander C. Berg (2019), "RetinaMask: Learning to predict masks improves state-of-the-art single-shot detection for free", arXiv:1901.03353, pp. 1-9.

[11] Hengshuang Zhao, Xiaojuan Qi, Xiaoyong Shen, Jianping Shi, Jiaya Jia (2018), "ICNet for real-time semantic segmentation on high-resolution images", in ECCV.

[12] Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi (2015), "You Only Look Once: Unified, Real-Time Object Detection", University of Washington, Allen Institute for AI, pp. 1-8.

[13] Joseph Redmon, Ali Farhadi (2015), "YOLO9000: Better, Faster, Stronger", University of Washington, Allen Institute for AI, pp. 1-8.

[14] Kimon P. Valavanis, "Advances in Unmanned Aerial Vehicles", 1st Ed., Springer, pp. 2-5.

[15] Zdenek Kalal, Krystian Mikolajczyk, Jiri Matas (2011), "Tracking-Learning-Detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 6, No. 1, pp. 1-12.
