
Vision-Based Navigation of an Autonomous Car Using a Convolutional Neural Network



CONTENTS

THESIS TASKS
SCHEDULE
ASSURANCE STATEMENT
ACKNOWLEDGEMENT
ADVISOR'S COMMENT SHEET
REVIEWER'S COMMENT SHEET
CONTENTS
ABBREVIATIONS AND ACRONYMS
LIST OF FIGURES
LIST OF TABLES
ABSTRACT
CHAPTER 1: OVERVIEW
  1.1 INTRODUCTION
  1.2 BACKGROUND AND RELATED WORK
    1.2.1 Overview about Autonomous Cars
    1.2.2 Literature Review and Other Studies
  1.3 OBJECTIVES OF THE THESIS
  1.4 OBJECT AND RESEARCH SCOPE
  1.5 RESEARCH METHOD
  1.6 THE CONTENT OF THE THESIS
CHAPTER 2: THE PRINCIPLE OF SELF-DRIVING CARS
  2.1 INTRODUCTION TO SELF-DRIVING CARS
  2.2 DIFFERENT TECHNOLOGIES USED IN SELF-DRIVING CARS
    2.2.1 Laser
    2.2.2 Lidar
    2.2.3 Radar
    2.2.4 GPS
    2.2.5 Camera
    2.2.6 Ultrasonic Sensors
  2.3 OVERVIEW OF ARTIFICIAL INTELLIGENCE
    2.3.1 Artificial Intelligence
    2.3.2 Machine Learning
    2.3.3 Deep Learning
CHAPTER 3: CONVOLUTIONAL NEURAL NETWORK
  3.1 INTRODUCTION
  3.2 STRUCTURE OF CONVOLUTIONAL NEURAL NETWORKS
    3.2.1 Convolution Layer
    3.2.2 Activation Function
    3.2.3 Stride and Padding
    3.2.4 Pooling Layer
    3.2.5 Fully-Connected Layer
  3.3 NETWORK ARCHITECTURE AND PARAMETER OPTIMIZATION
  3.4 OBJECT DETECTION
    3.4.1 Single Shot Detection Framework
    3.4.2 MobileNet Architecture
    3.4.3 Non-Maximum Suppression
  3.5 OPTIMIZING NEURAL NETWORKS
    3.5.1 Types of Gradient Descent
    3.5.2 Types of Optimizer
CHAPTER 4: HARDWARE DESIGN OF SELF-DRIVING CAR PROTOTYPE
  4.1 HARDWARE COMPONENTS
    4.1.1 1/10 Scale 4WD Off-Road Remote Control Desert Buggy
    4.1.2 Brushed Motor RC-540PH
    4.1.3 Motor Control Module BTS7960
    4.1.4 RC Servo MG996
    4.1.5 Raspberry Pi Model B+
    4.1.6 NVIDIA Jetson Nano Developer Kit
    4.1.7 Camera Logitech C270
    4.1.8 Arduino Nano
    4.1.9 LiPo Battery 2S-30C 2200mAh
    4.1.10 Voltage Reduction Module
    4.1.11 USB UART PL2303
  4.2 HARDWARE WIRING DIAGRAM
    4.2.1 Construction of the Hardware Platform
    4.2.2 PCB of the Hardware
CHAPTER 5: CONTROL ALGORITHMS OF SELF-DRIVING CAR PROTOTYPE
  5.1 CONTROL THEORY
    5.1.1 Servo Control Theory
    5.1.2 UART Communication Theory
  5.2 FLOWCHART OF COLLECTING TRAINING DATA
  5.3 FLOWCHART OF NAVIGATING THE CAR USING THE TRAINED MODEL
CHAPTER 6: EXPERIMENTS
  6.1 EXPERIMENTAL ENVIRONMENTS
  6.2 COLLECTING DATA
  6.3 DATA AUGMENTATION
  6.4 TRAINING PROCESS
  6.5 OUTDOOR EXPERIMENT RESULTS
CHAPTER 7: CONCLUSION AND FUTURE WORK
REFERENCES

ABBREVIATIONS AND ACRONYMS

ADAS : Advanced Driving Assistance System
CCD : Charge-Coupled Device
CMOS : Complementary Metal-Oxide Semiconductor
CNN : Convolutional Neural Network
DSP : Digital Signal Processing
FOV : Field of View
FPGA : Field-Programmable Gate Array
GPS : Global Positioning System
GPIO : General Purpose Input-Output
GPU : Graphics Processing Unit
IMU : Inertial Measurement Unit
LIDAR : Light Detection And Ranging
PAS : Parking Assistance System
PCB : Printed Circuit Board
PWM : Pulse Width Modulation
RADAR : Radio Detection And Ranging
RC : Radio Controlled
RNN : Recurrent Neural Network
SOCs : Systems-On-a-Chip
UV : Ultraviolet
4WD : Four-Wheel Drive
WP : Waterproof
YAG : Yttrium Aluminum Garnet
SSD : Single Shot Detection
YOLO : You Only Look Once
NMS : Non-Maximum Suppression
IOU : Intersection Over Union
AI : Artificial Intelligence
ML : Machine Learning
DL : Deep Learning
SGD : Stochastic Gradient Descent
RMSProp : Root Mean Square Propagation
NAG : Nesterov Accelerated Gradient
Adagrad : Adaptive Gradient Algorithm
Adam : Adaptive Moment Estimation
LIST OF FIGURES

Figure 1.1 Google's fully self-driving car design introduced in May 2014
Figure 1.2 Features of the Google self-driving car
Figure 1.3 Uber self-driving car
Figure 1.4 Self-driving car of HCMUTE
Figure 2.1 How cars are getting smarter
Figure 2.2 Important components of a self-driving car
Figure 2.3 A laser sensor on the roof constantly scans the surroundings
Figure 2.4 A map drawn by LIDAR
Figure 2.5 Structure and functionality of LIDAR
Figure 2.6 Comparison between LIDAR and RADAR
Figure 2.7 Camera on an autonomous car
Figure 2.8 Placement of ultrasonic sensors for PAS
Figure 2.9 Ultrasonic sensor on an autonomous car
Figure 2.10 Relation between AI, machine learning, and deep learning
Figure 2.11 Neural networks, which are organized in layers consisting of sets of interconnected nodes; networks can have tens or hundreds of hidden layers
Figure 2.12 A simple neural network, or perceptron
Figure 2.13 Performance comparison between DL and other learning algorithms
Figure 3.1 CNN architecture
Figure 3.2 The input data, filter, and result of a convolution layer
Figure 3.3 The convolution operation of a CNN: I is the input array, K is the kernel, and I*K is the output of the convolution operation
Figure 3.4 The result of a convolution operation; (a) is the input image
Figure 3.5 Performing multiple convolutions on an input
Figure 3.6 The convolution operation for each filter
Figure 3.7 Applying zero padding to the input matrix
Figure 3.8 The max pooling operation
Figure 3.9 A deep neural network for classifying multiple classes
Figure 3.10 Network architecture
Figure 3.11 Classifying multiple classes in an image: (a) image with generated boxes, (b) default boxes on an 8x8 feature map, (c) default boxes on a 4x4 feature map
Figure 3.12 The standard convolutional filters in (a) are replaced by two layers, depthwise convolution in (b) and pointwise convolution in (c), to build a depthwise separable filter
Figure 3.13 Left: a convolutional layer with batchnorm and ReLU; right: depthwise separable convolutions with depthwise and pointwise layers, each followed by batchnorm and ReLU
Figure 3.14 Detected images before applying non-maximum suppression: (a) Left traffic sign, (b) Car object
Figure 3.15 Detected images after applying non-maximum suppression: (a) Left traffic sign, (b) Car object
Figure 4.1 Block diagram of the RC self-driving car platform
Figure 4.2 1/10 Scale 4WD Off-Road Remote Control Desert Buggy
Figure 4.3 Brushed Motor RC-540PH
Figure 4.4 Outline drawing of the Brushed Motor RC-540
Figure 4.5 Motor control module BTS7960
Figure 4.6 Digital RC Servo FR-1501MG
Figure 4.7 Raspberry Pi Model B+
Figure 4.8 Raspberry Pi block functions
Figure 4.9 Raspberry Pi B+ pinout
Figure 4.10 Location of connectors and main ICs on the Raspberry Pi
Figure 4.11 NVIDIA Jetson Nano Developer Kit
Figure 4.12 Jetson Nano compute module with 260-pin edge connector
Figure 4.13 NVIDIA Jetson Nano pinout
Figure 4.14 Performance of various deep learning inference networks with Jetson Nano and TensorRT, using FP16 precision
Figure 4.15 Camera Logitech C270
Figure 4.16 Arduino Nano
Figure 4.17 Arduino Nano pinout
Figure 4.18 LiPo Battery 2S-30C 2200mAh
Figure 4.19 LM2596S dual USB type
Figure 4.20 DC XH-M404 XL4016E1 8A
Figure 4.21 USB UART PL2303
Figure 4.22 Hardware wiring diagram
Figure 4.23 Hardware platform: (a) front, (b) side
Figure 4.24 Circuit diagram
Figure 4.25 Layout of the PCB
Figure 5.1 Inside of an RC servo
Figure 5.2 Variable pulse width controls the servo position
Figure 5.3 UART wiring
Figure 5.4 The UART receives data in parallel from the data bus
Figure 5.5 Data frame of the transmitting UART
Figure 5.6 Transmitting and receiving UART
Figure 5.7 Data frame of the receiving UART
Figure 5.8 Converting the serial data back into parallel
Figure 5.9 Flowchart of collecting training images
Figure 5.10 Flowchart of navigating the car using the trained model
Figure 6.1 The overall oval-shaped lined track
Figure 6.2 Lined track and traffic sign recognition
Figure 6.3 Traffic signs: (a) Left, (b) Right, (c) Stop
Figure 6.4 Some typical images of the dataset
Figure 6.5 Horizontal flip: (a) original image, (b) horizontally flipped image
Figure 6.6 Brightness augmentation: (a) original image, (b) brighter image, (c) darker image
Figure 6.7 GUI of the Training App
Figure 6.8 Model under training
Figure 6.9 Visualization of convolutional layer outputs: (a) an originally selected frame; (b)-(f) the feature maps at the first five convolutional layers
Figure 6.10 Change in loss value throughout training
Figure 6.11 Change in accuracy value throughout training
Figure 6.12 Experimental results: the top row shows input images and the bottom row shows the model's Softmax outputs; (a) steer is 100, (b) 110, (c) 120, (d) 130, (e) 140, (f) 150, (g) 160
Figure 6.13 The actual and predicted steering wheel angles of the models
Figure 6.14 The outputs of the object detection model

LIST OF TABLES

Table 3.1 MobileNet body architecture
Table 4.1 Specifications of the Brushed Motor RC-540
Table 4.2 Specifications of the Raspberry Pi Model B+
Table 4.3 Specifications of the NVIDIA Jetson Nano
Table 4.4 Technical specifications of the Arduino Nano

ABSTRACT

Recent years have witnessed amazing progress in AI-related fields such as computer vision, machine learning, and autonomous vehicles. In this thesis, we present a deep neural network (DNN) based autonomous car platform. It is a small-scale replica of a real self-driving car that drives on the road using a convolutional neural network (CNN) model, which takes images from a front-facing camera as input and produces car steering angles as output. The experimental results demonstrate the effectiveness and robustness of the autopilot model in the lane-keeping and traffic-sign-detection tasks. The speed is about 5-6 km/h in a wide variety of driving conditions, regardless of whether lane markings are present or not.

Keywords: Computer Vision, Real-time Navigation, Self-driving Car, Object Detection Model, Classification Model

CHAPTER 1: OVERVIEW

1.1 INTRODUCTION

In recent years, technology companies have been discussing autonomous cars and trucks, and promises of life-changing safety and ease have been attached to these vehicles. Some of these promises are beginning to come to fruition, as cars with more and more autonomous features hit the market each year. Although autonomous cars are most likely still years away from being available to consumers, they are closer than many people think. Current estimates predict that by 2025 the world will see over 600,000 self-driving cars on the road, and by 2035 that number will jump to almost 21 million. Trials of self-driving car services have actually begun in some cities in the United States. And even though fully self-driving cars are not on the market yet, current technology allows vehicles to be more autonomous than ever before, using intricate systems of cameras, lasers, radar, GPS, and interconnected communication between vehicles.

Since their introduction in the early 1990s, Convolutional Neural Networks (CNNs) [1] have been the most popular deep learning architecture due to their effectiveness on image-related problems such as handwriting recognition, facial recognition, and cancer cell classification. The breakthrough of CNNs is that features are learned automatically from training examples. Although their primary disadvantage is the requirement for very large amounts of training data, recent studies have shown that excellent performance can be achieved with networks trained using "generic" data. Over the last few years, CNNs have achieved state-of-the-art performance in most of the important classification tasks [2], object detection tasks [3], and Generative Adversarial Networks [4]. Besides, with the increase in computational capacity, we are presently able to train complex neural networks to understand the vehicle's environment and decide its behavior. For example, the Tesla Model S was known to use a specialized chip (Mobileye EyeQ), which used a deep neural network for vision-based real-time obstacle detection and avoidance. More recently, researchers have been investigating DNN-based end-to-end control of cars and other robots [5].

Executing a CNN on an embedded computing platform poses several challenges. First, despite the large number of calculations, strict real-time requirements are demanded. For instance, latency in a vision-based object detection task may be directly linked to the safety of the vehicle. On the other hand, the computing hardware platform must also satisfy cost, size, weight, and power constraints. These two conflicting requirements complicate
- The transmitting UART adds the start bit, parity bit, and stop bit(s) to the data frame (Figure 5.5 Data frame of the transmitting UART).
- The entire packet is sent serially from the transmitting UART to the receiving UART, which samples the data line at the preconfigured baud rate (Figure 5.6 Transmitting and receiving UART).
- The receiving UART discards the start bit, parity bit, and stop bit from the data frame (Figure 5.7 Data frame of the receiving UART).
- The receiving UART converts the serial data back into parallel form and transfers it to the data bus on the receiving end (Figure 5.8 Converting the serial data back into parallel).

5.2 FLOWCHART OF COLLECTING TRAINING DATA

Figure 5.9 Flowchart of collecting training images

As shown in Figure 5.9, when we begin to collect training data, we use a PS2 gamepad to control the car. After running the script on the Raspberry Pi, we drive the car, and the Raspberry Pi captures images as long as the R2 button on the PS2 controller is not pressed. Each image is saved to the folder corresponding to the steering angle received from the PS2 controller; a minimal sketch of this collection loop is given below.
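The following Python sketch only illustrates the collection loop described above; it is not the thesis code. The gamepad interface (read_ps2), the camera index, and the file-naming scheme are assumptions introduced for illustration.

```python
# Hypothetical sketch of the data-collection loop: drive with the PS2 gamepad,
# capture frames while R2 is not pressed, and save each frame into the folder
# named after the current steering angle.
import os
import time
import cv2

CAMERA_INDEX = 0                                      # assumed USB camera index
STEER_ANGLES = [100, 110, 120, 130, 140, 150, 160]    # steering labels used in Chapter 6

def read_ps2():
    """Placeholder for the PS2 gamepad interface on the Raspberry Pi.
    Should return (steer_angle, r2_pressed)."""
    raise NotImplementedError("replace with the actual gamepad driver")

def collect_data(output_root="dataset"):
    cap = cv2.VideoCapture(CAMERA_INDEX)
    for angle in STEER_ANGLES:
        os.makedirs(os.path.join(output_root, str(angle)), exist_ok=True)
    try:
        while True:
            steer_angle, r2_pressed = read_ps2()
            if r2_pressed:                 # R2 pauses image capture
                continue
            ok, frame = cap.read()
            if not ok:
                continue
            # Save the frame into the folder matching the current steering angle
            filename = "{}.jpg".format(int(time.time() * 1000))
            cv2.imwrite(os.path.join(output_root, str(steer_angle), filename), frame)
    finally:
        cap.release()

if __name__ == "__main__":
    collect_data()
```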
5.3 FLOWCHART OF NAVIGATING THE CAR USING THE TRAINED MODEL

Figure 5.10 Flowchart of navigating the car using the trained model

To begin this process, we run the prediction script on the Raspberry Pi and the Jetson Nano. The car drives, captures an image, and passes it to the CNN model, which predicts a steering angle. The car then navigates itself with the steering angle received from the CNN model; a minimal sketch of this inference loop follows.
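A hedged sketch of the inference loop, assuming a Keras-format model file, a 64x64 RGB input size, and a serial link to the motor controller; the model path, input size, and serial port are placeholders, not values taken from the thesis.

```python
# Hypothetical navigation loop: capture a frame, run the trained CNN classifier,
# and send the predicted steering angle to the motor controller over UART.
import cv2
import numpy as np
import serial
from tensorflow.keras.models import load_model

STEER_ANGLES = [100, 110, 120, 130, 140, 150, 160]   # class labels from Chapter 6

model = load_model("steering_model.h5")               # assumed trained model file
arduino = serial.Serial("/dev/ttyUSB0", 115200)       # assumed UART link (e.g. PL2303)
cap = cv2.VideoCapture(0)

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        # Preprocess the frame the same way as during training (assumed 64x64 RGB, scaled to [0, 1])
        img = cv2.resize(frame, (64, 64)).astype(np.float32) / 255.0
        probs = model.predict(img[np.newaxis, ...], verbose=0)[0]   # Softmax output
        steer = STEER_ANGLES[int(np.argmax(probs))]
        arduino.write("{}\n".format(steer).encode())                # send steering angle
finally:
    cap.release()
    arduino.close()
```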
CHAPTER 6: EXPERIMENTS

In this chapter, the data used to train the network architecture and the experimental results are presented. This includes (1) the experimental environments used to collect data, (2) data collection, (3) some methods to augment the data, (4) the training process, and (5) the results of the training process and experiments.

6.1 EXPERIMENTAL ENVIRONMENTS

Figure 6.1 The overall oval-shaped lined track
Figure 6.2 Lined track and traffic sign recognition

To collect the training data, we manually drive the RC car platform to record timestamped images and control commands on two outdoor tracks created on an asphalt plane. The experimental environment has a black background and white lines: a 50-centimeter-wide course with borders made of 10-centimeter-wide tape, as shown in Figures 6.1-6.2. Three traffic signs are used to navigate in this environment: Left, Right, and Stop (shown in Figure 6.3).

Figure 6.3 Traffic signs: (a) Left, (b) Right, (c) Stop

6.2 COLLECTING DATA

Data preparation is required when working with deep learning networks. The Raspberry Pi records the images and driving information while the user manually drives the car around the track at a speed of 4-5 km/h. The collected data contains over 10,000 images paired with steering angles. The original resolution of the images is 480x640. The camera is configured to capture at a rate of 10 frames per second with an exposure time of 5000 us, to prevent the blur caused by elastic vibration when the car drives on the track. Sample images of this dataset are shown in Figure 6.4.

Figure 6.4 Some typical images of the dataset

6.3 DATA AUGMENTATION

A deep learning model tends to overfit a small dataset because there are too few examples to train on, resulting in a model with poor generalization performance. Data augmentation is a technique that manipulates the incoming training data to generate more instances by creating new samples via random transformations of existing ones. This method boosts the size of the training set, reducing overfitting. Common transformations are horizontal flips and brightness adjustments, illustrated in Figures 6.5 and 6.6. Data augmentation is only performed on the training data, not on the validation or test set.

Horizontal Flip

Figure 6.5 Horizontal flip: (a) original image, (b) horizontally flipped image

The model needs to learn to steer correctly whether the car is on the left or right side of the road. Therefore, we apply a horizontal flip to a proportion of the images and correspondingly invert the original steering angle.

Brightness Augmentation

Brightness is randomly changed to simulate different lighting conditions, since some parts of the tracks are much darker or lighter due to shadows and sunlight. We generate augmented images with different brightness by first converting images to HSV, scaling the V channel up or down, and converting back to RGB.

Figure 6.6 Brightness augmentation: (a) original image, (b) brighter image, (c) darker image

A minimal sketch of these two augmentations is given below.
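The following Python/OpenCV sketch shows one way these augmentations could be implemented. The brightness-scaling range and the mirroring of the angle around a "straight ahead" value of 130 are assumptions for illustration; the thesis does not state the exact values it used.

```python
# Sketch of the two augmentations described above: horizontal flip with
# steering-angle inversion, and random brightness scaling in HSV space.
import random
import cv2
import numpy as np

def flip_horizontal(image, steer_angle, center=130):
    """Mirror the image left-right and invert the steering angle around the
    straight-ahead value (assumed to be 130, the middle of the 100-160 label set)."""
    flipped = cv2.flip(image, 1)
    mirrored_angle = 2 * center - steer_angle
    return flipped, mirrored_angle

def random_brightness(image, low=0.4, high=1.4):
    """Convert RGB to HSV, scale the V channel by a random factor, convert back.
    The (low, high) range is an assumed example, not a value from the thesis."""
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV).astype(np.float32)
    hsv[:, :, 2] = np.clip(hsv[:, :, 2] * random.uniform(low, high), 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)
```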
6.4 TRAINING PROCESS

The stored data is then copied to a desktop computer equipped with a Core i7-8750H at 2.2 GHz and 16 GB of RAM, on which we trained the network to accelerate training. For the classification model, we split the dataset into two subsets: a training set (80%) and a test set (20%); the training set contains 15,000 samples and the test set contains 3,000 samples. For the object detection model, we labelled 1,000 images, 250 images for each category, and then split the data into a training set (85%) and a test set (15%). We used a Training App to load the dataset and make the training process easier. This app was built on the C# platform by Dao Duy Phuong; its GUI is shown in Figure 6.7.

Figure 6.7 GUI of the Training App

We set the parameters as follows (a hedged code sketch of this configuration is given after the list):

- Number of epochs: 100. The number of passes through the whole training dataset.
- Minibatch size: 32. The number of samples used in one iteration.
- Optimizer: Adam. The optimization algorithm used to update the network weights.
- Loss: Categorical cross-entropy. The loss function that measures how wrong the predictions are; it is used for multi-class classification.
- Momentum: 0.9. It helps reduce the training time of the model.
- Learning rate: 0.0001. A hyper-parameter that controls how much the weights of the network are adjusted with respect to the loss gradient.
- Decay: 0.009. A hyper-parameter by which the learning rate is reduced each epoch: lrate = initial_lrate / (1 + decay * epoch).
- Nesterov: True. A hyper-parameter that helps reduce training time.
- Epsilon: None. Uses the default value of the fuzz factor in numeric expressions.
- Transfer learning: If True, the weights of the network continue training from a previously trained model; if False, the weights are randomly initialized.
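The thesis trains through the C# Training App, so the following Keras-style Python sketch is only an approximation of the listed settings. Note that momentum and Nesterov are options of SGD rather than Adam, so this sketch keeps Adam and applies only the parameters that are meaningful for it; the decay schedule follows the formula quoted above.

```python
# Hedged Keras-style sketch of the listed training configuration (not the thesis code).
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import LearningRateScheduler

INITIAL_LRATE = 0.0001
DECAY = 0.009

def lrate_schedule(epoch):
    # lrate = initial_lrate / (1 + decay * epoch), evaluated once per epoch
    return INITIAL_LRATE / (1.0 + DECAY * epoch)

def compile_and_train(model, x_train, y_train, x_val, y_val):
    model.compile(optimizer=Adam(learning_rate=INITIAL_LRATE),
                  loss="categorical_crossentropy",    # multi-class classification
                  metrics=["accuracy"])
    model.fit(x_train, y_train,
              validation_data=(x_val, y_val),
              epochs=100,
              batch_size=32,
              callbacks=[LearningRateScheduler(lrate_schedule)])
    return model
```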
Figure 6.8 Model under training
Figure 6.9 Visualization of convolutional layer outputs: (a) an originally selected frame; (b)-(f) the feature maps at the first five convolutional layers
Figure 6.10 Change in loss value throughout training
Figure 6.11 Change in accuracy value throughout training

6.5 OUTDOOR EXPERIMENT RESULTS

Once trained on the desktop computer, the model was copied back to the Raspberry Pi. The network is then used by the car's main controller, which feeds it an image frame from the camera as input. In each control period, 10 images per second can be processed, and the top speed of the car around curves is about 5-6 km/h. Representative testing images and the steering wheel angles predicted by the model can be seen in Figure 6.12. The trained model was able to achieve a reasonable degree of accuracy. The car successfully navigates itself on both tracks under diverse driving conditions, regardless of whether lane markings are present or not.

Figure 6.12 Experimental results: the top row shows input images and the bottom row shows the model's Softmax outputs; (a) steer is 100, (b) 110, (c) 120, (d) 130, (e) 140, (f) 150, (g) 160
Figure 6.13 The actual and predicted steering wheel angles of the models
Figure 6.14 The outputs of the object detection model

CHAPTER 7: CONCLUSION AND FUTURE WORK

The paper discussed in Chapter 1 used one model (a classification model); it can run well in the lane at a speed of 5-6 km/h, and the accuracy of the model is 92.38%. However, it cannot identify and localize traffic signs by drawing bounding boxes, so we decided to add an object detection model.

In this thesis, we presented an autonomous car platform based on state-of-the-art AI technology: end-to-end deep learning based real-time control. This work addresses a novel problem in computer vision, which aims to autonomously drive a car solely from its camera's visual observation. Two models were used in this thesis: a classification model (training accuracy: 96.7%, an improvement over the paper discussed in Chapter 1) and an object detection model (training accuracy: 76.3%). We ended up with a system that is able to power our car to drive itself on both tracks at a speed of 5-6 km/h. The encouraging result showed that it is possible to use a deep convolutional neural network to predict the steering angles of the vehicle directly from camera input data in real time.

Despite the complexity of the neural network, embedded computing platforms like the Raspberry Pi and the NVIDIA Jetson Nano Kit are powerful enough to support vision and end-to-end deep learning based real-time control applications. Due to the complexity of the network, each embedded computer can only handle one model, so we need the Raspberry Pi to process the classification model and the Jetson Nano to process the object detection model.

We found the data to be very important for a good model; we needed to collect a large number of images. Without all those images and steering angles, along with their potentially infinite augmentations, we would not have been able to build a robust model.

The issue we observed in training and testing the network is camera latency. It is defined as the period from the time the camera sensor observes the scene to the time the computer actually reads the digitized image data. Unfortunately, this time can be significantly long depending on the camera and the performance of the Pi and the Jetson, and is about 100-120 milliseconds. This is remarkably higher than the latency of human perception, which is known to be on the order of milliseconds. Higher camera latency could negatively affect control performance, especially for safety-critical applications, because the deep neural network would analyze stale scenes.

There are many areas we could explore to push this project further and obtain even more convincing results. In the future, we will continue to investigate ways to achieve better prediction accuracy in training the network, identify and use low-latency cameras, and improve the performance of the RC car platform, especially with respect to precise steering angle control. Furthermore, we are going to take throttle into the model with the ambition of achieving higher levels of autonomous driving. Most importantly, we will research how to use only one model, one computer, and one camera on our car.

REFERENCES

[1] LeCun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., & Jackel, L. D. (1989). Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4), 541-551.
[2] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097-1105).
[3] Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., & Darrell, T. (2014). DeCAF: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning (pp. 647-655).
[4] Fei-Fei, L., Fergus, R., & Perona, P. (2007). Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. Computer Vision and Image Understanding, 106(1), 59-70.
[5] Bojarski, M., Del Testa, D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., & Zhang, X. (2016). End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316.
[6] Karaman, S., Anders, A., Boulet, M., Connor, J., Gregson, K., Guerra, W., & Vivilecchia, J. (2017). Project-based, collaborative, algorithmic robotics for high school students: Programming self-driving race cars at MIT. In 2017 IEEE Integrated STEM Education Conference (ISEC) (pp. 195-203). IEEE.
[7] Otterness, N., Yang, M., Rust, S., Park, E., Anderson, J. H., Smith, F. D., Berg, & Wang, S. (2017). An evaluation of the NVIDIA TX1 for supporting real-time computer-vision workloads. In IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS) (pp. 353-364). IEEE.
[8] Truong-Dong Do, Minh-Thien Duong, Quoc-Vu Dang, & My-Ha Le (2018). Real-time self-driving car navigation using deep neural network. In 4th International Conference on Green Technology and Sustainable Development (GTSD 2018).
[9] Zhi Tian, Chunhua Shen, Hao Chen, & Tong He (2019). Fully convolutional one-stage object detection. arXiv:1904.
[10] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, & Alexander C. Berg (2016). SSD: Single shot multibox detector. arXiv:1512.
[11] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, & Hartwig Adam (2017). MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv:1704.
[12] S. Ioffe & C. Szegedy (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
[13] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, & Z. Wojna (2015). Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567.
[14] J. Jin, A. Dundar, & E. Culurciello (2014). Flattened convolutional neural networks for feedforward acceleration. arXiv preprint arXiv:1412.5474.
[15] D. Ciresan, U. Meier, & J. Schmidhuber (2012). Multi-column deep neural networks for image classification. arXiv preprint arXiv:1202.2745.
[16] Keiron O'Shea & Ryan Nash (2015). An introduction to convolutional neural networks. arXiv:1511.
[17] T. R. Padmanabhan. Programming with Python. Springer.
[18] Vu Huu Tiep. Machine Learning Co Ban. Nha xuat ban Khoa Hoc va Ky Thuat.
