Artificial intelligence for autonomous vehicles


With the advent of advanced AI technologies, driverless vehicles have attracted curiosity across many sectors of society. The automotive industry is experiencing a technological boom around autonomous vehicle concepts. Autonomous driving is one of the crucial application areas of Artificial Intelligence (AI). Autonomous vehicles are equipped with sensors, radars, and cameras, which has made driverless technology possible in many parts of the world. In short, traditional human driving may give way to driverless technology. Many researchers are working on novel AI algorithms capable of handling driverless operation, since existing algorithms cannot yet fully support the concept of autonomous vehicles. This motivates new methods and tools focused on designing and developing frameworks for autonomous vehicles.


1. Cover
2. Table of Contents
3. Series Page
4. Title Page
5. Copyright Page
6. Preface
7. 1 Artificial Intelligence in Autonomous Vehicles—A Survey of Trends and Challenges
4. 2.4 Vehicle Architecture Adaptation
5. 2.5 Future Directions of Autonomous Driving
6. 2.6 Conclusion
2. 4.2 A Study on Technologies Used in AV
3. 4.3 Analysis on the Architecture of Autonomous Vehicles
4. 4.4 Analysis on One of the Proposed Architectures
5. 4.5 Functional Architecture of Autonomous Vehicles
6. 4.6 Challenges in Building the Architecture of Autonomous Vehicles
7. 4.7 Advantages of Autonomous Vehicles
8. 4.8 Use Cases for Autonomous Vehicle Technology
9. 4.9 Future Aspects of Autonomous Vehicles
10. 4.10 Summary
11. References
11. 5 Autonomous Car Driver Assistance System
1. 5.1 Introduction
2. 5.2 Related Work
3. 5.3 Methodology
4. 5.4 Results and Analysis
5. 5.5 Conclusion
1. 7.1 Introduction
2. 7.2 Related Works
3. 7.3 Methodology of the Proposed Work
4. 7.4 Experimental Results and Analysis
5. 7.5 Results and Analysis
6. 7.6 Conclusion
7. References
14. 8 Drivers' Emotions' Recognition Using Facial Expression from Live Video Clips in Autonomous Vehicles
1. 8.1 Introduction
2. 8.2 Related Work
3. 8.3 Proposed Method
4. 8.4 Results and Analysis
5. 8.5 Conclusions
9. 11.9 Cyberattack Defense Mechanisms
10. 11.10 Solution Based on Machine Learning
11. 11.11 Conclusion
12. References
18. Index

1. Table 9.1 Input parameters with results.

List of Illustrations

1 Chapter 1
1. Figure 1.1 Representation of the AV system.
2. Figure 1.2 Traffic signs from the GTSRB dataset [16].
3. Figure 1.3 Straight lane line.
4. Figure 1.4 Curved lane line.
2 Chapter 2
1. Figure 2.1 Summary of fatalities due to traffic accidents.
2. Figure 2.2 Primary components in driverless technology.
3. Figure 2.3 Schematic block diagram for autonomous vehicles.
4. Figure 2.4 Autonomous levels.
5. Figure 2.5 Fundamental navigation functions.
6. Figure 2.6 An insight of a self-driving car.
7. Figure 2.7 Training the model using deep learning techniques.
8. Figure 2.8 Communications between V2V and V2R.
9. Figure 2.9 Automated control system.
10. Figure 2.10 Identification of ambulance locations.
11. Figure 2.11 Alternate routes to reach the target.
3 Chapter 3
1. Figure 3.1 System architecture for selecting the leader car.
4 Chapter 4
1. Figure 4.1 An autonomous vehicle simulator in a traffic environment is proposed.
2. Figure 4.2 Levels of vehicle autonomy.
5 Chapter 5
1. Figure 5.1 Aim of the proposed research.
2. Figure 5.2 Challenges for gesture rule recognition.
3. Figure 5.3 Vehicle features.
4. Figure 5.4 Challenges for vehicle light detection.
5. Figure 5.5 Road markings along the road.
6. Figure 5.6 Road lane marks affected by shadow.
7. Figure 5.7 Road lane marks affected by low illumination.
6 Chapter 6
1. Figure 6.1 Architecture of a complete drone.
2. Figure 6.2 Drug delivery flow diagram.
3. Figure 6.3 Block diagram of first aid delivery system.
7 Chapter 7
1. Figure 7.1 Driverless car.
2. Figure 7.2 Flow diagram of ground surface detection.
3. Figure 7.3 Input and the obstacle detection.
4. Figure 7.4 Image processing stage 2.
5. Figure 7.5 Image processing stage 3.
6. Figure 7.6 Image with the car detection.
7. Figure 7.7 Image processing background reduction.
8. Figure 7.8 Final result analysis.
4. Figure 9.4 (a) Preprocessed image, and panels (b), (c), and (d) are the
5. Figure 9.5 (a) Face image, (b) eye blink, (c) yawning, and (d) head
6. Figure 9.6 Output image of histogram of oriented gradients (HOG).
7. Figure 9.7 Graphical representation of the proposed system performance.
8. Figure 9.8 Graphical comparison of the proposed system with dissimilar classifiers.
10 Chapter 10
1. Figure 10.1 Input image.
2. Figure 10.2 Log Gabor filtered image.
3. Figure 10.3 Autonomous underwater vehicle.
4. Figure 10.4 Detected number plate.
5. Figure 10.5 Recognized number.
11 Chapter 11
1. Figure 11.1 The layered architecture of the autonomous vehicle.
2. Figure 11.2 Automated vehicle.
3. Figure 11.3 Automation levels of autonomous cars.
4. Figure 11.4 Threat model (Karthik et al., 2016).
5. Figure 11.5 IoT infrastructure for autonomous vehicles using AI.
6. Figure 11.6 Cyberattack defense mechanisms.

Artificial Intelligence in Autonomous Vehicles—A Survey of Trends and Challenges

Umamaheswari Rajasekaran, A Malini* and Mahalakshmi Murugan

Thiagarajar College of Engineering, Madurai, India

Abstract

The potential for connected automated vehicles is multifaceted, and automated advancement deals largely with Internet of Things (IoT) development enabling artificial intelligence (AI). Early advancements in engineering, electronics, and many other fields have inspired AI. There are several proposals of technologies used in automated vehicles. Automated vehicles contribute greatly toward traffic optimization and casualty reduction. In studying vehicle autonomy, there are two categories of development: high-level system integrations, such as new-energy vehicles and intelligent transportation systems, and backward subsystem advancement, such as sensor and information processing systems. The Advanced Driver Assistance System shows results that meet the expectations of real-world problems in vehicle autonomy. Situational intelligence that collects enormous amounts of data is also used for high-definition creation of city maps, land surveying, and quality checking of roads. The infotainment system of the vehicle covers the driver's gesture recognition, language interaction, and perception of the surroundings with the assistance of a camera, Light Detection and Ranging (LiDAR), and Radio Detection and Ranging (RADAR), along with localization of the objects in the scene. This chapter discusses the history of autonomous vehicles (AV), trending research areas of artificial intelligence technology in AV, state-of-the-art datasets used for AV research, and several Machine Learning (ML)/Deep Learning (DL) algorithms constituting the functioning of AV as a system, concluding with the challenges and opportunities of AI in AV.

Keywords: Vehicle autonomy, artificial intelligence, situational intelligence, datasets, safety standards, network efficiency, algorithms

1.1 Introduction

An autonomous vehicle, in simple terms, is one whose movements from start to a predecided stop happen in "autopilot" mode. Autonomous vehicle technology is developed to provide several advantages in comparison with human-driven transport. Increased safety on the road is one such potential benefit: connected vehicles could drastically decrease the number of casualties every year. Automated driving software also offers a comfortable transport option that strongly supports people who cannot drive because of age or physical constraints; it is predicted that it could provide them with new opportunities, including work in fields that require driving. Automated decisions are taken without any human intervention, and the necessary actions are implemented to ensure stability of the system. Smart connected vehicles are supported by an AI-based self-driving system that responds to external conditions by sensing the surroundings using Adaptive Driver Control (ADC), which relies on lasers and radars enabled with ML and DL technologies. To extract object information from noisy sensor data, these components frequently employ machine learning (ML)-based algorithms. An understandable representation of autonomous vehicles as a system is shown in Figure 1.1.

The history of driverless cars sparked in the year 1478, with Da Vinci creating the first driverless automobile prototype. Da Vinci's vehicle was a self-propelled robot driven by springs; it had programmable steering and could follow predetermined routes. This brief history shows that the roots of artificial intelligence may be found in philosophy, literature, and the human imagination. Autonomous vehicles (AVs) were first conceptualized in the 1930s, when the Houdina Radio Control project demonstrated a radio-controlled "driverless" car. In the mid-20th century, General Motors took the initiative to develop a concept car called Firebird II, which was considered the basis for the first cruise control car, named "Imperial" and designed by the Chrysler company in 1958. In the 1990s, organizations slowly proceeded by considering safety measures including cruise and brake controls. After the turn of the century, blind-spot detection and electronic stability controls through sensors were made available in self-driven vehicles. One of the biggest achievements came in 1995, when the VaMP-designed autonomous vehicle drove (almost) by itself for 2,000 km. Between 1995 and 1998, the National AHS Consortium demonstrations for cars were held, followed by the PATH organization's automated bus and truck demos. In 2009, the Google Self-Driving Car Project began; eventually, Tesla released updates to its Autopilot software. A self-driving car designed by Google met with a test drive accident in 2016, which was considered a major setback for the development of AVs. At the end of 2016, the Bolt and the Super Cruise in Cadillac featured autonomous controls. At the start of January 2022, Volvo unveiled Ride Pilot, a new Level 3 autonomous driving technology, at the CES consumer electronics expo. Without human input, the system navigates the road using LiDAR, radar, ultrasonic sensors, and a 360-degree camera setup.

Figure 1.1 Representation of the AV system.

As of 2019, self-driven vehicles possess the following features: hands-free operation handles the steering, but monitoring of the system is still needed. As an additional improvement over hands-free steering, adaptive cruise control (ACC) keeps the vehicle at a predetermined distance from surrounding objects. When an interruption is encountered, such as a motorist crossing the lane, the system slows the car down and changes lanes. One of the main problems with AV mobility is how to combine sensors and estimate circumstances to distinguish between risky scenarios and less dangerous ones. AV mobility is quite tedious, but continuous accepted releases in this field give better mobility to the vehicles, and AI is driving a remarkable technological shift. The Department of Transportation (DOT) and NHTSA deal with safety in addition to automation protocols. Section 1.2 of this chapter presents a detailed survey of machine learning and deep learning algorithms used in the AV literature, Section 1.3 analyzes the important AV-pipeline activities individually, Section 1.4 surveys the state-of-the-art datasets used in autonomous vehicles, Sections 1.5 and 1.6 discuss the industry standards, risks, challenges, and opportunities, and Section 1.7 summarizes the analysis.
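As a concrete illustration of the gap-keeping behavior that ACC provides, the sketch below implements a toy proportional-style rule that trades off the gap error against the closing speed to the lead vehicle. The gains, limits, and desired gap are illustrative assumptions, not values from any production controller.

```python
def acc_command(gap_m, closing_speed_mps, desired_gap_m=30.0,
                kp=0.05, kd=0.5, max_brake=-3.0, max_accel=1.5):
    """Toy proportional rule for the gap-keeping idea behind ACC.

    Positive closing_speed means the lead vehicle is getting closer.
    Returns a longitudinal acceleration command in m/s^2, clamped to
    comfortable limits (all numbers here are illustrative assumptions).
    """
    gap_error = gap_m - desired_gap_m            # positive -> too far, speed up
    accel = kp * gap_error - kd * closing_speed_mps
    return max(max_brake, min(max_accel, accel))

# Gap is too small and still shrinking -> the command is to brake.
print(acc_command(gap_m=18.0, closing_speed_mps=2.0))  # -1.6
```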

1.2 Research Trends of AI for AV

To give readers a comprehensive picture of the literature on AI for AV, the current research trends of ML/DL algorithms in the design of autonomous vehicles are presented. These systems rely on several technological advancements in AI. Vehicle automation comprises six different levels. At level 0, the human driver operates the car with no automated control; there is no self-operation at this level. At level 1, the Advanced Driver Assistance System (ADAS) supports the driver with either acceleration or steering control. At level 2, in addition to level 1 capabilities, the ADAS provides brake control such as the Automated Emergency Braking System (AEBS); however, the driver must still pay complete attention to the surroundings. At level 3, almost all automation processes are performed by the system, but the driver's intervention is required in certain operations; real-time high-definition maps must be accessible, complete, and detailed, as they are used to decide the path and trajectory. At level 4, a further development of level 3, the vehicle's advanced driving system (ADS) provides complete automation even if a person does not respond to a request to intervene in some circumstances. A vehicle's six degrees of freedom are indicated by its pose, which is monitored by AV pose sensors: x, y, z, ϕ, θ, and ψ, where x, y, and z depict the actual positions of the system. Eventually, at level 5, the virtual chauffeur concept is introduced by the ADS along with ACC, which does all the driving in all circumstances. Autonomous vehicles can be successfully implemented in a range of use case situations combining the complete working of sensors, actuators, maps, detectors, etc. It is necessary to demonstrate these functionalities in real-life environments such as urban and rural settings combining smart vehicles.
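The autonomy levels and the six-degree-of-freedom pose described above can be written down compactly in code. The sketch below is a minimal, assumed representation; the names and comments are ours, not taken from any standard's reference implementation.

```python
from dataclasses import dataclass
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Driving automation levels as summarized in the text."""
    NO_AUTOMATION = 0      # human driver does everything
    DRIVER_ASSISTANCE = 1  # ADAS helps with steering or acceleration
    PARTIAL = 2            # ADAS also brakes (e.g., AEBS); driver supervises
    CONDITIONAL = 3        # system drives; driver intervenes on request
    HIGH = 4               # no human fallback needed in the design domain
    FULL = 5               # "virtual chauffeur" in all circumstances

@dataclass
class Pose6DoF:
    """Vehicle pose with the six degrees of freedom named in the text."""
    x: float      # position (m)
    y: float
    z: float
    phi: float    # roll (rad)
    theta: float  # pitch (rad)
    psi: float    # yaw (rad)

print(AutomationLevel.CONDITIONAL, Pose6DoF(12.0, -3.5, 0.2, 0.0, 0.01, 1.57))
```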

LiDAR technology ensures the functional and safe operation of an autonomous vehicle. LiDAR is particularly efficient at creating 3D images of the surroundings, which is considered critical in urban settings [1]. A rotating roof-mounted LiDAR sensor creates and maintains a live 3D map of the surroundings within a 60-m range and thus generates a specific route for travel to the destination specified by the driver. Radars mounted at the front and rear measure the distance to obstructions. To determine the car's position in the lane with respect to the 3D map, a sensor on the left rear wheel tracks and signals the system's sideways movement. All of the sensors are connected to the AI program, which receives the corresponding data. In addition, collaborative mapping efforts gather massive amounts of raw data at once, utilizing GPS technologies to recover information on traffic signals, landmarks, and construction projects that cause deviations. By acquiring more knowledge of the NVH (noise, vibration, and harshness) behavior of the entire vehicle, the vehicle's performance systems can be improved. Utilizing Simcenter NVH systems has additional advantages, such as AI-based strategies for improving the effectiveness of NVH testing and NVH performance prediction without the need for intricate simulation models. After the development of the 3D models, the developers focused on objects in the surroundings.
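A rotating LiDAR of the kind described above reports ranges and beam angles; building a live 3D map starts by converting each return into Cartesian coordinates in the vehicle frame. The sketch below assumes a simple (range, azimuth, elevation) layout and a made-up mounting height; real sensors use their own driver and packet formats.

```python
import numpy as np

def sweep_to_points(ranges_m, azimuth_rad, elevation_rad, mount_height_m=1.9):
    """Convert one LiDAR sweep (range, azimuth, elevation per return) into
    3D points in the vehicle frame. The array layout is an assumption."""
    horizontal = ranges_m * np.cos(elevation_rad)
    return np.stack([horizontal * np.cos(azimuth_rad),                    # x: forward
                     horizontal * np.sin(azimuth_rad),                    # y: left
                     ranges_m * np.sin(elevation_rad) + mount_height_m],  # z: up
                    axis=-1)

# One fake sweep: 360 returns on a single ring, all 20 m away, beam tilted down.
azimuth = np.deg2rad(np.arange(360.0))
points = sweep_to_points(np.full(360, 20.0), azimuth, np.full(360, -0.05))
print(points.shape)  # (360, 3)
```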

A novel YOLOv2 vehicle architecture has been developed to address accuracy concerns and the sluggish detection and classification speed. An autonomous driving system was overlaid with augmented reality (AR) displayed on the windshield; AR vehicle display systems are necessary for navigation. Situational awareness has been treated as the command-line interface developed using AR technology. The INRIA dataset was used to identify vehicles and pedestrians as part of the demonstration of AR-HUD-based driving safety instruction, based on an article released by SAE International. SVM and HOG algorithms were used to build the identification method, which detected partial obstacles at 72% and 74% accuracy in frames per second [2]. Since improvement of the above system was demanded, technologists modeled a better version through the use of stereo cameras and augmented reality. The forward collision alert system detected vehicles and people and showed early warnings while applying an SVM classifier, achieving 86.75% for identifying cars and 84.17% for identifying pedestrians [3]. Although it was challenging to establish the motion-detecting strategy, the model was able to control the car in ambiguous situations; this method established a rule base to handle illogical situations and could plan a 45.6-m path in 50.2 s [4]. Another study suggested the use of the R-tree data structure and continuous nearest neighbor search [5]. To simulate 3D rain, stereo pictures are used; the method can be applied to already existing photographs, preventing the need to retake them. Recurrent Rolling Convolution (RRC) for object prediction and the KITTI dataset were employed in tests at the Karlsruhe Institute of Technology, and the results on 450 photographs reveal that the algorithm's measured average precision decreases by 1.18% [6]. The most likely path for 5G-enabled vehicle autonomy is that, through C-RAN, autonomous and connected vehicles running on 5G will be able to access cloud-based ML/AI to make self-driving decisions. In high-mobility settings, edge computing and 5G can greatly speed up and improve data processing. In these cases, edge computing-based data analysis and processing are automatically executed in the servers nearest to the cars. This might significantly lessen data traffic, remove Internet constraints on higher-speed data transfer, and lower the expense of transmission [7]. The researchers confirm that high-speed Internet access can improve data transactions and, eventually, autonomous driving benefits greatly from the same. The stochastic Markov decision process (MDP) has been used to simulate how an autonomous vehicle interacts with its surroundings and learns to drive like an expert driver. The MDP model considers the road layout to account for a wider range of driving behaviors. The autonomous vehicle's desired expert-like driving behavior is achieved by choosing the best driving strategy for the autonomous car. A deep neural network (DNN) has been used to approximate the expert driver's unknown reward function, and the maximum entropy principle (MEP) has been used to train the DNN. The simulations show the ideal driving characteristics of an autonomous vehicle. Additionally, the essential derivations for applying the MEP to understand the applications and workings of a reward function have also been offered [8].
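As a rough illustration of the HOG-plus-SVM detectors cited above, OpenCV ships a HOG descriptor with a pretrained linear SVM for pedestrians. The snippet below uses that stock detector purely as a stand-in; it is not the cited authors' model, and the image path and confidence threshold are placeholders.

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("street_scene.jpg")            # placeholder image path
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)
for (x, y, w, h), score in zip(boxes, weights):
    if float(score) > 0.5:                        # crude confidence cut-off
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", frame)
```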

1.3 AV-Pipeline Activities

Each domain was examined by looking at various approaches and techniques and comparing their benefits, drawbacks, results, and significance. To hasten the construction of a level 4 or 5 AVS, the examination of each domain is summarized as follows.


1.3.1 Vehicle Detection

This section analyzes deep learning methods used for quicker and more precise vehicle identification and recognition in a variety of unpredictable driving situations. One paper suggested an online network architecture for identifying and tracking vehicles. Using the KITTI dataset [9] and an LSTM-based trajectory approximation, the framework updated 3D poses as instances moved around in a global coordinate system, outperforming long-range LiDAR results; LiDAR obtained 350.50 false negatives in a 30-m range, and LiDAR-based false-negative scores for 50-m and 10-m tests were reported as 857.08 and 1,572.33 [10]. To improve recognition speed, a novel YOLOv2 vehicle architecture was proposed [11]. One of the main problems with the real-time traffic monitoring approach was the low resolution of the images, which may be attributed to factors like poor weather. Vehicles in low-resolution photographs and videos were examined for this issue in terms of CNN effectiveness. The neural network architecture operated in two stages: first it detected high-level features, and then it detected low-level attributes. The study evaluated the model's capability to recognize automobiles at various degrees of input resolution, and the results state that CNN is surprisingly effective at identification even with low resolution [12]. Liu et al. demonstrated the least complex approach for AVS by combining a lightweight YOLO network. The technique was applied to a dataset they created themselves and achieved 90.38% precision within 44.5 ms. This could be a turning point for AVS: a quicker and more precise solution for diverse fields of vision, usable during the day or at night [13].
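For a sense of how a YOLO-style detector is applied to vehicle detection in practice, the sketch below uses the open-source Ultralytics package with a small pretrained COCO model as a stand-in; the papers above use their own custom YOLOv2 and lightweight-YOLO variants, and the image path here is a placeholder.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                 # small pretrained COCO model
results = model("dashcam_frame.jpg")       # placeholder dashcam image
for box in results[0].boxes:
    label = model.names[int(box.cls)]
    if label in {"car", "truck", "bus", "motorcycle"}:
        print(label, float(box.conf), box.xyxy[0].tolist())
```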

1.3.2 Rear-End Collision Avoidance

Collisions negatively influence road safety and the assurance of a safe ride. A wide and varied body of publications proposes methods to reduce collisions based on different approaches, such as collision avoidance, collision warning, and collision risk assessment; many of these algorithms were implemented in vehicular ad hoc networks (VANETs). In [14], the authors proposed a collision avoidance theory model. This model works by giving weights to the inputs: characteristics such as the driver's age, visual power, mental health, and physical health are provided to the algorithm, which computes the weights for the characteristics, processes the input, and generates the output. Generally, the output is a warning. The algorithm was implemented in MATLAB, and the other implementation platform was Vissim. That paper was based entirely on human factors, whereas the other model discussed by the authors, a collision warning model, considers human factors together with weather conditions and timings. This algorithm focuses heavily on human visual power and puts forth a visibility-based warning system, which was tested with MATLAB and the PreScan commercial software and is accepted to provide better performance than other warning systems. To prevent crashes, a Multi-Factor-Based Road Accident Prevention System (MFBRAPS) was suggested, implemented with MATLAB and NetLogo. Building on a previous study, the authors retrieve new significant elements that might aid collision avoidance algorithms more efficiently and add these fresh parameters to MFBRAPS to improve the V2V collision warning system. After estimating the likelihood of an accident, the suggested algorithm activates the various functions of the driver assistance system to control the speed and apply the brake, and the alarm is produced as necessary by the driver assistance system [15].
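A toy version of the weighted-input warning model of [14] might look like the sketch below. The factor names follow the description above, but the weights, threshold, and the assumption that all inputs are pre-scaled risk scores in [0, 1] are illustrative only.

```python
def collision_warning(driver_age_risk, visual_risk, mental_risk, physical_risk,
                      weights=(0.30, 0.35, 0.20, 0.15), threshold=0.6):
    """Weighted-input warning: combine driver-related risk factors into one score.

    Inputs are assumed to be pre-scaled to [0, 1] (higher = riskier);
    weights and threshold are illustrative, not the values used in [14].
    """
    factors = (driver_age_risk, visual_risk, mental_risk, physical_risk)
    risk = sum(w * f for w, f in zip(weights, factors))
    return ("WARNING", risk) if risk >= threshold else ("OK", risk)

print(collision_warning(0.7, 0.8, 0.5, 0.4))  # ('WARNING', 0.65)
```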


1.3.3 Traffic Signal and Sign Recognition

The literature on traffic sign recognition is vast, and many unsolved questions are yet to be addressed by researchers from all over the world. Traffic sign detection is indispensable in the design of the autonomous vehicle, as the signs communicate commands that the driver must follow to take appropriate action. The result of traffic sign recognition using the GTSRB dataset is shown in Figure 1.2. A novel structure named Branch Convolutional Neural Network (B-CNN) has been adapted for traffic sign identification: a branch-output mechanism was introduced to the architecture and inserted between the pooling and convolutional layers to increase recognition along with the system's speed and accuracy. The B-CNN-based technique performed exceptionally well in complicated visual settings [17]. The LeNet-5 CNN architecture was used to train on 16 varieties of Korean traffic signs; the training set consisted of 25,000 true positives and 78,000 false positives [18]. Additionally, a convolutional traffic light identification system for AVS based on Faster R-CNN is appropriate for traffic light identification as well as categorization. The authors applied their method to a sizable dataset called DriveU traffic light and obtained an average precision of 92%; however, the remaining false positives could be reduced by using a tracking strategy or an RNN [19]. Traffic sign recognition based on the HSV color model with the LeNet-5 architecture and the Adam optimizer was proposed and tested on the German traffic sign benchmark, yielding a median accuracy of 99.75% at 5.4 ms per frame [20]. The DeepTLR traffic light recognition and classification system is a real-time, vision-dependent, intricate, and very complex system that does not require positional information or temporal priors [14]. The GTSRB dataset served as the training set for the majority of the deep learning techniques. The LeNet-5-based CNN trained on a self-created dataset with spatial threshold segmentation using HSV scored best on the GTSRB dataset for traffic sign identification, despite a decline in performance in complicated environments and in the detection of separated signs. Despite relying on prelabeled data, the YOLO-based technique for traffic light detection with recognition of interior signs obtained the highest accuracy in the shortest amount of time among the methods discussed above [21].
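To make the LeNet-5-style classifiers mentioned above concrete, the sketch below defines a small PyTorch network for 32x32 traffic-sign crops with the 43 GTSRB classes. The layer sizes follow the classic LeNet-5 pattern; it is not the exact model from any of the cited papers.

```python
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    """LeNet-5-style classifier for 32x32 RGB traffic-sign crops."""
    def __init__(self, num_classes=43):                    # 43 GTSRB classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 14
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = LeNet5()(torch.randn(1, 3, 32, 32))   # one dummy crop
print(logits.shape)                            # torch.Size([1, 43])
```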


Figure 1.2 Traffic signs from the GTSRB dataset [16].

1.3.4 Lane Detection and Tracking

Lane detection and tracking are considered essential modules in a complete autonomous vehicle system. Across the literature, they are regarded as an important field of research in autonomous vehicles, and many algorithms have been proposed to solve the problems encountered in lane detection and tracking, which address the perception problems of autonomous vehicles. In comparison with straight lane detection, some works introduce curved road lane detection and tracking as challenging and important for avoiding accidents. Figures 1.3 and 1.4 show a straight lane line and a curved lane line, respectively. One paper categorized the lane detection methods into three groups, feature-based, model-based, and methods based on other trending technologies, and discussed a novel curved road detection algorithm in which the road images were initially divided into a region of interest (ROI) and the background. The ROI was further divided into linear and curved portions. The proposed approach consisted of a series of mathematical equations and the Hough transformation. It was concluded that the proposed algorithm effectively identifies lane boundaries and is effective enough to assist drivers, improving safety standards [22].


Figure 1.3 Straight lane line.

Figure 1.4 Curved lane line.
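The classical feature-based pipeline referenced above, edge detection followed by a Hough transform over a region of interest, can be sketched with OpenCV as follows. The thresholds, ROI polygon, and file paths are illustrative assumptions rather than values from [22].

```python
import cv2
import numpy as np

img = cv2.imread("road_frame.jpg")                       # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

# Keep only a trapezoidal region of interest in front of the car.
h, w = edges.shape
roi = np.zeros_like(edges)
trapezoid = np.array([[(0, h), (w, h),
                       (int(0.55 * w), int(0.6 * h)),
                       (int(0.45 * w), int(0.6 * h))]])
cv2.fillPoly(roi, trapezoid, 255)
edges = cv2.bitwise_and(edges, roi)

# Probabilistic Hough transform: straight lane-line segments.
lines = cv2.HoughLinesP(edges, rho=2, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=100)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 3)
cv2.imwrite("lanes.jpg", img)
```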


Another paper presented a lane detection and tracking approach based on the Gaussian sum particle filter (GSPF). The feature extraction approach in that paper was based on the linear propagation of boundary and lane points while zooming in. Three tracking algorithms, the SIR particle filter, the Gaussian particle filter, and the GSPF, were compared, and it was concluded that GSPF outperforms the other two widely used tracking algorithms [23]. A further paper presented a survey of trends and methods in road and lane detection, highlighting the research gaps. Seven different modalities, namely, monocular vision, LiDAR, stereo imaging, radar, vehicle dynamics, GPS, and GIS, have been utilized for the perception of roads and lanes. The survey highlighted the gap in the literature on the perception problems of multiple-lane roads and nonlinear lanes. Detection of lane splits and merges was proposed as an overlooked perception problem to be solved in the near future for the development of fully autonomous vehicles. The lack of established benchmarks was also discussed as an important issue for comparing the currently efficient methods in different scenarios [24]. Another paper compared computer vision-based lane detection techniques with sensor-based algorithms. The traditional modules in conventional computer vision-based lane detection algorithms, as highlighted in the survey, are image preprocessing, feature extraction, lane fitting, and tracking. Popular edge detection operators such as Canny, Sobel, and Prewitt, line detection operators, and thresholding techniques were highlighted as the commonly used feature extraction techniques based on assumptions about the structure, texture, and color of the road. The potential of deep learning for automatic feature extraction and for learning environments with highly variable conditions was highlighted. The survey also criticized the lack of benchmarked datasets and standard evaluation metrics to test the efficiency of the algorithms [25]. In [26], the authors presented a detailed survey of deep learning methods used in lane detection. The survey categorized the state-of-the-art methods into two-step and single-step methods based on feature extraction and postprocessing. Classification, object detection, and segmentation-based architectures were detailed as the three types of model architectures used for lane detection. The paper also discussed the evaluation metrics of different algorithms based on the TuSimple dataset, which has been regarded as the largest dataset for lane detection. Based on the analysis, the CNN-LSTM (SegNet+) architecture was reported to have the highest accuracy of 97.3%. The computational complexity of deep learning models like CNN and RNN was emphasized as a disadvantage for implementation on low-memory devices. In [27], the authors applied a new CNN learning algorithm for lane detection, namely, the Extreme Learning Machine (ELM), to reduce the training time while using a relatively smaller training dataset. The training time of the ELM-based CNN was reported to be 800 times faster, and the experimental results established that incorporating ELM into CNN also increases the accuracy of the model. Another paper emphasized an edge-cloud computing-based CNN architecture for lane detection, where the concepts of edge computing and cloud computing were introduced to increase data-processing efficiency for real-time lane detection. Their approach achieved fair accuracy compared with other state-of-the-art algorithms in challenging scenarios of insufficient light, shadow occlusion, missing lane lines, and curved lane lines. The accuracy achieved on normal road conditions was 97.57%, and all of the evaluation metrics were derived on the TuSimple dataset [28]. In [29], the authors proposed a multiple-frame deep learning lane detection architecture using CNN and RNN to avoid the ambiguities that arise from detections based on a single frame in the presence of shadow, occlusion by vehicles, etc. The continuous-scene architecture has CNN layers, and the multiple-frame inputs constitute time-series data that are fed into the recurrent neural network. The CNN architecture acts as the encoder-decoder circuitry for reducing the size of the input, and the LSTM architecture determines the next scene, thus detecting the lane. The work laid a strong foundation for using CNN-RNN architectures for detecting lanes in challenging scenarios and established the idea that situations that cannot be analyzed using a single-image input can be improved by using a multiple-frame input architecture. In [30], the authors introduced YOLO-based obstacle and lane detection algorithms based on video inputs and the TuSimple dataset. The inverse perspective transform was used to construct the bird's-eye view of the lane. Earlier deep learning models are criticized for their slow performance, while this YOLO-based architecture achieved an accuracy of 97.9% for lane detection within a time span of 0.0021 s; this increased processing speed has been claimed to be advantageous for implementation in real-time autonomous vehicle scenarios.
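The multiple-frame idea of [29], a per-frame CNN encoder feeding an LSTM that aggregates the scene over time, can be sketched in PyTorch as below. The layer sizes, coarse output mask, and sequence length are deliberately tiny placeholders; the published SegNet/ConvLSTM models are far larger.

```python
import torch
import torch.nn as nn

class ConvLSTMLane(nn.Module):
    """Toy multi-frame lane detector: CNN encoder per frame -> LSTM -> coarse mask."""
    def __init__(self, feat_dim=256, mask_hw=(16, 32)):
        super().__init__()
        self.mask_hw = mask_hw
        self.encoder = nn.Sequential(                       # one frame -> feature vector
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 8)), nn.Flatten(),
            nn.Linear(32 * 4 * 8, feat_dim),
        )
        self.temporal = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.decoder = nn.Linear(feat_dim, mask_hw[0] * mask_hw[1])

    def forward(self, frames):                              # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.temporal(feats)                       # aggregate over time
        mask_logits = self.decoder(out[:, -1])              # predict from last step
        return mask_logits.view(b, 1, *self.mask_hw)

print(ConvLSTMLane()(torch.randn(2, 5, 3, 128, 256)).shape)  # torch.Size([2, 1, 16, 32])
```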

1.3.5 Pedestrian Detection

One of the main vision-based issues for AVS is accurately recognizing and localizing pedestrians on roads in a variety of settings. The study of vehicle autonomy aims at increasing the accuracy of identifying and localizing pedestrians to prevent fatalities, and numerous studies have been successful in lowering accident rates and developing more accurate and durable approaches to autonomous driving technologies. The approaches used for pedestrian detection can be broadly categorized into those that use man-in-the-loop feature extraction and deep learning-based methods. For pedestrian detection, the most sought-after deep learning model is the convolutional neural network, which has been used in many studies to detect pedestrians. In that vein, a deep network named large-field-of-view (LFOV) was introduced to uniformly scan complex representations of pedestrians. The proposed structure was created to analyze and make classification judgments across a broad extent of neighborhoods concurrently and efficiently. The LFOV network can implicitly reuse calculations since it analyzes large regions at speeds substantially faster than those of standard deep networks. This pedestrian detection system demonstrated convincing performance for practical application, taking 280 ms per image on a GPU with an average miss rate of 35.85% [31]. For pedestrian detection, a new CNN architecture was also proposed [32]. To annotate the positions of pedestrians, the authors used synthetic photographs and transfer learning, together with bounding box proposals for uncovered regions. When crowded scenarios were taken into account, it achieved a 26% miss rate on the CUHK08 dataset and a 14% miss rate on the Caltech pedestrian dataset. The major benefit was that it did not need a region proposal method and did not require explicit detection during training.

In [33], the authors presented a detailed survey of the state-of-the-art pedestrian detection algorithms in the domain of automobiles and safety. The position of humans in the lane, pose, color, environment, weather, texture, etc., increase the complexity of the pedestrian detection problem, making the classical template matching approach inefficient for real-time detection. The feature-based detection approach requires man-in-the-loop feature extraction methods, whereas deep learning models self-learn features during training. This has been highlighted as an advantage of deep learning-based detection methods, where the models can achieve fair accuracy given a huge training set; nevertheless, both approaches are considered to be implemented on par with each other. CNN has been regarded as the most widely used algorithm for pedestrian detection. Bounding box, miss rate, TP, TN, FP, FN, False Positives per Window, AMR, and IoU are regarded as the most commonly used evaluation metrics. It has been concluded that accuracy and cost are still a trade-off to be balanced in pedestrian detection, as a robust solution to this complex problem is yet to be proposed. In [34], the authors exemplified the similarities between the pedestrian detection pipeline and the object detection pipeline and conducted experiments to optimize a CNN-based detection pipeline for pedestrian detection. Sliding Window, Selective Search, and LDCF algorithms were tested for efficiency in the selection of candidate regions: Selective Search was found unsuitable, the Sliding Window approach achieves better recall, and the LDCF algorithm was concluded to be the best region proposal algorithm. Their approach was evaluated on the Caltech Pedestrian dataset using error metrics such as MR and FPPI, and the entire experiment was run on the NVIDIA Jetson TK1, a widely used hardware platform in smart cars. In [35], the authors proposed a hybrid approach by combining HOG, LUV, and a CNN classifier, namely, the multilayer-channel feature (MCF) framework. By eliminating overlapping windows, the cost of the CNN was reduced. The proposed approach was evaluated on the Caltech, KITTI, INRIA, TUD-Brussels, and ETH datasets, and the performance of MCF was considered fair in comparison with the state-of-the-art methods. In [36], the authors proposed a feature sharing-based deep neural network architecture for pedestrian detection, thereby reducing the computational cost of feature extraction during the model training phase. Feature sharing is employed among the different ConvNet-based detectors that differ in their model window sizes. INRIA and Caltech were used for the evaluation, and it was reported that, using four ConvNet detector units, the detection time decreased by almost 50%. In [37], the authors criticized the slow detection speed of conventional deep learning algorithms and used the YOLO architecture for pedestrian detection. The experiments were trained using the INRIA dataset and a few selected images from PASCAL VOC; YOLOv2 was concluded to have a higher detection speed than the approach presented in the paper. In [38], the authors proposed a two-module pedestrian detection architecture based on convolutional neural networks. In the region generation module, in contrast to image pyramids, feature pyramids are used to capture features of different resolutions. A modified ResNet-50 architecture is used as the backbone network of both modules. The deep supervision-based region prediction module achieved a precision score of 88.6% and an MR of 8.9%, and it was concluded that a fair trade-off between real-time prediction and accuracy was achieved. In [39], the authors propose a convolutional neural network architecture for real-time pedestrian detection tasks. The approach is a lightweight architecture for real-time detection and warning to avoid accidents. A regression-based problem-solving approach is implemented, in contrast to conventional classification modules, to increase inference speed, and pruning and quantization are highlighted as the model compression techniques employed. The observed inference speed was 35 frames per second with an mAP of 87% on hardware specifically designed for autonomous vehicles. Hazy weather pedestrian detection is considered even more challenging than clear-day pedestrian detection. In [40], the authors proposed a hazy weather pedestrian detection framework based on a modified YOLO architecture, reducing cost and thereby improving accuracy; the contributions made in that work also include a hazy weather pedestrian detection dataset. They present three YOLO-based detection models, namely, Simple-YOLO, VggPrioriboxes-YOLO, and MNPrioriboxes-YOLO. The MNPrioriboxes-YOLO, which uses depth-wise convolutions based on the MobileNetV2 architecture, has the fewest parameters among all of the methods, and it was concluded that the strategies employed reduced the number of parameters, thereby decreasing the running time of the algorithm. In [41], the authors presented an approach to pedestrian detection in nighttime scenarios using multispectral images based on CNN. The authors studied different image fusion and CNN fusion methods, and the pixel-level image fusion technique was considered superior to the CNN fusion methods; all of the experimental conclusions presented in the work were based on the KAIST multispectral database. In [42], the authors considered the multispectral approach to pedestrian detection as a solution to challenges encountered in cases of poor illumination, occlusion, etc. An R-FCN-based detector was used, and information from both the color and thermal images is fused using a network-in-network model. Their approach, evaluated on the KAIST dataset, revealed better performance than the other state-of-the-art methods implemented for multispectral images.
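Since IoU and miss rate recur as evaluation metrics throughout the pedestrian detection work surveyed above, a minimal IoU helper is sketched below; boxes are assumed to be in (x1, y1, x2, y2) corner format.

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

print(round(iou((10, 10, 50, 50), (30, 30, 70, 70)), 3))  # 0.143
```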

1.4 Datasets in the Literature of Autonomous Vehicles

Datasets underpin many scholarly studies by helping to solve real-world instances of problems. They enable quantitative analysis of methods, revealing important information about their strengths and weaknesses. The performance of algorithms must be validated on real-world datasets: the algorithms must, for instance, deal with complicated objects and settings while contending with difficult environmental factors like direct illumination, shadows, and rain. This section surveys the current datasets used in the literature on autonomous vehicles.

1.4.1 Stereo and 3D Reconstruction

To evaluate the effectiveness of stereo matching algorithms, the Middlebury stereo benchmark was developed in 2002, leading to the Middlebury Multi-View Stereo (MVS) benchmark. Although only two scenes in size, the benchmark was crucial in the development of MVS methods. In comparison, a later dataset captured 124 diverse scenarios in a controlled laboratory environment; light scans from each camera position are combined to create reference data, and the resulting scans are extremely dense, with an average of 13.4 million points per scan. In contrast to the datasets before it, whole 360-degree models for 44 sceneries were created by rotating and scanning, and high-resolution data were captured. DSLR images and synced stereo videos in various indoor and outdoor settings were also included. With the use of a reliable procedure, all images can be registered using a high-precision laser scanner, and the high-resolution photographs make a thorough evaluation of 3D reconstruction possible [43].

1.4.2 Optical Flow

The mobility concept in AI has huge capability and upgrades working efficiency by collecting enormous amounts of data, and norms direct the system toward the output platform, thereby meeting the goal of vehicle autonomy. A toothbrush was used to apply hidden fluorescent texture to the objects, which is tracked in all non-rigid sequences to determine the ground truth motion. The dataset is made up of eight distinct sequences with a frame count of 8 each, and for each sequence one frame pair receives ground truth flow [44]. Contrary to other datasets, the Middlebury dataset offers extremely accurate and dense ground truth, which enables the evaluation of sub-pixel precision. Accurate reference data are generated by monitoring pixels over extensively sampled space-time volumes using high-speed video cameras. With this technique, optical flow ground truth may be automatically acquired in difficult everyday settings, and realistic elements like motion blur can be added to compare approaches under various circumstances.
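Dense optical flow of the kind these benchmarks evaluate can be estimated, for example, with OpenCV's Farneback implementation. The sketch below is a generic usage example with placeholder frame paths, not the Middlebury evaluation protocol itself.

```python
import cv2

prev = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)   # placeholder frames
curr = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

# Dense flow: one (dx, dy) vector per pixel, shape (H, W, 2).
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean displacement (px):", magnitude.mean())
```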


1.4.3 Recognition and Segmentation of Objects

The Microsoft COCO dataset was introduced in 2014 for object detection. It offers images of intricate scenes with typical objects shown in their normal surroundings. The number of instances per category in Microsoft COCO is indeed higher than in the PASCAL VOC object segmentation benchmark. The dataset contains 328k photographs, 2.5 million annotated instances, and 91 object classes. Even in significant crowds, all objects carry per-instance segmentation labels. The intersection-over-union measure is employed for evaluation, much like PASCAL VOC [45].
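COCO-style per-instance annotations are usually read through the pycocotools helper. The sketch below assumes a locally downloaded annotation file and simply counts person instances, as a minimal illustration of the per-instance labels described above.

```python
from pycocotools.coco import COCO

coco = COCO("annotations/instances_val2017.json")   # assumed local annotation path
person_id = coco.getCatIds(catNms=["person"])
image_ids = coco.getImgIds(catIds=person_id)
first_anns = coco.loadAnns(coco.getAnnIds(imgIds=image_ids[0], catIds=person_id))
print(len(image_ids), "images contain people;",
      len(first_anns), "person instances in the first one")
```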

1.4.4 Tracking Datasets

The multi-object tracking benchmark consists of 14 difficult video sequences shot in unconstrained contexts with both stationary and moving cameras. Three groups were annotated: moving or stationary pedestrians, people who are not standing upright, and others. Approaches are assessed using two well-known tracking metrics: MOTA (Multiple Object Tracking Accuracy) and MOTP (Multiple Object Tracking Precision) [46].
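MOTA is commonly computed as one minus the ratio of misses, false positives, and identity switches to the number of ground-truth objects; the helper below shows that arithmetic with made-up counts. MOTP, by contrast, averages the position error of the matched pairs and is not shown here.

```python
def mota(false_negatives, false_positives, id_switches, num_ground_truth):
    """Multiple Object Tracking Accuracy: 1 - (FN + FP + IDSW) / GT."""
    errors = false_negatives + false_positives + id_switches
    return 1.0 - errors / float(num_ground_truth)

# Made-up counts for illustration only.
print(mota(false_negatives=120, false_positives=80, id_switches=15,
           num_ground_truth=2000))  # 0.8925
```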

1.4.5 Datasets for Aerial Images

The ISPRS benchmark provides airborne sensor data for the detection of urban objects as well as the reconstruction and segmentation of 3D buildings. It is made up of the Vaihingen and Downtown Toronto databases. The object classes taken into account in the object detection process are those of a road scene. Three areas with different object classes and sizable road detection test sites are available in the Vaihingen dataset, and there are two smaller sections for building reconstruction and object extraction, similar to Vaihingen.

1.4.6 Sensor Synchronization Datasets

A camera's synchronization connects photographs taken at different levels and angles. The exposure trigger time is represented by the timestamp of the image, and the full rotation of the current LiDAR frame is represented by the timestamp of the LiDAR scan. This method typically produces good data alignment since the camera's synchronization is effectively instantaneous [47].
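A common first step when aligning such streams is to pair each LiDAR sweep with the camera exposure whose timestamp is closest. The helper below shows that nearest-timestamp lookup with toy values; it ignores the sweep-duration correction a real pipeline would apply.

```python
import bisect

def nearest_camera_frame(camera_stamps, lidar_stamp):
    """Index of the camera exposure timestamp closest to a LiDAR sweep timestamp.

    camera_stamps must be sorted (seconds since some common clock).
    """
    i = bisect.bisect_left(camera_stamps, lidar_stamp)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(camera_stamps)]
    return min(candidates, key=lambda j: abs(camera_stamps[j] - lidar_stamp))

camera_stamps = [0.00, 0.05, 0.10, 0.15, 0.20]
print(nearest_camera_frame(camera_stamps, 0.12))  # 2 (the 0.10 s frame)
```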

1.5 Current Industry Standards in AV

Innovative technologies like AVs come with risks and consequences that could reduce society's acceptance. These risks include those related to the environment, market, society, organizations, politics, economy, technology, and turbulence. An examination of 86 documents produced by 29 leading AV technology firms summarizes the industry standards. The main topic here is technological risk, defined as potential adverse social, economic, and physical effects linked to individuals' worries about utilizing cutting-edge technologies. The consequence is that, since no single framework can adequately account for the wide range of potential machine decisional applications, we should not try to create a generic "one size fits all" model of artificial intelligence application. AVs are related to five different categories of technological risk: security, privacy, cybersecurity, liability, and industry impact. Governments must implement new policies and laws to resolve the risks connected with AVs to guarantee that society benefits as much as possible from the developing AV sector. According to estimates, human error is to blame for at least 90% of automobile collisions. By surpassing human drivers in perception, decision-making, and execution, AV adoption potentially reduces or eliminates the main cause of auto accidents. According to authors Collingwood and Litman, fewer drivers may use seatbelts and pedestrians may become less cautious as a result of feeling safer. This means that a nonlinear relational model of classification notions must be constructed and that the technology must be accepted at face value to appropriately frame each unique choice situation. Additionally, the absence of human mistakes does not imply the absence of mechanical error. The possibility of technical weaknesses compromising vehicle security rises along with the ramifications for telecommunications. The catastrophic Tesla automatic steering system accident in 2016 emphasizes the limitations of the components' ability to avoid accidents and revealed the vagueness of the developed system's understanding. Concerns are raised about how "crash algorithms" should be used to program AVs to react in the event of inevitable accidents. Rules to govern AVs' responses according to moral standards are necessary because the harm produced by AVs in accidents cannot be subjectively assessed due to the "absence of responsibility." The overall tone of the AV reports examined for this study is predominantly favorable; however, this finding must be taken in the context of the fact that these reports are prepared for a specific group of stakeholders, including investors, consumers, and regulatory bodies. The AV industry papers make numerous references to ethical dilemmas, even though they lack the definiteness and insight of the descriptions in the academic literature. It appears that reliability, sustainability, comfort, human oversight, control, and scrutiny, as well as the science-policy relationship, are the ethical issues that were handled by most organizations, with safety and cybersecurity coming in second.

1.6 Challenges and Opportunities in AV

Without a doubt, the driverless car has many benefits, such as providing a means of transportation for people who cannot drive and reducing the driver's stress on the road. However, in addition to these positive outcomes, numerous obstacles have been encountered, and they have to be addressed to implement a successful AV model. The following are a few of the predominant difficulties facing driverless vehicles:

1.6.1 Cost

Numerous automakers have had to invest significant sums of money in constructing these autonomous vehicles. Google is one example: it pays about $80,000 for one of its AV models, making it completely out of reach for the average person or business. According to projections, this price is expected to decrease by half in the future, which would still be only somewhat more affordable. According to a recent JD Power survey, 37% of consumers say they will select an autonomous vehicle as their next vehicle in the future.

1.6.2 Security Concerns

The largest problems with electronic systems are usually security and privacy. The AI system that autonomous vehicles are based on needs an Internet connection to manage and transmit information, making it a vulnerable medium that hackers can exploit. The second main worry is the possibility of terrorist action, where the autonomous car platform could provide a convenient means to carry out attacks. Moreover, these vehicles depend on GPS systems; anybody who gains access to them can operate them for malicious purposes.


1.6.3 Standards and Regulations

Some rules and regulations must be established before autonomous vehicles can be used. Not only the owners of these vehicles but also the automakers using this technology must formally adopt and strictly adhere to these criteria. In the United States, proposed laws and guidelines include those of Nevada (NRS 482.A and NAC 482.A) and California (Cal. Veh. Code Division 16.6).

1.7 Conclusion

This chapter highlighted the research flows and issues related to artificial intelligence in autonomous cars, elaborating on the history of autonomous vehicles. SVM is identified as the ML algorithm that has been given significant importance for the development of classification models in the AV literature. Other than SVM, mostly deep learning algorithms such as CNN, R-CNN, and YOLO architectures are utilized for the object recognition and classification jobs. The performance of a model highly depends upon the quality of the data, and deep learning algorithms require huge datasets to achieve fair accuracy. KITTI, KAIST, COCO, PASCAL VOC, GTSRB, INRIA, and TuSimple are some of the benchmark datasets observed to be widely used in the survey. In the AV pipeline activities, machine learning-based algorithms demand the man-in-the-loop feature extraction method; hence, deep learning has been suggested as the better alternative. However, due to their computational complexity, deep learning models are not always suitable for real-time implementation. The literature on AI in AV profoundly suggests the design of models that are both less costly and more accurate in solving the problems of AV pipeline activities. For object detection and recognition activities, YOLO has replaced the traditional convolutional architectures. For edge-based implementations, a few research papers propose edge-compatible lightweight convolutional architectures using techniques such as depth-wise separable convolutions. A lack of proper benchmark datasets for the comparison of the state-of-the-art algorithms has been reported in several works. Several studies also suggested that the performance of the algorithms may not be the same on all datasets; hence, without proper comparison based on datasets, the efficiency of the algorithms cannot be concluded. The modules described in the AI pipeline, for example, pedestrian detection, have been regarded not only in the AV literature but also in other domains of research where there is a need to identify humans. The literature on AI in autonomous vehicles can be concluded to be a booming field of research, with more and more focus on the development of self-driving cars, ADAS, etc. With the advancements in cloud computing, edge computing, and the rapidly refined designs of edge-compatible GPUs, research in AV promises advanced outcomes in the future.

Age of Computational AI for Autonomous Vehicles


Akash Mohanty1, U. Rahamathunnisa2, K. Sudhakar3 and R. Sathiyaraj4*

1School of Mechanical Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India

2School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India

3Department of Computer Science and Engineering, Madanapalle Institute of Technology, Madanapalle, Andhra Pradesh, India

4Department of CSE, GITAM School of Technology, GITAM University, Bengaluru, Karnataka, India

Autonomous vehicles have made a great impact on research and industrial growth over the past era. The automobile industry is now being revolutionized by self-driving (or driverless) technology owing to enhanced and advanced autonomous vehicles that make use of cutting-edge computational methods from the fields of machine intelligence and artificial intelligence (AI). Thanks to technological advancements in computationally powerful AI algorithms, autonomous vehicles are now able to assess their surroundings with high accuracy, make sensible choices in real-time environments, and function legitimately without human intervention. The development of autonomous vehicles relies heavily on cutting-edge computational technologies. This chapter aims to review the contemporary computational models over time and presents the computational models in the areas of Machine Learning, its subset Deep Learning, and Artificial Intelligence. The chapter initially discusses the role of AI, followed by its autonomy levels. The learning algorithms that perform continual learning are addressed along with advances in intelligent vehicles. We critically evaluate the key issues with computational approaches for complex driverless applications. The integration of computational technologies is presented in brief, addressing how these technologies can empower autonomous vehicles. A classification of technological advancements with future directions is given in conclusion.

Keywords: Autonomous vehicles, computational methods, artificial intelligence, machine learning, deep learning, continual learning, classification, intelligent vehicles

2.1 Introduction

Autonomous vehicles are vehicles that can operate independently of a human driver, a scientific development that aims to revolutionize transportation and is currently one of the biggest trends. Although the development of autonomous vehicles has gained popularity over the past 20 years, its modern development began in the 1990s. Francis Houdina, an electrical engineer from New York, was the first to put the idea of an autonomous vehicle into practice in 1925; however, his vehicle required remote control. A car crashed into the prototype while it was on display to the public in Manhattan, interrupting its run of nearly 19 km between Broadway and Fifth Avenue. Chandler, Houdina's car, was nevertheless developed between 1926 and 1930. A Mercedes-Benz van was later transformed into an autonomous vehicle in the 1980s by the German Ernst Dickmanns, who is regarded as the inventor of the modern autonomous vehicle. This vehicle was driven by an integrated computer, and in 1987 it was able to travel at 63 km per hour through streets with no traffic. Similar maneuvers were made in 1994 with a car that covered more than 1,000 km through congested Paris, and a Mercedes-Benz made an autonomous trip from Munich to Copenhagen in 1995. These initiatives were funded by the European Commission as part of Project Eureka, which gave Dickmanns over 800 million euros to conduct this kind of vehicle research. Driverless cars are indeed a major topic in the realm of vehicular technology, which has now rapidly grown in prominence [1]. Over the past few decades, numerous businesses and scholars have become interested in the burgeoning field of autonomous driving, which includes self-driving automobiles, drones, trains, and agricultural equipment in addition to military uses [2]. The first radio-controlled Chandler, the American Wonder, which introduced the idea of autonomous driving, was displayed in New York City in the 1920s [3]. Several years later, Carnegie Mellon University built the first self-driving car, delivered in 1986; it was merely a van that could drive itself.

In 2005, a different team from Carnegie Mellon University developed a self-driving car that succeeded over an 8-mile course and won the DARPA Grand Challenge [3]. In 2010, Google researchers created an autonomous vehicle that had already driven over 140,000 miles between San Francisco and Los Angeles. Self-driving hardware was implemented in the products of many automakers in 2016 [3], including Mercedes-Benz, BMW, Tesla, and Uber. Several categories of automated vehicles, such as drones or trains, may evolve more rapidly because their habitats are less influenced by human activity than those of automated road vehicles. The central concept behind self-governing vehicles is to employ artificial intelligence (AI) to intelligently guide the vehicles depending on their surroundings [4]. The disciplines involved include (a) supervised and unsupervised machine learning approaches for decision-making; (b) deep learning and computer vision technologies to analyze information from video; and (c) sensor-based processing techniques to ensure vehicle safety and control [4, 5]. The progress of autonomous vehicles [1] also strongly depends on improvements in distributed hardware devices, namely, big data technologies and network connectivity, and utilizes graphical processing units (GPUs) for fast processing [6, 7].

2.1.1 Autonomous Vehicles

Automated vehicle users are enthusiastic about its arrival in the public sector. A self-driving vehicle is one that can run unsupervised and without human interference. Modern autonomous vehicles, according to Campbell et al. [9], can assess their immediate surroundings, classify the various objects they encounter, and interpret sensory data to choose the best routes to take while adhering to traffic laws. Remarkable advances have been made in giving a relevant response to unforeseen circumstances where there may be a failure in the drive systems or where an external agent might not behave as anticipated by the underlying models. To successfully execute self-governing navigation in such situations, it is crucial to integrate a variety of technologies from various fields, notably electronics engineering, mechanical engineering, computer science, control engineering, and electrical engineering, among many others [8].

The first radio-controlled car, known as the “Linriccan Wonder,” marks the beginning of the autonomous vehicle era in 1926. Since the launch of the vision-guided Mercedes-Benz robotic van in the 1980s, the emphasis has been mostly on vision-guided systems using GPS, LiDAR, computer vision, and radar, and there have been considerable developments in technology for autonomous vehicles since then. As a result, steering assist, adaptive cruise control, and lane parking have become autonomous technologies seen in modern cars. We will eventually live in a world with fully autonomous vehicles, as per official projections given by auto manufacturers.


Traffic calamities are among the most prevalent causes of death globally. The world could avert 5 million civilian deaths and 50 million major injuries by 2020 by putting newer creative ideas into practice and improving road safety at all scales, from the local to the global. It is critical, in the opinion of the Commission for Global Road Safety, to put an end to this horrifying and unnecessary rise in traffic accidents and begin year-over-year declines [9]. According to one paper [8], over 50% of the 3,000 people who die each day as a consequence of traffic disasters are not passengers in a car. Deshpande et al. [8] have also stated that if urgent action is not taken, the number of transportation-related deaths is expected to increase to 2.4 million annually and rank as the fifth greatest cause of mortality worldwide. The number of traffic collisions will consequently decline drastically as a result of an automated system's greater reliability and faster response time compared with humans. Autonomous vehicles would also reduce safety gaps and improve traffic flow management, which would increase roadway capacity and lessen traffic congestion. Figure 2.1 presents a summary of fatalities due to traffic accidents. With the introduction of autonomous vehicles, parking shortages will be a thing of the past because vehicles may pick up passengers, drop them off, park in any available spot, and then pick them up again; less parking space would be needed as a result. Physical road signs will become less important as autonomous vehicles obtain all the information they require via a network.


Figure 2.1 Summary of fatalities due to traffic accidents.

There would be less demand for traffic officers; autonomous vehicles can therefore cut back on government spending on things like traffic enforcement. Along with a decline in automobile theft, there will also be a reduction in the demand for auto insurance. The elimination of unnecessary passengers altogether will allow for the establishment of efficient systems for the transportation of goods and for shared vehicles. Not everyone is able to drive, and autonomous vehicles relieve occupants of the need to drive and navigate. Additionally, commuting times will be shortened because autonomous vehicles can move more quickly with little danger of mistake. The ride will be more comfortable for the car's passengers than it would be in a non-autonomous vehicle. While autonomous vehicles have many advantages, there are also some drawbacks. Although the idea has been disputed, it is still believed that the introduction of autonomous vehicles would result in a decline in occupations involving driving. Another significant difficulty is when drivers lose control of their vehicles because of inexperience, for example. Many individuals enjoy driving, so giving up control of their vehicles would be tough for them. Additionally challenging is the interaction of human-driven and autonomous cars on the same road. Another concern with self-driving cars is who should be held accountable for damage: the automaker, the car's users or owner, or the government. Determining a legal mechanism and government regulations for automated automobiles is thus a crucial problem. Another significant issue is software reliability. Additionally, there is a chance that the computer or communication system in a car could be hacked, and a chance that terrorist and criminal activity may increase; for instance, terrorist groups and criminals might pack cars with explosives, and cars might also be used in numerous other crimes and as getaway vehicles. Autonomous vehicles thus have benefits and drawbacks. This chapter explores the history of autonomous vehicles in a sequential fashion, starting with historical precedents and moving on to modern developments and projections for the future. The development of autonomous vehicle technology has advanced significantly during the last couple of decades. The vehicle's adaptive, safety, and energy-saving capabilities are significantly enhanced, which establishes it as a distinct area of study for automotive engineering, a trend for the automotive industry, and a force for economic progress [10]. The integration of so many advanced and cutting-edge technologies into autonomous driving presents obstacles and opportunities in plenty. Critical questions include what the autonomous vehicle's purpose is, what the main technical obstacles are, and how to overcome them so that the objective may be achieved. The science of science is acknowledged as an effective method for keeping track of research activities and trends [11] and may be clustered to learn about the hottest subjects in a certain field [12]; it can offer useful details regarding the state of autonomous driving research. This study analyzes the research hot spots that have changed over the last 20 years, pointing out the path for additional research and tracing the evolution of autonomous driving technology. It also discusses recent accomplishments based on bibliometric analysis and literature evaluation.

2.1.2 AI in Autonomous Vehicles

John McCarthy, a computer scientist, coined the term “artificial intelligence” in the year 1955. Artificial intelligence is the capability of a computer program or machine to reason, learn, and make decisions. In most cases, the phrase describes a machine that mimics human intelligence. Through AI, we can instruct computers and other devices to carry out tasks that a human being would do. Such algorithms and devices are fed enormous volumes of data, which are then evaluated and processed in order to reason logically and perform human-like actions. The primary components of driverless technology are given in Figure 2.2. AI is being used in automated driving and diagnostic imaging tools with the goal of preserving life, and automating repetitive human labor is merely the tip of the AI iceberg.

2.1.2.1 Functioning of AI in Autonomous Vehicles

Although AI has recently gained popularity, how will it work in driverless vehicles? Consider how a person drives a car, using sight and hearing to keep an eye on both the road and other cars. We swiftly decide whether to stop at a red light or wait for someone to cross the road by drawing on memory, and decades of driving practice teach us to look for the little things, such as a larger speed bump or a more direct route to work. While we are developing autonomous vehicles, we want them to operate similarly to human drivers. Because of this, it is essential to give these machines access to the sensory, cognitive, and executive capabilities that people use when driving a car. The automobile sector has undergone steady change during the last few years in order to achieve this. According to Gartner, multiple V2X (vehicle-to-everything) techniques will connect 250 million vehicles to each other and to associated services by 2020. As the quantity of information entering telematics systems or in-vehicle infotainment (IVI) units increases, vehicles will be able not only to capture and broadcast underlying system status and location information but also to adapt to their environment on a real-time basis. Through the addition of sensors, cameras, and communication systems, autonomous cars will be capable of producing massive volumes of data that, combined via AI, will allow them to see, hear, think, and act in the same ways as human drivers.

Figure 2.2 Primary components in driverless technology.

2.2 Autonomy

An autonomous vehicle is one that can perceive its environment and operate on its own with no human input. A human rider is not required to control the vehicle at any time. Figure 2.3 portrays a compact architecture block diagram for autonomous vehicles.

2.2.1 Autonomy Phases

The term “autonomous” has multiple definitions, and this vagueness led specialists in the field of autonomous cars to categorize autonomy into different levels, with the following stages denoting the degree of automated system control and the involvement of drivers in a vehicle [13]. Autonomous levels are summarized in Figure 2.4 and sketched in code after this list.
Level 0: Only optional and necessary hazard alarms are offered; there are no autonomous driving or control capabilities at this level.
Level 1: Drivers still have complete control of the vehicle, but automated control systems share that power with them. These cars frequently come equipped with cutting-edge driving aids.
Level 2: The automated control system can take complete control of the car, but the driver is still entirely in charge of operating it.
Level 3: The autonomous car can drive completely independently of the passengers, but drivers must take over if the alarm systems call for any physical intervention.
Level 4: The automated mechanism takes complete governance of the vehicle with no driver engagement or observation, except in unforeseen events where the driver regains control.
Level 5: No consideration is given to human intervention at this final level.
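The levels above map naturally onto a small enumeration that driving software can branch on. The following is a minimal illustrative sketch; the names and the attention rule are assumptions made for the example, not definitions from the chapter.

```python
# A minimal sketch (not from the chapter) mapping the autonomy levels described
# above to an enumeration that driving software could branch on.
from enum import IntEnum


class AutonomyLevel(IntEnum):
    NO_AUTOMATION = 0           # alerts only, driver does everything
    DRIVER_ASSISTANCE = 1       # automation shares control with the driver
    PARTIAL_AUTOMATION = 2      # system can take over, driver stays responsible
    CONDITIONAL_AUTOMATION = 3  # system drives, driver must take over on request
    HIGH_AUTOMATION = 4         # no supervision needed except unforeseen events
    FULL_AUTOMATION = 5         # no human intervention considered


def driver_attention_required(level: AutonomyLevel) -> bool:
    """Return True when the human driver must remain in the control loop."""
    return level <= AutonomyLevel.CONDITIONAL_AUTOMATION


print(driver_attention_required(AutonomyLevel.HIGH_AUTOMATION))  # False
```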

Figure 2.3 Schematic block diagram for autonomous vehicles.

Artificial intelligence, which has fundamentally changed the automotive sector, has hastened the advancement of Level 4 and Level 5 autonomous vehicles. An autonomous vehicle (AV) is a means of transportation that is proficient in handling a range of conditions and can drive itself on public highways with minimal to no direct human interaction [14]. Intelligent and connected vehicles (ICVs) are a broader and looser concept, and their evolution can be broken down into four stages: high/fully automated driving, connected assistance, cooperative automation, and advanced driver assistance systems (ADAS) [15]. According to this standard, the fully automated AV stage is the conclusion of ICV evolution.


Figure 2.4 Autonomous levels.


Figure 2.5 Fundamental navigation functions.

Figure 2.5 represents the components required for navigation functionality. Accurate position estimates, the accuracy of the digital maps, and the map-matching techniques all play a part in determining the position of a vehicle for autonomous navigation.

Establishing the spatiotemporal link between the vehicle and its surroundings is necessary for vehicle guidance. This can be viewed as modeling and understanding the world from the perspective of the driver or the computer controller.

This function can be described as a set of activities that allows the platform to move safely and effectively. These activities include path planning to reach the target, collision avoidance, and vehicle control; a minimal planning sketch follows below. High levels of interaction exist between the localization, mapping, and actuation tasks. At the vehicle level, they form a complex interdependent system operating in a wildly unpredictable environment.
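As a concrete illustration of the path-planning activity, the sketch below runs grid-based A* search toward a target while treating occupied cells as obstacles. The occupancy grid, start, and goal are illustrative assumptions; production planners are far more elaborate.

```python
# A minimal A* path-planning sketch on a toy occupancy grid (0 = free, 1 = blocked).
import heapq


def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]            # priority queue of (estimated cost, cell)
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:            # reconstruct the path by walking backwards
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost[current] + 1
                if (nr, nc) not in cost or new_cost < cost[(nr, nc)]:
                    cost[(nr, nc)] = new_cost
                    heuristic = abs(goal[0] - nr) + abs(goal[1] - nc)  # Manhattan distance
                    heapq.heappush(frontier, (new_cost + heuristic, (nr, nc)))
                    came_from[(nr, nc)] = current
    return None


occupancy = [[0, 0, 0, 1],
             [1, 1, 0, 1],
             [0, 0, 0, 0]]
print(astar(occupancy, (0, 0), (2, 3)))
```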

2.2.2 Learning Methodologies for Incessant Learning in Real-Life Autonomy Systems

2.2.2.1 Supervised Learning

It is the technique of learning from training data that has been labeled and provides comprehensive information about the system's input and output [16]. Supervised continual learning makes it possible to map data to labels throughout the whole sequence, which permits learning from sequential streaming data. The complexity of the learning method is decreased by the use of continual learning in supervised learning.
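A minimal supervised-learning sketch of this input-to-label mapping is shown below. The synthetic feature vectors and labels stand in for a real labeled driving dataset, and scikit-learn's logistic regression is only one of many possible learners.

```python
# A minimal supervised-learning sketch: learn a mapping from labeled inputs to outputs.
# The synthetic data below is an assumption standing in for a real labeled dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))             # 500 samples, 8 features each
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # synthetic binary labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)   # learn input -> label mapping
print("held-out accuracy:", model.score(X_test, y_test))
```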

2.2.2.2 Unsupervised Learning

Algorithms that do not need labels include those used in unsupervised learning. The most typical application is the training of generative models that replicate the distribution of the input data. When a distributional shift happens in the continual learning scenario, the generative model is modified, resulting in the final output being generated from the complete input distribution. The generative replay continual learning approach has been seen to make the most frequent use of these models. In autonomous systems, unsupervised continual learning may be helpful for building progressively more powerful representations over time, which can subsequently be fine-tuned via an external feedback signal from the environment. Unsupervised learning's major objective is to create adequate self-supervised learning signals that can serve as a suitable substitute in order to build adaptable and robust representations.
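The generative-replay mechanism can be illustrated with a deliberately tiny sketch: a per-feature Gaussian generator stands in for the deep generative model, so only the replay idea itself (sampling pseudo-data from the old generator and mixing it with new data) is shown. Everything here is an illustrative assumption, not an algorithm from the chapter.

```python
# A highly simplified generative-replay sketch. A real system would use a deep
# generative model; a per-feature Gaussian stands in so the mechanism is visible.
import numpy as np

rng = np.random.default_rng(1)


def fit_generator(data):
    """'Train' a toy generative model: store feature means and standard deviations."""
    return data.mean(axis=0), data.std(axis=0)


def sample(generator, n):
    mean, std = generator
    return rng.normal(mean, std, size=(n, mean.size))


task_a = rng.normal(loc=0.0, size=(200, 4))   # data seen during task A
gen_a = fit_generator(task_a)

task_b = rng.normal(loc=3.0, size=(200, 4))   # later, task B data arrives
replayed = sample(gen_a, 200)                 # pseudo-samples of the old distribution
combined = np.vstack([task_b, replayed])      # train on both to reduce forgetting

gen_ab = fit_generator(combined)
print("combined feature means:", np.round(gen_ab[0], 2))
```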

2.2.2.3 Reinforcement Learning

This trains an agent to carry out a series of actions in a particular environment by using a reward function as the supervisory signal. Reinforcement learning in complicated environments can be thought of as a continual learning situation because such environments do not give the learner access to all data at once. Reinforcement learning uses several crucial continual learning elements, including the capacity to train several agents concurrently and the use of a replay memory to approximate the data distribution; a toy replay sketch is given at the end of this subsection. Additionally, the TRPO algorithm, a popular stable reinforcement learning technique, constrains learning in a manner comparable to continual learning strategies because of the Fisher matrix used in it. The strong restrictions on reinforcement learning algorithms are essentially the problems of continual learning; therefore, enhancing continual learning will also enhance the effectiveness of reinforcement learning.

Ground transportation systems have developed from unpretentious electromechanical systems into sophisticated, computer-controlled, interconnected electromechanical systems. Initially intended for recreation, automobiles are now a crucial component of our daily life and a highly practical mode of transportation, and road infrastructure has been created as a result. Vehicle costs have decreased throughout the years as a result of mass production, putting them within reach of the general public. However, these changes have brought up a variety of issues, including fatal or serious injuries, traffic accidents, gridlock, pollution, and aggressive drivers.
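The sketch below illustrates the replay-memory idea with tabular Q-learning on a toy one-dimensional environment; the states, rewards, and hyperparameters are illustrative assumptions rather than values from the literature cited above.

```python
# A toy replay-memory sketch: tabular Q-learning on a 1-D "lane" (states 0..4, goal at 4).
import random

random.seed(0)
n_states, actions = 5, (-1, +1)           # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
replay = []                                # stores (state, action, reward, next_state)

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        a = random.choice(actions) if random.random() < 0.2 else \
            max(actions, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s_next == n_states - 1 else -0.01
        replay.append((s, a, r, s_next))
        # learn from a random minibatch of past transitions (the replay memory)
        for s0, a0, r0, s1 in random.sample(replay, min(8, len(replay))):
            target = r0 + 0.9 * max(Q[(s1, act)] for act in actions)
            Q[(s0, a0)] += 0.1 * (target - Q[(s0, a0)])
        s = s_next

# best learned action per non-terminal state (expected: +1, i.e., move toward the goal)
print({s: max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)})
```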

2.2.3 Advancements in Intelligent Vehicles

The way that society views transportation networks has altered during the last few years. Automobiles were formerly seen as a source of social status and convenience that gave industry a great deal of flexibility, despite the high accident rate, environmental constraints, high fuel costs, and so on. Today's transport systems are a source of significant concern. To solve the aforementioned concerns, governments, businesses, and society at large are shifting to what are known as sustainable forms of transportation. Automobile manufacturers are forced by societal expectations for safety, pollution reduction, and networked mobility to continuously adapt their product lines and explore new ways to innovate their offerings. Information and communications technology (ICT) advancements have created prospects for the addition of innovative features to existing vehicles. These vehicles already include a variety of proprioceptive sensors and are gradually adding exteroceptive sensors such as cameras, radars, and other devices. Additionally, several propulsion methods are employed, and the majority of automobiles use their own internal computer networks, where diverse nodes handle different functions.


Vehicles can also link to one another and to the infrastructure that provides them with constant communication by forming networks. As a result, the car is gradually evolving from a single, mostly mechanical entity into a network of cutting-edge intelligent platforms [17]. A genetic algorithm-based approach for traffic congestion reduction and avoidance with smart transportation, focusing on intelligent vehicles, has been presented [18]. Object recognition comprises three broad stages: region identification, feature extraction, and classification [19]. The process of employing sensors to find pedestrians in or near an AV's route is known as pedestrian detection; its four constituent parts are segmentation, feature extraction, segment classification, and track classification [20]. A related study looked at how people assign blame for a good or bad event, under Expectancy Violations Theory, to humans or artificial intelligence agents [21]. Various advanced predictive models have been presented [22]; these can be applied to smart traffic management and a sustainable environment. An agent-based system for traffic light monitoring and control is illustrated in [23]. Modern AI algorithms are used by autonomous cars to localize themselves in both known and unknown settings [24]. For short-term prediction, a pattern-based technique utilizing locally weighted learning has been suggested [25]. A summary of the most recent advancements in autonomous vehicle architecture is made in the research work [26]. Despite the fact that some individuals may seem to be skeptical about the practical application of autonomous driving (AD) as a replacement for current cars, the abundance of research and trials being carried out indicates the contrary. Automated vehicle researchers and organizations are developing effective tools and strategies. An extensive investigation of the design methodologies for smarter frameworks and tools for AI- and IoT-based automated driving was performed in [27]; its implications also encompass the functionality of automated electric vehicles. In this chapter, several nations' and organizations' actual driverless truck, car, bus, shuttle, rover, helicopter, and subterranean vehicle implementations are explored. While academic robotics research shifts from indoor mobile systems to full-scale vehicles with high levels of automation, the automobile industry is undergoing a shift at the same time. The motor systems of modern vehicles have changed, making them easier to control by computers; they now carry various sensors for navigational purposes and act as nodes in vast communication networks. The most recent advancements in intelligent vehicles as they develop autonomous navigation abilities are discussed. A typical intuition of a self-driving car is given in Figure 2.6. The driving forces behind this ongoing modern vehicle revolution are discussed in the context of application, security, and external concerns such as pollution and fossil fuel restrictions.


Figure 2.6 An insight of a self-driving car.

Navigation:


In order to concentrate on the machine intelligence and decision-making methodologies that are being established and implemented to turn contemporary automobiles into modular components with cutting-edge advanced features, the domain analysis is organized in terms of the vehicle's navigation functionalities. It addresses concerns with driving requirements, interaction, and autonomy challenges. By structuring the onboard cognition of the vehicles as a navigational challenge, it establishes the functional criteria for automobiles to demonstrate autonomous navigation ability. Three main views have been used to categorize recent developments in the related vehicle technologies: (1) driver-centric systems aim to improve drivers' situational awareness by offering a number of driving assistance technologies that let the driver make decisions; (2) network-centric systems emphasize the use of wireless communications to enable sharing of facts among infrastructure and cars, enhancing the awareness of operators and machines beyond the capabilities of stand-alone vehicle systems; (3) vehicle-centric systems focus on the goal of making cars totally autonomous, with the driver out of the control loop. We describe these various viewpoints and discuss recent advancements in research and business. A viewpoint on potential future advances, and on how these techniques might be implemented while taking into account societal, legal, and financial limits, is presented.

Autonomous vehicles are among the primary uses of AI. Autonomous vehicles use a variety of sensors, including camera systems, radar systems, and LiDAR, to help them better understand their environment and chart their routes. Huge volumes of data are produced by these sensors. To extract meaning from the information produced by these sensors, AVs require processing rates that are comparable to those of supercomputers. AV system developers heavily rely on AI to train and assess their self-driving techniques. Companies are turning to deep learning and machine learning, in particular, to manage the enormous amount of data quickly.

2.2.3.1 Integration of Technologies

Despite the frequent interchangeability of the terms artificial intelligence, machine learning, and deep learning, these concepts are not identical. AI, as it is commonly known, is a branch of computer science that examines all elements of making computers intelligent. Therefore, when a device accomplishes activities in accordance with appropriate rules that address a specific issue, this behavior can be regarded as intelligent behavior, or AI. Deep learning and machine learning approaches can be used to develop AI. Machine learning refers to the study of structured information and the methods a device uses to carry out a certain activity without precise instructions. Machine learning is a way of applying AI that enables computers to improve from the mistakes they make.

Deep learning is a component of machine learning, or rather the progression of machine learning. Deep learning draws its inspiration from how the mammalian brain deals with information. It uses sophisticated neural networks that continuously learn from and assess their input data to extract more precise information. Supervised learning uses labeled training data, while unsupervised learning uses less structured training sources; both supervised and unsupervised deep learning are acceptable. Businesses creating AV technology primarily rely on deep learning, machine learning, or both. Machine learning and deep learning differ greatly in that machine learning requires features to be explicitly labeled under less flexible rule sets, while in unsupervised tasks deep learning can automatically determine the features to be used for categorization. In contrast to machine learning, deep learning necessitates a large quantity of computing power and training data in order to produce outcomes that are more precise. AVs aim to decrease vehicle accidents, improve traffic flow and maneuverability, use less fuel, eliminate the need for driving, and ease commercial operations and transit [28, 29]. Despite the enormous potential benefits, there are numerous unresolved technological, social, ethical, legal, and regulatory challenges [30–32].

Deep learning has assisted businesses in accelerating AV development efforts in recent years. These businesses are depending more and more on deep neural networks (DNNs) to interpret sensor information more effectively. DNNs permit AVs to learn how to navigate the world on their own by means of sensor information, rather than having to be manually programmed with a set of instructions like “halt if you perceive red.” Since these procedures were modeled after the human brain, they learn via experience. NVIDIA, a deep learning specialist, claims that if a DNN is exposed to photographs of a stop sign in various settings, it can learn to recognize stop signs on its own. To provide safe autonomous driving, however, businesses creating AVs must create a whole set of DNNs, each focused on a different task; a minimal sketch of this per-task arrangement is given below. The number of DNNs needed for autonomous driving has no predetermined upper bound and is actually increasing as new capabilities are developed. The outputs produced by each distinct DNN must be managed in real time by powerful computing platforms in order to actually operate the car.
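The sketch below shows the one-network-per-task arrangement in PyTorch: two tiny, untrained networks consume the same camera frame, and a trivial arbiter gathers their outputs. The architectures, task names, and class counts are assumptions made for the example.

```python
# A minimal per-task DNN sketch: separate (untrained) networks for sign classification
# and free-space estimation share one camera frame; an arbiter collects their outputs.
import torch
import torch.nn as nn


class TinyCNN(nn.Module):
    def __init__(self, out_dim):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(8, out_dim)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


sign_net = TinyCNN(out_dim=4)       # e.g., stop / yield / speed-limit / none (assumed classes)
freespace_net = TinyCNN(out_dim=1)  # e.g., fraction of drivable area ahead (assumed task)

frame = torch.rand(1, 3, 64, 64)    # stand-in for one camera frame
with torch.no_grad():
    sign_logits = sign_net(frame)
    drivable = torch.sigmoid(freespace_net(frame))

decision = {"sign": int(sign_logits.argmax()), "drivable": float(drivable)}
print(decision)  # a real-time arbiter would fuse many such per-task outputs
```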

2.2.3.2 Earlier Application of AI in Automated Driving

The use of AI for autonomous driving can be traced to the second Defense Advanced Research Projects Agency (DARPA) Grand Challenge in 2005, in which the autonomous robotic car named “Stanley” from the Stanford University Racing Team competed. Multiple sensors and specialized software, including machine learning techniques, supported Stanley's ability to recognize obstacles and avoid them while maintaining its course. Thrun then led Google's “Self-Driving Car Project,” which in 2016 evolved into Waymo. Waymo greatly utilized AI to bring completely autonomous driving to the masses. In partnership with the Google Brain team, the company's engineers developed a pedestrian detection approach that integrates DNNs. Employing a deep learning model, the engineers were able to reduce the error rate for pedestrian identification by half. Waymo's CTO and vice president of engineering, Dmitri Dolgov, discussed how AI and machine learning helped the business in developing an AV system in a blog post on Medium.

2.3 Classification of Technological Advances in Vehicle Technology

Recent developments in artificial intelligence and deep learning have made autonomous vehicles more sophisticated, and most modern parts of self-driving cars employ current AI algorithms [33]. Autonomous vehicles are sophisticated systems for transporting people or goods. Establishing AI-powered automated driving on public highways comes with a number of difficulties. Vehicle navigation refers to controlling the mobile platform as it moves safely from its starting point to its goal while navigating an infrastructure constructed for human driving. With the most recent developments in sensing and computing, a mobile platform making decisions today could traverse the majority of road networks.


The main challenge is sharing the infrastructure with additional entities, such as other mobile platforms with varied power sources, at-risk road users, and so on. Despite laws and driving norms, their behavior is unpredictable, and despite enforcement measures, driver errors still happen and result in a lot of traffic accidents. The greatest obstacle to the adoption of driverless automobiles is indeed identifying solutions that enable them to share the same workspace with other parties. The development of autonomous vehicles is influenced by network-centric, vehicle-centric, and driver-centric advances, which is why it is important to examine vehicle navigation technology used in passenger vehicles. The training of a model with deep learning is illustrated in Figure 2.7.

Figure 2.7 Training the model using deep learning techniques.

Intelligent vehicles are now being developed from a variety of angles by the transportation sector, academic institutions, and government R&D facilities. For instance, the automotive industry is implementing vehicle onboard technologies to improve driver convenience and safety; that is to say, some activities are performed by the vehicle's assigned computer control systems using sensors. This approach is defined by two key concerns, one of which is responsibility. The way people view cars has altered since the late 1970s; cars are no longer considered status symbols or objects that arouse emotion, and today convenience, cost, and usefulness are the primary determinants of a vehicle's acquisition. Exteroceptive sensors, including laser scanners and infrared cameras, have been shown to have the ability to identify obstacles in regular traffic, but their application is currently restricted to high-end vehicles owing to the associated expenses. The costs are still too high for mid-priced cars, where radars and video cameras remain the choice despite their drawbacks. From this vantage point, improvements in modern off-the-shelf sensing are particularly relevant.

The first definition of an intelligent vehicle is one that puts the driver first. Driver-centric methods make it possible to comprehend the circumstances that self-governing vehicles will face when they are utilized in actual traffic conditions. This shows that the methods used by human-controlled systems to improve protection are also applicable to autonomous vehicles. Additionally, it enables the transfer of technologies from advanced experimental platforms to mass-market vehicles and vice versa, enhancing the engineering expertise of the vehicle OEMs. Autonomous vehicles' longitudinal and lateral control, for instance, can benefit from the interfaces, actuation, and safety mechanisms provided by vehicles equipped with automatic parking assistance systems, which allow the system to undertake sensor-based, computer-controlled motions.

In network-centric approaches, vehicle-to-vehicle (V2V) or vehicle-to-infrastructure (V2I) data distribution is made possible by the inclusion of communication technologies in passenger vehicles. Communications between V2V and vehicle-to-roadside unit (V2R) are depicted in Figure 2.8. This has led to the development of various vehicle types known as cooperative vehicles, whose operation depends on their integration into a transportation network that supports wireless V2V and V2I connectivity. These kinds of vehicles are categorized as network-centric from a functional perspective. All players in road networks can now share information thanks to network-centric technologies. Data can then be gathered and analyzed by the network before being disseminated to other networks. This is a crucial contribution for driverless vehicles because it implies that future autonomous vehicles will not necessarily need to be stand-alone systems; they can be mobile nodes that cooperate with other mobile nodes to move.

Vehicle-Centric:

A different strategy involves automating the car's navigation features as much as is practical. The driver's involvement is progressively reduced until the driver is removed from the vehicle control loop, at which point the vehicle is put under computer control and becomes autonomous. The system design is concentrated on the vehicle functions, since the architectures replicate the functionality required to drive an autonomous car. These vehicle designs can be categorized as vehicle-centric. Vehicle-centric cars are autonomous vehicle systems, exemplified by the most notable experimental platforms that have been created so far and the related technology. This complete panorama gives a summary of the technology created for self-governing vehicles.


Figure 2.8 Communications between V2V and V2R.

2.4 Vehicle Architecture Adaptation

From the standpoint of vehicle controllability, if the driver has complete control, there is no need for a navigation system in the car. Today's applications tend to be driver-centric, focusing initially on providing information before allowing the driver to control the car. There are not many applications that need direct machine management, but those that do are being added one at a time, such as Lane Keeping Support. Computers are rapidly taking over control of the automobile from the driver. From a different angle, computer-driven vehicles are becoming more autonomous as they progress from basic tasks. Recent vehicle OEM prototypes have shown that it is possible to use wireless communications for safety-related active functions, since they allow information to be shared and so increase driver awareness. These technologies ought to be usable in a system that is centered around the automobile, since communication networks help create intelligent environments that make it easier to use autonomous vehicles. Network-centric designs are those that result from the utilization of communication systems.

In order to fully commercialize the technology, autonomous vehicle businesses have invested a momentous amount of money in its research. Several issues hinder this goal, comprising legal, nonlegal, and technical complications [34]. More than ever, we are very close to achieving the long-desired goal of vehicle automation. Additionally, major automakers are spending billions of dollars to create automated vehicles.


Among other advantages, this innovative technology has the prospect of increasing passenger safety, reducing traffic congestion, streamlining traffic, saving fuel, reducing pollution, and improving travel experiences [35]. Similarly, AVs can exploit other cutting-edge technologies like blockchain and quantum computing [36]. In order to communicate information, autonomous cars use wireless sensor networks [37]. An architecture for effective knowledge distillation using transformers is proposed in Liu et al. [38] for the semantic segmentation of driving incidents on roads. For segmentation, a convolutional neural network technology with leading multiscale attention has been proposed [39]. A robust cooperative positioning (RCP) approach [40] that adds ultrawide band (UWB) to GPS has been presented to obtain an accurate position; a toy fusion sketch is given below. To estimate grid maps, a multitask recurrent neural network is suggested [41]; semantic data, occupancy estimates, velocity forecasts, and drivable area are all provided via grid maps.
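As a toy illustration of combining GPS and UWB position estimates, the sketch below performs inverse-variance weighted fusion of two independent measurements; the numbers are assumptions, and a real RCP system as described in [40] is considerably richer.

```python
# A toy sketch of fusing two independent position estimates (e.g., GPS and UWB)
# by inverse-variance weighting. Measurements and variances are illustrative.
import numpy as np


def fuse(measurements, variances):
    """Inverse-variance weighted fusion of independent position estimates."""
    m = np.asarray(measurements, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)    # weights: more precise -> heavier
    fused = (w[:, None] * m).sum(axis=0) / w.sum()
    fused_var = 1.0 / w.sum()
    return fused, fused_var


gps_xy, gps_var = [12.3, 45.1], 4.0     # GPS: metre-level noise (assumed)
uwb_xy, uwb_var = [12.9, 44.6], 0.1     # UWB ranging: decimetre-level noise (assumed)

position, variance = fuse([gps_xy, uwb_xy], [gps_var, uwb_var])
print("fused position:", position.round(2), "variance:", round(variance, 3))
```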

The motion control of autonomous vehicles now places a significant emphasis on deep learning-based techniques [41, 42], but integrated optimization of sensing, decision-making, and motion control can also be achieved by utilizing the power of DNNs [43].

In the driver-centric technology category, the driver still maintains overall control of the vehicle. Various features are created to improve the driver's situational awareness and safety by enhancing environmental sensing or preserving the stability and controllability of the automobile. Every perception function aims to provide the driver with information regarding what takes place in the immediate surroundings. The emphasis is on merging data from proprioceptive and exteroceptive sensors, which enables construction of a model of the spatiotemporal interactions between the vehicle, its immediate surroundings, and its current condition. Situational information is then inferred using this model. Decisions are constantly made in all facets of driving; as previously stated, mistakes made by the driver that result in poor decisions are the main cause of accidents. The focus is on data collection and association to promote awareness and give the best options for improving the driver's situational awareness and, as a result, aid in making decisions. This typically entails observing what happens in the near area in front of the car in order to gauge how the vehicle should react. A common illustration is the detection of pedestrians using vision systems or laser scanners, either to alert drivers or to lower the vehicle's speed; a hedged detection sketch is given below. This is a feature that is now being incorporated into new car generations. Due to the wide range of situations that could occur, the design of the sensors and their capabilities, and deployment costs, the perception systems face the greatest challenges. There are numerous programs available to help with driving; many of these can be found on current automobiles or are built into upcoming products, and others have been demonstrated and are being tested in real-world settings. Today, keeping drivers informed is preferred. The justification is tied to liability: by keeping the driver in the loop during vehicle control, the driver is held accountable for any vehicle maneuvers. The automated communication of road sign indications to the vehicle and the corresponding control are illustrated in Figure 2.9.
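A hedged sketch of such vision-based pedestrian detection is given below, using an off-the-shelf pretrained detector from torchvision (a recent torchvision release and network access to download the weights are assumed); the alert threshold is an illustrative choice, not a value from the chapter.

```python
# A hedged pedestrian-detection sketch using a pretrained torchvision detector.
# Assumes a recent torchvision and network access for the pretrained weights.
import torch
import torchvision

PERSON_CLASS_ID = 1            # "person" in the COCO label map
ALERT_SCORE = 0.7              # assumed confidence above which we warn or slow down

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = torch.rand(3, 480, 640)            # stand-in for one camera frame in [0, 1]
with torch.no_grad():
    detections = model([frame])[0]         # dict with boxes, labels, scores

pedestrians = [
    (box.tolist(), float(score))
    for box, label, score in zip(
        detections["boxes"], detections["labels"], detections["scores"])
    if int(label) == PERSON_CLASS_ID and float(score) >= ALERT_SCORE
]
if pedestrians:
    print("pedestrian alert:", pedestrians)   # e.g., warn the driver or reduce speed
else:
    print("no pedestrians above threshold")
```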


Figure 2.9 Automated control system.

Its operation is based on the integration of a priori data from digital navigation maps and the combined usage of onboard sensors. The three groups of functions are enhanced perception, longitudinal and lateral control, and route guidance (navigation). Applications that employ digital road maps are included in the first group, applications that have some degree of control over how the car moves are in the second group, and applications that improve the driver's perception are in the third group. A vehicle might be operated remotely or under the control of a central server, as is the case with automated guided vehicles (AGVs), and the former role would be assisted driving, thanks to driver-centric architectures.

Tele-operation of vehicles:

It implies that the safety of a driver operating a vehicle from a distance can be maintained while exchanging enough information. The military prefers this application domain for managing land vehicles in hazardous conditions. For this aim, video cameras are employed to examine the environment. Laser scanner returns are utilized to give the impression of depth, and they are mixed with video images to create depth-colored images. This kind of technology needs a way to bring the remote operator perceptually close to the vehicle and its surroundings.

Defense vehicles use driving strategies where the driver interacts with the external environment and the vehicle's instruments solely via electro-optic mechanisms, i.e., a mix of cameras and other perceptive sensors is utilized to show the driver the surroundings. Instead of using his or her eyes directly, the driver evaluates and engages with the outside world via monitors that show live camera images and other sensor information. Thus, it is feasible to identify the

