
Research And Development Of A Self-Driving Electric Vehicle In Public Places.pdf


DOCUMENT INFORMATION

Basic information

Title: Research And Development Of A Self-Driving Electric Vehicle In Public Places
Authors: Phạm Đình Thắng, Trần Phạm Trung Hy, Võ Trần Nhật Quang
Supervisor: Dr. Đặng Trí Dũng
University: Ho Chi Minh City University of Technology and Education
Major: Mechatronics Engineering Technology
Document type: Graduation thesis
Year: 2024
City: Ho Chi Minh City
Format
Number of pages: 122
File size: 9.13 MB

Structure

  • CHAPTER 1: OVERVIEW
    • 1.1. Introduction
      • 1.1.1. The urgency of the subject
      • 1.1.2. Meaning and Practicality of The Topic
      • 1.1.3. Topic Objective
      • 1.1.4. Research Object and Scope of Research
      • 1.1.5. Summary of Graduation Thesis
      • 1.1.6. Research Methodology
    • 1.2. System characteristics
    • 1.3. Structure of the system
    • 1.4. Research related to the topic
    • 1.5. The limitations of the system
  • CHAPTER 2: THEORETICAL FOUNDATIONS
    • 2.1. Autopilot Levels
    • 2.2. Commonly Used Devices in Self-driving Cars
    • 2.3. Problems and Algorithmic Solutions for Self-driving Cars
  • CHAPTER 3: HARDWARE DESIGN
    • 3.1. Design Overview
      • 3.1.1. Front Wheels Movement
      • 3.1.2. Steering Part
      • 3.1.3. Rear Wheels Movement
      • 3.1.4. Vehicle Braking System
    • 3.2. Vehicle Configuration
    • 3.3. Kinematic model of vehicle
  • CHAPTER 4: ELECTRICAL SYSTEM DESIGN AND OPERATING PRINCIPLE
    • 4.1. Overview of the sensor system and actuator
    • 4.2. Controller Circuit
      • 4.2.1. Operating Principle
      • 4.2.2. CP2102 Driver
      • 4.2.3. STM32F103C8T6 [8]
      • 4.2.4. CAN Driver
    • 4.3. Front Wheels Circuit
      • 4.3.1. Operating Principle
      • 4.3.2. BTS7960
    • 4.4. Rear Wheels Circuit
      • 4.4.1. Operating Principle
      • 4.4.2. MCP4725 [9]
    • 4.5. Brake Circuit
      • 4.5.1. Operating Principle
      • 4.5.2. MPU6050 [10]
    • 4.6. Angle Estimator Circuit
      • 4.6.1. Operating Principle
  • CHAPTER 5: TRAJECTORY AND MOTION CONTROL
    • 5.1. General Control and Algorithm of Car
      • 5.1.1. General Control of Car
      • 5.1.2. General Algorithm of Controlling Car
    • 5.2. Communication Protocols
      • 5.2.1. Ethernet Protocol
      • 5.2.2. UART Protocol
      • 5.2.3. CAN Protocol
    • 5.3. A-star algorithm for path finding
      • 5.3.1. Introduction to the Hybrid A-star algorithm
      • 5.3.2. Implementation of hybrid A-star in the project
    • 5.4. Lane Detection
    • 5.5. Semantic segmentation to detect and avoid obstacles
      • 5.5.1. U-network
        • 5.5.1.1. Introduction
        • 5.5.1.2. Implementation of U-net for the self-driving car
      • 5.5.2. High-Resolution Network (HRNet-v2)
        • 5.5.2.1. HRNet-v2 architecture
        • 5.5.2.2. Implementation of HRNet-v2 for the self-driving car
      • 5.5.3. Evaluating and selecting segmentation models
      • 5.5.4. Creating an occupancy map for avoiding obstacles
    • 5.6. Traffic Sign Recognition
    • 5.7. Kalman Filter Algorithm
      • 5.7.1. Introducing the Kalman Filter
      • 5.7.2. Kalman Filter in the Inertial Measurement Unit (IMU) Sensor
      • 5.7.3. Implementing the filter
    • 5.8. Actuator operation
      • 5.8.3.1. Overview of The Brake Control System
      • 5.8.3.2. Control Algorithm
  • CHAPTER 6: RESULTS, COMMENTS, AND DEVELOPMENT DIRECTIONS
    • 6.1. Achieved Results
      • 6.1.1. The Result of Controlling the Front Wheels
      • 6.1.2. The Result of Getting the Angle Value from the IMU Sensor
      • 6.1.3. Evaluation and Result of the Training Model
      • 6.1.4. Results of applying the Hybrid A-star pathfinding algorithm
      • 6.1.5. Result of U-net segmentation
      • 6.1.6. Result of HRNet-v2 segmentation
      • 6.1.7. Result of applying image segmentation to create a local map
      • 6.1.8. Result of operating the car
    • 6.2. Project limitations
    • 6.3. Project Development

Content

ABSTRACT

RESEARCH AND DEVELOPMENT OF A SELF-DRIVING ELECTRIC VEHICLE IN PUBLIC PLACES

The graduation thesis explores the advancements and applications of autonomous electric vehicles in public places.

OVERVIEW

Introduction

1.1.1 The urgency of the subject

The urgent topic of self-driving cars holds significant implications for transportation, safety, the economy, and the environment. Their development and integration into society could revolutionize how we travel, enhance road safety, stimulate economic growth, and contribute to environmental sustainability.

Self-driving cars could greatly improve road safety by minimizing accidents attributed to human error, which accounts for most traffic incidents. The World Health Organization identifies road traffic accidents as a leading global cause of death, highlighting the critical need to implement automated driving technology to enhance safety and protect lives.

Self-driving cars enhance traffic efficiency by optimizing flow and reducing congestion through advanced algorithms and connectivity. These autonomous vehicles coordinate their movements, resulting in smoother traffic patterns and shorter travel times. As urbanization rises, addressing traffic congestion becomes increasingly critical.

Self-driving cars can significantly improve mobility for individuals unable to drive, including the elderly and people with disabilities, thereby enhancing their independence and overall quality of life. It is crucial to develop comprehensive mobility solutions that ensure equitable transportation access for all.

The transportation sector plays a major role in greenhouse gas emissions and air pollution. By designing self-driving cars to enhance fuel efficiency and optimize routes, we can significantly reduce emissions and lessen environmental impacts. In light of the pressing need to combat climate change and enhance air quality, the development and implementation of eco-friendly autonomous vehicles is crucial.

The rise of self-driving cars is poised to significantly disrupt multiple industries and job markets, making it crucial to anticipate and prepare for these changes. By understanding and managing the economic implications of autonomous vehicles, we can facilitate a smooth transition and mitigate potential negative impacts, ultimately fostering a sustainable and equitable future.

Self-driving cars raise significant ethical dilemmas, particularly in decision-making during unavoidable accidents. It is crucial to engage in urgent discussions and build consensus to tackle these challenges. Establishing regulatory frameworks that prioritize public safety, privacy, and equity is essential for the responsible deployment of autonomous vehicles.

The urgency for self-driving cars arises from their potential to transform transportation, increase road safety, improve accessibility, reduce environmental impact, and address the socioeconomic changes they may introduce. To ensure the responsible and beneficial integration of autonomous vehicles into society, proactive research, development, and collaboration among diverse stakeholders are essential.

The rapid advancement of autonomous driving technology is reshaping the automotive landscape, driven by market demand for innovative automation solutions. Leading tech nations like the United States, the United Kingdom, Japan, and China are advocating for more flexible regulations to facilitate the development and testing of self-driving vehicles. A notable example is Nuro, an innovative company founded in 2016 by former Google engineers Dave Ferguson and Jiajun Zhu, which has made significant strides in the autonomous vehicle sector, raising substantial funding in 2019.

Nuro has secured a significant $940 million investment from SoftBank Group Corp. and other businesses, bringing its total valuation to $2.7 billion. Distinct from other autonomous driving companies, Nuro focuses on developing self-driving vehicles specifically for the transportation of goods without requiring human drivers. Since late 2020, the company has received government permits in the United States to operate in the autonomous freight transportation sector.

Overall, Nuro is an innovative company that focuses on developing autonomous vehicles for goods transportation, contributing to the advancement of self-driving technology and its applications.

Goods are carried in dedicated compartments. Nuro is equipped with lidar, radar, and cameras that enable 360-degree observation. Customers receiving deliveries interact with a touchscreen located between the two doors.

Nuro is designed to be about 20% smaller than traditional cars, enabling it to navigate narrow roads with ease. Its electric motor allows for a maximum speed of 72 km/h, making it efficient for urban environments.

The autonomous electric vehicle features advanced front airbags that activate during collisions with pedestrians or cyclists, prioritizing the safety of individuals nearby.

Nuro's autonomous vehicle maximizes its interior space exclusively for goods transportation, boasting a payload capacity of 220 kg. The innovative system enables temperature control, allowing for cooling down to -6 degrees Celsius and heating up to 47 degrees Celsius, ensuring optimal conditions for various types of deliveries.

The information presented pertains to the Nuro autonomous vehicle, but it may not accurately represent the design or features of registered vehicles in Vietnam.

Vietnam has achieved a significant milestone by developing its first Level 4 autonomous vehicle, created by the Phenikaa Group. This electric vehicle boasts a daily range of 100 km and an average speed of 20 km/h, requiring approximately 7 hours for a full battery charge. Adhering to the Society of Automotive Engineers (SAE) standards, the car is equipped with advanced Level 4 autonomous features, including 3D mapping, high-resolution Lidar sensors, and GPS, alongside machine learning and deep learning technologies. These components work together across four key system groups: control, safety, perception, and intelligent control systems.

Figure 1.2: Actual image of self-driving cars in Vietnam [2]

System characteristics

Self-driving cars, also known as autonomous vehicles (AVs), possess several distinct system characteristics that enable them to operate without human intervention. These characteristics include:

Self-driving vehicles utilize an array of sensors, including cameras, lidar, radar, and ultrasonic sensors, to effectively perceive and interpret their surroundings. These advanced technologies gather crucial data about the environment, identifying the positions of other vehicles, pedestrians, road signs, and potential obstacles, ensuring safe navigation.

Sensors gather data that is processed and integrated to provide a detailed understanding of the vehicle's environment. Utilizing advanced algorithms, this sensor data is analyzed to inform decisions and predict the car's trajectory and behavior.

Self-driving cars utilize high-resolution maps and advanced localization methods to navigate accurately. These maps provide intricate details about road layouts, lane markings, traffic signs, and other essential features. By analyzing sensor data in conjunction with these maps, autonomous vehicles can pinpoint their precise location on the road.

Self-driving cars rely on their environmental awareness and knowledge of their own location to generate an effective action plan. This system is responsible for making crucial decisions such as when to accelerate, brake, change lanes, or turn. The decision-making process incorporates traffic regulations, safety measures, and the vehicle's intended destination to ensure a smooth and secure journey.

The control and operation of a self-driving vehicle involve managing its acceleration, braking, and steering systems. The vehicle's control system issues commands to its actuators, including the engine, brakes, and steering, to execute predetermined actions, ensuring accurate and precise vehicle movement.

Autonomous vehicles rely on vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) technologies for connectivity and communication. This advanced connectivity facilitates the sharing of crucial information regarding road conditions, traffic patterns, and potential hazards, significantly enhancing the safety and efficiency of automated driving systems.

Self-driving vehicles rely on a combination of redundancy and safety mechanisms to ensure system reliability and safety. These technologies enable precise actions for navigating roads safely and efficiently. Ongoing research and continuous technological advancements aim to enhance these features and improve the overall performance of autonomous cars.

Structure of the system

The self-driving car system is composed of various integrated components that facilitate autonomous driving, primarily through its advanced sensor system. This system includes cameras for visual data collection, lidar for distance measurement and 3D mapping, radar for object detection and velocity assessment, and ultrasonic sensors for short-range detection, all working together to analyze the surrounding environment effectively.

The perception system in self-driving cars utilizes advanced sensors and algorithms to interpret the vehicle's environment, employing computer vision for visual data analysis, object recognition for identifying and classifying objects, and sensor fusion to combine information from various sources. This cognitive system enables a thorough understanding of surroundings, including the detection of other vehicles, pedestrians, road markings, and traffic signs. Additionally, self-driving cars depend on high-definition maps and accurate localization techniques; mapping creates detailed representations of roads, while localization leverages sensor data to pinpoint the vehicle's exact position on the map. By aligning real-time sensor information with pre-existing maps, these vehicles can navigate safely and effectively on the road.

The decision-making system utilizes perceptual data, maps, and pertinent information to guide vehicle behavior effectively. By employing advanced algorithms, it analyzes this data to identify the safest and most efficient maneuvers, including acceleration, braking, lane changes, and turns. Additionally, the system takes into account various factors such as traffic regulations, road conditions, and the vehicle's intended destination.

The control system executes the decisions made by the decision-making system by sending commands to the vehicle's actuators, including the engine, brakes, and steering. This system ensures precise control over acceleration, braking, and steering, enabling safe and smooth navigation on the road.

Self-driving vehicles equipped with communication capabilities can exchange critical data with other vehicles (V2V) and infrastructure (V2I), enhancing safety and efficiency. This technology enables the sharing of vital information regarding traffic conditions, road hazards, and other relevant data, facilitating coordinated actions and promoting smoother traffic flow.

Self-driving cars rely on advanced computing power to analyze vast amounts of sensor data and execute intricate algorithms, often utilizing centralized systems that may be integrated or cloud-based. These systems leverage artificial intelligence and machine learning to enhance vehicle performance and decision-making through real-world data. To ensure reliability and minimize risks, self-driving vehicles incorporate redundancy in critical components, such as sensors and control systems, along with safety protocols to detect and respond to malfunctions or unsafe conditions. This integrated approach enables autonomous vehicles to effectively perceive their surroundings, make informed decisions, and control their movements, with ongoing advancements aimed at improving safety and performance in autonomous driving technology.

Research related to the topic

Below are some studies related to the topic of self-driving cars that have been released:

Waymo One (formerly known as the Google Self-Driving Car Project) - Released in 2018 by Waymo, a subsidiary of Alphabet Inc. (Google's parent company). Waymo One is a commercial autonomous ride-hailing service available in select areas.

Tesla's Autopilot, launched in 2015, is an advanced driver-assistance system that has been gradually enhanced over the years, with the goal of achieving full self-driving capabilities in the future.

The Cruise Origin, an innovative autonomous electric vehicle intended for shared rides, was introduced by Cruise in 2020 and is currently in development, awaiting public release.

Mobileye, a subsidiary of Intel, launched the Responsibility-Sensitive Safety (RSS) model in 2017 as a safety framework designed for autonomous vehicles, focusing on promoting safe decision-making while driving.

Baidu's Apollo, launched in 2017, is an open-source self-driving car platform designed for developers to innovate in autonomous vehicle technology. It has undergone extensive testing in both China and the United States, showcasing its capabilities and commitment to advancing self-driving solutions.

Nuro by USA - Since the end of 2020, Nuro has been licensed by the US government to operate in the autonomous freight transportation sector.

Recent advancements in self-driving car technology highlight the significant progress being made in this field The development and deployment of autonomous vehicles are ongoing, featuring continuous updates and enhancements aimed at improving their capabilities and safety.

The limitations of the system

Despite their potential, self-driving cars still face several limitations and challenges that need to be addressed before widespread adoption. Here are some key limitations:

Ensuring the safety of self-driving cars is a vital challenge, as accidents involving autonomous vehicles have raised significant concerns. Despite the promise shown by these systems, there is an urgent need for comprehensive safety measures and extensive testing to address these issues effectively.

Self-driving cars encounter significant challenges in navigating complex urban environments characterized by heavy traffic, crowded streets, and unpredictable human behavior. They struggle particularly in scenarios involving construction zones, unmarked roads, and the need to respond to emergency vehicles, making urban navigation a complex task for autonomous technology.

Inclement weather, including heavy rain, snow, and fog, can significantly hinder sensor performance, impacting a vehicle's ability to accurately perceive its surroundings. This limitation presents challenges for self-driving cars, making it difficult for them to operate safely and effectively in various weather conditions.

The legal and regulatory frameworks for self-driving cars are currently evolving, with significant variations across different regions and countries. This inconsistency creates challenges for companies aiming to deploy autonomous vehicles at scale. To facilitate broader adoption, it is essential to establish standardized regulations that prioritize safety and address liability issues.

Self-driving cars encounter ethical challenges in critical scenarios, particularly when it comes to choosing between the safety of passengers and pedestrians during unavoidable accidents. The question of how autonomous systems should navigate these moral dilemmas is a complex and contentious topic.

Self-driving cars depend on intricate software systems and connectivity, which creates cybersecurity vulnerabilities that can make them targets for hacking and malicious attacks. To safeguard these vehicles from potential threats, it is crucial to implement strong cybersecurity measures.

The advancement and implementation of self-driving cars necessitate substantial financial investments. Furthermore, modifying current infrastructure to accommodate autonomous vehicles, such as updating road signs, traffic lights, and communication systems, can be both costly and time-consuming.

The successful integration of self-driving cars into society hinges on public trust and acceptance. Many individuals harbor concerns about their safety when relying on autonomous systems, necessitating time to adjust to the concept of sharing roadways with these vehicles.

To overcome the limitations of self-driving cars, it is essential to foster continuous technological advancements, enhance collaboration among industry stakeholders, implement regulatory developments, and increase public awareness. As technology progresses, significant efforts are underway to address these challenges, paving the way for self-driving cars to become a mainstream transportation option.

THEORETICAL FOUNDATIONS

Autopilot Levels

Figure 2.1: Image of self-driving car level [3]

At this level of vehicle automation, the driver maintains complete control over essential functions such as steering, acceleration, braking, and parking. While the vehicle operates in automatic mode, it is equipped with supportive features like emergency braking assistance, blind spot warnings, and lane keeping assistance. These technologies enhance safety by alerting the driver to potential issues without fully taking over vehicle control. Most modern car models on the market today incorporate these convenience features, striking a balance between driver autonomy and technological support.

Figure 2.2: Functional image of level 1 vehicle [4]

At the foundational level of autonomous driving, the driver remains responsible for the majority of vehicle control tasks, while benefiting from specific features designed to assist in particular situations, enhancing navigation and safety during certain driving scenarios.

Level 1 automation includes technologies like Adaptive Cruise Control (ACC), which allows vehicles to automatically adjust their speed to maintain a safe distance from other cars. Another example of this level of automation is Lane Keeping Assist (LKA), which helps drivers stay centered in their lanes.

The Adaptive Cruise Control system is considered Level 1 automation technology.

Lane keeping assist differs from lane departure warning by actively steering the vehicle to maintain its position within the designated lane. When a vehicle is equipped with both lane keeping assist and adaptive cruise control, it qualifies for Level 2 autonomous driving classification.

Figure 2.3: Functional image of a level 2 vehicle [4]

At this level of autonomous driving, vehicles are equipped with an Advanced Driver Assistance System (ADAS) that enhances driver support by enabling automatic steering, acceleration, and braking in complex scenarios.

Despite the car's ability to autonomously steer and brake, drivers must remain actively engaged in the driving process by keeping both hands on the steering wheel and consistently monitoring the vehicle's direction.

Level 2 is often referred to as partial automation or semi-autonomous driving, and many car models introduced in the US and European markets in 2020 can be classified at this level.

Figure 2.4: Functional image of a level 3 vehicle [4]

Conditional automation, often referred to as Level 3 autonomy, involves driver assistance systems that utilize artificial intelligence (AI) to make real-time decisions based on traffic conditions. The advancement of autonomous vehicles hinges on the technology that maps their surrounding environment effectively.

Level 3 autonomous vehicles can function independently without the driver's constant involvement; however, the driver must remain present to regain control during emergencies or if the system experiences malfunctions.

Figure 2.5: Functional image of a level 4 vehicle [4]

Level 4 autonomous vehicles are designed for high automation, eliminating the need for driver interaction during operation. These vehicles are equipped with advanced systems that ensure safe stopping in the event of a failure, allowing for a seamless driving experience under most real-world conditions without the need for driver intervention.

Level 4 autonomous driving allows vehicles to travel completely automated from one location to another within designated geographic areas. Waymo, a company focused on self-driving technology and a subsidiary of Google, has successfully implemented this service in Phoenix, Arizona, following comprehensive mapping of the city's roadways.

However, weather conditions can limit the performance of Level 4 autonomous vehicles.

The performance of smart cars largely relies on their advanced technology, such as LiDAR sensors, which enable them to effectively analyze environmental data regardless of weather conditions.

Figure 2.6: Functional image of a level 5 vehicle [4]

Level 5 autonomy represents the pinnacle of the SAE scale, where vehicles can operate entirely on their own, managing real-time scenarios without any input from a driver. Unlike traditional cars, Level 5 vehicles are designed without steering wheels, pedals, or rearview mirrors, emphasizing their fully autonomous nature.

Level 5 autonomous vehicles resemble robotic vehicles rather than traditional cars, featuring advanced self-driving technologies that enable operation in any geographic or weather conditions. These fully driverless vehicles require minimal human interaction, as users can simply provide a destination remotely, such as through a smartphone app. With their capability to function without restrictions, Level 5 autonomous vehicles represent the future of transportation.

2.1.1 Safety Standards for Self-driving Car Technology

To enhance safety and reduce risks like accidents and software vulnerabilities, automobile manufacturers must adhere to the Federal Motor Vehicle Safety Standards (FMVSS) and certify the safety of autonomous vehicles for road use. Self-driving vehicles must meet specific safety standards to ensure their safe operation.

Autonomous vehicles must be able to detect objects within their movement range. The system needs to make immediate decisions regarding acceleration, deceleration, and changes in direction.

The control software must ensure safety and address cybersecurity risks

Commonly Used Devices in Self-driving Cars

Self-driving cars are vehicles equipped with sensors, cameras, radar, GPS, and artificial intelligence (AI) that allow them to control themselves and move without human intervention.

- The lidar device, typically mounted on the roof of the vehicle, is the most important device in self-driving vehicles.

Problems and Algorithmic Solutions for Self-driving Cars

Self-driving cars utilize advanced algorithms for essential functions such as trajectory planning, image processing, distance measurement, GPS position estimation, and error correction. These algorithms play a vital role in improving the safety and efficiency of autonomous vehicle operations.

For a vehicle to operate safely and earn the trust of users, it must meet specific parameters. The detailed calculations and analyses related to this will be thoroughly presented in Chapter 4.

HARDWARE DESIGN

Design Overview

This is the car the group inherited:

- The limit for rotation of the front wheels is

Figure 3.2: Image of the front wheel of the car

Figure 3.3: Schematic image of the front wheel

The front wheel gearbox facilitates vehicle movement and seamless changes in driving states. By engaging the steering system and pulling the propeller shaft, the large and small gears rotate in unison with the engine, enabling the wheels to turn left and right for easy navigation.

The steering wheel is a crucial component of a vehicle's control system, allowing the driver to directly influence the car's direction. Typically circular in shape, the steering wheel operates by converting the driver's rotational input into lateral movement of the front wheels, enabling precise left or right turns.

Figure 3.5: Image of the vehicle's rear wheel and DC motor

The mounting position of the differential is at the rear wheel axle.

A DC motor uses direct current to drive the vehicle.

DC motors operate by converting direct electrical power into motion, utilizing the fundamental principle of electromagnetic induction to achieve rotor rotation within the motor.

A DC motor comprises a rotor, which is the rotating component connected directly to the vehicle axle, and a stator, the stationary part. The rotor is situated between two magnetic poles within the stator, and when current passes through the windings, it generates a magnetic field that causes the rotor to rotate.

Advantage: DC motors can provide a strong pulling force from low speed to high speed. This makes them suitable for automotive applications where a wide range of speeds is required.

Disadvantages: DC motors are often less efficient than some other types of motors, and they may require special maintenance.

DC motors are commonly used in small electric and hybrid vehicles, especially in applications where production costs and compact size are important.

Figure 3.6: Motor specification (direct current motor)

Table 1: Direct current motor specification

Figure 3.7: Image of rear wheel differential diagram

The transmission ratio is 14:1.

The Limited-Slip Differential (LSD) is a crucial component situated in the rear axle, designed to regulate the wheel speeds during turns and slides, thereby enhancing the vehicle's smoothness and handling.

The differential is a gear system situated on the shaft that links two wheels, responsible for receiving torque from the drive shaft and distributing it to each wheel.

In cars, you often see a round housing in the middle of the rear axle; this is the position of the differential.

The drive gear at the end of the moving shaft rotates the gear ring, which transmits power to the differential gear combination. This differential allows each wheel to rotate at its own speed by effectively combining gears. The ratio of teeth between the driving gear and the ring is referred to as the axle ratio or final drive ratio.

The group's vehicle is equipped with an open differential, known for its affordability, lightweight design, and durability. However, a significant drawback is the uneven torque distribution to the wheels, which can lead to slipping, particularly on wet surfaces. To address this issue, the limited-slip differential (LSD) was developed, effectively improving torque transmission to both wheels that have better grip.

Brakes are a crucial component of a vehicle's safety system, featuring various types to enhance safety during travel. Among these, drum brakes play a significant role in allowing drivers to control speed and stop the vehicle promptly when needed.

The braking system is primarily situated on the rear axle, featuring a straightforward design in the form of a closed box, which operates on a simple principle.

Compared to disc brakes of equivalent diameter, drum brakes create more braking force.

In addition, drum brakes do not cause the dangerous slipping or dragging for the driver that is commonly encountered with disc brakes when braking suddenly.

The monolithic box design helps protect the internal components from environmental impacts such as mud, water, and dust.

The brake shoe is thicker than a disc brake pad, so users incur lower maintenance costs.

Production costs are low, and the design is easy to reproduce.

Due to the closed-box design, heat escapes poorly and the brake temperature rises rapidly, which can affect the operation of the internal components.

Because the deceleration response is slower, when the user brakes suddenly or the shoes are worn, this type of brake needs a longer distance to stop than a disc brake.

Drum brakes are heavier than disc brakes, which can affect vehicle load.

Figure 3.9: Image of drum brake creation [5]

Drum brakes, also known as hub brakes or shoe brakes, consist of essential components including brake shoes, brake drums, and brake pads, along with various other parts that work together to transmit braking force effectively.

The brake drum features a hollow box structure securely attached to the drive shaft, typically constructed from metal with a rough interior surface to enhance braking efficiency. Meanwhile, the brake shoe is generally made of stainless steel and is coated with a friction-resistant compound, ensuring optimal performance during braking.

When part (1) is actuated, it pulls part (2), which in turn moves part (3) and tensions the axle, causing part (4) to stretch vertically. Pressing the brake reduces the vehicle's speed and halts the wheel's movement.

Vehicle Configuration

The team measured the vehicle configurations in the table below:

Diameter of wheel: 400 mm
Number of seats: 4

Table 2: Measured vehicle specifications

From the parameters mentioned in Table 2, the following necessary parameters can be calculated:

To convert speed from kilometers per hour (km/h) to revolutions per minute (rpm), we use a formula that incorporates the diameter of the wheel in meters. For this project, we consider a speed of 10 km/h:

n = v / (π × d) = (10 × 1000) / (60 × 0.4 × π) ≈ 132.63 (rpm)

The tangential traction force (F_k) is the road reaction force that acts on the drive wheel in the direction of the vehicle's motion. This force is applied at the center of the contact patch between the wheel and the road surface, playing a crucial role in vehicle dynamics.
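As a quick check of the speed conversion above, the following Python sketch (illustrative only; the function name is ours) reproduces the 132.63 rpm figure:

    import math

    def kmh_to_rpm(speed_kmh, wheel_diameter_m):
        # distance travelled per minute divided by the wheel circumference
        metres_per_minute = speed_kmh * 1000 / 60
        return metres_per_minute / (math.pi * wheel_diameter_m)

    print(kmh_to_rpm(10, 0.4))  # ≈ 132.63 rpm for a 0.4 m wheel at 10 km/h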

To determine the maximum load value for a vehicle, it is essential to ensure that the traction force meets or exceeds the total resistive forces acting against it, per formula 3.2.1:

F_k ≥ F_f + F_ω + F_i + F_j (3.2.1)

where the resistive forces are rolling resistance (F_f), air resistance (F_ω), gradient resistance (F_i), and inertial resistance (F_j).

In the analysis of vehicle forces, the sign of F_i indicates the movement direction, with a positive sign (+) representing uphill motion and a negative sign (−) denoting downhill motion. Meanwhile, the sign of F_j reflects acceleration, where a positive sign (+) indicates acceleration and a negative sign (−) signifies deceleration. Given a constant velocity of 10 km/h and the infrequent occurrence of significant slopes in public areas, we can simplify our calculations by excluding air resistance (F_ω) and gradient resistance (F_i) from formula 3.2.1, leading to the revised formula 3.2.2:

F_k ≥ F_f + F_j (3.2.2)

F_f = f·G·cos α ≈ f·G when the slope angle α is very small

where G is the vehicle weight and g is the gravitational acceleration;

δ_i is a coefficient that takes into account the influence of rotating masses;

a is the acceleration of the moving vehicle.

The inertial force is F_j = δ_i·(G/g)·a (3.2.3). From formula 3.2.3 we can find the maximum load (m_max), choosing a rolling resistance coefficient f = 0.015 [7] and an acceleration of 0.2 m/s².
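Solving formula 3.2.2 with F_f = f·m·g and F_j = δ_i·m·a for the mass gives m_max = F_k / (f·g + δ_i·a). The Python sketch below illustrates the computation; the traction force F_k and the coefficient δ_i are placeholder values for illustration, not the thesis's own numbers:

    f = 0.015       # rolling resistance coefficient [7]
    g = 9.81        # gravitational acceleration (m/s^2)
    a = 0.2         # design acceleration (m/s^2)
    delta_i = 1.05  # rotating-mass coefficient (assumed)
    F_k = 500.0     # tangential traction force in N (hypothetical)

    m_max = F_k / (f * g + delta_i * a)  # maximum load in kg
    print(round(m_max))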

Kinematic model of vehicle

The kinematic bicycle model is a popular choice for low-speed, four-wheeled, car-like mobile robots, as illustrated in Figure 3.11. This model effectively simplifies the intricate mechanics of a four-wheeled vehicle by emulating the steering and motion dynamics of a standard bicycle, facilitating easier comprehension of the vehicle's behavior.

The bicycle model is designed with a rear wheel securely attached to the chassis, allowing it to rotate forward and backward without any roll, pitch, or yaw. Steering is achieved through the front wheel, which rotates around its vertical axis. This analysis assumes that the wheels roll without any sideways slipping.

The mobile robot's position is defined by its body coordinate frame, with the x-axis aligned in the direction of forward movement and the origin situated at the center of the rear axle.

Dashed lines indicate the directions along which the wheels cannot move, known as the lines of no motion. These lines meet at a point called the Instantaneous Center of Rotation (ICR). Consequently, the vehicle's reference point follows a circular trajectory, and its angular velocity is determined by the turning radius.

The turning radius \( R_B \) of a vehicle can be calculated using the formula \( R_B = L / \tan \gamma \), where \( L \) represents the vehicle's wheelbase. As anticipated, an increase in the vehicle's length results in a larger turning circle. Additionally, the steering angle \( \gamma \) is typically subject to mechanical limitations, with its maximum value directly determining the minimum turning radius \( R_B \).
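A minimal numeric illustration of \( R_B = L / \tan \gamma \) in Python (the wheelbase and maximum steering angle below are hypothetical, not the vehicle's measured values):

    import math

    def turning_radius(wheelbase_m, steer_angle_rad):
        # R_B grows without bound as the steering angle approaches zero
        return wheelbase_m / math.tan(steer_angle_rad)

    # e.g. a 1.5 m wheelbase and a 30-degree steering limit
    print(turning_radius(1.5, math.radians(30)))  # ≈ 2.6 m minimum turning radius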

Figure 3.11: Bicycle model of a car

The kinematic bicycle model can be reformulated with nonlinear continuous-time equations, as illustrated in Figure 3.12. For more detailed information, please refer to [33]. The model is presented as follows:
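The equations themselves appear as an image in the original document; a standard statement of the kinematic bicycle model using the variables described below is:

    \dot{x} = v\cos(\psi + \beta)
    \dot{y} = v\sin(\psi + \beta)
    \dot{\psi} = \frac{v}{l_r}\sin\beta
    \dot{v} = a
    \beta = \arctan\left(\frac{l_r}{l_f + l_r}\tan\delta_f\right)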

In the equations, the coordinates of the vehicle's center of mass are represented by x and y within an inertial frame (X, Y), while ψ indicates the inertial heading and v signifies the vehicle's speed. The distances from the center of mass to the front and rear axles are identified as l_f and l_r, respectively. The angle β describes the relationship between the current velocity at the center of mass and the vehicle's longitudinal axis. Additionally, a denotes the acceleration of the center of mass in the direction of the velocity, and the control inputs consist of the front steering angle δ_f along with the acceleration a.

Given that the rear wheels are typically non-steerable in most vehicles, the rear steering angle is assumed to be zero.

Compared to more complex vehicle models, system identification using the kinematic bicycle model is simpler, since only the two parameters l_f and l_r need to be identified.

ELECTRICAL SYSTEM DESIGN AND OPERATING PRINCIPLE

Overview of the sensor system and actuator

Controller Circuit

The CP2102 module serves as a bridge for transmitting and receiving data through UART between a PC and an STM32 microcontroller

The STM32 serves as the central control unit on the Controller board, efficiently managing data distribution and processing from multiple UART and CAN sources while also transmitting essential information to the PC and other control boards.

The CAN module functions as a bridge for transmitting and receiving data

Core: ARM 32-bit Cortex™-M3 CPU

- 72 MHz, 90 DMIPS with 1.25 DMIPS/MHz

- Single-cycle multiplication and hardware division

- Nested interrupt controller with 43 maskable interrupt channels

- Interrupt processing (down to 6 CPU cycles) with tail chaining

- 32-to-128 Kbytes of Flash memory

- Clock, reset and supply management

- 2.0 to 3.6 V application supply and I/Os

- POR, PDR, and programmable voltage detector (PVD)

- Internal 8 MHz factory-trimmed RC

- Dedicated 32 kHz oscillator for RTC with calibration

- Sleep, Stop and Standby modes

- VBAT supply for RTC and backup registers

- Dual-sample and hold capability

- Synchronizable with advanced control timer

- Peripherals supported: timers, ADC, SPIs, I2Cs and USARTs

4 connecting ports: 3.3 V, CAN Tx, CAN Rx, GND

Front Wheels Circuit

Figure 4.6: Front Wheels Circuit Schematic

The STM32 acts as the primary control unit on the Front board, responsible for generating PWM signals for motor control, reading button inputs, managing executive mechanisms, and receiving control data from the STM32 located on the Controller board.

The CAN module acts as a bridge for transmitting and receiving data

The BTS7960 provides PWM signals to control the DC motor

Supply voltage: 6~27 V; load current of the circuit: 43 A

Automatic shutdown on low voltage: the output is cut when the supply drops to 5.5 V

Overheat protection: the BTS7960 provides overheat protection with an integrated thermal sensor. The output will be cut off in case of overheating.

Figure 4.7: BTS7960 motor control circuit

Rear Wheels Circuit

Figure 4.8: Rear Wheels Circuit Schematic

The STM32 functions as the primary control unit on the Rear Wheels board, managing PWM signal generation for motor control, processing encoder values, and facilitating data communication through CAN with the Controller and Brake modules.

The CAN module acts as a bridge for transmitting and receiving data

The MCP4725 converts digital signals to analog for further use in the system

On-board non-volatile memory (EEPROM)

±0.2 LSB DNL (typical)

Normal or power-down mode

Fast settling time of 6 µs (typical)

Standard (100 kbps), Fast (400 kbps), and High-Speed (3.4 Mbps) I2C modes
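As an illustration of driving this DAC from a Linux host, the sketch below performs an MCP4725 "fast mode" write (two bytes carrying the 12-bit value) with the smbus2 library; the bus number and the 0x60 address are assumptions that depend on the board's wiring:

    from smbus2 import SMBus

    MCP4725_ADDR = 0x60  # default address with A0 tied to GND (assumed)

    def write_dac(bus, value):
        # fast-mode write: upper 4 bits in the first byte, lower 8 in the second
        value &= 0x0FFF
        bus.write_i2c_block_data(MCP4725_ADDR, (value >> 8) & 0x0F, [value & 0xFF])

    with SMBus(1) as bus:
        write_dac(bus, 2048)  # mid-scale analog output (about Vcc/2)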

Brake Circuit

The STM32 acts as the primary control unit on the Brake board, tasked with generating PWM signals for the control of the brake motor, reading input from buttons, and receiving braking data from the STM32 located on the rear wheels.

The CAN module acts as a bridge for transmitting and receiving data

The MPU6050 is used to read acceleration values for implementing braking at a specific acceleration level

Gyroscope range: ±250, ±500, ±1000, ±2000 degrees/sec

Angle Estimator Circuit

Figure 4.12: Angle Estimator Circuit Schematic

The STM32 acts as the main control unit on the Angle Estimator board, efficiently reading yaw angle data from the MPU6050 sensor, packaging the angle values, and transmitting them through CAN to the STM32 Controller.

The CAN module acts as a bridge for transmitting and receiving data

The MPU6050 is used to read yaw axis values for providing feedback on the vehicle's rotation angle

TRAJECTORY AND MOTION CONTROL

General Control and Algorithm of Car

Figure 5.1: General Control Diagram of Car

The automatic vehicle's operation and control method, as illustrated in Figure 5.1, is based on the circuit design from Chapter 4. This system utilizes two cameras linked to personal computers for processing road lane images, which are essential for calculating control parameters. Machine learning models, trained on specific driving scenarios, enable the vehicle to recognize traffic signs and make informed decisions to navigate effectively.

Two personal computers communicate via Ethernet to exchange vital information for calculating the vehicle's trajectory. The exchanged data is divided into two main categories: Data_2 and Data_3. Data_2 includes the vehicle's actual turning angle, map coordinates, lane processing mode signals, and terminal station coordinates, providing real-time feedback essential for accurate navigation and control. Conversely, Data_3 encompasses the desired turning angle, road curvature, vehicle deviation from the center, movement direction, and stop signals, focusing on the desired outcomes and necessary adjustments to maintain the vehicle's correct path.

PC2, as depicted in Figure 5.1, is responsible for packaging and sending vital control values to a microprocessor controller as Data_1, which includes essential information such as motor rotation speed, front wheel steering response pulses, travel direction, and desired vehicle speed via the UART protocol. In addition to sending control commands, PC2 receives operational data from the controller and collects information from other microprocessors managing the steering, transmission, and braking systems. This operational data, encapsulated in Data_0 and transmitted using the CAN protocol, comprises the vehicle's rotation angle, distance traveled, current speed, and encoder readings from the front wheel motor. This information is then sent back to the computer through UART for further calculations and adjustments, ensuring the system adapts accurately to changing conditions.

When PC2 receives calculated control parameters, it transmits this data to the actuators' controllers using the CAN protocol. These controllers then perform actions based on the received instructions. Throughout the vehicle's autonomous operation, microprocessors gather data from encoder sensors, converting these readings into useful values and sending the necessary parameters back to the main controller via the CAN protocol. This ongoing data exchange guarantees smooth vehicle operation and quick responses to environmental changes.

This approach combines image processing, machine learning, and real-time data communication to facilitate autonomous vehicle navigation. By utilizing Ethernet, UART, and CAN protocols, it guarantees dependable data transfer among components. The integration of machine learning models significantly improves the vehicle's capability to identify and react to road signs and environmental signals. The meticulous design allows for effective adjustments in path, speed, and direction, ensuring safe and efficient operation at all times.

5.1.2 General Algorithm of Controlling Car

Figure 5.2: General Algorithms applied in Controlling Car

The algorithm is crucial for the functioning of autonomous vehicles, enabling them to navigate accurately, recognize traffic signs, and avoid obstacles. Key algorithms utilized in the vehicle control model include those for trajectory generation, lane recognition, precise navigation, and filtering input signal interference. Additionally, various algorithms are responsible for calculating and processing essential data to ensure the vehicle operates efficiently.

- The Hybrid-A* algorithm is used to generate the trajectory of the vehicle.

The Lane Detection Algorithm plays a crucial role in vehicle navigation by identifying the active lane, calculating road curvature, and measuring the vehicle's distance from the lane center. This technology ensures that vehicles maintain proper alignment within their lanes, enhancing driving safety and efficiency.

The segmentation algorithm is utilized within the control model to determine the navigable portions of the lane, effectively identifying obstacles on the road to assist the vehicle in avoiding potential hazards.

Sign Recognition utilizes a pre-trained YOLO AI model, tailored with custom datasets of traffic signs relevant to the vehicle's environment. This technology aims to ensure that vehicles adhere to the directives of traffic signs and comply with established traffic regulations.

The Stanley Control algorithm is utilized in the PC2 program to determine the steering parameters for the front wheel system, using input data such as the real car's rotation angle, position, and velocity. This helps optimize the vehicle's movement to align with the desired trajectory (a minimal sketch of this controller pair follows this list).

The PD control algorithm is implemented to manage the drive motor for the front-wheel steering system, utilizing control parameters derived from the Stanley control algorithm on the PC2 kit. This integration enhances the precision and responsiveness of the steering mechanism, ensuring optimal performance in various driving conditions.

For the brake and angular estimator:

- Applying the Kalman filter algorithm to filter out the noise signal read from the MPU6050 sensor.
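To make the two steering control layers above concrete, here is a minimal Python sketch of a Stanley steering law feeding a PD loop; the gains and the small speed guard are placeholders, not the thesis's tuned values:

    import math

    def stanley_steer(heading_err, cross_track_err, speed, k=1.0):
        # Stanley law: steering = heading error + arctan(k * e / v)
        return heading_err + math.atan2(k * cross_track_err, speed + 1e-3)

    class PD:
        # PD loop converting a target steering angle into a motor command
        def __init__(self, kp, kd):
            self.kp, self.kd, self.prev_err = kp, kd, 0.0

        def update(self, target, measured, dt):
            err = target - measured
            out = self.kp * err + self.kd * (err - self.prev_err) / dt
            self.prev_err = err
            return out

    # outer Stanley loop produces the setpoint, inner PD loop tracks it
    pd = PD(kp=2.0, kd=0.1)                       # placeholder gains
    target = stanley_steer(0.05, 0.2, speed=2.8)  # radians
    command = pd.update(target, measured=0.0, dt=0.02)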

Communication Protocols

Ethernet is a networking technology that encompasses the necessary protocol, port, cable, and computer chip to connect desktops or laptops to a local area network (LAN), enabling fast data transmission through coaxial or fiber optic cables.

Ethernet, a communication technology created by Xerox in the 1970s, enables wired connections between computers in a network. It facilitates the integration of local area networks (LAN) and wide area networks (WAN), allowing multiple devices like printers and laptops to connect across various locations, including buildings, homes, and small communities.

With this user-friendly technology, multiple devices such as switches, routers, and PCs can be connected easily. By utilizing a router and a few Ethernet cables, users can create a local area network (LAN) that enables communication among all connected devices: cables are plugged into the laptops' Ethernet ports and connected to the routers. The Ethernet communication protocol is likewise integrated into the processing and control systems of autonomous technologies.

User Datagram Protocol (UDP) is a transport layer protocol within the Open Systems Interconnection (OSI) model, designed for client-server network applications. It features a straightforward transmission model that eliminates the need for handshaking dialogs, which enhances speed but sacrifices reliability, ordering, and data integrity. By assuming that error-checking and correction are unnecessary, UDP minimizes processing requirements at the network interface level, making it an efficient choice for applications where speed is critical.

Figure 5.2: The Structure of UDP [13]

Following the structure of UDP in Figure 5.2, the communication protocol in the system functions as depicted in Figure 5.3 below:

Figure 5.3: Communication in Car’s Control System between two PCs

In the PCx block, each computer is assigned a distinct Ethernet ID formatted as 192.168.1.XX, with "XX" representing a unique address for every machine. This configuration guarantees precise targeting of Ethernet connections to specific IP addresses and client ports, facilitating seamless communication between devices.

When the control program initiates the PC2 block, it first sends a test request to PC1. A successful response from PC1 within one minute confirms the establishment of the Ethernet connection between the two computers, enabling them to exchange data and collaborate on calculations.

If PC1 fails to respond within 60 seconds after receiving a test signal from PC2, the connection process will automatically reset. This reset mechanism is essential for maintaining a reliable connection, which is vital for the uninterrupted data exchange necessary for the system's autonomous operation.

During autonomous operation, the computers exchange parameters via two data packets, Data_2 and Data_3, formatted as comma-separated strings. Each computer's processing program extracts these parameters upon receipt, utilizing them in control and calculation functions to enhance the vehicle's path determination and refinement.
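A minimal Python sketch of this UDP handshake and exchange (the addresses follow the 192.168.1.XX scheme above, but the port number and message contents are assumptions):

    import socket

    PEER = ("192.168.1.11", 5005)            # PC1 (assumed port)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("192.168.1.12", 5005))        # this machine, PC2
    sock.settimeout(60)                      # reset if PC1 stays silent

    sock.sendto(b"TEST", PEER)               # connection test request
    try:
        reply, _ = sock.recvfrom(1024)       # PC1 confirms the link
        # then exchange Data_2/Data_3 as comma-separated strings
        sock.sendto(b"0.12,0.004,-0.05,1,0", PEER)
    except socket.timeout:
        print("PC1 did not answer within 60 s; restarting the handshake")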

The Universal Asynchronous Receiver-Transmitter (UART) is a vital serial communication protocol that enables data exchange between electronic devices, allowing for seamless asynchronous communication despite differing clock rates or architectures. Commonly used in microcontrollers, embedded systems, and computer peripherals, UART facilitates the transmission of vehicle control parameters from the PC to the microprocessor within the Controller block. In return, the Controller sends back essential readings from encoders and sensor variables, which are crucial for the mathematical functions and algorithms in the operating program, with the PC and Controller utilizing UART as their shared communication language.

To enable automatic driving, the car's PC functions as its brain, calculating essential parameters such as trajectory, velocity, and rotation angle. This data is then encapsulated and transmitted to the central microcontroller using the UART protocol, ensuring seamless vehicle operation.

Figure 5.4: General Communication Between PC and Car

The data sent over the UART protocol in this system must follow the structure shown in Figure 5.5 below:

Figure 5.5: General System Data Structure in UART protocol

The communication between the PC and the vehicle involves transmitting and receiving data through the central microcontroller, known as the "Controller." This process follows a series of general steps to ensure effective data exchange.

Receiving Data_0 from Controller using UART – with Data_0 including the following data:

- The actual rotation angle of the car (for calculating parameters for controlling the front wheels)

- The distance traveled by the car (this data is used for the calculation of controls for the rear wheels)

- Vehicle speed (this data is used for controlling the steering angle in the front wheels)

- Encoder value of the motor controlling the steering system

Based on the general data structure from Figure 5.5, all the data in the packet Data_0 have the structure demonstrated in Figure 5.6 below:

Figure 5.6: Data Structure of Each Data in Data_0

The UART data string received by the PC from the Controller begins with a Start Bit, which consists of specific characters: 'A' denotes the encoder value of the front wheel motor, 'D' indicates the distance traveled by the vehicle from the rear wheel control module, 'Y' represents the actual angle measured by sensors via the Angle Estimator module, and 'V' signifies the start bit for the vehicle speed value during operation. Each Start Bit is followed by its corresponding value, and the data string transmission concludes with a newline character ('\n') serving as the stop bit.
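A hedged Python sketch of a receiver for these strings (the serial port name and baud rate are assumptions; the framing follows Figure 5.6):

    import serial

    FIELDS = {"A": "encoder", "D": "distance", "Y": "yaw_angle", "V": "speed"}

    def parse_data0(line):
        # '<start char><value>\n', e.g. "Y12.5\n" -> ("yaw_angle", 12.5)
        line = line.strip()
        if line and line[0] in FIELDS:
            return FIELDS[line[0]], float(line[1:])
        return None

    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
        print(parse_data0(port.readline().decode(errors="ignore")))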

After receiving the Data_0 and performing parameter calculation tasks, the PC will transmit the Data_1 packet to the Controller via UART – with Data_1 including:

- Rotational speed of the front wheels

- Pulse for turning the front wheels

- The direction of the rear wheels

- Running speed of the rear wheels

Like the data in Data_0, each field in Data_1 also follows the general structure in Figure 5.5, as illustrated in Figure 5.7:

Figure 5.7: Data Structure of Each Data in Data_1

Regarding Figure 5.7, the data string structure in Data_1 comprises:

To control the rear wheels, the data string begins with a 'B' start bit, allowing the Controller to recognize that the information pertains to rear wheel operation. The string includes data indicating the direction of rotation, with 'T' for forward and 'L' for reverse. It also specifies the rotation speed, which is determined by the slider value in the "Back Wheel" section when the button is pressed and held; if the button is not engaged, the speed defaults to 0. The data string concludes with a stop bit.

The steering system utilizes a data string that begins with a start bit 'F' to help the Controller identify and process information specifically for front-wheel control. It includes ROTATION SPEED data that reflects the engine's rotational speed at 70%, and PULSE data that indicates the number of pulses supplied to the engine, derived from the slider value in the Front Wheel assembly, which ranges from 0 to 200%. This value is then converted to pulse counts for the front wheel, spanning from 0 to 40,000 pulses, and the string concludes with a stop bit.
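A sketch of how PC-side code might pack these strings; the exact field separators are not specified in the text, so the comma layout here is an assumption:

    def rear_wheel_cmd(forward, speed):
        direction = "T" if forward else "L"      # 'T' forward, 'L' reverse
        return f"B{direction},{speed}\n".encode()

    def front_wheel_cmd(rotation_speed_pct, pulses):
        pulses = max(0, min(pulses, 40000))      # pulse range given above
        return f"F{rotation_speed_pct},{pulses}\n".encode()

    # e.g. drive forward at slider value 50, steer with 20,000 pulses
    print(rear_wheel_cmd(True, 50), front_wheel_cmd(70, 20000))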

The Controller Area Network (CAN) protocol is a versatile communication standard originally developed for the automotive industry, but it has also found applications in industrial automation and medical equipment. This serial communication protocol operates on a multi-master, distributed control system, allowing any device, or node, to initiate communication and enabling all other nodes to participate. By facilitating information sharing and synchronizing actions without a central controller, the CAN protocol enhances system efficiency. It employs collision detection and arbitration methods to prevent simultaneous transmissions, ensuring that only one node can communicate at a time. Consequently, the CAN protocol is essential for the microprocessors managing critical vehicle systems, such as front and rear wheel steering and braking, enabling seamless information exchange and control parameter adjustments.

- CAN protocol on the steering system (a sketch of a typical frame exchange follows):
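The following python-can sketch shows what such an exchange could look like on a Linux SocketCAN interface; the arbitration IDs and payload layout are illustrative assumptions, not the project's actual frame map:

    import can

    STEERING_CMD_ID = 0x101                      # hypothetical ID

    with can.Bus(channel="can0", interface="socketcan") as bus:
        # send a target pulse count for the steering motor (32-bit little-endian)
        payload = (20000).to_bytes(4, "little")
        bus.send(can.Message(arbitration_id=STEERING_CMD_ID,
                             data=payload, is_extended_id=False))
        feedback = bus.recv(timeout=1.0)         # node's encoder feedback frame
        if feedback is not None:
            print(hex(feedback.arbitration_id), feedback.data.hex())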

A-star algorithm for path finding

5.3.1 Introduction about Hybrid A-star algorithm

Building on the interface design and algorithm outlined in Section 5.2, which successfully executes trajectory movements based on specified coordinates, we now address the challenge of pathfinding for the autonomous vehicle. The vehicle is required to autonomously compute and identify an optimal path from a predetermined map while adhering to dynamic feasibility constraints.

The hybrid A-star algorithm, known for its efficiency in path planning for non-holonomic autonomous vehicles, was first utilized by the Stanford Racing Team's robot, Junior, during the 2007 DARPA Urban Challenge. This algorithm enhances the traditional A-star by addressing its limitations in expanding nodes within a discrete state space, which often fails to consider the dynamic feasibility constraints of vehicles. The hybrid A-star operates in two phases, forward search and analytic expansions, resulting in the generation of effective paths for autonomous navigation.

(a) A-star search (b) Hybrid A-star search

Figure 5.14: Node expansion of A-star and hybrid A-star search algorithm

The hybrid A-star algorithm enhances the conventional A-star approach by utilizing a cost-to-go function g(n) and two heuristic functions, h1(n) and h2(n). The heuristic h1(n) employs an obstacle map to estimate the distance from the current node to the goal, while h2(n) focuses on the kinematics of non-holonomic vehicles, such as autonomous cars, by factoring in speed, direction, and steering angle. This dual heuristic approach enables the algorithm to effectively predict vehicle motions and select appropriate successors in continuous coordinates. These continuous successors are then converted into discrete coordinate nodes to calculate the total movement cost, as depicted in the accompanying figure.

Achieving precise continuous coordinate goal nodes can be challenging due to the grid map's resolution and the vehicle's minimum motion capabilities.

Analytic expansions are essential in the second phase of the hybrid A-star algorithm, ensuring precise continuous coordinates for the goal state. This phase utilizes Reeds-Shepp (RS) curves, which are ideal for non-holonomic vehicles that can move both forward and backward. An RS curve computes the shortest path from a starting point to a target point, as demonstrated in Figure 5.15, which shows a combination of Left, Straight, and Right curves (an LSR curve) between two configurations: [0, 0, 0] and [15, 2, −π/2]. Here, the first two values indicate position, while the third represents vehicle orientation, with the red arrow showing the car's direction. The curvature value of 0.5 enables the robot to achieve the maximum turning angle, or minimum turning radius. The hybrid A-star developers assert that the analytic expansion method significantly enhances accuracy and search speed.

Additionally, a Voronoi field was proposed to assist the vehicle in navigating narrow openings by introducing a path-cost function, defined in [16] as:

$$\rho_V(x, y) = \left(\frac{\alpha}{\alpha + d_O(x, y)}\right) \left(\frac{d_V(x, y)}{d_O(x, y) + d_V(x, y)}\right) \frac{\left(d_O(x, y) - d_O^{max}\right)^2}{\left(d_O^{max}\right)^2}$$

In this model, α represents a constant falloff rate, where α > 0, and d_O^max denotes the maximum effective range of the field, with d_O^max > 0. The variable d_O indicates the distance to the nearest obstacle, with the field applying for d_O < d_O^max, while d_V measures the distance to the nearest edge of the generalized Voronoi diagram. The effectiveness of this function is rooted in its cost structure, which is higher near obstacles and diminishes to zero at points equidistant from them. As a result, the hybrid A-star path tends to keep the vehicle near the center of the road, optimizing navigation.

5.3.2 Implementation of hybrid A-star in the project

In our self-driving car project, we first construct a Binary Occupancy Map as the input to the hybrid A-star algorithm: a 48x56 matrix in which elements with a value of 1 indicate obstacles and elements with a value of 0 indicate drivable space, at a resolution of 0.25 meters.

Secondly, we expand the influence range of obstacles: to guarantee a minimum safe distance, obstacle areas are extended by 0.25 meters.

The Hybrid A-star algorithm is then employed to calculate the vehicle's trajectory, starting from the position (28, 1) with an orientation angle of π/2, which aligns with the map's center and the target point. To simplify the implementation, the plannerHybridAStar function in Matlab is used for path computation.
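
A sketch of this flow from the Python side is shown below, assuming a hypothetical Matlab wrapper script plan_path.m around plannerHybridAStar that builds the binaryOccupancyMap and returns the interpolated path states; the obstacle layout and goal pose are illustrative.

import matlab.engine
import numpy as np

eng = matlab.engine.start_matlab()

# 48x56 binary grid, 0.25 m resolution: 1 = obstacle, 0 = drivable
grid = np.zeros((48, 56))
grid[0, :] = 1.0                                # illustrative obstacle row

start = matlab.double([28 * 0.25, 1 * 0.25, np.pi / 2])
goal = matlab.double([7.0, 10.0, np.pi / 2])    # illustrative target pose

# plan_path.m is a hypothetical wrapper around plannerHybridAStar
path = eng.plan_path(matlab.double(grid.tolist()), start, goal)
waypoints = np.array(path)                      # N x 3 array of (x, y, theta)
eng.quit()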

Hybrid A-star Path Planner Algorithm with Two Waypoints

Algorithm HybridAStarPlanner(Map, Start, Goal)
    Initialize Start node with (x, y, theta, g = 0, h = heuristic(Start, Goal), f = g + h)
    OpenSet <- {Start}; ClosedSet <- empty
    while OpenSet is not empty do
        current <- node in OpenSet with the lowest f-value
        if current position equals Goal then
            return ReconstructPath(current)   // backtrack through parent links
        else remove current from OpenSet and add it to ClosedSet
        for each neighbor of current generated by motion primitives do
            if neighbor is in ClosedSet then continue
            g_tentative <- current.g + cost(current, neighbor)
            if neighbor not in OpenSet or g_tentative < neighbor.g then
                neighbor.parent <- current
                neighbor.g <- g_tentative
                neighbor.f <- neighbor.g + heuristic(neighbor, Goal)
                if neighbor not in OpenSet then
                    add neighbor to OpenSet
    return Failure, no path found

This detailed pseudocode outlines the Hybrid A* algorithm's flow:

- Initialization: The start node is initialized with its position, orientation, and costs

- Main Loop: The algorithm continuously processes nodes from the open set, which are prioritized by the lowest f-value (g+h)

- Neighbor Expansion: For each current node, the algorithm generates neighbors based on valid motion primitives (actions the vehicle can take based on its dynamics)

- Cost Calculation: Each neighbor is evaluated by determining a tentative g-value, the cost from the start node through the current node. The newly calculated path to the neighbor is compared against any previously identified path, and the costs are updated if the new path is cheaper.

- Path Construction: If the goal is reached, the path is reconstructed from the goal node to the start node by backtracking through the parent links of each node

- Termination: The loop concludes when the open set is empty and no path has been identified, resulting in a failure. Conversely, if a path is discovered, the specific steps to reach the goal are returned.

This structure provides a clear, step-by-step representation of the Hybrid A* process, adapting traditional A* methods to effectively address vehicle dynamics and continuous space requirements.

We transform the coordinate arrays generated by the planner into actionable data for vehicle movement. This process is repeated to continuously calculate new paths for the vehicle.

5.4 Lane Detection

Lane detection is crucial for the effective control of autonomous vehicles, as it enables precise identification of road markings and ensures the vehicle stays on its intended path. This capability is essential for maintaining proper direction and significantly enhances the navigation and control systems, ultimately contributing to the safety and reliability of autonomous driving technology.

Accurate lane detection and tracking enable self-driving cars to adjust their position swiftly and avoid hazards such as lane deviations and collisions. This technology also enables advanced features such as automatic lane keeping, lane departure warnings, and lane change assistance, ultimately improving user safety and comfort.

In self-driving vehicle control, lane recognition is crucial for maintaining proper lane positioning and preventing lane departures. The detection and calculation processes are executed following the steps illustrated in Figure 5.16.

Figure 5.16: General Process of Lane Detection

Based on the process shown in Figure 5.16, lane detection proceeds in the following steps:

- Computer PC1 executes a Python program that uses the OpenCV library to capture images from a camera. After image capture, the program applies an undistortion process, using an interpolation matrix, to improve image quality for subsequent processing.

- After calibrating or undistorting the images, the next step in the image processing workflow involves identifying and separating key areas, such as roads and objects. The input image is converted to the HLS color space to isolate chroma and luminance components, using the lightness channel to detect edges and contours with horizontal Sobel gradients; this enhances object segmentation and recognition. The saturation channel is then analyzed to identify significant color regions, facilitating the detection of prominent features while preserving white areas and darkening colors below a certain threshold to improve contrast and reduce noise. The results are compiled into a binary image, where significant regions are marked as 1 and unimportant areas as 0. Following this, the processed image is transformed into a bird's-eye view to minimize geometric distortion, using a transformation matrix calculated from original and new corner points provided in either relative or absolute coordinates. This process results in a perspective-corrected image that enhances the visibility of important features.

Figure 5.17: Lane Picture after perspective transform
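
A minimal OpenCV sketch of these two steps is given below. The Sobel/saturation thresholds and the source and destination corner points are illustrative assumptions that must be tuned to the camera setup.

import cv2
import numpy as np

def binary_lane_image(bgr: np.ndarray) -> np.ndarray:
    hls = cv2.cvtColor(bgr, cv2.COLOR_BGR2HLS)
    l_chan, s_chan = hls[:, :, 1], hls[:, :, 2]
    # Horizontal Sobel gradient on the lightness channel
    sobel_x = np.abs(cv2.Sobel(l_chan, cv2.CV_64F, 1, 0))
    sobel_x = np.uint8(255 * sobel_x / (sobel_x.max() + 1e-6))
    binary = np.zeros_like(l_chan)
    binary[(sobel_x >= 20) & (sobel_x <= 100)] = 1   # edge response
    binary[(s_chan >= 170) & (s_chan <= 255)] = 1    # saturated color regions
    return binary

def birds_eye(binary: np.ndarray) -> np.ndarray:
    h, w = binary.shape
    src = np.float32([[w * 0.43, h * 0.65], [w * 0.57, h * 0.65],
                      [w * 0.90, h * 0.95], [w * 0.10, h * 0.95]])
    dst = np.float32([[w * 0.25, 0], [w * 0.75, 0],
                      [w * 0.75, h], [w * 0.25, h]])
    m = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(binary, m, (w, h))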

- The process of lane line segmentation utilizes the sliding windows method, beginning with the computation of an image histogram to identify initial lane positions. The image is then divided into small vertical windows in which lane points are searched for; if the number of points meets a specified threshold, the window's center is recalibrated based on the average of these points. This iterative process results in an image marked with sliding windows that delineate the left and right lanes. Following the identification of lane points, a second-degree polynomial is fitted to represent each lane's path. The final output is a color image featuring the sliding windows, clearly indicating the detected left and right lane positions.

Figure 5.18: Lane after applying Sliding window method
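
A condensed sketch of this search is shown below; the window count, margin, and minimum-pixel threshold are illustrative assumptions, and binary_warped is the bird's-eye binary image produced in the previous step.

import numpy as np

def find_lane_fits(binary_warped, n_windows=9, margin=50, minpix=50):
    h, w = binary_warped.shape
    # Histogram of the bottom half locates the initial lane bases
    histogram = binary_warped[h // 2:, :].sum(axis=0)
    centers = {"left": int(np.argmax(histogram[: w // 2])),
               "right": int(np.argmax(histogram[w // 2:])) + w // 2}
    ys, xs = binary_warped.nonzero()
    win_h = h // n_windows
    fits = {}
    for side in ("left", "right"):
        center, keep = centers[side], []
        for win in range(n_windows):
            y_low, y_high = h - (win + 1) * win_h, h - win * win_h
            in_win = ((ys >= y_low) & (ys < y_high) &
                      (xs >= center - margin) & (xs < center + margin))
            keep.append(in_win.nonzero()[0])
            if in_win.sum() > minpix:            # re-center on the mean x
                center = int(xs[in_win].mean())
        idx = np.concatenate(keep)
        fits[side] = np.polyfit(ys[idx], xs[idx], 2)  # x = A*y^2 + B*y + C
    return fits["left"], fits["right"]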

- After utilizing the sliding windows method to process the image and extract the lane's edges, the program identifies the lane that needs to be followed. The subsequent step involves determining the lane's curvature and the vehicle's position relative to the lane's center, using image data. This process includes evaluating lane data, converting pixel measurements to real-world units, and employing polynomial interpolation for curvature calculations.

- The function begins by interpolating a second-degree polynomial for both the left and right lanes using the detected lane points. Subsequently, it converts the pixel values into real-world units by applying scaling factors for meters per pixel along both the x-axis and y-axis.

- The function utilizes the interpolated polynomials and the conversion values to determine the curvature of the lane at specific points ahead of the vehicle. For a lane fitted by a second-degree polynomial x = Ay^2 + By + C, the radius of curvature is computed as:

$$R_{curve} = \frac{\left(1 + (2Ay + B)^2\right)^{3/2}}{|2A|}$$

- The function then identifies the vehicle's center position by averaging the x-intersections of the two lane polynomials at the bottom of the image, and calculates the offset by comparing this position with the lane's center. The deviation is obtained by scaling this pixel difference with the meters-per-pixel conversion factor.
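
The computation can be sketched as follows; the meters-per-pixel scale factors are assumptions that depend on the camera calibration and the bird's-eye transform.

import numpy as np

YM_PER_PX = 30 / 720   # meters per pixel, y direction (assumed)
XM_PER_PX = 3.7 / 700  # meters per pixel, x direction (assumed)

def curvature_and_offset(left_fit, right_fit, img_h, img_w):
    # Re-fit in world units: x[m] = A*y[m]^2 + B*y[m] + C
    y = np.linspace(0, img_h - 1, img_h)
    lx, rx = np.polyval(left_fit, y), np.polyval(right_fit, y)
    lfit = np.polyfit(y * YM_PER_PX, lx * XM_PER_PX, 2)
    rfit = np.polyfit(y * YM_PER_PX, rx * XM_PER_PX, 2)

    y_eval = (img_h - 1) * YM_PER_PX    # evaluate at the vehicle's position
    def radius(fit):
        a, b, _ = fit
        return (1 + (2 * a * y_eval + b) ** 2) ** 1.5 / abs(2 * a)

    # Offset: image center vs. midpoint of the two lane intersections
    lane_center = (lx[-1] + rx[-1]) / 2.0
    offset_m = (img_w / 2.0 - lane_center) * XM_PER_PX
    return radius(lfit), radius(rfit), offset_m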

- The results of lane curvature and center offset are sent by PC1 via Ethernet to PC2, as shown in Figure 5.1, to perform the calculations for front wheel control; a minimal sketch of this hand-off follows.
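
The snippet below pushes the two values over a UDP socket as a sketch of this hand-off; the address, port, and comma-separated payload are illustrative assumptions rather than the project's exact wire format.

import socket

PC2_ADDR = ("192.168.1.20", 5005)  # assumed address of PC2

def send_lane_results(curvature_m: float, offset_m: float) -> None:
    payload = f"{curvature_m:.2f},{offset_m:.3f}".encode("ascii")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, PC2_ADDR)

send_lane_results(152.4, -0.12)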

5.5 Semantic segmentation to detect and avoid obstacle

Recent advancements in GPU performance and the availability of extensive public datasets have significantly propelled the exploration of deep learning networks. The introduction of Fully Convolutional Networks (FCN) has led to widespread applications of deep learning techniques in diverse fields such as healthcare, autonomous driving, remote sensing, and security.

Segmentation in autonomous driving is essential for improving the safety and efficiency of self-driving vehicles. This technology processes visual data from the vehicle's surroundings, allowing accurate perception and interpretation of the road environment. By differentiating between lanes, vehicles, pedestrians, and obstacles, autonomous cars can make informed navigation decisions.

Image segmentation employs various methods, including traditional techniques like thresholding, edge detection, and region-based segmentation, as well as advanced deep neural networks such as U-Net, SegNet, and High-Resolution Network (HRNet). The application of neural networks for segmentation has gained significant popularity, particularly in real-world scenarios like obstacle detection in autonomous vehicles and robots. This graduation thesis focuses on utilizing two specific networks, U-Net and HRNet-v2, for image segmentation.

5.5.1 U-network

5.5.1.1 Introduction

The U-Net model, developed by Olaf Ronneberger, Philipp Fischer, and Thomas Brox in their 2015 paper, has established itself as a benchmark in medical image segmentation. Designed for scenarios with limited data, U-Net excels at providing precise outcomes, even when faced with incomplete datasets.

The U-network architecture, characterized by its distinctive symmetric U-shape, significantly enhances the transmission of contextual information across network layers, leading to improved accuracy in image segmentation. The architecture comprises four key components: the encoder, the decoder, the skip connections, and the output layer.

The encoder phase of the U-Net architecture consists of several repeating blocks, each comprising two convolutional layers (Conv2D) followed by a ReLU activation and a Max Pooling layer, which reduces the spatial dimensions of the feature maps. Using 3x3 filters, the number of filters doubles with each block, enabling the network to capture increasingly complex features at deeper layers. This segment of the network is essential for learning to recognize contextual features in the input image, including edges, angles, and various structural details.

In the decoder phase, Conv2DTranspose layers or upsampling operations incrementally restore the feature maps to the original image dimensions. Skip connections are employed to concatenate feature maps from the encoder at each stage, preserving crucial spatial information that may have been lost during encoding. This process effectively reconstructs segmentation details from the contextual features captured by the encoder.

Next, skip connections directly connect the output of each layer in the encoder to the corresponding layer in the decoder, allowing fine-grained spatial information to bypass the compression of the encoding path.

The final output layer is usually a convolutional layer whose number of output channels matches the number of segmentation classes. It employs a softmax activation function to classify each pixel into one of the predefined classes, ultimately generating the final segmentation map that identifies the target object classes.

The U-Net architecture is widely used for medical image segmentation and can also be adapted to other tasks, including satellite image segmentation and object segmentation in road contexts. Its flexibility and effectiveness in learning from limited data contribute to its popularity in both research and practical applications.
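
A compact Keras sketch of the four components described above is given below; the filter counts, input size, and class count are illustrative assumptions rather than the exact configuration trained in this project.

import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 3), n_classes=13):
    inputs = layers.Input(input_shape)
    skips, x = [], inputs
    for f in (64, 128, 256, 512):            # encoder: filters double per block
        x = conv_block(x, f)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, 1024)                  # bottleneck
    for f, skip in zip((512, 256, 128, 64), reversed(skips)):
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])  # skip connection
        x = conv_block(x, f)
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(x)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])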

5.5.1.2 Implementation U-net for self-driving car

Implementing U-Net involves several key components: preparing the data, constructing the U-Net architecture, training the model, and assessing its performance. In this section, we outline the detailed steps using Python and TensorFlow, a widely used framework for deep learning applications.

Various datasets, including Cityscapes and CamVid, are available for training image segmentation models. Additionally, the CARLA Simulator is a powerful tool designed specifically for the development, training, and validation of automated driving systems. CARLA provides open-source code and protocols, along with freely available digital assets such as urban layouts, buildings, and vehicles. The simulation platform allows flexible sensor specifications, environmental conditions, and comprehensive control over both static and dynamic agents, as well as map generation.

Figure 5.20: Some images collected from the Carla Simulator

Utilizing Carla for data collection, we amassed over 7,000 RGB and semantic segmentation images. This dataset is divided into training and testing sets, facilitating model training and evaluation of its generalization on unseen data. This process guarantees a fair and effective assessment of the model's performance.

Data preprocessing is essential for transforming data into a format that models can comprehend and learn from effectively. This involves normalizing images to scale pixel values between 0 and 1, resizing images and masks to meet the model's input specifications, and utilizing techniques like one-hot encoding for the masks, as outlined in Table 3. Proper data preparation significantly improves the training accuracy and efficiency of the model.

Table 3: Labels of RGB values [27]

We enhanced our model's generalizability by incorporating augmented data. Techniques like horizontal flipping and color adjustment enable the model to learn to identify key features from various perspectives and lighting conditions, significantly improving its performance in real-world applications.

Configuring the training pipeline is essential for an efficient training process. Techniques like caching and prefetching significantly reduce data loading times, accelerating the overall training. Additionally, batching and shuffling the data during training are vital to prevent bias from the order of input, which enhances the model's generalization capabilities.
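
A minimal tf.data sketch of this pipeline, covering normalization, one-hot masks, caching, shuffling, batching, and prefetching, is shown below; the image size, class count, and batch size are illustrative assumptions.

import tensorflow as tf

IMG_SIZE, N_CLASSES, BATCH = (256, 256), 13, 16

def preprocess(image, mask):
    image = tf.image.resize(image, IMG_SIZE) / 255.0          # scale to [0, 1]
    mask = tf.image.resize(mask, IMG_SIZE, method="nearest")  # keep label ids
    mask = tf.one_hot(tf.cast(mask[..., 0], tf.int32), N_CLASSES)
    return image, mask

def augment(image, mask):
    if tf.random.uniform(()) > 0.5:                           # horizontal flip
        image = tf.image.flip_left_right(image)
        mask = tf.image.flip_left_right(mask)
    image = tf.image.random_brightness(image, 0.2)            # color adjustment
    return tf.clip_by_value(image, 0.0, 1.0), mask

def build_pipeline(dataset: tf.data.Dataset) -> tf.data.Dataset:
    return (dataset.map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
                   .cache()
                   .shuffle(1000)
                   .map(augment, num_parallel_calls=tf.data.AUTOTUNE)
                   .batch(BATCH)
                   .prefetch(tf.data.AUTOTUNE))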

5.6 Traffic Sign Recognition

Self-driving cars must strictly adhere to traffic signs to ensure safe and compliant driving, facilitating correct navigation of lane changes, turns, and other road requirements. This compliance enhances safety for all road users and promotes efficient movement within the transportation network. Consequently, a thorough understanding of and accurate response to traffic signs are vital for self-driving vehicles to navigate and interact safely in various traffic environments.

Self-driving cars utilize cameras and artificial intelligence (AI) models to recognize traffic signs. The cameras capture images of the environment, which the AI analyzes to identify and classify various traffic signs, including stop signs, speed limit signs, and directional signs. This recognition process allows autonomous vehicles to comprehend and adhere to traffic regulations, enhancing safety and efficiency on the road.

Accurate identification of traffic signs is crucial for vehicle operation, necessitating the optimization and training of machine learning models. The YOLOv8 model, known for its high speed and accuracy, is particularly effective for this purpose. Developed by Joseph Redmon and Ali Farhadi at the University of Washington, YOLO (You Only Look Once) has become a leading object detection and image segmentation model since its launch in 2015. According to Ultralytics, YOLOv8 supports a wide range of vision AI tasks, including detection, segmentation, pose estimation, tracking, and classification. This versatility makes YOLOv8 an ideal choice for training sign recognition systems in dynamic driving environments.

Training the Yolo model to recognize signs in this project is done according to the steps shown in Figure 5.24 below:

Figure 5.25: Step of Input Image Preparation

- Gather images of traffic signs relevant to the vehicle's operating area, either by downloading images of the various sign types from the internet or by photographing actual signs encountered in the vehicle's vicinity

- Enhance the image data by adjusting the brightness, rotating the image at small angles, and adding a bit of noise to create variations

- Label images according to seven categories: right, crossing, junction, parking, stop, forbid, and no parking. Ensure that the labeling includes the coordinates of the signs within the images to assist the model in learning accurate positions while minimizing unnecessary details

- Split the labeled image set into two subsets: one for training and one for testing the model

- Utilize YOLO's pre-trained model and train it with the custom input dataset prepared in step 1, specifically tailored for autonomous vehicle operations. Train the model for 30 epochs; a code sketch of this training flow follows the list below

- Evaluate the accuracy of the YOLO model using the test set

- Assess the model using popular evaluation metrics such as precision, recall, F1-score, or a confusion matrix

- If the model's accuracy does not meet the requirements, retrain the model using the custom dataset and new data until the desired accuracy is achieved.
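
Under the Ultralytics API, the training and evaluation loop above can be sketched as follows. The dataset YAML file name is an assumption; it would list the seven classes and the train/test image folders.

from ultralytics import YOLO

model = YOLO("yolov8n.pt")                  # pre-trained checkpoint
model.train(data="traffic_signs.yaml", epochs=30, imgsz=640)

metrics = model.val()                       # precision/recall/mAP on the test split
print(metrics.box.map50)                    # mAP@0.5 summary

results = model("test_image.jpg")           # inference on a sample image
results[0].show()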

5.7 Kalman Filter Algorithm

The Kalman filter, developed by Rudolf E. Kálmán in 1960, is an algorithm designed to estimate variables by processing a series of noisy measurements, thereby achieving better accuracy than individual measurements. Utilizing a recursive approach, the Kalman filter optimizes the prediction of a system's state, making it a vital tool in applications where precision is critical.

The Kalman filter is extensively utilized in engineering, particularly for navigation, positioning, and vehicle control. In this self-driving car project for public areas, the algorithm is employed to derive rotation angle data from the gyroscope and acceleration sensors.

The Kalman filter algorithm, as shown in Figure 5.26, operates through three key steps: calculating the Kalman Gain, estimating the current value, and determining the error value based on newly acquired data. The filtered data is taken from the output of the second step, ensuring accurate and reliable estimations.

The initial step calculates the Kalman Gain, a coefficient derived from the previous parameters, including the previous Kalman gain and estimation error. The estimation error is then recalculated after each update, improving the accuracy of the estimator in subsequent iterations.

5.7.2 Kalman Filter in the Inertial Measurement Unit (IMU) Sensor

Because the sensors are sampled at discrete time steps, the resulting data is discrete, so we use the Discrete Kalman Filter algorithm.

The state of the system at time k [31]:

$$x_k = F x_{k-1} + B u_k + w_k$$

where x_k is the state matrix, given by

$$x_k = \begin{bmatrix} \theta \\ \dot{\theta}_b \end{bmatrix}$$

The filter output provides both the angle (θ) and the bias (θ̇_b), which are derived from the accelerometer and gyroscope measurements. The bias indicates the gyroscope's drift; to obtain the true rate, the bias must be subtracted from the gyroscope measurement.

The next element is the F matrix, serving as the state transition model applied to the previous state x_{k−1}. In this case F is defined as:

$$F = \begin{bmatrix} 1 & -\Delta t \\ 0 & 1 \end{bmatrix}$$

The next is the B matrix, called the control-input model, which is defined as:

$$B = \begin{bmatrix} \Delta t \\ 0 \end{bmatrix}$$

with the gyroscope rate measurement θ̇ as the control input u_k. To determine the angle, we multiply the rate by Δt, which is a logical approach. Since the bias cannot be calculated directly from the rate, the bottom element of the matrix is set to zero.

The process noise 𝑤 𝑘 follows a Gaussian distribution with a mean of zero and a covariance matrix 𝑄 𝑘 at time 𝑘

The process noise covariance matrix Q_k characterizes the covariance of the state estimates of the accelerometer and its bias. We assume that the bias estimate and the accelerometer estimate are independent, so Q_k reduces to the variances of these estimates.

The Q_k matrix is defined as:

$$Q_k = \begin{bmatrix} Q_\theta & 0 \\ 0 & Q_{\dot{\theta}_b} \end{bmatrix} \Delta t$$

The covariance matrix Q_k is time-dependent: the accelerometer variance Q_θ and the bias variance Q_{θ̇_b} are both scaled by the time interval Δt. This reflects the natural growth of process noise over time since the last state update; for instance, the gyroscope may drift.

The measurement z_k of the true state x_k is given by the current state multiplied by the H matrix plus the measurement noise v_k [31]:

$$z_k = H x_k + v_k$$

H is called the observation model and is used to map the true state space into the observed space. Since only the angle is measured, H is given by:

$$H = \begin{bmatrix} 1 & 0 \end{bmatrix}$$

The measurement noise v_k also has to be Gaussian distributed, with zero mean and covariance R:

$$v_k \sim N(0, R)$$

Some explanation of the remaining notation follows. The previous estimated state is

$$\hat{x}_{k-1|k-1}$$

the estimate at time k−1 based on that state and the estimates of the states before it.

The next is the a priori state:

$$\hat{x}_{k|k-1}$$

the estimate of the state matrix at the current time k based on the previous state of the system and the estimates of the states before it.

The last one is called the a posteriori state:

$$\hat{x}_{k|k}$$

the estimate of the state at time k given observations up to and including time k.

The first phase predicts the current state and the error covariance matrix. Initially, the filter estimates the current state from all the previous states and the gyro measurement:

$$\hat{x}_{k|k-1} = F \hat{x}_{k-1|k-1} + B \dot{\theta}_k$$

The a priori estimate of the angle, θ̂_{k|k−1}, is the previous estimate θ̂_{k−1|k−1} plus the unbiased rate multiplied by the time increment Δt:

$$\hat{\theta}_{k|k-1} = \hat{\theta}_{k-1|k-1} + \Delta t \left(\dot{\theta}_k - \hat{\dot{\theta}}_{b,\,k-1|k-1}\right)$$

Since the bias cannot be measured directly, the a priori estimate of the bias is simply the previous bias value. The a priori error covariance matrix P_{k|k−1} is then estimated from the prior error covariance matrix P_{k−1|k−1}:

$$P_{k|k-1} = F P_{k-1|k-1} F^{T} + Q_k$$

In the next computational loop, we update the predicted values. First we calculate the difference, known as the innovation, between the measurement z_k and the a priori state x̂_{k|k−1}:

$$\tilde{y}_k = z_k - H \hat{x}_{k|k-1}$$

Since H = [1 0], the innovation is simply the measured angle minus the a priori estimated angle. The next step is to calculate the innovation covariance:

$$S_k = H P_{k|k-1} H^{T} + R$$

This quantifies how much the measurement can be trusted, based on the a priori error covariance matrix P_{k|k−1} and the measurement covariance matrix R; the observation model H maps the a priori error covariance into the observed space. Higher measurement noise leads to a larger value of R, indicating reduced confidence in the incoming measurement when noise levels are elevated.

The subsequent step involves calculating the Kalman gain, which expresses the level of trust in the innovation and is defined as:

$$K_k = P_{k|k-1} H^{T} S_k^{-1} = \begin{bmatrix} P_{00,\,k|k-1} / S_k \\ P_{10,\,k|k-1} / S_k \end{bmatrix}$$

Now we can update the a posteriori estimate of the current state:

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \tilde{y}_k$$

From this equation, the new angle estimate equals the a priori angle plus the Kalman gain times the innovation.

The final step updates the a posteriori error covariance matrix:

$$P_{k|k} = (I - K_k H) P_{k|k-1}$$

where I is the identity matrix, defined as

$$I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

The filter thus self-corrects the error covariance matrix according to how much the estimate was corrected, using the a priori error covariance matrix P_{k|k−1} and the innovation covariance S_k. The Kalman algorithm effectively processes and reduces signal noise when reading the vehicle's rotation angle from the IMU. To estimate the steering angle, the vehicle's rotation angle is compared with the desired angle, and the difference is mapped to the corresponding front wheel encoder value.
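
A minimal Python port of these equations, following the structure of [31], is sketched below; the noise constants q_angle, q_bias, and r_measure are typical tuning values rather than the project's exact ones.

class KalmanAngle:
    def __init__(self, q_angle=0.001, q_bias=0.003, r_measure=0.03):
        self.q_angle, self.q_bias, self.r = q_angle, q_bias, r_measure
        self.angle, self.bias = 0.0, 0.0
        self.p = [[0.0, 0.0], [0.0, 0.0]]   # error covariance matrix

    def update(self, new_angle, new_rate, dt):
        # Predict: a priori state and error covariance
        rate = new_rate - self.bias
        self.angle += dt * rate
        p = self.p
        p[0][0] += dt * (dt * p[1][1] - p[0][1] - p[1][0] + self.q_angle)
        p[0][1] -= dt * p[1][1]
        p[1][0] -= dt * p[1][1]
        p[1][1] += self.q_bias * dt

        # Update: innovation, innovation covariance, Kalman gain
        y = new_angle - self.angle          # innovation
        s = p[0][0] + self.r                # innovation covariance
        k0, k1 = p[0][0] / s, p[1][0] / s   # Kalman gain

        self.angle += k0 * y                # a posteriori state
        self.bias += k1 * y

        # A posteriori error covariance: P = (I - K H) P
        p00, p01 = p[0][0], p[0][1]
        p[0][0] -= k0 * p00
        p[0][1] -= k0 * p01
        p[1][0] -= k1 * p00
        p[1][1] -= k1 * p01
        return self.angle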

5.8 Actuator Operation

Figure 5.27: Front Wheels Operating Diagram

To achieve autonomous steering of the front wheels, the motor regulates the steering system based on the direction calculated by the PC. This process involves several key steps to ensure precise control and functionality.

To align the front wheels straight ahead, the vehicle's Microcontroller Unit (MCU) activates the front wheel motor upon startup, turning the wheels fully to the right until a Limit Switch is engaged. This process takes about 2 seconds, after which the dynamic Encoder function begins. The MCU then sends a specific number of pulses to the motor, guiding the front wheels to turn left until the Encoder registers 20,000 pulses. At this point, the front wheels are precisely adjusted to be parallel and centered with the vehicle's wheelbase.

After centering the front wheels, the control circuit's Microcontroller Unit (MCU) processes the Desired Pulse generated by the Stanley Controller algorithm. The Stanley Controller is designed to enhance vehicle steering precision and stability.

The Stanley controller, first utilized by the Stanford University team in the 2005 DARPA Grand Challenge, employs nonlinear feedback control of the lateral error, enabling a tractor to accurately follow a designated path through the steering command of the front wheel [32]. The original steering command is

$$\delta(t) = \phi(t) + \arctan\left(\frac{k\, e(t)}{v(t)}\right)$$

and the parameters of the Stanley controller are visually represented in Figure 5.28.

The heading error between the tractor and the trajectory is denoted as 𝜙(𝑡), while e(t) represents the lateral error from the center of the front axle to the closest point on the path. The tractor's speed is indicated by v(t), and k is a variable gain that can be adjusted for optimal performance.

Figure 5.28: Relationship between the parameters in Stanley controller [32]

The stanley_control function applies the Stanley control law to steer the front wheels, relying on key parameters such as the desired angle, current angle, desired length, actual length, vehicle velocity, and a gain coefficient (k_coef). A mapping function converts the steering angle value returned by stanley_control into an encoder pulse count, linearly mapping input values from a specified input range to an output range; if the input value exceeds the defined limits, the function returns the corresponding output limit. The algorithm computes the error and the delta angle for control purposes: when delta equals 0 it returns 20000 (wheels centered); when delta is greater than 0, it uses the map function to map delta from the range (0, 38) to (20000, 39900); and when delta is negative, it is mapped in the same manner to the pulse range below 20000.
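
A sketch of this logic is given below. The Stanley law follows the formula above and the pulse ranges mirror the values quoted in the text, while the negative branch and the helper names are assumptions for illustration.

import math

def map_range(value, in_min, in_max, out_min, out_max):
    value = max(in_min, min(in_max, value))   # clamp to the input range
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

def stanley_control(heading_error_deg, lateral_error_m, velocity, k_coef=1.0):
    # delta = phi + arctan(k * e / v), expressed in degrees
    delta = heading_error_deg + math.degrees(
        math.atan2(k_coef * lateral_error_m, max(velocity, 1e-3)))
    return max(-38.0, min(38.0, delta))

def delta_to_encoder_pulse(delta):
    if delta == 0:
        return 20000                                       # wheels centered
    if delta > 0:
        return int(map_range(delta, 0, 38, 20000, 39900))
    return int(map_range(delta, -38, 0, 100, 20000))       # assumed mirror branch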

The controller parameters for the steering system include a Proportional (P) gain of 0.1 and a Derivative (D) gain of 0.01, determined through direct testing. The controller computes the Pulse Width Modulation (PWM) output by comparing the desired pulse value with the actual pulses detected by the Encoder during wheel rotation. This PWM signal regulates the front wheel motor, with a percentage value indicating the voltage applied, which directly affects the motor's rotational speed. The values are transmitted via the CAN protocol from the Controller to ensure the motor responds accurately to the steering direction calculated by the PC. Additionally, while in operation, the MCU of the steering system sends the encoder values from the sensor back to the Controller for processing by PC2.

The rear wheel control circuit's Microcontroller Unit (MCU) receives a speed percentage from the Controller via the Controller Area Network (CAN), which determines the required rotation speed based on the algorithm from the PC. Upon receiving this speed value, the MCU generates a Pulse Width Modulation (PWM) signal to drive the rear wheel motor. As the motor runs, the rear wheel motor's Encoder sends feedback to the MCU, allowing accurate calculation of the distance traveled and the vehicle's speed.

The calculated values are transmitted back through the CAN to the Controller, which communicates these parameters to PC2. This information is essential for computational and control tasks, ensuring the vehicle operates according to the parameters and conditions established by the PC2 algorithm.

In addition, the MCU controlling the rear wheel motor also transfers the movement mode and the braking force percentage to the brake system via CAN.

5.8.3.1 Overview of The Brake Control System

The braking system, as discussed in Chapter 3, utilizes a DC motor to engage and disengage the brakes, working against internal springs within the drum brake hub. When the motor operates at a higher voltage, it generates increased torque, resulting in stronger braking force. Conversely, lower-voltage operation produces insufficient torque to counteract the spring force, resulting in weaker braking performance. Figure 5.30 illustrates this relationship clearly.

Figure 5.30: Simple diagram illustrating forces in the brake mechanism

The project utilizes an STM32F103C8T6 microcontroller and an H-bridge circuit to control a motor with a maximum input voltage of 24 VDC. The microcontroller generates PWM (Pulse Width Modulation) signals to modulate the voltage supplied to the motor. Furthermore, an IMU sensor is integrated to recognize the vehicle's movement and orientation.

Figure 5.31: Brake System Control Diagram

From Figure 5.31, we can see the operation of the braking system. The controller receives values from several devices to perform specific actions as required:

- The back wheel controller sends signals describing the vehicle's motion states, such as STOP (S), RUN (R), and DECELERATION (G), together with a value used for brake control

- The limit switch outputs a high-level signal when the motor reaches its starting position. The Inertial Measurement Unit (IMU) sensor provides roll angle data to assess the vehicle's tilt; the sensor's placement on the vehicle is illustrated in Figure 5.32

- Push buttons and potentiometers are used for testing and controlling the brake system

Figure 5.32: The placement of the IMU sensor on the vehicle and direction of angle value

To interpret the table below, it is important to understand the communication signal structure between the rear wheel controller and the brake controller, which comprises 2 bytes of data. The first byte conveys the vehicle's motion state using one of three characters: 'S' for stop, 'R' for run, and 'G' for deceleration, with only one state transmitted at a time. The second byte represents the magnitude of the brake value, ranging from 0 to 100, which controls the motor's braking force.

Values for controlling the brake motor:

Motion state | Brake value | Roll angle | Brake motor command
G (deceleration) | any | > −5° | none (regenerative braking)
G (deceleration) | any | ≤ −5° | PWM = 500 × roll / 5
any | 0 | any | return to initial position
R / G | > 0 | any | PWM = valueIn × 200
S (stop) | any | −3° to 3° | remain at initial position
S (stop) | any | outside −3° to 3° | maximum braking force

When the brake value is set to 0, indicating that the brake system is inactive, the microcontroller directs the motor to return to the initial position regardless of the vehicle's angle. In the DECELERATION state, if the angle is above −5°, regenerative braking is employed since the rear wheel motor is not powered. Conversely, if the vehicle is on an incline, braking force is applied according to the formula PWM = 500 × roll / 5, meaning that a steeper incline produces a stronger braking force.

When the brake value is greater than 0, indicating that a stop is required, the vehicle brakes according to the formula PWM = brake value × 200, scaling the braking level from 0 to 20000. In the STOP state, if the vehicle's angle is between −3° and 3°, the brake system remains in its initial state to conserve energy and prolong its lifespan. However, if the angle exceeds this range, the brake system engages at maximum power to ensure the vehicle comes to a complete stop.
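
The decision logic can be sketched in plain Python as follows; this is one possible reading of the table and prose above, with the returned value interpreted as the PWM command (0 meaning "return the motor to its initial position").

MAX_PWM = 20000

def brake_command(state: str, brake_value: int, roll_deg: float) -> int:
    if brake_value == 0:
        return 0                                   # motor back to initial position
    if state == "G":                               # deceleration
        if roll_deg > -5.0:
            return 0                               # regenerative braking only
        return int(500 * abs(roll_deg) / 5.0)      # brake harder on steeper inclines
    if state == "S":                               # full stop
        if -3.0 <= roll_deg <= 3.0:
            return 0                               # flat ground: stay at initial position
        return MAX_PWM                             # tilted: maximum braking force
    return min(MAX_PWM, brake_value * 200)         # otherwise: proportional braking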

CHAPTER 6: RESULTS, COMMENTS, AND DEVELOPMENT DIRECTIONS

6.1 Achieved Results

6.1.1 The Result of Controlling the Front Wheels

Front wheel control requires precise adjustment of the steering angle, which ranges from -33° to 33°, with 0° indicating the straight-ahead position. This steering range corresponds to encoder values between 0 and 40000, allowing for accurate positioning based on specific requirements.

Figure 6.1: The chart responds to the position of the front wheel

Figure 6.1 demonstrates the responsiveness and speed of the front wheel's position response. The position error is between 300 and 400 pulses, equating to an angular deviation of approximately 0.57° to 0.75°, with an estimated response time of about 200 ms.

6.1.2 The Result of Getting Angle Value from IMU Sensor

The Kalman filter algorithm effectively mitigates external interference, enabling accurate signal reception from sensors and translating this data into the vehicle's motion angle.

Figure 6.2a: Diagram showing the results of noise filtering by rotation angle around the X axis

Figure 6.2b: Diagram showing the results of noise filtering by rotation angle around the Y axis

Figure 6.2c: Diagram showing the results of noise filtering by rotation angle around the Z axis

The Kalman algorithm demonstrates excellent and stable noise filtering; however, its response time is limited by processor speed. Additionally, since the Z-axis angle cannot be derived from acceleration values, an averaging algorithm is applied to the gyroscope readings, resulting in a stable output.

6.1.3 Evaluation and Result of Training Model

Figure 6.3: Confusion matrix of the training

The confusion matrix reveals the accuracy and errors in predicting traffic signs using the YOLO machine learning model across three training sessions. For the "Crossing" class, the model achieved 68 accurate predictions but confused it once with "forbid" and three times with "junction". The "Forbid" class had 21 accurate predictions, with seven confusions with "junction". The "Junction" class was the most successfully predicted, with 75 accurate instances and two confusions with "forbid". The model accurately identified "Parking" 27 times, with one confusion with "junction". For "Right", there were 56 accurate predictions, but nine confusions with "stop". The "Stop" class experienced significant confusion, being mistaken for "parking" seven times and "right" eight times, despite only nine accurate predictions. Lastly, the "Background" class had five instances confused with other sign classes.

Figure 6.4: Precision-Confidence Curve of all classes and each class

The graph in Figure 6.4 illustrates that the model achieves over 80% accuracy for classes such as crossing, junction, parking, and right, even at low confidence levels, showcasing its strong recognition capabilities. In contrast, the accuracy for the "stop" class only increases significantly when confidence exceeds 80%, indicating a need for higher certainty for accurate predictions. Additionally, the "forbid" and "no-parking" classes show lower accuracy, revealing the difficulty of distinguishing these signs. Notably, when confidence reaches 98%, the model's precision for all classes improves to 100%, underscoring its effectiveness in recognizing signs within the input dataset.

Figure 6.5: Recall-Confidence Curve of all classes and each class

Figure 6.5 reveals that the "Crossing" and "Junction" classes exhibit consistently high recall, even at low confidence, indicating the model's reliability in recognizing these traffic signs. In contrast, the "Stop" class only achieves high recall at elevated confidence levels. Additionally, the "Forbid" and "No-parking" classes experience a significant drop in recall as confidence rises, highlighting the challenge of accurately identifying these signs without a high degree of certainty. This analysis suggests that while the model excels at recognizing certain traffic signs, it encounters difficulties with others unless it is highly confident in its predictions.

Figure 6.6: Result of Sign Recognition in Real Situation

Figure 6.6 shows the model's sign recognition after training The result is that the signs can be accurately recognized

6.1.4 Results of applying the Hybrid A-star pathfinding algorithm

We implemented the algorithm using the Matlab Engine API in Matlab 2021b integrated with Python. The results, shown in Figure 6.7, demonstrate the operations outlined in Section 5.3, starting from the initial point [20, 15, π/2] to the final destination.

Figure 6.7: Result of applying the Hybrid A-star algorithm

(a) initial Occupancy Map, (b) expanded Occupancy Map, (c) output trajectory using the Hybrid A-star pathfinding algorithm

6.1.5 Result of U-Net segmentation

The training results from section 5.5.1, illustrated in Figure 6.8, show that the model achieves a pixel accuracy of 0.89 and a mean Intersection over Union (mIoU) of 0.46 when tested on an unseen dataset from the Carla simulator. However, a comparison of images 6.8 b) and c) reveals that while the road area is identified effectively, the road line is not recognized.

Figure 6.8: Result of U-Net segmentation

(a) Input RGB image, (b) Prediction image, (c) Ground truth image

6.1.6 Result of HRNet-v2 segmentation

The results of the training process detailed in section 5.5.2 are illustrated in Figure 6.9, which demonstrates the model's performance on an unseen dataset from CamVid. The model achieved a pixel accuracy of 0.8245 and a mean Intersection over Union (mIoU) of 0.34, indicating satisfactory predictive capability.

In a comparison of images 6.9 b) and c), HRNet-v2 demonstrates superior road line recognition despite having a lower prediction index than U-Net. Additionally, HRNet-v2 excels at segmenting high-resolution images effectively.

6.1.7 Result of applying image segmentation to create a local map

Figure 6.10: Segmentation image for creating the local map

The results demonstrate the application of the HRNet-v2 model on a test road, showing an RGB input image (a), a prediction image (b), and a segmented image with a masked local view (c). In Figure 6.10 b), the perspective shifts from the camera view to a bird's-eye view, overlaid with a mask corresponding to the vehicle's angle on the global map. Figure 6.10 c) illustrates the masked result, where the lane remains colored for visibility; in practice, this section would appear black to indicate the absence of obstructions on the road.

Figures 6.11 and 6.12 showcase the vehicle navigating smoothly on an unobstructed road. In contrast, Figure 6.13 illustrates the vehicle's response to an obstacle, demonstrating its ability to calculate a new trajectory for avoidance. Figures 6.14, 6.15, and 6.16 highlight the vehicle's performance at curves and junctions. Figure 6.17 depicts a situation where the vehicle encounters an obstacle and fails to find an alternative path, leading to a complete stop. Lastly, Figure 6.18 compares the vehicle's actual trajectory with the planned one: Figure 6.18a shows the vehicle successfully creating and following a new path around an obstacle, while Figure 6.18b shows that, in the absence of obstacles, the actual and planned trajectories align closely, demonstrating effective navigation and obstacle avoidance.

Figure 6.11: Car when not run

Figure 6.12: Car after running 5 seconds

Figure 6.14: Car turns into allowed lane

Figure 6.15: Car follows the curve lane

Figure 6.16: Car crosses the junction

Figure 6.17: Car stops in front of obstacle

Figure 6.18: Responsiveness to the vehicle's location (a) In case when there are obstacles, (b) In case when there are no obstacles

6.2 Project Limitations

The research and development of the self-driving vehicle project encountered the following limitations:

- First, data acquisition from sensors is still susceptible to noise from various sources, such as sensor quality or interference from the motor's magnetic field

- Noise affects the communication signal lines of the CAN, UART, and I2C networks because most devices and cables lack protective shielding against interference

- Environmental noise such as light intensity and lane color affects lane and obstacle recognition

- There is no on-board energy management system for devices such as batteries, lights, and indicator lights

6.3 Project Development

Autonomous vehicles continue to garner significant interest and development from various research groups. Our project aims to address the existing limitations by integrating additional sensors for instant collision detection and enhancing GPS navigation for improved positioning. We are implementing multitasking methods and intelligent processing-flow splitting to accelerate data processing. Additionally, we are advancing image processing for the side and rear areas to better recognize obstacles, thereby increasing safety during movement. Finally, our team is focused on researching and integrating advanced algorithms to enable accurate and safe automated parking.

REFERENCES

[1] Xe tự hành Nuro được đăng ký kiểu dáng công nghiệp tại Việt Nam (The Nuro self-driving vehicle is registered as an industrial design in Vietnam), tuoitrethudo.com.vn, Published: 01/09/2022, Link: https://tuoitrethudo.com.vn/xe-tu-hanh-nuro-duoc-dang-ky-kieu-dang-cong-nghiep-tai-viet-nam-204827.html

[2] Nguyễn Xuân, Người Việt phát triển xe tự hành cấp độ 4 đầu tiên (Vietnamese team develops the first Level-4 autonomous vehicle), Published: 19/04/2021, Link: https://vnexpress.net/nguoi-viet-phat-trien-xe-tu-hanh-cap-do-4-dau-tien-

[3] Overview of the six levels of driving automation, from Level 0 (no automation) to Level 5 (full automation), and the self-driving technologies under development by manufacturers such as VinFast and Tesla.

[4] VinFast, overview of self-driving vehicles: SAE level classification and operating principles based on sensors, cameras, and artificial intelligence.

[5] Dotv, Hệ thống phanh ô tô là gì? Phân loại, cấu tạo, nguyên lý? (What is an automotive brake system? Classification, structure, operating principle), Published: 06/2023, Link: https://dotv.vn/he-thong-phanh-o-to/, 1/6/2024

[6] Đặng Quý, Giáo trình lý thuyết ô tô 1 (Textbook of Automobile Theory 1), NXB Hồ Chí Minh, Published: 9/2010

[7] The Engineering ToolBox, Rolling Resistance, Link: https://www.engineeringtoolbox.com/rolling-friction-resistance-d_1303.html, 04/06/2024

[8] STMicroelectronics, STM32F103C8T6 Datasheet, Link: https://www.alldatasheet.com/datasheet-pdf/pdf/201596/STMICROELECTRONICS/STM32F103C8T6.html, 04/06/2024

[9] Microchip Technology, MCP4725 Datasheet, Link: https://www.alldatasheet.com/datasheet-pdf/pdf/305688/MICROCHIP/MCP4725.html

[10] TDK Electronics, MPU6050 Datasheet, Link: https://www.alldatasheet.com/datasheet-pdf/pdf/1132807/TDK/MPU-6050.html, 04/06/2024

[11] Chiradeep BasuMallick, What Is Ethernet? Definition, Types, and Uses, Published: January 12, 2023, Link: https://www.spiceworks.com/tech/networking/articles/what-is-ethernet/, 15/06/2024

[12], [13] Overview articles on the User Datagram Protocol (UDP), a connectionless, best-effort transport protocol that prioritizes speed over guaranteed delivery, Techopedia and Shiksha, 15/06/2024

[14] Vijay Kanade, What Is Universal Asynchronous Receiver-Transmitter (UART)? Meaning, Working, Models, and Uses, Published: March 22, 2024, Link: https://www.spiceworks.com/tech/networking/articles/what-is-uart/, 15/06/2024

[15] Satish Kumar, CAN Protocol, Published: February 8, 2023, Link: https://www.tutorialspoint.com/can-protocol, 15/06/2024

[16] D. Dolgov, S. Thrun, M. Montemerlo, J. Diebel, Practical search techniques in path planning for autonomous driving, Published: June 2008, Link: https://cdn.aaai.org/Workshops/2008/WS-08-10/WS08-10-006.pdf, 17/06/2024

[17] Bijun Tang, Kaoru Hirota, Xiangdong Wu, Yaping Dai, Zhiyang Jia, Path Planning Using Improved Hybrid A* Algorithm, Published: 25/10/2020, Link: https://www.jstage.jst.go.jp/article/jaciii/25/1/25_64/_article/-char/ja/, 17/06/2024

[18] Chien Van Dang, Heungju Ahn, Doo Seok Lee, Sang C. Lee, Improved Analytic Expansions in Hybrid A-Star Path Planning for Non-Holonomic Robots, Published: 13/06/2022, Link: https://www.mdpi.com/2076-3417/12/12/5999, 17/06/2024

[19] Jong Min Kim, Kyung Il Lim and Jung Ha Kim, Auto Parking Path Planning System Using Modified Reeds-Shepp Curve Algorithm, Published: 2014, Link: https://ieeexplore.ieee.org/abstract/document/7057441, 17/06/2024

[20] Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 231–241

[21] Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 2017

[22] Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X.; et al Deep High-Resolution Representation Learning for Visual Recognition arXiv 2020, arXiv:1908.07919

[23] Long, J.; Shelhamer, E.; Darrell, T Fully convolutional networks for semantic segmentation In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp 3431–3440

[25] Wang, Jingdong, et al, Deep high-resolution representation learning for visual recognition IEEE transactions on pattern analysis and machine intelligence 43.10 (2020): 3349-3364

[26] Alexey Dosovitskiy , et al, CARLA: An Open Urban Driving Simulator, 2017

[27] Carla Simulator, Sensor reference, version 0.9.15, Link: https://carla.readthedocs.io/en/latest/ref_sensors/#semantic-segmentation-camera, 18/06/2024

[28] Google, HRNet based model trained on the semantic segmentation dataset CamVid, 20/09/2022, Link: https://www.kaggle.com/models/google/hrnet/tensorFlow2/hrnet-camvid-hrnetv2-w48, 18/06/2024

[29] Gary Bradski, Adrian Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library, O’Reilly Media, Inc., 09/2008

[30] glenn-jocher, RizwanMunawar, AyushExel, Ultralytics YOLO Docs, Published: 2023-11-12, Link: https://docs.ultralytics.com/#yolo-licenses-how-is-ultralytics-yolo-licensed, 20/06/2024

[31] Lauszus, A practical approach to Kalman filter and how to implement it, Published: 10/09/2012, Link: https://blog.tkjelectronics.dk/2012/09/a-practical-approach-to-kalman-filter-and-how-to-implement-it/, 20/06/2024

[32] Liang Wang, Zhiqiang Zhai, Zhongxiang Zhu, Enrong Mao, Path tracking control of an autonomous tractor using an improved Stanley controller optimized with a multiple-population genetic algorithm, Published: 01/11/2022, Link: https://www.mdpi.com/2076-

[33] J. Kong, M. Pfeiffer, G. Schildbach, and F. Borrelli, "Kinematic and dynamic vehicle models for autonomous driving control design," IEEE Intelligent Vehicles Symposium, pp. 1094–1099, 2015

[34] Yeom, K. (2020). "Kinematic and dynamic controller design for autonomous driving of car-like mobile robot." International Journal of Mechanical Engineering and Robotics Research, 9(7), 1058–1064
