
Research, Design and Build an AGV Robot Using a LIDAR Sensor for Navigation


DOCUMENT INFORMATION

Basic information

Title: Research, Design And Build An AGV Robot Using Lidar Sensor For Navigation
Authors: Phan An Dong, Nguyen Anh Minh, Nguyen Hoa Loc
Supervisor: M.E. Tưởng Phước Thọ
University: Ho Chi Minh City University of Technology and Education
Major: Electronics and Telecommunication Engineering Technology
Document type: Graduation Thesis
Year of publication: 2023
City: Ho Chi Minh City
Format
Pages: 109
Size: 8.79 MB

Structure

  • CHAPTER 1: OVERVIEW (14)
    • 1.1. HISTORY (14)
    • 1.2 RESEARCH SITUATION NATIONALLY AND INTERNATIONALLY (14)
    • 1.3 INTRODUCTION (16)
    • 1.4 AGVS FOR INDUSTRIAL USE (17)
    • 1.5 OBJECT AND SCOPE OF THE STUDY (19)
  • CHAPTER 2: MECHANICAL DESIGN (20)
    • 2.1 DESIGN REQUIREMENTS AND TRANSMISSION SELECTION (20)
    • 2.2 MECHANICAL DESIGN (24)
  • CHAPTER 3: POWER CALCULATION (31)
    • 3.1 CALCULATION TO CHOOSE MOTOR (32)
    • 3.2 BELT TRANSMISSION CALCULATIONS (33)
    • 3.3 BELT'S TEST TORQUE FORCE (34)
  • CHAPTER 4: CONTROL METHOD (36)
    • 4.1 ROBOT’S KINEMATICS (36)
    • 4.2 ROBOT DYNAMICS (38)
    • 4.3 ROBOT CONTROL (39)
      • 4.3.1 BASIC OPERATION DIAGRAM (39)
      • 4.3.2 RASPBERRY PI EMBEDDED COMPUTER (41)
      • 4.3.3 MICROCONTROLLER (42)
      • 4.3.4 THE RPLIDAR SENSOR (44)
      • 4.3.5 ENCODER (45)
      • 4.3.6 WORKING FLOW – DATA PROCESS (46)
    • 4.4 PID CONTROL (51)
  • CHAPTER 5: PATH PLANNING (60)
    • 5.1 MAPPING (60)
      • 5.1.1 SLAM (60)
      • 5.1.2 HOW SLAM WORKS (61)
      • 5.1.3 COMMON SLAM PROBLEMS (63)
    • 5.2 PATH PLANNING WITH ROS (64)
      • 5.2.1 ROS (64)
      • 5.2.2 PATH PLANNING – NAVIGATION METHOD (65)
  • CHAPTER 6: RESULTS (75)
    • 6.1 FINAL PRODUCT (75)
    • 6.2 EXPERIMENTS (78)

Contents

OVERVIEW

HISTORY

During World War II, the first mobile robots appeared as a result of technical advances in some relatively new areas of research such as computer science and cybernetics.

Between 1948 and 1949, Elmer and Elsie, two pioneering mobile robots equipped with light sensors, could navigate towards light sources while avoiding obstacles. As technology advanced, subsequent robots emerged with the capability to autonomously locate charging stations when their batteries were low and to follow predefined paths. From 1966 to 1972, the Stanford Research Institute developed Shakey the Robot, which featured cameras, rangefinders, collision sensors, and radio links, making it the first robot capable of reasoning about its actions. Meanwhile, mobile robots were also being developed in the USSR for space exploration initiatives.

In the 20th century, mobile robots primarily served military functions, used during the Cold War between the United States and the Soviet Union, as well as for space exploration initiatives.

In the 21st century, mobile robots are used for a variety of purposes: cleaning floors, vacuuming, sweeping floors in hospitals, finding objects in the environment, and more.

Mobile robots are being advanced by numerous countries and manufacturers to endure harsh environments while effectively mapping their surroundings to track individuals and navigate around obstacles. Their applications are broadening across various sectors, including transportation, healthcare, industrial operations, and notably, military use.

RESEARCH SITUATION NATIONALLY AND INTERNATIONALLY

The COVID-19 pandemic has accelerated the adoption of autonomous mobile robots across various sectors. As the world transitions into a post-pandemic phase, these robots are set to transform industries such as hospitality, healthcare, and manufacturing. Automation has gained significance in retail, warehouses, and restaurants, introducing different levels of autonomy to enhance operational efficiency.

Autonomous mobile robots have revolutionized labor-intensive tasks in factories by efficiently managing operations like picking, sorting, and transporting components. These robots can handle millions of items autonomously, significantly improving material flow and reducing the need for human involvement.

In healthcare, robots can assist with duties such as cleaning and sterilization; distribution of medicine, food, and garbage; and other tasks where human presence is not required.

Research on mobile robots equipped with advanced navigation sensors and cameras is being actively pursued by various domestic organizations. Key topics include high-speed image processing, multi-sensor coordination, spatial positioning and mapping, and the design of trajectories that avoid obstacles. These findings have been presented at the National Mechatronics conferences held in 2002, 2004, 2006, 2008, and 2010. Robotic vision studies are of interest for both industrial robots and mobile robots, particularly in the field of robot identification and control on the basis of visual information.

Vietnamese scientists are actively contributing to the diverse field of robotics research, aligning their work with global trends Their studies focus on key areas such as kinetics, dynamics, orbital design, sensor information processing, actuation, control, and the development of intelligence in robots Both civil and military applications are explored, particularly within the mechanics and machine-building departments of universities and research institutes.

A few well-known companies manufacturing AGVs and AMRs in Viet Nam include:

Uniduc has effectively developed and implemented AGV robotic systems in factories nationwide, showcasing the company's expertise in this area. Their strength lies in a diverse range of products designed to handle varying load capacities, from low to high, ensuring versatility and efficiency in industrial applications.

▪ AGV Yaskawa: active since 2010 with two main product types, towing robots and carrier robots, with an average load of 500 kg and a speed of 10 m/min.

▪ KUKA Group: more than 40 years of experience in the field of automation; in the production of AGV robots the company promotes product diversity.

Omron's AGVs boast over 40 years of expertise, offering a diverse range of products capable of handling loads exceeding 500 kg. With a speed of 0.5 m/s, these vehicles feature selectable battery options with charging times of 6, 8, or 10 hours. Additionally, the user-friendly clock and HMI display enhance operational convenience. Figure 1.1 illustrates the AGV products from both KUKA and Omron.

Figure 1.1 KUKA's and OMRON's AGV product

INTRODUCTION

An Automated Guided Vehicle (AGV), commonly referred to as an AGV rover or AGV robot, utilizes advanced navigation technologies to autonomously transport goods and materials to designated locations without human intervention. These self-driving vehicles, often known as towing robots or automated-guided robots, are revolutionizing logistics and material handling processes.

Autonomous mobile robots (AMRs) significantly outperform automated guided vehicles (AGVs) due to their ability to adapt to dynamic environments, making them well suited for e-commerce warehouse fulfillment. Equipped with advanced vision sensors and machine learning capabilities, AMRs can autonomously assess their surroundings and navigate through existing infrastructure, optimizing their routes in real time rather than following fixed paths.

Figure 1.2 shows how an AGV can transport big and heavy cargoes:

AGV self-driving vehicles play a crucial role in the evolution of smart factories and warehouses, significantly enhancing industrial automation. These autonomous vehicles are widely utilized across various sectors, including automotive, electronics, logistics, pharmaceuticals, healthcare, and consumer goods, for efficient material transportation.

AGV rovers and AGV robots used in manufacturing plants and smart warehouses bring many benefits to businesses compared with traditional methods of transportation; some advantages of AGVs are as follows:

AGVs can operate independently without the intervention of workers; moreover, an AGV fully meets cargo-volume requirements with a large load capacity.

Figure 1.2 AGV can transport big & heavy cargoes (1)

AGV autonomous vehicles utilize a highly accurate and safe programming system, equipped with advanced camera systems and various sensors that enable them to detect obstacles while transporting materials.

Figure 1.3 shows another AGV transporting big and heavy cargoes:

AGV autonomous vehicles excel in inaccessible environments, including areas with toxic chemicals, extreme temperatures, and even outer space, where air is absent.

Most Automated Guided Vehicles (AGVs) are engineered for high precision, minimizing shipping errors and consistently responding to user requests. These vehicles can operate around the clock, 24/7, significantly enhancing labor productivity.

• Easy to change, repair and expand or upgrade the AGV

Many Automated Guided Vehicles (AGVs) offer modular capabilities that allow users to customize their features, adjust load capacities, and modify their operational areas to better meet specific needs.

AGVS FOR INDUSTRIAL USE

Autonomous Mobile Robots (AMRs) and Automated Guided Vehicles (AGVs) are essential tools in manufacturing and warehousing, designed to transport cargo efficiently. These wheeled vehicles are equipped with advanced sensors, onboard computing capabilities, and electric motors, enabling them to navigate and map their surroundings autonomously.

Figure 1.3 AGV can transport big & heavy cargoes (2)

Before operating autonomously, an AMR is driven around the facility so it can map its interior. This is frequently done with a person directing the AMR around using a controller.

Figure 1.4 describes an AMR working in a facility:

After creating an internal map, Autonomous Mobile Robots (AMRs) can navigate to designated locations independently. A key feature of AMRs is their autonomous navigation, allowing them to avoid obstacles such as people or forklifts by maneuvering around them or selecting alternative routes to reach their destination.

In comparison to a conveyor system, AMRs are more adaptable due to their autonomous characteristics

Automated Guided Vehicles (AGVs) represent an older technology compared to Autonomous Mobile Robots (AMRs), possessing limited onboard intelligence and autonomy. Unlike AMRs, AGVs cannot map their surroundings and instead rely on fixed paths defined by embedded wires, special tape, or other guiding mechanisms. When faced with an obstacle, an AGV can only stop and wait for help, highlighting its dependence on predetermined routes.

Figure 1.5 describes an example of an AGV:

Figure 1.4 An AMR working in a facility

The distinction between Automated Guided Vehicles (AGVs) and Autonomous Mobile Robots (AMRs) is increasingly blurring, as AGVs are now equipped with enhanced onboard computational capabilities. This advancement allows certain AGVs to navigate around obstacles along their paths, showcasing their evolving functionality.

OBJECT AND SCOPE OF THE STUDY

Transporting heavy products efficiently is now possible with the use of Automated Guided Vehicles (AGVs). These robots autonomously move large boxes throughout expansive facilities, streamlining logistics and enhancing productivity.

Lidar technology measures distance by emitting laser pulses at objects and recording the response time, enabling precise scans of the surroundings. Once a scan is complete, the data is sent to a computer running ROS, which builds a map of the surrounding area and saves it in a library. Specific destinations can be marked and stored on this map. When a pre-saved destination is selected, ROS calculates the optimal route and signals the Raspberry Pi to drive the encoder-equipped motors, allowing the robot to navigate to the chosen location.


Figure 1.5 Example of an AGV

MECHANICAL DESIGN

DESIGN REQUIREMENTS AND TRANSMISSION SELECTION

The autonomous robot must transport up to 20–30 kilograms of mass around a workshop area, following pre-programmed routes repeated every day.

The robot must be firmly designed to withstand the required load while still ensuring the speed and flexibility needed to transport goods.

Moderate height: the robot should not be too big, so that it can easily dodge obstructions and fit the working environment of the workshop.

Therefore, we set up this target parameter:

Robots designed for workshop environments must navigate areas with frequent human traffic and tight spaces, requiring flexibility in their movement to go straight or turn. Selecting appropriate mechanisms for motor speed and rotation is essential for optimizing the robot's performance and ensuring safe operation in dynamic settings.

There are four types of transmission: electric, mechanical, hydraulic, and pneumatic.

Mechanical transmission is the system responsible for transferring mechanical energy from the motor to various components of a robot. This mechanism alters velocity, torque, force, and movement dynamics, playing a crucial role in the robot's overall functionality.

Based on the principle of work, they are divided into 2 types:

- Friction transmission: belt transmission, friction gear…

- Meshing transmission: gears, screws…

Belt transmission operates on the principle of friction between the belt and pulley, allowing efficient torque and speed transfer. This method enables longer transmission distances than traditional reducers, while offering a flexible transmission ratio that can be adjusted by changing the diameters of the pulleys.

Figure 2.2 describes tooth belt transmitter:

- A belt transmission includes the driving pulley, the driven pulley, the belt, and a tensioner.

Belts are commonly constructed from PE plastic or woven fabric, typically featuring two or more layers. The contact layer, designed for enhanced friction, often runs against chromium-plated steel or cast-iron pulleys, while the outer layer is usually composed of plastic, which offers minimal stretch and high tensile strength.

Chain transmission works on the principle of meshing joints and has the simplest structure: power is transmitted through metal links engaging with the teeth of sprockets.

Structure: driving sprocket, driven sprocket, chain links, and chain plates.

The gear actuator efficiently transmits rotational motion between shafts, ensuring a precise fit between gear teeth that eliminates slippage. This mechanism can transmit both small forces, ideal for precision engineering, and large forces, such as those required in rolling machines, while maintaining an accurate transmission ratio. Gears do, however, require a small shaft gap for optimal performance.

Gears can transmit rotational motion and thereby change the speed, torque, or direction of rotation

Based on the requirements, the candidate transmissions compare as follows:

| Criterion | Belt | Chain | Gear |
| --- | --- | --- | --- |
| Shaft distance | Big | Big | Small |
| Transmission ratio | Flexible | Fixed | Big |
| Safety when overloaded | Safe due to elasticity | Low | Low |
| Power range | Big | Big | Diverse |
| Complexity | Simple, easy to repair and replace | Complex | Very complex, requires high accuracy |
| Maintenance | No lubrication needed, saves cost | Must be lubricated and tensioned regularly | Regular inspection, full oil care, costly repairs |

To meet the project's requirements for easy repair and replacement, low cost, minimal noise, and reliable operating capacity, the team has decided to implement a belt transmission system in the operational diagram.

MECHANICAL DESIGN

Figure 2.6 describes base design and figure 2.7 describes completed frame design:

Figure 2.6 Base design

Figure 2.7 Completed frame design

The robot weighs 20 kg and is designed to carry a load of 15 kg, with the forces evenly distributed across the vehicle and supported primarily by the gear hole and pillow-block bearings. With a total mass of 35 kg and vehicle dimensions of 500 × 400 mm, the total force exerted at the center of the vehicle is 35 × 9.8 = 343 N.

The base measures 500 × 450 × 4 mm (length × width × thickness), giving an area of 500 × 450 = 225,000 mm².

Figure 2.8 describes the strength of the base using Inventor.

Using tensile-testing software, tension is applied until the plate fractures, and the load applied at fracture is recorded.

The force simulation model applies a force of 400 N to the bottom plate, resulting in stresses ranging from 0.001173 MPa to 0.003274 MPa. Given that the stress tolerance of the steel exceeds 380 MPa, the bottom plate can easily withstand this force. However, the most critical points of bearing stress are located at the bores securing the motor and the bearings.
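As a rough cross-check of the simulation figures above, the static load can be spread over the plate area and compared with the quoted stress tolerance. This uniform-pressure simplification is ours, not the thesis's, and it deliberately ignores the stress concentrations at the bores:

```python
# Rough static check of the 500 x 450 mm base plate.
# Numbers (400 N load, 380 MPa tolerance) come from the text; the
# uniform-pressure assumption is a simplification for illustration.
force_n = 400.0                 # simulated load on the bottom plate (N)
area_mm2 = 500.0 * 450.0        # plate area (mm^2)
yield_mpa = 380.0               # quoted stress tolerance of the steel (MPa)

pressure_mpa = force_n / area_mm2       # N/mm^2 is numerically MPa
safety_factor = yield_mpa / pressure_mpa

print(f"uniform pressure {pressure_mpa:.6f} MPa, safety factor ~{safety_factor:.0f}x")
```

The resulting pressure (about 0.0018 MPa) lies inside the simulated 0.0012–0.0033 MPa range, consistent with the conclusion that the plate is nowhere near yielding.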

Number of rotations on the wheel shaft:

Figure 2.8 Strength of the base using Inventor

Power on the wheel axle:

We use belt transmitters and rollers

Transmission efficiency: η = η_roller² × η_belt = 0.99² × 0.95 ≈ 0.93 (5)

Use a transmitter with ratio of 2

Power on the motor shaft:

Calculation and testing of shaft durability:

Shaft checking according to fatigue durability for the wheel shafts: the shaft diameter is determined by the formula: d ≥ ∛(T ∕ (0.2[τ])) (8)

Fatigue testing at dangerous cross section:

According to formula 10.19 in the reference document (document [1], page 195):

In which [s] is the permissible safety coefficient, [s] = 1.5…2.5. According to formulas 10.20 and 10.21 (document [1], page 195):

s_σj = σ₋₁ / (K_σdj·σ_aj + ψ_σ·σ_mj) and s_τj = τ₋₁ / (K_τdj·τ_aj + ψ_τ·τ_mj) (10) (11)

s_σ and s_τ: the safety coefficients considering only bending stress and only torsional stress, respectively, at the section under consideration

σ₋₁ and τ₋₁ are the bending and torsional fatigue limits for the symmetrical cycle; carbon steel C45 has σ_b = 750 MPa (so σ₋₁ = 0.436 × σ_b = 327 MPa)

For a rotating shaft, the bending stress changes cyclically; therefore σ_mj = 0

According to formula 10.22 (document [1], page 196):

According to table 10.6 (document [1], page 196) with axis with round area:

When the shaft rotates in one direction, the torsional stress changes with a pulsating cycle:

In table 10.6 (document [1], page 196) with axis with round area:

… / (2 × 261.64) = 14.6399 MPa. According to formulas 10.25 and 10.26 (document [1], page 197):

k_σd = (k_σ/ε_σ + k_x − 1)/k_y and k_τd = (k_τ/ε_τ + k_x − 1)/k_y (18), where k_x is the stress concentration coefficient, by table 10.8 (document [1], page 197), k_x = 1.1, and k_y is the shaft surface endurance coefficient, according to table 10.9 (document [1])

𝜀 𝜎 and 𝜀 𝜏 : size coefficient refers to the effect of axis size on fatigue limits

𝑘 𝜎 and 𝑘 𝜏 : actual stress concentration coefficient when bending and twisting

⇒ The endurance conditions are satisfied.
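Formula 10.19 cited above combines the bending-only and torsion-only safety factors into a single factor for the section, assuming the standard form s = s_σ·s_τ/√(s_σ² + s_τ²) ≥ [s]. A minimal sketch with illustrative values (the thesis's computed coefficients are not reproduced here):

```python
import math

def combined_safety(s_sigma: float, s_tau: float) -> float:
    """Formula 10.19: s = s_sigma * s_tau / sqrt(s_sigma^2 + s_tau^2)."""
    return s_sigma * s_tau / math.sqrt(s_sigma**2 + s_tau**2)

# Illustrative values only -- not the thesis's computed coefficients.
s = combined_safety(4.0, 3.0)
print(f"s = {s:.2f}, required [s] = 1.5...2.5")  # s = 2.40
```

Note that the combined factor is always below both inputs, which is why each must comfortably exceed [s] on its own.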

Shaft checking according to fatigue durability for the motor shaft:

The shaft diameter is determined by the formula: d ≥ ∛(T ∕ (0.2[τ]))

[τ] is the allowable torsional stress; for shaft materials steel CT5, steel 45, or steel 40, [τ] = 15…30 MPa. d ≥ ∛(4118.438 ∕ (0.2 × 30)) = 8.82 mm (20)

Choose d (mm) based on the actual conditions.
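The preliminary-diameter formula above can be evaluated directly; a small sketch using the torque and allowable stress quoted in the text:

```python
def min_shaft_diameter(torque_nmm: float, tau_allow_mpa: float) -> float:
    """d >= cbrt(T / (0.2 * [tau])), with T in N.mm and [tau] in MPa."""
    return (torque_nmm / (0.2 * tau_allow_mpa)) ** (1.0 / 3.0)

d = min_shaft_diameter(4118.438, 30.0)  # torque and [tau] from the text
print(f"minimum shaft diameter: {d:.2f} mm")  # ~8.82 mm
```

The actual diameter is then rounded up to suit the bearings and couplings available on the market, as the text notes.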

Fatigue testing at dangerous cross sections:

According to formula 10.19 in reference document (document [1], page 195):

In which: [s]: permissible safety coefficient, [s] = 1.5…2.5

According to formulas 10.20 and 10.21 (document [1], page 195):

s_σj = σ₋₁ / (K_σdj·σ_aj + ψ_σ·σ_mj) and s_τj = τ₋₁ / (K_τdj·τ_aj + ψ_τ·τ_mj) (22) (23)

s_σ and s_τ: the safety coefficients considering only normal (bending) stress and only torsional stress, respectively, at the section under consideration

σ₋₁ and τ₋₁ are the bending and torsional fatigue limits for the symmetrical cycle; carbon steel C45 has σ_b = 750 MPa (so σ₋₁ = 0.436 × σ_b = 327 MPa)

τ₋₁ = 0.58 × σ₋₁ = 0.58 × 327 = 189.66 MPa (25). For a rotating shaft, the bending stress changes cyclically; therefore σ_mj = 0

According to formula 10.22 (document [1], page 196):

By table 10.6 (document [1], page 196) with axis with round area:

When the shaft rotates in one direction, the torsional stress changes with a pulsating cycle:

By table 10.6 (document [1], page 196) with axis with round area:

By formulas 10.25 and 10.26 (document [1], page 197)

k_σd = (k_σ/ε_σ + k_x − 1)/k_y and k_τd = (k_τ/ε_τ + k_x − 1)/k_y (32), where k_x is the stress concentration coefficient, by table 10.8 (document [1]), and k_y is the shaft surface endurance coefficient, according to table 10.9 (document [1])

𝜀𝜎 and 𝜀𝜏: size coefficient refers to the effect of axis size on fatigue limits

𝑘𝜎 and 𝑘𝜏: actual stress concentration coefficient when bending and twisting

⇒ The endurance conditions are satisfied.

POWER CALCULATION

CALCULATION TO CHOOSE MOTOR

When the AGV moves, many external and internal forces affect the movement of the vehicle.

Figure 3.1 describes active force on AGV:

Among those forces: θ is the slope of the surface relative to the horizontal; F_N = mg·cosθ is the normal force of the sloped plane on the AGV, where m is the mass of the AGV and g is the acceleration due to gravity; r is the wheel radius; a is the AGV's acceleration and v its velocity; F_f = μ·mg·cosθ is the friction force between the wheels and the slope, with μ the friction coefficient of the wheels against the surface; and F_w is the pulling (traction) force generated by the motor.

The AGV, weighing 20 kg plus an additional 15 kg load, runs at a maximum wheel speed of 14 rpm (approximately 0.1 m/s) to ensure accurate LIDAR sensing and effective mapping. It operates on a flat, smooth surface with an angle of inclination θ = 0°, using wheels of radius 72 mm and a friction coefficient of 0.3, with the acceleration due to gravity taken as 9.8 m/s².
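With the parameters above (35 kg total mass, μ = 0.3, r = 0.072 m, v ≈ 0.1 m/s, θ = 0°), the required traction force, wheel speed, and wheel power can be estimated. The steady-speed assumption (a ≈ 0) is our simplification for the sketch:

```python
import math

m, g = 35.0, 9.8            # total mass (kg), gravity (m/s^2)
mu, r, v = 0.3, 0.072, 0.1  # friction coefficient, wheel radius (m), speed (m/s)
theta, a = 0.0, 0.0         # flat floor, steady speed (our simplification)

# Traction force: friction + slope + acceleration terms
F = mu * m * g * math.cos(theta) + m * g * math.sin(theta) + m * a
T_wheel = F * r                          # torque at the wheels (N.m)
n_wheel = 60.0 * v / (2 * math.pi * r)   # wheel speed (rpm)
P_wheel = F * v                          # power at the wheels (W)

print(f"F = {F:.1f} N, T = {T_wheel:.2f} N.m, "
      f"n = {n_wheel:.1f} rpm, P = {P_wheel:.1f} W")
```

The roughly 13 rpm wheel speed is consistent with the 14 rpm figure quoted above; dividing the wheel power by the 0.93 transmission efficiency then gives the power required at the motor shaft.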

Number of rotations on the wheel shaft:

Power on the wheel axle:

We use belt transmitters and rollers

Transmission efficiency: η = η_roller² × η_belt = 0.99² × 0.95 ≈ 0.93 (38)

Figure 3.1 Active force on AGV

Use a transmitter with ratio of 2

Power on the motor shaft:

T = 9.55 × 10⁶ × P₁/n₁ = 9.55 × 10⁶ × 0.012075/28 = 4118.438 N·mm (41). Table 3.1 describes the motor technical information we need:

Choose: gearmotor DCM50-775, 24 VDC, 400 RPM, with a 13 PPR encoder.

Rotation speed: 400 rpm; torque: 27.6 kgf·cm.

Reduction ratio: 13.8:1; encoder with two channels A and B, 13 pulses per revolution.

BELT TRANSMISSION CALCULATIONS

For this robot, we use a toothed belt with the following parameters: P₁ = 0.012075 kW

The module is determined by the formula: m = 35 × ∛(P₁/n₁) = 35 × ∛(0.012075/28) = 2.644

With ψ_đ being the belt width coefficient; for m = 2.644, we choose the standard module m_tc = 2

Because the chosen standard module is smaller than the calculated one, a larger belt width is selected.

⇒ We choose b = 5 mm according to the standard table

But a belt with such parameters is difficult to find on the market, so the team decided to choose b = 6 mm.

With the driving pulley having d1 = 16 mm and z1 = 20, we have:

Number of teeth on the driven pulley: z2 = u × z1 = 2 × 20 = 40

Select axis distance under the condition:

Tooth pitch: choose p = 2

Pitch diameters of the pulleys: d1 = m × z1 = 2 × 20 = 40 mm (50); d2 = m × z2 = 2 × 40 = 80 mm (51)

Outer diameters of the pulleys, with δ = 0.4 (distance from the bottom of the teeth to the mid-line of the load layer):

da1 = m × z1 − 2δ = 39.2 mm (52); da2 = m × z2 − 2δ = 79.2 mm (53)
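The module and pulley-geometry numbers above can be cross-checked in a few lines; a sketch using the values as they appear in the text (P₁ = 0.012075 kW, n₁ = 28 rpm, z₁ = 20, u = 2, δ = 0.4 mm):

```python
p1_kw, n1_rpm = 0.012075, 28.0  # belt power (kW) and speed (rpm), from the text
z1, u = 20, 2                   # driving-pulley teeth and transmission ratio
delta = 0.4                     # tooth bottom to load-layer mid-line (mm)

m_calc = 35.0 * (p1_kw / n1_rpm) ** (1.0 / 3.0)  # calculated module, ~2.644
m_std = 2                                        # chosen standard module

z2 = u * z1                                # driven-pulley teeth
d1, d2 = m_std * z1, m_std * z2            # pitch diameters (mm)
da1, da2 = d1 - 2 * delta, d2 - 2 * delta  # outer diameters (mm)

print(f"m_calc = {m_calc:.3f}, z2 = {z2}, d1 = {d1}, d2 = {d2}, "
      f"da1 = {da1}, da2 = {da2}")
```

Each computed value matches the figures quoted in the text (z2 = 40, d1 = 40 mm, d2 = 80 mm, da1 = 39.2 mm, da2 = 79.2 mm).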

α₁ is the wrap angle of the driving pulley:

α₁ = [π + (d₂ − d₁)/(2 × 65)] × 57.3° = [3.1416 + 40/130] × 57.3° = 197.63° (54)

The number of teeth simultaneously engaged on the driving pulley:

BELT'S TEST TORQUE FORCE

The torque on the belt must satisfy the following calculation:

Ft is the circumferential force = 1000 × P₁/v

q_m is the mass of 1 meter of belt of width 1 mm = 0.0032

k_d is the dynamic load factor = 1; v is the speed in m/s

With [q0] allowed specific ring force = 5 N/mm

𝐶𝑧: the coefficient refers to the number of teeth that are simultaneously engaged = 7

𝐶𝑢 factor considering the effect of the acceleration drive = 1

Initial tension and force on the shaft:

= 2.112 × 10⁻⁴ − 2.496 × 10⁻⁴ (N) (59). The force acting on the shaft when the speed is not greater than 20 m/s can be calculated by the formula:

CONTROL METHOD

ROBOT’S KINEMATICS

Mobile robots utilize a drive mechanism known as independent steering, or differential drive, based on independently driven propulsion wheels. This system typically features two wheels mounted on a single axle, each of which can be driven independently in either direction. Additionally, the design includes passive caster wheels to prevent the robot from tipping over.

Adjusting the speed of a wheel allows the robot to turn left or right, rotating around a specific point on the line through the wheel axle. This pivotal point is known as the Instantaneous Center of Curvature (ICC), which defines the robot's rotation dynamics.

Figure 4.1 describes the center of the instantaneous curve of rotation:

Altering the speeds of the robot's two wheels directly affects its trajectory. Since the angular speed ω around the Instantaneous Center of Curvature (ICC) is the same for both wheels, we can express this relationship mathematically: ω(R + l/2) = Vr and ω(R − l/2) = Vl.

In which: l is the distance between the two wheels

R: distance from ICC to midpoint between 2 wheels

Vr is the right-wheel velocity, Vl is the left-wheel velocity, and ω is the angular velocity around the ICC

Figure 4.1 Center of the instantaneous curve of rotation

So we can always determine R and ω from: R = (l/2) × (Vr + Vl)/(Vr − Vl) and ω = (Vr − Vl)/l.

We have 3 cases for this model as follows:

● Vr = Vl: the robot moves in a straight line; R is infinite and ω = 0

● Vr = −Vl: R = 0, the robot rotates around the wheel-axle midpoint – it rotates in place

● Vr = 0: the robot rotates about the right wheel, with R = l/2. Similarly, for Vl = 0 the robot rotates about the left wheel ⇒ when one wheel rotates faster than the other, the robot turns toward the slower wheel.
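The three cases above fall directly out of the ICC relations; a minimal sketch (function and variable names are ours):

```python
import math

def diff_drive_icc(vr: float, vl: float, l: float):
    """Return (omega, R) for a differential-drive robot.

    omega is the angular velocity about the ICC; R is the signed distance
    from the wheel-axle midpoint to the ICC (math.inf for straight motion).
    Derived from omega*(R + l/2) = Vr and omega*(R - l/2) = Vl.
    """
    omega = (vr - vl) / l
    if omega == 0.0:
        return 0.0, math.inf          # straight line: R is infinite
    R = (l / 2.0) * (vr + vl) / (vr - vl)
    return omega, R

# The three cases from the text (l = 0.5 m is an illustrative value):
print(diff_drive_icc(1.0, 1.0, 0.5))   # Vr = Vl  -> omega = 0, R infinite
print(diff_drive_icc(1.0, -1.0, 0.5))  # Vr = -Vl -> rotates in place, R = 0
print(diff_drive_icc(0.0, 1.0, 0.5))   # Vr = 0   -> |R| = l/2 (right wheel)
```

The sign of R indicates which side of the robot the ICC lies on, which is why the Vr = 0 case comes out as R = −l/2 with this convention.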

Represent the coordinate systems of the robot, where (x_R, y_R) is the global coordinate system and (x_L, y_L) is the local coordinate system attached to the vehicle center.

Figure 4.2 describes the robot coordinate system:

Figure 4.2 The robot coordinate system

ROBOT DYNAMICS

The robot has a symmetrical design about the perpendicular axis through its center of mass. It is equipped with two drive wheels, each powered by an independent motor, complemented by passive front and rear wheels that ensure stable movement on flat surfaces.

The robot's system model, illustrated in the figure below, features a distance L between the two wheels and a wheel radius r. The model operates under two key constraints: no lateral sliding between the drive wheels and the ground, and rolling without slipping. These constraints are represented mathematically by an equation.

The coordinates of the robot's center are (x_c, y_c), the angle θ denotes the orientation of the robot's x-axis relative to the global x-axis, and the angles φ_r and φ_l are the rotation angles of the right and left wheels, respectively.

Figure 4.3 describes robot system model:

Where M(q) is the inertia matrix, V is the matrix containing the centrifugal and Coriolis terms, G(q) is the gravity matrix, B(q) is the input transformation matrix, τ is the input torque, Aᵀ(q) is the Jacobian matrix associated with the constraints, λ is the constraint force vector, q is the state vector of generalized coordinates, and τ_d denotes unknown external disturbances.

λ = −m(ẋ_c·cosθ + ẏ_c·sinθ)·θ̇ (69), where m is the mass of the robot, and τ_l and τ_r are the torques generated by the left and right wheels, respectively. When mobile robots are bound in fixed spaces:

ROBOT CONTROL

The robot operates in challenging industrial settings and must perform a variety of flexible tasks such as obstacle avoidance, map creation, and trajectory setup based on available maps. To manage these complex tasks efficiently and accurately, the team uses the Raspberry Pi Model 3B+ as the processing unit.

The robot needs to localize itself in the work environment and set its own paths, helping it adapt quickly to different work environments.

From the above requirements, the team decided to create a system as follows:

Figure 4.4 describes basic operation diagram:

The team used a personal laptop running the ROS operating system as the robot's brain, enabling it to perform complex calculations and process data at high speed. This setup allows the creation of precise maps and paths, optimizing the robot's operational efficiency.

The computer requests the robot's current location and surroundings from the RPLIDAR laser sensor. In response, the RPLIDAR scans the environment by emitting laser pulses and measuring the return time of each signal. Once the scan is complete, the sensor sends the data back to the computer, which processes the information to build a map of the environment and accurately determine the robot's location.

When the robot moves, the computer sends a request with the designated coordinates and speed to the processor. The processor handles this through the PID controller, which regulates the motors' movement. Each motor is equipped with an encoder to monitor and adjust its position. During movement, the encoders continuously send signals back to the Arduino to fine-tune position and speed, while the current coordinates are relayed to the computer. Thanks to the computer's power and performance, it can handle multiple tasks simultaneously, process calculations efficiently, and accurately track the robot's position within the working environment.
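The PID speed loop described above can be sketched as a discrete controller. This is an illustrative implementation with made-up gains and output limits, not the firmware actually flashed to the Arduino:

```python
class PID:
    """Minimal discrete PID for wheel-speed control (illustrative gains)."""

    def __init__(self, kp, ki, kd, out_min=-255.0, out_max=255.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max  # e.g. a PWM duty range
        self.integral = 0.0
        self.prev_err = None

    def update(self, setpoint, measured, dt):
        err = setpoint - measured
        self.integral += err * dt
        # No derivative kick on the first sample
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(self.out_min, min(self.out_max, out))  # clamp to PWM range

# Example: drive a measured encoder speed (ticks/s) toward a setpoint.
pid = PID(kp=2.0, ki=0.5, kd=0.05)
pwm = pid.update(setpoint=100.0, measured=90.0, dt=0.01)
```

In practice the `update` call would run at a fixed rate in the Arduino loop, with the encoder supplying `measured` and the clamped output written to the motor driver's PWM pin.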

To meet complex operational requirements, robots need substantial computing power for effective multi-tasking and calculation. While a traditional computer can provide this performance, its size hinders flexibility. The Raspberry Pi 3 Model B+ is an ideal compromise, offering a compact design that retains the essential computing functions at an affordable price, making it well suited to mechanical and electronic systems.

The fully connected diagram is shown in figure 4.6, which describes the connections between the electrical components:

● CPU: Broadcom BCM2837B0, quad-core A53 (ARMv8) 64-bit SoC @1.4GHz

● Connection: 2.4GHz and 5GHz IEEE 802.11 b/g/n/ac wireless LAN,

Bluetooth 4.2, BLE, Gigabit Ethernet over USB 2.0 (Up to 300Mbps)

● Video and audio: 1 full-sized HDMI port, MIPI DSI display port, MIPI CSI camera port, 4-pin stereo output and composite video

● Multimedia: H.264, MPEG-4 decode (1080p30), H.264 encode (1080p30); OpenGL ES 1.1, 2.0 graphics

● Power supply: 5V/2.5A DC microUSB port, 5V DC on GPIO pin, Power over Ethernet (PoE) (requires additional PoE HAT)

To optimize the Raspberry Pi's performance, the team dedicated it solely to map scanning and localization, while employing an additional processor to independently control the motors. With numerous processors available on the market, the team selected the one that best suits their needs:

The team chose the Arduino Uno R3, whose speed and functions reasonably match the group's robot.

The Arduino Uno R3 is a microcontroller board based on the ATmega328P, an 8-bit AVR RISC microcontroller (the board's ATmega16U2 handles the USB-to-serial interface).

• 2x 8-bit Timer/Counter with a dedicated period register and compare channels

• 1x 16-bit Timer/Counter with a dedicated period register, input capture and compare channels

• 1x USART with a fractional baud rate generator and start-of-frame detection

• 1x controller/peripheral Serial Peripheral Interface (SPI)

• 1x Analog Comparator (AC) with a scalable reference input

• Watchdog Timer with separate on-chip oscillator

• Interrupt and wake-up on pin change

The microcontroller features 14 digital input/output pins, including 6 that support PWM outputs, alongside 6 analog inputs It is equipped with a 16 MHz ceramic resonator, a USB connection, a power jack, an ICSP header, and a reset button To begin using it, simply connect the microcontroller to a computer via USB or power it using an AC-to-DC adapter or battery.

RPLIDAR is a laser-based sensor that measures distances by emitting laser beams toward objects and calculating the time it takes for the reflected signal to return This technology employs fundamental mathematical formulas to accurately determine the distance to obstacles.

The distance is computed from the round-trip time of the laser pulse: d = (c × t) / 2, where c is the speed of light, d is the distance from the detector to the object or surface being detected, and t is the time taken for the laser light to reach the object or surface and return to the detector.

The Rplidar proximity sensor utilizes UART communication, enabling seamless interaction with microcontrollers, embedded computers, or computers through a USB-UART transfer circuit and accompanying software This sensor is designed for long-range scanning, with a detection range of 0.15 to 12 meters, a rotation speed of 5.5 Hz, and a high sampling frequency of up to 8000 points per second.

The RPLIDAR sensor offers rapid scanning speeds and extended range, making it ideal for applications such as distance measurement, obstacle detection, and laser mapping in vehicles, autonomous robots, and anti-theft systems, all while ensuring high stability and precision.
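The time-of-flight relation above is simple enough to sketch directly. This is only a check of the formula d = c·t/2, not the RPLIDAR driver (the sensor reports distances itself over UART):

```python
# Time-of-flight distance: the beam travels out and back, so d = c * t / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s


def tof_distance(t_seconds):
    """Round-trip time of a laser pulse -> one-way distance in meters."""
    return C * t_seconds / 2.0
```

For example, a pulse whose echo arrives after 2/c seconds corresponds to a target exactly 1 m away.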

Rotary encoders are essential electromechanical sensors used in a wide range of applications, including motors, automated machinery, consumer electronics, elevators, and conveyor speed monitoring They accurately track the rotation of motor shafts, generating digital positioning and motion data Whether utilizing ascending or absolute measurement, and operating through magnetic or optical methods, these encoders provide critical information on rotational motion Their widespread adoption in industrial and commercial designs underscores their importance in modern automation systems.

The encoder is a crucial engine component that enables the measurement of engine speed and position by generating square pulses, with their frequency varying according to the engine's speed.

The Encoder structure features a circular disk with grooves that rotates around a fixed axis An LED lamp is positioned adjacent to the rotating disc, while a light sensor is located on the opposite side, enabling precise light detection.

The operation principle involves detecting whether an LED light passes through a designated hole, either in the presence or absence of light This mechanism counts the number of pulses generated, increasing the count each time the light is interrupted.

Assuming there is only one hole on the disk, each time the sensor receives the LED light, the disk has rotated one full revolution

Figure 4.10 describes Structure of an Encoder:

Figure 4.10 Structure of an Encoder

PID CONTROL

Figure 4.16 describes PID control function graph:

An automated car equipped with microcontrollers, ultrasonic sensors, and engine controllers faces the challenge of maintaining a straight path Implementing the PID algorithm is an effective solution to ensure precise control and stability in the vehicle's movement.

When aiming for an engine speed of 100 rpm, excessive friction at the wheel results in the engine only reaching 50 rpm This discrepancy creates an error, calculated as e = 100 - 50, which equals 50 rpm.

To address the error indicating e = 50 rpm, which signifies slow engine rotation, we need to enhance the engine's speed by increasing the pulse width.

At this point PWM = 50 + 50 (assuming: 100 rpm ~ 50 PWM)

If the error value (e) is less than zero, it indicates that the engine is operating at an excessive speed, necessitating a reduction in pulse width The algorithm calculates the motor speed error at a specific moment, utilizing this data to make logical adjustments for optimal performance.

The algorithm's return value is derived from three components: Proportional (P), Integral (I), and Derivative (D) This means that the algorithm evaluates error over time during engine operation, incorporating the current error, the cumulative error throughout the process, and the rate of change of the error at the present moment.

The meaning of each component P – I – D:

Figure 4.16 PID control function graph

To determine the current error level, it is essential to establish an ideal value that the desired engine should achieve By comparing the actual performance against this ideal value, we can effectively infer the existing error level.

We have a pulse width that is almost proportional to the error e So, we can write the Formula X = KP * e

With: X is the pulse width

Kp is the coefficient of P

The Kp coefficient must be tuned by hand against the actual motor running process; choosing Kp is the most time-consuming step

The overall error level of the engine process determines the adjustment value, which is influenced by the average errors recorded during previous operations.

In theory, adjusting the pulse width based on the error at stage P should suffice; however, in practice, this is not always the case A minimal return error can result in an insignificant adjustment of the X value at stage P, preventing any meaningful increase in engine speed For instance, if the desired speed is 100 rpm but the actual speed is only 99 rpm, the calculated X value (X = KP * E) may be too small to effectively raise the engine speed.

Over time, vehicles accumulate errors during operation, which can affect engine performance Each error contributes to a cumulative total that eventually alters the engine speed The calculation for determining Pulse Width is now based on this accumulation of errors.

X = Kp·e(t) + Ki·∫₀ᵗ e(τ) dτ

where Ki is the coefficient of the I term. Like Kp, Ki must be tuned to the actual system: each motor and each problem calls for a different choice.

▪ D: derivative error in a previous period, this value means that the adjusted value will somewhat guess the trend of the engine that adjusted for reasonable

When the engine accelerates and the error reaches zero (e = 0), the PWM output X also returns to 0, so the engine speed is merely maintained or reduced. However, due to inertia from the motor shaft and the attached mechanisms, the speed may keep rising, producing a negative error (e < 0). We must then reduce X to bring the engine back to e = 0 and X = 0. Even so, the engine may not settle exactly at the target speed: inertia can carry it past the benchmark by a small margin in either direction, which is referred to as an oscillating response.

The principle for preventing this oscillation is to gradually reduce the change in engine speed (i.e., decrease the acceleration of the engine) as the speed approaches the required point. In other words, we reduce the change in error as the engine speed gets closer to its target.

That change is calculated according to the formula: Δe = e − eₜ

where e is the error at the present time and eₜ is the error at the previous time.

With the derivative term included, the full pulse-width formula becomes:

X = Kp·e(t) + Ki·∫₀ᵗ e(τ) dτ + Kd·(de/dt)
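A discrete form of the P–I–D computation described above can be sketched as follows. This is a generic illustration, not the thesis' actual controller code; the names `setpoint`, `measured`, and `dt` are ours:

```python
class PID:
    """Minimal discrete PID controller: output = Kp*e + Ki*sum(e*dt) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0        # accumulated error (I term)
        self.prev_error = None     # error from the previous step (for the D term)

    def update(self, setpoint, measured, dt):
        e = setpoint - measured                                     # current error (P)
        self.integral += e * dt                                     # accumulate error (I)
        de = 0.0 if self.prev_error is None else (e - self.prev_error) / dt
        self.prev_error = e                                         # change of error (D)
        return self.kp * e + self.ki * self.integral + self.kd * de
```

With Kp = 1 and the rest zero, the example from the text (target 100 rpm, measured 50 rpm) yields an output of 50, matching e = 100 − 50.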

Control the speed and position of the engine by encoder:

To read the return signal from the LPD3806 2-phase rotary encoder, we analyze its structure, which features a circular disc with grooves that rotate around a fixed axis This encoder operates using two LEDs positioned near the disc and a light sensor on the opposite side The principle of operation relies on detecting whether light passes through the holes in the disc Each time the light is interrupted, a pulse is counted, allowing for precise tracking of the encoder's rotation.

Figure 4.18 above describes Rotary Encoder.

When the encoder starts rotating clockwise, the output a led will receive the signal first Figure 4.19 describes LED A returns the pulse signal before of LED B:

Next comes the LED B which returns the pulse signal up

And so on when rotating clockwise or counterclockwise we get the following signal: Figure 4.20 describes LED B returns signal after LED A:

Figure 4.21 describes Encoder's pulse return rule:

Figure 4.19 LED A returns the pulse signal before of LED B

Figure 4.20 LED B returns signal after LED A

Figure 4.21 Encoder's pulse return rule

From here we can know when the encoder moves in the direction of clockwise or counterclockwise based on how to read the encoder's return signal
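The direction-reading rule above can be sketched as a small lookup table over consecutive (A, B) states. This is an illustrative decoder, not the team's firmware; the sample format and the sign convention (A leading B counted as clockwise, +1) are our assumptions:

```python
# Valid quadrature transitions: (previous AB state, current AB state) -> step.
# A leading B (clockwise here) gives +1 per transition; B leading A gives -1.
TRANSITIONS = {
    (0b00, 0b10): +1, (0b10, 0b11): +1, (0b11, 0b01): +1, (0b01, 0b00): +1,
    (0b00, 0b01): -1, (0b01, 0b11): -1, (0b11, 0b10): -1, (0b10, 0b00): -1,
}


def decode(samples):
    """Accumulate a signed pulse count from a sequence of (A, B) level samples."""
    count = 0
    prev = samples[0][0] << 1 | samples[0][1]
    for a, b in samples[1:]:
        cur = a << 1 | b
        count += TRANSITIONS.get((prev, cur), 0)  # ignore repeats/invalid jumps
        prev = cur
    return count
```

One full clockwise quadrature cycle (A rises, B rises, A falls, B falls) yields +4; the reversed sequence yields −4, which is how direction is recovered.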

Figure 4.22 describes how to read the return signal of the encoder:

To measure the number of pulses generated by the encoder over a specified distance, our team conducted an experiment using a distance of 1 meter We then propelled the robot forward to record the pulse count, as detailed in Table 4.1.

Table 4.1 Pulse return of 2 motors

No Number of Pulse left encoder returns

Number of Pulse right encoder returns

Figure 4.22 Read the return signal

To achieve a movement of 1 meter, the encoder must produce 8400 pulses, which means that for a distance of 0.1 meters, it needs to send back 840 pulses While these parameters allow for precise position control of the motor, our focus is on regulating the motor's velocity to enhance the robot's mobility.

The algorithm derived from the calculated robot dynamics determines the angular velocities of the two wheels from the body motion: v_r = ω(R + l) for the right wheel and v_l = ω(R − l) for the left wheel, where ω is the robot's angular velocity, R is the distance from the instantaneous center of curvature to the midpoint between the wheels, and l is half the wheel separation.

According to the theory, the robot's maximum speed is 0.1 m/s with a wheel diameter of 0.144 m, giving a maximum wheel speed of about 14 rpm. With a 1:2 transmission ratio between the motor and the wheel and a 600-pulse-per-revolution encoder, the encoder generates 14 × 2 × 600 = 16,800 pulses per minute, i.e., 280 pulses per second. The corresponding angular velocity of each wheel is 14 × 2π/60 ≈ 1.46 rad/s.
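The conversions above (body velocity to wheel angular velocity, and wheel speed to encoder pulse rate) can be sketched as below. The constants follow the thesis' numbers (0.144 m wheel, 1:2 gear, 600 pulses per motor revolution); the wheel `track` value is a placeholder we introduce for illustration:

```python
import math

WHEEL_RADIUS = 0.072   # m (0.144 m diameter, from the thesis)
GEAR_RATIO = 2         # motor revolutions per wheel revolution (1:2)
ENCODER_PPR = 600      # encoder pulses per motor revolution


def wheel_speeds(v, omega, track=0.3):
    """Body velocity (v [m/s], omega [rad/s]) -> (left, right) wheel angular
    velocity in rad/s for a differential drive. `track` is a placeholder value."""
    w_r = (v + omega * track / 2) / WHEEL_RADIUS
    w_l = (v - omega * track / 2) / WHEEL_RADIUS
    return w_l, w_r


def pulses_per_second(wheel_omega):
    """Wheel angular velocity [rad/s] -> expected encoder pulse rate [pulses/s]."""
    wheel_rps = wheel_omega / (2 * math.pi)
    return wheel_rps * GEAR_RATIO * ENCODER_PPR
```

Plugging in the 14 rpm wheel speed (14 × 2π/60 rad/s) reproduces the 280 pulses-per-second figure derived in the text.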

PATH PLANNING

MAPPING

The method, the algorithm our team used to create a map with the help of ROS is SLAM

Simultaneous Localization and Mapping (SLAM) is a crucial algorithm for autonomous vehicles, enabling them to create a map of their surroundings while determining their precise location within that map By utilizing SLAM techniques, cars can effectively navigate and understand unfamiliar environments Engineers leverage the generated map data for essential tasks, including path planning and obstacle avoidance, enhancing the vehicle's overall navigation capabilities.

Home robot vacuum cleaners without SLAM technology tend to clean randomly, often missing areas and consuming more power, leading to faster battery drain In contrast, robots equipped with SLAM utilize data from wheel rotations, cameras, and imaging sensors for efficient localization and mapping This allows them to accurately calculate movement, avoid obstacles, and prevent redundant cleaning of the same areas, resulting in a more thorough and energy-efficient cleaning process.

Figure 5.1 describes Example using SLAM & without SLAM:

SLAM technology is essential for navigating fleets of mobile robots, enabling tasks like shelf arrangement in large spaces, self-parking of autonomous vehicles, and drone delivery in unfamiliar environments It can be effectively combined with sensor fusion, object tracking, path planning, and path following to enhance operational efficiency.

Figure 5.1 Example using SLAM & without SLAM

To create a successful SLAM (Simultaneous Localization and Mapping) system, two essential technological components are required: sensor signal processing and sensor-agnostic pose-graph optimization The first component, sensor signal processing, focuses on front-end processing and is largely dependent on the specific sensors used In contrast, sensor-agnostic pose-graph optimization involves back-end processing To gain a deeper understanding of front-end processing, we can examine two distinct types of SLAM: visual SLAM and lidar SLAM.

Visual SLAM (vSLAM) utilizes images captured from various cameras and image sensors It can incorporate simple cameras like wide-angle, fisheye, and spherical types, as well as compound eye cameras such as stereo and multi-camera setups, and RGB-D cameras, which include depth and Time-of-Flight (ToF) sensors.

Visual SLAM can be effectively achieved using low-cost cameras, which provide abundant data for landmark recognition This capability allows for the identification of previously measured positions, enhancing the flexibility of SLAM implementations when combined with graph-based optimization techniques.

Monocular SLAM utilizes a single camera as its only sensor, which complicates depth perception This challenge can be addressed by recognizing AR markers, checkerboards, or other identifiable objects within the image for localization, or by integrating camera data with additional sensors like inertial measurement units (IMUs) that assess physical attributes such as velocity and orientation Technologies related to vSLAM include Structure from Motion (SfM), visual odometry, and bundle adjustment.

Visual SLAM algorithms can be categorized into two main types: sparse and dense approaches Sparse methods, including PTAM and ORB-SLAM, focus on matching feature points in images, while dense methods utilize image brightness for processing and include algorithms such as DTAM, LSD-SLAM, DSO, and SVO.

Light detection and ranging (lidar) is a procedure that typically employs a laser sensor (also known as a distance sensor)

Lasers offer superior accuracy compared to cameras and other sensors, making them essential in applications involving fast-moving vehicles like self-driving cars and drones These laser sensors produce 2D or 3D point cloud data, providing high-precision distance measurements that enhance SLAM (Simultaneous Localization and Mapping) for effective map creation By progressively comparing point clouds, movement is approximated, allowing for accurate vehicle localization Commonly referred to as distance sensors, laser sensors are integral to light detection and ranging (LiDAR) technology.

Point clouds, while useful for various applications, lack the fine detail of images, which can hinder effective matching and localization for autonomous vehicles, especially in areas with limited features This challenge can lead to difficulties in aligning point clouds, potentially causing the vehicle to lose its positional accuracy Additionally, point cloud matching requires substantial computational resources, necessitating optimization for improved performance To address these issues, autonomous vehicle localization often integrates multiple data sources, including wheel odometry, GNSS, and IMU data While 2D lidar SLAM is commonly used for warehouse robots, 3D lidar point clouds are increasingly applied in UAVs and autonomous driving scenarios.

Figure 5.3 describes LiDAR based SLAM:

Although SLAM can be applied in several practical applications, various technological obstacles prohibit it from being used more broadly Each has a countermeasure that can assist in overcoming difficulties

 Localization errors accumulate, causing substantial deviation from actual values:

SLAM (Simultaneous Localization and Mapping) measures movement with a margin of error, which accumulates over time, leading to significant discrepancies from actual positions This inaccuracy can distort map data, complicating further navigation For instance, when a robot navigates a square corridor, the starting and ending points may diverge due to accumulated errors, highlighting a challenge known as loop closure difficulty These pose estimation errors are unavoidable, making it essential to identify loop closures and develop methods to correct or mitigate the accumulated inaccuracies.

Figure 5.4 describes Errors in accumulated, mapping:

To mitigate localization errors, recalling features from previously visited locations can serve as effective landmarks The creation of pose graphs facilitates the correction of these mistakes By approaching error minimization as an optimization problem, more precise map data can be achieved In the context of visual SLAM, this optimization process is referred to as bundle adjustment.

 Localization fails and the position on the map is lost:

The peculiarities of a robot's mobility are not considered in image and point-cloud mapping, so this method can produce discontinuous location estimates in some instances.

Figure 5.4 Errors in accumulated, mapping

A robot moving at 1 m/s can experience localization failures, such as unexpectedly jumping ahead by 10 meters To prevent this issue, it is essential to integrate the motion model with multiple sensors and perform calculations based on the collected sensor data.

When localization fails, recalling a landmark from a previously visited location can aid in recovery A feature extraction method facilitates quick scanning for landmarks, utilizing techniques like Bag of Features (BoF) and Bag of Visual Words (BoVW) based on image characteristics Recently, deep learning has been employed to assess distances between features effectively.

 High computational cost for image processing, point cloud processing, and optimization

Implementing SLAM on automotive hardware presents challenges due to high computing costs Typically, computations are executed on low-energy embedded microprocessors with restricted processing power Achieving accurate localization necessitates frequent image processing and point cloud matching Additionally, optimization tasks like loop closure require significant computational resources The key challenge lies in efficiently performing these intensive computations on embedded microcomputers.

One countermeasure is to run many processes concurrently Parallelization is generally ideal for processes such as feature extraction, which is preprocessing of the matching process

PATH PLANNING WITH ROS

In this project, we utilize ROS to manage the LiDAR sensor for environmental scanning and mapping, establishing a designated route for the Automated Guided Vehicle (AGV) The ROS framework is installed on our personal laptop running Ubuntu, facilitating the connection between the LiDAR for mapping and Arduino for motor control.

ROS (Robot Operating System) is a flexible platform that facilitates the development of robot software It comprises various tools, libraries, and protocols aimed at simplifying the creation of complex and robust robotic behaviors across diverse automated systems.

ROS is a powerful software package designed for the swift and efficient development of autonomous robotic systems It serves as a comprehensive toolkit for creating innovative solutions or enhancing existing ones With a wide array of drivers and advanced algorithms, ROS is a crucial resource commonly utilized in the field of robotics.

In the Robot Operating System (ROS), a node serves as the fundamental unit, responsible for managing devices or computational techniques, with each node assigned a specific task Communication between nodes is facilitated through topics or services ROS software is organized into packages, typically designed to perform a particular function and may include one or more nodes.

In ROS, a topic serves as a data stream through which nodes share information, typically carrying repeated messages of the same type, such as sensor readouts or motor speeds. Each topic is identified by a unique name and a specific message type, and nodes connect to it to publish or subscribe to messages. There is no limit on the number of different nodes that may publish or subscribe to a topic, and a single node may act as a publisher, a subscriber, or both.

Service communication operates similarly to the client-server model, where one node, acting as the server, registers its services within the system This setup enables other nodes to request those services and receive responses Unlike topics, services facilitate two-way communication, allowing requests to include data alongside the response.

Path planning for mobile robots involves determining the sequence of movements required for a robot to navigate from its starting point to its destination while avoiding collisions with obstacles This critical process ensures safe and efficient navigation in dynamic environments.

A few algorithms for path planning are Dijkstra’s algorithm, A* or A-star, D* or Dynamic A*, Artificial potential field method, Visibility graph method Path planning algorithms can be graph or occupancy grid based

A graph-based approach enables robots to navigate by representing locations as vertices and pathways as edges Each edge can have weights that reflect the complexity of the route, such as door width or the energy needed to open it To find the optimal trajectory, the robot calculates the shortest path between its current position and the desired destination.

The occupancy grid method divides a space into cells, similar to map pixels, categorizing them as either occupied or free It identifies one cell as the robot's current position and another as its destination The robot's trajectory is calculated by determining the shortest path that avoids crossing any occupied cells.

Our team used occupancy grid path planning in ROS in this project
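As an illustration of occupancy-grid planning (a generic textbook A* search, not the actual ROS planner used in the project), finding the shortest path over a grid of free/occupied cells might look like this:

```python
import heapq


def astar(grid, start, goal):
    """A* over an occupancy grid (0 = free, 1 = occupied), 4-connected moves.
    Returns the list of cells from start to goal, or None if no path exists."""
    def h(p):  # Manhattan-distance heuristic (admissible for unit-cost moves)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, None)]   # (f = g + h, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                  # already expanded with a better cost
            continue
        came_from[cur] = parent
        if cur == goal:                       # reconstruct path by walking parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None
```

On a 3×3 grid with a wall across the middle row, the planner routes around the occupied cells exactly as the occupancy-grid method describes.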

Figure 5.5 describes Grid map method and figure 5.6 describes how to set up the working environments of ROS:

Figure 5.6 Set up the working environments of ROS

Before launching Rviz to visualize the AGV's map and position, it's essential to execute a series of commands and code to establish parameters and rules, ensuring proper data transmission between the laptop, Raspberry Pi, and LiDAR.

In a new environment lacking a pre-existing map, the LiDAR system can only scan its immediate surroundings in a circular pattern, limited by the room's boundaries To facilitate mapping, we will manually guide the Automated Guided Vehicle (AGV) throughout the area we intend to map and scan.

Figure 5.7 describes new environment - not scan yet:

After the RPLIDAR completed the mapping process, it identified surrounding obstacles and pinpointed the AGV's location using ROS and integrated sensors The coordinate system utilized in the function and code facilitated this process, resulting in a clear display of both the map and the AGV's position.

Figure 5.8 describes how display map on screen after scanned:

Figure 5.7 New environment - not scan yet

Figure 5.8 Display map on screen after scanned

The dwa_local_planner package is essential for navigating and avoiding obstacles with a mobile robot It serves as a controller that connects the path planner to the robot, utilizing a map to create a kinematic trajectory from start to finish Throughout this process, the planner generates a value function displayed as a grid map surrounding the robot, which encodes the costs of traversing the grid cells The controller then uses this value function to compute the velocities dx, dy, and dtheta, enabling effective communication with the robot for smooth navigation.

Figure 5.9 describes Dynamic Window Approach (DWA):

The Dynamic Window Approach (DWA) method works on the following principle:

1 Sample discretely in the robot's control space (dx, dy, dtheta)

2 Perform forward simulation from the robot's present state for each sampled velocity to forecast what would happen if the sampled velocity was applied for some (short) length of time

3 Score (evaluate) each trajectory resulting from the forward simulation using a measure that includes features such as obstacle proximity, goal proximity, global path closeness, and speed Remove erroneous trajectories (those that collide with barriers)

4 Select the highest-scoring trajectory and send the velocity associated with it to the mobile base

5 Rinse and repeat as necessary

Figure 5.9 Dynamic Window Approach (DWA)

The Dynamic Window Approach focuses on two main objectives: calculating a valid velocity search space and selecting the optimal velocity This search space consists of velocities that ensure a safe trajectory, allowing the robot to stop before any collisions, based on its attainable velocities in the next time slice defined by its dynamics The chosen ideal velocity aims to enhance the robot's clearance, speed, and heading towards the target.

In summary, DWA uses these simple steps to find the best velocity and build a trajectory for the robot.
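The sample-simulate-score loop described above can be sketched in Python. This is a toy illustration, not the `dwa_local_planner` code: the sampling offsets, scoring weights, collision threshold, and the unicycle motion model are all our assumptions:

```python
import math


def dwa_best_velocity(v_now, w_now, obstacles, goal, dt=0.1, horizon=1.0):
    """Toy Dynamic Window Approach: sample (v, w) near the current velocity,
    forward-simulate each pair, score the trajectories, and keep the best."""
    best, best_score = (0.0, 0.0), -math.inf
    for v in (v_now - 0.05, v_now, v_now + 0.05):      # window in linear velocity
        for w in (w_now - 0.5, w_now, w_now + 0.5):    # window in angular velocity
            x = y = th = 0.0
            clearance = math.inf
            for _ in range(int(round(horizon / dt))):  # forward simulation
                th += w * dt
                x += v * math.cos(th) * dt
                y += v * math.sin(th) * dt
                for ox, oy in obstacles:
                    clearance = min(clearance, math.hypot(x - ox, y - oy))
            if clearance < 0.1:                        # would collide: reject
                continue
            goal_term = -math.hypot(x - goal[0], y - goal[1])  # closer is better
            score = 2.0 * goal_term + 0.5 * min(clearance, 1.0) + 0.1 * v
            if score > best_score:
                best, best_score = (v, w), score
    return best
```

With no obstacles and the goal straight ahead, the highest-scoring sample is the fastest straight-line velocity, matching step 4 of the procedure.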

RESULTS

FINAL PRODUCT

Following the design outlined in chapter 2, we constructed a comparable Automated Guided Vehicle (AGV) featuring a steel base that weighs approximately 10kg This component was precision-machined by a local metal workshop in Ho Chi Minh City, adhering to the specifications provided in our team's design blueprint.

Figure 6.1 describes the base of the mobile robot and Figure 6.2 describes 2 main wheels and 4 caster wheels:

Figure 6.1 The base of the mobile robot

Figure 6.2 2 main wheels and 4 caster wheels

Figure 6.3 present the product when install all the control unit & transmission system:

The robot's frame is constructed from seven separate aluminum sheets, securely fastened with bolts and nuts, and designed by our team in collaboration with a company based in Ho Chi Minh City Additionally, the buttons and lights on the robot facilitate user interaction and provide important notifications.

Figure 6.4 describes the frame of the mobile robot and Figure 6.5 describes completed mobile robot:

Figure 6.3 Install all the control unit & transmission system

Figure 6.4 Frame of the mobile robot

EXPERIMENTS

• Position deviation from the specified point

To evaluate the accuracy of the SLAM path planning algorithm, the AGV will navigate a completed map of an area, identifying easily recognizable landmarks to select a destination The AGV will repeatedly travel to this destination, with measurements taken in millimeters (mm) to assess the precision of its movements.

Figure 6.6 Map of the place we chose to do the experiment displayed on screen
Figure 6.7 Move from one position to another - marked with black tape

The experiment location is illustrated in Figure 6.6, while Figure 6.7 depicts the transition between positions, indicated by black tape.

The map illustrates black barriers that signify a row of tables, while the black square rectangle near the destination indicates the location of a white table and an adjacent chair Each square on the grid corresponds to a measurement of 100 cm.

In experiment 1, the reference coordinates (x, y) for comparison with the measured results are (100, 400); the unit of measurement is centimeters (cm). Table 6.1 describes the results of experiment 1:

No X position Y position Deviation of X Deviation of Y

Average position deviation of X: = sum of all 10 times/10 = ± 5.3 cm

Average position deviation of Y: = sum of all 10 times/10 = ± 6.8 cm

The experiment was done at the Robotic Room of Ho Chi Minh city University of Technology and Education

The second experiment involves navigating a robot along an L-shaped path while increasing the travel distance Throughout this journey, we introduced obstacles like chairs and pedestrians to assess the robot's speed in detecting these hindrances and adjusting its trajectory accordingly.

Figure 6.8 shows our team measuring to check whether the map was correct:

During our experiment, we faced challenges with the LiDAR sensor, which relies heavily on light Conducting the experiment in the afternoon, we encountered excessive ambient light that interfered with the sensor's signals Consequently, the mobile robot moved slowly and frequently stopped, resulting in a lack of data collection for the current map.

Figure 6.8 Measuring to see if the map was correct

Figure 6.9 describes the 2nd experiment - move in an L shape:

The journey begins at the air conditioning unit (AC) and concludes at the room's entrance, with the green line illustrating the robot's designated path.


Figure 6.9 The 2nd experiment - move in an L shape

Figure 6.10 The starting point and the destination

In Experiment 2, the starting position is set at coordinates (100, 75) and the destination at (150, 150), using centimeters as the measuring unit Table 6.2 outlines the details of this experiment, focusing on the deviation between the starting point and the destination.

Table 6.2 Experiment 2 - Deviation of starting position and destination

The experiment conducted at the Robotic Room of Ho Chi Minh City University of Technology and Education revealed the average position deviations for the X and Y axes The average position deviation of X at the starting position was ± 4.1 cm, while at the destination, it increased to ± 5.8 cm For the Y axis, the average position deviation at the starting position was ± 3.8 cm, and at the destination, it was ± 5.2 cm.

Table columns: X position, Y position, Deviation of X, and Deviation of Y; each is recorded at the start and at the end of the run.

Figure 6.11 describes how we check the detection of obstacles:

The trajectory changed when the LiDAR spotted anything interfering with the moving path, as we can observe from the following pictures:

Figure 6.11 Check the detection of obstacles

Figure 6.12 describes how the mobile robot changed its route when encountering an obstacle:

Figure 6.12 The mobile robot changed the route when encountering an obstacle

When a chair is placed in front of the robot, a short orange wall appears on the map, obstructing the AGV's path. Consequently, after a destination is selected, the robot's trajectory changes from a straight line to a curved route that navigates around the chair.

Figure 6.13 compares the scene on the map with the real scene:

Figure 6.13 Pictures in the map and in reality

In a crowded room, when a mobile robot encounters a person moving past its sensor, a thin red line appears on the map, indicating a sudden obstacle in its path The robot promptly alters its route to navigate around the obstacle and will revert to its original path once the obstacle is no longer present Figure 6.14 illustrates the robot's navigation through the bustling environment.

Figure 6.14 Travel in a crowded room

Figure 6.15 describes how the trajectory changed when someone was in front of the robot and returned when they moved away:

Figure 6.15 The route changed when someone was in front and returned when they moved away

The team has carefully calculated and chosen appropriate hardware, ensuring that the car meets the initial criteria This allows the vehicle to adjust its speed and maintain a steady movement while keeping position deviation within permissible limits.

- Calculation of the power of the engine

- Calculate and select a belt transmitter

- Calculate robot kinematics and dynamics

- Calculate the durability of the bearing frame

- The car can spot obstacles, proceed to plan a new path, or just simply stop and wait

- How to connect all the components and run the navigation system smoothly

- How to control the robot remotely using the Raspberry Pi 3

- The program still hasn’t run smoothly

- Lights and buttons not working properly yet

- The robot can only detect obstacles the same height or higher than the LiDAR, not objects that are smaller or below the LiDAR

- Display maps on a touch screen with different options and modes

- Design an anti-impact system

- Program an image- and voice-recognition system for further development

- Install a notification system such as honking when reaching a destination

UNIVERSITY OF TECHNOLOGY AND EDUCATION SOCIALIST REPUBLIC OF VIETNAM

In Council 13, the following students are registered: Phan An Đông (ID: 19146113), Nguyễn Anh Minh (ID: 19146093), and Nguyễn Hòa Lộc (ID: 19146092).

Project’s name: Research, design and build an AGV robot using LIDAR sensor for navigation

Instructor’s full name: ME Tưởng Phước Thọ

1. Comments on the students' spirit and work attitude

2. Comments on the graduation thesis's performance

2.1. Structure and method of presenting the graduation thesis:

(The project's theory, practicality, and applicability, as well as research directions, can be developed further)

TP.HCM, date month 08 year 2023

(Sign and write full name)

No. Evaluation criteria Max score

1 Form and structure of graduation thesis 20

The correct format, including the complete form and substance of entries

Objectives, tasks, overviews of the topic 10

Capability to apply knowledge of mathematics, science and engineering, social sciences…

Capability to perform/analyze/synthesize/evaluate 15

The capacity to develop or build a system, component, or process that fits a specific requirement while working within actual limits

Capability to improve and develop 15

Capability to use technical tools, specialized software … 5

3 Evaluate the applicability of the topic 10

UNIVERSITY OF TECHNOLOGY AND EDUCATION SOCIALIST REPUBLIC OF VIETNAM

FACULTY FOR HIGH QUALITY TRAINING

COMMENTS FOR THE GRADUATION THESIS

The thesis review committee for Council 13, No 03, includes the following students: Phan An Đông (ID: 19146113), Nguyễn Anh Minh (ID: 19146093), and Nguyễn Hòa Lộc (ID: 19146092).

Project’s name: Research, design and build an AGV robot using LIDAR sensor for navigation

Thesis reviewer’s full name: ME Phạm Bạch Dương

1. Structure and presentation method of the graduation thesis:

(Theory, practicality, and applicability of the project; research directions that can be developed further)

TP HCM, date month 08 year 2023

No. Evaluation criteria Max score

1 Form and structure of graduation thesis 20

The correct format, including the complete form and substance of entries

Objectives, tasks, overviews of the topic 10

Capability to apply knowledge of mathematics, science and engineering, social sciences…

Capability to perform/analyze/synthesize/evaluate 15

The capacity to develop or build a system, component, or process that fits a specific requirement while working within actual limits

Capability to improve and develop 15

Capability to use technical tools, specialized software … 5

3 Evaluate the applicability of the topic 10

UNIVERSITY OF TECHNOLOGY AND EDUCATION

Student's full name: Phan An Đông, ID: 19146113, Council: 13, No: 03
Student's full name: Nguyễn Anh Minh, ID: 19146093, Council: 13, No: 03
Student's full name: Nguyễn Hòa Lộc, ID: 19146092, Council: 13, No: 03

Project’s name: Research, design and build an AGV robot using LIDAR sensor for navigation

Instructor’s full name: ME Tưởng Phước Thọ

No. Criteria Degree of achievement Score Notes

1 Research’s problem Clarity of goals, problems to be solved

Explain the need to solve the problem 1 - 4

Criteria for the proposed solution 1 - 3

3 The novelty of the method’s research

Research plan, method/model design 2 - 7

4 Creativity in research Methods of data collection and processing/Process of manufacturing and testing samples according to design

Clarity on the scientific foundations of inquiry / Problem-solving solutions

The solution implementation model's scientific foundation 3 - 7

6 Conduct research/implementation: data gathering, analysis, and interpretation, fabrication, and testing

Systematicity, dependability (reproducibility of collected data/conformity of produced samples with design models)

Appropriateness of mathematical and statistical procedures in data processing/Responsiveness of the originally created sample

The adequacy of data to be able to draw objective conclusions / Degree of completeness and completion of post-manufacturing samples in terms of technology

7 Report Logical level of layout data, images, graphs

Clarity of data annotations, images, graphs, drawings 1 - 3

Conformity with the given format, drawings according to TCVN 1 - 3

8 Answer the question The level of clarity, conciseness, and depth, showing the effectiveness of the scientific basis of the topic

The degree of independence of each student in the research topic 3 - 10

Understanding the limitations of the study results 2 - 7

* Minimum HCM City, date month 08 year 2023

(Sign and write full name)

[1] Trịnh Chất – Lê Văn Uyển, Tính toán thiết kế hệ dẫn động cơ khí, Nhà xuất bản giáo dục, 2006

[2] Nguyễn Trường Thịnh, Giáo trình Kỹ thuật robot, Nhà xuất bản ĐHQG, 2014

[3] K Uyên, “Thuật Toán PID trong Điều Khiển Tự Động,” 2020: https://www.stdio.vn/dien-tu-ung-dung/thuat-toan-pid-trong-dieu-khien-tu-dong-3Iu1u

[4] Robot development trends worldwide and the state of robot research in Vietnam.

[5] Daniel, "Top 7 robot tự hành AGV tốt nhất trong những năm gần đây", 31-07-2021, link: maysanxuattudong.com

[6] Computational Principles of Mobile Robotics, link: http://www.cs.columbia.edu/~allen/F17/NOTES/icckinematics.pdf

[7] Pulse Width Modulation, link: https://www.electronics-tutorials.ws/blog/pulse-width-modulation.html

[8] Handson Technology, “BTS7960 High Current 43A H-Bridge Motor Driver”, link: BTS7960 Motor Driver.pdf (handsontec.com)

[9] Farbod Fahimi, Autonomous Robots Modeling, Path Planning, and Control, Springer Science+Business Media, 2009

[10] United States Naval Academy, “Mobile Robot Kinematics”, link: https://www.usna.edu/Users/cs/crabbe/SI475/current/mob-kin/mobkin.pdf

[11] What is SLAM? – link: What is SLAM? Simultaneous Localization and Mapping (geoslam.com)

[12] ROS tutorials, link: ROS introduction | Husarion

[13] DWA algorithm – link: dwa_local_planner - ROS Wiki

Engineering drawing notes (condensed from the appendix drawing title blocks):

- Units: millimeters (mm); tolerance for unmarked dimensions: ±0.1 mm on machined parts and ±0.5 mm on sheet-metal parts.

- Surface roughness: 25 µm on all sides and holes.

- Remove all sharp edges and burrs; apply anti-rust paint for long-term use.

- Hole dimensions are symmetric about the horizontal and vertical centerlines.

- Sheet thickness: 2 mm.

- Transmission: 20-tooth pulleys on the motors and encoders, 40-tooth pulley on the main wheel shaft.

- Control unit parts list: foam plastic sheet (component mounting), Raspberry Pi 3 Model B+, 12 V to 5 V step-down power converter.

- Drawings: Mobile Robot, Control Unit, and Transmission Method; drawn by Nguyễn Hòa Lộc, supervised by ME Tưởng Phước Thọ, HCMC University of Technology and Education, Faculty of High Quality Training.
