
design and implementation of a firefighting robot


DOCUMENT INFORMATION

Basic information

Title: Design and Implementation of a Firefighting Robot
Author: Lê Quốc Thanh
Advisor: Assoc. Prof. Trương Ngọc Sơn
Institution: Ho Chi Minh City University of Technology and Education
Major: Computer Engineering Technology
Document type: Graduation Project
Year: 2023
City: Ho Chi Minh City
Format:
Pages: 62
Size: 5.63 MB

Structure

  • Chapter 1: INTRODUCTION
    • 1.1. Introduction
    • 1.2. Designing a Firefighting Robot with Raspberry Pi and Arduino Integration
    • 1.3. Objective
    • 1.4. Research methods applied to the robot and scope of robot research
      • 1.4.1. Research methods applied to the robot
      • 1.4.2. Scope of robot research
    • 1.5. Research contents
    • 1.6. Outline
  • Chapter 2: LITERATURE REVIEW
    • 2.1. AI Technology
    • 2.2. Predictive tracking of an object
      • 2.2.1. Introduction
      • 2.2.2. Pan–tilt control system
      • 2.2.3. Finding the Object's Real-time Position
    • 2.3. The Overview of Firefighting Robot
  • Chapter 3: DESIGN AND MANUFACTURE OF FIREFIGHTING ROBOT
    • 3.1. Main goals and technical objectives of firefighting robots
    • 3.2. Block diagram of firefighting robot
    • 3.3. System design and layout of firefighting robot
      • 3.3.1. Fire detection system
        • 3.3.1.1. Hardware system design for fire detection
        • 3.3.1.2. Software system design for fire detection
      • 3.3.2. Pan-tilt control for fire detection tracking
        • 3.3.2.1. Hardware system design for pan-tilt control
        • 3.3.2.2. Software system design for object tracking using pan-tilt
  • Chapter 4: EXPERIMENT RESULTS AND DISCUSSION
    • 4.1. System operation and results
      • 4.1.1. Fire object detection
      • 4.1.2. Pan and tilt controls
    • 4.2. Review and evaluation
  • Chapter 5: CONCLUSION AND FUTURE WORKS
    • 5.1. Conclusion
      • 5.1.1. Advantages
      • 5.1.2. Disadvantages
    • 5.2. Future works

Content

Project title: Design and Implementation of a Firefighting Robot. THE SOCIALIST REPUBLIC OF VIETNAM, Independence – Freedom – Happiness. Ho Chi Minh City, June 26, 2023. ADVISOR'S EVALUATION

INTRODUCTION

Introduction

As electrical engineers, it is our responsibility to design and build an embedded system that can automatically detect and extinguish fires; such a system also helps reduce air pollution. According to statistics from the Ministry of Public Security, from 2001 to 2022 Vietnam recorded nearly 60,000 fires and explosions, killing 1,910 people, injuring more than 4,400, and causing great financial damage. Many house fires start when someone is asleep or away from home. With a device of this kind, people and property can be saved with relatively little damage caused by fire. Today, robotic firefighting is among the best approaches to this problem.

Following in the footsteps of leading developers such as HOPE Technik and Fire Robot Technology, we will create a small automatic firefighting robot that can navigate at high speed, track objects, find the burning object, and extinguish it with a direct high-pressure water jet. The robot is designed to search for interior fires within the small floor plan of a house of a specific size, extinguish the fire with water, and then return to its original position. The fire detection models put into use are relatively error-free and do not overreact in non-fire simulations.

Artificial intelligence (AI) is becoming increasingly popular and affects many aspects of daily life. Computer vision (CV) is a branch of AI concerned with the collection, processing, analysis, and recognition of digital images. YOLO is one of the most stable vision models available today, offering flexibility and ease of deployment on embedded systems. Together with the precision of automatic control devices, this makes the project feasible. We therefore chose the topic "Firefighting Robot".

Designing a Firefighting Robot with Raspberry Pi and Arduino Integration

The research objective of the thesis is the design and implementation of a firefighting robot with the following functions:

• Use a Raspberry Pi 4 to process images, calculate object positions, and generate coordinate signals transmitted over the UART communication standard

• Use the Arducam Fisheye Low Light USB Camera to recognize fire

• Use an Arduino Nano to control the pan and tilt through TMC2209 driver modules

• Use two TMC2209 driver modules to control the pan angle and the tilt angle

• Track and verify progress using the computer

• Calculate and design optimal hardware models

• Assess and refine the model

The expected result is a robot capable of detecting sudden fires and promptly extinguishing them.

Objective

It is crucial to analyze and assess the accuracy, processing speed, and performance of fire-recognition models on embedded systems.

Evaluate and analyze system functions to select appropriate hardware that delivers high performance and efficiency.

Research methods applied to robot and scope of robot research

1.4.1 Research methods applied to the robot

To gain a deeper grasp of the problem's implementation, we researched the following topics, which made problem-solving easier:

• The Raspberry Pi 4 Model B has gained significant popularity among developers. Engineered as a compact yet powerful embedded computer, it serves a myriad of purposes, making it ideal for learning and testing. Its versatility and high compatibility enable seamless deployment of AI applications and control of other hardware

• Arducam Fisheye Low Light USB Camera, 2MP 1080P, IMX291 sensor

• Arduino Nano CNC Shield V4 module

• Neural network: YOLO (You Only Look Once), a convolutional neural network (CNN) model used for efficient object detection and identification

1.4.2 Scope of robot research

Given financial constraints and our existing capacity and knowledge, the topic can only be applied within the following scope:

• Due to limited knowledge, restricted access to equipment, and various other objective challenges, we were only able to progress as far as simulation

• The Raspberry Pi-based firefighting robot in this project uses a camera with a field of view of about 120 degrees

• The robot operates similarly to a turret, using gears and spindles for both pan and tilt movements; this configuration ensures precise angles and a comprehensive sweep capability

• The simulation is done indoors without bright lights.

Research contents

During the implementation of the graduation project, we identified and resolved the following issues:

• Content 1: Conduct a comprehensive analysis of the project's challenges, identifying potential obstacles and devising strategies for overcoming them

• Content 2: Delve into the technical specifications, guiding principles, and theoretical foundations of the hardware components, ensuring a solid understanding before implementation

• Content 3: Present a well-defined model proposal and an overall system summary; develop detailed block diagrams and schematic diagrams to visually communicate the system's architecture

• Content 4: Implement robust data preprocessing techniques, including thorough data cleaning and the generation of datasets for effective object detection

• Content 5: Explore system configuration and design hardware solutions that align with the proposed model, emphasizing efficiency and scalability

• Content 6: Execute a comprehensive test run, checking, evaluating, and making the adjustments needed to optimize system performance

• Content 7: Write a detailed report documenting the project lifecycle, methodologies, findings, and insights gained throughout the process

• Content 8: Prepare for the thesis defense by rehearsing and refining the presentation, ensuring a clear and compelling demonstration of the project's objectives, methodologies, and outcomes

Outline

We have made a concerted effort to express the content in an accessible manner, ensuring ease of understanding even for non-specialists. The material is organized across five chapters:

• Chapter 1: INTRODUCTION provides an overview of current research on AI applications in robotics, outlining the purpose, object, and scope of the study

• Chapter 2: LITERATURE REVIEW covers the background knowledge of AI technology, methods for calculating frame coordinates, and other techniques relevant to the project

• Chapter 3: DESIGN AND IMPLEMENTATION details system requirements, block diagrams, hardware design, and algorithm diagrams

• Chapter 4: EXPERIMENT RESULTS AND DISCUSSION showcases the results of hardware and software development, along with performance evaluations

• Chapter 5: CONCLUSION AND FUTURE WORKS summarizes the project's conclusions, highlighting its advantages, disadvantages, and implementation mistakes, and providing directions for future development.

LITERATURE REVIEW

AI Technology

YOLO (You Only Look Once) [3] is a CNN model used to detect and identify objects. Before YOLO, there were many ideas and algorithms for object detection, from image-processing-based to deep-learning-based approaches; among them, deep-learning-based methods give outstandingly good results. Most object detection models at that time were built with a two-stage architecture.

Since YOLO is an object detection algorithm, the model's objective goes beyond simply predicting labels, as in classification problems, to include locating the objects. Consequently, rather than assigning a single label to an image, YOLO can identify multiple objects with different labels in one picture.

YOLO has seen many versions so far, such as v1, v2, v3, v4, and v5. Each generation has upgraded classification, optimized real-time label recognition, and extended prediction limits for frames.

YOLO features a relatively simple architecture comprising a base network and additional layers. The base network, typically Darknet, is a convolutional network responsible for extracting image features. The extra layers, situated at the end, analyze these features and perform object detection.

Figure 2.1. YOLO network architecture diagram

In the Darknet architecture, the base network component serves as a feature extraction module. This component generates a 7×7×1024 feature map, which acts as input for the extra layers responsible for predicting the label and bounding box coordinates of the object.

Predictive tracking of an object

Pan–tilt camera systems, vital for 3D visual tracking in applications such as autonomous vehicles and robots [2, 5, 20, 27], face challenges in real-time data processing. Research primarily aims at speeding up these processes using efficient algorithms, parallelization [24], or faster hardware. The separate consideration of camera servo-positioning focuses on keeping the object within the image, with control accuracy not being the main research emphasis.

This section explains the two primary components of the proposed contribution: the controlled pan-tilt system and the applied control strategy.

2.2.3 Finding the Object's Real-time Position

The main task of the visual stepper motor system is to track the object of interest, denoted G, keeping its image near the center of the camera's image plane. This alignment is achieved by making the relative position of the object's image coincide with the optical axis of the camera.

Figure 2.2. Method for pan-tilt camera calibration

The mechanism of this process uses two stepper motors to control pan and tilt, adjusting the camera's direction to align with the center of the image. To achieve this, it is essential to obtain the coordinates of the detected object, in this case fire. After acquiring the object's position coordinates (x, y), which are referenced to the image coordinate frame, together with the bounding box dimensions (width_object and height_object) and the frame size (width and height), either predetermined or obtained automatically with the shape() function, the following calculations are initiated:

The center position of the object is computed by subtracting half of the object's width from its x-coordinate, and similarly for the y-coordinate.

Using this result, determine the distance from the current position to the center of the object along each axis, and create a valid region with a width of 220 for x and 195 for y, as calculated in the distance-finding operations (2.3) and (2.4).

The distance target_distance (2.5) is calculated to check whether the object's center lies within the desired region. If it does not, further examination and adjustments are made; if it does, the object can be locked, initiating the process of suppressing and extinguishing the fire.

The presented model is a straightforward method for remote object tracking and yields relatively accurate results. However, it is advisable to limit its application to demonstrations and simulations.
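The centering test described above can be sketched in a few lines. This is an illustrative reconstruction, not the thesis's actual code: the frame size, the interpretation of (x, y) as the box corner the text subtracts from, and the assumption that the 220 × 195 valid region is centered on the frame are all assumptions made here.

```python
FRAME_W, FRAME_H = 1280, 720      # assumed frame size; the thesis reads it via shape()
REGION_W, REGION_H = 220, 195     # valid-region dimensions from Section 2.2.3

def object_center(x, y, width_object, height_object):
    """Center of the detected bounding box, per the text's description
    (half the box width/height subtracted from the reported coordinate)."""
    return x - width_object / 2, y - height_object / 2

def offsets_from_frame_center(cx, cy):
    """Signed distances from the box center to the frame center (cf. 2.3, 2.4)."""
    return cx - FRAME_W / 2, cy - FRAME_H / 2

def target_locked(cx, cy):
    """Cf. (2.5): lock when the object center lies inside the valid region."""
    dx, dy = offsets_from_frame_center(cx, cy)
    return abs(dx) <= REGION_W / 2 and abs(dy) <= REGION_H / 2

# Example: a box whose center ends up 60 px right and 30 px below frame center
cx, cy = object_center(740, 430, 80, 80)
print(offsets_from_frame_center(cx, cy))   # (60.0, 30.0)
print(target_locked(cx, cy))               # True: inside the 220 x 195 region
```

When `target_locked` returns False, the offsets would be fed back to the pan and tilt motors to re-center the camera; when it returns True, the locking and suppression phase begins.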

The Overview of Firefighting Robot

Firefighting robots come in many different versions and forms, serving many different goals. The firefighting robot we are aiming for is versatile and intelligent enough to serve in human living environments. The robot uses a camera as its vision system to monitor for fires and, when one is detected, to navigate to and extinguish it.

AI technology plays a crucial role in supporting object-tracking robot systems Developers working on fire identification utilize extensive data from image recognition systems, employing machine learning and neural networks to create accurate fire detection systems

We begin with a sizable dataset containing images with and without fire. We then use machine learning to train the system to recognize these objects, and finally implement this knowledge in the robot.

DESIGN AND MANUFACTURE OF FIREFIGHTING ROBOT

Main goals and technical objectives of firefighting robots

The project aspires to achieve its initial goals within the stipulated timeframe. Our focus is on crafting a miniature robot designed to detect and extinguish fires, providing invaluable insights and lessons throughout the project's journey.

The envisioned system must meet stringent technical requirements, embodying compactness, flexibility, and precise responsiveness. Operating across diverse environments, the robot aims to excel in complex visual scenarios, reflecting a high level of adaptability. This venture seeks not only to detect fires but also to actively suppress them. The robot's compact design and flexibility are pivotal for effective navigation and response in varied settings, ensuring swift and accurate reactions in visually challenging situations.

The project serves as a platform for exploring cutting-edge technologies, aiming to create a robot that showcases technical prowess while addressing real-world challenges practically. The ultimate goal is to set new standards in firefighting capability, emphasizing miniaturization, adaptability, and sophisticated image processing in complex scenarios.

Block diagram of firefighting robot

The diagram [3.1] illustrates the entire operational system of the project; each block represents a processing unit executing a specific task, enabling smooth coordination and synchronization among them.

The internal processing system within the robot operates as a closed loop, enabling the robot to carry out tasks independently. The interconnected blocks play a crucial role in coordinating activities throughout its operation. The workflow begins with the image capture unit and proceeds to the image processing segment, which uses a specific AI model, YOLO (You Only Look Once). This model exhibits relatively high accuracy in image processing and object detection compared to other models. After analyzing and calculating the object's position within the frame, the robot sends coordinate signals to the control analysis unit, which analyzes the object's position and makes decisions based on the received coordinates. The control block then executes the corresponding actions: precise navigation to bring the object into the target area, locking onto the object within the field of view, triggering alerts, and preparing to suppress the fire outbreak.
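One pass of the closed loop described above can be sketched as follows. The detector and the link to the control unit are stubbed out, and all names (`detect_fire`, `compute_command`, the `SEARCH`/`TRACK` commands, the frame size) are illustrative assumptions, not the thesis's actual implementation.

```python
def detect_fire(frame):
    """Stub for the YOLO inference step: returns (x, y, w, h) or None."""
    return frame.get("fire")          # a real system would run the model here

def compute_command(box, frame_w=1280, frame_h=720):
    """Turn a detection into pan/tilt error signals for the control unit,
    taking (x, y) as the box's top-left corner for this sketch."""
    x, y, w, h = box
    dx = (x + w / 2) - frame_w / 2    # horizontal error -> pan correction
    dy = (y + h / 2) - frame_h / 2    # vertical error   -> tilt correction
    return dx, dy

def step(frame, send):
    """One pass of the loop: capture -> detect -> analyze -> command."""
    box = detect_fire(frame)
    if box is None:
        send("SEARCH")                # no fire: keep scanning
    else:
        dx, dy = compute_command(box)
        send(f"TRACK {dx:.0f} {dy:.0f}")

sent = []
step({"fire": (600, 320, 80, 80)}, sent.append)   # box centered on the frame
step({}, sent.append)                             # no detection
print(sent)   # ['TRACK 0 0', 'SEARCH']
```

In the real robot, `send` would write the command over the UART link to the motor control unit, closing the loop each frame.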

System design and layout of Firefighting robot

The fire detection system is the core functionality of the robot, acting as its optical brain where complex analyses and calculations take place, leading to final decisions. The selection of suitable hardware is therefore crucial, as it directly impacts the efficiency and outcomes of the entire robot's operation. Hardware and software compatibility plays a significant role in ensuring smooth functioning, given the intricate environment the robot operates in; the synergy between the two is critical for the robot to effectively analyze, decide, and act in response to fire incidents.

3.3.1.1 Hardware system design for fire detection

Choosing the right hardware requires careful thought to meet the requirements of this intricate AI processing segment. Selecting hardware devices takes precedence over other issues: devices that are small, strong in image processing, and support camera integration are necessary. Given their small size and computational power, embedded computers are a good option in this situation. Selecting hardware that is both cost-effective and capable of supporting a range of AI problem-solving tasks is imperative; consequently, the group chose devices that meet these requirements, guaranteeing effective incorporation of AI processing models into the firefighting robot system.

The control system, built around the CNC Shield V4 module and additional small circuit boards, is designed for smooth and precise operation. The CNC Shield V4 module forms the core of the numerical motion control, providing advanced features for precise movement. Small circuit boards are integrated strategically to enhance functionality; these boards serve as modular extensions, offering specialized functions such as sensor interfacing and feedback control. The collaboration between the CNC Shield V4 module and these small boards ensures comprehensive, smooth control of operations.

a) Raspberry Pi 4

The system needs an AI processing and computing platform that is customizable, easily accessed, and flexible to use, so the Raspberry Pi 4 is a suitable choice. Although it is not as powerful for AI as the Jetson Nano, the Raspberry Pi is easy to monitor, handles control of other embedded devices well, and costs much less, with broad compatibility and a large open-source community. For the current requirements, the Raspberry Pi 4 adequately meets the output requirements for image processing and calculation. Installing and configuring packages and libraries is also quite easy, since the board is a development toolkit.

The Raspberry Pi 4 Model B is a small yet powerful single-board computer developed by the Raspberry Pi Foundation. Positioned as a significant upgrade from its predecessors, it introduces substantial improvements in performance and features. Equipped with the Broadcom BCM2711 64-bit quad-core Cortex-A72 processor, it delivers robust multitasking capabilities. Support for up to 4 GB or 8 GB of RAM enhances data processing, making it suitable for handling complex AI models such as YOLO for image analysis, processing, and solving intricate mathematical problems.

Additionally, the board features integrated wireless connectivity (dual-band 802.11b/g/n/ac Wi-Fi) and Bluetooth 5.0, providing high-speed internet access for seamless communication and data retrieval. The design incorporates a 7-inch Waveshare display with a resolution of 1024×600, ensuring clear image display visible at a distance of 3 m under suitable lighting conditions. USB 3.0 support enhances compatibility with various connected devices, and the inclusion of a CSI camera interface allows the use of HD CSI cameras.

Furthermore, the Raspberry Pi 4 Model B includes a range of communication interfaces (SPI, I2C, LCD, UART, PWM, SDIO) and enables UART communication through GPIO, catering to the requirements of the current design.
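As a concrete sketch of the UART link between the Raspberry Pi and the Arduino Nano, coordinates can be framed as a simple ASCII packet. The `<dx,dy>` message layout below is an assumption for illustration; the thesis does not specify its exact protocol.

```python
# Hypothetical framing for the pan/tilt offsets sent over UART.
def encode_packet(dx, dy):
    """Frame the two signed offsets as an ASCII line the Arduino can parse."""
    return f"<{dx},{dy}>\n".encode("ascii")

def decode_packet(raw):
    """Inverse of encode_packet; returns (dx, dy) or None if malformed."""
    text = raw.decode("ascii").strip()
    if not (text.startswith("<") and text.endswith(">")):
        return None
    try:
        dx, dy = text[1:-1].split(",")
        return int(dx), int(dy)
    except ValueError:
        return None

pkt = encode_packet(60, -30)
print(pkt)                 # b'<60,-30>\n'
print(decode_packet(pkt))  # (60, -30)
```

On the Pi side such a payload would typically be written through pySerial, e.g. `serial.Serial("/dev/serial0", 115200).write(pkt)`, with the Arduino parsing the same delimiters on its end.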

A circuit diagram of the Raspberry Pi 4 illustrates the connections and interactions between its major parts. It offers a thorough overview of the connections and functionality of components including the CPU, RAM, ports, and GPIO (General Purpose Input/Output) pins. Such diagrams are useful for projects and customizations, helping developers, electronics engineers, and Raspberry Pi users understand the internal workings of the board and how its components interact. In summary, the diagram shows the memory components, including RAM and storage (microSD card); describes the location and connections of the main processor, typically a Broadcom BCM2711; displays the USB, Ethernet, HDMI, audio, and other ports; and represents the GPIO pins.

The Raspberry Pi 4 satisfies all current specifications, operates efficiently, and can be used in the firefighting robot. Selecting the Raspberry Pi is a reasonable option for the current robot development model, especially given its cost.

b) Camera

A camera that can capture images is necessary for the operation of a fire detection and recognition system, and choosing the appropriate camera is essential for best results. It needs a broad field of view of about 120 degrees and sharp images within a range of 10 to 15 meters. After careful consideration, the Arducam Fisheye Low Light USB Camera was selected. With its 2MP resolution and 1080P image quality, this USB camera, made especially for computers, offers crisp and detailed images.

Figure 3.5. Arducam Fisheye Low Light USB Camera

This product integrates the IMX291 image sensor and a wide-angle lens, creating a broad field of view and a fisheye shooting mode that captures a wealth of information in a single frame. Notably, its ability to capture images in low-light conditions is a significant advantage, ensuring high-quality pictures in challenging lighting environments. With the integrated microphone, you can record audio and video simultaneously, making it a versatile choice for video calls, online conferences, and remote teaching.

The Arducam Fisheye Low Light USB Camera also supports the H.264 image transmission protocol and integrates UVC (USB Video Class) technology, making it easy to connect and use across many operating systems and applications on computers and embedded computers worldwide.

c) Controlling Pulses for the Stepper Motor

Stepper motors are very popular in many fields. To choose the right motor, we pay attention to its torque (M); different motor lines produce different values of M.

Stepper motors operate through electronic switches that send command signals to the stator in a specific sequence and at a specific frequency. The total rotation angle of the rotor corresponds to the number of switching events, while the rotation direction and speed of the rotor depend on the switching sequence and frequency.

For example, for the same NEMA 17 motor with a 1.8° step angle:

  • Lmax = 26 mm gives M = 1.7 kg·cm = 0.17 N·m
  • Lmax = 60 mm gives M = 8.2 kg·cm = 0.82 N·m

From there we get the step angle parameter α ∈ {0.9°, 1.2°, 1.8°}

The next parameters we need are the tooth pitch λ (mm) and the number of teeth R. From the parameters α, λ, R, and m, we can calculate the resolution B (steps/mm):
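The thesis does not print the equation itself, so the following is a reconstruction using the standard belt-drive relation, B = (360/α)·m / (λ·R), where α is the step angle, m an assumed microstep factor, λ the tooth pitch in mm, and R the pulley tooth count. The GT2 belt and 20-tooth pulley in the example are illustrative assumptions.

```python
def steps_per_mm(alpha_deg, microsteps, pitch_mm, teeth):
    """B (steps/mm) from step angle, microstepping, tooth pitch and tooth count."""
    full_steps_per_rev = 360.0 / alpha_deg   # e.g. 200 for a 1.8 degree motor
    travel_per_rev = pitch_mm * teeth        # belt advance per motor revolution
    return full_steps_per_rev * microsteps / travel_per_rev

# NEMA 17, 1.8 degree step angle, 1/16 microstepping (a common TMC2209 setting),
# GT2 belt (2 mm pitch) on a 20-tooth pulley:
print(steps_per_mm(1.8, 16, 2.0, 20))   # 80.0
```

Halving the step angle to 0.9° doubles B for the same belt and pulley, which is why the step angle parameter α matters for positioning resolution.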

2.4.3 UART (Universal Asynchronous Receiver/Transmitter)

UART (Universal Asynchronous Receiver-Transmitter) is a communication protocol that uses asynchronous serial communication with a configurable speed. Simple and popular, it consists of two independent data lines: TX (transmit) and RX (receive).

EXPERIMENT RESULTS AND DISCUSSION

System Operation and results

The team designed the system according to the earlier plans, including a 3D-printed plastic structure and tightly assembled axle rods. While some details needed tweaking to improve performance, the team paid attention to ruggedness. The hardware devices were selected and connected reliably, and the rotation-axis gears were chosen thoughtfully to ensure that the robot can operate robustly in all conditions.

Figure 3.35. Firefighting robot

From the time and effort invested in the project, we have created a powerful and flexible firefighting robot. The 3D-printed plastic structure and optimized joints ensure sturdiness and stability during operation. Although minor adjustments were needed to fix mismatched details, we were able to make them quickly. Hardware components, from the turning gears to the motor control modules, were carefully selected to ensure the best stability and performance. The robot uses a camera to monitor the environment and, if a fire is detected, can automatically identify and extinguish the hazard. With 360-degree rotation and a high-pressure water nozzle, the firefighting robot can handle emergency situations safely and efficiently. This project is not only a research initiative but also an important step forward in the application of robotics to firefighting and public safety.

Figure 3.36. Results of the robot's operation

The operational process of the robot, observed from the first-person perspective of the front-mounted camera, illustrates swift and precise detection and suppression of a fire outbreak at any location within the camera's scanning range. The red circle marks the permitted object-locking zone. If the target falls within the red circle, the system initiates the target-locking procedure, the circle turns green, and the nozzle begins deploying firefighting measures. Upon completion of the firefighting operation, the water spray is halted and the robot returns to its default state, maintaining vigilant monitoring until further instructions.

4.1.1 Fire object detection

The system uses the YOLOv8 model as the central AI model for detecting fires. This model is quite powerful, fast, and accurate, and the team tried it in many different test cases with quite high accuracy. However, fires with heavy smoke blurring the camera image make it more difficult to detect the location of the fire, making the YOLOv8 model less effective. To mitigate this, the team augmented the training set with harder-to-distinguish images, which improved the results somewhat. In short, the YOLOv8 model still performs somewhat poorly in practical application on small embedded computers, but it is enough to meet the basic requirements so that the robot can complete its task.
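The post-processing after inference, keeping only confident "fire" detections and tracking the strongest one, can be sketched as below. Detections are modeled here as plain `(label, confidence, box)` tuples, and the 0.5 threshold is an assumption; in the real system these values would come from the YOLOv8 model's output.

```python
CONF_THRESHOLD = 0.5   # assumed threshold; the thesis does not state one

def best_fire_box(detections):
    """Return the box of the highest-confidence fire detection, or None."""
    fires = [d for d in detections
             if d[0] == "fire" and d[1] >= CONF_THRESHOLD]
    if not fires:
        return None                      # e.g. smoke blurring the flame
    return max(fires, key=lambda d: d[1])[2]

dets = [("fire", 0.91, (600, 320, 80, 80)),
        ("fire", 0.42, (100, 100, 30, 30)),   # low confidence: discarded
        ("person", 0.88, (50, 200, 60, 120))]
print(best_fire_box(dets))   # (600, 320, 80, 80)
print(best_fire_box([]))     # None
```

Returning `None` when no detection clears the threshold is what lets the robot fall back to its scanning state instead of tracking noise.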

Figure 3.37. Results of YOLOv8's training process

The results indicate that YOLOv8 is stable with various weights, achieving relatively acceptable precision. The training and detection times are good, and the error rate is reasonably low. The model appears applicable across various test scenarios.

The firefighting robot's motion control system is based on the navigation principle of turrets and surveillance cameras, allowing the robot to aim its nozzle in all directions. Connecting motion through gear ratios creates smooth movements and more comprehensive viewing positions. Using stepper motors and stepper motor driver circuits is an advantage because it improves the accuracy of rotation angles and aiming positions. However, the implementation is quite complicated and caused many problems during development, yet the resulting robot is quite stable. The robot's pan and tilt system is an essential, modern mechanism applied in many robots today.

Figure 3.38. Results of moving pan and tilt

The pan moves 15 degrees, and the tilt moves from 35 degrees to 45 degrees. The results show a smooth and stable movement process, while vibration is also significantly reduced.

Review and evaluation

Robots have high applicability in practice, especially as the need to protect people and property from fires increases and becomes a concern of many large cities. Firefighting robots that address human safety issues, with flexible features for all situations, are therefore worth trusting to protect our lives. Firefighting robots are also a new-generation technology product that exploits the advantages of AI in today's robots and provides a foundation for the further development of other firefighting robots.

Although the robot applies many modern technologies and good software, the low budget, limited hardware, and the developers' limited experience mean the product is still not fully optimized. Nevertheless, the firefighting robot can be viewed as an experimental product that provides valuable experience for the developers.

CONCLUSION AND FUTURE WORKS

Conclusion

The Fire Fighting Robot project has proven successful, overcoming numerous obstacles and accomplishing its objectives. While some limitations remain, particularly regarding location and environment adaptation, the robot has demonstrated that it can fulfill the fundamental needs of a firefighting robot. The project represents a major investment of time and effort that yields important experience and observable results for future robotics and fire safety projects.

The Fire-Fighting Robot is quite flexible in movement and actively searches for fires. Thanks to the application of AI to recognize and detect fires, the robot can easily distinguish fires of different sizes, and extinguishing is also quite quick thanks to the pan and tilt mechanism.

The Fire-Fighting Robot is made mainly of plastic, and the hardware is quite bulky and does not provide the best performance. The hardware has many handmade details, so its accuracy and aesthetics are limited. Insufficient investment led to hardware that is not fully optimized.

Future works

Robots have demonstrated their breakthrough and progress in the task of extinguishing fires. However, several aspects should be considered as directions for the future. One promising research direction is integrating new technology, especially advances in artificial intelligence and machine learning, which can improve the robot's recognition and interaction capabilities and make it more flexible and efficient across diverse tasks. In addition, expanding the robot's features and applications to meet needs in different fields, including risk management and security, is an important development direction. Interoperability and safety are also key points: further research on safety measures is needed so that robots can operate safely in diverse environments and interact effectively with humans and other devices. These research directions can open up new opportunities and shape the future of robotics in both research and application.

