
Graduation project: Applying image processing and an industrial Nachi robot arm to classify products by shape


DOCUMENT INFORMATION

Basic information

Title: Image Processing Application For Product Classification And Robot Control
Authors: Nguyen Truong Thinh, Ngo Trieu Vy
Supervisor: Assoc. Prof. Dr. Le My Ha
University: HCMC University of Technology and Education
Major: Automation and Control Engineering Technology
Document type: Graduation Project
Year: 2024
City: Ho Chi Minh City
Format
Pages: 80
Size: 14.34 MB

Structure

  • CHAPTER 1: PROJECT OVERVIEW (17)
    • 1.1 Reason for this project (17)
    • 1.2 Application of robot hand with conveyor system (0)
      • 1.2.1 Introduction to conveyor system (0)
      • 1.2.2 Application of robot arm with conveyor system (18)
    • 1.3 Example about a Robotic Automatic Assembly System Based on Vision [3] (20)
    • 1.4 Objective, task and scope of the thesis (24)
      • 1.4.1 Objectives (24)
      • 1.4.2 Tasks (25)
      • 1.4.3 Scope of topic implementation (25)
  • CHAPTER 2: THEORIES (26)
    • 2.1 Algorithm Flowchart (26)
      • 2.1.1 Main Flowchart (26)
        • 2.1.1.1 Step 1 (27)
        • 2.1.1.2 Step 2 (27)
        • 2.1.1.3 Step 3 (29)
        • 2.1.1.4 Step 4 (29)
        • 2.1.1.5 Step 5 (29)
        • 2.1.1.6 Step 6 (29)
      • 2.2.4 Step 4 (0)
    • 2.3 Tracking Algorithm with Conveyor Belt (34)
      • 2.3.1 Harris corner algorithm [4] (0)
      • 2.3.2 Hough circle algorithm [5] (0)
      • 2.3.3 KLT algorithm (Kanade Lucas Tomasi feature tracker) (36)
    • 2.4 Determining Object Center Coordinate (37)
      • 2.4.1 The center of the square object (37)
      • 2.4.2 The center of the circular object (38)
  • CHAPTER 3: SYSTEM DESIGN (39)
    • 3.1 Robot classification (39)
      • 3.1.1 Delta robot (39)
      • 3.1.2 Scara Robot (40)
      • 3.1.3 Serial Robot (41)
    • 3.2 End – Effector (43)
      • 3.2.1 Gripper (43)
      • 3.2.2 Vacuum Gripper (43)
      • 3.2.3 Magnetic grippers (44)
    • 3.3 Camera (45)
      • 3.3.1 Stereo camera (45)
      • 3.3.2 Time-of-Flight camera (ToF camera) (46)
      • 3.3.3 Structure light camera (47)
      • 3.3.4 LIDAR (48)
      • 3.3.5 Phone camera (50)
    • 3.4 PLC and CFD controller (50)
    • 3.5 Overview of Robot Nachi MZ07 (52)
      • 3.5.1 Forward kinematic analysis (54)
      • 3.5.2 Inverse kinematic analysis (58)
      • 3.5.3 Adjust the parameters of the initial tool (61)
      • 3.5.4 Robot motion options (62)
    • 3.6 Design of gripper end-effector (63)
      • 3.6.1 Calculation of gripping force (63)
      • 3.6.2 Fixture for the gripping tool head (66)
    • 3.7 Robot Nachi communication (68)
      • 3.7.1 Introduction to TCP/IP Communication Method with Nachi Robot (68)
      • 3.7.2 Introduction to Robot Communication and Control Program (70)
  • CHAPTER 4: EXPERIMENTAL RESULT (72)
    • 4.1 Result (72)
      • 4.1.1 Overview (72)
      • 4.1.2 Connection diagram (73)
      • 4.1.3 Image processing (0)
      • 4.4.3 Camera calibration (75)
      • 4.1.4 Experimental Results and Evaluation (0)
    • 4.2 Link to experimental video (76)
  • CHAPTER 5: CONCLUSION (77)
    • 5.1 Conclusion (0)
    • 5.2 The limitations of the thesis (0)
    • 5.3 The directions for future research (0)

Content

This graduation project was defended in Semester 1, 2024 at Ho Chi Minh City University of Technology and Education, receiving a score of 8.2/10 from the defense committee of the Faculty of Electrical and Electronics Engineering.

PROJECT OVERVIEW

Reason for this project

In today's modernized industry, driven by the development of modern techniques, Industry 4.0 requires the production of large volumes of components and machinery, and the commodity production industry likewise needs modernization. Classification is an essential step for saving time, effort, and money, so manipulators serving this need have been widely developed in our country.

In traditional manufacturing assembly lines, conveyors are widely used to move components from one workstation to another. The conveyor must stop at each workstation for a specific amount of time and wait until the assembly operation is complete before it can move the components to the next workstation. This conveyor downtime accumulates and lengthens the entire assembly process, decreasing overall production efficiency, especially in automated assembly lines. To eliminate this drawback, the concept of conveyor tracking was developed to ensure a consistent assembly-line workflow by performing the assembly process while the conveyor belt is still running [Che97].

In a conveyor tracking system, the conveyor not only serves the purpose of material handling but also works in conjunction with the automation process. Furthermore, a conveyor-tracking robot system can provide the robot controller with the continuously changing positions of components on the belt through a vision system [Pap93]. This thesis presents the use of the Nachi MZ07 robot of the serial type.

Conveyor systems are an important technology today and are widely used in many different industries. In general, they have become an indispensable part of many modern industries, helping to optimize moving and transportation processes, increase efficiency, and minimize labor costs. Technologies and designs are constantly being improved to meet the increasing demands of modern manufacturing and logistics.

Figure 1.1 Conveyor belts used in assembly lines

1.2.2 Application of robot arm with conveyor system

Manipulators (industrial robots) combined with conveyor systems are a powerful combination in industry and have many useful applications. Below are some examples of applications of manipulators with conveyor systems:

Figure 1.2 Manipulator in computer case assembly line

Figure 1.3 Robot sorts trash for recycling

Overall, the combination of manipulators and conveyor systems brings many benefits.

1.3 Example about a Robotic Automatic Assembly System Based on Vision [3]

The scientific article below presents the capabilities of robots in automatic phone assembly lines, the precise assembly they can achieve, and the importance of maximizing the use of robots to save costs instead of hiring labor. The article presents the process of assembling phone components.

Figure 1.4 Phone components

It covers methods ranging from tracking on the conveyor to non-tracking, providing comparisons and evaluations to choose the optimal method. It also mentions techniques such as camera calibration and the robot-conveyor or camera-robot-conveyor calibration systems.

In addition, the phone camera provides good input image quality, making it suitable for current image processing applications.

Figure 1.5 Assembly process of the article

The article shows that the assembly of a mobile phone mainly involves three parts: main board, screen, and back cover. The assembly system is mainly composed of a robot controller; the motion module is the conveyor, the vision module is the camera, and the assembly module is the manipulator, as illustrated above.

Figure 1.6 Process and steps of the assembly system

The above procedure describes camera calibration, component position detection, dynamic capture by the camera, and the template matching algorithm.

The author implemented fixed camera calibration using Zhang's calibration method, the 4-point calibration method for the conveyor-robot system, and the eye-in-hand camera calibration method to convert the camera coordinate system to the manipulator coordinate system. A 2D image processing algorithm is then used to extract component images, contours, rotation angles, and component coordinates. Combined with reading and processing the conveyor encoder signal, the system transmits signals to the robot to perform the picking task and finally assemble the part into an external fixed tray.

Figure 1.7 The calibration method used in the paper's system

Figure 1.8 The image processing method used in the article

Figure 1.9 Matching algorithm and assembly step of the article.

Finally, experimental results show that the maximum positioning error is 0.5 mm and 0.8 degrees. The robot can successfully grasp the part with a 100% success rate at different conveyor speeds, even when incomplete information about the target part is provided. The key technologies developed in the article will be applied to key equipment in automatic robot production lines. This will improve product quality, greatly increase the market competitiveness of target products, and open up a large market space.

Figure 1.10 Parameter table evaluates the results of the article

1.4 Objective, task and scope of the thesis

• Assemble conveyor belts to move objects.

• Design the camera position to track objects.

• Program and control the robot's position to pick up objects with high accuracy.

• Design an image processing program using the OpenCV library to improve the object detection algorithm according to the object's velocity and position, even when that velocity is unstable.

• Gain an overview of a manipulator system that works with cameras and conveyors.

• Learn an overview of object tracking and the related algorithms for finding the velocity and position of objects through the camera.

• Implement control algorithms between Robot-PC and Camera-PC.

• Experimentally test, evaluate errors, and propose improvement plans.

The topic applies the knowledge of robotics, electromechanics, and programming learned at school, combined with references to research and documents. The topic uses a phone camera, a robot arm, and a conveyor to perform a demo run that checks the algorithm's results, comments on the errors of the algorithm and mechanical structure, and finally proposes solutions that can be applied in the future.

Furthermore, the scope of the topic revolves around object classification based on shape, with additional improvements focused on enhancing speed during experimentation.

The topic is delimited by the number of object types, specifically focusing on the shapes of circles and squares.

The robotic arm is constrained to work on each object moving on the conveyor belt. The conveyor belt operates at a fixed speed.


THEORIES

Algorithm Flowchart

We will proceed through the following steps:

1. Initialize a socket connection between the computer and the robot controller, then initialize the variables that define that connection, including the program for robot movement and for activating I/O (details of the definitions and initialization are given in the appendix).

2. Perform the image processing.

3. Track circular and square objects.

4. Send the coordinates of the object's position to the robot for movement.

5. Check whether the robot has reached the object's position. If not, return to step 2 for image processing; if yes, proceed to the next step.

6. Activate the suction mechanism to lift the object, move it to the designated position, and release it there, concluding the process.

a) Camera in a fixed position:
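The control flow of the steps above can be sketched as a loop with injected callbacks, so it can be exercised without the robot or camera attached (all function names here are hypothetical stand-ins, not the thesis's actual interfaces):

```python
def run_cycle(get_object_pose, robot_at_target, send_pose, set_suction, move_to_drop):
    """One pick-and-place cycle following the steps above; callbacks are injected
    so the control flow can be tested without hardware."""
    while True:
        pose = get_object_pose()      # steps 2-3: image processing and tracking
        send_pose(pose)               # step 4: send target coordinates to the robot
        if robot_at_target(pose):     # step 5: has the robot reached the object?
            break                     # if not, the loop repeats from step 2
    set_suction(True)                 # step 6: pick up the object by suction
    move_to_drop()                    # carry it to the designated position
    set_suction(False)                # release it there
    return "done"

# Dry run with stand-in callbacks (no hardware); the object is "seen" at (1, 2)
log = []
run_cycle(
    get_object_pose=lambda: (1, 2),
    robot_at_target=lambda pose: True,
    send_pose=lambda pose: log.append(("send", pose)),
    set_suction=lambda on: log.append(("suction", on)),
    move_to_drop=lambda: log.append(("move",)),
)
print(log)
```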

Figure 2.15 Camera in fixed position

• At times, the robot arm may obstruct the object from the camera, causing issues with object recognition and tracking.

• A stationary camera typically has a limited field of view due to its fixed position. This can reduce the ability to achieve comprehensive observation and tracking of moving objects.

b) Camera attached to the robot arm:

Figure 2.16 Camera attached to the robot arm

• Observes the object more clearly.

• No need to recalibrate when moving the robot system.

• Slow performance: while the camera on the robotic arm can adjust its viewing angle, it may have difficulty providing as comprehensive a view as a stationary camera. This is a particular issue when a wide area must be observed.

• Not suitable for high-performance applications: the tracking of moving objects may be subpar in certain cases, due to factors such as the weight and size of the robotic arm.

The types of cameras used in the project vary in size and weight, making installation relatively complex. This complexity would pose difficulties in moving and adjusting the camera's angles. Therefore, the decision was made to keep the camera in a fixed position.

Figure 2.17 Fixed position for the camera

Using Matlab to process 15 input images, from which the transformation matrix KK is determined:

• This matrix is used to transform pixel coordinates to real-world coordinates.

The coordinates after completing the calibration process are depicted in Figure 2.18 below:

Figure 2.18 The result after training yields coordinates (X, Y, Z) in real space.

After obtaining the transformation matrix KK, we determine the camera's intrinsic and extrinsic parameters.

(2.2) After using Matlab, we get the value of the transformation matrix M_w:

We use the "Paint" software to determine the pixel coordinates in the input image for calibration.

We then take a first point with pixel coordinates (178, 171) and a second point with pixel coordinates (215, 170).

As a result, the distance obtained during calibration between the two points (178, 171) and (215, 170) equals 53.1621 mm.
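The pixel-to-millimeter conversion and distance check can be reproduced in outline. The 3x3 matrix below is a made-up uniform-scale example, not the thesis's calibrated matrix, so the resulting distance only roughly matches the reported 53.16 mm:

```python
import numpy as np

def pixel_to_world(M, u, v):
    """Map a pixel (u, v) to planar world coordinates through a 3x3 matrix M."""
    p = M @ np.array([u, v, 1.0])
    return p[:2] / p[2]  # perspective division

# Stand-in calibration: uniform scale of ~1.4368 mm per pixel, no rotation/shear
M = np.array([[1.4368, 0.0, 0.0],
              [0.0, 1.4368, 0.0],
              [0.0, 0.0, 1.0]])

p1 = pixel_to_world(M, 178, 171)
p2 = pixel_to_world(M, 215, 170)
dist_mm = float(np.linalg.norm(p1 - p2))
print(round(dist_mm, 2))  # about 53.18 mm with this stand-in matrix
```

With the real calibrated matrix (which also carries rotation and translation terms), the same two function calls would return the thesis's measured distance.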

Tracking Algorithm with Conveyor Belt

2.3.1 Harris corner algorithm [4]

The algorithm tells us that corners are regions in the image with significant intensity variation for a displacement (u, v) in all directions. This is expressed as:

E(u, v) = Σ_(x,y) w(x, y) [I(x + u, y + v) − I(x, y)]^2

We have to maximize this function E(u, v) for corner detection, which means maximizing the second term. Applying a Taylor expansion to the above equation and some mathematical steps (please refer to a standard textbook for the full derivation), we get the final equation:

E(u, v) ≈ [u v] M [u; v], where M = Σ_(x,y) w(x, y) [Ix·Ix, Ix·Iy; Ix·Iy, Iy·Iy]

Here, Ix and Iy are image derivatives in x and y directions respectively

(These can be easily found using cv.Sobel()).

Then comes the main part. After this, they created a score, basically an equation, which determines whether a window can contain a corner or not:

R = det(M) − k·(trace(M))^2, where det(M) = λ1·λ2 and trace(M) = λ1 + λ2

λ1 and λ2 are the eigenvalues of M.

So the magnitudes of these eigenvalues decide whether a region is a corner, an edge, or flat.

• When |R| is small, which happens when λ1 and λ2 are small, the region is flat.

• When R < 0, which happens when λ1 >> λ2 or vice versa, the region is an edge.

• When R is large, which happens when λ1 and λ2 are large and λ1 ≈ λ2, the region is a corner.

2.3.2 Hough circle algorithm [5]

In the line detection case, a line was defined by two parameters (r, θ). In the circle case, we need three parameters to define a circle:

(x − x_center)^2 + (y − y_center)^2 = r^2

where (x_center, y_center) defines the center position (the green point) and r is the radius, which allows us to completely define a circle.

For efficiency, OpenCV implements a detection method slightly trickier than the standard Hough transform: the Hough gradient method, which consists of two main stages. The first stage involves edge detection and finding possible circle centers; the second stage finds the best radius for each candidate center. For more details, please check the book Learning OpenCV or your favorite computer vision bibliography.

2.3.3 KLT algorithm (Kanade Lucas Tomasi feature tracker)

The Kanade-Lucas-Tomasi (KLT) tracking algorithm is based on identifying and tracking the motion of feature points in consecutive frames. Below is a more detailed description with the core algorithmic formulas:

Choose a set of feature points in the initial frame.

Use a gradient method (usually Sobel) to compute the partial derivatives with respect to x and y in the frame, generating gradient matrices Ix and Iy.

Compute the partial derivatives of the feature points over time. Let I_t be the derivative with respect to time.

These matrices are combined into a Jacobian matrix J, where:

J = [Ix(p_i)  Iy(p_i)] (2.7)

Here, p_i represents the position of feature point i.

The goal is to solve the linear system of equations J·ΔP = −I_t, where ΔP is a vector containing the changes in the positions of the feature points.

This equation can be solved using methods like the least squares method.

Based on the computed ΔP vector, update the positions of the feature points for the next frames:

p_i(k+1) = p_i(k) + Δp_i

Here, k is the index of the current frame.

Determining Object Center Coordinate

2.4.1 The center of the square object

center_x = (min_x + max_x) / 2
center_y = (min_y + max_y) / 2

(min_x, min_y) has pixel coordinates (0, 0), located at the origin of the region of interest.

Therefore, the center of the square object has coordinates (max_x / 2, max_y / 2).

2.4.2 The center of the circular object

Thanks to the Hough circle algorithm, we can find the coordinates of the center of the circular object using the OpenCV library.

In addition, the library also gives us the radius of that circle.

The Hough circle detection algorithm in OpenCV determines the centers and radii of circles in an image.

You need to convert the image to grayscale, optionally blur it, and then use the cv2.HoughCircles function to detect circles.

The result is an array containing the parameters of the detected circles. You can draw the circles and their centers on the image to display the results.

Adjust parameters such as minDist, param1, param2, minRadius, and maxRadius to optimize the outcome.

SYSTEM DESIGN

Robot classification

Delta robots are famous for their high speed and acceleration; their design makes them faster than other types of industrial robots. Most industrial robots have motors housed in the robot arms, making them suitable for heavy-duty applications; however, placing the motors in the arm adds weight and prevents fast speeds. With the Delta robot, all the motors are located in the main body above the work area, which keeps the main weight of the robot fixed. All movements come from the extremely light arms, giving low inertia and allowing extremely fast operating speeds and accelerations.

Figure 3.20 Delta Robot FANUC M-2ia

• It has the highest processing speed among robot mechanisms.

• Delta robots are ideal for assembly, distribution, pick-and-place, material handling, part converting, and packaging applications. Parts processed by Delta robots should be light and simple in shape, as these robots operate at high speeds. Heavy or complex parts are not suited to the speed and acceleration of the Delta robot.

The SCARA robot is a manipulator with four degrees of freedom. This type of robot was developed to improve speed and repeatability in tasks such as pick-and-place, moving objects from one location to another.

• In general, SCARA robots can operate at high speeds and are commonly used for pick-and-place or assembly operations where high speed and accuracy are required. Existing SCARA robots can achieve accuracy below 10 microns (0.01 mm), compared to about 20 microns for a six-axis robot.

• The compact layout of the SCARA robot also makes it easy to move for installation in temporary work environments or to take away: by design, the SCARA robot is suitable for applications with a small operating range or limited space.

• Due to their configuration, SCARA robots are typically only capable of carrying light loads, usually up to 2 kg nominal (10 kg maximum).

• The SCARA robot's workspace is typically circular, which is not suitable for all applications, and the robot has limited flexibility and functionality compared to the full 3D capabilities of other robot types (e.g., a six-axis robot).

The serial robot is the most common type of robot in heavy industry because of its flexibility in space, thanks to its 6-degree-of-freedom structure, and its heavy load capacity.

• Ability to work in 3D space: serial robots can perform multi-dimensional movements, allowing them to work in 3D space and meet many different application needs.

• Heavy load carrying capacity: serial robots typically have a heavy load capacity, allowing them to handle large and heavy items in industrial applications.

• High precision: serial robots can perform tasks that require high precision, such as assembly and quality inspection.

• Simple control: due to their serial structure, serial robots are often easy to control and program, saving time and costs during deployment.

• Limited reach: because of its joints, a serial robot may have difficulty reaching objects that are far away or not in direct contact with the robot.

• Limited reliability: because of their many joints and moving parts, serial robots are more likely to fail than some other robot types, reducing reliability and requiring regular maintenance.

• Slow speed: serial robots may be slower than some other robot types, especially when handling complex movements in 3D space.

• Large size: some serial robots are large and require appropriate installation space, limiting their use in environments with limited space.

Picking up objects moving on a conveyor belt requires accuracy in both position and orientation, so it is necessary to use a robot with an orientation function in three-dimensional space.

To be more specific, the Nachi MZ07 is used in this project and is explained in more detail in chapter 3.

It offers high accuracy, flexible speed, versatility in applications, flexible human-machine interaction, high safety and reliability, quality control, flexibility in welding, and safety features.

End – Effector

Gripper end-effectors, also known simply as grippers, can be mechanical or pneumatic devices attached to manipulators or robots to grasp, hold, or grip objects in automated work. Grippers are used in many different situations, including product assembly, pick-and-place, and more.

The gripper has a tight grip, so it can pick up objects with large loads and handle most profiles. The disadvantage of the gripper is that you must clearly determine the position and orientation of the object to be picked, and pay attention to the size of the object to avoid collisions.

Magnetic grippers are convenient to install and easy to use, can attract uneven surfaces, operate with little noise, and come in various capacities for different loads. The biggest disadvantage of electromagnets is that they can only attract magnetic materials.

The thesis uses a circular mounting sample with a flat profile that must be installed on an inclined surface parallel to the normal of the mounting box. Therefore, a vacuum cup is chosen as the end-effector mechanism.

Figure 3.26 On-field vacuum gripper

Camera

Tracking objects with random rotation angles relative to the conveyor plane requires determining 6 position and orientation parameters; therefore, the option of using a 3D camera is proposed. A 3D camera, also known as a depth sensor, provides a 3D model of objects and their distances in the form of a point cloud.

A stereo camera is a type of 3D camera with two or more image sensors that capture the scene from different angles. From the disparity between the two single cameras, the depth of the image is calculated. This was the first 3D camera model; it has low accuracy compared to other cameras with the same function but a relatively cheap price.

Figure 3.27 3D Stereo Camera Multi Sense S7

• Rich data for computer vision analysis.

• Requires high processing power to infer image depth.

• Does not work well on featureless surfaces.

3.3.2 Time-of-Flight camera (ToF camera)

ToF cameras include a sensor that uses a small laser to emit infrared light. This light shines on any object or person in front of the camera and reflects back to the sensor. The time the light takes to reflect back is measured and converted into distance information used to create a depth map.

By measuring the time of flight, a ToF camera can generate a depth image of the surrounding environment. ToF cameras find applications in fields such as computer vision, artificial intelligence, and virtual reality, for distance measurement and rapid 3D model creation.

• Fast operating speed, high accuracy.

• Ineffective outdoors due to sunlight.

• Adversely affected by reflective materials.

A structured-light camera works by shining a strip of light onto the surface of the object and calculating the surface depth from the curvature of the returned image. This method is faster than ToF cameras (about 30 fps compared to 20 fps), with an error of about 1 cm.

For a structured-light camera, the process typically involves comparing and analyzing the deformations of structured light patterns on a surface to construct a 3D depth map of the object. This is particularly useful in applications such as 3D scanning, product quality inspection, and even virtual reality.

• High accuracy at an affordable price.

• Does not require environmental light to operate.

• Ineffective outdoors due to sunlight.

LIDAR is a method of measuring the distance to a target by using lasers and sensors to measure reflected pulses, forming a 3D model of the object. Lidar is widely used in fields such as geodesy, geoinformatics, archaeology, geography, geology, geomorphology, seismology, forestry, and laser navigation, and is also applied in automatic vehicle control and navigation. Lidar is an important and powerful technology for creating high-resolution maps with wide application potential.

LIDAR is commonly used in various fields, including autonomous vehicles for object detection and avoidance, in robotics for environment mapping, and in environmental science for terrain monitoring and geological research LIDAR technology provides highly accurate information about distance and shapes, making it a crucial tool in computer vision and virtual reality applications.

Figure 3.30 LIDAR sensor for AGV

• Lidar provides high accuracy in measuring distances and creating 3D models of the environment, helping to create accurate and detailed maps.

• Ability to work in harsh environments: Lidar can operate in strong or poor light and in harsh weather such as rain, snow, and fog.

• Lidar has a fast scanning speed, allowing quick and effective data collection.

• Lidar provides information not only about distance but also about the shape and structure of objects, helping to create accurate 3D models.

• Lidar typically has high costs, so its deployment and use require large investments.

• Size and weight: some Lidar types are large and heavy, limiting their integration into compact devices and vehicles.

Given the convenience of the phone, we can temporarily use the phone camera for short-term purposes with the support of third-party software over a Wi-Fi connection.

We decided to use smartphone cameras due to their convenience, noting that all other types of cameras can also be used for the primary purpose of providing image data, while image processing takes place within the computer.

PLC and CFD controller

The CFD Controller is Nachi's robot controller; it is compact and easy to transport. It provides 64 I/O ports (32 input and 32 output pins) for peripheral devices such as buttons, sensors, and PLCs.

It also offers various communication ports such as USB and LAN, and accepts communication cards such as Ethernet/IP, CC-Link, DeviceNet, vision systems, and encoder cards for functions such as conveyor tracking.

Nachi Robotics supports two communication methods for this controller: the OpenNR library and socket communication.

OpenNR library: a powerful library designed by Nachi Robotics specifically for controlling robots from a computer. It supports many features, such as monitoring and controlling robot coordinates, monitoring torque, and combining control with data from third-party peripherals such as cameras and sensors.

The controller supports both the TCP and UDP protocols. The main function of socket communication on the CFD Controller is to let a computer monitor coordinates, I/O, torque, and so on, but the same protocol can also be used to transmit and receive data to control the robot.

Overview of Robot Nachi MZ07

The Nachi robot is used for the experiments in this thesis. The Nachi MZ07 robotic arm belongs to the collaborative robot product line of Nachi Robotics Systems, Inc., a leading manufacturer of industrial robots and automation systems.

The Nachi MZ07 robotic arm is designed to work alongside humans, allowing safe and efficient interaction in manufacturing environments. It can operate in constrained workspaces and meet the flexible requirements of industrial applications.

Figure 3.34 Nachi Robot MZ07

Table 3.1 Operating range of the Nachi MZ07

Forward kinematic analysis finds a mathematical representation of the position and orientation of the manipulator when the joint angles are known. The Nachi MZ07 manipulator includes 6 revolute joints, with the coordinate axes assigned as follows:

Figure 3.36 Coordinate system of manipulator axes

This is crucial for controlling the robot's motion and for programming tasks such as positioning and movement. Forward kinematics is an essential component of robotics and serves as the foundation for developing control algorithms and robot programs.

The following four transformation parameters are known as D–H parameters:

 Link length $a_i$: distance from $Z_{i-1}$ to $Z_i$, measured along $X_i$

 Link twist $\alpha_i$: angle from $Z_{i-1}$ to $Z_i$, measured about $X_i$

 Link offset $d_i$: distance from $X_{i-1}$ to $X_i$, measured along $Z_{i-1}$

 Joint angle $\theta_i$: angle from $X_{i-1}$ to $X_i$, measured about $Z_{i-1}$

The transformation matrix from joint i to i-1 has a general form:

$$
{}^{i-1}T_i =
\begin{bmatrix}
\cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\
\sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{bmatrix}
$$

Let  i  s i , cos  i  c i ,sin(  i   j )  s ij , cos(  i   j )  c ij , we have:

The transformation matrix from the end effector to the hand coordinate system:

The coordinates of the end-effector:
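The forward-kinematics chain above can be sketched numerically by multiplying the per-joint D-H transforms. The D-H constants below are illustrative placeholders, not the actual MZ07 values:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg transform from frame i-1 to frame i."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    """Chain the per-joint transforms; returns the 4x4 base-to-end-effector matrix."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Illustrative placeholder D-H constants (d, a, alpha) per joint -- NOT the real MZ07 table.
DH_TABLE = [
    (0.345, 0.045,  np.pi / 2),
    (0.0,   0.340,  0.0),
    (0.0,   0.045,  np.pi / 2),
    (0.340, 0.0,   -np.pi / 2),
    (0.0,   0.0,    np.pi / 2),
    (0.073, 0.0,    0.0),
]

T = forward_kinematics([0.0] * 6, DH_TABLE)
position = T[:3, 3]  # end-effector (x, y, z) in the base frame
```

The upper-left 3x3 block of `T` is the end-effector orientation and the last column holds its position, matching the general transform given above.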

Inverse kinematics is the use of kinematics equations to determine the motion of the robot to reach the desired position.

The steps of implementing inverse kinematics using numerical methods are detailed in the section below:

Squaring both sides of the system of equations, we get:

Adding both sides, we obtain:

From the angle $\theta_3$ found above, we calculate the angle $\theta_2$ using expression (2.14):

Substituting $s_2$ and $c_2$, we have:
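The square-and-add derivation above is the classic geometric solution for a planar two-link sub-chain. A minimal sketch, with hypothetical link lengths, could look like this:

```python
import math

def planar_ik(x, y, a2, a3):
    """Geometric IK for a planar 2-link chain: squaring and summing the position
    equations isolates theta3; theta2 then follows from an atan2 expression.
    Link lengths a2, a3 are hypothetical, not the MZ07's actual dimensions."""
    c3 = (x**2 + y**2 - a2**2 - a3**2) / (2 * a2 * a3)
    if abs(c3) > 1:
        raise ValueError("target outside the reachable workspace")
    s3 = math.sqrt(1 - c3**2)          # elbow-down branch; use -sqrt for elbow-up
    theta3 = math.atan2(s3, c3)
    k1 = a2 + a3 * c3
    k2 = a3 * s3
    theta2 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta2, theta3

# Example: reach the point (0.4, 0.2) m with two 0.34 m links (illustrative values).
theta2, theta3 = planar_ik(0.4, 0.2, 0.34, 0.34)
```

Substituting the returned angles back into the forward equations reproduces the target point, which is a useful self-check for each solution branch.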

3.5.3 Adjust the parameters of the initial tool

The teach pendant of the Nachi MZ07 robot supports a tool setting function, shown in Figure 3.37.

Figure 3.37 Parameters setting for tool head in Pendant

Here, the tool is configured with its Z-axis pointing downward (opposite to the flange TCP) and its length defined along the Z-axis relative to the TCP. The tool parameters can be calibrated using one of two methods:

 Method 1: Choose a fixed sharp point in space and adjust the position and roll-pitch-yaw angles so that the tool tip remains at that fixed point. Write a program that performs this operation at least 10 times and use the Teach Pendant's interpolation function to obtain the tool's position parameters.

 Method 2: If the tool has only a Z-axis length, use a simpler method. Touch the tool mounting end to a fixed plane; then attach the tool and touch it to the same fixed plane. From the positions recorded in the two operations, the tool's length can be calculated.

In practice, industrial robots integrate various types of path interpolation. The main types are:

If the target position uses joint interpolation, the robot moves to the target in a way that minimizes the motion of each joint; the path of the end-effector is not controlled.

If the target uses linear interpolation, the tool head moves along a straight line connecting the taught points.

If the target uses circular interpolation, the tool head moves along an arc: the circle passing through three destination points.
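The difference between the first two types can be sketched as follows: joint interpolation is linear in joint space (so the Cartesian tool path is generally not a straight line), while linear interpolation is linear in Cartesian space. This is an illustrative sketch, not the controller's actual implementation.

```python
import numpy as np

def linear_interpolation(p_start, p_end, steps):
    """Tool-centre-point path: a straight line in Cartesian space."""
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1 - t) * np.asarray(p_start, dtype=float) + t * np.asarray(p_end, dtype=float)

def joint_interpolation(q_start, q_end, steps):
    """Each joint moves linearly in joint space; the Cartesian path is not controlled."""
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1 - t) * np.asarray(q_start, dtype=float) + t * np.asarray(q_end, dtype=float)

# Example: a 5-step Cartesian path and a 5-step joint-space path (values illustrative).
path = linear_interpolation([0, 0, 0], [100, 50, 0], steps=5)
q_path = joint_interpolation([0, 0, 0, 0, 0, 0], [1.0, 0.5, -0.5, 0.0, 0.2, 0.0], steps=5)
```

Both functions return one waypoint per row; a real controller would additionally impose velocity and acceleration profiles along the path.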

Design of gripper end-effector

The end effector for picking up objects is a pneumatic suction cup. The following section presents the calculations for selecting the vacuum valve and vacuum pad sizes from devices available on the market. Air flow through the vacuum generator causes a pressure drop, so the pressure at port U is lower than the atmospheric pressure $P_a$; this pressure difference creates the suction force that lifts the object.

Figure 3.39 Forces affecting the object

Calculating the lifting force for a horizontal vacuum cup with a vertical lifting force:

The theoretical vertical lifting force is calculated using the following formula [Festo]:

 $F_{TH}$: theoretical lifting force required to lift the object (N)

 $m$: mass of the object to be lifted (kg)

 $a$: acceleration of the lifted object (m/s²)

 $s$: safety factor (2 if lifting vertically, 4 if lifting horizontally)

 For this thesis, the object has a mass of 0.05 kg. Since the object is lifted vertically, we choose a safety factor $s = 2$, a lifting acceleration $a = 1$ m/s², and gravitational acceleration $g = 9.81$ m/s².
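As a rough check, the parameter list above suggests a Festo-style sizing relation of the form $F_{TH} = m\,(g + a)\,s$; note that this exact expression is our assumption, not necessarily the formula used in the thesis, and the numbers are for illustration only.

```python
def theoretical_lifting_force(m, a, s, g=9.81):
    """Festo-style sizing sketch: F_TH = m * (g + a) * s.
    m: mass (kg), a: lifting acceleration (m/s^2), s: safety factor.
    This formula is an assumption inferred from the listed parameters."""
    return m * (g + a) * s

# Illustrative values from the parameter list: m = 0.05 kg, a = 1 m/s^2, vertical lift -> s = 2
f_th = theoretical_lifting_force(0.05, 1.0, 2.0)
```

The result should then be compared against the suction cup's rated holding force at the chosen vacuum level.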

We preliminarily select the DP-S25 suction cup, with a standard diameter of d = 25 mm, to hold the object stably. We choose the VASB-30-1/8-SI-B suction cup from Festo for the calculation because it has an equivalent diameter and the manufacturer provides a datasheet for reference. According to the manufacturer's specifications, the suction cup operates in the range P = −0.95 to 0 bar. Vacuum system manufacturers typically recommend a vacuum level of −0.6 to −0.8 bar for airtight surfaces such as metal and clean plastic, and −0.2 to −0.4 bar for porous materials such as wood or chipboard. Since the object to be gripped is flat plastic, we choose an operating pressure of P = −0.7 bar.

According to the Festo datasheet, the holding force corresponding to P = −0.7 bar is $F_{TT} = 34$ N, and $F_{TT} > F_{TH} = 4.89$ N; therefore the DP-S25 suction cup, with a standard suction diameter of d = 25 mm, is entirely sufficient to lift an object with a mass of 0.05 kg.

After calculating and selecting the pneumatic suction cup, we proceed to choose the directional control valve and the vacuum generator. For convenient installation and rotation of the robot arm in various directions, the project selects the ZU vacuum generator from SMC.

Figure 3.40 Suction cup DP-S25

The suction cup presses gently into the product's surface, protecting both the product and the equipment from collisions and damage.

3.6.2 Fixture for the gripping tool head

To fix the gripping tool head onto the robotic arm, the system needs additional fixtures attached to the arm for mounting the suction head. The flange face of axis 6 of the Nachi MZ07 arm has a 4-bolt M5 pattern, and convenient fittings are attached for easy tool replacement.

Figure 3.42 Pneumatic suction head (Taiwan)

Figure 3.43 Flange surface for attaching the tool on Nachi MZ07 robot.

The distance from the face of joint 6 to the position of the tool's center of mass affects the maximum payload the robot arm can handle.

According to the manufacturer's documentation, the maximum allowable tool length for a 5 kg payload is 250 mm; therefore, a tool length of 180 mm is considered acceptable.

We choose the Hitachi Bebicon 0.2LE-8SA compressor:

Robot Nachi communication

3.7.1 Introduction to TCP/IP Communication Method with Nachi Robot

TCP/IP stands for Transmission Control Protocol/Internet Protocol. It is a suite of communication protocols used to connect network devices over the internet, and it can also serve as the communication protocol in a private computer network (intranet). The internet protocol suite — the set of rules and procedures — is commonly called TCP/IP; TCP and IP are its two main protocols, alongside many others. The TCP/IP suite works as an abstraction layer between internet applications and the router/switch infrastructure: it specifies how data is exchanged end to end, how data is divided into packets, how addresses are defined, and how packets are transmitted, routed, and received. TCP/IP is designed for reliability and can automatically recover from failures during data transmission.

Sockets are mainly divided into 2 types:

 Stream socket: based on the TCP protocol; it operates only between two processes that have established a connection. This protocol ensures that data is delivered to the recipient reliably and in the correct order. Also called a connection-oriented socket.

 Datagram socket: based on the UDP protocol; no connection needs to be established. This protocol does not guarantee reliable, intact delivery to the recipient. Also called a connectionless socket.

Socket programming with TCP/IP:

 For the server to receive communications from the client, it must always be ready. This has two implications: first, as with UDP, the server process must be running before the client initiates contact; second, the server program must create a socket ready to accept connections from the client process.

 While the server process is running, the client process creates a TCP socket to connect to the server, specifying the server's IP address and port number.
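The server-first sequence described above can be sketched with Python's standard socket module. The loopback address and port here are placeholders for illustration, not the robot controller's real address:

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007  # placeholder address/port for this sketch

def server():
    """The server must be listening before the client initiates contact."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))  # echo the client's message back

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)  # give the server time to start listening

# The client creates a TCP (stream) socket and specifies the server's IP and port.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello robot")
    reply = cli.recv(1024)
```

The same bind/listen/accept versus connect pattern applies whether the server is another PC process or, as in this project, the UserTask running on the robot controller.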

We apply this to our project as shown in the following image:

TCP/IP is a crucial network protocol suite that facilitates connection and communication between devices in a computer network. It is divided into two layers: the Internet Protocol (IP) manages IP addresses and routing, while the Transmission Control Protocol (TCP) oversees connection management and data transmission. TCP/IP forms the foundation of the internet and is widely used in network applications such as web browsers and email. Devices connected via TCP/IP are identified by IP addresses to send and receive data over the network.

The use of IP addresses for device identification ensures the proper routing of data within the network, making TCP/IP a fundamental technology in the realm of computer networking.

The illustration below provides a clearer understanding of the TCP/IP communication connections:

Figure 3.45 TCP/IP Signal Transmission and Reception Process

3.7.2 Introduction to Robot Communication and Control Program

To control the robot via the TCP/IP protocol, we need to prepare three main programs:

 Usertask program: Its task is to serve as a server, receiving connection signals from the Client program sent from a laptop to the Robot's controller unit.

 PC side program: Its task is to act as a Client, sending connection signals to the Robot's controller unit.

 Robot program: used to execute tasks that move the robot according to the coordinates sent by the PC program.

 IP addresses configured for the Robot and PC.

The UserTask program is written as program 7174. To connect, start the UserTask first and then connect using Python socket code.
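A PC-side client along these lines could open the connection and send a target coordinate. The controller IP, port number, and comma-separated message format below are assumptions for illustration, not Nachi's actual protocol:

```python
import socket

ROBOT_IP = "192.168.1.10"   # hypothetical controller IP address
ROBOT_PORT = 48952          # hypothetical port opened by the UserTask server

def format_target(x, y, z):
    """Encode one target coordinate as a line of comma-separated ASCII.
    This message format is an assumption for illustration, not Nachi's protocol."""
    return f"{x:.2f},{y:.2f},{z:.2f}\n".encode("ascii")

def send_target(x, y, z):
    """Open a TCP connection to the UserTask server, send the target, wait for an ack."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(5.0)
        s.connect((ROBOT_IP, ROBOT_PORT))
        s.sendall(format_target(x, y, z))
        return s.recv(1024)  # acknowledgement from the controller
```

In the running system, `send_target` would be called once per detected object after the camera-to-robot coordinate conversion.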

EXPERIMENTAL RESULTS

Result

Figure 4.47 Overview of the system

Overview of the system includes the following hardware devices:

The experimental procedure is as follows:

 The camera captures and processes image data.

 The conveyor belt transports the object to the robot's workspace.

 The robot performs the task of moving the object to the specified position.

The results in Figures 4.2 and 4.3 are outcomes of object tracking using the OpenCV library.

The tracking process provides the pixel coordinates of the centers of the detected circles and squares, as well as the real-world coordinates (x, y, z) for the robot to move to.

Based on the calibration calculation equation above, we proceed with the calibration steps for the camera using the following method:

Since each captured frame yields slightly different world-point values for the same pixel position, the author averages 10 consecutive measurements and uses the Hough Circle algorithm to determine the circle's center, setting minRadius and maxRadius to obtain the most accurate result.

We use the code in the Appendix.

For square objects with right angles, we use the code in the Appendix.

After converting the camera coordinate system to the robotic arm coordinate system, we proceed to measure the error:

Camera calibration involves determining the camera's accurate parameters. By capturing images from various angles and using distinctive features, this process corrects distortion and estimates the focal length and optical center to enhance image quality. The results enable the camera to respond accurately in applications such as computer vision and computer graphics.
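One simple way to sketch the pixel-to-robot conversion described above, assuming the objects lie on a single plane, is a least-squares affine fit between calibration point pairs. The matrix form and the point values here are our assumptions for illustration, not the thesis's exact method:

```python
import numpy as np

def fit_pixel_to_world(pixels, world):
    """Least-squares affine map from pixel coordinates (u, v) to robot-plane
    coordinates (x, y): [x, y] = A @ [u, v, 1]. Needs >= 3 non-collinear pairs."""
    P = np.hstack([np.asarray(pixels, dtype=float), np.ones((len(pixels), 1))])
    W = np.asarray(world, dtype=float)
    coeffs, *_ = np.linalg.lstsq(P, W, rcond=None)
    return coeffs.T  # 2x3 affine matrix A

def pixel_to_world(A, u, v):
    """Map one pixel to robot coordinates using the fitted affine matrix."""
    return A @ np.array([u, v, 1.0])

# Hypothetical calibration pairs: image corners -> measured robot-plane positions (mm).
pixels = [(0, 0), (640, 0), (0, 480), (640, 480)]
world = [(10.0, 200.0), (330.0, 200.0), (10.0, -40.0), (330.0, -40.0)]
A = fit_pixel_to_world(pixels, world)
pt = pixel_to_world(A, 320, 240)  # robot coordinates of the image center
```

With more than three pairs the least-squares fit also averages out measurement noise, which is why collecting extra calibration points improves the conversion accuracy.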

Table 4.5 Discrepancy table between calibrated image coordinate distances and real-world spatial distances between two image points.

Although the object grasping and classification system sometimes has lower accuracy due to the influence of the surrounding environment (lighting, camera angle, noise, etc.), object classification still performs well and the tracking process is relatively accurate.

After running the test program, we observed the robot's feedback speed over four runs in which the speed was increased by 10% each time. The results were unexpected: the robot reached only 40% of its maximum speed. The eccentricity deviation falls within the range of 3 mm to 20 mm from the object's center.

The absolute error of the distance between the object's center coordinates and the actual position after controlling the robot's movement is presented in Table 4.6, which shows the results of the successful tests:

Table 4.6 The absolute error of the distance between the object's center coordinates and the actual position.

Although the project is relatively simple, it successfully meets the specified requirements: recognizing the shapes of objects and tracking them while displaying their positions and speeds, which allows efficient product classification. From here, there are avenues for further improvement, such as classification by color, size, and quality.

Link to experimental video

https://drive.google.com/drive/folders/1Owl8wz9RsLFeOOuWJ0Y9vDZMOIAyLDWa?usp=drive_link

CONCLUSION
