FACULTY FOR HIGH QUALITY TRAINING – GRADUATION THESIS IN AUTOMATION AND CONTROL ENGINEERING: DEPLOYING DELTA ROBOT TO A PRODUCT CLASSIFYING AND SORTING SYSTEM
INTRODUCTION OVERVIEW
Reasons for choosing the topic
In the era of Industry 4.0, the development of robots has become increasingly significant, addressing various challenges in everyday life, particularly in production. Among these advancements, parallel robots are gaining traction among scientists for their ability to replace traditional robots across multiple fields. They offer several advantages, including high load capacity, enhanced productivity, and superior rigidity and flexibility, thanks to their optimized geometric structure. This design allows for an even distribution of impact forces, with the robot's arms effectively managing both traction and compression forces, making parallel robots a compelling alternative in modern industrial applications.
The parallel robot does not need to work on a pedestal and can move anywhere in the production environment due to its relatively small volume and size
We chose the topic "Flexible Product Classification Using Robots and Image Processing" to leverage the advantages of Delta robots in efficiently classifying products. This approach allows for flexible sorting based on each product's color and rotation angle, addressing new challenges in automation.
Research objectives
The goal of the research is to design and manufacture a Delta robot capable of flexibly sorting products by color and arranging products in order.
Objects and scope of research
- Researching the theory, and solving the kinematics problems of parallel robots
- Researching, designing, and manufacturing mechanical models of parallel robots
- Researching, selecting, and manufacturing electrical circuits used to control robots
- Image processing algorithm to classify products and calculate the rotation angle
- The software on the computer calculates the image processing and kinematics of the robot to control the motors and conveyors and writes the control program interface for the system
- Researching parallel robots, specifically Delta robots
- Researching on theoretical basis and solving kinematic problems of the Delta robot
- Researching, designing, and improving the mechanical model of the Delta robot
- Researching and selecting suitable AC Servo motor for Delta robot
- Using Python programming language to calculate kinematics and image processing, calculate rotation angle, and classify products by color and size.
Approaches, research methods
- Theory of image processing, theory of robots, theory of PLC circuit connection, theory of stepper motor control, AC Servo control theory, geared motor control theory, etc
- Approach by applying the above theories to develop a prototype Delta robot product that can be assembled in a real test
The study uses a combination of the following research methods:
- Researching method based on a theoretical foundation
Research content
- Synthesizing documents on calculating the forward and inverse kinematics of Delta robots, thereby building algorithms to solve the forward/inverse kinematics problems
- Learning documents about AC Servo motor control, and documents using related software (Visual Studio Code, GX Works3, Arduino, Solidworks, KEPServerEX)
- Calculating the design and improving the mechanical model for the robot to ensure accuracy, flexibility, and certain rigidity to limit the vibration when operating
- Building control program and algorithm
- Application of image processing methods to classify products, and deviation angle from the conveyor
- Classifying objects by color and shape; when an object is deviated at any angle, using the rotating joint to arrange it in the correct position.
Report layout
The content of this report consists of 5 parts:
Chapter 1: Presenting an overview of the requirements of the report: problem statement, topic objective, content, and topic limits
Chapter 2: Presenting the overview of robotics knowledge, hardware information, basic knowledge to create the system, theory of PLC, and image processing
Chapter 3: The process of performing hardware design, assembly, configuration, and testing
Chapter 4: The results of the project will be presented in this section
Chapter 5: Conclusion and development direction
The last chapter gives a conclusion about the system that the team has implemented, thereby identifying development directions for the system.
THEORETICAL BASIS
Introduction to Delta Robot
Today, parallel robot arms, specifically Delta robots, are increasingly used in industry
A parallel robot featuring three degrees of freedom consists of a closed-loop structure, where the final actuation link is connected to the ground through a minimum of two independent kinematic sequences.
Parallel robots, with their unique structure, effectively address some limitations of serial robots. They excel in lifting heavy loads while maintaining high speed and precision, making them ideal for flexible product sorting and 3D printing applications that demand exceptional accuracy. However, their working space is somewhat restricted. As various types of robots continue to emerge, the classification of robots becomes increasingly diverse, offering tailored solutions for specific challenges that leverage the strengths of each robot type.
Introduction to the PLC controller
A Programmable Logic Controller (PLC) is a versatile control device that enables the flexible implementation of control algorithms using various programming languages. It executes its program cyclically in a closed scan loop, allowing users to create everything from simple to complex control solutions.
The birth and development of PLC control systems have changed control systems as well as their design concepts; the advantages can be summarized as follows:
- Easy programming, easy-to-understand language
- The number of relays and wires is reduced, and the number of contacts in the program is not limited
- High reliability, compact size, coherent details, and easy diagnostic function for maintenance and repair
- Low power consumption of PLC controller
- When not changing the hardware, it will be easy to change the control direction as well as find errors when operating
- Fast control time (from a few milliseconds per cycle)
- Program data can be printed to a hard data file
- Diversity of connected devices to help link support for system control: computers, screens, network connections, and expansion modules for connecting electrical devices.
Image processing
Image processing involves various techniques for manipulating image signals, encompassing four primary areas: image enhancement, image recognition, image compression, and image querying. The study primarily utilizes static images captured from webcams. Mathematically, an image is defined as a continuous function representing light intensity across a two-dimensional space. For effective computer processing, this image must be converted into a numerical format with discrete values, typically represented as a two-dimensional matrix, denoted f(x, y), consisting of M columns and N rows.
An image is characterized by a two-dimensional function F(x, y) in spatial coordinates, where the intensity of the image corresponds to the amplitude of F at each point (x, y). When the values of x, y, and the amplitude F are finite, the image is referred to as a digital image, which can be represented as a matrix of discrete values.
Figure 2.2 Representing an image as a matrix
The process of image acquisition and processing follows the following block diagram:
The digitization of images involves converting analog values into digital values, quantizing information that may not be distinguishable to the naked eye. The fundamental unit of an image is the picture element, or pixel. Each pixel is addressed by a coordinate pair (x, y), and together the pixels form the complete image.
The gray level encodes the luminance intensity of each pixel as a numerical value. The most common encoding uses 256 levels, so each pixel is represented by 8 bits.
Figure 2.4 Illustrate grayscale image levels
A basic color image is composed of three primary colors: red, green, and blue, each divided into intensity levels. Consequently, each color channel of the image is stored as a separate grayscale image. As a result, the memory required for a color image is three times greater than that needed for a grayscale image of the same size.
Some primary colors are represented as follows: red is (255, 0, 0), green is (0, 255, 0), blue is (0, 0, 255), yellow is (255, 255, 0)
Figure 2.5 Set of 3 primary colors RGB
A black and white image, also known as a binary image, consists of only two colors: black and white. More generally, an image can be quantized into L gray levels, where L = 2 gives a binary image and L > 2 gives a multi-level grayscale image; in practice L is usually a power of two, with L = 256 being the most common choice for good image quality.
Figure 2.6 Color image and grayscale image
2.3.2 Problems while processing images
The post-segmentation output consists of the segmented image pixels along with codes for adjacent regions, which must be transformed into a format suitable for further computer processing. Feature selection is the process of choosing the characteristics used to represent the image, facilitating the distinction between different classes of objects based on quantitative information derived from the image.
An image can be understood as a function of two variables, and image representation models offer a logical or quantitative description of this function. The fundamental element of an image is the pixel, which can be represented as a scalar value or a vector (with three components for color images). Images can be mathematically represented through functions or dot matrices.
Figure 2.7 Transform images by matrix
2.3.2.2 Image recovery, transformation and analysis
Image restoration aims to restore and reconstruct the original image, removing distortions from the image depending on the cause of the degradation
Transform images to facilitate processing and image analysis with methods such as Fourier transform, Sin, Cosine, etc
When analyzing images, it is to find image features and build relationships between them by finding edges, separating edges, thinning edges, image partitioning, and object classification
The HSV (Hue, Saturation, and Value) color space is an effective method for analyzing images by breaking down colors into more comprehensible attributes This cylindrical color model simplifies the transformation of RGB primary colors, making it easier for humans to understand and interpret color information.
H (Hue): the color component, expressed as a number from 0 to 360 degrees, in which red occupies (0-60), yellow (60-120), green (120-180), etc.
Color saturation refers to the intensity of a specific color, measured on a scale from 0 to 100% A saturation level of 0 results in grayish hues, while low saturation produces pale colors and a faded appearance Conversely, at 100% saturation, a color appears in its purest form, reflecting the properties defined by its hue (H).
The brightness, denoted V (Value), is the luminance value ranging from 0 to 100 percent. This scale indicates the intensity of the color, with 0 representing complete black and 100 the brightest display of the color. The luminance value works in tandem with saturation to describe the overall luminosity of a color.
Figure 2.8 Color column illustrating the HSV color system
Figure 2.9 Relationship between parameters S and V
The team utilized frames with an 8-bit depth per pixel, converting the H value to the range [0, 255] and the S and V values to the range [0, 1]. By employing the HSV color space, a more resilient color threshold is obtained that adapts better to variations in external lighting, since the Hue value is less affected by small changes in external light than the RGB values.
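As an illustration of this step, the following minimal Python/OpenCV sketch converts a camera frame to the HSV space and thresholds one color range. The file name and the HSV limits shown are illustrative assumptions and must be tuned to the actual camera and lighting:

import cv2
import numpy as np

frame = cv2.imread("sample_frame.jpg")            # hypothetical test frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)      # OpenCV stores 8-bit H in [0, 179], S and V in [0, 255]

# Illustrative limits for a red-ish hue; the real limits come from calibration
lower = np.array([0, 100, 100])
upper = np.array([10, 255, 255])
mask = cv2.inRange(hsv, lower, upper)             # white where the pixel lies inside the range
red_only = cv2.bitwise_and(frame, frame, mask=mask)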
Dilation is one of the basic operations in mathematical morphology, which makes the original object in the image increase in size
Erosion is one of the two basic operations in morphology that has applications in reducing the size of objects, separating close objects, and slicing and finding bones of objects
The method applies erosion with an 8x8 structuring element to shrink the mask, removing noise points and helping separate objects from the background. A subsequent dilation restores the image to its original dimensions. This combination of shrinking and stretching significantly reduces noise and smooths the contours of the objects in the binary image. In this study, the image is eroded twice and then dilated twice to achieve the best results.
Figure 2.12 Illustration of mathematical operations in morphology
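A minimal Python/OpenCV sketch of the shrink-then-stretch step described above, assuming the input is the black-and-white mask produced by thresholding; the 8x8 kernel and the two iterations follow the text:

import cv2
import numpy as np

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)   # hypothetical binary mask
kernel = np.ones((8, 8), np.uint8)                    # 8x8 structuring element

cleaned = cv2.erode(mask, kernel, iterations=2)       # shrink twice: removes small noise blobs
cleaned = cv2.dilate(cleaned, kernel, iterations=2)   # stretch twice: restores the object size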
The thresholding algorithm transforms color or grayscale images into binary images composed solely of black and white There are various methods to establish the threshold, with two primary approaches commonly used.
- The Simple Threshold algorithm replaces pixel values at or above a specified threshold, as well as those below it, with new designated values. While this method is effective in many scenarios, it has drawbacks because a single global threshold is used, when in fact captured images are affected by noise (exposure, flash, etc.). The Simple Threshold types are as follows (a code sketch follows the list below):
● THRESH_BINARY: Binary threshold
● THRESH_BINARY_INV: Inverse binary threshold; it can be understood as inverting the result of THRESH_BINARY
● THRESH_TRUNC: Pixel values below the threshold remain unchanged, while pixels above the threshold are truncated to the threshold value
● THRESH_TOZERO: Pixels smaller than the threshold will be set to 0, the rest of the pixels will be left unchanged
● THRESH_TOZERO_INV: Pixels smaller than the threshold value will be preserved, remaining pixels will be set to 0
● THRESH_OTSU: Use the Otsu algorithm to determine the threshold value
● THRESH_TRIANGLE: The Triangle algorithm determines the threshold value
- Adaptive Threshold algorithm: this algorithm divides the image into small regions and sets a threshold for each of those regions. It takes two variants:
● ADAPTIVE_THRESH_MEAN_C: Calculate the average neighbor around the point to be considered in the area blockSize * blockSize minus the constant value C
● ADAPTIVE_THRESH_GAUSSIAN_C: Multiply the value around the point to be considered by the Gauss weight and calculate the average, then subtract the constant C value
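For reference, a minimal Python/OpenCV sketch of both thresholding families discussed above; the threshold value 127 and the blockSize/C values are illustrative assumptions:

import cv2

gray = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)  # hypothetical grayscale input

# Simple (global) threshold: one limit for the whole image
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
# Otsu chooses the global threshold automatically from the histogram
_, binary_otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Adaptive threshold: a local limit is computed for each blockSize x blockSize region
adaptive = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, blockSize=11, C=2)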
The software used in the project
2.4.1.1 Overview of the YOLO model architecture
YOLO, which stands for "You Only Look Once," is a widely used model and regression algorithm for object detection. It leverages convolutional neural networks to achieve high accuracy and impressive processing speed, making it a popular choice in the field of computer vision.
YOLO utilizes a single bounding box regression to assess key factors like height, width, center, and feature layers Its unmatched accuracy and speed enable it to detect multiple objects in one go, outperforming other models such as Fast R-CNN, RetinaNet, and Single-Shot MultiBox Detector (SSD).
Figure 2.15 YOLO applies multi-field in life
YOLO offers significant advantages over other recognition models, primarily by enabling real-time object detection, which greatly enhances detection speed Additionally, it delivers precise results with minimal background interference, ensuring high accuracy Furthermore, YOLO's robust learning capabilities allow it to effectively learn object representations, making it highly effective for object detection tasks These combined features contribute to the widespread adoption of YOLO in various applications.
Originally developed by Joseph Redmon within the Darknet framework, YOLO has significantly evolved since its inception The initial release of YOLO distinguished itself from competitors like R-CNN and DPM through its innovative approach to object detection, which emphasized speed and accuracy.
- Real-time frame processing at 45 fps
- Fewer false positives on the background
- Higher detection accuracy (although lower localization accuracy)
Since its launch in 2016, the YOLO algorithm has undergone significant evolution, with YOLOv2 and YOLOv3 developed by Joseph Redmon Following YOLOv3, new contributors have emerged, each introducing their unique objectives with subsequent YOLO releases.
YOLOv2: Released in 2017, this version was honored at CVPR 2017 for its significant bounding box improvements and higher resolution
YOLOv3, released in 2018, enhances bounding box predictability with additional objective scores and strengthens connectivity to backbone network layers This version significantly improves performance on small objects by enabling predictions at three distinct levels of detail.
YOLOv4: The April 2020 release became the first non-Joseph Redmon article Here,
Alexey Bochkovskiy introduced innovations, including the Mish activation function, improved feature synthesis, and more
YOLOv5: Glenn Jocher continues to improve further in the June 2020 release, focusing on the architecture itself
YOLO-based models are lightweight, and the way they work is based on three basic techniques:
In the residual block stage, the model segments the incoming image into uniform grids, with each grid tasked with identifying objects or their components present within its area.
Bounding box regression is a key component of object detection: each object within a cell is marked by a bounding box characterized by its width, height, class, and center. YOLO uses this technique to predict the probability of an object being present within the defined bounding box.
Intersection over Union (IoU) measures the overlap between predicted and actual bounding boxes in object detection Each grid cell predicts bounding boxes along with their confidence scores, and IoU is calculated by dividing the area of overlap by the total area covered by both boxes An IoU of 1 indicates a perfect match between the predicted and true bounding boxes, facilitating the elimination of predictions that significantly deviate from the actual objects.
After segmenting the image into grid cells, each cell generates bounding box predictions along with associated probability scores for both the boxes and the object classes In a scenario with multiple objects of varying classes, all predictions are made concurrently The Intersection over Union (IoU) metric guarantees that these predictions align with the actual object locations, resulting in distinct bounding boxes that accurately enclose each object.
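To make the IoU definition above concrete, the small Python helper below computes it for two boxes given as (x1, y1, x2, y2); this is an illustrative sketch, not code taken from YOLO itself:

def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)        # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter                          # total covered area
    return inter / union if union > 0 else 0.0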
YOLO today is increasingly being applied in many aspects of life, for example:
Autonomous driving relies on advanced object detection to prevent accidents, as there is no human driver to take control In this context, YOLO (You Only Look Once) plays a crucial role by efficiently detecting pedestrians, vehicles, and other potential hazards on the road.
- Wildlife detection: this can be applied to trees and biodiversity as well as to different animal species to track their growth and migration
- Robots: depending on the industry in which the robot operates, some robots require computer vision to detect objects in their path and execute specific instructions
- YOLO Retail: visual product search or reverse image search is becoming increasingly common in retail, which would not be possible without object detection algorithms like YOLO
Since its introduction in 2015, the YOLO (You Only Look Once) algorithm has become highly popular within the computer vision community The model has undergone several updates, with versions YOLOv2, YOLOv3, YOLOv4, YOLOv5, YOLOv6, and the latest YOLOv7 being released to enhance its capabilities.
Before exploring the topic, it's essential to note that two versions of YOLOv7 exist online, and this discussion will focus on the "Official YOLOv7 Algorithm." Developed by the research team at Academia Sinica behind earlier YOLO work, the official YOLOv7 algorithm represents a significant advancement in object detection technology.
YOLOv5 stands out as one of the most successful and widely adopted versions in the YOLO series, making it a compelling choice for those looking to learn about object detection The continuous updates and improvements from the same development team further enhance its appeal.
YOLOv7 is a real-time object recognition engine that is transforming the computer vision industry with its features; the official YOLOv7 offers very high speed.
YOLOv7 achieves remarkable accuracy, boasting an Average Precision (AP) score of 56.8%, the highest among its predecessors This model was trained exclusively on Microsoft's COCO dataset without utilizing any pre-trained weights, underscoring its advanced performance.
Overview of the Mitsubishi GX Works3 PLC programming software
A Programmable Logic Controller (PLC) is a device designed for programming logic control algorithms, functioning by scanning input and output states. When an input changes, the PLC adjusts the corresponding output based on the programmed logic. Currently, Ladder is the most widely used programming language for PLCs, which offer several advantages in automation and control:
- The number of relays and wires is reduced, and the number of contacts in the program is not limited
- Power consumption of PLC is very low
- Easy programming, easy-to-learn programming language Lightweight, easy to maintain, and repair
- High reliability, compact size, easy storage, maintenance, and repair
- The time to complete a control cycle is very fast (several milliseconds) resulting in increased production speed
- Large memory capacity can hold complex programs
- Connecting to a variety of smart devices: computers, network connections, and expansion modules
In this report, we use the Mitsubishi PLC series together with the corresponding GX Works3 software
Visual Studio Code
Visual Studio Code, developed by Microsoft, is a versatile source code editor compatible with Windows, Linux, and macOS It offers essential features including debugging support, integrated Git, syntax highlighting, smart code autocomplete, snippets, and various code enhancements, making it a powerful tool for developers.
- Multi-language support (Python, CSS, C++, C#, HTML, F#, etc.)
- Supporting diverse features on platforms (can run on many different platforms at the same time)
- Supporting multitasking (can open multiple files at the same time)
- IntelliSense: support for code completion and detection of incomplete code
Figure 2.17 Logo of programming software Visual Studio Code
My group chose this software for programming because it has many outstanding features such as:
- Support for the 3 most popular operating systems in the world: Visual Studio Code is developed by Microsoft, so it is compatible with Windows, macOS, and Linux
Visual Studio Code enhances the coding and debugging experience by streamlining processes with convenient shortcuts for opening functions and adding command lines, ultimately saving you time Additionally, it allows for customization of keyboard shortcuts to better align with your workflow.
Visual Studio Code features a robust and scalable architecture, leveraging Electron and advanced programming languages like JavaScript and Node.js This powerful combination ensures an exceptional user experience, making it a top choice for developers.
Visual Studio Code boasts a vast support community, solidifying its popularity among users nationwide If you encounter any issues, assistance is readily available through platforms such as Microsoft, Reddit, and StackOverflow.
Solidworks
SolidWorks is an essential tool for engineers, particularly in the robotics industry, as it offers professional-grade design capabilities for hardware systems My team utilized this software to effectively conceptualize, design, and construct our project, enhancing our research process.
KEPServerEX
KEPServerEX serves as a middleware solution that facilitates connectivity, data collection, and communication among automation devices within industrial process systems By acting as a bridge between various devices and data management applications, it enables seamless communication and efficient information exchange, enhancing operational efficiency.
KEPServerEX offers extensive support for widely-used communication protocols in automation, including OPC (OLE for Process Control), MQTT (Message Queuing Telemetry Transport), and SNMP (Simple Network Management Protocol) This powerful software enables seamless connections and data collection from various devices, such as PLCs (Programmable Logic Controllers), meters, sensors, and controllers, enhancing the efficiency of industrial process systems.
KEPServerEX enables users to establish and manage communication links with devices, process collected data, and transfer information to management applications like SCADA, HMI, and MES It facilitates the integration of data from multiple sources while adhering to industry standards and protocols, thereby enhancing the management and monitoring of industrial automation systems.
IMPLEMENTATION CONTENT
Delta robot kinematics equations
Figure 3.1 Structure of the robot Delta
Based on Figure 3.1, the convention to set geometrical parameters for Delta robot is as follows:
- f is the distance from the center O of the fixed base (the center of triangle plane A) to the motor connecting shaft
- e is the distance from the center E of the moving platform (the center of triangle plane B) to the connecting rod axes
- rf is the length of the motor coupling or drive link
- re is the length of the parallelogram structure or the passive link
The structure of Delta robot includes:
- Fixed base plate A to fix the motors on this surface with 3 symmetrical positions
- Motion pedestal B for mounting the DC motor for the slewing joint
- The motor couplings are the active links, consisting of the 3 joints F1J1, F2J2, F3J3 attached directly to the motor shafts fixed on the base
The moving pedestal B carries the DC motor of the slewing joint and is connected to the active links through the parallelogram passive joint system, consisting of the three chains F1E1, F2E2 and F3E3. This arrangement of three parallelogram passive joints allows pedestal B to move throughout the workspace, its position being determined by the angles θ1, θ2 and θ3.
With the three angles θ1, θ2, θ3 we can calculate the coordinates (x0, y0, z0) of point E0 on the moving platform B. Once we know these angles, we can easily determine the coordinates of J1, J2 and J3. The joints J1E1, J2E2 and J3E3 can rotate freely around the points J1, J2 and J3 respectively, forming three spheres of radius re.
Figure 3.2 Kinetic model for Delta robot
We move the centers of the spheres from the points J1, J2 and J3 to the points J1', J2' and J3' through the translation vectors E1E0, E2E0 and E3E0 respectively. After this transformation, all three spheres intersect at point E0.
Figure 3.3 The intersection of 3 spheres corresponds to 3 joints
So to find the coordinates (x0, y0, z0) of point E0, we need to solve the sphere equation (x − xj)^2 + (y − yj)^2 + (z − zj)^2 = re^2, where (xj, yj, zj) are the coordinates of the sphere centers and re is the known radius.
Figure 3.4 Coordinates of projection points on the plane Oxy
From Figure 3.4, the following expressions can be calculated
x2 = (f − e + rf cos θ2) cos 30°;  y2 = (f − e + rf cos θ2) sin 30°;  z2 = −rf sin θ2
x3 = −(f − e + rf cos θ3) cos 30°;  y3 = (f − e + rf cos θ3) sin 30°;  z3 = −rf sin θ3
From the equations of the three spheres, the coordinates of the three points J1, J2 and J3 are (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3) respectively; note that
From equation (3.8) – (3.9) we get the following equation as below:
From equation (3.8) – (3.10) we get the following equation as below:
From equation (3.9) – (3.10) we get the following equation as below:
Subtracting equation (3.13) from equation (3.12), we get:
where a1, b1, a2 and b2 are determined by the following expressions:
After obtaining a1, b1, a2 and b2, we substitute them into (3.14) and (3.15) and then substitute both into equation (3.8) to obtain the final equation:
Solving equation (3.21) gives z0 (the negative root should be chosen, since the platform lies below the base); x0 and y0 are then found from equations (3.12) and (3.13)
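The derivation above can be summarized in the following Python sketch of the forward kinematics (sphere-intersection method). The geometry values are placeholders, and the expressions assume f and e are the center-to-joint distances defined at the start of this chapter; signs may need adjusting to the actual axis convention:

import math

f, e, rf, re = 6.0, 3.0, 15.0, 30.0   # placeholder geometry (cm), not the real robot's values

def forward_kinematics(theta1, theta2, theta3):
    """Return (x0, y0, z0) of the moving platform for joint angles in radians."""
    t = f - e
    c30, s30 = math.cos(math.radians(30)), math.sin(math.radians(30))
    # Shifted sphere centers J1', J2', J3'
    x1, y1, z1 = 0.0, -(t + rf * math.cos(theta1)), -rf * math.sin(theta1)
    x2, y2, z2 = (t + rf * math.cos(theta2)) * c30, (t + rf * math.cos(theta2)) * s30, -rf * math.sin(theta2)
    x3, y3, z3 = -(t + rf * math.cos(theta3)) * c30, (t + rf * math.cos(theta3)) * s30, -rf * math.sin(theta3)

    dnm = (y2 - y1) * x3 - (y3 - y1) * x2
    w1 = y1 * y1 + z1 * z1
    w2 = x2 * x2 + y2 * y2 + z2 * z2
    w3 = x3 * x3 + y3 * y3 + z3 * z3

    a1 = (z2 - z1) * (y3 - y1) - (z3 - z1) * (y2 - y1)        # x0 = (a1*z0 + b1)/dnm
    b1 = -((w2 - w1) * (y3 - y1) - (w3 - w1) * (y2 - y1)) / 2.0
    a2 = -(z2 - z1) * x3 + (z3 - z1) * x2                     # y0 = (a2*z0 + b2)/dnm
    b2 = ((w2 - w1) * x3 - (w3 - w1) * x2) / 2.0

    # Quadratic a*z0^2 + b*z0 + c = 0, obtained by substituting back into one sphere equation
    a = a1 * a1 + a2 * a2 + dnm * dnm
    b = 2.0 * (a1 * b1 + a2 * b2 - dnm * dnm * z1)
    c = b1 * b1 + b2 * b2 + dnm * dnm * (z1 * z1 - re * re)
    d = b * b - 4.0 * a * c
    if d < 0:
        return None                                           # pose not reachable
    z0 = -0.5 * (b + math.sqrt(d)) / a                        # choose the root below the base
    return (a1 * z0 + b1) / dnm, (a2 * z0 + b2) / dnm, z0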
Figure 3.5 Specific parameters of the robot
As shown in Figure 3.5, the parameters f, rf, re and e are known
Since the coupling F1J1 rotates only in the YZ plane, it sweeps a circle with center F1 and radius rf. J1E1 is a passive joint that can rotate freely with respect to E1, so E1 is the center of a sphere of radius re.
The intersection between this sphere and the YZ plane is a circle with center E1' and radius E1'J1 (E1' is the projection of E1 onto the YZ plane). Point J1 is the intersection of the two circles of known radius centered at E1' and F1. From J1 we can calculate θ1.
Considering the YZ plane as shown in Figure 3.6:
From the YZ plane as shown in Figure 3.6, we have the point E of the moving platform:
From there, the length of EE1:
We can derive the points E1 and E1':
From equation (3.24), the length EE1' is obtained, from which the length E1'J1 is deduced:
Since only the motion of the joint F1J1 in the YZ plane is considered, the X coordinate can be disregarded. Thanks to the Delta robot's symmetry, the remaining angles θ2 and θ3 are calculated by rotating the XY coordinate system around the Z axis by 120 degrees counterclockwise.
Figure 3.7 The coordinate reference system of the robot
After establishing the new reference system X'Y'Z, the calculations used for θ1 are repeated to find the coordinates x0' and y0' of point E0 in the rotated system, using the rotation matrix to determine the corresponding angle.
We get the new coordinate system converted as follows:
For the angle θ3, we rotate the original XY coordinate system around the Z axis by an angle of 120 degrees clockwise and apply the same transformation as for θ2
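The inverse kinematics can be sketched in Python in the same spirit: the helper solves the planar problem of Figure 3.6 for one arm and is reused for the other two arms by rotating the target point by ±120° about the Z axis. The same placeholder geometry and conventions as in the forward-kinematics sketch are assumed:

import math

f, e, rf, re = 6.0, 3.0, 15.0, 30.0   # placeholder geometry (cm), not the real robot's values

def _angle_yz(x0, y0, z0):
    """Solve theta for one arm in its own YZ plane (angle in degrees)."""
    y1 = -f                           # motor shaft F1 lies on the negative Y axis
    y0 = y0 - e                       # shift the target from E0 to E1
    a = (x0 * x0 + y0 * y0 + z0 * z0 + rf * rf - re * re - y1 * y1) / (2.0 * z0)
    b = (y1 - y0) / z0
    d = -(a + b * y1) ** 2 + rf * (b * b * rf + rf)           # discriminant
    if d < 0:
        return None                                           # point not reachable
    yj = (y1 - a * b - math.sqrt(d)) / (b * b + 1.0)          # choose the outer intersection
    zj = a + b * yj
    theta = math.degrees(math.atan(-zj / (y1 - yj)))
    return theta + 180.0 if yj > y1 else theta

def inverse_kinematics(x0, y0, z0):
    """Return (theta1, theta2, theta3) in degrees, or None if unreachable."""
    c, s = math.cos(math.radians(120)), math.sin(math.radians(120))
    t1 = _angle_yz(x0, y0, z0)
    t2 = _angle_yz(x0 * c + y0 * s, y0 * c - x0 * s, z0)      # frame rotated +120 degrees
    t3 = _angle_yz(x0 * c - y0 * s, y0 * c + x0 * s, z0)      # frame rotated -120 degrees
    return None if None in (t1, t2, t3) else (t1, t2, t3)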
System design and construction
The conveyor frame is built from durable 20x20 mm grooved aluminum profiles, allowing the bar positions to be adjusted and fixed easily with sliders. The finished conveyor measures 500x200x250 mm, making it a compact and efficient solution for this application.
3.2.2 Design of robot support frame and fixed base surface
The support frame is crucial for the robot's stability, as it bears the entire weight and prevents shaking during operation, which would affect accuracy and longevity. It uses a 3-post design anchored at the three corners of the triangular base, with triangular gussets placed at each right angle between two aluminum bars, enhancing the robot's overall sturdiness and stability.
Figure 3.10 Design to place the servo motor on a fixed base
The fixed base is designed as an equilateral triangle made of aluminum sheet, with the 3 motor/arm positions placed on brackets and equally spaced.
Figure 3.11 The soleplate is shaped like an equilateral triangle
The Delta robot features three identical arms positioned at 120-degree intervals, each equipped with an active joint linked to a motor and a parallelogram-shaped passive joint in series This design guarantees exceptional accuracy while maintaining flexibility and security.
To prevent the robot from shaking and to avoid having to compensate for an offset of the motor shaft position, the team decided to attach the arm directly to the motor shaft
Figure 3.12 Active joint design for robots
Figure 3.13 One complete designed robot arm
Once the design of the swivel joint details was complete, that part was assembled and the result is shown in the following figure
Figure 3.14 Distance of swivel joint
Figure 3.15 Designed suction cup swivel joint
Device selection
Our project uses the Mitsubishi FX5U-64MT/DSS PLC, both to broaden our understanding of programmable logic controllers and to gain experience with PLC brands other than Siemens.
The Mitsubishi FX5U series, particularly the FX5U-64MT/DSS model, features a compact design with 32 inputs and 32 outputs, making it suitable for diverse systems. These PLCs support multiple programs and function blocks, enhancing programming efficiency. They also provide built-in 12-bit analog I/O (2CH analog input and 1CH analog output) whose parameters can be set without programming, and an SD card slot that allows convenient and rapid program and device updates.
Figure 3.16 PLC FX5U-64MT/DSS
The following are some basic parameters of this type of PLC:
Input parameters: 32 points – 24VDC, pins X0 – X7 can be set from 50 to 200 kHz pulse input
Output parameters: 32 points – Transistor / Source output
Below is the pinout of the FX5U-64MT/DSS PLC:
Figure 3.17 FX5U-64MT/DSS PLC Pinout
3.3.2 Camera Xiaovv HD USB Webcam
With the role of collecting images and sending data to the PC, the camera selected by the group has the following parameters:
Figure 3.18 Camera XIAOVV HD USB Webcam
Video format: H.264 H.265 MJPG NV12 YUY2
OS Support: Windows 7/8/10, Mac OS 10.5 and above
The honeycomb power supply converts AC power to DC power by using a ferromagnetic transformer to reduce the voltage, followed by a rectifier and a linear regulator IC that generate DC voltage levels appropriate for the various loads.
Figure 3.19 Common DC source module
The travel switch functions similarly to conventional switches, activating when the robot reaches a predetermined limit Unlike traditional switches, it does not retain its original state and only changes upon contact Its primary purpose is to establish the initial position of the delta robot by allowing the robot arm to move and engage each limit switch, subsequently stopping and sending a pulse back to the starting position.
The structure requires external force, specifically from a wheel, to minimize wear on its components It utilizes a 3-pin switch configuration, which includes one common (COM) pin, one pin creating normally closed (NC) contacts, and another pin forming normally open (NO) contacts.
Basic parameters of the limit switch:
3.3.5 T-shaped vacuum cup and pump motor
The T-shaped vacuum cup is a detail that can help absorb light objects easily and at high speed like products designed by the group for classification
A vacuum motor is essential for facilitating the intake and exhaust of air, enabling suction on solid objects The team selected a compact and efficient engine tailored to their requirements for handling relatively lightweight items.
The Arduino's role in this project is to generate pulses that drive the conveyor motor, enabling efficient operation Since the first semester, the team has utilized this device to reduce costs effectively.
Circuit feeder: 5VDC from USB port or external power
Digital I/O pins: 54 (15 pins can output PWM pulses)
DC current on each I/O pin: 20mA
Flash Memory: 256 KB (8 KB used for bootloader)
3.3.7 Step motor and TB6600 driver
The stepper motor is used to drive the conveyor. Its specifications are as follows:
Figure 3.24 Stepper motor for conveyor
To control the stepper motor we need a driver module; the TB6600 driver is used, with the following specifications:
Figure 3.25 Driver for stepper motor TB6600
Input: Optical isolation, high speed
- DC +: Connect to the source from 9 - 40VDC
- DC-:Connect to the negative voltage of the source
- Pins A+ and A-: Connect to the coils of the motor
- Pins B+ and B-: Connect to the remaining windings of the motor
- PUL+ pin: speed control pulse signal (+5V) from the BOB to the TB6600
- PUL- pin: speed control pulse signal (-5V) from the BOB to the TB6600
- DIR+: direction reversing signal (+5V) from the BOB to the TB6600
- DIR-: direction reversing signal (-5V) from the BOB to the TB6600
- ENA+ and ENA- pins: enable signal; applying a signal to these two pins switches the motor's holding torque on or off
- The control signals can be wired as common positive (+) or common negative (-)
Figure 3.26 TB6600 connection diagram with stepper motor
3.3.8 Driver MR-J2S-10A
For some types of motors we need a dedicated driver to be able to control it correctly Below is the MR-J2S-10A driver to control AC servo motors
The wiring diagram is arranged in the following figure:
There are two main types of drivers: pulse-driven and network-driven The structure of a servo driver is fundamentally similar to that of an inverter; however, servo drivers possess built-in capabilities to read encoders attached to servo motors, enabling more precise control In contrast, only certain high-end inverters can read encoders and typically require an additional option card for this function Additionally, servo drivers are generally compatible only with servo motors of the same brand and within a specific power range, whereas inverters can be used with motors from any brand, regardless of capacity.
The familiar and common parameters when adjusting the servo driver:
Calculating the electronic gear ratio for the PLC, we have the formula:
- No is the speed of the servo (rpm)
- Pt is the resolution of encoder in servo
Here is the setting of parameters P03 and P04: with a PLC output of 100 kHz, the PLC outputs 100,000 pulses per second and the servo can run at 600 rpm; when the PLC sends 10,000 pulses, the servo turns one revolution.
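As a worked example (assuming an encoder resolution of Pt = 131072 pulses per revolution for the HC-MFS motor, a value not stated explicitly above): to make one motor revolution correspond to 10,000 command pulses, the electronic gear is set so that CMX/CDV = Pt / 10,000 = 131072 / 10,000. With the PLC's 100 kHz output, 100,000 pulses per second then correspond to 10 revolutions per second, i.e. the 600 rpm quoted above.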
3.3.9 AC Servo Motor HC – MFS13
A servo motor is a specialized motor designed to deliver precise mechanical energy to a moving system It functions as a closed-loop motor, enabling accurate control of stop operations and adherence to control commands, while providing essential position feedback for effective servo operation.
Servo motors are divided into two main types
- The motor used for direct current is called DC servo
- Motors for alternating current are called AC servos
Figure 3.29 Servo motor HC – MFS13
The project uses the AC servo HC-MFS13, a type of motor commonly used in industrial machinery applications
Servo amplifier MR-J2S-10A/A1/B/B1/CP/CP1/CL/CL1
Speed 3000rpm (Small inertia, low power)
Control method: 3-phase full-phase rectifier/IGBT/PWM
The structure of AC servo motor consists of 3 parts: stator, rotor (usually used as permanent magnet) and encoder
- The stator consists of a coil wound around the core, which is powered to provide the force needed to rotate the rotor
- The rotor is made of permanent magnets with strong magnetic fields
The encoder is mounted on the rear of the motor to accurately feedback the speed and position of the motor to the controller
The motor used here is a three-phase synchronous motor with permanent magnets and a high-resolution encoder for precise control. AC servo motors typically operate in three primary control modes: speed, position, and torque (moment); in each mode the parameters must be set according to the application and the load.
- Able to control the motor at high speed well
- Smooth control over all speed zones
- The motor operates without oscillation
- Little heat is generated during operation
- High-precision position control (depending on the accuracy of the encoder)
- The motor uses torque, low inertia and low noise
- The drive parameters must also be adjusted according to the controller
The reduction gearbox is a direct-coupled, constant-ratio transmission mechanism, often used together with a motor. The gearbox serves two main purposes:
- Increase torque: Connecting the reducer to the motor helps to increase the torque, thereby increasing the load capacity and stiffness of the output shaft of the reducer
A gear reduction gearbox features straight or inclined gears that interlock according to a specific gear ratio, producing a consistent number of revolutions when powered Typically constructed from robust materials like cast iron, stainless steel, or steel, the gearbox's box-shaped or cylindrical shell is designed to withstand corrosion and minimize impact damage.
When using a reduction gearbox, the torque will rotate according to the designed gear ratio This helps to generate the number of revolutions at the desired speed of the driver
In various systems, gear reducers serve as crucial intermediaries between engines and machinery, including conveyor belts and conveyors These reducers are particularly prevalent in automobile and motorcycle engines, highlighting their essential role in efficient power transmission.
The total mass that the motor must support is 360g, which includes a 60g plastic arm, 130g from two carbon rods, and approximately 70g from the engine and accessories, with a grip limit of 100g.
Based on the above parameters, the required torque is:
To prevent motor overload and allow for future development, the group applied a safety factor of 2, resulting in a minimum required torque of 1.06056 N·m. Consequently, the chosen motor meets the hardware design specifications.
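For reference, a rough check consistent with these figures (assuming an effective lever arm of about 0.15 m, a value not stated above): T = m·g·L ≈ 0.36 kg × 9.81 m/s² × 0.15 m ≈ 0.53 N·m, and applying the safety factor of 2 gives the stated minimum of about 1.06 N·m.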
Step motor NEMA 17 size 42 x 48mm (Step motor) is a type of stepper motor commonly used when making 3D printers, mini CNCs and is often installed with GT2 pulleys
Face size x Length 42 mm x 48 mm
Figure 3.33 Connection diagram Drivers and Motors
System construction
3.4.1 Design and construction of electrical cabinets
When constructing electrical cabinets, it is essential to determine the number of loads and branches so that circuit breakers and conductors can be calculated correctly. Economic and technical considerations must be balanced, as selecting equipment with an excessively high rating inflates the product cost. When designing industrial electrical cabinets, future expansion or changes of the equipment must also be considered, and the design must be done carefully to avoid errors that would require starting the whole process over. After completing the internal wiring, the results are as follows:
- 3 Drivers MR-J2S-10A for 3 servo motors
- 1 Driver TB6600 for Step motor NEMA 17
Diagram of connecting devices with PLC:
Diagram of connecting the source:
Before carrying out the actual design of the model, the team did 3D simulations with Solidworks software to draw appropriate calculations on the hardware structure
Figure 3.38 Robot model designed on solidworks
After the modeling, the experimental Delta robot was built exactly according to the designed model
Figure 3.39 Experimental robot model designed
Algorithms and programs to control the robot
The robot control sequence will be conducted step by step as follows:
- First, Python is connected to the PLC through the KEPServerEX middleware using the OPC protocol (a minimal connection sketch is given after the block diagram below)
- Then proceed to select the mode to control the robot
In Auto mode, the robot operates autonomously by processing products on a moving conveyor A camera captures images of the conveyor, determining the center and angle of each object This data is sent to a PC to identify the color, size, and position of the products Subsequently, the information is relayed to software that controls the robot to classify and arrange the items efficiently.
Figure 3.40 Robot control sequence block diagram
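A minimal sketch of the Python-to-PLC link through KEPServerEX, written with the open-source opcua (FreeOpcUa) client; the endpoint URL and the tag names are assumptions for illustration and depend on the channel/device configuration actually created in KEPServerEX:

from opcua import Client

client = Client("opc.tcp://localhost:49320")      # assumed KEPServerEX OPC UA endpoint
client.connect()
try:
    # Hypothetical tags exposed by the server for the PLC program
    mode_tag = client.get_node("ns=2;s=Channel1.Device1.AutoMode")
    theta_tag = client.get_node("ns=2;s=Channel1.Device1.Theta1")

    mode_tag.set_value(True)                      # switch the PLC program to Auto mode
    theta_tag.set_value(12.5)                     # send a joint angle computed in Python
    print(theta_tag.get_value())                  # read back the value stored in the PLC
finally:
    client.disconnect()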
3.5.2 Image training and image processing sequence
First, we perform image training to obtain the detection model, using the YOLOv7-tiny algorithm trained and stored on Google Colab:
● Step 1: Get raw image data: 1200 photos/object, total 9600 photos
● Step 2: Manually label 100 images per object, then use the YOLOv7-tiny model to quickly train on this small dataset. After this initial training, label all remaining raw image data, producing a complete dataset of 9,600 photos paired with 9,600 labels.
● Step 3: Train on the complete dataset with the YOLOv7-tiny model for 300 epochs, check the results, and choose the weight file with the best training result
Figure 3.41 Image dataset training results
Figure 3.42 Accurate object prediction results
After obtaining the trained model, we process the image of each object as follows (a code sketch of the main steps is given after the list):
● Step 1: Capture an image with the camera and run it through the YOLOv7-tiny model for object detection. The output includes the object's name and the pixel coordinates of its center.
● Step 2: Crop the image area containing the object in pixels, put that image area into processing
● Step 3: Filter the conveyor belt green to separate the object from the background of the conveyor
● Step 4: Change to black and white image (Threshold), thresholding black background, white object
● Step 5: Apply contour to recalculate the coordinates of the center with the angle of inclination relative to the horizontal of the object
● Step 6: The coordinates of the center of the object (pixels), the rotation angle of the object (rad)
● Step 7: Convert the coordinates from the 2D camera system (pixels) to the 3D robot system (x, y, z)
● Step 8: Calculate the angle of joints with the robot inverse kinematics
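A minimal Python/OpenCV sketch of steps 3–6 above; the HSV range used for the green conveyor is an illustrative assumption that must be calibrated:

import cv2
import numpy as np

def center_and_angle(crop_bgr):
    """Return the object center (pixels) and rotation angle (rad) inside a cropped box."""
    hsv = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2HSV)
    green = cv2.inRange(hsv, np.array([35, 60, 60]), np.array([85, 255, 255]))
    mask = cv2.bitwise_not(green)                          # object white, conveyor black
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)           # keep only the object outline
    (cx, cy), _size, angle_deg = cv2.minAreaRect(largest)
    return (cx, cy), float(np.deg2rad(angle_deg))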
3.5.3 Method of converting frame coordinates to coordinates of robot system
Call the coordinate axes of the camera frame O'x'y' and the coordinate system of the robot attached to the base Oxy
In this example, the image frame captured by the camera has a resolution of 640x480 pixels, i.e. the horizontal and vertical dimensions are 640 and 480 pixels respectively. To convert pixels into centimeters, assume the horizontal dimension corresponds to 10 cm; dividing 640 pixels by 10 cm gives 64 pixels per centimeter.
To convert coordinates from the O'x'y' system to the Oxy system, we first determine the pixel coordinates (xo, yo) of the robot origin O in the O'x'y' system, and then apply the conversion to any given point.
Consider a point A with pixel coordinates (xa, ya), while O denotes the robot's origin. For the robot to move to point A, the coordinates of the vector OA must be calculated. As illustrated in the figure, the axes Oy and O'y' point in the same direction, while the axes Ox and O'x' point in opposite directions. Consequently, the vector OA can be expressed as (Px, Py), where Px = xa − xo (in pixels) and Py = ya − yo (in pixels). Px and Py are also the coordinates of the arm in the inverse kinematics equation; they are then converted to centimeters on the actual plane by dividing by the 64 pixels-per-centimeter scale factor.
In our image processing we work only in 2D space, so only the coordinates Px and Py need to be determined, while Pz remains a fixed constant that does not affect the calculation. From these coordinates we can calculate the rotation angles θ1, θ2 and θ3, enabling the robot to move to the desired position.
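A minimal sketch of this conversion, assuming the 64 pixels-per-centimeter scale derived above and a hypothetical pixel position of the robot origin O (in practice found by calibration):

PIXELS_PER_CM = 64.0           # 640 px assumed to span 10 cm of the conveyor
X_O, Y_O = 320.0, 240.0        # hypothetical pixel coordinates of the robot origin O

def pixel_to_robot(x_a, y_a):
    """Convert a detected pixel point (x_a, y_a) to (Px, Py) in cm in the robot frame."""
    px = (x_a - X_O) / PIXELS_PER_CM   # flip the sign here if calibration shows Ox and O'x' are mirrored
    py = (y_a - Y_O) / PIXELS_PER_CM   # Oy and O'y' point in the same direction
    return px, py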
3.5.4 Design of control interface for robot
User interface is designed for the operator to monitor and observe the operating status of the system
For the Delta robot system, the team designed the interface with Python support software and QT Designer. The interface enables tracking and classification of products by color and size, with YOLO used for identification, and offers two operational modes: Manual and Auto. In Manual mode, forward kinematics is used: the operator enters joint angles that define the end-effector position. In Auto mode, the camera captures an image, determines the object's coordinates, and transmits this data to the PC; the PC then communicates with the PLC, which generates pulses to control the servos so that the robot arm picks up the object from the determined position.
Figure 3.44 Interface designed for Delta robot
The team designed an interface with distinct functional areas: the school logo and robot name at the top; below them, the camera screen displaying the conveyor belt and the objects' deviation angles; a control button area with the AUTO, MANUAL, and HOME reset buttons; an "Operating devices" section indicating the system's mode; a "Status" area showing the system's operational status; and a "Products" section that enables the operator to select the products to be sorted.
“Manual Control” allows the operator to adjust the coordinates at which the arm wants to move to the object
OPERATION RESULTS
Hardware performance results
After the process of ideation, design and construction of the model for the topic, the results will be as follows:
Figure 4.1 Front system position distribution
After completing the model, the entire system was re-wired to ensure aesthetics as well as electrical safety, meeting industrial requirements
Figure 4.2 Electrical cabinets are guaranteed for safety and aesthetics
Performance results on software
The image processing system is capable of identifying round and rectangular shapes in three distinct colors: blue, red, and yellow It also determines the angle of the object in relation to the horizontal plane and identifies the coordinates of the object's center.
The robot efficiently manages the processes of waiting for, picking up, and placing objects, before returning to its original position Its rotating joint allows for precise movements, enabling the robot to pick up and arrange items effectively.
The products are identified in turn as shown below:
Figure 4.3 Product identification in red rectangle
Figure 4.4 Rectangular yellow product identification
Figure 4.5 Product identification in blue rectangle
Figure 4.6 Product identification in red circle
Figure 4.7 Product identification in blue circle
Figure 4.8 Product identification in yellow circle
Figure 4.9 Product identification in error rectangle
Figure 4.10 Product identification in error circle
Figure 4.11 No product recognition in yellow rectangle
Figure 4.12 No product recognition in blue circle
All of the above results were obtained under normal lighting conditions: we tested 100 samples divided equally among the objects, and the success rate was 98%
We then also tested under glare conditions, where the success rate dropped to 60%; the yellow, error, and red products were affected the most
Figure 4.13 Red Rectangle product identification in glare condition
Figure 4.14 Yellow Rectangle product identification in glare condition
Figure 4.15 Blue Rectangle product identification in glare condition
Figure 4.16 Error Rectangle product identification in glare condition
Similarly, the success rate under low-light conditions is 50%; blue products are affected the most because this color is close to the conveyor color
Figure 4.17 Red Rectangle product identification in low light condition
Figure 4.18 Yellow Rectangle product identification in low light condition
Figure 4.19 Blue Rectangle product identification in low light condition
Figure 4.20 Blue Rectangle product identification in low light condition
We conducted tests on the robot under standard lighting conditions to achieve optimal performance Throughout the program execution, we noted that the robot effectively picked up and released objects as designed.
Figure 4.21 The robot starts to suck things
Figure 4.22 The robot sucks and rotates discovered objects to arrange it in a neat box
Figure 4.23 Robot drops objects into the 2nd sorting box
Figure 4.24 The robot returns to the home position after dropping the object
Similar to other colors, a situation with a blue circle:
Figure 4.25 The robot starts to suck things
Figure 4.26 Objects are dropped into the 3rd sorting box
Setting the conveyor speed too fast or too slow causes timing deviations when picking objects, so the robot may fail to pick them up at their center. This issue can be addressed by adjusting the PLC code accordingly.
Figure 4.27 The robot folds the object off-center due to the conveyor speed deviation
However, the Robot's swivel joint is not really effective and errors often occur, because the hardware and code are not optimized
The object acquires a deviation angle while being rotated, due to the influence of the conveyor belt and the rotational resistance from the traction force of the air duct; because the rotation is not fed back, the resulting error is uncontrolled.
Figure 4.28 Red Rectangle product is rotated at a wrong angle from the drop box
Figure 4.29 Yellow Rectangle product is rotated at a wrong angle from the drop box
Figure 4.30 Blue Rectangle product is rotated at a wrong angle from the drop box
CONCLUSIONS AND DEVELOPMENT ORIENTATIONS
Conclusion
During nearly 4 months of implementing the project, the group has achieved the following specific results:
- Design and construction of hardware systems to ensure aesthetics and electrical safety
- Calculating kinematics for Delta robot arm
- Using PLC to connect and control devices in the system
- Successfully applying image processing to identify, classify, and sort products by color and size
- The robot arm system is relatively stable and very solid, no longer shaking
With a relatively short project implementation time, after the system was completed, the group also found several shortcomings that could not be overcome as follows:
- Picking and dropping objects have not reached absolute accuracy
Reused devices often encounter numerous hardware issues, leading to significant time spent on identifying and replacing compatible components, which can be costly.
- The fourth joint is inefficient due to many hardware influences
- The camera works best in normal lighting conditions, when conditions are not satisfactory, there are errors because of reduced accuracy
- There is no manual mode yet
Development
With the potential of this topic, the system still has many development directions that can be achieved in the future, such as:
- Positions that do not change should be properly calibrated to avoid deviations
To ensure maximum accuracy, the conveyor must maintain a consistent speed to precisely determine its position Additionally, it is essential to calculate and accurately measure the deviation between the robot axis and the camera axis.
- At the fourth joint , it is recommended to use a motor with a gearbox to combat the rotational deviation processes affected by external factors
- Adding manual mode to deploy given locations, to simulate execution