Robot Manipulators, Trends and Development (2010), Part 13

RobotManipulators,TrendsandDevelopment472   T T y I x I qpI            (6) Where   qp are used to obtain the gradient and they are known as Sobel operators. 6.2 Normals As the normals are perpendicular to the tangents, the tangents can be finded by the cross product, which is parallel to   T qp 1,, . Thus we can write the normal like:   T qp qp n 1,, 1 1 22    (7) Assuming that z component of the normal to the surface is positive. 6.3 Smoothness and rotation The smoothing, in few words can be described as avoiding abrupt changes between normal and adjacent. The Sigmoidal Smoothness Constraint makes the restriction of smoothness or regularization forcing the error of brightness to satisfy the matrix rotation  , deterring sudden changes in direction of the normal through the surface. With the normal smoothed, proceed to rotate these so that they are in the reflectance cone as shown in Figure 7. Fig. 7. Rotation of the normals in the reflectance cone Where k ji n , are the normals smoothed. k ji n  , are the normals after the smoothness and before the rotation. 1 , k ji n are the normals after the rotation of  grades. With the normals smoothed and rotated with the smoothness constraints, this can result in having several iterations, which is represented by the letter k. 6.4 Shape index Koenderink (Koenderink, &Van Doorn, 1992) separated the shape index in different regions depending on the type of curvature, which is obtained through the eigenvalues of the Hessian matrix, which will be represented by 1 k and 2 k as showing the equation 7. 12 12 12 arctan 2 kk kk kk       (8) The result of the shape index  has values between [-1, 1] which can be classified, according to Koenderink it depends on its local topography, as shown in Table 1. 
Region               Shape index range
Cup                  [−1, −5/8)
Rut                  [−5/8, −3/8)
Saddle rut           [−3/8, −1/8)
Saddle/Point/Plane   [−1/8, 1/8)
Saddle ridge         [1/8, 3/8)
Ridge                [3/8, 5/8)
Dome                 [5/8, 1]

Table 1. Classification of the shape index

Figure 8 shows the local form of the surface depending on the value of the shape index, and Figure 9 shows an example of the SFS vector.

Fig. 8. Representation of the local forms in the shape index classification.
Fig. 9. Example of an SFS vector

7. Robotic Test Bed
The robotic test bed is built around a KUKA KR16 industrial robot, as shown in Figure 10. It also comprises a visual servo system with a ceiling-mounted Basler A602fc CCD camera (not shown).
Fig. 10. Robotic test bed

The work domain comprises the pieces to be recognised, which are also illustrated in Figure 10. These workpieces are geometric pieces with different surface curvatures, shown in detail in Figure 11:

Rounded-Square (RS), Pyramidal-Square (PSQ), Rounded-Triangle (RT), Pyramidal-Triangle (PT), Rounded-Cross (RC), Pyramidal-Cross (PC), Rounded-Star (RS), Pyramidal-Star (PS)

Fig. 11. Objects to be recognised
8. Experimental results
The object recognition experiments with the Fuzzy ARTMAP (FAM) neural network were carried out using the working pieces described above. The network parameters were set for fast learning (β = 1) and a high vigilance parameter (ρab = 0.9). Three experiments were carried out. The first considered only the BOF, taking data from the contour of the piece; the second considered information from the SFS algorithm, taking into account the reflectance of the light on the surface; and the third was performed using a fusion of both methods (BOF+SFS).

8.1 First Experiment (BOF)
For this experiment, all pieces were placed within the workspace under controlled illumination at different orientations, and this data was used to train the FAM neural network. Once the network was trained with these patterns, it was tested by placing the pieces at different orientations and locations within the workspace. Figure 12 shows some examples of the objects' contours.

Fig. 12. Different orientations and positions of the square object.

The objects were recognised in all cases, with failures occurring only between rounded and pyramidal objects of the same cross-section. In these cases there was always confusion because the network learned only contours, and since such object pairs differ only in the type of surface, their contours are very similar.

8.2 Second Experiment (SFS)
For the second experiment, using the reflectance of the light over the surface of the objects (SFS method), the neural network could recognise and differentiate between rounded and pyramidal objects. During training it was determined that only one vector was needed for the rounded objects to be recognised, because the change in their surface was smooth.
For the pyramidal objects, three different patterns were required during training: one for the square and the triangle, one for the cross, and one for the star. The reason is that the surfaces of the pyramidal objects differ sufficiently from one another.

8.3 Third Experiment (BOF+SFS)
For the last experiment, data from the BOF was concatenated with data from the SFS. The data was processed to meet the network's requirement of inputs within the [0, 1] range. The results showed a 100% recognition rate when placing the objects at different locations and orientations within the viewable workspace area.

To verify the robustness of the method to scaling, the distance between the camera and the pieces was modified. The original size was taken as 100%, so that a 10% reduction,
for instance, meant that the piece size was reduced by 10% of its original image size. Different inclination values, in increments of 5 degrees, were considered up to an angle θ = 30 degrees (see Figure 13 for reference).

Fig. 13. Plane modification.

The results obtained in 5-degree steps are shown in Table 2.

Degrees  R.S.  P.SQ  R.T.  P.T.  R.C.  P.C.  R.S.  P.S.
5        100   100   100   100   100   100   100   100
10       100   100   100   100   100   100    87   100
15        98   100    96   100    82   100    72   100
20        91   100    81*   97*   58   100    51*   82*
25        53   100    73*   93*   37*   91*   44*   59*
30        43   100    54*   90*    4*   83*   20*   26*

Table 2. Recognition results

The unmarked numbers below 100 are errors due to the BOF algorithm, the numbers marked with an asterisk are errors due to the SFS algorithm, and the entries marked with a distinct style in the original are errors due to both the BOF and SFS algorithms. In the column headers, the first letter is the initial of the curvature of the object and the second is the form of the object, for instance RS (Rounded Square) or PT (Pyramidal Triangle).

Figure 14 shows the behaviour of the ANN recognition rate at different angles.

Fig. 14. Recognition graph

Figure 14 shows that the pyramidal objects present fewer problems for recognition than the rounded objects.

9. Conclusions and future work
The research presented in this chapter describes an alternative methodology to integrate a robust invariant object recognition capability into industrial robots, using image features from the object's contour (boundary object information) and its form (i.e., type of curvature or topographical surface information). Both features can be concatenated to form an invariant vector descriptor which is the input to an Artificial Neural Network (ANN) for learning and recognition purposes.
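The BOF+SFS descriptor fusion described in Section 8.3 can be sketched as follows. This is a minimal illustration; the min-max scaling and the function name are assumptions, since the chapter does not specify how the data was mapped into the [0, 1] range:

```python
import numpy as np

def fused_descriptor(bof, sfs):
    """Concatenate contour (BOF) and surface (SFS) features after scaling
    each part into the [0, 1] range required by the Fuzzy ARTMAP inputs."""
    def to_unit(v):
        v = np.asarray(v, dtype=float)
        lo, hi = v.min(), v.max()
        # A constant vector maps to zeros to avoid division by zero
        return (v - lo) / (hi - lo) if hi > lo else np.zeros_like(v)
    return np.concatenate([to_unit(bof), to_unit(sfs)])
```

For example, fused_descriptor([0, 5, 10], [1, 3]) yields [0, 0.5, 1, 0, 1], a single invariant vector that the network receives as one input pattern.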
Experimental results were obtained using two sets of four 3D working pieces with different cross-sections: square, triangle, cross and star. One set had rounded surface curvature; the other had flat surfaces, and these objects were named the pyramidal type. Training the neural network with the BOF vector alone, it was demonstrated that all pieces were recognised irrespective of their location and orientation within the viewable area, since only the contour was taken into consideration. With this option alone it is not possible to differentiate objects of the same cross-section but different surface, such as the rounded and pyramidal shaped objects. When both types of information were concatenated (BOF + SFS), the robustness of the vision system improved, recognising all the pieces at different locations and orientations, and even with 5 degrees of inclination a 100% recognition rate was obtained in all cases. The current results were obtained in a light-controlled environment; future work will look at variable lighting, which may impose some considerations for the SFS algorithm. It is also intended to work on on-line retraining so that recognition rates are improved, and on autonomous grasping of the parts by the industrial robot.

10. Acknowledgements
The authors wish to thank the Consejo Nacional de Ciencia y Tecnologia (CONACyT) for support through Research Grant No. 61373 and for sponsoring Mr. Reyes-Acosta during his MSc studies.

11. References
Biederman, I. (1987). Recognition-by-Components: A Theory of Human Image Understanding. Psychological Review, 94, pp. 115-147.
Peña-Cabrera, M.; Lopez-Juarez, I.; Rios-Cabrera, R. & Corona-Castuera, J. (2005). Machine Vision Approach for Robotic Assembly. Assembly Automation, Vol. 25, No. 3, August 2005, pp. 204-216.
Horn, B.K.P. (1970). Shape from Shading: A Method for Obtaining the Shape of a Smooth Opaque Object from One View. PhD thesis, MIT.
Brooks, M. (1983).
Two results concerning ambiguity in shape from shading. In AAAI-83, pp. 36-39.
Zhang, R.; Tsai, P.; Cryer, J.E. & Shah, M. (1999).
Shape from Shading: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 21, No. 8, August 1999, pp. 690-706.
Koenderink, J. & Van Doorn, A. (1992). Surface shape and curvature scale. Image and Vision Computing, Vol. 10, pp. 557-565.
Gupta, M.M. & Knopf, G. (1993). Neuro-Vision Systems: a tutorial. A selected reprint volume, IEEE Neural Networks Council Sponsor, IEEE Press, New York.
Worthington, P.L. & Hancock, E.R. (2001). Object recognition using shape-from-shading. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(5), pp. 535-542.
Yüceer, C. & Oflazer, K. (1993). A rotation, scaling and translation invariant pattern classification system. Pattern Recognition, Vol. 26, No. 5, pp. 687-710.
Perantonis, S.J. & Lisboa, P.J.G. (1992). Translation, Rotation, and Scale Invariant Pattern Recognition by High-Order Neural Networks and Moment Classifiers. IEEE Transactions on Neural Networks, Vol. 3, No. 2, March 1992.
You, S.D. & Ford, G.E. (1994). Network model for invariant object recognition. Pattern Recognition Letters, 15, pp. 761-767.
Gonzalez, E. & Feliu, V. (2004). Descriptores de Fourier para identificacion y posicionamiento de objetos en entornos 3D. XXV Jornadas de Automatica, Ciudad Real, September 2004.
Lowe, D.G. (2004). Distinctive Image Features from Scale-Invariant Keypoints. Computer Science Department, University of British Columbia, Vancouver, B.C., Canada, January 2004.
Hu, M.K. (1962). Visual pattern recognition by moment invariants. IRE Trans. Inform. Theory, IT-8, pp. 179-187.
Montenegro, J. (2006).
Hough-transform based algorithm for the automatic invariant recognition of rectangular chocolates. Detection of defective pieces. Universidad Nacional de San Marcos, Industrial Data, Vol. 9, No. 2.
Towell, G.G. & Shavlik, J.W. (1994). Knowledge-based artificial neural networks. Artificial Intelligence, Vol. 70, Issue 1-2, pp. 119-166.
Feldman, R.S. (1993). Understanding Psychology, 3rd edition. McGraw-Hill, Inc.
Carpenter, G.A. & Grossberg, S. (1987). A massively parallel architecture for a self-organizing neural pattern recognition machine. Computer Vision, Graphics, and Image Processing, 37, pp. 54-115.
Carpenter, G.A.; Grossberg, S. & Reynolds, J.H. (1991). ARTMAP: Supervised Real-Time Learning and Classification of Nonstationary Data by a Self-Organizing Neural Network. Neural Networks, pp. 565-588.

Autonomous 3D Shape Modeling and Grasp Planning for Handling Unknown Objects

Yamazaki Kimitoshi (*1), Masahiro Tomono (*2) and Takashi Tsubouchi (*3)
*1 The University of Tokyo, *2 Chiba Institute of Technology, *3 University of Tsukuba

1. Introduction
Handling a hand-sized object is one of the fundamental abilities for a robot that works in home and office environments. Such an ability enables the robot to perform various tasks, for instance carrying an object from one place to another. Conventionally, research that coped with such challenging tasks has taken several approaches. One is to define detailed object models in advance (Miura et al., 2003), (Nagatani & Yuta, 1997) and (Okada et al., 2006): 3D geometrical models or photometric models were used to recognize the target objects with vision sensors, and the robots grasped their target objects based on manually specified handling points.
Other researchers took the approach of attaching information to the target objects by means of ID tags (Chong & Tanie, 2003) or QR codes (Katsuki et al., 2003). These works focused mainly on what kind of object information should be defined. Such approaches share an essential problem: a new target object cannot be added without heavy programming or special tools. Because there are plenty of objects in the real world, robots should be able to extract the information needed for picking up objects autonomously. Motivated by this way of thinking, this chapter describes an approach different from conventional research. Our approach follows two policies for autonomous operation. The first is to create a dense 3D shape model from image streams (Yamazaki et al., 2004). The second is to plan various grasp poses from the dense shape of the target object (Yamazaki et al., 2006). By combining the two, the robot is expected to be capable of handling objects in daily environments even when the target is unknown. To cover all these characteristics, the following conditions are assumed in our framework:
- The position of the target object is given
- No additional information on the object or the environment is given
- No information about the shape of the object is given
- No information on how to grasp it is given

Under this framework, robots will be able to add new handling targets without manually specified shapes or additional markers, with one constraint: the object must have some texture on its surface for object modeling. The major purpose of this article is to present the whole framework of autonomous modeling and grasp planning. Moreover, we illustrate the approach by implementing a robot system that can handle small objects in an office environment.
In the experiments we show that the robot could find various grasp candidates autonomously and select the best one on the spot. The object models and their grasp configurations were feasible enough to be easily reused once acquired.

2. Issues and approach
2.1 Issues in combining modeling and grasp planning
Our challenge can roughly be divided into two phases: (1) the robot creates an object model autonomously, and (2) the robot detects a grasp pose autonomously. An important point is that these two processes should be connected by a proper data representation. To achieve this, we apply a model representation named "oriented points": an object model is represented as dense 3D points, each carrying the normal of the object surface at that point. Because this representation is quite simple, it is advantageous for autonomous modeling. It also has an advantage in grasp planning, because the normal information makes it possible to plan grasp poses effectively. One of the issues in planning is to prepare sufficient countermeasures against the shape error of an object model obtained from a series of images. We take the approach of searching for good contact areas that are sufficient to cancel this error. The object modeling method is described in Section 3, and the grasp planning method in Section 4.

2.2 Approach
To capture the whole 3D shape of an object, the sensor has to observe the object from various viewpoints. We therefore mount a camera on a robotic arm, so that multi-viewpoint sensing can be achieved by moving the arm around the object. From the viewpoint of shape reconstruction, there is a concern that the reconstruction process tends to be less stable than with a stereo camera or a laser range finder. However, a single camera is well suited to mounting on a robotic arm because of its simple hardware and light weight.
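Section 2.1 notes that per-point normals make grasp planning effective. For the parallel-jaw hand described next, this can be illustrated with a simple antipodal-pair search over oriented points. This is a generic heuristic, not the authors' planner; the jaw width and alignment threshold are assumed values:

```python
import numpy as np

def antipodal_pairs(points, normals, max_width=0.08, align=0.95):
    """Return index pairs (i, j) of oriented points that a parallel-jaw
    gripper could pinch: the points fit within the jaw opening and their
    outward normals are roughly opposed along the line joining them."""
    pairs = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = points[j] - points[i]
            dist = np.linalg.norm(d)
            if dist == 0.0 or dist > max_width:
                continue
            u = d / dist  # unit vector from point i to point j
            # Outward normals must face the approaching jaws
            if np.dot(normals[i], -u) > align and np.dot(normals[j], u) > align:
                pairs.append((i, j))
    return pairs
```

For two points on opposite faces of a small object, with outward normals facing away from each other, the pair is accepted; points on adjacent faces are rejected because their normals are not opposed along the grasp axis.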
The hand we use for object grasping is a parallel-jaw gripper. Because one of our purposes is to develop a mobile robot that can pick up objects in the real world, such a compact hand is advantageous. In grasp planning we consider grasping stability to be more important than the dexterous manipulation that takes rigorous contact between fingers and object into account, so we assume the robot's fingers are fitted with a soft cover that conforms to irregular object surfaces. The important challenge is to find a stable grasp pose from a model that includes shape error. Efficient grasp search is also important because the model contains a relatively large amount of data.

3. Object Modeling
3.1 Approach to modeling
When a robot gathers object information for grasping, the main information is the 3D shape. Conventionally, many researchers focused on grasping strategies for picking up objects, and the object model representation was assumed to consist of simple predefined shape primitives such as boxes, cylinders and so on. One issue with these approaches is that such models are difficult for the robot to acquire autonomously. In contrast, we take the approach of reconstructing the object shape on the spot. This means the robot can grasp any object whose model can be acquired using the sensors mounted on the robot. Our method needs only image streams captured by a movable single camera. A 3D model is reconstructed based on SFM (structure from motion), which provides a sparse model of the object from the image streams. In addition, by using motion stereo and 3D triangle-patch-based reconstruction, the sparse shape is refined into dense 3D points. Because this representation consists of a simple data structure, the model can be acquired autonomously by the robot relatively easily. Moreover, unlike shape-primitive approaches, it can represent objects of various shapes.
One of the issues is that the object model can contain shape errors accumulated through the SFM process. To reduce their influence on grasp planning, each 3D point of the reconstructed dense shape is given a normal vector standing on the object surface. Oriented points are similar to the "needle diagram" proposed by Ikeuchi (Ikeuchi et al., 1986), a representation used for data registration or detection of object orientation. Another issue is data redundancy. Because SFM-based reconstruction uses multiple images, the reconstructed result can contain far more points than are needed to plan grasp poses. To cope with this redundancy, we apply voxelization and a hierarchical representation to reduce the data. The method described in Section 5 improves planning time significantly.

Fig. 1. Surface model reconstruction (image streams → (1) stereo pair → (2) triangle → (3) oriented points)

3.2 Modeling Outline
Fig. 1 shows the modeling outline. An object model is acquired according to the following procedure: first, image feature points are extracted and tracked from a small area [...]
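The voxelization used above to thin the oriented-points model can be sketched as follows. This is an illustrative single-level reduction step with an assumed voxel size, not the authors' hierarchical implementation:

```python
import numpy as np

def voxel_downsample(points, normals, voxel=0.01):
    """Collapse a dense oriented-points model onto a voxel grid: keep one
    representative per occupied voxel (mean position, renormalised mean normal)."""
    keys = np.floor(points / voxel).astype(int)
    buckets = {}
    for key, p, n in zip(map(tuple, keys), points, normals):
        buckets.setdefault(key, []).append((p, n))
    out_p, out_n = [], []
    for items in buckets.values():
        mean_p = np.mean([p for p, _ in items], axis=0)
        mean_n = np.mean([n for _, n in items], axis=0)
        mean_n /= np.linalg.norm(mean_n) + 1e-12  # keep unit length
        out_p.append(mean_p)
        out_n.append(mean_n)
    return np.array(out_p), np.array(out_n)
```

Points closer together than the voxel size merge into one oriented point, so the planner sees a model whose density is bounded by the grid resolution rather than by the number of input images.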
