Manufacturing the Future 2012 Part 9

Manufacturing the Future: Concepts, Technologies & Visions

Distributed Architecture for Intelligent Robotic Assembly, Part II: Design…

The generalisation capability of the NNC was also tested by assembling different components using the same ACQ-PKB; results are provided in Table 3. For the insertion of the radiused-square component the offsets were the same as before, while for the insertion of the circular component a larger offset was used and no rotation was applied. The time for each insertion was measured with the learning ability enabled (Lon) and with learning inhibited (Loff), that is, using only the initial ACQ-PKB. The assembly operation was always successful and, in most cases, faster when learning was enabled than when it was inhibited.

Figure 13. Recovery error (Rz) during assembly: Rz error (degrees/10) versus alignment motions for insertions IN 1 to IN 16, using the GVN-PKB (top panel) and the ACQ-PKB (bottom panel). [plot data omitted]

| Insertion | Radiused-square chamfered peg: offset (dx, dy, dRz) (mm, mm, °) | Lon time (s) | Loff time (s) | Circular chamfered peg: offset (dx, dy, dRz) (mm, mm, °) | Lon time (s) | Loff time (s) |
|---|---|---|---|---|---|---|
| 1 | (0.7, 0.8, 0.8) | 45 | 48 | (0.7, 0.8, 0) | 42 | 43 |
| 2 | (-0.8, 1.1, -0.8) | 45 | 51 | (-0.8, 1.1, 0) | 41 | 41 |
| 3 | (-0.7, -0.5, 0.8) | 43 | 47 | (0.8, -0.9, 0) | 40 | 42 |
| 4 | (0.8, -0.9, -0.8) | 50 | 54 | (0.8, -0.9, 0) | 41 | 41 |
| 5 | (0.7, 0.8, -0.8) | 44 | 44 | (-0.8, 1.1, 0) | 41 | 41 |
| 6 | (-0.8, 1.1, 0.8) | 53 | 51 | (0.8, -0.9, 0) | 41 | 42 |
| 7 | (-0.7, -0.5, -0.8) | 54 | 55 | (1.4, 1.6, 0) | 45 | 45 |
| 8 | (0.8, -0.9, 0.8) | 50 | 49 | (1.6, -1.8, 0) | 43 | 45 |
| 9 | (0.7, 0.8, 0.8) | 46 | 46 | (1.4, 1.6, 0) | 43 | 44 |
| 10 | (-0.8, 1.1, -0.8) | 45 | 55 | (-1.4, -1, 0) | 42 | 43 |
| 11 | (-0.7, -0.5, 0.8) | 44 | 45 | | | |
| 12 | (0.8, -0.9, -0.8) | 53 | 51 | | | |
| 13 | (0.7, 0.8, -0.8) | 43 | 43 | | | |
| 14 | (-0.8, 1.1, 0.8) | 53 | 51 | | | |
| 15 | (-0.7, -0.5, -0.8) | 44 | 59 | | | |
| 16 | (0.8, -0.9, 0.8) | 45 | 50 | | | |

Table 3. Results using an ACQ-PKB

Figure 14. Total distance on the XY plane (mm/10) for insertions 1 to 16, comparing ACQ-PKB, GVN-PKB and the ideal path. [plot data omitted]

Figure 15. Insertion center error on the XY plane (mm/10) for insertions 1 to 16, comparing ACQ-PKB and GVN-PKB. [plot data omitted]

5.2 Whole assembly process results

Several tests were carried out to assess the performance. The diameter of the male components was 24.8 mm, whereas the diameter of the female components was 25 mm; the chamfer was set to 45° with a 5 mm width. Results are given in Table 4. In zone 2 the SIRIO only provides the (X, Y) location because the orientation of the female component was fixed; an error nevertheless occurs, related to the component's tolerance. The orientation error is 0.8° for the chamfered square component, 0.5° for the chamfered radiused-square, 0.4° for the chamferless square and 0.6° for the chamferless radiused-square. Error recovery is illustrated in figure 18. The assembly operation ends when ¾ of the body of the male component is in the hole, which represents 14 mm. The NNC operated during the first 10 mm (100 manipulator steps); the FuzzyARTMAP parameters were ρa = 0.2, ρmap = 0.9 and ρb = 0.9. Table 4 shows the position errors in zone 2, which figures 16 and 17 represent as the trajectory followed by the robot. The minimum assembly cycle time was 1:11 min, the maximum was 1:24 min and the average was 1:17 min.
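The FuzzyARTMAP parameters quoted above (ρa = 0.2, ρmap = 0.9 and ρb = 0.9) are the vigilance and map-field settings of the underlying Fuzzy ART modules. As a rough illustration of what the vigilance parameter controls, here is a minimal sketch of a single Fuzzy ART presentation (complement coding, category choice, vigilance test, fast learning) following the standard algorithm of Carpenter et al. (1992); this is not the chapter's NNC implementation, and the numeric inputs are invented:

```python
import numpy as np

def complement_code(a):
    """Fuzzy ART complement coding: a in [0, 1]^d becomes I = (a, 1 - a)."""
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

def fuzzy_art_step(I, weights, rho, alpha=0.001, beta=1.0):
    """One Fuzzy ART presentation: category choice, vigilance test, learning.

    I       : complement-coded input
    weights : list of committed category weight vectors (modified in place)
    rho     : vigilance (the chapter quotes rho_a = 0.2 and rho_b = 0.9)
    alpha   : choice parameter; beta = 1 gives fast learning
    Returns the index of the resonating category (a new one if none passes).
    """
    norm_I = I.sum()
    # Choice function T_j = |I ^ w_j| / (alpha + |w_j|), where ^ is fuzzy AND (min)
    scores = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in weights]
    for j in np.argsort(scores)[::-1]:          # best-matching category first
        match = np.minimum(I, weights[j]).sum() / norm_I
        if match >= rho:                        # vigilance passed: resonance
            weights[j] = beta * np.minimum(I, weights[j]) + (1 - beta) * weights[j]
            return int(j)
    weights.append(I.copy())                    # mismatch everywhere: new category
    return len(weights) - 1

# With high vigilance, two dissimilar patterns get separate categories
w = []
c1 = fuzzy_art_step(complement_code([0.9, 0.1]), w, rho=0.9)
c2 = fuzzy_art_step(complement_code([0.1, 0.9]), w, rho=0.9)
```

Raising ρ makes the network commit more, finer categories; lowering it produces coarser ones, which is the trade-off the quoted ρa = 0.2 and ρb = 0.9 settings balance.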
The system has an average angular error of 3.11° and a maximum linear position error ranging from -1.3 mm to 3.1 mm due to the camera positioning system in zone 1.

| IN | P | Ch | TC (min) | TA (s) | Zone 1 X (mm) | Y (mm) | Rz (°) | Zone 1 error X (mm) | Y (mm) | Rz (°) | Zone 2 X (mm) | Y (mm) | Zone 2 error X (mm) | Y (mm) | NC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | S | Y | 1:15 | 32.5 | 62.4 | 144.1 | 10 | 0.2 | -1.3 | 0 | 84.6 | 102.1 | 0.3 | -1 | Y |
| 2 | S | Y | 1:15 | 30.4 | 62.4 | 45.7 | 12 | 1.8 | 0.2 | 2 | 85.6 | 101.1 | -0.7 | 0 | Y |
| 3 | S | Y | 1:15 | 31.8 | 178.7 | 47.7 | 23 | 0.9 | -0.8 | 3 | 84.7 | 100.9 | 0.2 | 0.2 | Y |
| 4 | R | Y | 1:11 | 30.1 | 181.6 | 147 | 29 | -0.3 | -0.7 | -1 | 84.7 | 100.6 | 0.2 | 0.5 | Y |
| 5 | R | Y | 1:14 | 29.4 | 62.4 | 145.1 | 36 | 0.2 | -0.3 | -4 | 84.9 | 100.7 | 0 | 0.4 | Y |
| 6 | R | Y | 1:19 | 29.6 | 67.3 | 44.8 | 48 | 3.1 | -0.7 | -2 | 85.3 | 101.6 | -0.4 | -0.5 | Y |
| 7 | C | Y | 1:15 | 29.6 | 180.6 | 49.6 | 57 | 1 | 1.1 | -3 | 84.6 | 102.4 | 0.3 | -1.3 | Y |
| 8 | C | Y | 1:13 | 30.2 | 180.6 | 148 | 77 | -0.7 | 0.3 | 7 | 84.3 | 101 | 0.6 | 0.1 | Y |
| 9 | C | Y | 1:14 | 30.2 | 61.5 | 146 | 79 | -0.7 | 0.6 | -1 | 83.9 | 101.6 | 1 | -0.5 | Y |
| 10 | S | N | 1:18 | 29.9 | 63.4 | 45.7 | 83 | -0.8 | 0.2 | -7 | 85.4 | 100.5 | -0.5 | 0.6 | Y |
| 11 | S | N | 1:19 | 30.4 | 179.6 | 48.6 | 104 | 0 | 0.1 | 4 | 83.2 | 100.8 | 1.7 | 0.3 | Y |
| 12 | S | N | 1:22 | 34.6 | 180.6 | 147 | 104 | -0.7 | -0.7 | -6 | 83.2 | 101.8 | 1.7 | -0.7 | Y |
| 13 | R | N | 1:22 | 38.3 | 61.5 | 146 | 119 | -0.7 | 0.6 | -1 | 84.8 | 102.8 | 0.1 | -1.7 | Y |
| 14 | R | N | 1:22 | 36.8 | 63.4 | 43.8 | 126 | -0.8 | 1.7 | -4 | 83.6 | 101.8 | 1.6 | -0.7 | Y |
| 15 | R | N | 1:24 | 36.6 | 179.6 | 47.7 | 138 | 0 | -0.8 | -2 | 83.2 | 101.7 | 1.7 | -0.6 | Y |
| 16 | C | N | 1:17 | 30.5 | 182.6 | 149 | 150 | 1.3 | 1.3 | 0 | 83.7 | 101.2 | 1.2 | -0.1 | Y |
| 17 | C | N | 1:15 | 28.3 | 63.4 | 146 | 155 | 1.2 | 0.6 | -5 | 84.6 | 100.7 | 0.3 | 0.4 | Y |
| 18 | C | N | 1:15 | 29.7 | 64.4 | 47.7 | 174 | 0.2 | 2.2 | 4 | 83.9 | 101.1 | 1 | 0 | Y |

Table 4. Eighteen different assembly cycles, where IN = insertion, P = piece, Ch = chamfer present, TC = assembly cycle time, TA = insertion time, NC = correct neural classification, S = square, R = radiused-square, C = circle, Y = yes and N = no.

The force levels in chamferless assemblies are higher than in chamfered ones: in the chamferless case, the maximum value was 39.1 N in Z+ for insertion number 16, whereas in the chamfered case the maximum was 16.9 N, for insertion number 9.
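The cycle-time statistics reported for the assembly tests can be checked directly from the TC column of Table 4; a quick sketch:

```python
# Assembly cycle times (TC column of Table 4), as mm:ss strings
tc = ["1:15", "1:15", "1:15", "1:11", "1:14", "1:19", "1:15", "1:13", "1:14",
      "1:18", "1:19", "1:22", "1:22", "1:22", "1:24", "1:17", "1:15", "1:15"]
secs = [60 * int(m) + int(s) for m, s in (t.split(":") for t in tc)]
shortest, longest, avg = min(secs), max(secs), sum(secs) / len(secs)
# shortest = 71 s (1:11), longest = 84 s (1:24), avg is about 77 s (1:17)
```

This confirms the minimum (1:11) and maximum (1:24) cycle times, and shows the average is approximately 1:17 min.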
In chamfered assembly (figure 16), some trajectories were optimal, as in insertions 2, 5, 7, 8 and 9, which was not the case for chamferless assembly; nevertheless, the insertions were completed correctly in every case.

In figure 17, each segment corresponds to alignment motions in directions other than Z. The lines indicate the number of Rz+ motions that the robot performed in order to recover the positional error for the female components, and the insertion paths show how many rotational steps were performed. The maximum number of alignment motions was 22 for the chamfered case, compared with 46 for the chamferless component.

Figure 16. Assembly trajectory in top view for each insertion in zone 2 (position error X versus Y, in mm/10). Each trajectory starts at its label (INx) and ends at the (0, 0) origin. [plot data omitted]

Figure 17. Compliant rotational motions (only Rz+) for each insertion in zone 2: left, chamfered assembly (IN 1 to IN 6); right, chamferless assembly (IN 10 to IN 16). [plot data omitted]
6. Conclusions

A task planner approach for peg-in-hole automated assembly was presented. The proposed methodologies were tested successfully in real-world operations using an industrial manipulator. The robot not only performs the assembly; it can also start working without initial knowledge of the environment, and it can enlarge its PKB at every assembly if necessary.

The presented approach, using vision and force sensing, points to further work in the field of multimodal learning, fusing information to increase the prediction capability of the network and contributing towards the creation of truly self-adaptive industrial robots for assembly.

All assemblies were successful, showing the system's robustness against different uncertainties as well as its generalisation capability. The generalisation of the NNC was demonstrated by successfully assembling components of different geometry, with different mechanical tolerances and offsets, using the same acquired knowledge base.

Initial knowledge is acquired from actual contact states using explorative motions guided by fuzzy rules. Knowledge acquisition stops once the ACQ-PKB is fulfilled; this knowledge is later refined as the robot performs new assembly tasks.

The improved dexterity of the robot using the ACQ-PKB can be observed in the magnitude of the forces and moments shown in Figures 11 and 12. The values are significantly lower, hence the motions were more compliant, indicating that information acquired directly from the part geometry also allowed lower constraint forces during manipulation. With the knowledge acquisition mechanism in place, the NNC acquires only real contact force information from the operation.
In comparison with our previous results, the insertion trajectories improved enormously. We believe that the given a priori knowledge (GVN-PKB) is adequate, but contact information extracted directly from the operation itself provides the manipulator with better compliant motion behaviour. Results from this work have motivated further work in the area of multimodal data fusion (Lopez-Juarez et al., 2005). We expect that fusing data from the F/T sensor and the vision system will improve the confidence of the contact information obtained at the start of the operation, also providing important information such as chamfer presence, part geometry and pose, which will be the input data to a hierarchical task-level planner as pointed out by Lopez-Juarez & Rios-Cabrera (2006).

7. References

Ahn, D.S.; Cho, H.S.; Ide, K.I.; Miyazaki, F.; Arimoto, S. (1992). Learning task strategies in robotic assembly systems. Robotica, Vol. 10, (409-418)

Asada, H. (1990). Teaching and Learning of Compliance Using Neural Nets. IEEE Int. Conf. on Robotics and Automation, (1237-1244)

Baeten, J.; Bruyninckx, H.; De Schutter, J. (2003). Integrated Vision/Force Robotic Servoing in the Task Frame Formalism. The International Journal of Robotics Research, Vol. 22, No. 10-11, (941-954)

Carpenter, G.A.; Grossberg, S. (1987). A Massively Parallel Architecture for a Self-Organizing Neural Pattern Recognition Machine. Computer Vision, Graphics, and Image Processing, Academic Press, (54-115)

Carpenter, G.A.; Grossberg, S.; Reynolds, J.H. (1991). ARTMAP: Supervised Real-Time Learning and Classification of Nonstationary Data by a Self-Organizing Neural Network. Neural Networks, (565-588)

Carpenter, G.A.; Grossberg, S.; Markuzon, N.; Reynolds, J.H.; Rosen, D.B. (1992). Fuzzy ARTMAP: A Neural Network Architecture for Incremental Learning of Analog Multidimensional Maps. IEEE Trans. Neural Networks, Vol.
3, No. 5, (678-713)

Cervera, E.; del Pobil, A.P. (1996). Learning and Classification of Contact States in Robotic Assembly Tasks. Proc. of the 9th Int. Conf. IEA/AIE, (725-730)

Cervera, E.; del Pobil, A.P. (1997). Programming and Learning in Real World Manipulation Tasks. Int. Conf. on Intelligent Robots and Systems (IEEE/RSJ), Proc. 1, (471-476)

Cervera, E.; del Pobil, A.P. (2002). Sensor-based learning for practical planning of fine motions in robotics. The International Journal of Information Sciences, Special Issue on Intelligent Learning and Control of Robotics and Intelligent Machines in Unstructured Environments, Vol. 145, No. 1, (147-168)

De Schutter, J.; Van Brussel, H. (1988). Compliant Robot Motion I, a formalism for specifying compliant motion tasks. The Int. Journal of Robotics Research, Vol. 7, No. 4, (3-17)

Doersam, T.; Munoz Ubando, L.A. (1995). Robotic Hands: Modelisation, Control and Grasping Strategies. Meeting annuel de l'Institut Franco-Allemand pour les Applications de la Recherche (IAR)

Driankov, D.; Hellendoorn, H.; Reinfrank, M. (1996). An Introduction to Fuzzy Control. 2nd ed. Springer Verlag

Erlbacher, E.A. Force Control Basics. PushCorp, Inc. (visited December 14th, 2004). http://www.pushcorp.com/Tech%20Papers/Force-Control-Basics.pdf

Grossberg, S. (1976). Adaptive Pattern Classification and universal recoding II: Feedback, expectation, olfaction and illusions. Biological Cybernetics, Vol. 23, (187-202)

Gullapalli, V.; Franklin, J.A.; Benbrahim, H. (1994). Acquiring Robot Skills via Reinforcement Learning. IEEE Control Systems, (13-24)

Gullapalli, V.; Franklin, J.A.; Benbrahim, H. (1995). Control Under Uncertainty Via Direct Reinforcement Learning. Robotics and Autonomous Systems, (237-246)

Howarth, M. (1998). An Investigation of Task Level Programming for Robotic Assembly. PhD thesis.
The Nottingham Trent University, UK

Ji, X.; Xiao, J. (1999). Automatic Generation of High-Level Contact State Space. Proc. of the Int. Conf. on Robotics and Automation, (238-244)

Joo, S.; Miyazaki, F. (1998). Development of a variable RCC and its applications. Proceedings of the 1998 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Vol. 2, (1326-1332)

Jörg, S.; Langwald, J.; Stelter, J.; Natale, C.; Hirzinger, G. (2000). Flexible Robot Assembly Using a Multi-Sensory Approach. Proc. IEEE Int. Conference on Robotics and Automation, (3687-3694)

Kaiser, M.; Dillmann, R. (1996). Building Elementary Robot Skills from Human Demonstration. IEEE International Conference on Robotics and Automation, Minneapolis, Minnesota, (2700-2705)

Lopez-Juarez, I.; Howarth, M.; Sivayoganathan, K. (1996). Robotics and Skill Acquisition. In: A. Bramley, T. Mileham and G. Owen (eds.), Advances in Manufacturing Technology X, ISBN 1 85790 031 6, (166-170)

Lopez-Juarez, I. (2000). On-line learning for robotic assembly using artificial neural networks and contact force sensing. PhD thesis, Nottingham Trent University, UK

Lopez-Juarez, I.; Ordaz-Hernandez, K.; Pena-Cabrera, M.; Corona-Castuera, J.; Rios-Cabrera, R. (2005). On the Design of a Multimodal Cognitive Architecture for Perceptual Learning in Industrial Robots. In: MICAI 2005: Advances in Artificial Intelligence, LNAI 3789, Lecture Notes in Artificial Intelligence, A. Gelbukh, A. de Albornoz and H. Terashima (eds.), (1062-1072). Springer Verlag, Berlin

Lopez-Juarez, I.; Rios-Cabrera, R. (2006). Distributed Architecture for Intelligent Robotic Assembly, Part I: Design and Multimodal Learning. Advances Technologies: Research-Development-Application. Submitted for publication

Lozano-Perez, T.; Mason, M.T.; Taylor, R.H. (1984). Automatic Synthesis of Fine Motion Strategies. The Int. Journal of Robotics Research, Vol. 3, No.
1, (3-24)

Mason, M.T. (1983). Compliant motion. In: Brady, M. et al. (eds.), Robot Motion. Cambridge: MIT Press

Peña-Cabrera, M.; López-Juárez, I.; Ríos-Cabrera, R.; Corona-Castuera, J. (2005). Machine vision learning process applied to robotic assembly in manufacturing cells. Journal of Assembly Automation, Vol. 25, No. 3, (204-216)

Peña-Cabrera, M.; Lopez-Juarez, I. (2006). Distributed Architecture for Intelligent Robotic Assembly, Part III: Design of the Invariant Object Recognition System. Advances Technologies: Research-Development-Application. Submitted for publication

Skubic, M.; Volz, R. (1996). Identifying contact formations from sensory patterns and its applicability to robot programming by demonstration. Proceedings of the 1996 IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, Osaka, Japan

Skubic, M.; Volz, R.A. (2000). Acquiring Robust, Force-Based Assembly Skills from Human Demonstration. IEEE Trans. on Robotics and Automation, Vol. 16, No. 6, (772-781)

Whitney, D.; Nevins, J. (1979). What is the Remote Center Compliance (RCC) and what can it do?. Proceedings of the 9th Int. Symposium on Industrial Robots, (135-152)

Xiao, J.; Liu, L. (1998). Contact States: Representation and Recognizability in the Presence of Uncertainties. IEEE/RSJ Int. Conf. Intell. Robots and Sys

[...]

…of the background and the pieces within the ROI, eliminating any noise that may appear. This dynamic threshold calculation allows the system to operate independently of lighting conditions. The 1D histogram normally has the aspect shown in figure 11: its two peaks represent the background and the pieces in the image. After the…
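The dynamic threshold described above is derived from the two peaks (background versus pieces) of the 1D histogram. The excerpt does not give the exact procedure, but Otsu's method, which places the threshold by maximising the between-class variance, is a standard way to split such a bimodal histogram and serves here as a sketch:

```python
import numpy as np

def otsu_threshold(gray):
    """Threshold a bimodal grey-level distribution (Otsu's criterion).

    Scans all 8-bit thresholds and keeps the one maximising the
    between-class variance, which lands in the valley between the
    background peak and the pieces peak of the 1D histogram.
    """
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()    # class probabilities
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0       # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Synthetic grey levels: dark background near 40, bright pieces near 200
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(40, 5, 5000), rng.normal(200, 5, 1000)]).clip(0, 255)
t = otsu_threshold(img)
```

Because the threshold is recomputed per image, a change in overall illumination shifts both peaks and the threshold follows, which is the lighting-independence property the text claims.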
N is the size of the perimeter, and 8 and 4 are the numbers of comparisons the algorithm needs to find the farthest boundary pixel. The main difference from the traditional algorithm is that the sweep is made over an uncertain area that is always larger than the figure, which turns the algorithm into O(N*M), where N*M is the size of the Boundary Box in use, and it does not obtain the coordinates of the…

…[D_i, X_C, Y_C, φ, Z, ID]^T    (5)

where:
- D_i is the distance from the centroid to the object's perimeter point
- X_C, Y_C are the coordinates of the centroid
- φ is the orientation angle
- Z is the height of the object
- ID is a code number related to the geometry of the components

6.3.9 Information processing in the neural network

The vision system extends the BOF data vectors to 180, plus 4 more data vectors, … will be useful when the information the assembly system receives from the vision system is incorrect, owing to an error in the checksum or any other error. The communication protocol is as follows:

# Zone Command Type C-Sum

The response from the vision system is a function of the request command from the assembly system, which coordinates the activities of the intelligent manufacturing cell…

Gonzalez-Galvan et al. (1997) developed a procedure for precision measurement in 3D rigid-body positioning using camera-space manipulation for assembly; Dickmanns (1998) and Nagel (1997) have shown solutions that facilitate the use of vision for real-world interaction; Hager et al. (1995) and Papanikolopoulos et al. (1993) use markers on the object to simplify detection and tracking of cues. Some other authors…
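The D_i terms of descriptor (5) form the boundary object function (BOF), the centroid-to-perimeter distance signature that the vision system samples into 180 data vectors. A minimal sketch under stated assumptions (ray marching outward from the centroid of a binary mask, normalisation by the maximum radius; the book's exact sampling scheme is not given in this excerpt):

```python
import numpy as np

def bof_descriptor(mask, n_samples=180):
    """Centroid-to-boundary distance signature (the D_i of descriptor (5)).

    Marches a ray outward from the centroid at n_samples angles and records
    where it leaves the object. Assumes a star-shaped part, which holds for
    the circular, square and radiused-square pegs used in the book.
    """
    ys, xs = np.nonzero(mask)
    xc, yc = xs.mean(), ys.mean()          # centroid (X_C, Y_C)
    d = np.zeros(n_samples)
    for k in range(n_samples):
        theta = 2 * np.pi * k / n_samples
        r = 0.0
        while True:                        # march until the ray exits the mask
            x = int(np.rint(xc + (r + 1) * np.cos(theta)))
            y = int(np.rint(yc + (r + 1) * np.sin(theta)))
            inside = 0 <= x < mask.shape[1] and 0 <= y < mask.shape[0] and mask[y, x]
            if not inside:
                break
            r += 1
        d[k] = r
    return d / d.max()                     # scale-invariant signature

# A filled disc of radius 20: the signature should be nearly constant
yy, xx = np.mgrid[0:64, 0:64]
disc = (xx - 32) ** 2 + (yy - 32) ** 2 <= 20 ** 2
sig = bof_descriptor(disc)
```

For a circular peg the signature is flat; a square peg would show four periodic lobes, which is the kind of shape information that lets the network discriminate component geometries.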
…Figure 5, as well as the peg-in-hole operation in Figure 6. The diameter of the circular peg was 25 mm and the side of the square peg was also 25 mm. The dimensions of the non-symmetric part, the radiused-square, were the same as those of the square peg, with one corner rounded to a radius of 12.5 mm. Clearances between pegs and mating pairs were 0.1 mm; chamfers were at 45 degrees with a 5 mm width. The assembly was…

…pixel to the boundary that has not already been located.
2. Assigns the label of the current pixel to the nearest boundary pixel just found.
3. Marks the last pixel as visited.
4. If the new coordinates are higher than the current highest coordinates, the highest coordinates are updated with the new values.
5. If the new coordinates are lower than the current lowest coordinates, the lowest coordinates are updated with the new values.

…task, the robotic assembly system sends commands to the vision system as follows:

$SENDINF#1. Send information of zone 1: zone 1 is the place where the robot grasps the male components. The robot can locate different pieces and their characteristics.

$SENDINF#2. Send information of zone 2: zone 2 is the place where the robot performs the assembly task. The assembly system can request information about the…

…calculation, where A is the area, that is, the number of pixels that compose the piece.

6.3.6 Piece orientation

The shadow projected by the pieces is used to obtain their orientation. Within the shadow, the largest straight line is used to calculate the orientation angle of the piece from the slope of this line; see figure 14. The negative image of the shadow is obtained, becoming a white object, from which the perimeter…
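The messages exchanged with the vision system follow the "# Zone Command Type C-Sum" frame, and the text notes that the assembly system falls back on previously stored information when the check sum fails. The excerpt does not define the checksum itself, so the sketch below assumes an XOR-of-bytes checksum and space-separated fields purely for illustration:

```python
def build_message(zone, command, mtype):
    """Frame a vision-cell message as '# Zone Command Type C-Sum'.

    Only the field order and the '#' start marker come from the text; the
    XOR-of-bytes checksum and the space separators are assumptions.
    """
    payload = f"{zone} {command} {mtype}"
    csum = 0
    for b in payload.encode("ascii"):
        csum ^= b
    return f"# {payload} {csum:02X}"

def parse_message(msg):
    """Return the (zone, command, type) fields, or None on a checksum error.

    A failed check corresponds to the situation described in the text, where
    the receiver must fall back on previously stored information.
    """
    if not msg.startswith("# "):
        return None
    payload, _, csum = msg[2:].rpartition(" ")
    check = 0
    for b in payload.encode("ascii"):
        check ^= b
    return tuple(payload.split()) if f"{check:02X}" == csum else None

msg = build_message(1, "SENDINF", "INF")
ok = parse_message(msg)
corrupt = msg[:-1] + ("0" if msg[-1] != "0" else "1")  # flip last checksum digit
bad = parse_message(corrupt)
```

A single-byte XOR detects any one-bit corruption, which is enough for the short fixed-format requests such as $SENDINF#1 and $SENDINF#2; a real deployment would likely use a stronger CRC.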
…and also the two most distant points (x1, y1) and (x2, y2) are determined.

Figure 14. Shadow used for the orientation. [image omitted]

These points define the largest straight line. The equation for the distance between two points is used to verify that it is indeed the largest straight line, and equation (2) is used to verify that it contains the centroid:

YC - y1 = m(XC - x1)    (2)

The slope…

…flexible parts with a dynamic model of two robots which does not require measurements of the part deflections have been done (W. Nguyen and J.K. Mills, 1996). (Plut, 1996), and (Bone, 1997),…
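The orientation recipe above, taking the largest straight line through the two most distant outline points and validating it with the centroid test of equation (2), can be sketched as follows; the tolerance on the centroid distance is an assumed parameter, not a value from the book:

```python
import math
from itertools import combinations

def orientation_from_outline(points, centroid, tol=1.0):
    """Orientation angle from the two most distant outline points.

    Finds (x1, y1), (x2, y2) maximising their separation, takes the slope
    of the line through them, and accepts it only if the line passes near
    the centroid, i.e. equation (2): Yc - y1 = m(Xc - x1).
    Returns the angle in degrees, or None if the centroid check fails.
    """
    (x1, y1), (x2, y2) = max(
        combinations(points, 2),
        key=lambda pq: (pq[0][0] - pq[1][0]) ** 2 + (pq[0][1] - pq[1][1]) ** 2)
    xc, yc = centroid
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
    # Distance from the centroid to the candidate line (point-to-line formula)
    num = abs((y2 - y1) * xc - (x2 - x1) * yc + x2 * y1 - y2 * x1)
    den = math.hypot(y2 - y1, x2 - x1)
    return angle if num / den <= tol else None

# A thin bar rotated 30 degrees, sampled along its axis through the origin
th = math.radians(30)
pts = [(10 * t * math.cos(th), 10 * t * math.sin(th)) for t in range(-5, 6)]
ang = orientation_from_outline(pts, (0.0, 0.0))
```

Using atan2 rather than a raw slope m avoids the singularity for vertical lines while giving the same angle.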


References
Aguado, A.; Montiel, E.; Nixon, M. (2002). Invariant characterization of the Hough Transform for pose estimation of arbitrary shapes. Pattern Recognition 35, 1083-1097, Pergamon

Gonzalez-Galvan, E.J. et al. (1997). Application of Precision-Enhancing Measure in 3D Rigid-Body Positioning using Camera-Space Manipulation. The International Journal of Robotics Research, Vol. 16, No. 2, pp. 240-257, April

Kronauer, R.E.; Zeevi, Y. (1985). Reorganization and Diversification of Signals in Vision. IEEE Trans. Syst., Man, Cybern., SMC-15, 1, 91-101

Langley, C.S.; D'Eleuterio, G.M.T. (2003). A memory efficient neural network for robotic pose estimation. In Proceedings of the 2003 IEEE International Symposium on Computational Intelligence in Robotics and Automation, No. 1, 418-423, IEEE CIRA

…International Conference on Robotics and Automation, Albuquerque, NM, pp. 379-384 (1997)

Yong-Sheng Chen et al. (2001). Three-dimensional ego-motion estimation from motion fields observed with multiple cameras. Pattern Recognition 34, 1573-1583, Pergamon