Field and Service Robotics - Corke P. and Sukkarieh S. (Eds), Part 8

Constrained Motion Planning in Discrete State Spaces

It is important to observe that these issues are primarily relevant for local planning. When the distance to the goal is much larger than the minimum turning radius of the vehicle, the under-estimation error percentage of Euclidean distance becomes small, making this metric a viable heuristic option. With this in mind, we propose a hybrid approach to calculating the heuristic: in the close vicinity of the robot, a local estimation procedure that considers the vehicle's kinematic model is used, whereas in the far range Euclidean distance is sufficiently accurate. Based on experimental studies, we found that a good threshold for switching between local and global heuristics is 10 minimum turning radii.

As mentioned, it is crucial for the heuristic to be as accurate as possible for high efficiency. However, there is no known closed-form solution for calculating the local heuristic, and obtaining an accurate estimate requires solving the original planning problem. Thus, we defined an off-line pre-processing step to create a heuristic look-up table (LUT). The table is simply a compilation of the costs to travel from the origin to all lattice nodes in its local neighborhood. These costs are determined by running the planner for each possible path endpoint, using plain Euclidean distance as the heuristic, which is guaranteed to be an under-estimate. LUT generation can be a lengthy process, but it is performed off-line, and the agenda for future work includes developing advanced function approximators to eliminate this pre-processing. The exact path costs provided by the LUT result in the dramatic speed-up of the planner described in the next section.
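As an illustration, the hybrid switching rule described above can be sketched as follows. This is a simplified sketch, not the authors' implementation: the turning radius, the 10-radius threshold, and the LUT layout (a dictionary keyed by goal offset, ignoring heading) are assumptions for illustration only.

```python
import math

TURNING_RADIUS = 1.0
SWITCH_DIST = 10 * TURNING_RADIUS  # the experimentally found threshold

def euclidean(a, b):
    return math.hypot(b[0] - a[0], b[1] - a[1])

def hybrid_heuristic(state, goal, lut):
    """LUT-based cost in the close vicinity, Euclidean distance in the far range."""
    d = euclidean(state, goal)
    if d >= SWITCH_DIST:
        return d  # far range: the Euclidean under-estimation error is small
    # near range: look up the precomputed lattice cost, keyed here by the
    # goal offset relative to the current state (heading omitted for brevity)
    key = (goal[0] - state[0], goal[1] - state[1])
    return lut.get(key, d)  # fall back to the admissible Euclidean bound
```

A missing LUT entry falls back to Euclidean distance, so the heuristic stays admissible either way.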
5.2 Path Planner Results

To quantify the performance of the present path planner, we undertook a simulation study comprising a statistically significant number of planning experiments in which initial and final path configurations were chosen at random. It confirmed that the planner built using the control set generated in Section 4.2 for the robots in [4] is very efficient: it performs as efficiently as basic grid search. In fact, for over 90% of path planning queries, our method is even faster than grid search. The significant result here is that this method generates optimal nonholonomic paths with no post-processing, yet can outperform classical grid search, the archetype of efficiency in path planning. We believe the reason for this significant speed-up is twofold. First, the primitive paths can span multiple grid cells, so that by choosing a primitive the planner may "jump" ahead, while grid search still considers one cell after another. Second, the accurate heuristic provided by the LUT was shown to reduce the number of required search iterations considerably.

In Figure 5 we present the timing results of our planner for the toughest local planning scenarios: the final state (goal) is close to the initial state and exhibits a significant change in heading and direction to goal. The figure shows the results of over 1000 timing experiments for both the nonholonomic path planner and grid search.

Fig. 5. Run-time results of our nonholonomic (N.H.) path planner (red data points) in comparison with basic grid search (blue data points). The vertical axis is the time of plan generation, and the horizontal axis is the length of the nonholonomic paths. a) Runtimes for both nonholonomic and grid search on a semi-log scale. b) Average runtimes superimposed on the same plot on a linear scale.

For each experiment, a goal Q_f was chosen randomly such that the Euclidean distance between the initial state and the goal was the same.
In this manner, the grid search had roughly the same amount of work to do, whereas the nonholonomic path planner's job could vary significantly depending on changes in orientation between the initial and final states. The length of the resulting nonholonomic plan is roughly indicative of that complexity, and so the horizontal axis (in units of cell size) is intended to capture increasing nonholonomic planning complexity. Figure 5a shows the runtime versus nonholonomic path length (i.e. "complexity") per planning query, plotted on a semi-log scale. Nonholonomic planner runtimes are denoted with circles, and grid search runtimes with stars. Even though the plot looks rather busy, the clustering of circles below the stars is clearly visible, indicating that on average the nonholonomic planner ran faster in the same experiment (i.e. a choice of path endpoints). This trend is easier to see in Figure 5b, where we superimposed the mean runtime for both planners. The two solid lines (red for nonholonomic, and blue for grid search) clearly show that the nonholonomic planner takes less time on average. The balance tilts in favor of grid search only at the right-most end of the horizontal axis, i.e. for the highest planning complexity. Thus, our path planner is clearly very efficient and can compute most planning queries in less than 100 ms, which makes it useful for real-time applications.

6 Applications

We discuss several important mobile robotics applications that could greatly benefit from the constrained motion planning approach presented herein.

M. Pivtoraiko and A. Kelly

The constrained motion planner presented here was successfully implemented on the DARPA PerceptOR program [4]. This planner guided a car-like all-terrain vehicle in its exploration of natural, often cluttered, environments. The proposed planner exhibited great performance as a special behavior that was invoked to guide the vehicle out of natural cul-de-sacs. Fig.
6b depicts an example motion plan that was generated by the vehicle on-line. The grayscale portion of the figure represents the cost-map, pink indicates obstacles, and orange is the area of unknown cost. The yellow line represents the generated plan. With this technology, the PerceptOR vehicles achieved kilometers of autonomous traverse in very difficult natural terrain (Fig. 6a).

Fig. 6. Example applications: a) Robot navigation in natural cluttered environments (DARPA PerceptOR). b) A nonholonomic path computed in a natural cul-de-sac. c) Planetary rover instrument placement problem. The rover must approach five science objects at specified headings in a cluttered environment on the slope of a crater.

Another important application for which the presented motion planner is especially well suited is rover navigation for space exploration. The rover instrument placement task is known to be a difficult problem both from the standpoint of motion planning and of execution (see Figure 6c). The significant communication time lag is an important consideration prompting quick progress in rover autonomy. Very rough terrain and considerable wheel slip on loose terrain require an approach that can consider the model of rover motion as accurately as possible, as well as take into account the peculiarities of the terrain as it is being discovered.

Our method of motion planning is well suited to this application because it addresses all of the above issues. The inverse trajectory generator used in this approach [10] can use any kinematic rover model whatsoever, and therefore any generated path is inherently executable by the rover under consideration. The flexibility of using any cost-map, overlaid over the state lattice (implicitly represented through control sets), enables this planner to consider an arbitrary definition of obstacles in terms of the map cost: both binary (e.g. rocks) and variable (e.g. slopes as high-cost, yet traversable).
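As an illustration of this cost-map flexibility, the sketch below mixes binary obstacles with variable terrain costs and evaluates a candidate path by summing the costs of the cells it visits. This is a simplification for illustration; the actual PerceptOR cost-map format is not described here.

```python
import numpy as np

# Hypothetical cost-map: np.inf marks binary obstacles (e.g. rocks),
# finite values encode variable traversal cost (e.g. steep slopes).
cost_map = np.ones((5, 5))
cost_map[2, 2] = np.inf   # a rock: untraversable
cost_map[1, 3] = 4.0      # a slope: traversable but expensive

def path_cost(cells, cost_map):
    """Sum of cell costs along a candidate path; inf if it crosses an obstacle."""
    return float(sum(cost_map[r, c] for r, c in cells))

safe = [(0, 0), (1, 0), (2, 0), (3, 1)]
blocked = [(0, 0), (1, 1), (2, 2)]
print(path_cost(safe, cost_map))     # finite cost
print(path_cost(blocked, cost_map))  # infinite: path crosses the rock
```

Under this convention, "binary" and "variable" obstacles need no special-casing in the planner: both are just map costs.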
Moreover, dynamics analysis can be used to "label" regions of steep slopes or very loose terrain as untraversable.

An added benefit of specifying paths as continuous curvature functions is that velocity planning can be defined quite easily. By defining a maximum desirable angular velocity for the vehicle, it is straightforward to compute the maximum translational velocity as a function of path curvature (since angular velocity is the product of translational velocity and curvature, v_max(s) = omega_max / |kappa(s)| wherever the curvature kappa(s) is nonzero). In this fashion, our path planner becomes a trajectory planner with this simple velocity planning post-processing step.

7 Conclusions and Future Work

This work has proposed a generative formalism for the construction of discrete control sets for constrained motion planning. The inherent encoding of constraints in the resulting representation re-renders the problem of motion planning in terms of unconstrained heuristic search. The encoding of constraints is an off-line process that does not affect the efficiency of on-line motion planning.

Ongoing work includes designing a motion planner based on dynamic heuristic search, which would allow it to consider arbitrary moving obstacles, the extension of trajectory generation to rough terrain, and hierarchical approaches which scale the results to be applicable to kilometers of traverse.

References

1. Latombe J-C (1991) Robot motion planning. Kluwer, Boston
2. Branicky M S, LaValle S, Olson S, Yang L (2001) Quasi-randomized path planning. In: Proc. of the Int. Conf. on Robotics and Automation
3. Frazzoli E, Dahleh M A, Feron E (2001) Real-time motion planning for agile autonomous vehicles. In: Proc. of the American Control Conference
4. Kelly A et al. (2004) Toward reliable off-road autonomous vehicles operating in challenging environments. In: Proc. of the Int. Symp. on Experimental Robotics
5. Laumond J-P, Sekhavat S, Lamiraux F (1998) Guidelines in nonholonomic motion planning. In: Laumond J-P (ed) Robot motion planning and control. Springer, New York
6. Pancanti S et al. (2004) Motion planning through symbols and lattices. In: Proc. of the Int. Conf.
on Robotics and Automation
7. Scheuer A, Laugier Ch (1998) Planning sub-optimal and continuous-curvature paths for car-like robots. In: Proc. of the Int. Conf. on Robotics and Automation
8. Wang D, Feng Q (2001) Trajectory planning for a four-wheel-steering vehicle. In: Proc. of the Int. Conf. on Robotics and Automation
9. Hsu D, Kindel R, Latombe J-C, Rock S (2002) Randomized kinodynamic motion planning with moving obstacles. Int. J. of Robotics Research 21:233-255
10. Kelly A, Nagy B (2003) Reactive nonholonomic trajectory generation via parametric optimal control. Int. J. of Robotics Research 22:583-601
11. LaValle S, Branicky M, Lindemann S (2004) On the relationship between classical grid search and probabilistic roadmaps. Int. J. of Robotics Research 23:673-692

Vision-Based Grasping Points Determination by Multifingered Hands

Madjid Boudaba (1), Alicia Casals (2), Dirk Osswald (3), and Heinz Woern (3)

(1) TES Electronic Solutions GmbH, Zettachring 8, 70567 Stuttgart, Germany. madjid.boudaba@tesbv.com
(2) Automatic Control and Computer Engineering Dept. (ESAII), Technical University of Catalonia, Campus sud-edif. U, 08028 Barcelona, Spain. alicia.casals@upc.es
(3) Institute of Process Control and Robotics (IPR), University of Karlsruhe, Engler-Bunte-Ring 8, Gebaeude 40.28, 76131 Karlsruhe, Germany. osswald@ira.uka.de, woern@ira.uka.de

Summary. This paper discusses some issues in generating points of contact for object grasping by multifingered robot hands. To address these issues, we present a general algorithm based on computer vision techniques for determining grasping points through a sequence of processes: (1) from the object's visual features, we apply algorithms for extracting vertices, edges, and the object's contours; (2) the point of contact is modeled by a bounded polytope; (3) based on these features, the developed algorithm analyses the object's contour to generate a set of contact points that guarantee the force-closure grasp condition.
Finally, we briefly describe some experiments on a humanoid robot with a stereo camera head and an anthropomorphic robot hand within the "Center of Excellence on Humanoid Robots: Learning and Co-operating Systems" at the University of Karlsruhe and the Forschungszentrum Karlsruhe.

Keywords: Vision system, Points of contact, Force-closure, Grasping, Linear programming implementation.

1 Introduction

Grasping by multifingered robot hands has been an active research area in recent years. Work on several important issues, including grasp planning, manipulation and stability analysis, has been done. We refer to [1] for a general survey on grasping and to [2-3] for the theory of grasping. Much relevant work has been done in the fields of grasping analysis and grasping synthesis, referring to properties such as equilibrium, force-closure grasp, and form closure [4-14]. Most of this research assumes that the geometry of the object to be grasped is known and that the positions of the contact points are estimated based on the

P. Corke and S. Sukkarieh (Eds.): Field and Service Robotics, STAR 25, pp. 281-292, 2006. © Springer-Verlag Berlin Heidelberg 2006

geometrical constraints of the gripper. These assumptions reduce the complexity of the mathematical model of grasping. However, much less work has been done on generating possible grasps for unknown objects. The complete grasping process requires multiple types of sensor information, such as visual information or tactile (force) sensor information, and so the grasping process should be controlled by fusing this sensory information. This requirement seems too difficult for practical applications. To overcome this problem, different systems of sensors have been proposed. The authors of [15-16] suggested the feasibility of detecting the contact location directly with a tactile sensor instead of estimating it using the geometrical model of the object.
In [17], a vision system based on sensor fusion for grasping has been proposed, which has a parallel processing architecture. The system provides higher resolution with a 1 ms cycle time, a rate within the range required for visual servoing applications. In most of these works, the main concern is the detection of the contact location between the fingertips and the object, for which a priori geometrical information about the object is not necessary. A grasping system capable of performing various tasks under different operating conditions is currently in the development stage. The adaptation of these theoretical techniques to real object grasping applications is a subject of current investigation.

In this paper we address the problem of grasping unknown objects by defining a procedure based on the following three phases:

1. Visual information phase. To provide a description of the objects to grasp, a set of visual features (including object size, center of mass, object's boundary, and main axis for orientation) is stored in a model graph, which preserves the topological relationships between the features.
2. Grasp points determination phase. A graphical method is used to generate a group of grasping points on the boundary of the object. Then a set of geometrical functions is analysed to find a feasible solution for grasping. There are many other methods proposed for the planning stage [7-10].
3. Grasp decision phase. This is characterized by the stability of the contact between fingers and object. To achieve this, the friction constraints at all contact points must be satisfied by finding an equilibrium between internal and external forces (see also [13] on this topic).

Fig. 1 shows the experimental setup. It contains a stereo camera head, a humanoid robot arm, and an anthropomorphic robot hand. The rest of this paper is organised as follows: Section 2 gives some mathematical preliminaries for grasping.
In Section 3, a vision system framework is presented. The vision system is divided into two parts: the first part concerns 2D grasping and the second part concerns 3D grasping. In Section 4, we describe the procedures we use to compute feasible grasping regions. In Section 5, we present the experimental results and conclusions.

Fig. 1. The vision system. (left side) Experimental setup with a stereo camera head, a humanoid robot arm (7 DOF) and an anthropomorphic robot hand (10 DOF). (right side) Visual processing Parts I and II: general flow diagram. Part I covers image acquisition and preprocessing, image processing and segmentation, 2D feature extraction and 2D grasp point generation; Part II covers 3D model reconstruction, 3D feature extraction and 3D grasp point generation; both feed a feasible grasp set or library through the user interface.

2 Mathematical Preliminaries

We consider a k-hard-fingered robot hand grasping a rigid polygonal object in a 2D workspace, as illustrated in Fig. 2. Since any curved planar object can be approximated by a polygon to any required degree of accuracy, we assume that the object to be grasped has a polygonal shape. We assume that the k fingers are positioned along n given different edges, respectively. To hold the object and balance any external forces and moments, within each edge a finger i must apply a force f_i to the object, called the grasping force. To assure non-slipping at the contact point p_i, the grasping force f_i must lie inside the friction sector defined by f_li and f_ri and centered about the internal surface normal at the contact point, with half-angle α_i. If it lies inside the friction sector, the grasping force f_i can be expressed as a positive linear combination of these two edge vectors: f_i = μ_ri f_ri + μ_li f_li, with coefficients μ_ri, μ_li ≥ 0. The force f_i and the moment m_i acting on the object can be combined into the wrench w_i = (f_i, m_i)^T, where m_i = p_i × f_i.
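The planar contact model just described can be sketched numerically as follows. This is an illustrative sketch, not the paper's implementation; the helper names are invented, and the friction-sector test simply solves the 2x2 system for the coefficients μ_ri, μ_li (assuming the edge vectors are linearly independent).

```python
import numpy as np

def contact_wrench(p, f):
    """Planar wrench w = (f_x, f_y, m) with m = p x f (scalar 2D cross product)."""
    m = p[0] * f[1] - p[1] * f[0]
    return np.array([f[0], f[1], m])

def in_friction_sector(f, f_l, f_r):
    """Check f = mu_r*f_r + mu_l*f_l with mu_r, mu_l >= 0."""
    A = np.column_stack([f_r, f_l])
    mu = np.linalg.solve(A, np.asarray(f, dtype=float))
    return bool(np.all(mu >= -1e-12))
```

For a cone with edge vectors f_r = (1, 1) and f_l = (-1, 1), a force pointing straight along the normal, (0, 2), lies inside the sector, while (2, 0) would need a negative coefficient and is rejected.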
By linearity, a contact wrench w_i generated by f_i can be expressed as the same positive linear combination of the edge wrenches: w_i = λ_ri w_ri + λ_li w_li, where w_ri and w_li are called primitive contact wrenches. The net wrench applied to the object by the k fingers is

w_net = Σ_{i=1}^{k} (λ_ri w_ri + λ_li w_li) = W λ    (1)

where W = [w_r1 w_l1, ..., w_rk w_lk] ∈ R^{3×2k} and λ = [λ_r1 λ_l1, ..., λ_rk λ_lk]^T ∈ R^{2k} are called the wrench matrix and the coefficient column vector, respectively. For more details on this topic, see also [1].

2.1 Modeling the Point of Contact

A point of contact with friction (sometimes referred to as a hard finger), as defined previously, imposes nonlinear constraints on the force inside its friction cones.

Fig. 2. Polygonal object to be grasped with a hard finger. (a) A force f_i applied within vertices v_i and v_{i+1}; it lies inside the friction cone defined by its edge vectors f_li and f_ri. (b) The corresponding wrench space of f_i, w_i = (f_i, m_i) ∈ R^3.

In this subsection, we simplify the problem by modeling the friction cones as convex polytopes using the theory of polyhedral convex cones attributed to Goldman and Tucker [19]. In order to construct the convex polytope from the primitive contact wrenches, the following theorem states that a polyhedral convex cone (PCC) can be generated by a set of basic directional vectors.

Theorem 1. A convex cone is polyhedral if and only if it is finitely generated, that is, the cone is generated by a finite number of vectors v_1, v_2, ..., v_m:

C = { Σ_{i=1}^{m} α_i v_i : v_i ∈ R^n, α_i ≥ 0 }    (2)

where the coefficients α_i are all non-negative. Since the vectors v_1 through v_m span the cone, we write (2) simply as C = span{v_1, v_2, ..., v_m}. The cone spanned by a set of vectors is the set of all non-negative linear combinations of its vectors. A proof of this theorem can be found in [24].
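Theorem 1's "finitely generated" characterization suggests a direct numerical membership test: a vector lies in span{v_1, ..., v_m} exactly when it is a non-negative combination of the generators. A hedged sketch (SciPy's non-negative least squares routine is assumed available; the generators are illustrative):

```python
import numpy as np
from scipy.optimize import nnls

def in_cone(x, generators, tol=1e-9):
    """True if x = sum_i alpha_i v_i for some alpha_i >= 0 (Theorem 1)."""
    V = np.column_stack(generators)  # n x m matrix of generators
    alpha, residual = nnls(V, np.asarray(x, dtype=float))
    return residual <= tol

gens = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
print(in_cone([2.0, 1.0], gens))   # inside: 1*(1,0) + 1*(1,1)
print(in_cone([-1.0, 0.0], gens))  # outside: would need a negative coefficient
```

The non-negative least-squares residual is zero precisely when such coefficients α_i exist, which is the defining condition in (2).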
Given a polyhedral convex set C, let vert(q) = {v_1, v_2, ..., v_m} stand for the vertices of a polytope q, while face(q) = {F_1, ..., F_M} denotes its faces. In the plane, a cone has the appearance shown in Fig. 3(b). This means that we can reduce the number of cone sides, m, to one face. Let us denote by q the convex polytope of a modelled cone, and by {v_1, v_2, v_3} its three vertices. We can define such a polytope as

q = { x ∈ R^n | x = Σ_{i=1}^{v_q} δ_i v_i : 0 ≤ δ_i ≤ 1, Σ_{i=1}^{v_q} δ_i = 1 }    (3)

where v_i denotes the i-th vertex of q, and v_q is the total number of vertices (n = 2 in the case of a 2D plane). Finally, we denote the intersection of a set of polytope cones (3) by

C_{q1}^{q3} = cone(q_1) ∩ cone(q_2) ∩ cone(q_3)    (4)

The set C_{q1}^{q3} is either empty or a convex polytope; this follows from the properties of convex polytope intersection.

Fig. 3. Contact model: (a) Friction cone. (b) A representation of a bounded polyhedral cone in the plane (x_1, x_2). The vectors v_l and v_u span the cone on its lower and upper sides. (c) Convex polytope and its extreme vertices. The pointed vertex v_1 joins the line segment at its midpoint, and c_i is the gradient of the segment that lies inside the cone.

Since we are interested in the bounded convex polytope, we know by definition that one of its extreme points is an optimal solution. Let v_i be a pointed vertex of q, let l_i(φ) be the line through v_i that makes an angle φ with the positive x-axis, and let h_i be the half-plane lying on l_i(φ). When φ increases from 0 to 2π, the line l_i(φ) (and hence q) rotates around v_i (see Fig. 3(c-d)). From this definition we can make the observation that, if a given extreme vertex is not an optimal solution, it is possible to move along its line.
To represent the vertex v_i on the line, we introduce a scalar parameter u_i, a unit direction t_i, and an end-vertex v_i0, so that v_i = v_i0 + u_i t_i, where u_i is constrained by 0 ≤ u_i ≤ l_i. The equation

c_i^T x_i,  i = 1, ..., k    (5)

represents the line containing the vertex v_i, where k is the number of constraints. We denote by c_i = [c_ix, c_iy] ∈ R^2 the vector of the first partial derivatives of (5), called the gradient; it is orthogonal to the line, as shown in Fig. 3(c-d), where the gradient points inside P. In Fig. 3(d), we change c_i to c_i + t u_i, where u_i is a vector that varies from the lower to the upper side of the spanned cone, and t is a parameter that increases from its initial value 0 to some upper bound. This allows us to check whether the optimal solution remains optimal for all c_i in the cone spanned by v_2 and v_3. In Section 4, we give a detailed procedure to refine the algorithm for (3) and show how to integrate it into a whole algorithm for computing force-closure grasps.

2.2 Force-Closure Grasps

Our results on force-closure grasps are based on the characterization of force-closure in [4-10]. Following our previous definitions, the full effect of the force and moment applied to an object is a wrench vector such as the one given in (1), where each column of the matrix is described in a three-dimensional wrench space.
Using (1), given a set of k point contacts on an object, the corresponding wrenches w_1, w_2, ..., w_k are said to achieve force-closure if the origin of the wrench space R^3 lies in the interior of the convex hull of the primitive contact wrenches [3]. Let us rewrite (1) as a convex hull defined by

co(W) = { Σ_{i=1}^{2k} λ_i w_i | λ_i ≥ 0, Σ_{i=1}^{2k} λ_i = 1, w_i ∈ W }    (6)

where the upper limit 2k counts the pairs of primitive wrenches as defined in the wrench matrix. Equation (6) means that the convex hull of the 2k primitive wrenches strictly contains the origin of R^3. The construction of force-closure grasps now becomes the problem of finding contact locations on the boundary of the object. When a fingertip makes contact on an edge (as shown in Fig. 2), the force direction can be any direction inside the friction cone with friction coefficient μ = tan(α). To adapt to our previous definitions, we restrict our consideration to fingertips that make contact on an edge, and only the edge forces delimiting the friction cone are considered.

3 Vision System Framework

We are currently developing a vision system and its applications to two- and three-dimensional object grasping, corresponding to Parts I and II, respectively (see Fig. 1). The vision system is based on an STH-MD1-C digital stereo head with an IEEE 1394 (FireWire) bus interface and SRI's Small Vision System (SVS) software for calibration and stereo correlation. The architecture of the whole system is organized into several modules implementing image processing, grasp generation and storage, which are embedded in a distributed object communication framework. Owing to its complexity, the flow diagram of visual processing has been divided into two parts. The first part provides details of 2D object grasping. The second part is dedicated to 3D object grasping using visual information retrieval. As shown in Fig. 1, the image acquisition stage primarily aims at the conversion of visual information into electrical signals suitable for computer interfacing.
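The force-closure criterion above (the origin of the wrench space lying strictly inside the convex hull of the primitive contact wrenches) can be tested numerically. The sketch below is not the authors' implementation: it assumes SciPy is available and uses the fact that `ConvexHull` facet equations a·x + b ≤ 0 hold with strict inequality exactly for interior points, so at the origin the test reduces to checking that every facet offset b is negative.

```python
import numpy as np
from scipy.spatial import ConvexHull

def is_force_closure(wrenches, tol=1e-9):
    """True if the origin of R^3 lies strictly inside conv{w_1, ..., w_2k}."""
    W = np.asarray(wrenches, dtype=float)
    if W.shape[0] <= W.shape[1]:  # too few wrenches for a full-dimensional hull
        return False
    try:
        hull = ConvexHull(W)
    except Exception:             # degenerate (flat) wrench set: empty interior
        return False
    # Each facet satisfies a.x + b <= 0 inside the hull; at x = 0 this is b < 0.
    return bool(np.all(hull.equations[:, -1] < -tol))
```

For example, six wrenches at ±e_1, ±e_2, ±e_3 surround the origin and pass the test, while the same set shifted away from the origin fails it.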
Then, the incoming image is subjected to processing with two purposes in mind: (1) removal of image noise via low-pass filtering using Gaussian filters, chosen for their computational simplicity, and (2) extraction of prominent edges via high-pass filtering using the Sobel operator. This information is finally used to group pixels into lines or any other edge primitives (circles, contours, etc.); this is the basis of the extensively used Canny edge detector. The basic step is thus to identify the main pixels that preserve the object's shape. As we determine grasping points visually, the following subsections provide some details of what our approach needs.

References

1. Shimoga K B (1996) Robot grasp synthesis algorithms: a survey. Int. J. of Robotics Research, 15(3):230-266
2. Murray R M, Li Z, Sastry S S (1994) A Mathematical Introduction to Robotic Manipulation. CRC Press, Boca Raton, New York
3. Mishra B, Schwartz J T, Sharir M (1987) On the existence and synthesis of multifinger positive grips. Algorithmica (3)
4. Martin B, Canny J (1994) Easily computable optimum grasps in 2-D and 3-D. IEEE Int. Conf. on Robotics and Automation
5. Kerr J, Roth B (1986) Analysis of multifingered hands. Int. J. of Robotics Research, 4(4):3-17
6. Nguyen V D (1988) Constructing force-closure grasps. Int. J. of Robotics Research, 7(3):3-16
7. Ferrari C, Canny J F (1992) Planning optimal grasps. Int. Conf. on Robotics and Automation, 2290-2295
8. Ponce J, Sullivan S, Boissonnat J D, Merlet J P (1993) On characterizing and computing three- and four-finger force-closure grasps of polyhedral objects. Int. Conf. on Robotics and Automation
9. Boudaba M, Casals A (2000) Robot grasps: a survey and development of a grasping procedure. Technical report ESAII-RR-00-15, Dept. ESAII, Technical University of Catalonia, Barcelona, Spain
10. Boudaba M, Casals A, … Finland
11. Brost R C (1991) Analysis and planning of planar manipulation tasks. Ph.D. thesis, Carnegie Mellon University School of Computer Science
12. Liu Y H (1998) Computing n-finger force-closure grasps on polygonal objects. Proc. IEEE Int. Conf. on Robotics and Automation, 2734-2739
13. Hirai S, Asada H (1993) Kinematics and statics of manipulation using the theory of polyhedral convex cones. Int. J. of Robotics Research
14. … processing vision. Int. Conf. on Robotics and Automation, 2309-2314
15. Maekawa H, Tanie K, Komoriya K (1995) Tactile sensor based manipulation of an unknown object by a multifingered hand with rolling contact. IEEE ICRA, 743-750
16. Yoshimi B, Allen P (1998) Visual control of grasping. In: Kriegman D, Hagar G, Morse S (eds) The Confluence of Vision and Control, 195-209, Springer-Verlag
17. Namiki A, … speed grasping using visual and force feedback. Int. Conf. on Robotics and Automation, 3195-3200
18. de Berg M, van Kreveld M, Overmars M, Schwarzkopf O (1997) Computational Geometry: Algorithms and Applications, 2nd ed. Springer-Verlag
19. Goldman A J, Tucker A W (1956) Polyhedral convex cones. In: Linear Inequalities and Related Systems, Annals of Math. Studies, Princeton, 38:19-40
20. Kvasnica M, Grieder …
