
Sensing, Intelligence, Motion - How Robots & Humans Move - Vladimir J. Lumelsky, Part 6

TangentBug, in turn, has inspired the procedures WedgeBug and RoverBug [69, 70] by Laubach, Burdick, and Matthies, which try to take into account issues specific to NASA planet rover exploration. A number of schemes with and without proven convergence have been reported by Noborio [71].

Given the practical needs, it is not surprising that many attempts in sensor-based planning strategies focus on distance sensing—stereo vision, laser range sensing, and the like. Some earlier attempts in this area tend to stick to the more familiar graph-theoretical approaches of computer science, and consequently treat space in a discrete rather than continuous manner. A good example of this approach is the visibility-graph-based approach by Rao et al. [72].

Standing apart is the approach described by Choset et al. [73, 74], which can be seen as an attempt to fill the gap between the two paradigms, motion planning with complete information (the Piano Mover's model) and motion planning with incomplete information [other names are sensor-based planning, or Sensing-Intelligence-Motion (SIM)]. The idea is to use sensor-based planning to first build the map and then the Voronoi diagram of the scene, so that future robot trips in this same area can follow shorter paths—for example, along links of the acquired Voronoi diagram. These ideas, and the applications that inspire them, are different from the go-from-A-to-B problem considered in this book and thus beyond our scope. They are closer to systematic space exploration and map-making. The latter, called in the literature terrain acquisition or terrain coverage, might be of use in tasks like robot-assisted map-making, floor vacuuming, lawn mowing, and so on (see, e.g., Refs. 1 and 75).

While most of the above works provide careful analysis of performance and convergence, the "engineering approach" heuristics to sensor-based motion planning procedures usually discuss their performance in terms of "consistently better than" or "better in our experiments," and so on. Since the idiosyncrasies of these algorithms are rarely analyzed, their utility is hard to assess. There have been examples when an algorithm published as provable turned out to be ruefully divergent even in simple scenes.[8]

Related to the area of two-dimensional motion planning are also works directed toward motion planning for a "point robot" moving in three-dimensional space. Note that the increase in dimensionality changes rather dramatically the formal foundation of the sensor-based paradigm. When moving in the (two-dimensional) plane, if the point robot encounters an obstacle, it has a choice of only two ways to pass around it: from the left or from the right, clockwise or counterclockwise. When a point robot encounters an object in three-dimensional space, it is faced with an infinite number of directions for passing around the object. This means that, unlike in the two-dimensional case, the topological properties of three-dimensional space cannot be used directly anymore when seeking guarantees of algorithm completeness.

[8] As the principles of design of motion planning algorithms have become clearer, in the last 10-15 years the level of sophistication has gone up significantly. Today the homework in a graduate course on motion planning can include an assignment to design a new provable sensor-based algorithm, or to decide if some published algorithm is or is not convergent.
Accordingly, the objectives of works in this area are usually directed toward complete exploration of objects. One such application is visual exploration of objects (see, e.g., Refs. 63 and 76): One attempts, for example, to come up with an economical way of automatically manipulating an object on the supermarket counter in order to locate the bar code on it.

Extending our go-from-A-to-B problem to mobile robot navigation in three-dimensional space will likely necessitate "artificial" constraints on the robot environment (which we were lucky not to need in the two-dimensional case), such as constraints on the shapes of objects, the robot's shape, some recognizable properties of objects' surfaces, and so on. One area where constraints appear naturally, as part of the system kinematic design, is motion planning for three-dimensional arm manipulators. The very fact that the arm links are tied into some kinematic structure and that the arm's base is bolted in place provides additional constraints that can be exploited in three-dimensional sensor-based motion planning algorithms. This is an exciting area, with much theoretical insight and much importance to practice. We will consider such schemes in Chapter 6.

3.9 WHICH ALGORITHM TO CHOOSE?

With the variety of existing sensor-based approaches and algorithms, one is entitled to ask a question: How do I choose the right sensor-based planning algorithm for my job? When addressing this question, we can safely exclude the Class 1 algorithms: For the reasons mentioned above, except in very special cases, they are of little use in practice.

As to Class 2, while different algorithms from this group usually produce different paths, one would be hard-pressed to recommend one of them over the others. As we have seen above, if in a given scene algorithm A performs better than algorithm B, their luck may reverse in the next scene. For example, in the scene shown in Figures 3.15 and 3.21, algorithm VisBug-21 outperforms algorithm VisBug-22, and then the opposite happens in the scene shown in Figure 3.23. One is left with the impression that when used with more advanced sensing, like vision and range finders, in terms of their motion planning skills just about any algorithm will do, as long as it guarantees convergence.

Some people like the concept of a benchmark example for comparing different algorithms. In our case this would be, say, a fixed benchmark scene with a fixed pair of start and target points. Today there is no such benchmark scene, and it is doubtful that a meaningful benchmark could be established. For example, the elaborate labyrinth in Figure 3.11 turns out to be very easy for the Bug2 algorithm, whereas the seemingly simpler scene in Figure 3.6 makes the same algorithm produce a tortuous path. It is conceivable that some other algorithm would have demonstrated an exemplary performance in the scene of Figure 3.6, only to look less brave in another scene. Adding vision tends to smooth algorithms' idiosyncrasies and to make different algorithms behave more similarly, especially in real-life scenes with relatively simple obstacles, but the said relationship stays.

[Figure 3.23 Scene 2. Paths generated (a) by algorithm VisBug-21 and (b) by algorithm VisBug-22.]

Furthermore, even seemingly simple questions—(1) Does using vision sensing guarantee a shorter path compared to using tactile sensing?
or (2) Does a better (that is, farther) vision buy us better performance compared to an inferior (that is, more myopic) vision?—have no simple answers. Let us consider these questions in more detail.

1. Does using vision sensing guarantee a shorter path compared to using tactile sensing? The answer is no. Consider the simple example in Figure 3.24. The robot's start S and target T points are very close to and on opposite sides of the convex obstacle that lies between them. By far the main part of the robot path will involve walking around the obstacle. During this time the robot will have little opportunity to use its vision, because at every step it will see only a tiny piece of the obstacle boundary; the rest of it will be curving "around the corner." So, in this example, robot vision will behave much like tactile sensing. As a result, the path generated by algorithm VisBug-21 or VisBug-22 or by some other "seeing" algorithm will be roughly no shorter than a path generated by a "tactile" algorithm, no matter what the robot's radius of vision r_v is. If points S and T are farther away from the obstacle, the value of r_v will matter more in the initial and final phases of the path, but still not when walking along the obstacle boundary.

When comparing "tactile" and "seeing" algorithms, the comparative performance is easier to analyze for less opportunistic algorithms, such as VisBug-21: Since the latter emulates a specific "tactile" algorithm by continuously shortcutting toward the farthest visible point on that algorithm's path, the resulting path will usually be shorter, and never longer, than that of the emulated "tactile" algorithm (see, e.g., Figure 3.14).

[Figure 3.24 In this scene, the path generated by an algorithm with vision would be almost identical to the path generated by a "tactile" planning algorithm.]

With more opportunistic algorithms, like VisBug-22, even this property breaks down: While the paths that algorithm VisBug-22 generates are often significantly shorter than the paths produced by algorithm Bug2, this cannot be guaranteed (compare Figures 3.13 and 3.21).

2. Does better vision (a larger radius of vision, r_v) guarantee better performance compared to an inferior vision (a smaller radius of vision)? We know already that for VisBug-22 this is definitely not so—a larger radius of vision does not guarantee shorter paths (compare Figures 3.21 and 3.14). Interestingly, even for the more stable VisBug-21 it is not so. The example in Figure 3.25 shows that, while VisBug-21 always does better with vision than with tactile sensing, more vision—that is, a larger r_v—does not necessarily buy better performance. In this scene the robot will produce a shorter path when equipped with a smaller radius of vision (Figure 3.25a) than when equipped with a larger radius of vision (Figure 3.25b).

The problem lies, of course, in the fundamental properties of uncertainty. As long as some, even a small, piece of relevant information is missing, anything may happen. A more experienced hiker will often find a shorter path, but once in a while a beginner hiker will outperform an experienced hiker. In the stock market, an experienced stock broker will usually outperform an amateur investor, but once in a while their luck will reverse.[9] In situations with uncertainty, more experience certainly helps, but it helps only on the average, not in every single case.
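The emulation argument for VisBug-21 above is easy to state in code. What follows is a minimal sketch of the shortcutting rule only, not the full VisBug-21 procedure of Section 3.6: the helper line_of_sight, the point ordering, and the precomputed Bug2 path are assumptions made to keep the example short (the real algorithm reconstructs the emulated path from sensing as it goes, rather than reading it from a list).

```python
import math

def visbug21_step(robot, target, bug2_path, r_v, line_of_sight):
    """One decision step of a VisBug-21-style planner (illustrative sketch).

    bug2_path:     points of the path the emulated "tactile" algorithm
                   (Bug2) would produce, ordered from S to T.
    r_v:           radius of vision.
    line_of_sight: assumed helper; True if the straight segment between
                   two points crosses no obstacle.
    Returns the point the robot should head toward next.
    """
    def sees(p):
        return math.dist(robot, p) <= r_v and line_of_sight(robot, p)

    # Shortcut straight to the target once it becomes visible.
    if sees(target):
        return target

    # Otherwise aim at the farthest visible point on the emulated Bug2
    # path; continuously shortcutting this way is what keeps the
    # VisBug-21 path from ever being longer than the emulated path.
    best = None
    for p in bug2_path:      # points are ordered along the path, so the
        if sees(p):          # last visible one is the farthest ahead
            best = p
    return best              # in the full algorithm a visible candidate
                             # on the emulated path is always guaranteed
```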
[9] On a quick glance, the same principle seems to apply to the game of chess, but it does not. Unlike in the other examples above, in chess the uncertainty comes not from the lack of information—complete information is right there on the table, available to both players—but from the limited amount of information that one can process in limited time. In a given time an experienced player will check more candidate moves than will a novice.

[Figure 3.25 Performance of algorithm VisBug-21 in the same scene (a) with a smaller radius of vision and (b) with a larger radius of vision. The smaller (worse) vision results in a shorter path!]

These examples demonstrate the variety of types of uncertainty. Notice another interesting fact: While the experienced hiker and the experienced stock broker can make use of probabilistic analysis, it is of no use in the problem of motion planning with incomplete information. A direction to pass around an obstacle that seems to promise a shorter path to the target may offer unpleasant surprises around the corner, compared to a direction that seemed less attractive before but is objectively the winner. It is far from clear how (and whether) one can impose probabilities on this process in any meaningful way. That is one reason why, in spite of high uncertainty, sensor-based motion planning is essentially a deterministic process.

3.10 DISCUSSION

The somewhat surprising examples above (see the last few figures in the previous section) suggest that further theoretical analysis of general properties of Class 2 algorithms may be of more benefit to science and engineering than a proliferation of algorithms that make little difference in real-world tasks. One interesting possibility would be to attempt a meaningful classification of scenes, with a predictive power over the performance of various algorithmic schemes. Our conclusions from the worst-case bounds on algorithm performance also beg for a similar analysis in terms of some other, perhaps richer than the worst-case, criteria.

This said, the material in this chapter demonstrates a remarkable success in the last 10-15 years in the state of the art in sensor-based robot motion planning. In spite of the formidable uncertainty and an immense diversity of possible obstacles and scenes, a good number of algorithms discussed above guarantee convergence: That is, a mobile robot equipped with one of these procedures is guaranteed to reach the target position if the target can in principle be reached; if the target is not reachable, the robot will reach this conclusion in finite time. The algorithms guarantee that the paths they produce will not circle in one area an indefinite number of times, or even a large number of times (say, no more than two or three). Twenty years ago, most specialists would have doubted that such results were even possible. On the theoretical level, today's results mean, to much surprise from the standpoint of earlier views on the subject, that purely local input information is not an obstacle to obtaining global solutions, even in cases of formidable complexity.

Interesting results raise our appetite for more results. Answers bring more questions, and this is certainly true for the area at hand. Below we discuss a number of issues and questions for which today we do not have answers.

Bounds on Performance of Algorithms with Vision.
Unlike with "tactile" algorithms, today there are no upper bounds on the performance of motion planning algorithms with vision, such as VisBug-21 or VisBug-22 (Section 3.6). While from the standpoint of theory it would be of interest to obtain bounds similar to the bound (3.13) for "tactile" algorithms, they would likely be of limited generality, for the following reasons.

First, to make such bounds informative, we would likely want to incorporate into them characteristics of the robot's vision—at least the radius of vision r_v, and perhaps the resolution, accuracy, and so on. After all, the reason for developing these bounds would be to know how vision affects robot performance compared to primitive tactile sensing. One would expect, in particular, that vision improves performance. As explained above, this cannot be expected in general. Vision does improve performance, but only "on the average," where the meaning of "average" is not clear. Recall some examples in the previous section: In some scenes a robot with a larger radius of vision r_v will perform worse than a robot with a smaller r_v. Making the upper bound reflect such idiosyncrasies would be desirable but also difficult.

Second, how far the robot can see depends not only on its vision but also on the scene it operates in. As the example in Figure 3.24 demonstrates, some scenes can bring the efficiency of vision down to almost that of tactile sensing. This suggests that characteristics of the scene, or of classes of scenes, should be part of the upper bounds as well. But, as geometry does not like probabilities, the latter is not a likely tool: It is very hard to generalize on distributions of locations and shapes of obstacles in the scene.

Third, given a scene and a radius of vision r_v, vastly different path performance will be produced for different pairs of start and target points in that same scene.

Moving Obstacles. The model of motion planning considered in this chapter (Section 3.1) assumes that obstacles in the robot's environment are all static—that is, they do not move. But obstacles in the real world may move. Let us call an environment where obstacles may be moving a dynamic (changing, time-sensitive) environment. Can sensor-based planning strategies be developed that are capable of handling a dynamic environment? Even more specifically, can the strategies that we developed in this chapter be used in, or modified to account for, a dynamic environment?

The answer is a qualified yes. Since our model and algorithms do not include any assumptions about the specifics of the geometry and dimensions of obstacles (or of the robot itself), they are in principle ideally suited for handling a dynamic environment. In fact, one can use the Bug and VisBug family algorithms in a dynamic environment without any changes. Will they always work? The answer is, "it depends," and the reason for the qualified answer is easy to understand. Assume that our robot moves with its maximum speed. Imagine that while operating under one of our algorithms—it does not matter which one—the robot starts passing around an obstacle that happens to be of more or less complex shape. Imagine also that the obstacle itself moves. Clearly, if the obstacle's speed is higher than the speed of the robot, the robot's chance to pass around the obstacle and ever reach the target is in doubt.
If, on top of that, the obstacle happens to also be rotating, so that it basically cancels the robot's attempts to pass around it, the answer is not even in doubt: The robot's situation is hopeless. In other words, the motion parameters of obstacles matter a great deal.

We now have two options to choose from. One is to use the algorithms as they are, but drop the promise of convergence. If the obstacles' speeds are low enough compared to the robot's, or if obstacles move more or less in one place, like a tree in the wind, then the robot will likely get where it intends. Even if obstacles move faster than the robot, but their shapes or directions of motion do not create situations as in the example above, the algorithms will still work well. But, if the situation is like the one above, there will be no convergence.

Or we can choose another option. We can guarantee convergence of an algorithm, but impose some additional constraints on the motion of objects in the robot's workspace. If a specific environment satisfies our constraints, convergence is guaranteed. The softer those constraints, the more universal the resulting algorithms. There has been very little research in this area.

For those who need a real-world incentive for such work, here is an example. Today there are hundreds of human-made dead satellites in the space around Earth. One can bet that all of them have been designed, built, and launched at high cost. Some of them are beyond repair and should be hauled to a satellite cemetery. Some others could be revived after a relatively simple repair—for example, by replacing their batteries. For a long time, NASA (National Aeronautics and Space Administration) and other agencies have been thinking of designing a robot space vehicle capable of doing such jobs.

Imagine we designed such a system: It is agile and compact; it is capable of docking with, repairing, and hauling space objects; and, to allow maneuvering around space objects, it is equipped with a provable sensor-based motion planning algorithm. Our robot—call it R-SAT—arrives at some old satellite "in a coma"—call it X. The satellite X is not only moving along its orbit around the Earth, it is also tumbling in space in some arbitrary way. Before R-SAT starts on its repair job, it will have to fly around X to review its condition and its usability. It may need to attach itself to the satellite for a more involved analysis. To do this—fly around or attach to the satellite surface—the robot needs to be capable of speeds that would allow these operations.

If the robot arrives at the site without any prior analysis of satellite X's condition, this amounts to choosing the first option above: No convergence of R-SAT's motion planning around X is guaranteed. On the other hand, a decision to send R-SAT to satellite X might have been made after some serious remote analysis of X's rate of tumbling. The analysis might have concluded that the rate of tumbling of satellite X was well within the abilities of the R-SAT robot. In our terms, this corresponds to adhering to the second option and to satisfying the right constraints—and then R-SAT's motion planning will have guaranteed convergence.

Multirobot Groups. One area where the said constraints on obstacles' motion come naturally is multirobot systems. Imagine a group of mobile robots operating in a planar scene. In line with our usual assumption of a high level of uncertainty, assume that the robots are of different shapes and the system is highly decentralized.
That is, each robot makes its own motion planning decisions without informing the other robots, and so each robot knows nothing about the motion planning intentions of the other robots. When feasible, this type of control is very reliable and well protected against communication and other errors.

Decentralized control in multirobot groups is desirable in many settings. For example, it would be of much value in a "robotic" battlefield, where continuous centralized control from a single commander would amount to sacrificing the system's reliability and fault tolerance. The commander may give general commands from time to time—for instance, on changing goals for the whole group or for specific robots (which is the equivalent of prescribing each robot's next target position)—but most of the time the robots will be making their own motion planning decisions.

Each robot presents a moving obstacle to the other robots. (Then there may also be static obstacles in the workspace.) There is, however, an important difference between this situation and the situation above with arbitrary moving obstacles. You cannot have any beforehand agreement with an arbitrary obstacle, but you can have one with other robots. What kind of agreement would be unconstraining enough and would not depend on shapes, dimensions, and locations? The system designers may prescribe, for example, that if two robots meet, each robot will attempt to pass around the other only clockwise. This effectively eliminates the above difficulty with algorithm convergence in the situation with moving obstacles.[10] (More details on this model can be found in Ref. 77; a sketch of the convention appears below.)

[10] Note that this is the spirit of the automobile traffic rules.
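The appeal of the clockwise-pass agreement is that it needs no communication at all. Here is a deliberately small sketch of that convention; the names, the clearance-based trigger, and the reduction of each robot to a point are assumptions made for the example, and in a full planner this heading would feed the boundary-following mode of a Bug- or VisBug-style procedure rather than steer the robot directly.

```python
import math

def avoidance_heading(my_pos, other_pos, clearance):
    """Heading that passes a sensed robot on the agreed side (sketch).

    Every robot, without negotiating, swerves so that it walks around
    the other robot clockwise. Because both parties follow the same
    rule, the symmetric head-on deadlock cannot arise.
    Returns a heading in radians, or None if no avoidance is needed.
    """
    dx = other_pos[0] - my_pos[0]
    dy = other_pos[1] - my_pos[1]
    if math.hypot(dx, dy) > clearance:
        return None                      # nothing to avoid; keep course
    bearing = math.atan2(dy, dx)
    # Rotating the line-of-sight bearing by +90 degrees keeps the other
    # robot on the right-hand side, which traces a clockwise pass
    # around it.
    return bearing + math.pi / 2.0
```

The design point is only that the tie-breaking side is fixed by convention rather than negotiated, which is what restores convergence in this restricted dynamic setting.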
Needs for More Complex Algorithms. One area where good analysis of algorithms is extremely important for theory and practice is sensor-based motion planning for robot arm manipulators. Robot manipulators sometimes operate in a two-dimensional space, but more often they operate in three-dimensional space. They have complex kinematics, and they have parts that change their relative positions in complex ways during the motion. Not rarely, their workspace is filled with obstacles and with other machinery (which are also obstacles). Careful motion planning is essential. Unlike with mobile robots, which usually have simple shapes and can be controlled in an intuitively clear fashion, intuition helps little in designing new algorithms or even in predicting the behavior of existing algorithms for robot arm manipulators.

As mentioned above, the performance of the Bug2 algorithm deteriorates when dealing with situations that we called in-position. In fact, this is likely so for all Class 2 motion planning algorithms. Paths tend to become longer, and the robot may produce local cycles that keep "circling" in some segments of the path. The chance of in-position situations becomes very persistent, almost guaranteed, with arm manipulators. This puts a premium on good planning algorithms. This area is very interesting and very unintuitive. Recall that today about 1,000,000 industrial arm manipulators are busy fueling the world economy. Two chapters of this book, Chapters 5 and 6, are devoted to the topic of sensor-based motion planning for arm manipulators.

The importance of motion planning algorithms for robot arm manipulators is also reinforced by their connection to teleoperation systems. Space-operator-guided robots (such as arm manipulators on the Space Shuttle and International Space Station), robot systems for cleaning nuclear reactors, robot systems for detonating mines, and robot systems for helping in safety operations are all examples of teleoperation systems. Human operators are known to make mistakes in such tasks. They have difficulty learning the necessary skills, and they tend to compensate for difficulties by slowing the operation down to a crawl. (Some such problems will be discussed in Chapter 7.) This rules out tasks where at least a "normal" human speed is a necessity.

One potential way out of this difficulty is to divide responsibilities between the operator and the robot's own intelligence, whereby the operator is responsible for higher-level tasks—planning the overall task, changing the plan on the fly if needed, or calling the task off if needed—whereas lower-level tasks like obstacle collision avoidance would be the robot's responsibility. The two types of intelligence, human and robot intelligence, would then be combined in one control system in a synergistic manner. Designing the robot's part of the system would require (a) the type of algorithms that will be considered in Chapters 5 and 6 and (b) sensing hardware of the kind that we will explore in Chapter 8.

Turning back to motion planning algorithms for mobile robots, note that nowhere until now have we talked about the effect of robot dynamics on motion planning. This implicitly assumed, for example, that any sharp turn in the robot's path dictated by the planning algorithm was deemed feasible. For a real robot with reasonable mass and speed, this is of course not so. In the next chapter we will turn to the connection between robot dynamics and motion planning.

3.11 EXERCISES

1. Recall that in the so-called out-position situations (Section 3.3.2) the algorithm Bug2 has a very favorable performance: The robot is guaranteed to have no cycles in the path (i.e., to never pass a path segment more than once). On the other hand, in-position situations can sometimes produce long paths with local cycles. For a given scene, the in-position situation was defined in Section 3.3.2 as a situation when either the Start or Target point, or both, lie inside the convex hull of obstacles that the line (Start, Target) intersects. Note that the in-position situation is only a sufficient condition for trouble: Simple examples can be designed where no cycles are produced in spite of the in-position condition being satisfied. Try to come up with a necessary and sufficient condition—call it GOODCON—that would guarantee a no-cycle performance by the Bug2 algorithm. Your statement would say: "Algorithm Bug2 will produce no cycles in the path if and only if condition GOODCON is satisfied."
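Before attacking Exercise 1, it may help to make the in-position test itself concrete. The sketch below is one possible reading of the definition above, using the shapely geometry library; treating each obstacle's convex hull separately is an interpretation made for the example, not the book's formal statement.

```python
from shapely.geometry import LineString, Point, Polygon

def is_in_position(start, target, obstacles):
    """Test for the in-position situation of Section 3.3.2 (sketch).

    start, target: (x, y) tuples; obstacles: lists of vertex tuples.
    Reading used here: the situation is in-position if Start or Target
    (or both) lies inside the convex hull of some obstacle that the
    M-line (Start, Target) intersects.
    """
    m_line = LineString([start, target])
    s, t = Point(start), Point(target)
    for verts in obstacles:
        poly = Polygon(verts)
        hull = poly.convex_hull
        if m_line.intersects(poly) and (hull.contains(s) or hull.contains(t)):
            return True
    return False

# Example: the M-line pierces a U-shaped obstacle whose convex hull
# contains the target, so the situation is in-position.
u_shape = [(0, 0), (4, 0), (4, 4), (3, 4), (3, 1), (1, 1), (1, 4), (0, 4)]
print(is_in_position((-2, 2), (2, 2), [u_shape]))   # True
```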
2. The following sensor-based motion planning algorithm, called AlgX (see the procedure below), has been suggested for moving a mobile point automaton (MA) in a planar environment with unknown arbitrarily shaped obstacles. MA knows its own position and that of the target location T, and it has tactile sensing; that is, it learns about an obstacle only when it touches it. AlgX makes use of the straight lines that connect MA with point T and are tangential to the obstacle(s) at the MA's current position. The questions being asked are:

• Does AlgX converge?
• If the answer is "yes," estimate the performance of AlgX.
• If the answer is "no," why not? Explain and give a counterexample. Using the same idea of the tangential lines connecting MA and T, try to fix the algorithm. Your procedure must operate with finite memory. Estimate its performance.
• Develop a test for target reachability.

Just like the Bug1 and Bug2 algorithms, the AlgX procedure also uses the notions of (a) hit points, H_j, and leave points, L_j, on the obstacle boundaries and (b) local directions. Given the start S and target T points, here are some necessary details: [...]

ACCOUNTING FOR BODY DYNAMICS: THE JOGGER'S PROBLEM

[...] turn on a dime and hence can execute any sharp turn if prescribed by its motion planning software. Most of existing approaches to motion planning (including those within the Piano Mover's model) assume, first, that the system is holonomic and, second, that it will behave [...]

[...] Any provable maze-searching algorithm can be used for the kinematic part of the algorithm that we are about to build, as long as it allows distant sensing. For specificity only, we use here the VisBug algorithm (see Section 3.6; either VisBug-21 or VisBug-22 will do). VisBug algorithms alternate between these two operations (see Figure 4.1):

1. Walk from point S toward point T along the M-line until, at [...]

[...] stop every time it intends to turn, let it turn, and resume the motion as needed. Not many applications will like such a stop-and-go motion pattern. For a realistic control we want the robot to make turns on the move, and not stop unless "absolutely necessary," whatever this means. That is, in addition to the usual problem of "where to go" and how to guarantee the algorithm's convergence in view of incomplete [...]

[...] of the SIM (Sensing-Intelligence-Motion) paradigm. To be sure, such control can in principle be incorporated in the Piano Mover's paradigm as well. One way to do this is, for example, to divide the motion planning process into two stages: First, a path is produced that satisfies the geometric constraints, and then this path is modified to fit the dynamic constraints [79], possibly in a time-optimal fashion [...]

[...] Section 3.1 can be used for more rigorous definitions. Define the M-line (Main line) as the straight-line segment (S, T) (Figure 4.1). The M-line is the robot's desired path. When, while moving along the M-line, the robot senses an obstacle crossing the M-line, the crossing point on the obstacle boundary is called a hit point, H. The corresponding M-line point "on the other side" of the obstacle is a leave [...]

[...] explain to you how the motions of different kinds of matter depend on a property called inertia.
—Sir William Thomson (Lord Kelvin), The Tides

4.1 PROBLEM STATEMENT

As discussed before, motion planning algorithms usually adhere to one of the two paradigms that differ primarily by their assumptions about input information: motion planning with complete information (the Piano Mover's problem) and motion planning [...]

[...] (4.8) and (4.9) always include at least one safe solution: By the algorithm design, the straight-line motion with maximum braking, (p, q) = (−p_max, 0), is always collision-free (for more detail, see Ref. 96).

4.2.6 The Algorithm

The resulting algorithm consists of three procedures:

• Main Body. This defines the motion within the time interval [t_i, t_{i+1}) toward the intermediate target T_i.
• Define Next Step. [...]

[...] and V = 0 will result in a straight-line constant velocity motion.
2. Robot motion is controlled in steps i, i = 0, 1, 2, .... Each step takes time δt = t_{i+1} − t_i = const. The step's length depends on the robot's velocity within [...]

[Figure 4.1 An example of a conflict between the performance of a kinematic algorithm (e.g., VisBug-21, the solid line path) and the [...]; the figure shows the robot's path from S around an obstacle to T, with the M-line, hit point H, points Q, L, P, C, and the radius of vision r_v marked.]

[...] steps where the collision is inevitable. How many steps of look-ahead are enough? This is one thing that we need to figure out. Below we will study the said effects, with the same objective as before—to design provably correct sensor-based motion planning algorithms. As before, the presence of uncertainty implies that no global optimality of the path is feasible. Notice, however, that given the need to plan for [...]

[...] the L∞-norm; that is, the velocity and acceleration components are assumed bounded with respect to a fixed (absolute) reference system. This allows one to decouple the equations of robot motion and treat the two-dimensional problem as two one-dimensional problems.[1]

[1] Though comparisons between algorithms belonging to the two paradigms are difficult, one comparison seems to apply here. Using the L∞-norm [...]
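The excerpts above already contain the essence of the dynamics-aware control loop: move in fixed time steps toward the intermediate target supplied by the kinematic (VisBug) layer, and accept a step only if a full-braking stop afterwards remains collision-free, which is the guaranteed solution (p, q) = (−p_max, 0). The sketch below is a one-axis reading of that idea, exploiting the L∞-norm decoupling mentioned in the footnote; the names and the stopping-distance test are assumptions made for the example, not the book's equations (4.8)-(4.9).

```python
def next_velocity_1d(x, v, x_free, dt, a_max):
    """One control step along a single decoupled axis (sketch).

    x, v:   current position and (nonnegative) velocity on this axis.
    x_free: farthest position known from sensing to be collision-free.
    dt:     fixed step duration; a_max: acceleration/braking bound.

    Safety invariant mirroring the excerpt's guaranteed solution:
    after any accepted step, braking at -a_max must stop the robot
    before x_free.
    """
    def stops_inside(v_next):
        x_next = x + v_next * dt                     # position after step
        braking = v_next * v_next / (2.0 * a_max)    # distance to stop
        return x_next + braking <= x_free

    # Prefer accelerating toward the intermediate target; if that would
    # break the invariant, coast; failing that, brake at the maximum
    # rate, which by construction keeps the invariant satisfiable.
    for a in (a_max, 0.0, -a_max):
        v_next = v + a * dt
        if v_next >= 0.0 and stops_inside(v_next):
            return v_next
    return max(v - a_max * dt, 0.0)   # emergency braking fallback
```

A planar controller would run one such step per axis, while the kinematic layer keeps supplying the intermediate target T_i.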
