Extending AI Planning to Solve more Realistic Problems

The idea of deriving a heuristic function consists of formulating a simplified version of the planning problem by relaxing some of its constraints. The relaxed problem can be solved easily and quickly compared to the original problem, and its solution can then be used as a heuristic function that estimates the distance to the goal in the original problem. The most common relaxation method used for propositional planning is to ignore the negative effects of actions. This method was originally proposed in (McDermott, 1996) and (Bonet et al., 1997) and then used by most propositional heuristic planners (Bonet & Geffner, 2001; Hoffmann, 2001; Refanidis & Vlahavas, 2001). With the advent of planners that solve problems with numerical knowledge such as metric resources and time, a new relaxation method has been proposed to simplify the numerical part of the problem. As proposed in Metric-FF (Hoffmann, 2002) and SAPA (Do & Kambhampati, 2001), numerical state variables can be relaxed by ignoring the decreasing effects of actions. This numerical relaxation has been presented as an extension of the propositional relaxation for solving planning problems that contain both propositional and numeric knowledge. However, some planning problems contain actions that strictly increase or decrease numeric variables without alternation, and other problems use numeric variables to represent real-world objects that have to be handled according to their quantity (Zalaket & Camilleri, 2004a); applying the above numerical relaxation method can therefore be inadequate for this kind of problem.
In this section, we start by explaining the relaxed propositional task as it was proposed for STRIPS problems (McDermott, 1996). We then introduce a new relaxation method for numerical tasks, in which we relax the numeric action effects by ignoring the effects that move numeric variable values away from their goal values. Next, we present the calculation of a heuristic function using a relaxed planning graph over which we apply the above relaxation methods. Finally, we present the use of the obtained heuristic to guide the search for a plan in a variation of the hill-climbing algorithm.

5.1 Propositional task relaxation

Relaxing a propositional planning task is obtained by ignoring the negative effects of actions.

Definition-6: Given a propositional planning task P = <S, A, s_I, G>, the relaxed task P' of P is defined as P' = <S, A', s_I, G>, such that: ∀ a ∈ A with eff_P(a) = eff_P+(a) ∪ eff_P-(a), ∃ a' ∈ A' such that eff_P(a') = eff_P+(a) (which means eff_P(a') = eff_P(a) - eff_P-(a)). Thus, A' = { <con_P(a), pre_P(a), eff_P+(a)>, ∀ a ∈ A }. The relaxed task can be solved in polynomial time, as proven by Bylander (Bylander, 1994).

5.2 Numerical task relaxation

Relaxing a numerical planning task is obtained by ignoring the negative effects of actions, i.e. those that move numeric values away from the goal values.

Definition-7: Given a numerical planning task V = <S, A, s_I, G>, the relaxed task V' of V is defined as V' = <S, A', s_I, G>, such that: ∀ a ∈ A, eff_N(a) = eff_N+(a) ∪ eff_N-(a), where for each (n := v) ∈ eff_N(a), n is a numeric variable and v is a constant numeric value that can be the result of an arithmetic expression or of an executed external function. The positive numeric effects eff_N+(a) and the negative numeric effects eff_N-(a) are defined as follows, for each (n = v_I) ∈ s_I, where v_I is a constant numeric value that represents the initial value of n:
if (n θ v_G) ∈ G, where θ ∈ {<, ≤, =, >, ≥} and v_G is a constant numeric value or the result of an arithmetic expression or of an executed external function, then:
    if distance(v, v_G) ≤ distance(v_I, v_G) and distance(v_I, v) ≤ distance(v_I, v_G)
    (the current value v of the numeric variable n is closer to the goal value v_G than the initial value v_I is, approached from the initial-value side)
    then (n := v) ∈ eff_N+(a)
    else (n := v) ∈ eff_N-(a)
    end if
else (n := v) ∈ eff_N+(a)  // n does not appear in the goal state
end if

Example of the distance calculation. Assume that:
- We have a numeric variable n which is equal to 0 in the initial state (v_I = 0) and equal to 5 in the goal state (v_G = 5).
- We have an action a which assigns to n, respectively, the values v_1 = -3, v_2 = -1, v_3 = 1, v_4 = 5, v_5 = 7, v_6 = 11.

In this case the distance is calculated as distance(v_j, v_i) = |v_j - v_i|. Testing the relaxed action effects, with distance(v_I, v_G) = |v_G - v_I| = 5:
- v_1 = -3: distance(v_1, v_G) = |v_G - v_1| = 8 > distance(v_I, v_G) ⇒ (v_1 = -3) ∈ eff_N-(a) ⇒ v_1 = -3 is ignored in the relaxed task.
- v_2 = -1: distance(v_2, v_G) = |v_G - v_2| = 6 > distance(v_I, v_G) ⇒ (v_2 = -1) ∈ eff_N-(a) ⇒ v_2 = -1 is ignored in the relaxed task.
- v_3 = 1: distance(v_3, v_G) = |v_G - v_3| = 4 ≤ distance(v_I, v_G) and distance(v_I, v_3) = |v_3 - v_I| = 1 ≤ distance(v_I, v_G) ⇒ (v_3 = 1) ∈ eff_N+(a) ⇒ v_3 = 1 is kept in the relaxed task.
- v_4 = 5: distance(v_4, v_G) = |v_G - v_4| = 0 ≤ distance(v_I, v_G) and distance(v_I, v_4) = |v_4 - v_I| = 5 ≤ distance(v_I, v_G) ⇒ (v_4 = 5) ∈ eff_N+(a) ⇒ v_4 = 5 is kept in the relaxed task.
- v_5 = 7: distance(v_5, v_G) = |v_5 - v_G| = 2 ≤ distance(v_I, v_G), but distance(v_I, v_5) = |v_5 - v_I| = 7 > distance(v_I, v_G) ⇒ (v_5 = 7) ∈ eff_N-(a) ⇒ v_5 = 7 is ignored in the relaxed task.
- v_6 = 11: distance(v_6, v_G) = |v_6 - v_G| = 6 > distance(v_I, v_G) ⇒ (v_6 = 11) ∈ eff_N-(a) ⇒ v_6 = 11 is ignored in the relaxed task.
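For an equality goal, Definition-7's test can be sketched as follows (a Python rendering for illustration; the authors' prototypes were written in Java, and the function names here are our own):

```python
def distance(a, b):
    """Distance used for an equality goal (n = v_G); the chapter notes
    that the formula varies with the goal's comparison operator."""
    return abs(a - b)

def is_positive_effect(v, v_init, v_goal):
    """Definition-7: keep the assignment n := v in the relaxed task
    (i.e., put it in eff_N+) iff v is no farther from the goal value
    than the initial value is, approached from the initial-value side."""
    return (distance(v, v_goal) <= distance(v_init, v_goal)
            and distance(v_init, v) <= distance(v_init, v_goal))

# Worked example from the text: v_I = 0, v_G = 5
kept = [v for v in (-3, -1, 1, 5, 7, 11) if is_positive_effect(v, 0, 5)]
print(kept)  # [1, 5] -- only v_3 = 1 and v_4 = 5 are kept in the relaxed task
```

Running the check over the six example assignments reproduces the classification above: only 1 and 5 land in eff_N+(a).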
Remarks: The distance formula can vary according to the comparison operator used in the goal state, but it is the same for all numeric values used in the initial and the goal state. Each numeric variable that appears in the initial state and does not appear in the goal conditions is automatically added to the positive numeric effects, because the values of these variables are often used as preconditions of actions and thus cannot be ignored.

Figure 5 shows (in red) how the negative numeric effects of an action that updates a numeric variable n are determined. It also shows (in blue) the positive numeric effects of the action, which are determined according to the initial and goal values of the variable n. Note that exchanging the values of n between the initial and goal states does not affect the ranges of selected positive and negative numeric effects.

Fig. 5. Choosing negative and positive numeric action effects

Fig. 6. Numeric relaxed action effects variation according to goal comparison operators

Figure 6 shows how the selection of negative (in red) and positive (in blue) numeric effects depends on the comparison operator used for comparing the numeric variable n in the goal conditions. The distance formula is therefore calculated according to the operator used, irrespective of the values of n in the initial and goal states. As can be observed in this figure, a tighter range of positive numeric effects is obtained when the equality operator is used to compare the value of n in the goal conditions; consequently a smaller search space is generated for the relaxed problem, which accelerates the search for a plan for that problem.
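The chapter does not spell out the per-operator distance formulas behind Fig. 6, but one plausible instantiation (our assumption, not the authors' definition: under an inequality goal, a value that already satisfies the comparison is taken to be at distance zero, and the "initial-value side" test is applied only for equality goals) is:

```python
def distance(v, target, op):
    """Hypothetical per-operator distance."""
    if op == "=":
        return abs(v - target)
    if op in (">=", ">"):
        return max(0, target - v)   # only undershoot moves away from the goal
    if op in ("<=", "<"):
        return max(0, v - target)   # only overshoot moves away from the goal
    raise ValueError("unknown operator: " + op)

def is_positive(v, v_init, v_goal, op):
    """Assumed reading of Fig. 6: keep a value that is no farther from the
    goal than the initial value was; for an equality goal, also require it
    to stay on the initial-value side (Definition-7)."""
    if distance(v, v_goal, op) > distance(v_init, v_goal, op):
        return False
    if op == "=" and abs(v - v_init) > distance(v_init, v_goal, op):
        return False
    return True

v_init, v_goal = 0, 5
candidates = (-3, -1, 1, 5, 7, 11)
kept_eq = [v for v in candidates if is_positive(v, v_init, v_goal, "=")]
kept_ge = [v for v in candidates if is_positive(v, v_init, v_goal, ">=")]
print(kept_eq, kept_ge)  # [1, 5] [1, 5, 7, 11]
```

Under this reading, the equality goal keeps only [1, 5] while the ≥ goal keeps [1, 5, 7, 11], which is consistent with Fig. 6's observation that equality yields the tightest positive range.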
5.3 Mixed planning problem relaxation

Definition-8: Given a mixed propositional and numerical planning problem P = <S, A, s_I, G>, the relaxed problem P' of P is defined as P' = <S, A', s_I, G>, such that: ∀ a ∈ A with eff(a) = eff_P(a) ∪ eff_N(a), eff_P(a) = eff_P+(a) ∪ eff_P-(a) and eff_N(a) = eff_N+(a) ∪ eff_N-(a), ∃ a' ∈ A' such that eff(a') = eff_P+(a) ∪ eff_N+(a). Thus, A' = { <con_P(a), pre(a), eff+(a) = eff_P+(a) ∪ eff_N+(a)>, ∀ a ∈ A }.

Definition-9: A sequence of applicable actions {a_1, a_2, …, a_n} is a relaxed plan for the planning problem P = <S, A, s_I, G> if {a'_1, a'_2, …, a'_n} is a plan of its relaxed problem P' = <S, A', s_I, G>.

6. Relaxed planning graph with function application

Like the planning graph structure used in the adapted Graphplan algorithm, the relaxed planning graph consists of two types of levels: fact levels and action levels. Algorithm-2 (see Fig. 7) shows how the relaxed planning graph is expanded until reaching a fact level that satisfies the goal conditions, or until obtaining consecutive duplicated fact levels. This test is done by the function testForSolution(Facts, Actions, G, Plan), which is modified compared to the one used in the adapted Graphplan implementation (Fig. 3). Compared to algorithm-1 (Fig. 3), algorithm-2 (Fig. 7) applies only the positive propositional and numeric effects of actions when generating the next fact level, as discussed in section-5. An additional relaxation is added to the planning graph construction in algorithm-2: the mutual exclusions between facts and between actions are ignored. Therefore, the initialization subroutine for algorithm-2 is the same as in Fig. 2 but without the mutual exclusion lists. This latter relaxation allows the relaxed planning graph to apply conflicting actions in parallel, and thus to reach the goal state faster, in polynomial time.
The test for solution:

boolean testForSolution(Facts: the set of all fact levels, Actions: the set of action levels, G: set of goal conditions, Plan: ordered set of the actions to be returned) {
    /* this function tests if G is satisfied in Facts and if a relaxed plan can be found */
    if G is satisfied in Facts then
        if Actions = {} then
            Plan := {}; return true;
        elseif the graph is saturated then
            Plan := {'failure'}; return true;
        elseif G is satisfied at Facts[final_level] then
            // extract a relaxed plan, see algorithm-3
            ExtractRelaxedPlan(Facts, Actions, G, Plan);
        end if
    end if
    return false;
}

Algorithm-2: Relaxed planning graph with external function application
Input: s_0: initial state, G: goal conditions, Act: set of actions
Output: Plan: sequence of ground actions
begin
    call initialization(); /* a subroutine that initializes variables */
    /* relaxed planning graph construction iterations */
    while (not Stop) do
        Action_i := {};
        for all a ∈ Act do
            InstNumeric(pre_N(a), Fact_i);
            if con_N(a) = true in Fact_i then
                if pre_P(a) ⊆ Fact_i and pre_N(a) are all true in Fact_i then
                    InstNumeric(eff_N(a), Fact_i);
                    Action_i := Action_i ∪ {a};
                    PointPreconditions(a, Fact_i);
                end if
            end if
        end for
        Actions := Actions ∪ Action_i;
        /* add the facts of the previous level with their "no-op" actions */
        i := i + 1;
        Fact_i := Fact_{i-1};
        for each f ∈ Fact_{i-1} do
            Action_{i-1} := Action_{i-1} ∪ {"no-op"};
        end for
        /* apply the applicable positive instantiated actions */
        for all a ∈ Action_{i-1} such that a ≠ "no-op" do
            Fact_i := Fact_i ∪ eff_P+(a);
            for each (n := g) ∈ eff_N+(a) do
                if g is an external function then
                    call the function g;
                else
                    evaluate the expression g;
                end if
                /* add a new value to the multi-valued attribute n */
                n := n ∪ {g};
            end for
            Connect a to its added effects;
        end for
        Facts := Facts ∪ Fact_i;
        Stop := testForSolution(Facts, Actions, G, Plan);
    end while
end

Fig. 7.
The relaxed planning graph construction algorithm

6.1 Relaxed plan extraction

Once the relaxed planning graph has been constructed using algorithm-2 (Fig. 7) up to a level that satisfies the goals, the extraction process can be applied in backward chaining, as shown in algorithm-3 (Fig. 8), which details the ExtractRelaxedPlan function called by the testForSolution function of algorithm-2 (section-6):

Algorithm-3: Extract a plan in backward chaining from the relaxed planning graph
Name: ExtractRelaxedPlan
Input: Facts: set of fact levels, G: goal conditions, Actions: set of action levels
Output: Plan: sequence of ground actions
begin
    Plan := {};
    G_final_level := { g ∈ Facts[final_level] / g satisfies G };
    for i = final_level to 1 do
        G_{i-1} := {};
        for each g ∈ G_i do
            acts := {actions at level i-1 that add g};
            selAct := get_first_element_of(acts);
            if selAct ≠ "no-op" then
                for act ∈ acts do
                    if act = "no-op" then
                        selAct := act; break;
                    end if
                    // select the action that has the minimum number of preconditions
                    if nb_preconditions_of(act) < nb_preconditions_of(selAct) then
                        selAct := act;
                    end if
                end for
            end if
            Plan := Plan ∪ {selAct};
            G_{i-1} := G_{i-1} ∪ { f ∈ Facts[i-1] s.t. f is a precondition of selAct };
        end for
    end for
end

Fig. 8. Plan extraction from a relaxed planning graph

Each sub-goal in the final fact level (the level that satisfies the goal conditions) is replaced by the preconditions and the implicit preconditions (definitions 3 and 4) of the action that adds it, and that action is added to the relaxed plan. A "no-op" action is preferred whenever one adds the sub-goal. If no "no-op" action adds the sub-goal and more than one action adds it, we choose among the latter the action that has the minimum number of preconditions and implicit preconditions.
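A minimal propositional sketch of this backward extraction (our Python rendering of algorithm-3; the chapter's version also handles implicit preconditions, numeric facts, and backtracking):

```python
from collections import namedtuple

Action = namedtuple("Action", "name pre add")  # minimal action, names assumed

def extract_relaxed_plan(fact_levels, action_levels, goal):
    """Backward-chain from the level satisfying the goals: support each
    sub-goal, preferring a no-op (the fact already holds one level earlier),
    else the achiever with the fewest preconditions."""
    plan = []
    goals = set(goal)
    for i in range(len(action_levels), 0, -1):
        prev_goals = set()
        for g in goals:
            if g in fact_levels[i - 1]:      # "no-op" preferred
                prev_goals.add(g)
                continue
            achievers = [a for a in action_levels[i - 1] if g in a.add]
            sel = min(achievers, key=lambda a: len(a.pre))  # assumes g is achievable
            plan.append(sel)
            prev_goals |= set(sel.pre)
        goals = prev_goals
    return plan                              # heuristic: h = len(plan)

# Tiny example: p --a1--> q --a2--> r
a1 = Action("a1", {"p"}, {"q"})
a2 = Action("a2", {"q"}, {"r"})
levels = [{"p"}, {"p", "q"}, {"p", "q", "r"}]
plan = extract_relaxed_plan(levels, [[a1], [a1, a2]], {"r"})
print([a.name for a in plan])  # ['a2', 'a1'], so h = 2
```

In the toy run, the goal r is supported by a2 at the last level, whose precondition q is in turn supported by a1, giving a relaxed plan of length 2.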
We replace the sub-goal fact by the facts that serve as preconditions and implicit preconditions of the chosen action. We can backtrack in the graph to choose another action adding the sub-goal if a selected action does not lead to a solution. Once all goals of the final level have been replaced by the sub-goals of the previous level, this previous level becomes the final level and the sub-goals become the new goals. This process is repeated until reaching the first fact level. The resulting heuristic is considered as the distance to the goal, and it is calculated by counting the number of actions of the relaxed plan:

h = Σ_{i=0}^{final_level-1} |a_i|, where [a_0, a_1, …, a_final_level-1] is the relaxed plan.

Note that during the backward plan extraction we make no difference between numeric and propositional facts, as all facts, even those that are results of applied functions, are accessed via action edges stored in the planning graph structure.

6.2 Heuristic planner running over the effects of applied functions

The main search algorithm that we use to find a plan in the original problem is a variation of hill-climbing search guided by the heuristic h detailed in section-6.1. The heuristic is calculated for each state s in the search space. At each step we select the child having the lowest heuristic value among the children of the same parent to be the next state, and so on until we reach a state with a heuristic equal to zero. If at some step algorithm-2 does not find a relaxed plan that leads a state s to the goal state, then the heuristic h is considered infinite at this step. Each time a state is selected (except for the initial state), the action which leads to this selected state is added to the plan list.
The first variation of hill-climbing is that when the child with the lowest heuristic is selected, if its heuristic value is greater than that of its parent state, the child can still be accepted as the next state, as long as the total number of accepted children exceeding their parent's heuristic value remains below a given threshold. A second variation concerns plateaus: a number of consecutive plateau steps (where the calculated heuristic value stays invariant) is accepted, up to a fixed constant. After that, a worst-case scenario is launched: we select the child that has the lowest heuristic greater than the current (invariant) state heuristic, and we continue the search from this child state, trying to escape the plateau. This scenario can itself be repeated up to a fixed threshold. In all the above cases, if hill-climbing exceeds one of the quoted thresholds, or when the search fails to find a plan, hill-climbing is considered unable to find a solution and an A* search begins. As in HSP and FF, we have added to the hill-climbing search and to the A* search a list of visited states, to avoid calculating a heuristic more than once for the same state. At each step, a generated state is checked against the list of visited states and cut off if already present, to avoid cycles. According to our tests, most of the problems can be solved with the hill-climbing algorithm. Only some tested domain problems (like the ferry with capacity domain) failed early with hill-climbing search; the solution was then found with the A* search.

7. Empirical results

We have implemented all the above algorithms as prototypes in the Java language.
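The bounded tolerance for worsening and plateau steps described above can be sketched as follows (a simplified Python rendering; the threshold values, names, and exact counter-reset policy are our assumptions, and the A* fallback is left to the caller):

```python
def hill_climb(start, successors, h, max_worse=3, max_plateau=5):
    """Hill-climbing variant: tolerate a bounded number of worsening and
    plateau steps; return None to signal the caller to fall back to A*."""
    state, plan = start, []
    worse = plateau = 0
    visited = {start}                     # cut off repeated states (cycles)
    while h(state) > 0:
        children = [(h(s), s, a) for s, a in successors(state)
                    if s not in visited]
        if not children:
            return None
        hv, child, act = min(children)    # child with the lowest heuristic
        if hv == float("inf"):
            return None                   # no relaxed plan from any child
        if hv > h(state):
            worse += 1
        elif hv == h(state):
            plateau += 1
        else:
            worse = plateau = 0           # progress: reset both counters
        if worse > max_worse or plateau > max_plateau:
            return None                   # give up; an A* search would begin
        visited.add(child)
        state, plan = child, plan + [act]
    return plan

# Toy problem: integer states, goal 0, heuristic h = |state|
route = hill_climb(3, lambda s: [(s - 1, "dec"), (s + 1, "inc")], abs)
print(route)  # ['dec', 'dec', 'dec']
```

On the toy problem the heuristic decreases at every step, so no threshold is ever hit; returning None rather than raising keeps the fallback decision with the calling planner.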
We have run these algorithms over several mainly numeric domains that require non-classical handling, such as the water jugs domain, the manufacturing domain, the army deployment domain, and the numeric ferry domain, as introduced in (Zalaket & Camilleri, 2004a). We note that some of these domains, such as manufacturing and army deployment, are usually expressed and solved with scheduling or with mathematical approaches. Our tests can be summarized in three phases. In the first phase, we started by running a blind forward planning algorithm that supports the execution of external functions. Our objective at this phase was only to study the feasibility and effectiveness of integrating such functions, written in a host programming language, into planning in order to accomplish some complex computation. In the second phase, we ran the adapted Graphplan algorithm, with which we obtained optimal plans for all the problems, but which was not able to solve large problems. In the third phase, we ran the heuristic planner over all the above-cited domains. Larger problems are solved with this planner, but the generated plans were not always optimal, as they were in the second phase. We have made only minor efforts to optimize our implementation in any of the above phases. Even so, we can conclude that the heuristic algorithm is the most promising one despite its non-optimal plans. We think that additional constraints can be added to this algorithm to let it generate plans of better quality. We also remark that some planning domains can be modelled numerically instead of symbolically to obtain far better results. For example, in the numeric ferry domain the heuristic algorithm was able to solve problems that move hundreds of cars, instead of the tens handled by classical propositional planners.

8.
Conclusion

In this chapter, we have presented multiple extensions of classical planning algorithms that allow them to solve more realistic problems. Such problems can contain any type of knowledge and can require complex handling that is not yet supported by existing planning algorithms. Some complicated problems can be expressed with the recent extensions of the PDDL language, but a significant gap remains, mainly because of the limited capabilities of current planners. We have suggested and tested the integration into planning of external functions written in host programming languages. These functions are useful for handling complicated tasks that require complex numeric computation and conditional behaviour. We have extended the Graphplan algorithm to support the execution of these functions. In this extension of Graphplan, we have suggested instantiating the numeric variables of actions incrementally during the expansion of the planning graph. This can restrict the number of ground actions by using for numeric instantiation only the problem instances of the numeric variables, instead of all the instances of the numeric variable domain, which can be huge or even infinite. We have also proposed a new approach to relaxing the numeric effects of actions, by ignoring the effects that move the values of numeric variables away from their goal values. We have then used this relaxation method to extract a heuristic, which we later used in a heuristic planner. According to our tests on domains like manufacturing, we conclude that scheduling problems can be fully integrated into AI planning and solved using our extensions.
As future work, we will attempt to test, and perhaps customize, our algorithms on some domains adapted from motion planning, in order to extend AI planning to also cover motion planning and other robotic problems currently solved using mathematical approaches.

9. References

Bacchus, F. & Ady, M. (2001). Planning with resources and concurrency: a forward chaining approach. Proceedings of the 17th International Joint Conference on Artificial Intelligence (IJCAI-01), August 2001, Seattle, Washington, USA
Bak, M.; Poulsen, N. & Ravn, O. (2000). Path following mobile robot in the presence of velocity constraints. Technical report, Technical University of Denmark, 2000, Kongens Lyngby, Denmark
Blum, A. & Furst, M. (1995). Fast planning through planning graph analysis. Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI-95), pages 1636–1642, August 1995, Montreal, Quebec, Canada
Bonet, B. & Geffner, H. (2001). Planning as heuristic search. Artificial Intelligence, 129:5–33, 2001
Bonet, B.; Loerincs, G. & Geffner, H. (1997). A robust and fast action selection mechanism for planning. Proceedings of the Fourteenth National Conference on Artificial Intelligence (AAAI-97), pages 714–719, July 1997, Providence, Rhode Island, USA
Bresina, L. J.; Dearden, R.; Meuleau, N.; Smith, E. D. & Washington, R. (2002). Planning under continuous time and resource uncertainty: A challenge for AI. Proceedings of the AIPS Workshop on Planning for Temporal Domains, pages 91–97, April 2002, Toulouse, France
Bylander, T. (1994). The computational complexity of propositional STRIPS planning. Artificial Intelligence, 69:165–204, 1994
Cayrol, M.; Régnier, P. & Vidal, V. (2000). New results about LCGP, a least committed Graphplan.
Proceedings of the 5th International Conference on Artificial Intelligence Planning and Scheduling (AIPS-2000), pages 273–282, 2000, Breckenridge, CO, USA
Do, B. & Kambhampati, S. (2000). Solving planning graph by compiling it into a CSP. Proceedings of the 5th International Conference on Artificial Intelligence Planning and Scheduling (AIPS-2000), 2000, Breckenridge, CO, USA
Do, B. & Kambhampati, S. (2001). Sapa: A domain-independent heuristic metric temporal planner. Proceedings of the 6th European Conference on Planning (ECP 2001), September 2001, Toledo, Spain
Edelkamp, S. (2002). Mixed propositional and numerical planning in the model checking integrated planning system. Proceedings of the AIPS Workshop on Planning for Temporal Domains, April 2002, Toulouse, France
Fikes, R. E. & Nilsson, N. (1971). STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2:189–208, 1971
Fox, M. & Long, D. (2002). PDDL2.1: An extension to PDDL for expressing temporal planning domains. Proceedings of the 7th International Conference on Artificial Intelligence Planning and Scheduling (AIPS-2002), April 2002, Toulouse, France
Geffner, H. (1999). Functional STRIPS: A more flexible language for planning and problem solving. Logic-based AI Workshop, June 1999, Washington D.C.
Gerevini, A. & Long, D. (2005). Plan constraints and preferences for PDDL3. Technical Report R.T. 2005-08-07, Dept. of Electronics for Automation, University of Brescia, Brescia, Italy, 2005
Ghallab, M.; Howe, A.; Knoblock, C.; McDermott, D.; Ram, A.; Veloso, M.; Weld, D. & Wilkins, D. (1998). PDDL: The planning domain definition language, version 1.2. Technical Report CVC TR-98-003/DCS TR-1165, Yale Center for Computational Vision and Control, October 1998, Yale, USA
Hoffmann, J. (2001). FF: The fast-forward planning system. AI Magazine, 22:57–62, 2001
Hoffmann, J. (2002).
Extending FF to numerical state variables. Proceedings of the 15th European Conference on Artificial Intelligence (ECAI-2002), pages 571–575, July 2002, Lyon, France
Hoffmann, J.; Kautz, H.; Gomes, C. & Selman, B. (2007). SAT encodings of state-space reachability problems in numeric domains. Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI-07), pages 1918–1923, January 2007, Hyderabad, India
McDermott, D. (1996). A heuristic estimator for means-ends analysis in planning. Proceedings of the 3rd International Conference on Artificial Intelligence Planning Systems, May 1996, Edinburgh, UK
Refanidis, I. & Vlahavas, I. (2001). The GRT planning system: Backward heuristic construction in forward state-space planning. Journal of Artificial Intelligence Research, 15:115–161, 2001
Samson, C. & Micaelli, A. (1993). Trajectory tracking for unicycle-type and two-steering-wheels mobile robots. Technical report, Institut National de Recherche en Informatique et en Automatique, 1993, Sophia-Antipolis, France
Schmid, U.; Müller, M. & Wysotzki, F. (2002). Integrating function application in state-based planning. Proceedings of the 25th Annual German Conference on AI: Advances in Artificial Intelligence, pages 144–162, September 2002, Aachen, Germany
Smith, D. & Weld, D. (1999). Temporal planning with mutual exclusion reasoning. Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI-99), August 1999, Stockholm, Sweden
Zalaket, J. & Camilleri, G. (2004a). FHP: Functional heuristic planning. Proceedings of the 8th International Conference on Knowledge-Based Intelligent Information and Engineering Systems (KES 2004), pages 9–16, September 2004, Wellington, New Zealand
Zalaket, J. & Camilleri, G. (2004b). NGP: Numerical graph planning. Proceedings of the 16th European Conference on Artificial Intelligence (ECAI 2004), pages 1115–1116, August 2004, Valencia, Spain