
Planning and Decision Making for Aerial Robots




DOCUMENT INFORMATION

Basic information

Format
Number of pages: 420
File size: 5.65 MB

Content

Yasmina Bestaoui Sebbane
Planning and Decision Making for Aerial Robots
Intelligent Systems, Control and Automation: Science and Engineering, Volume 71

Series editor: S. G. Tzafestas, Zografou, Athens, Greece
Editorial Advisory Board: P. Antsaklis, Notre Dame, IN, USA; P. Borne, Lille, France; D. G. Caldwell, Salford, UK; C. S. Chen, Akron, OH, USA; T. Fukuda, Nagoya, Japan; S. Monaco, Rome, Italy; G. Schmidt, Munich, Germany; S. G. Tzafestas, Athens, Greece; F. Harashima, Tokyo, Japan; N. K. Sinha, Hamilton, ON, Canada; D. Tabak, Fairfax, VA, USA; K. Valavanis, Lafayette, LA, USA
For further volumes: http://www.springer.com/series/6259

Yasmina Bestaoui Sebbane, UFR Sciences and Technologies, Université d'Evry Val d'Essonne, Evry, Essonne, France

ISSN 2213-8986, ISSN 2213-8994 (electronic)
ISBN 978-3-319-03706-6, ISBN 978-3-319-03707-3 (eBook)
DOI 10.1007/978-3-319-03707-3
Springer Cham Heidelberg New York Dordrecht London
Library of Congress Control Number: 2013956331
© Springer International Publishing Switzerland 2014

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein. Printed on acid-free paper. Springer is part of Springer Science+Business Media (www.springer.com)

To my family

Preface

This book provides an introduction into the emerging field of planning and decision making of aerial robots. An aerial robot is the ultimate Unmanned Aerial Vehicle: an aircraft endowed with built-in intelligence, no direct human control, and able to perform a specific task. It must be able to fly within a partially structured environment, to react and adapt to changing environmental conditions, and to accommodate the uncertainty that exists in the physical world.
An aerial robot can be termed as a physical agent that exists and flies in the real 3D world, can sense its environment, and act on it to achieve some goals. So throughout this book, an aerial robot will also be termed as an agent.

Fundamental problems in aerial robotics are the tasks of moving through space, sensing about space, and reasoning about space. Reasoning in the case of a complex environment represents a difficult problem. The issues specific to reasoning about space are planning and decision making. Planning deals with the trajectory algorithmic development based on the available information. Decision making determines the most important requirements and evaluates possible environment uncertainties. The issues specific to planning and decision making of aerial robots in their environment are examined in this book, leading to the contents of this book: motion planning, deterministic decision making, decision making under uncertainty, and finally multi-robot planning. A variety of techniques are presented in this book, and some case studies are developed. The topics considered in this book are multidisciplinary and lie at the intersection of Robotics, Control Theory, Operational Research, and Artificial Intelligence.

Paris, France
Yasmina Bestaoui Sebbane

Contents

1 Introduction
1.1 Motivation
1.2 Aerial Robots
1.3 Aerial Robotics and Artificial Intelligence
1.4 Preliminaries
1.4.1 Probability Fundamentals
1.4.2 Uncertainty Fundamentals
1.4.3 Nonlinear Control Fundamentals
1.4.4 Graph Theory Fundamentals
1.4.5 Linear Temporal Logic Fundamentals
1.4.6 Rough Sets
1.5 Modeling
1.5.1 Modeling of the Environment
1.5.2 Modeling of the Aerial Robot
1.5.3 Aerial Robots in Winds
1.6 Conflict Detection
1.6.1 Deterministic Approach
1.6.2 Probabilistic Approach
1.7 Conclusions
References

2 Motion Planning
2.1 Introduction
2.2 Controllability
2.3 Trajectory Planning
2.3.1 Trim Trajectory Generation
2.3.2 Leg-Based Guidance
2.3.3 Dubins and Zermelo Problems
2.3.4 Optimal Control Based Approaches
2.3.5 Parametric Curves
2.4 Nonholonomic Motion Planning
2.4.1 Differential Flatness
2.4.2 Nilpotence
2.4.3 Constrained Motion Planning
2.4.4 Motion Planning for Highly Congested Spaces
2.5 Obstacle/Collision Avoidance
2.5.1 Problem Formulation
2.5.2 Discrete Search Methods
2.5.3 Continuous Search Methods
2.6 Replanning Approaches
2.6.1 Incremental Replanning
2.6.2 Anytime Algorithms
2.7 Conclusions
References

3 Deterministic Decision Making
3.1 Introduction
3.2 Symbolic Planning
3.2.1 Hybrid Automaton
3.2.2 Temporal Logic Motion Planning
3.3 Computational Intelligence
3.3.1 Neural Networks
3.3.2 Evolution Algorithms
3.3.3 Decision Tables
3.3.4 Fuzzy Systems
3.4 Arc Routing Methods
3.4.1 Traveling Salesman Problem
3.4.2 Dubins Traveling Salesman Problem
3.4.3 Chinese Postman Problem
3.4.4 Rural Postman Problem
3.5 Case Studies
3.5.1 Surveillance Mission
3.5.2 Evolutionary Planner
3.5.3 Bridge Monitoring
3.5.4 Soaring Flight for Fixed Wing Aerial Robot
3.6 Conclusions
References

4 Decision Making Under Uncertainty
4.1 Introduction
4.2 Generic Framework for Dynamic Decisions
4.2.1 Problem Formulation
4.2.2 Utility Theory
4.2.3 Decision Trees and Path Utility
4.2.4 Bayesian Inference and Bayes Nets
4.2.5 Influence Diagrams
4.3 Markov Approach
4.3.1 Markov Models
4.3.2 Markov Decision Process Presentation
4.3.3 Partially Observable Markov Decision Process
4.3.4 Bayesian Connection with Partially Observable Markov Decision Process
4.3.5 Learning Processes
4.3.6 Monte Carlo Value Iteration
4.3.7 Markov Logic
4.3.8 Belief Space Approach
4.4 Stochastic Optimal Control Theory
4.4.1 Bayesian Connection with State Space Models
4.4.2 Learning to Control
4.4.3 Chance Constrained Algorithms
4.4.4 Probabilistic Traveling Salesman Problem
4.4.5 Type-2 Fuzzy Logic
4.5 Motion Grammar
4.5.1 Description of the Approach
4.5.2 Grammars for Aerial Robots
4.5.3 Temporal Logic Specifications
4.6 Case Studies
4.6.1 Robust Orienteering Problem
4.6.2 Exploration of an Uncertain Terrain
4.6.3 Rescue Path Planning in Uncertain Adversarial Environment
4.6.4 Receding Horizon Path Planning with Temporal Logic Constraints
4.7 Real-Time Applications
4.8 Conclusions
References

5 Multi Aerial Robot Planning
5.1 Introduction
5.2 Team Approach
5.2.1 Cooperation
5.2.2 Cascade-Type Guidance Law
5.2.3 Consensus Approach
5.2.4 Flocking Behavior
5.2.5 Connectivity and Convergence of Formations
5.3 Deterministic Decision Making
5.3.1 Distributed Receding Horizon Control
5.3.2 Conflict Resolution
5.3.3 Artificial Potential
5.3.4 Symbolic Planning
5.4 Association with Limited Communications
5.4.1 Introduction
5.4.2 Problem Formulation
5.4.3 Genetic Algorithms
5.4.4 Games Theory Reasoning
5.5 Multi-Agent Decision Making Under Uncertainty
5.5.1 Decentralized Team Decision Problem
5.5.2 Algorithms for Optimal Planning
5.5.3 Task Allocation: Optimal Assignment
5.5.4 Distributed Chance Constrained Task Allocation
5.6 Case Studies
5.6.1 Reconnaissance Mission
5.6.2 Expanding Grid Coverage
5.6.3 Optimization of Perimeter Patrol Operations
5.6.4 Stochastic Strategies for Surveillance
5.7 Conclusions
References

6 General Conclusions

Index

Excerpt from Chapter 5, Sect. 5.6.4 (Stochastic Strategies for Surveillance):

The steady-state, invariant distribution satisfies

$$p = P\,p \qquad (5.159)$$

The components of this steady-state distribution can be found explicitly as solutions of recursive equations. While this approach can be taken for simple environments and a single aerial robot, it becomes cumbersome as the environments become even slightly more complex and there are more sentries performing the surveillance. An empirical measure of non-uniformity can also be defined.

Definition 5.29 (Non-Uniformity)

$$NU(k) = \frac{1}{n^2}\sum_i \left(\tilde{x}_i - \tilde{\zeta}_i\right)^2, \qquad \tilde{x}_i = x(i)\,\frac{n}{k\,a}, \qquad \tilde{\zeta}_i = \zeta_i\, n \qquad (5.160)$$

$$\lim_{k\to\infty} NU(k) = 0 \qquad (5.161)$$

where $\tilde{x}_i$ is the normalized history of visitation frequency for state $i$, $x(i)$ is the visitation history, $\tilde{\zeta}_i$ is the normalized invariant distribution for state $i$, $n$ is the number of states, $a$ is the number of agents, and $k$ is the number of steps each agent has taken.

This non-uniformity measure essentially quantifies how quickly the surveillance team covers the environment. Mathematically, it is the mean distance of the team of aerial robots from the composite invariant distribution. Normalization of the visitation history and of the invariant distribution is important because the measure must be applicable to both small and large environments. As $k$ increases, the normalized visitation history approaches the normalized steady-state distribution.
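To make Eqs. (5.159)-(5.161) concrete, the sketch below computes the invariant distribution of a small random walk as the eigenvector associated with eigenvalue 1 and evaluates the non-uniformity measure from simulated visitation counts. It is a minimal Python/NumPy illustration based on the reconstruction above; the example graph (a 5-node ring patrolled by two sentries) and the function names are illustrative choices, not taken from the book.

```python
import numpy as np

def invariant_distribution(P):
    """Solve p = P p (Eq. 5.159) for a column-stochastic transition matrix P."""
    eigvals, eigvecs = np.linalg.eig(P)
    p = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    return p / p.sum()

def non_uniformity(visits, zeta, k, a):
    """Non-uniformity NU(k) of Definition 5.29, as reconstructed above.

    visits: visitation counts x(i) accumulated by the whole team,
    zeta: invariant distribution, k: steps per agent, a: number of agents.
    """
    n = len(zeta)
    x_tilde = visits * n / (k * a)   # normalized visitation history
    zeta_tilde = zeta * n            # normalized invariant distribution
    return np.sum((x_tilde - zeta_tilde) ** 2) / n ** 2

# Example: two sentries performing an unbiased random walk on a 5-node ring.
n, a = 5, 2
P = np.zeros((n, n))
for i in range(n):
    P[(i - 1) % n, i] = 0.5   # column i holds the move probabilities from node i
    P[(i + 1) % n, i] = 0.5
zeta = invariant_distribution(P)      # uniform for this symmetric walk

rng = np.random.default_rng(0)
visits = np.zeros(n)
states = rng.integers(0, n, size=a)
for k in range(1, 2001):
    for j in range(a):
        states[j] = (states[j] + rng.choice((-1, 1))) % n
        visits[states[j]] += 1
    if k % 500 == 0:
        print(f"k={k:4d}  NU(k)={non_uniformity(visits, zeta, k, a):.5f}")
```

Running the loop shows NU(k) shrinking toward zero as the team's visitation history approaches the composite invariant distribution, which is the behavior the definition is meant to capture.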
Multi Aerial Robot Investigations

A strategy for a 1D, n-node lattice is presented in this paragraph. By appropriate choice of parameters, it is possible to implement a probabilistic strategy in which the aerial robots disperse in the lattice domain as fast as possible. The turning parameters of this strategy are assigned by the following relation:

$$\partial_i = \begin{cases} 0.9 & \text{for } k \le \dfrac{a+1-i}{a+1}\,n \\ 0.5 & \text{otherwise} \end{cases} \qquad (5.162)$$

where $\partial_i$ is the probability that aerial robot $i$ turns right, $k$ is the number of steps, $a$ is the number of aerial robots, and $n$ is the number of nodes in the lattice. This strategy disperses the aerial robots along the graph before switching to equal turning probabilities. The aerial robots do not have a uniform steady-state distribution prior to switching to the equal turning probabilities. After the aerial robots switch to the equal turning probabilities, their steady-state distribution approaches the uniform distribution as the initial distributions are suppressed with time.

5.6.4.3 Complete Graphs

Complete graphs are considered, and the associated turning parameters that yield the fastest mixing and approach to the uniform steady-state distribution are investigated.

Theorem 5.12 For a complete graph with $n$ vertices, the probabilistic random walk having transition probability matrix

$$p_{ij} = \begin{cases} \dfrac{1}{n-1} & \text{if } i \ne j \\ 0 & \text{otherwise} \end{cases} \qquad (5.163)$$

has eigenvalue $1$ of multiplicity one and $-\dfrac{1}{n-1}$ of multiplicity $n-1$. The invariant distribution for this Markov chain is uniform, and the eigenvalue $-\dfrac{1}{n-1}$ is smaller in magnitude than the eigenvalue of second largest magnitude corresponding to any other set of transition probabilities.

In the general case, the structure of the problem can be considered as line segments on a bounded plane. A complete graph on its vertices is associated to each edge of rank ∗ in H(X, λ) representing the set of segments in the search environment. The edges of these complete graphs represent the possible choices of transitions from one segment to another. Any general graph can be decomposed into a system of interconnected complete subgraphs, or cliques. Cliques having two vertices are distinguished from those having more than two vertices: those having two vertices correspond to transitions in which the only choices available to the aerial robot are to move ahead or to move backward. The intersections of a general graph can be thought of as a complete graph where the number of nodes in the complete graph is equivalent to the number of choices an aerial robot can make. With no restrictions on the movement of an aerial robot, the number of nodes in the complete graph is equal to the number of edges incident to the intersection in the graph representation. A hybrid strategy is obtained by combining the strategies for both linear graphs and complete graphs. It provides uniform coverage of a general graph while achieving this coverage quickly, without a large sacrifice in the randomness of the behavior of the aerial robot. More details on this implementation can be found in [42].
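As a quick numerical check of Theorem 5.12 and of the contrast between complete graphs (intersections) and chains of two-vertex cliques (line segments), the following sketch builds the transition matrix of Eq. (5.163), inspects its spectrum and invariant distribution, and compares its mixing rate with an unbiased walk on a 1D ring lattice. This is an illustrative Python/NumPy sketch, not the implementation described in [42].

```python
import numpy as np

def complete_graph_walk(n):
    """Transition matrix of Eq. (5.163): hop to any other vertex with prob 1/(n-1)."""
    P = np.full((n, n), 1.0 / (n - 1))
    np.fill_diagonal(P, 0.0)
    return P

def ring_walk(n):
    """Unbiased walk on a 1D ring lattice: step left or right with probability 1/2."""
    P = np.zeros((n, n))
    for i in range(n):
        P[i, (i - 1) % n] = 0.5
        P[i, (i + 1) % n] = 0.5
    return P

def slem(P):
    """Second-largest eigenvalue magnitude; a smaller value means faster mixing."""
    return np.sort(np.abs(np.linalg.eigvals(P)))[::-1][1]

n = 5
Pc = complete_graph_walk(n)

# Spectrum: one eigenvalue 1 and n-1 eigenvalues equal to -1/(n-1), as in Theorem 5.12.
print("complete-graph eigenvalues:", np.round(np.sort(np.real(np.linalg.eigvals(Pc))), 3))

# Invariant distribution p = Pc p (Eq. 5.159): uniform, since Pc is symmetric.
w, v = np.linalg.eig(Pc)
p = np.real(v[:, np.argmin(np.abs(w - 1.0))])
print("invariant distribution    :", np.round(p / p.sum(), 3))

# Mixing comparison: the complete graph mixes much faster than the ring lattice,
# which is why intersections and line segments call for different turning rules.
print("SLEM complete graph:", round(slem(Pc), 3))
print("SLEM ring lattice  :", round(slem(ring_walk(n)), 3))
```

For n = 5 the complete-graph walk has second-largest eigenvalue magnitude 1/(n-1) = 0.25, while the ring walk's is about 0.81, illustrating why the hybrid strategy treats intersections and linear segments differently.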
5.7 Conclusions

The evolution of multi-agent systems has revolutionized the services that distributed computation can offer. In this chapter, the team approach is introduced, then deterministic decision making is presented, followed by information about association with limited communications. Then, decision making under uncertainty is analyzed. Finally, some case studies are discussed.

Some future challenges are the discovery of adaptation to new environments, cooperation and communication between agents, autonomous decision taking and resource management, and collaboration in the realization of activities with common objectives. In-depth research is still required in the areas of modeling, control, and guidance relevant to the coordination and control problem. The distributed nature of information processing, sensing, and actuation makes teams of vehicles a significant departure from the traditional centralized control system paradigm. An open question is the following: given a team of locally interacting robots and a high-level (global) specification over some environment, how can provably correct (local) control strategies be generated automatically? What global (expressive) specifications can be efficiently distributed? How should local interactions be modeled (for example, message passing versus synchronization on common events)?

References

1 Ahuja RK, Magnanti TL, Orlin JB (1993) Network flows Prentice-Hall, Englewood Cliffs 2 Alejo D, Diaz-Banez JM, Cobano JA, Perez-Lantero P, Ollero A (2013) The velocity assignment problem for conflict resolution with multiple UAV sharing airspace J Intell Robot Syst 69(1–4):331–346 doi:10.1007/s10846-012-9768-4 3 Alighanbari M, Bertuccelli LF, How JP (2006) A robust approach to the UAV task assignment problem In: IEEE conference on decision and control, San Diego, Ca, pp 5935–5940 4 Altshuler Y, Bruckstein AM (2011) Static and expanding grid coverage with ant robots: complexity results Theor Comput Sci 41:4661–4674 5 Aragues R, Montijano E, Sagues C (2010) Consistency data association in multi-robot systems with limited communications In: Matsuoka Y, Durrant-White H, Neira J (eds) Robotics science and systems The MIT Press, Cambridge, pp 97–104 6 Asmare E, Gopalan A, Sloman M, Dulay N, Lupu E (2012) Self-management framework for mobile autonomous systems J Network Syst Manag 20:244–275 7 Ayanian N, Kallem V, Kumar V (2011) Synthesis of feedback controllers for multiple aerial robots with geometric constraints In: IEEE/RSJ international conference on intelligent robots and systems, San Francisco, pp 3126–3131 8 Basilico N, Amigoni F (2011) Exploration strategies based on multi criteria decision making for searching environments in rescue operations Auton Robots 31:401–417 9 Belkhouche F, Vadhva S, Vaziri M (2011) Modeling and controlling 3D formations and flocking behavior of UAV In: IEEE information reuse and integration conference, pp 449–454, doi:10.1109/IRI.2011.6009590 10 Bennet D, McInnes C, Suzuki M, Uchiyama K (2011) Autonomous three-dimensional formation flight for a swarm of unmanned aerial vehicles AIAA J Guidance Control Dynamics 34:1899–1908 11 Cao Y, Ren W (2010) Multi vehicle coordination for double integrator dynamics under fixed undirected/directed interaction with a sampled data setting Int J Robust Nonlinear Control 20:987–1000 12 Cao Y, Yu W, Ren W, Chen G (2013) An overview of recent progress in the study of distributed multi-agent coordination IEEE Trans Industr Inf 9:427–438 13 Chavel I (ed) (1984) Eigenvalues in Riemannian geometry Academic Press, New York 14 Dimarogonas D, Loizon SJ, Kyriakopoulos K, Zavlanos M (2006) A feedback stabilization and collision avoidance scheme for multiple independent non point agents Automatica 42:229–243 15 Duan H, Zhang X, Wu J, Ma G (2009) Max-min adaptive ant colony optimization approach to multi UAV coordinated trajectory replanning in dynamic and uncertain environments J Bionic Eng 6:161–173 16 Edison E, Shima T (2011) Integrating task assignment and path optimization for cooperating UAV using genetic algorithms Comput Oper Res 38:340–356 17 Faied M, Mostafa A, Girard A (2009) Dynamic optimal control of multiple depot routing problem with metric temporal logic In: IEEE
American control conference, pp 3268–3273 18 Franchi A, Stegagno P, Oriolo G (2013) Decentralized multi-robot target encirclement in 3D space, arXiv preprint arXiv:1307.7170, 2013 - arxiv.org 19 Fraser C, Bertucelli L, Choi H, How J (2012) A hyperparameter consensus method for agreement under uncertainty Automatica 48:374–380 20 Gandhi R, Yang LG (2007) Examination of planning under uncertainty algorithms for cooperative UAV, AIAA Infotech@Aerospace, paper AIAA-2007-2817 21 Gattani A, Benhardsson B, Rantzer A (2012) Robust team decision theory IEEE Trans Autom Control 57:794–798 22 Gazi V, Fidan B (2007) Coordination and control of multi-agent dynamic systems: modes and apporaches in Swarm Robotics In: Sahin E (ed) LNCS, vol 4433 Springer, Heidelbreg 23 Geramifard A, Redding J, Joseph J, Roy N, How J (2012) Model estimation within planning and learning In: American control conference, Montreal, pp 793–799 24 Giardinu G, Kalman-Nagy T (2007) Genetic algorithms for multi agent space exploration AIAA Infotech@Aerospace conference, paper AIAA2007-2824 25 Goel A, Gruhn V (2008) A general vehicle routing problem Eur J Oper Res 191:650–660 26 Hantos P (2011) Systems engineering perspectives on technology, readiness assessment in software intensive system development AIAA J aircraft 48:738–748 27 Holzapfel F, Theil S (eds) (2011) Advances in aerospace guidance, navigation and control Springer, Berlin 28 Inigo-Blasco P, Diaz-del-Rio F, Romero M, Cargigas D, Vicente S (2012) Robotics software frameworks for multiagent robotic systems developement Robotics Auton Syst 60:803–821 29 Jorgensen, U, Skjetne, R (2012) Generating safe and equally long trajectories for multiple unmanned agents In: IEEE mediterranean conference on control and automation, pp 1566– 1572 30 Kamgarpour M, Dadok V, Tomlin c (2010) Trajectory generation for aircraft subject to dynamic weather uncertainty In: 49th IEEE conference on decision and control, Atlanta, pp 2063–2068 31 Karahan I, Koksalan M (2010) A territory defining multiobjective evolutionary algorithms and preference incorporation IEEE Trans Evol Comput 14:636–664 32 Karaman S, Frazzoli E (2008) Complex mission optimization for multiple UAV using linear temporal logic In: American control conference seattle, Wa, pp 2003–2009 33 Karaman S, Frazzoli E (2011) Linear Temporal logic vehicle routing with applications to multi-UAV mission planning, Int J Robust Nonlinear Control 21:1372–1395 34 Karimoddini A, Liu H, Chen B, Lee T (2011) Hybrid 3D formation control for unmanned helicopter Technical report, NUS-ACT-11-005 35 Kloetzer M, Belta C (2007) Temporal logic planning and control of robotic swarms by hierarchical abstraction IEEE Trans Robotics 23:320–330 36 Kon Kang B, Kim KE (2012) Exploiting symmetries for single and multi-agent partially observable stochastic domains Artif Intell 182:32–57 37 Krebsbach K (2009) Deliberative scheduling using GSMDP in stochastic asynchronous domains Int J Approximate Reasoning 50:1347–1359 38 Kulkarani A, Tai K (2010) Probability collectives: a multi-agent approach for solving combinatorial optimization problems Appl Soft Comput 37:759–771 39 Lemaitre C, Reyes CA, Gonzalez JA (2004) Advances in artificial intelligence Springer, Berlin 40 Liu L, Shell DA (2010) Assessing optimal assignment under uncertainty In: Matsuoka Y, Durrant-White H, Neira J (eds) Robotics science and systems The MIT Press, Cambrisdge, pp 121–128 41 Liu J, Wu J (2001) Multi-agent robotic system, CRC Press, Florida 42 Low CB (2012) A rapid incremental motion 
planner for flexible formation control of fixed wing UAV In: IEEE conference on decision and control, pp 2427–2432 394 Multi Aerial Robot Planning 43 Lyons D, Calliess JP, Hanebeck U (2011) Chance constrained model predictive control for multi-agent systems arXiv preprint arXiv:1104.5384 44 Margellos K, Lygeros J (2011) Hamilton-Jacobi formulation for reach-avoid differential games IEEE Trans Autom Control 56:1849–1861 45 Marier JS, Besse C, Chaib-Draa B (2009) A Markov model for multiagent patrolling in continous time ICONIP, vol 2, Springer, pp 648–656 46 Martin P, de la Croix, JP, Egerstedt M (2008) A motion description language for networked systems, In: 47th IEEE conference on decision and control, Mexico, pp 558–563 47 Martin P, Egerstedt M (2008) Optimal Timing control of interconnected, switched systems with applications to robotic marionettes In: 9th international workshop on discrete event systems, Goteborg, Sweden, pp 156–161 48 Marvel J (2013) Performance metrics of speed and separation monitoring in shared workspaces IEEE Trans Autom Sci Eng 10:405–414 49 Mesbahi M (2004) On state-dependent dynamic Graphs and their controllability properties In: IEEE conference on decision and control, Bahamas, pp 2473–2478 50 Mesbahi M, Egerstedt M (2010) Graph Theoretic methods in multiagent networks, Princeton series in applied mathematics 51 Moon J, Oh E, Shin DH (2013) An integral framework of task assignment and path planning for multiple UAV in dynamic environments J Intell Robots Syst 70:303–313 52 Sathyaraj BM, Jain LC, Fuin A, Drake S (2008) Multiple UAV path planning algorithms: a comparative study Fuzzy Optim Decis Making 7(3):257–267 53 No TS, Kim Y, Takh MJ, Jeon GE (2011) Cascade type guidance law design for multiple UAV formation keeping Aerosp Sci Technol 15:431–439 54 Oberlin P, Rathinam S, Darbha S (2009) A transformation for a multiple depot, multiple traveling salesman problem In: American control conference, pp 2636–2641 55 Ono M, Williams BC (2010) Decentralized chance constrained finite horizon optimal control for multi-agent systems In: 49th IEEE control and decision conference, Atlanta, Ga, pp 138– 145 56 Parlangeli G, Notarstefano G (2012) On the reachability and observability of path and cycle graphs IEEE Trans Autom Control 57:743–748 57 Pavone M, Frazzoli E, Bullo F (2011) Adaptive and distributive algorithms for Vehicle routing in a stochastic and dynamic environment IEEE Trans Autom Control 56:1259–1274 58 Peng R, Wang H, Wang Z, Lin Y (2010) Decision making of aircraft optimum configuration utilizing multi dimensional game theory Chinese J Aeronaut 23:194–197 59 Ponda S, Johnson L, How J (2012) Distributed chance constrained task allocation for autonomous multi-agent teams In: Proceedings of the 2012 American control conference, Montreal, Canada, pp 4528–4533 60 Rabbath CA, Lechevin N (2011) Safety and reliability in cooperating unmanned aerial systems World Scientific, Singapore 61 Rathinam S, Sengupta R, Darbha S (2007) A resource allocation algorithm for multivehicle system with nonholonomic constraints IEEE Trans Autom Sci Eng 4:4027–4032 62 Rosaci D, Sarne M, Garruzzo S (2012) Integrating trust measuring in multiagent sytems Int J Intell Syst 27:1–15 63 Saget S, Legras F, Coppin G (2008) Cooperative interface for a swarm of UAV, - arXiv preprint arXiv:0811.0335 - arxiv.org 64 Semsar-Kazerooni E, Khorasani K (2009) Multi-agent team cooperation: a game theory approach Automatica 45:2205–2213 65 Sennott LI (2009) Stochastic dynamic programming and the control 
of queuing systems, Wiley, New York 66 Seuken S, Zilberstein S (2008) Formal models and algorithms for decentralized decision making under uncertainty Autonom Agent Multi-Agent Syst doi:10.1007/s10458-007-9026-5 67 Shamma JS (2007) Cooperative control Of distributed multi-agent system Wiley, UK 68 Shanmugavel M, Tsourdos A, Zbikowski R, White BA, Rabbath CA, Lechevin N (2006) A solution to simultaneous arrival of multiple UAV using Pythagorean hodograph curves, In: American control conference, Minneapolis, MN, pp 2813–2818 References 395 69 Shi G, Hong Y, Johansson K (2012) Connectivity and set tracking of multi-agent systems guided by multiple moving leaders IEEE Trans Autom Control 57:663–676 70 Shima T, Rasmussen S (2009) UAV cooperative decision control: challenges and practical approaches, SIAM, Philadelphia, PA 71 Sirigineedi G, Tsourdos A, Zbikowski R, White B (2010) Modeling and verification of multiple UAV mission using SVM In: Workshop on formal methods for aerospace, pp 22–33 72 Stachura M, Frew GW (2011) Cooperative target localization with a communication awareunmanned aircraft system AIAA J Guidance Control Dynamics 34:1352–1362 73 Surynek P (2010) An optimization variant of multi-robot path planning is intractable In: 24th AAAI conference on artificial intelligence 74 Turpin M, Michael N, Kumar V (2012) Decentralized formation control with variable shapes for aerial robots In: IEEE international conference on robotics and automation, Saint Paul, pp 23–30 75 Turra D, Pollini L, Innocenti M (2004) Fast unmanned vehicles task allocation with moving targets In: 43rd IEEE conference on decision and control, pp 4280–4285 76 Twu PY, Martin P, Egerstedtd M (2012) Graph process specifications for hybrid networked systems Discrete Event Dyn Syst 22:541–577 77 Ulusoy, A Smith SL, Ding XC, Belta C, Rus D (2011) Optimal multi-robot path planning with temporal logic constraints In: IROS IEEE/RSJ international conference on intelligent robots and systems, 3087–3092 78 Ulusoy A, Smith SL, Ding XC, Belta C (2012) Robust multi-robot optimal path planning with temporal logic constraints In: IEEE international conference on robotics and automation 79 Vazirani V (2003) Approximation algorithms Springer verlag, New York 80 Virtanen K, Hamalainen RP, Mattika V (2006) Team optimal signaling strategies in Air Combat IEEE Trans Syst Man Cybern 36:643–660 81 Wen G, Duan Z, Yu W, Chen G (2012) Consensus of multi-agent systems with nonlinear dynamics and sampled data information: a delayed input approach Int J Robust Nonlinear Control 23:602–619 82 Wu F, Zilberstein S, Chen X (2011) On line planning for multi-agent systems with bounded communication Artif Intell 175:487–511 83 Yadlapelli S, Malik W, Darbha M (2007) A lagrangian based algorithm for a multiple depot, multiple traveling salesmen In: American control conference, pp 4027–4032 84 Zhang H, Zhai C, Chen Z (2011) A general alignment repulsion algorithm for flocking of multiagent systems IEEE Trans Autom Control 56:430–435 85 Zhi-Wei H, Jia-hong L, Ling C, Bing W (2012) A hierarchical architecture for formation control of multi-UAV Procedia Eng 29:3846–3851 Chapter General Conclusions In this book, some formalisms capable of dealing with a variety of reasoning tasks that require an explicit representation of the environment, were presented Intelligent aerial robots that interact with their environment by perceiving, acting or communicating often face a challenge in how to bring these different concepts together One of the main reasons for this 
challenge is the fact that the core concepts in decision, action, perception, and communication are typically studied by different communities: the control, robotics, and artificial intelligence communities, among others, without much interchange between them. As planning and decision making lie at the core of these communities, they can act as a unifying factor in bringing the communities closer together. The goal is to show how formalisms developed in different communities can be applied in a multidisciplinary context. Research in aerial robots is an innovative and ongoing area of research. The implementation of aerial robots is envisioned for a wide variety of missions such as search and rescue, emergency response and firefighting, and hurricane management. Applications of aerial robots are expanding at a fast rate and are expected to operate in less controllable and harder-to-model domains. Certain requirements for the design and simulation of concepts must be set. Accurate models of aerial robots are necessary for the design of controllers and the evaluation of performance. Autonomy is spread over several levels, which are characterized by factors including the complexity of the mission, environmental difficulties, and the level of human-machine interaction necessary to accomplish the mission. Robot intelligence is a property of complex robot systems which interact with real-world environments (including the robots themselves). Learning and adaptation become essential to deploy aerial robots that continuously interact with the environment, acquire new data during operation, and use them to improve their performance by developing new skills or improving and adapting their models. How should a robot acquire and use this stream of data? How can it close the action-perception loop to efficiently learn models and acquire skills?
Answers will provide ways for the aerial robot to choose better data to learn, reducing the time and energy used while at the same time improving generalization capabilities Y Bestaoui Sebbane, Planning and Decision Making for Aerial Robots, 397 Intelligent Systems, Control and Automation: Science and Engineering 71, DOI: 10.1007/978-3-319-03707-3_6, © Springer International Publishing Switzerland 2014 Index Symbols μ-calculus, 129 θ ∗ algorithm, 123 3D problem, 96 A A* Algorithm, 122, 123, 131, 138, 144, 153, 365 Accessibility distribution, 19, 62 Accointance, 321 Action space and rewards, 261 ADA* algorithm, 161 Adaptive control of a Markov decision process, 299 Adjacency matrix, 21 Adversarial Markov decision process, 292 Adversaries, 259 AEMS, 386, 387 Agenda, 321 Agility principles, 293 Algorithm, 47, 114, 118–122, 124, 126, 127, 130–137, 139, 149, 150, 152, 154, 156, 158, 160, 163, 175, 187, 188, 192, 194, 196, 198, 201–206, 214, 220, 223, 227, 264, 267, 281, 352, 369–372, 374, 379, 380, 382 Alignment, 322, 330 Alphabet, 287 ANA* algorithm, 160, 162 Ant colony optimization algorithm, 196 Anytime algorithm, 157, 158, 160 Anytime dynamic A* algorithm, 160 Anytime error minimization search, 385 Anytime heuristic algorithm, 158 Anytime repairing A* algorithm, 158 Anytime window A* algorithm, 159 Approximation, 353 ARA* algorithm, 157, 159 Architecture, 308–310 Artificial ant colony, 196 Artificial intelligence, 172, 181 Artificial potential field, 214, 342, 343 Artificial potential methods, 140 Artificial systems, 173 Artificial systems algorithm, 187 Association set, 351 Asymptotical optimality, 129 Atmospheric boundary layer, 44 Atom, 25 Atomic proposition, 25 Automata algorithm, 175 Automaton, 178 B Back-propagation, 183, 184 Backup, 267 Backup a policy graph at a belief n with N samples, 267, 268 Backus-Naur, 287 Backward induction and dynamic consistency, 252 Bad bracket, 20 Basic legs, 68 Bayes, 10, 13, 248, 252–254, 264, 268, 276, 357, 358 Belief, 17, 262, 273, 320 Belief function, 16 Bell membership, 16 Bellman, 260, 300 Bernoulli, 384 Bernstein-Bezier, 93 Bessel, 261 Y Bestaoui Sebbane, Planning and Decision Making for Aerial Robots, Intelligent Systems, Control and Automation: Science and Engineering 71, DOI: 10.1007/978-3-319-03707-3, © Springer International Publishing Switzerland 2014 399 400 Bifurcating steering potential, 343 Binormal vector, 42 Bounded bifurcating potential field, 343 Box-uncertainty, 294 Branch and bound algorithm, 120 Branch global leader, local leader and local follower, 324 Breadth-first algorithm(BFA), 117, 118, 352 Brushfire algorithm, 127 Buchi, 306 Buchi acceptance, 27 Buchi automaton, 27 Budget uncertainty, 295 Bug algorithm, 139 C Campbell Baker Hausdorff Dynkin , 104 Canonical motion planning, 112 Capability, 320 Capacitated chinese postman problem, 202 Capacitated probabilistic traveling salesman problem, 282 Centralized, 322 Certainty equivalence principle, 300 Chance algorithm, 278 Chance constrained path planning problem, 279 Chance constrained task allocation, 375 Chinese postman problem algorithm, 202 Christofides, 281 CLARATY, 310 Classical probability, 13 Clique, 23, 269 Cluster algorithm, 379, 382 Clustering algorithm, 282 Cognitive systems, 173 Cohesion, 323, 330 Collaborative agent, 319 Collective observability, 363 Collision avoidance, 46 Collision cone approach, 48 Collision test, 134 Common objective, 320 Communication cost, 323 Communication graph, 333 Communication models, 359 Completeness, 114 Completion, 365 Complex space 
planning approaches, 128 Computational intelligence, 182 Concrete organizational structure, 321 Configuration, 65 Index Configuration space, 41 Conflict-free assignment, 374 Conflictive set, 351 Connectivity, 22, 330 Consensus-based bundle algorithm, 377 Consistency, 151 Constrained motion planning, 107 Contex-free grammars, 284 Continuous events, 10 Continuous probability, 10 Control nodes, 329 Control policy, 259, 290 Control quanta, 178 Controllability, 19 Controllability Lie algebra, 19 Controller, 286 Cooperating aerial robot, 320 Cooperation, 319 Cooperation aerial robot model, 321 Cooperation model, 321 Cooperative activity, 321 Cooperative control, 322 Cooperative protocol, 355 Coverage or travel vertex, 203 Coverage problem, 281 CPP, 202, 228, 379 Cuboid, 33 Curvature, 42 Cycle building algorithm, 206 Cycle graph, 328 Cyclic polling system, 383 Cylinder, 33 D D* algorithm, 149, 150 D* lite algorithm, 151, 152 DEC-MDP, 367 DEC-POMDP, 360, 366 DEC-POMDP-COM, 362 Decentralized, 322 Decentralized control of a partially observable system, 356 Decentralized resolution of inconsistent association, 351 Decision horizon, 249 Decision table, 30, 189, 337 Decision tree, 252 Deconfliction difficulty parameter, 341 Deconfliction maintenance, 341, 342 Deconfliction maneuver, 341 Delaunay triangulation, 127 Deliberative architectures, 309 Index Dempster-Shafer, 13, 16, 17, 253 Dependence degree of set, 189 Depth, 153 Depth 2D branch algorithm, 120 Depth first algorithm, 119 Deterministic Rabin automaton, 29 Deterministic rules, 189 Differential flatness, 100 Dijkstra algorithm, 121 Directed graph, 21 Directed path, 23 Disagreement function, 327 Discrepancy, 265 Discrete event system theory, 286 Discrete events, 10 Discrete probability, 10 Distributed, 322 Distributed approximation to the chance constrained task allocation problem, 376 Distributed decision architecture of cooperative control, 323 Distributed flocking algorithms, 332 Distributed reactive collision avoidance, 341 Distribution of information, 323 Domain-level policy for an MTDP, 362 DRPP, 205 DTRP, 283 DTSP, 201 Dubins, 69, 72, 131, 199–201, 283, 380 Dubins PRM algorithms, 131 Dubins traveling salesman problem, 201 Dynamic 3D tracking problem, 324 Dynamic brushfire algorithm, 154 Dynamic consistency, 252 Dynamic graph, 328 Dynamic RRT, 156 Dynamic traveling repairman problem, 283 E EA, 218 Edge-sampled visibility graph, 125 Efficiency, 320 EGPSO, 189 EKF, 264 Ellipsoid, 32 Environment, 146 Equality graph, 24 Equivalence class, 30 Euler, 33, 37, 73, 192, 202, 206, 379 Event, 84 Event-trigger based process, 346, 347 401 Evidence theory, 16 Evolution algorithm, 214 Evolutionary algorithm, 185, 186 Execution level, 173 Expectation, 250 Expected value, 11 Expert system, 172 Exploitation, 249 Exploration, 249 Extended Kalman filter, 272 External nodes, 328 F Feasibility, 378 Feasible curve, 200 Feasible motion planning, 112 Finite Markov decision process, 258 Finite state model, 176 First visit algorithmx, 119 Flight envelope, 86 Flight plan, 3, 4, 8, 39, 68, 132, 172, 174, 175, 187, 221 Flight trajectory, 87 Flocking behavior, 329 Fly-By, 69 Fly-Over, 69 Focused D∗, 151 Formation, 333 Formation keeping, 323 Frame, 363 Frenet, 41, 342 Frenet-serret, 92, 94 Frenet-Serret frame, 42 Frequency approach, 13 Fuzzy set, 14, 16, 283 Fuzzy system, 190 G GA, 195 Gauss, 261, 270, 271, 279 Gaussian membership, 15 General vehicle routing problem, 378 General vehicle with differential constraints, 114 Generalized voronoi diagram, 154 Genetic algorithm, 
185, 186, 195 Geometric decomposition, 32 Geometric optimization, 145 Geostrophic Wind, 43 Gershgorin, 22 GEST, 137 Grammar, 347 402 Graph, 233, 391 Graph search, 131 Graphic matroid, 23 Greedy, 359 Greedy action, 249 Grid, 381 Grid based state space search, 128 GRVP, 378 Guidance, 83, 229 Guided expansive search trees algorithm, 138 Gust, 45 H HARPIC, 310 Hermite, 94 Heuristic, 114, 146, 353, 365, 366 Heuristic methods, 195 Hidden Markov model, 257 Hierarchical formation, 335 Hierarchical product graph, 23 Homotopic RRT, 156 Hungarian algorithm, 369, 370 Hurwitz, 334 Hybrid architectures, 309 Hybrid automaton, 26, 174 Hypergraph, 24 Hypergraph rank, 24 I I-POMDP, 364 I-POMDP model, 363 Idleness, 384 Imbalanced, 108, 109 Implementation level, 173 Indegree, 22 Indifferent agent, 319 Inference, 172, 356 Inference machine, 172 Infinite horizon discounted stochastic optimization problem, 299 Influence diagram, 255, 357 Information system, 30 Instance, 277 Intelligent decision making, 182 Intelligent planning system, 182 Intentional models, 364 Intermediate agents, 330 Internal nodes, 328 Intersection legs, 68 Intersection-based algorithm, 346 Interval analysis, 354 Index Interval analysis for matched edges, 370 Interval analysis for unmatched edges, 371 Interval approach, 367 Interval hungarian algorithm, 369, 372 Interval of matched edges algorithm, 371 Interval type-2 fuzzy set, 284 Invariants, 27 IT2 FS, 284 Iterative closest point, 349 Iterative legs, 68 IWD, 198 J Joint compatibility branch and bound, 349 Joint domain-level policy for an MTDP, 362 Joint policy for a DEC-POMDP, 360 Joint policy for a DEC-POMDP-COM, 363 Joint probabilistic data association, 349 K Kalman, 264, 271, 279 Karush kuh tucker, 367 Kinetic state machine, 24 Kinodynamic motion planning, 113 Knowledge, 320 Knowledge base, 172 Knowledge engineering, 172 Kolmogorov, 248 Kuhn-munkres hungarian algorithm, 368 L Lagrange multipliers analysis, 78 Laplacian matrix, 22 LARC, 20 Leader/follower protocol, 356 Learning, 277 Learning processes, 265 Lie bracket, 18 Line of sight, 47–49, 115 Linear consensus protocol, 327 Linear hybrid automaton, 27 Linear Quadratic Regulator in Belief Space (LQR-B), 274 Linear Temporal Logic (LTL), 181, 284, 347 Local planning, 130 Local Policy for action for a DEC-POMDPCOM, 363 Local Policy for DEC-POMDP, 360 Long Range Flight via Orographic Lift, 238 LOS, 48 Lower approximation, 31 Index LQG-MP, 274 LQR, 137, 270, 274, 367 LTL, 24, 26 Lyapunov, 82, 343 M MAA*, 365 Mach, 39 Maneuver, 176 Maneuver automaton, 176, 285 Manhattan distance, 381 Market based Iteration Risk Allocation, 367 Markov, 255, 259, 260, 268, 389 Markov chain, 257 Markov Decision Process (MDP), 247, 256, 258, 291, 360, 367 Markov nets, 269 Matroid, 23 Maximum likelihood, 348 MDL, 179 MDLe, 179 MDLn, 348 MDTRP, 354 Membership function of class π , 16 Membership function of class t, 16 Mesh-Zermelo-TSP, 227 Milestone construction, 130 MILP, 146, 190, 348 MMDP, 385 MMKP, 380 Model, 235 Model Predictive Control (MPC), 82, 189 Models of an aerial robot, 364 Models with explicit communication, 362 Models without explicit communication, 360 Monte Carlo, 261, 301 Monte Carlo value iteration, 266 Morse, 345 Motion grammar, 285, 286 Motion parser, 286 Motion primitive, 86, 132, 176, 178 Move suppression term, 338 MPD, 384 MS-RRT algorithm, 134 Multi agent investigations, 390 Multi-agent team decision problem, 361, 362 Multi-robot data association, 349 Multiple hypothesis tracking, 349 Multiqueue multi server, 383 Mutation, 215 403 N 
Nash, 355 Navigation function, 140 Nearest neighbor, 348 Negative reward, 260 Negotiation-based algorithm, 346 Neighboring optimal wind routing, 75 Network, 182, 184, 195, 196, 202, 207, 210, 213, 254, 266, 268, 277, 318, 322, 328, 330, 338, 347, 354, 367, 377, 380 Neymann-Pearson, 253 NN, 209, 210 Nominal orienteering problem, 295 Nominative decision making, 252 Non cooperative protocol, 355 Nonholonomic constraints, 65 Nonseparable function, 339 Normal vector, 42 O Objectivist, 248 Observability, 328 Observable first-order discrete Markov chain, 257 Observation nodes, 329 Online coverage algorithm, 205 Opinion configuration, 327 Optimal assignment, 368 Optimal guidance, 84 Optimal kinodynamic motion planning, 113 Optimal motion planning, 112 Optimal wind routing, 75 Oracle agent, 319 Orienteering problem, 292, 293 Orographic lift and regenerative soaring, 238 Outdegree, 22 P P-indiscernibility, 30 Personality model, 357 Parametric legs, 68 Pareto, 355 Parser, 288 Parsing, tokenizing, 285 Partially Observable Markov Decision Process, 262 Partially observable stochastic games, 359 Particle Swarm Optimization (PSO), 187, 189 Particle swarm optimization algorithm, 188 Path, 259, 290 404 Path graph, 328 Path planning for multiple Robots, 337 Payoff, 250 Perfect recall, 361 Performance, 320 Philip Hall, 105 Philip-Hall basis, 19 Physical agents, 330 Plan recognition task, 357 Plus function, 107 Point of closest approach, 46 Point robot, 113 Point robot with geometric constraints, 113 Point vehicle with differential constraints, 113 Poisson, 383 Policy, 249, 256 Policy iteration for infinite horizon, 366 Policy iteration for infinite horizon DECPOMDP algorithm, 366 POMDP, 255, 262, 264, 266 Pontryagin minimum principle, 77 Popov, Belevitch, Hautus lemma for symmetric matrices, 329 Positive reward, 260 Possibility measure, 14 Possibility theory, 14 Prequential approach, 248 Prioritization approach, 367 PRM, 130 Probabilistic automaton, 29 Probabilistic traveling salesman problem, 280 Probability density function, 11 Probability distribution functions, 13 Probability theory, 13, 14 Product automaton, 28 Proximity graph, 329 Pushdown automata, 286 Pythagorean hodograph, 91 Pythagorean hodograph algorithm, 95 Q Q-learning, 265 Query connection, 131 Query phase, 131 Queuing phenomena, 354 R Randomization, 353 Rational reaction sets, 356 Reach-avoid game, 147 Index Reachability, 328 Reachability graph, 128 Reachable set, 20, 147 Reachable subspace, 329 Reactive architectures, 309 Reactive planning, 139 Real time application algorithm, 125 Receding horizon MILP, 192 Receding horizon optimization, 85 Recursive backward induction schemes, 251 Recursive modeling method, 357 Regret based approach, 295 Reinforcement, 359 Relative degree for a SISO system, 18 Replanning RRT algorithm, 156 Repulsive potential, 345 Resolution algorithm, 353 Restarting weighting A* algorithm, 159 Reynolds boids model, 322 RHC, 278, 338 Riccati, 275 RNAV, 68 Roadmap, 124 Roadmap algorithm, 123 Roadmap construction phase, 130 Robust maneuver automaton, 177 Robust orienteering problem, 292, 295, 297 Role, 320 Role model, 357 Rooted directed spanning tree, 23 RPP, 203 RRG, 135 RRG algorithm, 135 RRG-EXTEND algorithm, 136 RRT, 132, 155 RRT basic algorithm, 133 RRT*, 136 RRT*-EXTEND algorithm, 137 RRT-EXTEND algorithm, 133 RTL, 181 Rule, 17, 172, 277 Run, 28 Rural postman problem algorithm, 204 Rural chinese postman algorithm, 380 Rural CPP, 380 S Sample generation, 130 Sampling, 133 Satisfaction of linear temporal logic formula, 25 
Saturation, 343 Index Scan matching, 349 Schedules, 259 Self centered agent, 319 Semantic rule, 286 Semi-modeler strategy, 358 Separable utility function, 252 Separation, 322 Set approximation, 30 Shortest path problem, 115 Simple space planning approaches, 116 Simulated annealing, 197 Simulated annealing algorithm, 198 Single agent investigations, 389 Singleton, 15 Singular controls, 80 SIPP, 163 SISO, 18 Six degrees of freedom formulation, 37 Small time local controllability, 19 Solution value for a finite horizon DECPOMDP, 361 Solution value for an infinite horizon DECPOMDP, 361 Source lines of codes, 337 Spatial decomposition, 34 Spatial self-organization, 330 Spatial value function, 86 Specification language, 287 Specification level, 173 State space navigation function with interpolation, 128 Stationary policy, 259 Steering, 133 STLC, 19 Stochastic Dubins traveling salesman problem, 283 Stochastic uncertainty, 368 Strategic modularity, 307 Strategy model, 357 Structure of the flight route planner, 221 Sub-intentional models, 364 Subjective determination, 13 Subjectivist, 248 Sum graph, 23 Superquadric function, 33 Survivability, 301 Survivability biaised risk free path, 303 Sustainability, 293 Swarm, 329 Synchronization labels, 27 Syntax directed definition, 287 System language, 287 405 T Tactical modularity, 307 Tangent vector, 42 Tatonnement, 367 Temporal difference, 265 Timed automaton, 28 Token, 288 Topological representation, 36 Torsion, 42 Transducer, 286 Transition probabilities, 261 Transitions, 27 Translational dynamics, 39 Tree, 23 Trim primitives, 176 TSP, 223, 228, 281, 353 TSP with triangle inequality, 194 Type, 364 U Unattended ground sensor, 383 Uncertain linear programming problem, 293 Uncertainties of interrelated edges algorithm, 374 Uncertainty, 13, 17, 259, 373 Uncertainty measurement for a single utility, 373 Uncertainty measurement for interval utilities, 373 Undirected graph, 327 Unmanned aerial vehicles, Unmatched edges algorithm, 372 Unobservable subspace, 329 Upper approximation, 31 Utility theory, 252 V Value iteration, 267 Vector relative degree, 20 Velocity field, 343 Virtual agents, 330 Visibility algorithm, 125 Visibility graph, 124 VISIBILITY-CLOSE algorithm, 127 VNS, 378 Von Karman, 44 Von Mises, 260 Voronoi, 125, 126, 132, 134, 154, 155, 192 VORONOI algorithm, 126 VRP, 227, 228, 348, 377 406 W Walrasian auction, 367 Wave-front planner, 118 Weight constrained shortest path problem, 115 Weighted A*, 123 Weighted A* algorithm, 158, 163 Weighted transition systems, 26 Index Wind, 46, 72, 74 Z Zadeh, 253 Zermelo, 70, 72, 75, 223 Zermelo-TSP algorithm, 223 ... Decision- theoretic making for single and multiple aerial robots • Decision making under uncertainty for single and multiple aerial robots Algorithms are constructed based on different theoretical assumptions and. .. available information Decision making determines the most important requirements and evaluates possible environment uncertainties The issues specific to planning and decision making of aerial robots. .. airplanes, helicopters and airships, or natural Y Bestaoui Sebbane, Planning and Decision Making for Aerial Robots, Intelligent Systems, Control and Automation: Science and Engineering 71, DOI:

