Fei Chen • Wei Ren

Distributed Average Tracking in Multi-agent Systems

Fei Chen
State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University, Shenyang, China
School of Control Engineering, Northeastern University at Qinhuangdao, Qinhuangdao, China

Wei Ren
Department of Electrical and Computer Engineering, University of California at Riverside, Riverside, CA, USA

ISBN 978-3-030-39535-3    ISBN 978-3-030-39536-0 (eBook)
https://doi.org/10.1007/978-3-030-39536-0

Mathematics Subject Classification (2010): 93-02

© Springer Nature Switzerland AG 2020
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature
Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

To my parents, wife, and child (Emma)
—Fei Chen

To my parents, wife, children (Kaden and Cody), and dog (Snowflake)
—Wei Ren

Preface

This book is about distributed average tracking, a class of problems arising from the study of multi-agent systems. The objective of distributed average tracking is as follows: given a group of agents, each having a reference signal, design an algorithm for each agent based on its own and adjacent neighbors' information obtained through local sensing and/or communication such that all agents will finally track the average of these reference signals. The problem has received increasing attention in recent years from the control community due to its broad applications in estimation/control-related problems. When there exists a central unit having access to all references, the task is fairly easy to accomplish; however, the centralized solution is usually "expensive" and not scalable, which inspires researchers to search for a distributed solution that employs solely local information. Distributed average tracking can be viewed as an "upgraded" version of the average consensus problem, whose objective is to calculate the average of static quantities, but it poses far more significant challenges in control synthesis and analysis. It is frequently employed as an estimation algorithm in a wide spectrum of application domains, e.g., multi-robot systems, manycore microchips, and distributed optimization/games. Additionally, it can be adopted as a control algorithm, such as in region-following formation control, which might lead to new insights into existing problems. It is expected that many other applications of distributed average tracking are still waiting to be discovered. Over the past two decades or so, there have been notable advances in discovering/defining new problems and solving them efficiently for multi-agent systems. This is partly due to the
development of algebraic graph theory, whose combination with linear control theory provides an effective tool for analyzing linear multi-agent systems; yet the tool generally fails to work for systems with nonlinearity and time variation, which is perceived as the main source of design difficulties in distributed average tracking. Distributed average tracking via solely local information poses significant challenges, since the references could be changing fast over time and be quite different from each other. While each agent could compute the average of its own and local neighbors' references, this will not be the average of all references. Also, these references could be generic external signals or generated by very different models. In addition, no information flooding or relay is available, and hence it is impossible for each agent to have access to all references. These challenges suggest that when designing distributed average tracking algorithms, the changing trend of the average be properly predicted with local information. Unfortunately, with smooth or continuous algorithms, the average that changes over time might not even be an equilibrium of the closed-loop system. While the consensus theory has been established for almost two decades, some observations have stimulated the need for studying distributed average tracking. The first is the observation that average consensus algorithms generally fail to work for references with a high changing rate. Except in rare scenarios, the direct application of consensus algorithms to track the average of time-varying signals is likely to result in poor performance and even instability. Compared with consensus, distributed average tracking is, to some extent, an underdeveloped research area. Distributed average tracking algorithms with zero tracking errors are seldom seen in the literature. The second is the increasing need to develop distributed average tracking algorithms both from the theoretical and
application perspectives, which arises from distributed Kalman filtering, distributed optimization, formation control, and distributed task migration. One aspect that plays a key role in solving distributed average tracking is the development of nonsmooth analysis tools in the multi-agent systems context. These tools allow the design of consensus inputs that are robust to bounded disturbances caused by the reference signals. The above observations and developments have led to the work that forms the core of the present book. This book is targeted at researchers and scientists who might find distributed average tracking useful in their research. The potential readers include those directly working on various problems in multi-agent systems such as distributed control, estimation, and optimization, and many others who apply distributed average tracking theory in application-oriented domains, such as manycore microchips, image processing, and signal processing. The authors believe that distributed average tracking is a fundamental enough topic that deserves significant attention and consideration in the study of multi-agent systems, considering the fact that many problems in multi-agent systems require the estimation or computation of various global, mostly time-varying, information. This book can also be used as a reference or text for courses on multi-agent systems at the graduate level.

Qinhuangdao, China    Fei Chen
Riverside, USA    Wei Ren

Acknowledgements

The material of this book is based on our works in the past decade, ever since Dr. Fei Chen worked as a postdoc at Utah State University under the supervision of Dr. Wei Ren. Over the years, the authors have benefited from discussions with our colleagues and our graduate students at the University of California at Riverside, Utah State University, City University of Hong Kong, Nankai University, Xiamen University, and Northeastern University. Even though it is not possible to name all of them, the authors would like to particularly
thank Zengqiang Chen, Guanrong Chen, Jie Chen, Gang Feng, Zongli Lin, Yongcan Cao, Sheida Ghapani, Shan Sun, Weiyao Lan, Lu Liu, Fang Yang, Hui Liu, Salar Rahili, and Yongduan Song. The authors are indebted to Oliver Jackson and Prashanth Ravichandran for their enthusiasm in the book project. Financial support from the US National Science Foundation (under Grants CMMI-1537729, ECCS-1611423, ECCS-1920798, ECCS-1307678, ECCS-1213295, and ECCS-1213291), the National Science Foundation of China (under Grants 61973061, 61973064, 61473240, 61104018, and 61104151), and the National Science Foundation of Hebei Province (under Grants F2019501043 and F2019501126) is also gratefully acknowledged.

Contents

Part I Introduction

1 Overview
1.1 What is a Multi-agent System?
1.1.1 Agent
1.1.2 Autonomy
1.1.3 Multi-agent Systems
1.2 What is Distributed Average Tracking?
1.2.1 Formal Definition
1.2.2 Distributed Average Tracking Versus Dynamic (Average) Consensus
1.2.3 Two Kinds of Constraints in Distributed Average Tracking Design
1.3 Applications of Distributed Average Tracking
1.3.1 Visual Maps Merging
1.3.2 Multi-camera Tracking
1.3.3 Distributed Task Migration in Manycore Systems
1.3.4 Dynamic Region-Following Formation Control
1.4 Literature Review: Distributed Average Tracking
1.4.1 Quick Overview
1.4.2 Design via the Invariant-Sum Scheme
1.4.3 Design via the Sum-Tracking Scheme
1.5 Connections with Other Cooperative Control Problems
1.5.1 Average Consensus
1.5.2 Coordinated Tracking (Leader-Following Consensus)
1.5.3 Containment Control
1.6 Organization
References

3 Distributed Average Tracking via Nonsmooth Feedback
3.1 Problem Description
3.2 Algorithm Design
3.3 Convergence Analysis
3.4 Initialization Errors, Time Delays, and Discrete-Time Implementation
3.4.1 Robustness to Initialization Errors
3.4.2 Robustness to Time Delays
3.4.3 Discrete-Time Implementation
3.5 Simulation
References
2 Preliminaries
2.1 Graph Theory
2.1.1 Basic Definitions
2.1.2 Connectedness Notions
2.1.3 Matrices Associated with a Graph
2.2 Nonlinear Stability Theory
2.2.1 Nonlinear Models
2.2.2 Notions of Stability
2.2.3 Stability Theorems
2.2.4 Input-to-State Stability
2.3 Nonsmooth Analysis
References

Part II Algorithms

4 Distributed Average Tracking via an Extended PI Scheme
4.1 Problem Description
4.2 Challenges of Algorithm Design
4.3 Design Criteria Under an Extended PI Scheme
4.4 Smooth Distributed Average Tracking Algorithms
4.5 Simulation
References

Part III Dynamics

5 Distributed Average Tracking for Double-Integrator Dynamics
5.1 Problem Description
5.2 Distributed Average Tracking Under a Fixed Network Topology
5.2.1 Controllers Design
5.2.2 Algebraic Graph Results
5.2.3 Convergence Analysis

10.2 Distributed Time-Varying Convex Optimization for Double-Integrator Dynamics

The time derivative of $W$ along (10.51) can be obtained as

$$\begin{aligned}\dot W(t) ={}& -\gamma\mu\, e_X^T(L\otimes I_m)e_X + e_V^T\big(\gamma I_{mN}-\alpha\zeta(L\otimes I_m)\big)e_V - 2\psi\gamma\, e_X^T e_V\\ &- y^T\begin{bmatrix}\sum_{j\in N_1}\beta_{1j}h(y_1-y_j)\\ \vdots\\ \sum_{j\in N_N}\beta_{Nj}h(y_N-y_j)\end{bmatrix} + y^T(\Pi\otimes I_m)\Phi + \sum_{i=1}^N\sum_{j\in N_i}(\beta_{ij}-\bar\beta)\dot\beta_{ij}.\end{aligned} \qquad(10.54)$$

We rewrite (10.54) as

$$\dot W = \bar W - \sum_{i=1}^N\sum_{j\in N_i}\beta_{ij}\, y_i^T h(y_i-y_j) + y^T(\Pi\otimes I_m)\Phi + \sum_{i=1}^N\sum_{j\in N_i}(\beta_{ij}-\bar\beta)\dot\beta_{ij},$$

where $\bar W = \begin{bmatrix}e_X\\ e_V\end{bmatrix}^T \bar P \begin{bmatrix}e_X\\ e_V\end{bmatrix}$ and

$$\bar P = \begin{bmatrix}-\mu\gamma(L\otimes I_m) & -\psi\gamma I_{mN}\\ -\psi\gamma I_{mN} & \gamma I_{mN}-\alpha\zeta(L\otimes I_m)\end{bmatrix}.$$

Because graph $G$ is connected, we have

$$\begin{aligned}\dot W(t) ={}& \bar W - \frac{1}{2}\sum_{i=1}^N\sum_{j\in N_i}\beta_{ij}(y_i-y_j)^T h(y_i-y_j) + \frac{1}{N}\sum_{i=1}^N\sum_{j=1}^N(y_i-y_j)^T\phi_i\\ &+ \frac{1}{2}\sum_{i=1}^N\sum_{j\in N_i}(\beta_{ij}-\bar\beta)(y_i-y_j)^T h(y_i-y_j)\\ ={}& \bar W + \frac{1}{2N}\sum_{i=1}^N\sum_{j=1}^N(y_i-y_j)^T(\phi_i-\phi_j) - \frac{\bar\beta}{2}\sum_{i=1}^N\sum_{j\in N_i}(y_i-y_j)^T h(y_i-y_j)\\ \le{}& \bar W + \frac{1}{2N}\sum_{i=1}^N\sum_{j=1}^N\|y_i-y_j\|\,\|\phi_i-\phi_j\| - \frac{\bar\beta}{2}\sum_{i=1}^N\sum_{j\in N_i}\frac{\|y_i-y_j\|^2}{\|y_i-y_j\|+\varepsilon e^{-ct}}\\ \le{}& \bar W + \frac{(N-1)\bar\phi}{4}\sum_{i=1}^N\sum_{j\in N_i}\|y_i-y_j\| - \frac{\bar\beta}{2}\sum_{i=1}^N\sum_{j\in N_i}\big(\|y_i-y_j\|-\varepsilon e^{-ct}\big),\end{aligned}$$

where in the last inequality Assumption 10.6 is used. Selecting a $\bar\beta$ such that $\bar\beta \ge \frac{(N-1)\bar\phi}{2}$, we obtain

$$\dot W(t) < \bar W + \frac{\bar\beta}{2}\sum_{i=1}^N\sum_{j\in N_i}\varepsilon e^{-ct}. \qquad(10.55)$$

If we can show $\bar W < -\psi W$ (or equivalently $\bar W + \psi W < 0$), then, knowing that $e^{-ct}\to 0$ as $t\to\infty$, Lemma 2.19 in [22] implies that the system (10.51) is asymptotically stable. Note that

$$\bar W + \psi W = \begin{bmatrix}e_X\\ e_V\end{bmatrix}^T\begin{bmatrix}(-\gamma\mu+\alpha\gamma\psi+\zeta\mu\psi)L - 2\psi^2\gamma I_N & 0\\ 0 & (\gamma+\zeta\psi)I_N - \alpha\zeta L\end{bmatrix}\begin{bmatrix}e_X\\ e_V\end{bmatrix}. \qquad(10.56)$$

Applying Lemma 10.3, we obtain that (10.56) is negative definite if

$$(-\gamma\mu+\alpha\gamma\psi+\zeta\mu\psi)L - 2\psi^2\gamma I_N < 0,\qquad (\gamma+\zeta\psi)I_N - \alpha\zeta L < 0. \qquad(10.57)$$

To satisfy the first condition in (10.57), we just need to show $-\gamma\mu+\alpha\gamma\psi+\zeta\mu\psi < 0$. Using conditions (10.49) and (10.50), we have

$$-\gamma\mu+\alpha\gamma\psi+\zeta\mu\psi < -\gamma\mu+\frac{\mu\gamma}{2}+\zeta\mu\psi < -\gamma\mu+\frac{\mu\gamma}{2}+\frac{\mu\gamma}{2} = 0.$$

To satisfy the second condition in (10.57), we have $(\gamma+\zeta\psi)I_N - \alpha\zeta L \le (\gamma+\zeta\psi)I_N - \alpha\zeta\lambda_2(L)I_N < 0$, where Lemma 10.1 and condition (10.48) are employed, respectively. Hence, $\bar W < -\psi W$ holds, and the agents reach consensus as $t\to\infty$. Now, similar to the proof of Theorem 10.5, if $H_i(x_i,t) = H_j(x_j,t)$, $\forall t$ and $\forall i,j\in\mathcal I$, it can be shown that $\sum_{j=1}^N\nabla f_j(x_j,t)$ will converge to zero asymptotically. Under Assumption 10.1 and the assumption that $\sum_{i=1}^N f_i(x,t)$ is convex, Lemma 10.2 is employed. Using the fact that $x_i(t)\to x_j(t)$, $\forall i,j\in\mathcal I$, as $t\to\infty$, it is easy to see that the optimization goal (10.8) is achieved.

Remark 10.10 It is worth mentioning that if $\frac{\gamma}{\alpha\zeta} < \lambda_2(L)$, there always exists a positive coefficient $\psi$ such that conditions (10.48)–(10.50) hold. However, selecting $\psi$ based on conditions (10.48)–(10.50) affects the convergence speed, where by having a larger $\psi$ the agents reach consensus faster. To satisfy conditions (10.48) and (10.50), it is sufficient to have $\alpha\zeta\lambda_2(L) > 2\gamma$ and $\gamma > 2\zeta\psi$ (e.g., selecting a large $\alpha$ and choosing proper $\gamma$, $\zeta$, and $\psi$). It can also be seen that by selecting a large enough $\mu$, (10.49) can be satisfied.

Remark 10.11 It can be shown that Assumption 10.6 holds under the same conditions mentioned in Remark 10.7 (see the appendices of [24]).
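The boundary-layer functions at the heart of these proofs can be sanity-checked numerically. The following sketch is not from the book: the function names are ours, and the specific form $h(z) = z/(\|z\| + \varepsilon e^{-ct})$ assumed for the time-varying approximation (10.46) is inferred from the $\varepsilon e^{-ct}$ terms appearing in the convergence analysis above. It verifies, on random samples, the key inequality $z^T h(z) \ge \|z\| - \varepsilon e^{-ct}$ that lets the negative terms dominate in the Lyapunov derivative, together with its time-invariant counterpart for (10.58).

```python
import numpy as np

def h_tv(z, eps, c, t):
    # Time-varying boundary-layer approximation of sgn(z), assumed form of (10.46):
    # h(z) = z / (||z|| + eps * e^{-c t}); the boundary layer shrinks over time.
    return z / (np.linalg.norm(z) + eps * np.exp(-c * t))

def h_ti(z, eps):
    # Time-invariant version (10.58): constant boundary layer eps.
    return z / (np.linalg.norm(z) + eps)

rng = np.random.default_rng(0)
eps, c = 2.0, 0.5
for t in (0.0, 1.0, 10.0):
    layer = eps * np.exp(-c * t)
    for _ in range(1000):
        z = rng.normal(size=3) * rng.uniform(1e-2, 1e2)
        # z^T h(z) = ||z||^2 / (||z|| + eps e^{-ct}) >= ||z|| - eps e^{-ct}
        assert z @ h_tv(z, eps, c, t) >= np.linalg.norm(z) - layer - 1e-9
        # and the time-invariant version satisfies z^T h(z) >= ||z|| - eps
        assert z @ h_ti(z, eps) >= np.linalg.norm(z) - eps - 1e-9
print("boundary-layer inequalities verified")
```

Since the layer $\varepsilon e^{-ct}$ vanishes as $t\to\infty$ while the constant layer of (10.58) does not, the residual disturbance decays in the first case but persists at size $\varepsilon$ in the second, which is consistent with exact consensus under (10.46) and the bounded errors of the next section under (10.58).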
10.2.5 Distributed Time-Varying Convex Optimization Using Time-Invariant Approximation of Signum Function

In this section, our focus is on replacing the signum function with the time-invariant approximation

$$h(z) = \frac{z}{\|z\|+\varepsilon}, \qquad(10.58)$$

where $\varepsilon > 0$. Here, the boundary layer $\varepsilon$ is constant. Employing (10.58) instead of (10.46) in the control algorithm (10.47) makes the controller easier to implement in real applications. The trade-off is that the agents will no longer reach consensus, which introduces additional complexities in the convergence analysis. Establishing the optimization error bound in this case is a nontrivial task, which is addressed in this section. The reason that the time-invariant continuous approximation (10.58) cannot ensure distributed optimization with zero error is that the global optimal trajectory is not even an equilibrium point of the closed-loop system whenever a time-invariant continuous approximation is introduced. It is worthwhile to mention that if the signum function were replaced with a different time-invariant continuous approximation other than (10.58), there would be no guarantee that the same conclusion in this section would still hold, and further careful analysis would be needed.

Theorem 10.8 Suppose that graph $G$ is connected, Assumptions 10.1, 10.3, and 10.6 hold, and the gradients of the cost functions can be written as $\nabla f_i(x_i,t) = \sigma x_i + g_i(t)$, $\forall i\in\mathcal I$, where $\sigma$ and $g_i(t)$ are, respectively, a positive coefficient and a time-varying function. If conditions (10.48)–(10.50) hold, then using (10.47) with $h(\cdot)$ given by (10.58) for (10.27), we have

$$\lim_{t\to\infty}\Big\|\frac{1}{N}\sum_{i=1}^N x_i - x^*\Big\| = 0,\qquad \lim_{t\to\infty}\Big\|\frac{1}{N}\sum_{i=1}^N v_i - v^*\Big\| = 0, \qquad(10.59)$$

where $x^*$ and $v^*$ are the position and the velocity of the optimal trajectory, respectively. In addition, the agents track the optimal trajectory with bounded errors such that, as $t\to\infty$,

$$\|x_i - x^*\|^2 < \frac{\bar\phi N(N-1)^2\varepsilon}{4\psi\lambda_{\min}[P]},\qquad \|v_i - v^*\|^2 < \frac{\bar\phi N(N-1)^2\varepsilon}{4\psi\lambda_{\min}[P]},\quad \forall i\in\mathcal I, \qquad(10.60)$$

where $P$ is defined after (10.52).

Proof The proof will be separated into two parts. In the first part, we show that the consensus error will remain bounded. In the second part, we show that the error between the agents' states and the optimal trajectory will remain bounded. Define the Lyapunov function candidate $W$ as in (10.52). Similar to the proof of Theorem 10.7, with $h(\cdot)$ given by (10.58) instead of (10.46), we obtain

$$\dot W < \bar W + \frac{\bar\beta}{2}\sum_{i=1}^N\sum_{j\in N_i}\varepsilon < -\psi W + \frac{\bar\beta}{2}N(N-1)\varepsilon,$$

where $\bar\beta$ is selected such that $\bar\beta \ge \frac{(N-1)\bar\phi}{2}$. Then, we have

$$0 \le W(t) \le \frac{\bar\beta N(N-1)\varepsilon}{2\psi}\big(1-e^{-\psi t}\big) + W(0)e^{-\psi t}.$$

Therefore, as $t\to\infty$, we have

$$\lambda_{\min}[P]\,\Big\|\begin{bmatrix}e_X\\ e_V\end{bmatrix}\Big\|^2 \le W = \begin{bmatrix}e_X\\ e_V\end{bmatrix}^T P\begin{bmatrix}e_X\\ e_V\end{bmatrix} \le \frac{\bar\beta N(N-1)\varepsilon}{2\psi}.$$

Now, it can be seen that there exists a bound on the position and velocity consensus errors as $t\to\infty$, that is,

$$\Big\|x_i - \frac{1}{N}\sum_{j=1}^N x_j\Big\|^2 < \frac{\bar\phi N(N-1)^2\varepsilon}{4\psi\lambda_{\min}[P]},\qquad \Big\|v_i - \frac{1}{N}\sum_{j=1}^N v_j\Big\|^2 < \frac{\bar\phi N(N-1)^2\varepsilon}{4\psi\lambda_{\min}[P]}, \qquad(10.61)$$

where it is easy to see that by selecting a larger $\psi$ satisfying conditions (10.48)–(10.50), the error bound becomes smaller. In what follows, we focus on finding the relation between the optimal trajectory and the agents' states. According to Assumption 10.1 and using Lemma 10.2, we know $\sum_{j=1}^N\nabla f_j(x^*,t) = 0$. Hence, under the assumption of $\nabla f_i(x_i,t) = \sigma x_i + g_i(t)$, the optimal trajectory is

$$x^* = -\frac{\sum_{j=1}^N g_j}{N\sigma},\qquad v^* = -\frac{\sum_{j=1}^N \dot g_j}{N\sigma}. \qquad(10.62)$$

Similar to the proof of Theorem 10.5, we can show that, regardless of whether consensus is reached or not, it is guaranteed that $\sum_{j=1}^N\nabla f_j(x_j,t)$ will converge to zero asymptotically. As a result, we have $\sum_{j=1}^N x_j \to -\frac{\sum_{j=1}^N g_j}{\sigma}$ and $\sum_{j=1}^N v_j \to -\frac{\sum_{j=1}^N \dot g_j}{\sigma}$. By using (10.62), we can conclude (10.59). According to (10.61), it follows that (10.60) holds.

Remark 10.12 Using the time-invariant approximation (10.58) of the signum function, instead of the time-varying one (10.46), makes the implementation easier in real applications. However, the results show that the team
cost function is not exactly minimized, and the agents track the optimal trajectory with a bounded error. It also restricts the acceptable cost functions to the class whose gradients take the form $\nabla f_i(x_i,t) = \sigma x_i + g_i(t)$.

Remark 10.13 The results in the appendices of [24] can be modified for Theorem 10.7, where under the assumption of $\nabla f_i(x_i,t) = \sigma x_i + g_i(t)$, it is easy to show that Assumption 10.6 holds if $\|g_i(t)-g_j(t)\|$, $\|\dot g_i(t)-\dot g_j(t)\|$, and $\|\ddot g_i(t)-\ddot g_j(t)\|$, $\forall t$ and $\forall i,j\in\mathcal I$, are bounded.

Remark 10.14 The algorithms introduced in Sects. 10.1.2, 10.1.3, 10.2.2, 10.2.4, and 10.2.5 are still valid in the case of a strongly connected weight-balanced directed graph $G$. In our proofs, $L$ can be replaced with the symmetric matrix $\frac{1}{2}(L+L^T)$, as $x^T L x = \frac{1}{2}x^T(L+L^T)x$. Since $G$ is strongly connected and weight balanced, $L+L^T$ is positive semidefinite with a simple zero eigenvalue. Note that when applying the introduced algorithms to directed graphs, we need to redefine $\lambda_2$ as the smallest nonzero eigenvalue of $\frac{1}{2}(L+L^T)$.

10.3 Distributed Time-Varying Convex Optimization with Swarm Tracking Behavior

In this section, we introduce two distributed optimization algorithms with swarm tracking behavior, where the center of the agents tracks the optimal trajectory defined by (10.7) for single-integrator and double-integrator dynamics, respectively, while the agents avoid inter-agent collisions and maintain connectivity. In most, if not all, existing works on distributed optimization, the agents eventually approach a common optimal point, while in some applications it is desirable to achieve swarm behavior. The goal of flocking or swarming with a leader is that a group of agents tracks a leader with only local interaction while maintaining connectivity and avoiding inter-agent collisions [4, 8, 20, 29]. Swarm tracking algorithms are studied in [20, 29], where it is assumed that the leader is a neighbor of all followers
and has a constant and time-varying velocity, respectively. In [4], swarm tracking algorithms via a variable structure approach are introduced, where the leader is a neighbor of only a subset of the followers. In the aforementioned studies, the leader plans the trajectory for the team, and the agents are not directly assigned to complete a task cooperatively. In [31], the agents are assigned a task to estimate a stationary field while exhibiting cohesive motions. Although optimizing a certain team criterion while performing swarm behavior is a highly motivated task in many multi-agent applications, it has not been addressed in the literature.

10.3.1 Distributed Time-Varying Convex Optimization with Swarm Tracking Behavior for Single-Integrator Dynamics

In this subsection, we focus on the distributed optimization problem with swarm tracking behavior for single-integrator dynamics (10.1). To solve this problem, we propose the algorithm

$$u_i(t) = -\beta\,\mathrm{sgn}\Big[\sum_{j\in N_i(t)}\frac{\partial V_{ij}}{\partial x_i}\Big] + \phi_i(t), \qquad(10.63)$$

where $V_{ij}$ is a potential function between agents $i$ and $j$ to be designed, $\beta$ is positive, and $\phi_i$ is defined in (10.9). In (10.63), each agent uses its own position and the relative positions between itself and its neighbors. It is worth mentioning that in this subsection, we assume each agent has a communication/sensing radius $R$, where if $\|x_i-x_j\| < R$, agents $i$ and $j$ become neighbors. Our proposed algorithm guarantees connectivity maintenance in the sense that if the graph $G(0)$ is connected, then for all $t$, $G(t)$ will remain connected. Before presenting the main result of this subsection, we need to define the potential function $V_{ij}$.

Definition 10.2 ([4]) The potential function $V_{ij}$ is a differentiable nonnegative function of $\|x_i-x_j\|$ which satisfies the following conditions:
(1) $V_{ij} = V_{ji}$ has a unique minimum at $\|x_i-x_j\| = d_{ij}$, where $d_{ij}$ is the desired distance between agents $i$ and $j$ and $R > \max_{i,j} d_{ij}$.
(2) $V_{ij}\to\infty$ if $\|x_i-x_j\|\to 0$.
(3) $V_{ii} = c$, where $c$ is a constant.
(4) $\frac{\partial V_{ij}}{\partial(\|x_i-x_j\|)} = 0$ if $\|x_i(0)-x_j(0)\| \ge R$ and $\|x_i-x_j\| \ge R$; $\frac{\partial V_{ij}}{\partial(\|x_i-x_j\|)}\to\infty$ if $\|x_i(0)-x_j(0)\| < R$ and $\|x_i-x_j\|\to R$.

The motivation of the last condition in Definition 10.2 is to maintain the initially existing connectivity patterns. It guarantees that two agents which are neighbors at $t = 0$ remain neighbors. However, if two agents are not neighbors at $t = 0$, they do not need to be neighbors at $t > 0$ (see [4]).

Theorem 10.9 Suppose that graph $G(0)$ is connected, Assumptions 10.1 and 10.3 hold, and the gradient of the cost functions can be written as $\nabla f_i(x_i,t) = \sigma x_i + g_i(t)$, $\forall i\in\mathcal I$, where $\sigma$ and $g_i(t)$ are, respectively, a positive coefficient and a time-varying function. If $\beta > \|\phi_i\|$, $\forall i\in\mathcal I$, then for the system (10.1) with the algorithm (10.63), the center of the agents tracks the optimal trajectory while the agents maintain connectivity and avoid inter-agent collisions.

Proof Define the positive semidefinite Lyapunov function candidate

$$W = \frac{1}{2}\sum_{i=1}^N\sum_{j=1}^N V_{ij}. \qquad(10.64)$$

The time derivative of $W$ is obtained as
=− ∂ xi ∂ xi ∂x j (10.65) Now, by summing both sides of the closed-loop system (10.1) with the control algorithm (10.63), for i ∈ I , we have Nj=1 x˙ j = Nj=1 φ j We also know that the agents have identical Hessians since it is assumed that ∇ f i (xi , t) = σ xi + gi (t) Now, similar to the proof of Theorem 10.5, regardless of whether consensus is reached or not, we can show that Nj=1 ∇ f j (x j , t) will converge to zero asymptomatically Hence, − N gj j=1 On the other hand, using Lemma 10.2 and under we have Nj=1 xi → σ Assumption 10.1, we know Nj=1 ∇ f j (x ∗ , t) = Hence, the optimal trajectory is − N gj N ∗ This implies that N1 x ∗ = Nj=1 j=1 x i → x , where we have shown that the σ center of the agents will track the team cost function minimizer Remark 10.15 In the appendices of [24], it is shown that a constant β can be selected such that β > φi , ∀t and ∀i ∈ I , if gi (t) and g˙ i (t) , ∀t and ∀i ∈ I , are bounded In particular, it is shown that such a constant β can be determined at time t = by using the agents’ initial states and the upper bounds on gi (t) and g˙ i (t) , ∀t and ∀i ∈ I 224 10 Distributed Average Tracking in Distributed Convex Optimization 10.3.2 Distributed Time-Varying Convex Optimization with Swarm Tracking Behavior for Double-Integrator Dynamics In this subsection, we focus on distributed time-varying optimization with swarm tracking behavior for double-integrator dynamics (10.27) We will propose an algorithm, where each agent has access to only its own position and the relative positions and velocities between itself and its neighbors We propose the algorithm u i (t) = − ∂ Vi j −β ∂ xi j∈N (t) i sgn[vi (t) − v j (t)] + φi (t), (10.66) j∈Ni (t) where Vi j is defined in Definition 10.2, β is a positive coefficient, and φi is defined in (10.28) Theorem 10.10 Suppose that graph G (0) is connected, Assumptions 10.1 and 10.3 hold, and the gradient of the cost functions can be written as ∇ f i (xi , t) = σ xi + )Φ √ m holds, for the system 
(10.27) with the algorithm gi (t), ∀i ∈ I If β > (Π⊗I λ2 [L(t)] (10.66), the center of the agents tracks the optimal trajectory, the agents’ velocities track the optimal velocity, and the agents maintain connectivity and avoid inter-agent collisions Proof Writing the closed-loop system (10.27) with the control algorithm (10.66) based on the consensus errors e X and eV defined in (10.30), we have ⎧ eV ⎪ ⎪e˙ X = ⎪ ⎪ ⎪ e ˙ (t) = −α[L(t) ⊗ Im ]eV (t) − β[E(t) ⊗ Im ]sgn{[E T (t) ⊗ Im ]eV } ⎪ V ⎪ ⎡ ⎨ ∂ V1 j ⎤ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ ⎢ ⎢ ⎣ j∈N1 ∂e X (t) ∂ VN j j∈N N ∂e X N (t) ⎥ ⎥ + (Π ⊗ Im )Φ(t) ⎦ Define the positive semidefinite Lyapunov function candidate W = N N i=1 N Vi j + eVT eV j=1 The time derivative of W along (10.67) can be obtained as W˙ (t) = N N i=1 j=1 ∂ Vi j ∂ Vi j eV + eV ∂e X i i ∂e X j j + eVT e˙V (10.67) 10.3 Distributed Time-Varying Convex Optimization with Swarm Tracking Behavior 225 Using Lemma 3.1 in [4], W˙ can be rewritten as W˙ (t) = − αeVT [L(t) ⊗ Im ]eV (t) − βeVT (t)[E(t) ⊗ Im ]sgn{[E T (t) ⊗ Im ]eV (t)} + eVT (t)(Π ⊗ Im )Φ(t) √ Using a similar argument to that in (10.13), we obtain that if β λ2 [L(t)] > (Π ⊗ Im )Φ , then W˙ is negative semidefinite Therefore, having W ≥ and W˙ ≤ 0, we can conclude that Vi j , ev ∈ L∞ By integrating both sides of W˙ ≤ −αeVT (L(t) ⊗ Im )eV , we can see that ev ∈ L2 Now, applying Barbalat’s Lemma [28], we obtain that eV converges to zero asymptotically, which means that the agents’ velocities reach consensus as t → ∞ Since Vi j is bounded, it is guaranteed that there will be no inter-agent collision and the connectivity is maintained In the next step, using (10.65), by summing both sides of the closed-loop system (10.27) with the control algorithm (10.66) for i ∈ I , we have Nj=1 v˙ j = Nj=1 φ j N Now, if the team cost function i=1 f i (x, t) is convex and ∇ f i (xi , t) = σ xi + gi (t), applying a procedure similar to the proof of Theorem 10.8, we can show that N j=1 ∇ f j (x j , t) will converge to zero 
asymptotically and (10.59) holds. In particular, we have shown that the average of the agents' states tracks the optimal trajectory. Because the agents' velocities reach consensus as $t\to\infty$, we have that $v_i$ approaches $v^*$ as $t\to\infty$.

Remark 10.16 The assumption $\beta > \frac{\|(\Pi\otimes I_m)\Phi\|}{\sqrt{\lambda_2[L(t)]}}$ can be interpreted as a bound on the difference between the agents' internal signals. Using the fact that $\lambda_2[L(t)]$ is lower bounded above zero, there always exists a $\beta$ satisfying $\beta > \frac{\|(\Pi\otimes I_m)\Phi\|}{\sqrt{\lambda_2[L(t)]}}$ if Assumption 10.6 holds. Here, to satisfy Assumption 10.6 with $\nabla f_i(x_i,t) = \sigma x_i + g_i(t)$, $\forall i\in\mathcal I$, the boundedness of $\|g_i(t)-g_j(t)\|$, $\|\dot g_i(t)-\dot g_j(t)\|$, and $\|\ddot g_i(t)-\ddot g_j(t)\|$, $\forall t$ and $\forall i,j\in\mathcal I$, is sufficient.

Remark 10.17 The algorithms proposed in Sects. 10.1.3, 10.2.3, and 10.3 are provided for time-varying graphs. For the algorithms introduced in Sects. 10.1.2, 10.2.2, 10.2.4, and 10.2.5, the current results are demonstrated for static graphs. However, the results are valid for time-varying graphs if the graph $G(t)$ is connected for all $t$ and a sufficiently large constant gain is used instead of the time-varying adaptive gains. In particular, the constant gain $\beta$ should satisfy $\beta > \frac{\|(\Pi\otimes I_m)\Phi\|}{\sqrt{\lambda_2[L(t)]}}$. To select such a $\beta$, we need to know $\bar\phi$, defined in Assumption 10.4 (or Assumption 10.6 in the case of double-integrator dynamics), and $\beta_x$ (and $\beta_v$ in the case of double-integrator dynamics), defined in the appendices of [24].

Remark 10.18 All the proposed algorithms are also applicable to non-convex functions. However, in this case, it is only guaranteed that the agents converge to a local optimal trajectory of the team cost function.

10.4 Simulation

In this section, we present various simulation examples to illustrate the theoretical results in the previous sections. Consider a team of six agents. The interaction among the agents is described by an undirected ring graph. The agents' goal is to minimize the team cost function $\sum_{i=1}^6 f_i(x_i,t)$, where $x_i = (r_{xi}, r_{yi})^T$ is the coordinate of agent $i$ in the 2D plane, subject to $x_i = x_j$. In the first example, we apply the algorithm (10.9) for single-integrator dynamics (10.1). The local cost function for agent $i$ is chosen as

$$f_i(x_i,t) = [r_{xi} - i\sin(t)]^2 + [r_{yi} - i\cos(t)]^2, \qquad(10.68)$$

where it is easy to see that the optimal point of the team cost function creates a trajectory of a circle whose center is at the origin and whose radius is equal to $\frac{21}{6}$. For (10.68), Assumption 10.3 and the conditions for the agents' cost functions in Remark 10.4 hold. In addition, the cost functions have identical Hessians and the team cost function is convex. The gains $\beta_{ij}(0) = \beta_{ji}(0)$ are chosen randomly within $(0.1, 2)$ in the algorithm (10.9). The trajectories of the agents and the optimal trajectory are shown in Fig. 10.1. It can be seen that the agents reach consensus and track the optimal trajectory which minimizes the team cost function.

In the case of double-integrator dynamics, we first give an example to illustrate the algorithm (10.28) for (10.27) with the local cost functions defined by (10.68). Choosing the coefficients in (10.28) as $\mu = 5$, $\alpha = 12$, $\gamma = 5$, $\zeta = 12$, and $\beta_{ij}(0) = \beta_{ji}(0)$ randomly within $(0.1, 2)$, the agents reach consensus and the team cost function is minimized, as shown in Fig. 10.2. In the next example, we illustrate the results obtained in Sect. 10.2.3, where it has been clarified that the algorithm (10.37)–(10.40) can be used for local cost functions with nonidentical Hessians. Here, the local cost functions

$$f_i(x_i,t) = \Big[\frac{r_{xi} - i\sin(t)}{i}\Big]^2 + \Big[\frac{r_{yi} - i\cos(t)}{i}\Big]^2 \qquad(10.69)$$

will be used, where $H_i(x_i,t) = \frac{2}{i^2}I_2$, $\forall i\in\mathcal I$. It can be obtained that the team cost function's optimal trajectory creates a circle whose center is at the origin and whose radius is equal to 1.64. The algorithm (10.37)–(10.40) with $\kappa = 12$, $\rho = 2$, $\alpha_1 = 0.1$, and $\alpha_2 = 0.2$ is used for the system (10.27). Figure 10.3 shows that the team cost function is minimized.

Fig. 10.1 Trajectories of all agents along with the optimal trajectory using the algorithm (10.9) for the local cost functions (10.68)

Fig. 10.2 Trajectories of all agents along with the optimal trajectory using the algorithm (10.28) for the local cost functions (10.68)

Fig. 10.3 Trajectories of all agents along with the optimal trajectory using the algorithm (10.37)–(10.40) for the local cost functions (10.69)

In our next example, the results in Sect. 10.2.5 are illustrated, where the time-invariant approximation of the signum function is employed. Here, the algorithm (10.47) with $h(\cdot)$ given by (10.58) is used to minimize the agents' team cost function for the local cost functions defined in (10.68). The coefficients are chosen as $\mu = 5$, $\alpha = 10$, $\gamma = 5$, $\zeta = 5$, $\varepsilon = 2$, and $\beta_{ij}(0) = \beta_{ji}(0)$ randomly within $(0.1, 2)$. Figure 10.4 shows the agents' trajectories along with the optimal one. It is shown that the agents track the optimal trajectory with a bounded error.

In our last illustration, the swarm tracking control algorithm (10.66) is employed, where the local cost functions are defined as

$$f_i(x_i,t) = \Big[r_{xi} + \frac{2i\sin(0.5t)}{t+1}\Big]^2 + [r_{yi} + i\sin(0.1t)]^2. \qquad(10.70)$$

In this case, we let $R =$ . The parameter of (10.66) is chosen as $\beta = 20$. To guarantee collision avoidance and connectivity maintenance, the potential function's partial derivative is chosen as in Eqs. (36) and (37) in [4], where $d_{ij} = 0.5$, $\forall i,j$. Figure 10.5 shows that the center of the agents' positions tracks the optimal trajectory while the agents remain connected and avoid collisions.

Fig. 10.4 Trajectories of all agents using the algorithm (10.47) and (10.58) for the local cost functions (10.68)

Fig. 10.5 Trajectories of all agents using the algorithm (10.66) for the local cost functions (10.70)

Notes

The distributed optimization problem has attracted significant attention recently. It arises in many applications of multi-agent systems, where agents cooperate in order to
accomplish various tasks as a team in a distributed and optimal fashion. In particular, distributed average tracking algorithms play a key role in solving the problem. This chapter studied a class of distributed convex optimization problems, where the goal is to minimize the sum of local cost functions, each of which is known to only an individual agent. The incremental subgradient algorithm was introduced as one of the earlier approaches addressing this problem [23, 25]. In the algorithm, an estimate of the optimal point is passed through the network while each agent makes a small adjustment to it. Recently, some significant progress has been made based on the combination of consensus and subgradient algorithms [13, 19, 35]. For example, this combination was used in [13] for solving coupled optimization problems with a fixed undirected graph. A projected subgradient algorithm was proposed in [19], where each agent is required to lie in its own convex set. It is shown that all agents can reach an optimal point in the intersection of all agents' convex sets even for a time-varying communication graph with doubly stochastic edge weight matrices. However, all the aforementioned works focus on discrete-time algorithms.

Recently, some new research has been conducted on distributed optimization problems for multi-agent systems with continuous-time dynamics. Such a scheme has applications in motion coordination of multi-agent systems. For example, multiple physical vehicles modeled by continuous-time dynamics might need to rendezvous at a team optimal location. In [18], a generalized class of zero-gradient-sum controllers was introduced for twice differentiable strongly convex functions under an undirected graph. In [26], a continuous-time version of [19] for directed and undirected graphs was studied, where it is assumed that each agent is aware of the convex optimal solution set of its own cost function and that the intersection of all these sets is nonempty. Reference [15] derived an explicit expression for the convergence rate and ultimate error bounds of a continuous-time distributed optimization algorithm. In [32], a general approach was given to address the problem of distributed convex optimization with equality and inequality constraints. A proportional-integral algorithm was introduced in [9, 12, 14], where [12] considered strongly connected weight-balanced directed graphs and [14] extended these results using discrete-time communication updates. A distributed optimization problem was studied in [16] with adaptivity and finite-time convergence properties.

In continuous-time optimization problems, the agents are usually assumed to have single-integrator dynamics. However, a broad class of vehicles requires double-integrator dynamic models. In addition, having time-invariant cost functions is a common assumption in the literature. However, in many applications the local cost functions are time varying, reflecting the fact that the optimal point could be changing over time and hence creates a trajectory. There are just a few works in the literature addressing the distributed optimization problem with time-varying cost functions [6, 17, 27]. In those works, the agents converge to the optimal trajectory with bounded errors. For example, the economic dispatch problem for a network of power generating units was studied in [6], where it is proved that the algorithm is robust to slowly time-varying loads. In particular, it is shown that for time-varying loads with bounded first and second derivatives, the optimization error will remain bounded. In [27], a distributed time-varying stochastic optimization problem was considered, where it is assumed that the cost functions are strongly convex, with Lipschitz continuous gradients. It is proved that under the persistent excitation assumption, a bounded error in expectation will be achieved asymptotically. In [17], a distributed discrete-time
algorithm based on the alternating direction method of multipliers (ADMM) was introduced to optimize a time-varying cost function. It is proved that for strongly convex cost functions with Lipschitz continuous gradients, if the primal optimal solutions drift slowly enough with time, the primal and dual variables remain close to their optimal values.

Acknowledgements © 2017 IEEE. Reprinted, with permission, from Salar Rahili and Wei Ren, "Distributed continuous-time convex optimization with time-varying cost functions," IEEE Transactions on Automatic Control, vol. 62, no. 4, pp. 1590–1605, 2017.

References

1. M.S. Bazaraa, H.D. Sherali, C.M. Shetty, Nonlinear Programming: Theory and Algorithms (Wiley, 2005)
2. S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory (SIAM, 1994)
3. S. Boyd, L. Vandenberghe, Convex Optimization (Cambridge University Press, Cambridge, 2004)
4. Y. Cao, W. Ren, Distributed coordinated tracking with reduced interaction via a variable structure approach. IEEE Trans. Autom. Control 57(1), 33–48 (2012)
5. F. Chen, Y. Cao, W. Ren, Distributed average tracking of multiple time-varying reference signals with bounded derivatives. IEEE Trans. Autom. Control 57(12), 3169–3174 (2012)
6. A. Cherukuri, J. Cortés, Initialization-free distributed coordination for economic dispatch under varying loads and generator commitment. arXiv:1409.4382 (2014)
7. J. Cortés, Discontinuous dynamical systems. IEEE Control Syst. Mag. 28(3), 36–73 (2008)
8. F. Cucker, S. Smale, Emergent behavior in flocks. IEEE Trans. Autom. Control 52(5), 852–862 (2007)
9. G. Droge, H. Kawashima, M.B. Egerstedt, Continuous-time proportional-integral distributed optimisation for networked systems. J. Control Decis. 1(3), 191–213 (2014)
10. C. Edwards, S. Spurgeon, Sliding Mode Control: Theory and Applications (Taylor & Francis, 1998)
11. A. Filippov, Differential Equations with Discontinuous Righthand Sides (Springer, Berlin, 1988)
12. B. Gharesifard, J. Cortés, Distributed continuous-time convex optimization on weight-balanced digraphs. IEEE Trans. Autom. Control 59(3), 781–786 (2014)
13. B. Johansson, T. Keviczky, M. Johansson, K. Johansson, Subgradient methods and consensus algorithms for solving convex optimization problems, in Proceedings of the IEEE Conference on Decision and Control (2008), pp. 4185–4190
14. S.S. Kia, J. Cortés, S. Martinez, Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication. Automatica 55(5), 254–264 (2015)
15. K. Kvaternik, L. Pavel, A continuous-time decentralized optimization scheme with positivity constraints, in Proceedings of the IEEE Conference on Decision and Control (2012), pp. 6801–6807
16. P. Lin, W. Ren, Y. Song, J. Farrell, Distributed optimization with the consideration of adaptivity and finite-time convergence, in Proceedings of the American Control Conference (2014), pp. 3177–3182
17. Q. Ling, A. Ribeiro, Decentralized dynamic optimization through the alternating direction method of multipliers. IEEE Trans. Signal Process. 62(5), 1185–1197 (2014)
18. J. Lu, C.Y. Tang, Zero-gradient-sum algorithms for distributed convex optimization: the continuous-time case. IEEE Trans. Autom. Control 57(9), 2348–2354 (2012)
19. A. Nedic, A. Ozdaglar, P. Parrilo, Constrained consensus and optimization in multi-agent networks. IEEE Trans. Autom. Control 55(4), 922–938 (2010)
20. R. Olfati-Saber, Flocking for multi-agent dynamic systems: algorithms and theory. IEEE Trans. Autom. Control 51(3), 401–420 (2006)
21. R. Olfati-Saber, R. Murray, Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 49(9), 1520–1533 (2004)
22. Z. Qu, Cooperative Control of Dynamical Systems: Applications to Autonomous Vehicles (Springer, Berlin, 2009)
23. M.G. Rabbat, R.D. Nowak, Quantized incremental algorithms for distributed optimization. IEEE J. Sel. Areas Commun. 23(4), 798–808 (2005)
24. S. Rahili, W. Ren, Distributed continuous-time convex optimization with time-varying cost functions. IEEE Trans. Autom. Control 62(4), 1590–1605 (2017)
25. S. Ram, A. Nedic, V. Veeravalli, Incremental stochastic subgradient algorithms for convex optimization. SIAM J. Optim. 20(2), 691–717 (2009)
26. G. Shi, K. Johansson, Y. Hong, Reaching an optimal consensus: dynamical systems that compute intersections of convex sets. IEEE Trans. Autom. Control 58(3), 610–622 (2013)
27. A. Simonetto, L. Kester, G. Leus, Distributed time-varying stochastic optimization and utility-based communication. arXiv:1408.5294 (2014)
28. J. Slotine, W. Li, Applied Nonlinear Control (Prentice Hall, 1991)
29. H. Su, X. Wang, Z. Lin, Flocking of multi-agents with a virtual leader. IEEE Trans. Autom. Control 54(2), 293–307 (2009)
30. W. Su, Traffic Engineering and Time-Varying Convex Optimization. Ph.D. dissertation, The Pennsylvania State University, 2009
31. S.Y. Tu, A. Sayed, Mobile adaptive networks. IEEE J. Sel. Top. Signal Process. 5(4), 649–664 (2011)
32. J. Wang, N. Elia, A control perspective for centralized and distributed convex optimization, in Proceedings of the European Control Conference (2011), pp. 3800–3805
33. L. Wang, F. Xiao, Finite-time consensus problems for networks of dynamic agents. IEEE Trans. Autom. Control 55(4), 950–955 (2010)
34. X. Wang, Y. Hong, Distributed finite-time χ-consensus algorithms for multi-agent systems with variable coupling topology. J. Syst. Sci. Complex. 23(2), 209–218 (2010)
35. D. Yuan, S. Xu, H. Zhao, Distributed primal-dual subgradient method for multiagent optimization via consensus algorithms. IEEE Trans. Syst. Man Cybern. Part B Cybern. 41(6), 1715–1724 (2011)
36. Y. Zhao, Y. Liu, Z. Duan, G. Wen, Distributed average computation for multiple time-varying signals with output measurements. Int. J. Robust Nonlinear Control 26(13), 2899–2915 (2016)
37. Y. Zhao, Y. Liu, Z. Li, Z. Duan, Distributed average tracking for multiple signals generated by linear dynamical systems: an edge-based framework. Automatica 75(1), 158–166 (2017)
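As a supplementary numerical check of the optimal trajectories quoted in the simulation section, the closed-form minimizers of the team costs built from (10.68) and (10.69) can be evaluated directly. The sketch below is illustrative only: it assumes n = 6 agents (the agent count is not restated in this excerpt; 6 is the value for which the closed-form radius for (10.69) evaluates to the quoted 1.64) and uses the reconstructed forms of the local cost functions.

```python
import math

# Assumption: n = 6 agents (not restated in this excerpt; chosen to be
# consistent with the radius 1.64 quoted for the cost functions (10.69)).
n = 6

# Example 1, local costs (10.68): f_i = (r_x - i sin t)^2 + (r_y - i cos t)^2.
# The team cost sum_i f_i is minimized at the centroid of the targets, so the
# optimal point traces a circle of radius (1/n) * sum_i i = (n + 1) / 2.
radius_1068 = sum(range(1, n + 1)) / n
print(radius_1068)  # 3.5 for n = 6

# Example 2, local costs (10.69): f_i = (r_x/i - sin t)^2 + (r_y/i - cos t)^2,
# with nonidentical Hessians H_i = (2 / i^2) I_2.  Setting the gradient of the
# team cost to zero gives r_x = sin(t) * S1 / S2 with S1 = sum(1/i) and
# S2 = sum(1/i^2), i.e. a circle of radius S1 / S2.
S1 = sum(1 / i for i in range(1, n + 1))
S2 = sum(1 / i**2 for i in range(1, n + 1))
radius_1069 = S1 / S2
print(round(radius_1069, 2))  # 1.64, matching the value quoted in the text

# Sanity check: brute-force grid search of the team cost (10.69) at t = 0;
# the minimizer should sit near (0, radius_1069) on a 0.01 grid.
t = 0.0
def team_cost(rx, ry):
    return sum((rx / i - math.sin(t))**2 + (ry / i - math.cos(t))**2
               for i in range(1, n + 1))

best = min(((rx / 100, ry / 100)
            for rx in range(-300, 301) for ry in range(-300, 301)),
           key=lambda p: team_cost(*p))
print(best)  # (0.0, 1.64)
```

The weighted-average structure of the minimizer is what makes the nonidentical-Hessian case interesting: each agent's target is weighted by its own Hessian, so the optimal trajectory is no longer the plain centroid of the individual targets.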