Optimization method in designing a finite-time average consensus protocol

ISSN 1859-1531 - THE UNIVERSITY OF DANANG, JOURNAL OF SCIENCE AND TECHNOLOGY, NO. 12(133).2018

OPTIMIZATION METHOD IN DESIGNING A FINITE-TIME AVERAGE CONSENSUS PROTOCOL

Tran Thi Minh Dung
The University of Danang - University of Science and Technology; ttmdung@dut.udn.vn

Abstract - In this paper, optimization methods for designing a finite-time average consensus protocol for multi-agent systems or wireless sensor networks are taken into account. The purpose of an average consensus protocol is that all agents reach a final common value, which is the average of the initial values. Running a consensus protocol requires two main steps: a self-configuration step and an execution step. In the self-configuration step, the consensus protocol is designed and uploaded to each agent of the system so that the final average value is achieved in minimal execution time. The proposed optimization method is based on learning and training methods applied to neural networks.

Key words - Consensus protocols; finite-time average consensus protocols; back-propagation method; self-configuration; matrix factorization; learning method

1. Introduction

Recent years have witnessed an increasing number of studies devoted to multi-agent systems and wireless sensor networks. Several issues are formulated as a consensus problem: design a network protocol, based on the local information obtained by each agent, so that all agents finally reach an agreement on certain quantities of interest. The network protocol is an interaction rule that ensures the whole group can achieve consensus on the shared data in a distributed manner, i.e. without the coordination of a central authority. Consensus problems of multi-agent systems [1] have received tremendous attention from various research communities due to their broad applications in many areas, including multi-sensor data fusion, multi-vehicle formation control, distributed computation, and many more.

In the study of consensus problems, the speed of convergence is obviously important for evaluating a proposed protocol. Up to now, most existing protocols cannot achieve state consensus in finite time; consensus is only reached asymptotically, meaning the convergence rate is at best exponential with infinite settling time. On the other hand, consensus can be used as a step of a more sophisticated distributed algorithm, as is the case for the distributed Kalman filter [2], distributed standard or alternating least-squares algorithms [3-4], and distributed principal component analysis [5], to cite a few. This asymptotic convergence is not suitable for these kinds of distributed methods. For example, in a wireless sensor network, a reduction in the total number of iterations until convergence can lead to a reduction in the total energy consumption of the network, which is essential to guarantee a longer lifetime for the entire network. That is why several studies have recently been carried out on accelerating consensus through polynomial filtering [6], optimization methods [7], or high-order methods [8]. Another solution is to resort to finite-time protocols, which are obviously meaningful and more desirable.

In order to run an average consensus protocol, two main steps are required: the configuration step (design step) and the execution step. During the configuration step, the consensus protocol is to be uploaded to each agent. Such a task can be achieved through a self-configuration algorithm instead of resorting to a network manager.
Self-configuration can include graph discovery and distributed decisions on some parameters. For instance, if the protocol is the maximum-degree-weights one, each agent first computes the number of its neighbors before running a max-consensus algorithm to compute the maximum degree of the underlying graph. One commonly used protocol is the constant-edge-weights, or graph-Laplacian-based, average consensus protocol, where a common step-size is used by all the agents. Even though there are some simple bounds that give choices for the step-size without requiring exact knowledge of the Laplacian spectrum, agents have to agree on an adequate step-size. To the best of our knowledge, there is no paper dealing with self-configuration protocols for the constant-edge-weights average consensus protocol. That is why some recent works have been devoted to accelerating the speed of convergence of consensus protocols by solving an optimization problem in a centralized way, the goal being to reduce the spectral gap between the matrix of weights and the average consensus matrix J_N = (1/N) 1 1^T, where N stands for the number of agents of the network [6-7].

In [9], the finite-time average consensus was formulated as a matrix factorization problem. The resulting solution yields a link scheduling on the complete graph to achieve finite-time consensus; such a scheduling is to be controlled by a central node. Following the idea of matrix factorization, Laplacian-based jointly diagonalizable matrices were suggested in [10-11]. The proposed solutions make use of the graph Laplacian spectrum. However, the implementation of these protocols during the configuration step was not really discussed. In fact, for self-configuration of Laplacian-based finite-time average consensus protocols, distributed estimation of Laplacian eigenvalues is required; such a task can be carried out by means of distributed or decentralized algorithms such as those proposed in [12-13].

In this study, the author assumes that the Laplacian matrix is not known, since most proposed protocols are based on knowledge of the Laplacian matrix. The goal is to design the weight matrices so that the system reaches average consensus in a finite number of steps, in the fastest possible time, possibly as fast as the diameter of the underlying graph. In particular, the author proposes a learning method in which the consensus weight matrices are obtained using a distributed optimization method, namely a gradient back-propagation method.

The remainder of this paper is organized as follows: Section 2 presents the preliminaries of the theory and formulates the problem statement. Then, gradient back-propagation algorithms are derived in Section 3 for solving a matrix factorization problem. The performance of the proposed algorithm is evaluated in Section 4 by means of simulation results before concluding the paper.

2. Problem Statement

2.1. Graph Theory

Throughout this paper, we consider a connected undirected graph G = (V, E), where V = {1, 2, ..., N} is the set of vertices of the graph G, and E ⊂ V × V is the set of edges. Imagine the vertices V as nodes in a network connected according to E. The neighbors of node i are denoted by N_i = {j ∈ V : (i, j) ∈ E}.
• Given two vertices i and j, the distance dist(i, j) is the length of the shortest path between i and j.
• The eccentricity ecc(i) of a vertex i is the greatest distance between i and any other vertex j ∈ V.
• The diameter d(G) of a graph is the maximum eccentricity of any vertex in the graph.
• Denote by A the adjacency matrix of the graph, its entries a_ij being equal to 1 if (i, j) ∈ E, and 0 elsewhere.
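These notions can be made concrete with a short sketch (our own helper code, assuming the graph is given as an edge list; Python with numpy is used here and in the examples that follow):

```python
import numpy as np
from collections import deque

def adjacency(n_nodes, edges):
    """Adjacency matrix A: a_ij = 1 if (i, j) is an edge, 0 elsewhere."""
    A = np.zeros((n_nodes, n_nodes))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0  # undirected graph
    return A

def eccentricity(A, i):
    """Greatest BFS distance from vertex i to any other vertex."""
    n = A.shape[0]
    dist = [-1] * n
    dist[i] = 0
    queue = deque([i])
    while queue:
        u = queue.popleft()
        for v in range(n):
            if A[u, v] and dist[v] < 0:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist)

def diameter(A):
    """Maximum eccentricity over all vertices."""
    return max(eccentricity(A, i) for i in range(A.shape[0]))

# The 5-node circle graph used later in the paper has d(G) = 2
A = adjacency(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
assert diameter(A) == 2
```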
2.2. Finite-time Average Consensus

For each agent i ∈ V, let x_i(t) be its state at time-step t. At each time-step, each node updates its state as

  x_i(t) = w_{ii}^t x_i(t-1) + \sum_{j \in N_i} w_{ij}^t x_j(t-1).    (1)

Defining the state of the network as x(t) = [x_1(t), x_2(t), ..., x_N(t)]^T, where N is the number of nodes in the network, the dynamics of the network are given as follows:

  x(t) = W_t x(t-1), t = 1, 2, ...,    (2)

where W_t, with entries w_{ij}^t, is consistent with the graph topology.

Given any set of initial values x(0), we are interested in a finite sequence of weight matrices W_t that allows all agents to reach average consensus in a finite number of steps (or finite time) D, i.e. x(D) = (1/N) 1 1^T x(0) = J_N x(0). Ultimately, we desire a finite sequence of matrices {W_1, W_2, ..., W_D} such that

  x(D) = \prod_{t=D}^{1} W_t x(0) = J_N x(0) for all x(0) ∈ R^N,    (3)

meaning that

  \prod_{t=D}^{1} W_t = J_N.    (4)

Within this framework, three questions arise: for a given graph G, does there exist a finite sequence of matrices W_t so that (4) is achieved? If it exists, what is the minimal value of D? How can we carry out such a factorization?

In [9], it has been pointed out that no solution exists if the factor matrices W_t are all equal, except if the graph is complete. For trees and graphs with a minimum-diameter spanning tree, closed-form solutions based on the graph Laplacian have been introduced [10-11]; there, the number of factor matrices is equal to the number of distinct nonzero Laplacian eigenvalues. Intuitively, since the diameter of the graph characterizes the time necessary for a given piece of information to reach all the agents in the network, the number of factor matrices cannot be lower than the diameter d(G). According to the results in [10], the number D is bounded by d(G) ≤ D ≤ N - 1.
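Before introducing the learning approach of Section 3, the closed-form Laplacian factorization of [10-11] provides a convenient numerical sanity check of (4). The sketch below is our own illustration, not code from the paper: it builds the factors W_t = I - (1/λ_t) L from the two distinct nonzero Laplacian eigenvalues of the 5-node circle graph, so that D = d(G) = 2.

```python
import numpy as np
from functools import reduce

# 5-node circle graph: adjacency A and Laplacian L = diag(A 1) - A
N = 5
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Distinct nonzero Laplacian eigenvalues (each has multiplicity 2 here)
lams = sorted(set(np.round(np.linalg.eigvalsh(L), 8)))[1:]  # drop 0

# One factor matrix per distinct nonzero eigenvalue: W_t = I - (1/lambda_t) L
Ws = [np.eye(N) - L / lam for lam in lams]

# Check (4): W_D ... W_1 = J_N = (1/N) 11^T
prod = reduce(np.matmul, reversed(Ws))
assert np.allclose(prod, np.ones((N, N)) / N)

# Iteration (2) therefore averages any x(0) in D = 2 steps
x0 = np.random.randn(N)
x = x0.copy()
for W in Ws:
    x = W @ x
assert np.allclose(x, np.full(N, x0.mean()))
```

These Laplacian-based factors require knowledge of the Laplacian spectrum; the learning method of the next section removes that requirement.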
3. A gradient back-propagation algorithm for solving a matrix factorization problem

Let {x_i(0), y_i}, i = 1, ..., N, be the input-output signals of a given system, represented by a graph G = (V, E). As we know, after running the consensus protocol, the output is y_i = y = (1/N) \sum_{i=1}^{N} x_i(0). Our aim is to estimate the factor matrices W_t, t = 1, 2, ..., D, by minimizing the quadratic error

  E(W_1, W_2, ..., W_D) = \sum_{i=1}^{N} (x_i(D) - y)^2,    (5)

with x_i(t) = \sum_{j \in N_i \cup \{i\}} w_{ij}^t x_j(t-1), and w_{ij}^t being the entries of the matrices W_t. This problem can be solved by many methods published in the literature, such as the penalty method, the projection method, etc. However, the solution obtained this way works only for the given initial vector x(0). That is why we reformulate the problem so that the designed sequence of consensus matrices W_t works for any initial vector x(0), and why we apply the back-propagation method to solve it. In particular, a training set of P training patterns is employed to create optimal weight matrices that satisfy not only one input vector but also random input vectors. The realization of consensus in a finite number of steps can be compared to a feedforward neural network with D-1 hidden layers, as depicted in Figure 1.

Figure 1. Linear iteration scheme in space and time

Figure 1 illustrates the linear iteration scheme (1) in space and time; it can be viewed as a multilayer neural network. The selection of the weights can then be analyzed through the scope of training. Using a set of P learning sequences, the mechanism is divided into two main steps, namely a forward step and a backward step. The idea of this algorithm is to compute the partial derivatives of the error between the actual output and the desired output, then propagate them back to each layer of the network. The process is repeated until the stopping criterion is reached, and then we obtain the optimal weights.

Now, with P learning sequences, problem (5) is rewritten as follows. Let {x_{i,p}(0), y_{i,p}}, i = 1, ..., N, p = 1, ..., P, be the input-output pairs defining the learning sequences, with y_{i,p} = y_p = (1/N) \sum_{i=1}^{N} x_{i,p}(0). The minimization problem now becomes

  E(W_1, W_2, ..., W_D) = \sum_{i=1}^{N} \sum_{p=1}^{P} (x_{i,p}(D) - y_p)^2,    (6)

with x_{i,p}(t) = \sum_{j \in N_i \cup \{i\}} w_{ij}^t x_{j,p}(t-1). We rewrite the cost function (6) as

  E(W_1, W_2, ..., W_D) = \| \prod_{t=D}^{1} W_t X(0) - Y \|_F^2,    (7)

where \| . \|_F stands for the Frobenius norm, Y = J_N X(0), and Y and X(0) are N × P matrices with y_{i,p} and x_{i,p}(0) as entries, respectively. One thing to note here is that each matrix W_t must be consistent with the graph topology. We assume that X(0) X(0)^T = I_N, which means the input vectors are orthonormal. Hence, (7) reduces to E(W_1, W_2, ..., W_D) = \| \prod_{t=D}^{1} W_t - J_N \|_F^2. It is equivalent to solve the factorization problem

  {W_t*}, t = 1, ..., D = arg min_{W_t} \sum_{p=1}^{P} tr(ε_p(W) ε_p^T(W)) = arg min_{W_t} \sum_{p=1}^{P} E_p(W),    (8)

with ε_p(W) = \prod_{t=D}^{1} W_t x_p(0) - y_p, where tr(.) denotes the trace operator and y_p = J_N x_p(0). The solution of this optimization problem can then be obtained iteratively by means of a stochastic gradient descent method:

  W_t := W_t - α ∂E_p(W)/∂W_t.

For this purpose, we first state the following technical lemma.

Lemma 1: The derivatives of the cost function E_p(W) defined as in (8) can be computed as follows:

  ∂E_p(W)/∂W_t = δ_{t,p} x_p^T(t-1), t = 1, 2, ..., D,    (9)
  ∂E_p(W)/∂W_D = δ_{D,p} x_p^T(D-1),    (10)

where δ_{D,p} = x_p(D) - x̄_p is the difference between the actual output and the desired output, with x̄_p = y_p 1 = J_N x_p(0), and δ_{t-1,p} = W_t^T δ_{t,p}, t = D, ..., 2.

Proof: The consensus network being a linear system, we know that x_p(t) = W_t x_p(t-1); therefore we can explicitly write the output according to the weight matrix of interest, i.e. x_p(D) = W_D x_p(D-1), and x_p(D) = \prod_{j=D}^{t+1} W_j W_t x_p(t-1), t = 1, ..., D-1. Equivalently, by defining Z_{t+1} = \prod_{j=D}^{t+1} W_j, we get x_p(D) = Z_{t+1} W_t x_p(t-1). The cost function can be rewritten as

  E_p(W) = (1/2) tr((W_D x_p(D-1) - x̄_p)(W_D x_p(D-1) - x̄_p)^T),

from which we can easily deduce (10) with δ_{D,p} = x_p(D) - x̄_p. Now we can express the cost function according to any matrix W_t, t = 1, ..., D-1, as

  E_p(W) = (1/2) tr((Z_{t+1} W_t x_p(t-1) - x̄_p)(Z_{t+1} W_t x_p(t-1) - x̄_p)^T).

Expanding the above expression and taking into account the linearity of the trace operator yields

  E_p(W) = (1/2) [tr(Z_{t+1} W_t x_p(t-1) x_p^T(t-1) W_t^T Z_{t+1}^T) - tr(Z_{t+1} W_t x_p(t-1) x̄_p^T) - tr(x̄_p x_p^T(t-1) W_t^T Z_{t+1}^T) + tr(x̄_p x̄_p^T)].

Computing the derivative, we get

  ∂E_p(W)/∂W_t = (1/2) [2 Z_{t+1}^T Z_{t+1} W_t x_p(t-1) x_p^T(t-1) - 2 Z_{t+1}^T x̄_p x_p^T(t-1)]
               = Z_{t+1}^T (Z_{t+1} W_t x_p(t-1) - x̄_p) x_p^T(t-1)
               = Z_{t+1}^T δ_{D,p} x_p^T(t-1)
               = W_{t+1}^T W_{t+2}^T ... W_D^T δ_{D,p} x_p^T(t-1)
               = W_{t+1}^T δ_{t+1,p} x_p^T(t-1) = δ_{t,p} x_p^T(t-1). ∎
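In code, Lemma 1 amounts to a forward pass that stores the whole trajectory x_p(0), ..., x_p(D) and a backward pass that propagates the error δ. The sketch below is our own illustration; the `mask` projection, which zeroes entries outside the graph topology, anticipates step g of Algorithm 1 given next.

```python
import numpy as np

def backprop_gradients(Ws, x0, mask):
    """One forward/backward pass of Lemma 1 for a single pattern.

    Ws   : list of D weight matrices [W_1, ..., W_D]
    x0   : initial state vector x_p(0)
    mask : N x N 0/1 matrix, 1 where (i, j) is an edge or i == j
    Returns the gradients dE_p/dW_t, t = 1, ..., D.
    """
    N = x0.shape[0]
    # Forward step: x(t) = W_t x(t-1), keeping the whole trajectory
    xs = [x0]
    for W in Ws:
        xs.append(W @ xs[-1])
    # Desired output: the average of the initial values at every node
    x_bar = np.full(N, x0.mean())
    # Backward step: delta_D = x(D) - x_bar, then delta_{t-1} = W_t^T delta_t
    delta = xs[-1] - x_bar
    grads = [None] * len(Ws)
    for t in reversed(range(len(Ws))):
        grads[t] = np.outer(delta, xs[t]) * mask  # (9)-(10), off-graph entries zeroed
        delta = Ws[t].T @ delta
    return grads
```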
The update scheme of the optimization algorithm is then as follows:

  W_t[k+1] = W_t[k] - α ∂E_{p(k)}(W)/∂W_t = W_t[k] - α δ_{t,p(k)} x_p^T(t-1),    (11)

where p(k) ∈ {1, 2, ..., P} and k stands for the k-th iteration of the optimization process. We then keep only the updated weights corresponding to the graph topology. The algorithm acts by alternating the following steps: the learning sequence is first propagated forward; then the error between the targeted output and x(D) is computed and propagated backward. This means that the algorithm trains on the first pattern and updates all the weights in the network once, then applies the second pattern and does the same, and so on. After all the patterns have been processed, it returns to the first pattern and repeats the process until the stopping criterion is satisfied.

The convergence of this gradient back-propagation algorithm has been well studied in the literature [14-15]. Convergence towards a local minimum is guaranteed if the step-size α is appropriately chosen inside the interval 0 < α < 1. If the step-size is too large, the error E tends to oscillate, never reaching a minimum; in contrast, too small an α prevents the optimization from making reasonable progress.

Algorithm 1: Stochastic gradient back-propagation
1. Initialization:
   • Number of steps D (d(G) ≤ D ≤ N - 1); number of patterns P (P ≥ N);
   • Learning sequences {x_{i,p}(0), y_{i,p}}, i = 1, ..., N, p = 1, ..., P, with y_p = (1/N) \sum_{i=1}^{N} x_{i,p}(0);
   • Random initial weight matrices W_t[0], t = 1, ..., D, and W_t[-1] = 0;
   • Learning rate 0 < α[0] < 1;
   • Threshold θ;
   • Set k = 0.
2. Set p = 0;
   a. Set p = p + 1;
   b. Select the corresponding input-output sequence: x_i(0) = x_{i,p}(0), x̄ = y_p;
   c. Learning sequence propagation: x_i(t) = \sum_{j \in N_i \cup \{i\}} w_{ij}^t[k] x_j(t-1);
   d. Error computation: δ_{i,D} = x_i(D) - x̄, e_{i,p} = δ_{i,D};
   e. Error propagation: δ_{i,t-1} = \sum_{j \in N_i \cup \{i\}} w_{ji}^t[k] δ_{j,t};
   f. Matrix updates: for t = 1, ..., D, i = 1, ..., N, and j ∈ N_i ∪ {i}:
      w_{ij}^t[k+1] = w_{ij}^t[k] - α[k] δ_{i,t} x_j(t-1);
   g. Force the updated weight matrices to be consistent with the given graph;
   h. k = k + 1;
   i. If p = P, compute the mean square error E = (1/(NP)) \sum_{i=1}^{N} \sum_{p=1}^{P} e_{i,p}^2; else return to 2a;
   j. If E < θ, stop the learning process; else return to 2. ∎

This algorithm can also be used with only one pattern (P = 1); in that case, the obtained solution works only for the given input. Since this back-propagation algorithm relies on the gradient descent method, its convergence rate is slow. In order to overcome this issue, the author employs momentum in the update scheme as follows:

  w_{ij}^t[k+1] = w_{ij}^t[k] - α[k] δ_{i,t} x_j(t-1) + β (w_{ij}^t[k] - w_{ij}^t[k-1]),

which means that the cost function now becomes

  {W_t*}, t = 1, ..., D = arg min_{W_t} \sum_{p=1}^{P} E_p(W) + \sum_{t=1}^{D} β \|W_t[k] - W_t[k-1]\|^2,

where β is the momentum rate.
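A compact implementation of Algorithm 1 with the momentum term might look as follows. This is a sketch under assumed hyperparameters α, β, θ and random Gaussian learning sequences; none of these values are prescribed by the paper, and `mask` is the 0/1 matrix A + I from the sketch in Section 2.1.

```python
import numpy as np

def train(mask, D, P, alpha=0.1, beta=0.5, theta=1e-10, max_sweeps=20000, seed=0):
    """Stochastic gradient back-propagation (Algorithm 1) with momentum.

    mask : N x N 0/1 matrix of allowed entries (edges plus self-loops)
    D    : number of consensus steps, d(G) <= D <= N - 1
    P    : number of training patterns, P >= N
    """
    rng = np.random.default_rng(seed)
    N = mask.shape[0]
    X0 = rng.standard_normal((N, P))            # learning sequences x_p(0)
    Y = np.tile(X0.mean(axis=0), (N, 1))        # targets y_p = (1/N) sum_i x_ip(0)
    Ws = [rng.standard_normal((N, N)) * mask for _ in range(D)]
    prev = [np.zeros((N, N)) for _ in range(D)]  # W_t[k-1] for the momentum term
    for sweep in range(max_sweeps):
        for p in range(P):
            # Forward propagation of pattern p (step c)
            xs = [X0[:, p]]
            for W in Ws:
                xs.append(W @ xs[-1])
            # Backward propagation of the error (steps d-g)
            delta = xs[-1] - Y[:, p]
            for t in reversed(range(D)):
                grad = np.outer(delta, xs[t]) * mask   # keep the graph topology
                delta = Ws[t].T @ delta                # uses W_t before its update
                step = -alpha * grad + beta * (Ws[t] - prev[t])
                prev[t] = Ws[t]
                Ws[t] = Ws[t] + step
        # Stopping criterion (steps i-j)
        mse = np.mean((np.linalg.multi_dot(Ws[::-1] + [X0]) - Y) ** 2)
        if mse < theta:
            break
    return Ws
```

With the 5-node circle-graph mask and D = 2, P = 5, this loop converges to factor matrices of the kind reported in the next section.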
4. Simulation Results

In this section, the performance of the designed protocol is evaluated by means of the mean square error:

  MSE = (1/(NP)) \sum_{i=1}^{N} \sum_{p=1}^{P} (x_{i,p}(D) - y_p)^2.

Consider the 5-node undirected circle graph described in Figure 2.

Figure 2. A 5-node circle graph

As we know, the diameter of this graph is d(G) = 2; the number of steps D is therefore bounded by 2 ≤ D ≤ 4. Since our purpose for the finite-time consensus protocol is to make the execution time as small as possible, we select the value of D increasingly, starting from D = 2. After running Algorithm 1 with D = 2, we achieve the set of matrices {W_1, W_2}:

W_1 =
( -0.3466   0.4444   0         0         0.3212 )
(  0.5709  -0.4109   0.4775    0         0      )
(  0         0.4746  -0.3150    0.4263    0      )
(  0         0         0.4317   -0.1803    0.3307 )
(  0.5816   0         0         0.4632   -0.1597 )

W_2 =
(  0.8372   0.4188   0         0         0.4318 )
(  0.6225   0.7283   0.4691    0         0      )
(  0         0.3503   0.7248    0.6047    0      )
(  0         0         0.4213    0.7707    0.3439 )
(  0.4499   0         0         0.4632    0.6121 )

Both matrices are consistent with the circle graph: row i has nonzero entries only for the node itself and its two neighbors. We can easily check that W_2 W_1 = (1/5) 1 1^T, each entry of the product being equal to 0.2 up to the 4-digit rounding of the entries. Given an arbitrary input vector x(0), we achieve average consensus in 2 steps, as shown in Figure 3.

Figure 3. Average consensus in 2 steps

Regarding the selection of the number of patterns P (P ≥ N), the larger P is, the faster Algorithm 1 converges; this can be seen clearly in Figure 4.

Figure 4. Convergence rates for different numbers of patterns P

Finally, we take the momentum term into account and observe the improvement in the convergence speed of the proposed algorithm.

Figure 5. Convergence comparison on the 5-node circle graph: with momentum and without momentum

As Figure 5 clearly shows, with momentum the speed of convergence is much faster than in the original case.
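As a closing check on this section's factorization, the reported matrices are easy to verify numerically (a minimal sketch, using the matrix entries above):

```python
import numpy as np

W1 = np.array([
    [-0.3466,  0.4444,  0.0,     0.0,     0.3212],
    [ 0.5709, -0.4109,  0.4775,  0.0,     0.0   ],
    [ 0.0,     0.4746, -0.3150,  0.4263,  0.0   ],
    [ 0.0,     0.0,     0.4317, -0.1803,  0.3307],
    [ 0.5816,  0.0,     0.0,     0.4632, -0.1597]])
W2 = np.array([
    [ 0.8372,  0.4188,  0.0,     0.0,     0.4318],
    [ 0.6225,  0.7283,  0.4691,  0.0,     0.0   ],
    [ 0.0,     0.3503,  0.7248,  0.6047,  0.0   ],
    [ 0.0,     0.0,     0.4213,  0.7707,  0.3439],
    [ 0.4499,  0.0,     0.0,     0.4632,  0.6121]])

# W2 W1 should equal J_5 = (1/5) 11^T up to the 4-digit rounding of the entries
print(np.round(W2 @ W1, 3))  # ~0.2 everywhere

# Average consensus in 2 steps on an arbitrary input
x0 = np.random.randn(5)
x2 = W2 @ (W1 @ x0)
print(np.allclose(x2, np.full(5, x0.mean()), atol=1e-3))  # True
```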
5. Conclusion

In this paper, the author has proposed an optimization method based on the back-propagation method to solve a consensus problem in a finite number of steps. By using a learning sequence, we have shown how to solve a matrix factorization problem in a fully distributed way. The factorization gives rise to factor matrices that are not necessarily symmetric or stochastic. Given the diameter d(G) of the graph, we can find the optimal number of steps necessary for reaching average consensus by varying the value of D in d(G) ≤ D ≤ N - 1. Since the proposed algorithm is based on the gradient descent method, it suffers from a slow convergence speed. Therefore, an improvement has been suggested by adding a momentum term, acting as a penalty on successive weight changes in the cost function, which clearly enhances the convergence rate.

REFERENCES

[1] Tran Thi Minh Dung, "A survey on consensus protocols in multi-agent systems", Journal of Science and Technology - The University of Danang, No. 6(103), 2016, pp. 35-39.
[2] R. Olfati-Saber, "Distributed Kalman filtering for sensor networks", in Proc. of the 46th IEEE Conf. on Decision and Control, New Orleans, LA, USA, December 12-14, 2007, pp. 5492-5498.
[3] S. Bolognani, S. Del Favero, L. Schenato, and D. Varagnolo, "Distributed sensor calibration and least-squares parameter identification in WSNs using consensus algorithms", in Proc. of the 46th Annual Allerton Conference, Allerton House, UIUC, Illinois, USA, 2008, pp. 1191-1198.
[4] A. Y. Kibangou and A. L. F. de Almeida, "Distributed PARAFAC based DS-CDMA blind receiver for wireless sensor networks", in Proc. of the IEEE Workshop SPAWC, Marrakech, Morocco, June 20-23, 2010.
[5] S. V. Macua, P. Belanovic, and S. Zazo, "Consensus-based distributed principal component analysis in wireless sensor networks", in Proc. of the IEEE Workshop SPAWC, Marrakech, Morocco, June 20-23, 2010.
[6] E. Kokiopoulou and P. Frossard, "Polynomial filtering for fast convergence in distributed consensus", IEEE Trans. on Signal Processing, Vol. 57, 2009, pp. 342-354.
[7] L. Xiao and S. Boyd, "Fast linear iterations for distributed averaging", Systems & Control Letters, Vol. 53, 2004, pp. 65-78.
[8] T. Aysal, B. Oreshkin, and M. Coates, "Accelerated distributed average consensus via localized node state prediction", IEEE Trans. on Signal Processing, Vol. 57, No. 4, April 2009, pp. 1563-1576.
[9] C. K. Ko, On matrix factorization and scheduling for finite-time average consensus, Ph.D. thesis, California Institute of Technology, Pasadena, California, USA, 2010.
[10] A. Y. Kibangou, "Graph Laplacian based matrix design for finite-time distributed average consensus", in Proc. of the American Control Conference (ACC), Montreal, Canada, 2012.
[11] A. Kibangou, "Finite-time average consensus based protocol for distributed estimation over AWGN channels", in Proc. of the 50th IEEE Conference on Decision and Control (CDC), Orlando, FL, USA, 2011.
[12] T. M. D. Tran and A. Kibangou, "Consensus-based distributed estimation of Laplacian eigenvalues of undirected graphs", in Proc. of the European Control Conference, Zurich, Switzerland, 2013.
[13] T. Sahai, A. Speranzon, and A. Banaszuk, "Hearing the clusters of a graph: A distributed algorithm", Automatica, 48(1), 2012, pp. 15-24.
[14] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, John Wiley & Sons, third edition, 2012.
[15] O. L. Mangasarian and M. V. Solodov, "Serial and parallel backpropagation convergence via nonmonotone perturbed minimization", Optimization Methods and Software, Vol. 4, 1994, pp. 103-116.

(The Board of Editors received the paper on 10/5/2018; its review was completed on 23/8/2018.)
