
New exponential stabilization criteria for non-autonomous delayed neural networks via Riccati equations






Mai Viet Thuan(1), Le Van Hien(2,*) and Vu Ngoc Phat(3)

(1) Department of Mathematics, Thai Nguyen University, Thai Nguyen, Vietnam
(2) Hanoi National University of Education, 136 Xuan Thuy Road, Hanoi, Vietnam
(3) Institute of Mathematics, VAST, 18 Hoang Quoc Viet Road, Hanoi, Vietnam
(*) Corresponding author: Hienlv@hnue.edu.vn

Abstract. This paper deals with the problem of global exponential stabilization for a class of non-autonomous cellular neural networks with time-varying delays. The system under consideration is subject to time-varying coefficients and time-varying delays. Two cases of time-varying delays are considered: (i) the delays are differentiable and the delay derivative has a known upper bound; (ii) the delays are bounded but not necessarily differentiable. Based on the Lyapunov-Krasovskii functional method combined with the Razumikhin technique, we establish new delay-dependent conditions for designing a memoryless state feedback controller that exponentially stabilizes the system. The derived conditions are formulated in terms of the solution of Riccati differential equations, which allows simultaneous computation of the two bounds that characterize the exponential stability rate of the solution. Numerical examples are given to illustrate the effectiveness of our results.

MSC: 34D20, 37C75, 93D20.

Key words: neural networks, stability, stabilization, non-differentiable delays, Lyapunov function, matrix Riccati equations, linear matrix inequalities.

1. Introduction

During the past decades there has been an increasing interest in delayed cellular neural network (CNN) models due to their successful applications in many fields such as signal processing, pattern recognition and association (see, e.g., [2] and the references therein). Considerable effort from the mathematics and systems theory communities has been devoted to the stability analysis and control of such systems [2-4, 9, 12-13]. Generally speaking, in applications it is required that the equilibrium points of the designed network be stable. In both biological and artificial neural systems, time delays due to integration and communication are ubiquitous and often become a source of instability. The time delays in electronic neural networks are usually time-varying and sometimes vary violently with time, due to the finite switching speed of amplifiers and faults in the electrical circuitry. Therefore, stability analysis of delayed neural networks is a very important issue, and many stability criteria have been developed in the literature; see [4, 9, 12] and the references cited therein.

In recent years, the stability analysis and control of autonomous delayed cellular neural networks (DCNNs) have been widely investigated, and many important results on global stability and stabilization, H-infinity control, etc., have been established [7, 9, 14, 15]. However, in many realistic systems the parameters change with time, which leads to non-autonomous systems; accordingly, more attention has recently been paid to the stability and stabilization of non-autonomous systems [1, 6, 11, 17]. In particular, cellular neural networks with time-varying coefficients and delays were studied in [7], where, based on the Lyapunov functional method and matrix inequality techniques, the authors established criteria for boundedness, global asymptotic stability and exponential stability. However, these conditions cannot be extended to systems with fast time-varying delays because of the restrictive assumption that the delay functions are boundedly continuously differentiable with derivatives whose upper bounds are strictly less than one.

In this paper we consider the problem of exponential stabilization for a class of non-autonomous cellular neural networks with time-varying delays. The system under consideration is subject to time-varying coefficients with various activation functions and two cases of time-varying delays: (i) the state delay is differentiable and its derivative has a known upper bound, and (ii) the delays are bounded but not necessarily differentiable. In the latter case the restriction on the derivative of the delay functions is removed, which means that fast time-varying delays are allowed. Based on the Lyapunov-Krasovskii functional method combined with the Razumikhin technique, we establish new delay-dependent conditions for the design of a memoryless state feedback controller that exponentially stabilizes the system. The derived conditions are formulated in terms of the solution of suitable Riccati differential equations (RDEs), which allows simultaneous computation of the two bounds that characterize the exponential stability rate of the solution. Numerical examples are given to illustrate the effectiveness of our results.

The rest of the paper is organized as follows. Section 2 presents definitions and some technical propositions needed for the proof of the main results. In Section 3, new delay-dependent conditions in terms of Riccati differential equations are derived for exponential stabilization of the system. Illustrative examples are given in Section 4. The paper ends with a conclusion and the cited references.
Notation. The following notation is used throughout the paper. $\mathbb R^+$ denotes the set of all real non-negative numbers; $\mathbb R^n$ denotes the $n$-dimensional Euclidean space with scalar product $\langle x,y\rangle=\sum_{i=1}^{n}x_iy_i$ and vector norm $\|x\|=\big(\sum_{i=1}^{n}x_i^2\big)^{1/2}$; $\mathbb R^{n\times r}$ denotes the space of all $(n\times r)$-matrices; $A^T$ denotes the transpose of a matrix $A$, and $A$ is symmetric if $A=A^T$; $I$ denotes the identity matrix; $\lambda(A)$ denotes the set of all eigenvalues of $A$, and $\lambda_{\max}(A)$ (resp. $\lambda_{\min}(A)$) denotes the maximal (resp. minimal) real part of the eigenvalues of $A$; $x_t:=\{x(t+s):s\in[-h,0]\}$ and $\|x_t\|=\sup_{-h\le s\le0}\|x(t+s)\|$; a matrix $A$ is positive semidefinite ($A\ge0$) if $\langle Ax,x\rangle\ge0$ for all $x\in\mathbb R^n$, and positive definite ($A>0$) if $\langle Ax,x\rangle>0$ for all $x\ne0$; $A>B$ means $A-B>0$; $\mu(A)$ denotes the matrix measure of $A$, defined by $\mu(A)=\tfrac12\lambda_{\max}(A+A^T)$; $SM^+(0,\infty)$ denotes the set of continuous, symmetric, positive semidefinite matrix functions on $[0,\infty)$, and $BM^+(0,\infty)$ the subset of $SM^+(0,\infty)$ consisting of bounded matrix functions; $C([-d,0],\mathbb R^n)$ denotes the Banach space of all continuous $\mathbb R^n$-valued functions on $[-d,0]$ with the norm $\|x\|=\sup_{t\in[-d,0]}\|x(t)\|$ for $x(\cdot)\in C([-d,0],\mathbb R^n)$.

2. Preliminaries

Consider a class of non-autonomous cellular neural networks with time-varying delays of the form

$$\dot x(t)=-A(t)x(t)+W_0(t)f(x(t))+W_1(t)g(x(t-h(t)))+W_2(t)\int_{t-\kappa(t)}^{t}c(x(s))\,ds+B(t)u(t),\quad t\ge0,\qquad(2.1)$$
$$x(t)=\varphi(t),\quad t\in[-d,0],\qquad d=\max\{h,\kappa\},$$

where $x(t)=[x_1(t),x_2(t),\dots,x_n(t)]^T\in\mathbb R^n$ is the state, $u(\cdot)\in L^2([0,t],\mathbb R^m)$ is the control, $n$ is the number of neurons, $f(x(t))=(f_i(x_i(t)))_{n\times1}$, $g(x(t-h(t)))=(g_i(x_i(t-h(t))))_{n\times1}$ and $c(x(t))=(c_i(x_i(t)))_{n\times1}$ are the activation functions, $A(t)=\mathrm{diag}(a_1(t),a_2(t),\dots,a_n(t))$ represents the self-feedback term, $W_0(t)$, $W_1(t)$, $W_2(t)$ denote the connection weight matrices and $B(t)$ is the control input matrix. The time-varying delay functions $h(t)$, $\kappa(t)$ are continuous and satisfy either condition (D1) or condition (D2):

(D1) $0\le h(t)\le h$, $\dot h(t)\le\mu<1$, $0\le\kappa(t)\le\kappa$ for all $t\ge0$;

(D2) $0\le h(t)\le h$, $0\le\kappa(t)\le\kappa$ for all $t\ge0$.

The initial function $\varphi(t)\in C([-d,0],\mathbb R^n)$ has norm $\|\varphi\|=\sup_{-d\le t\le0}\|\varphi(t)\|$.

In this paper, for system (2.1) we introduce the following assumptions.

(H1) The matrix functions $A(t)$, $W_0(t)$, $W_1(t)$, $W_2(t)$ and $B(t)$ are continuous on $[0,\infty)$, and $a_i(t)>0$ for all $t\ge0$, $i=1,2,\dots,n$.

(H2) The activation functions $f(\cdot)$, $g(\cdot)$, $c(\cdot)$ satisfy the growth conditions
$$|f_i(\xi)|\le a_i|\xi|,\quad |g_i(\xi)|\le b_i|\xi|,\quad |c_i(\xi)|\le c_i|\xi|,\quad i=1,2,\dots,n,\ \forall\xi\in\mathbb R,\qquad(2.2)$$
where $a_i$, $b_i$, $c_i$ are given positive constants.
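To make the model concrete, the following is a minimal simulation sketch (not part of the paper) of a system of the form (2.1) under a memoryless feedback $u(t)=Kx(t)$, using the forward Euler method with a stored history for the delayed and distributed-delay terms. All matrices, delay functions and the gain K below are illustrative placeholders chosen only to satisfy the standing assumptions; they are not the data of the examples in Section 4.

```python
import numpy as np

# Illustrative (hypothetical) 2-D data -- not taken from the paper's examples.
A  = np.diag([3.0, 3.0])                     # self-feedback A(t), taken constant here
W0 = np.array([[0.2, 0.1], [0.0, 0.3]])
W1 = np.array([[0.1, 0.2], [0.1, 0.1]])
W2 = np.array([[0.1, 0.0], [0.2, 0.1]])
B  = np.eye(2)
K  = -0.5 * B.T @ np.eye(2)                  # placeholder gain of the form -1/2 B^T P_d, with P_d = I

f = g = c = np.tanh                          # activation functions satisfying |f_i(s)| <= |s|
h  = lambda t: 0.5 * np.sin(t / 2.0) ** 2    # discrete delay, bounded by 0.5
kp = lambda t: 0.5 * np.cos(t) ** 2          # distributed-delay length, bounded by 0.5

dt, T   = 1e-3, 10.0
d_max   = 0.5                                # d = max{h_bar, kappa_bar}
n_hist  = int(round(d_max / dt))
t_grid  = np.arange(-n_hist, int(round(T / dt)) + 1) * dt
x       = np.zeros((len(t_grid), 2))
x[: n_hist + 1] = np.array([1.0, -1.0])      # constant initial function phi on [-d, 0]

def delayed_state(i, delay):
    """State at time t_i - delay, read from the stored grid (Euler resolution)."""
    j = i - int(round(delay / dt))
    return x[max(j, 0)]

for i in range(n_hist, len(t_grid) - 1):
    t, xi = t_grid[i], x[i]
    # distributed-delay term: left-endpoint Riemann sum of c(x(s)) over [t - kappa(t), t]
    m = int(round(kp(t) / dt))
    integral = dt * sum(c(x[i - k]) for k in range(m)) if m > 0 else np.zeros(2)
    u  = K @ xi
    dx = -A @ xi + W0 @ f(xi) + W1 @ g(delayed_state(i, h(t))) + W2 @ integral + B @ u
    x[i + 1] = xi + dt * dx

print("final state:", x[-1])                 # decays toward 0 if the chosen gain stabilizes this toy model
```

The left-endpoint Riemann sum keeps the sketch short; any quadrature of comparable accuracy for the distributed-delay integral would serve equally well.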
Next, we recall some definitions for system (2.1).

Definition 2.1. For a given $\alpha>0$, system (2.1) with $u(t)=0$ is said to be $\alpha$-exponentially stable if there exists $\beta>0$ such that every solution $x(t,\varphi)$ of (2.1) satisfies
$$\|x(t,\varphi)\|\le\beta\|\varphi\|e^{-\alpha t},\quad\forall t\ge0.$$
System (2.1) is exponentially stable if it is $\alpha$-exponentially stable for some $\alpha>0$.

Definition 2.2. System (2.1) is exponentially stabilizable if there exists a state feedback controller $u(t)=K(t)x(t)$, $K(t)\in\mathbb R^{m\times n}$, such that the closed-loop system
$$\dot x(t)=[-A(t)+B(t)K(t)]x(t)+W_0(t)f(x(t))+W_1(t)g(x(t-h(t)))+W_2(t)\int_{t-\kappa(t)}^{t}c(x(s))\,ds,\quad t\ge0,\qquad(2.3)$$
$$x(t)=\varphi(t),\quad t\in[-d,0],$$
is exponentially stable.

We shall use the following well-known technical propositions in the proofs of our results.

Proposition 2.1 (Razumikhin stability theorem [5]). Consider the functional differential equation
$$\dot x(t)=f(t,x_t),\quad t\ge0,\qquad x(t)=\varphi(t),\quad t\in[-d,0],\qquad(2.4)$$
where $f:\mathbb R\times C([-d,0],\mathbb R^n)\to\mathbb R^n$ takes $\mathbb R\times(\text{bounded sets of }C([-d,0],\mathbb R^n))$ into bounded sets of $\mathbb R^n$, and let $u,v,w:\mathbb R^+\to\mathbb R^+$ be continuous nondecreasing functions such that $u(s)$ and $v(s)$ are positive for $s>0$, $u(0)=v(0)=0$ and $v$ is strictly increasing. If there exists a continuous function $V:\mathbb R\times\mathbb R^n\to\mathbb R$ such that
$$u(\|x\|)\le V(t,x)\le v(\|x\|),\quad t\in\mathbb R,\ x\in\mathbb R^n,$$
and the derivative of $V$ along the solution $x(t)$ of (2.4) satisfies $\dot V(t,x(t))\le-w(\|x(t)\|)$ whenever $V(t+s,x(t+s))<qV(t,x(t))$ for some $q>1$ and all $s\in[-d,0]$, then the zero solution of system (2.4) is globally uniformly asymptotically stable.

Proposition 2.2 (Cauchy matrix inequality). For any $x,y\in\mathbb R^n$ and any symmetric positive definite matrix $N\in\mathbb R^{n\times n}$,
$$2x^Ty\le x^TN^{-1}x+y^TNy.$$

Proposition 2.3. For any symmetric positive definite matrix $M>0$, scalar $\nu>0$ and vector function $\omega:[0,\nu]\to\mathbb R^n$ such that the integrals concerned are well defined,
$$\Big(\int_0^\nu\omega(s)\,ds\Big)^TM\Big(\int_0^\nu\omega(s)\,ds\Big)\le\nu\int_0^\nu\omega^T(s)M\omega(s)\,ds.$$

Proposition 2.4 (Schur complement lemma). Let $X$, $Y$, $Z$ be matrices of appropriate dimensions with $X=X^T$ and $Y=Y^T>0$. Then
$$X+Z^TY^{-1}Z<0\quad\Longleftrightarrow\quad\begin{bmatrix}X&Z^T\\ Z&-Y\end{bmatrix}<0.$$
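For completeness, Proposition 2.3 follows from the Cauchy-Schwarz inequality applied to $M^{1/2}\omega$; this standard one-line argument is not spelled out in the paper:
$$\Big\|\int_0^\nu M^{1/2}\omega(s)\,ds\Big\|^2\le\Big(\int_0^\nu1^2\,ds\Big)\int_0^\nu\big\|M^{1/2}\omega(s)\big\|^2\,ds=\nu\int_0^\nu\omega^T(s)M\omega(s)\,ds,$$
and the left-hand side equals $\big(\int_0^\nu\omega(s)\,ds\big)^TM\big(\int_0^\nu\omega(s)\,ds\big)$.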
3. Main result

In this section we present new sufficient conditions for exponential stabilization of the non-autonomous neural network (2.1). First we consider the case in which the delay functions satisfy condition (D1). For $\alpha>0$ and $P(t)\in SM^+(0,\infty)$, we denote
$$F=\mathrm{diag}\{a_i\},\quad G=\mathrm{diag}\{b_i\},\quad H=\mathrm{diag}\{c_i\},\quad i=1,2,\dots,n,$$
$$\lambda_d=e^{-d},\qquad P_d(t)=P(t)+\lambda_dI,\qquad\delta_1=\max_{1\le i\le n}b_i^2,\qquad\delta_2=\max_{1\le i\le n}c_i^2,\qquad p_0=\lambda_{\max}(P(0)),$$
$$S(t)=W_0(t)W_0^T(t)+(1-\mu)^{-1}W_1(t)W_1^T(t)+\kappa e^{2\alpha\kappa}W_2(t)W_2^T(t),$$
$$R(t)=S(t)-B(t)B^T(t),\qquad\mathcal A(t)=-A(t)+\alpha I+\lambda_dR(t),$$
$$Q(t)=2\alpha\lambda_dI+\lambda_d^2S(t)+F^2+G^2+\kappa H^2,$$
$$\Lambda=p_0+\lambda_d+\delta_1\frac{1-e^{-2\alpha h}}{2\alpha}+\delta_2\frac{2\alpha\kappa+e^{-2\alpha\kappa}-1}{4\alpha^2}.$$
The following theorem presents conditions for $\alpha$-exponential stabilizability of system (2.1).

Theorem 3.1. Let conditions (H1), (H2) and (D1) hold. Then, for a given $\alpha>0$, system (2.1) is exponentially stabilizable if there exists a matrix function $P(t)\in SM^+(0,\infty)$ satisfying the Riccati differential equation
$$\dot P(t)+\mathcal A^T(t)P(t)+P(t)\mathcal A(t)+P(t)R(t)P(t)+Q(t)=0.\qquad(3.1)$$
The state feedback control is given by
$$u(t)=-\tfrac12B^T(t)P_d(t)x(t),\quad t\ge0.\qquad(3.2)$$
Moreover, every solution $x(t,\varphi)$ of the closed-loop system (2.3) satisfies
$$\|x(t,\varphi)\|\le\sqrt{\frac{\Lambda}{\lambda_d}}\,\|\varphi\|e^{-\alpha t},\quad t\ge0.$$

Proof. Let $P(t)$ be a solution of (3.1) and consider the closed-loop system (2.3) with $K(t)=-\tfrac12B^T(t)P_d(t)$. Consider the Lyapunov-Krasovskii functional $V(t,x_t)=V_1+V_2+V_3$, where
$$V_1(t,x_t)=x^T(t)P_d(t)x(t),\qquad V_2(t,x_t)=\int_{t-h(t)}^{t}e^{2\alpha(s-t)}x^T(s)G^2x(s)\,ds,$$
$$V_3(t,x_t)=\int_{-\kappa}^{0}\int_{t+s}^{t}e^{2\alpha(\tau-t)}x^T(\tau)H^2x(\tau)\,d\tau\,ds.$$
It is easy to verify that
$$V(t,x_t)\ge\lambda_d\|x(t)\|^2,\quad t\in\mathbb R^+.\qquad(3.3)$$
Taking the derivative of $V_1$ in $t$ along the solution of (2.3), we obtain
$$\dot V_1=x^T(t)\dot P(t)x(t)+x^T(t)\big[-P_d(t)A(t)-A^T(t)P_d(t)+P_d(t)B(t)K(t)+K^T(t)B^T(t)P_d(t)\big]x(t)$$
$$\quad+2x^T(t)P_d(t)W_0(t)f(x(t))+2x^T(t)P_d(t)W_1(t)g(x(t-h(t)))+2x^T(t)P_d(t)W_2(t)\int_{t-\kappa(t)}^{t}c(x(s))\,ds.\qquad(3.4)$$
From (2.2), using Propositions 2.2 and 2.3, we have the estimates
$$2x^T(t)P_d(t)W_0(t)f(x(t))\le x^T(t)P_d(t)W_0(t)W_0^T(t)P_d(t)x(t)+f^T(x(t))f(x(t))\le x^T(t)P_d(t)W_0(t)W_0^T(t)P_d(t)x(t)+x^T(t)F^2x(t);\qquad(3.5)$$
$$2x^T(t)P_d(t)W_1(t)g(x(t-h(t)))\le(1-\mu)^{-1}x^T(t)P_d(t)W_1(t)W_1^T(t)P_d(t)x(t)+(1-\mu)g^T(x(t-h(t)))g(x(t-h(t)))$$
$$\le(1-\mu)^{-1}x^T(t)P_d(t)W_1(t)W_1^T(t)P_d(t)x(t)+(1-\mu)x^T(t-h(t))G^2x(t-h(t));\qquad(3.6)$$
$$2x^T(t)P_d(t)W_2(t)\int_{t-\kappa(t)}^{t}c(x(s))\,ds\le\kappa e^{2\alpha\kappa}x^T(t)P_d(t)W_2(t)W_2^T(t)P_d(t)x(t)+\kappa^{-1}e^{-2\alpha\kappa}\Big(\int_{t-\kappa(t)}^{t}c(x(s))\,ds\Big)^T\Big(\int_{t-\kappa(t)}^{t}c(x(s))\,ds\Big)$$
$$\le\kappa e^{2\alpha\kappa}x^T(t)P_d(t)W_2(t)W_2^T(t)P_d(t)x(t)+e^{-2\alpha\kappa}\int_{t-\kappa(t)}^{t}c^T(x(s))c(x(s))\,ds$$
$$\le\kappa e^{2\alpha\kappa}x^T(t)P_d(t)W_2(t)W_2^T(t)P_d(t)x(t)+e^{-2\alpha\kappa}\int_{t-\kappa}^{t}x^T(s)H^2x(s)\,ds.\qquad(3.7)$$
From (3.4)-(3.7) we have
$$\dot V_1\le x^T(t)\dot P(t)x(t)+x^T(t)\big[-P_d(t)A(t)-A^T(t)P_d(t)+P_d(t)B(t)K(t)+K^T(t)B^T(t)P_d(t)+F^2\big]x(t)$$
$$\quad+x^T(t)P_d(t)S(t)P_d(t)x(t)+(1-\mu)x^T(t-h(t))G^2x(t-h(t))+e^{-2\alpha\kappa}\int_{t-\kappa}^{t}x^T(s)H^2x(s)\,ds.\qquad(3.8)$$
Next, taking the derivatives of $V_2$ and $V_3$ along the solution of (2.3), we obtain
$$\dot V_2\le-2\alpha V_2+x^T(t)G^2x(t)-(1-\mu)x^T(t-h(t))G^2x(t-h(t)),$$
$$\dot V_3\le-2\alpha V_3+\kappa x^T(t)H^2x(t)-e^{-2\alpha\kappa}\int_{t-\kappa}^{t}x^T(s)H^2x(s)\,ds.\qquad(3.9)$$
Thus we have
$$\dot V+2\alpha V\le x^T(t)\big[\dot P(t)-P_d(t)A(t)-A^T(t)P_d(t)+2\alpha P_d(t)+P_d(t)B(t)K(t)+K^T(t)B^T(t)P_d(t)+F^2+G^2+\kappa H^2\big]x(t)+x^T(t)P_d(t)S(t)P_d(t)x(t).\qquad(3.10)$$
Substituting $K(t)=-\tfrac12B^T(t)P_d(t)$ into (3.10) leads to
$$\dot V+2\alpha V\le x^T(t)\big[\dot P(t)-P_d(t)A(t)-A^T(t)P_d(t)+2\alpha P_d(t)+F^2+G^2+\kappa H^2\big]x(t)+x^T(t)P_d(t)\big[S(t)-B(t)B^T(t)\big]P_d(t)x(t)$$
$$=x^T(t)\big[\dot P(t)+\mathcal A^T(t)P(t)+P(t)\mathcal A(t)+P(t)R(t)P(t)+Q(t)\big]x(t)-2\lambda_dx^T(t)A(t)x(t)-\lambda_d^2x^T(t)B(t)B^T(t)x(t).\qquad(3.11)$$
Since $P(t)$ is a solution of (3.1), it follows from (3.11) that
$$\dot V+2\alpha V\le-2\lambda_d\sum_{i=1}^{n}a_i(t)x_i^2(t)-\lambda_d^2\|B^T(t)x(t)\|^2\le0,\quad\forall t\ge0,$$
which, by integrating from $0$ to $t$, implies $V(t,x_t)\le V(0,x_0)e^{-2\alpha t}$ for all $t\ge0$. On the other hand,
$$V(0,x_0)\le x^T(0)P_d(0)x(0)+\delta_1\int_{-h}^{0}e^{2\alpha s}\|x(s)\|^2\,ds+\delta_2\int_{-\kappa}^{0}\int_{s}^{0}e^{2\alpha\tau}\|x(\tau)\|^2\,d\tau\,ds$$
$$\le\Big(p_0+\lambda_d+\delta_1\int_{-h}^{0}e^{2\alpha s}\,ds+\delta_2\int_{-\kappa}^{0}\int_{s}^{0}e^{2\alpha\tau}\,d\tau\,ds\Big)\|\varphi\|^2\le\Lambda\|\varphi\|^2.$$
Taking estimate (3.3) into account, we finally obtain
$$\|x(t,\varphi)\|\le\sqrt{\frac{\Lambda}{\lambda_d}}\,\|\varphi\|e^{-\alpha t},\quad t\ge0.$$
This completes the proof of the theorem.
Remark 3.1. The exponential stabilization conditions of Theorem 3.1 are given in terms of the solution of a suitable Riccati differential equation (RDE). Various efficient numerical techniques for solving RDEs can be found in [8] and [16]. On the other hand, condition (3.1) can in fact be relaxed to the matrix inequality
$$\dot P(t)+\mathcal A^T(t)P(t)+P(t)\mathcal A(t)+P(t)R(t)P(t)+Q(t)\le0.\qquad(3.12)$$
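Complementing Remark 3.1, the snippet below is a minimal sketch (not from the paper) of one simple numerical approach: rewrite (3.1) as the initial value problem $\dot P=-(\mathcal A^TP+P\mathcal A+PRP+Q)$, integrate forward from a chosen $P(0)\ge0$, and check whether the computed $P(t)$ remains symmetric and positive semidefinite on the horizon of interest. The constant matrices are hypothetical placeholders (in the paper the data are time-varying, in which case Acal, R, Q would be callables of t), and this is only one of several possible RDE solution techniques; see [8], [16] for dedicated methods.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 2
# Hypothetical constant stand-ins for curly-A(t), R(t) = S(t) - B(t)B^T(t) and Q(t).
Acal = np.array([[-2.0, 0.3],
                 [ 0.1, -2.5]])
R    = np.diag([-0.5, -0.4])
Q    = 0.3 * np.eye(n)

def rde_rhs(t, p_vec):
    """Vectorised right-hand side of P' = -(Acal^T P + P Acal + P R P + Q)."""
    P = p_vec.reshape(n, n)
    dP = -(Acal.T @ P + P @ Acal + P @ R @ P + Q)
    return dP.reshape(-1)

P0  = np.eye(n)                               # symmetric positive semidefinite initial value
sol = solve_ivp(rde_rhs, (0.0, 10.0), P0.reshape(-1),
                t_eval=np.linspace(0.0, 10.0, 201), rtol=1e-8, atol=1e-10)

# A candidate P(t) is admissible for Theorem 3.1 only if it stays symmetric and
# positive semidefinite; here we simply inspect the computed window.
P_traj  = sol.y.T.reshape(-1, n, n)
min_eig = min(np.linalg.eigvalsh(0.5 * (P + P.T)).min() for P in P_traj)
print("smallest eigenvalue along the computed trajectory:", min_eig)
```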
Remark 3.2. Based on the existence of constant diagonal matrices satisfying certain matrix inequalities uniformly, H. Jiang and Z. Teng [7] proved global exponential stability for a class of CNNs under the assumption that the upper bounds of the delay derivatives are less than one. However, the conditions proposed in [7] are rather conservative. In contrast to [7], in the case of differentiable delays we require neither the self-feedback term $A(t)$ to be uniformly positive nor the boundedness of $A(t)$, $W_0(t)$, $W_1(t)$, $W_2(t)$.

In the sequel, we consider the problem of exponential stabilization of system (2.1) with no restriction on the derivative of the time-varying delay functions. Based on the Razumikhin stability theorem, we derive conditions for exponential stabilization of system (2.1) in terms of a Riccati differential equation. For this purpose we assume the following.

(H1') The matrix functions $A(t)$, $W_0(t)$, $W_1(t)$, $W_2(t)$ and $B(t)$ are continuous on $[0,\infty)$, and there exist constants $\underline a_i>0$ such that $a_i(t)\ge\underline a_i$ for all $t\ge0$, $i=1,2,\dots,n$.

Let $P(t)\in BM^+(0,\infty)$. We define the following notation:
$$a=\min_{1\le i\le n}\underline a_i,\qquad\lambda_b=\inf_{t\in\mathbb R^+}\lambda_{\min}(B(t)B^T(t)),\qquad p=\sup_{t\in\mathbb R^+}\|P(t)\|,$$
$$\theta=2\lambda_da+\lambda_d^2\lambda_b,\qquad\sigma=\frac{\kappa^2+1}{2},$$
$$S(t)=\lambda_dW_0(t)W_0^T(t)+\delta_1W_1(t)W_1^T(t)+\delta_2W_2(t)W_2^T(t),$$
$$\mathcal A(t)=-A(t)-\lambda_dB(t)B^T(t)+\sigma I+S(t),\qquad R(t)=\lambda_d^{-1}S(t)-B(t)B^T(t),$$
$$Q(t)=2\sigma\lambda_dI+F^2+\lambda_dS(t).$$
Then we have the following theorem.

Theorem 3.2. Let (H1'), (H2) and (D2) hold. Then system (2.1) is exponentially stabilizable if there exists a matrix function $P(t)\in BM^+(0,\infty)$ satisfying the Riccati differential equation
$$\dot P(t)+\mathcal A^T(t)P(t)+P(t)\mathcal A(t)+P(t)R(t)P(t)+Q(t)=0.\qquad(3.13)$$
The state feedback control is given by
$$u(t)=-\tfrac12B^T(t)P_d(t)x(t),\quad t\ge0.\qquad(3.14)$$
Moreover, every solution $x(t,\varphi)$ of the closed-loop system (2.3) satisfies
$$\|x(t,\varphi)\|\le\beta\|\varphi\|e^{-\alpha t},\quad\forall t\ge0,\qquad\text{where}\quad\beta=\sqrt{1+\frac{p}{\lambda_d}}\quad\text{and}\quad\alpha=\frac{\theta}{2(p+\lambda_d)}.$$

Proof. Let $P(t)$ be a solution of (3.13). With the state feedback control (3.14), consider the following Lyapunov function for the closed-loop system (2.3):
$$V(t,x(t))=\langle P_d(t)x(t),x(t)\rangle=x^T(t)P(t)x(t)+\lambda_d\|x(t)\|^2,\quad t\ge0.$$
It is easy to verify that
$$\lambda_d\|x(t)\|^2\le V(t,x(t))\le(p+\lambda_d)\|x(t)\|^2,\quad\forall t\ge0.\qquad(3.15)$$
The time derivative of $V$ along the solution of system (2.3) is estimated as follows:
$$\dot V(t,x(t))=x^T(t)\dot P_d(t)x(t)+2x^T(t)P_d(t)\dot x(t)$$
$$=x^T(t)\big[\dot P(t)-P_d(t)A(t)-A^T(t)P_d(t)+P_d(t)B(t)K(t)+K^T(t)B^T(t)P_d(t)\big]x(t)$$
$$\quad+2x^T(t)P_d(t)W_0(t)f(x(t))+2x^T(t)P_d(t)W_1(t)g(x(t-h(t)))+2x^T(t)P_d(t)W_2(t)\int_{t-\kappa(t)}^{t}c(x(s))\,ds.\qquad(3.16)$$
By Proposition 2.2 and condition (2.2),
$$2x^T(t)P_d(t)W_0(t)f(x(t))\le x^T(t)P_d(t)W_0(t)W_0^T(t)P_d(t)x(t)+x^T(t)F^2x(t).$$
In the light of the Razumikhin stability theorem, we assume that, for any $\varepsilon>0$,
$$V(t+s,x(t+s))<(1+\varepsilon)V(t,x(t)),\quad\forall s\in[-d,0],\ \forall t>0.\qquad(3.17)$$
Then, by Propositions 2.2 and 2.3, the following estimates hold:
$$2x^T(t)P_d(t)W_1(t)g(x(t-h(t)))\le\delta_1\lambda_d^{-1}x^T(t)P_d(t)W_1(t)W_1^T(t)P_d(t)x(t)+\delta_1^{-1}\lambda_d\,g^T(x(t-h(t)))g(x(t-h(t)))$$
$$\le\delta_1\lambda_d^{-1}x^T(t)P_d(t)W_1(t)W_1^T(t)P_d(t)x(t)+\delta_1^{-1}\lambda_d\,x^T(t-h(t))G^2x(t-h(t))$$
$$\le\delta_1\lambda_d^{-1}x^T(t)P_d(t)W_1(t)W_1^T(t)P_d(t)x(t)+\lambda_d\|x(t-h(t))\|^2$$
$$\le\delta_1\lambda_d^{-1}x^T(t)P_d(t)W_1(t)W_1^T(t)P_d(t)x(t)+V(t-h(t),x(t-h(t)))$$
$$\le\delta_1\lambda_d^{-1}x^T(t)P_d(t)W_1(t)W_1^T(t)P_d(t)x(t)+(1+\varepsilon)x^T(t)P_d(t)x(t);\qquad(3.18)$$
and
$$2x^T(t)P_d(t)W_2(t)\int_{t-\kappa(t)}^{t}c(x(s))\,ds\le\delta_2\lambda_d^{-1}x^T(t)P_d(t)W_2(t)W_2^T(t)P_d(t)x(t)+\delta_2^{-1}\lambda_d\Big(\int_{t-\kappa(t)}^{t}c(x(s))\,ds\Big)^T\Big(\int_{t-\kappa(t)}^{t}c(x(s))\,ds\Big)$$
$$\le\delta_2\lambda_d^{-1}x^T(t)P_d(t)W_2(t)W_2^T(t)P_d(t)x(t)+\delta_2^{-1}\lambda_d\kappa\int_{t-\kappa(t)}^{t}\|c(x(s))\|^2\,ds$$
$$\le\delta_2\lambda_d^{-1}x^T(t)P_d(t)W_2(t)W_2^T(t)P_d(t)x(t)+\delta_2^{-1}\lambda_d\kappa\int_{t-\kappa(t)}^{t}x^T(s)H^2x(s)\,ds$$
$$\le\delta_2\lambda_d^{-1}x^T(t)P_d(t)W_2(t)W_2^T(t)P_d(t)x(t)+\kappa\int_{-\kappa(t)}^{0}\lambda_d\|x(t+s)\|^2\,ds$$
$$\le\delta_2\lambda_d^{-1}x^T(t)P_d(t)W_2(t)W_2^T(t)P_d(t)x(t)+\kappa\int_{-\kappa(t)}^{0}(1+\varepsilon)x^T(t)P_d(t)x(t)\,ds$$
$$\le\delta_2\lambda_d^{-1}x^T(t)P_d(t)W_2(t)W_2^T(t)P_d(t)x(t)+\kappa^2(1+\varepsilon)x^T(t)P_d(t)x(t).\qquad(3.19)$$
Combining (3.16)-(3.19), we obtain
$$\dot V(t,x(t))\le x^T(t)\big[\dot P(t)-P_d(t)A(t)-A^T(t)P_d(t)+P_d(t)B(t)K(t)+K^T(t)B^T(t)P_d(t)+(\kappa^2+1)(1+\varepsilon)P_d(t)+F^2$$
$$\quad+P_d(t)W_0(t)W_0^T(t)P_d(t)+\delta_1\lambda_d^{-1}P_d(t)W_1(t)W_1^T(t)P_d(t)+\delta_2\lambda_d^{-1}P_d(t)W_2(t)W_2^T(t)P_d(t)\big]x(t).\qquad(3.20)$$
Substituting $K(t)=-\tfrac12B^T(t)P_d(t)$ and letting $\varepsilon\to0^+$, (3.20) leads to
$$\dot V(t,x(t))\le x^T(t)\big[\dot P(t)-P_d(t)A(t)-A^T(t)P_d(t)-P_d(t)B(t)B^T(t)P_d(t)+(\kappa^2+1)P_d(t)+F^2$$
$$\quad+P_d(t)W_0(t)W_0^T(t)P_d(t)+\delta_1\lambda_d^{-1}P_d(t)W_1(t)W_1^T(t)P_d(t)+\delta_2\lambda_d^{-1}P_d(t)W_2(t)W_2^T(t)P_d(t)\big]x(t).\qquad(3.21)$$
From (3.21) we obtain
$$\dot V(t,x(t))\le x^T(t)\big[\dot P(t)+\mathcal A^T(t)P(t)+P(t)\mathcal A(t)+P(t)R(t)P(t)+Q(t)\big]x(t)-2\lambda_dx^T(t)A(t)x(t)-\lambda_d^2x^T(t)B(t)B^T(t)x(t).$$
Since $P(t)$ is a solution of (3.13), we have
$$\dot V(t,x(t))\le-2\lambda_dx^T(t)A(t)x(t)-\lambda_d^2x^T(t)B(t)B^T(t)x(t),\quad\forall t\ge0.\qquad(3.22)$$
It is easy to check that $-2\lambda_dx^T(t)A(t)x(t)\le-2\lambda_da\|x(t)\|^2$ and $-\lambda_d^2x^T(t)B(t)B^T(t)x(t)\le-\lambda_d^2\lambda_b\|x(t)\|^2$, and therefore $\dot V(t,x(t))\le-\theta\|x(t)\|^2$ for all $t\ge0$. By the Razumikhin stability theorem, the closed-loop system (2.3) is asymptotically stable. In addition, from (3.15) we get $-\|x(t)\|^2\le-\frac{1}{p+\lambda_d}V(t,x(t))$, and hence
$$\dot V(t,x(t))+2\alpha V(t,x(t))\le0,\quad\forall t\ge0,\qquad(3.23)$$
where $\alpha=\frac{\theta}{2(p+\lambda_d)}$. Integrating both sides of (3.23) from $0$ to $t$ yields $V(t,x(t))\le V(0,x(0))e^{-2\alpha t}$ for all $t\ge0$. On the other hand, from (3.15) it follows that
$$\lambda_d\|x(t)\|^2\le V(t,x(t))\le V(0,x(0))e^{-2\alpha t}\le(p+\lambda_d)\|\varphi\|^2e^{-2\alpha t},\quad\forall t\ge0.$$
Thus $\|x(t,\varphi)\|\le\beta\|\varphi\|e^{-\alpha t}$, $t\ge0$, where $\beta=\sqrt{1+p/\lambda_d}=\sqrt{1+pe^{d}}$. This completes the proof of the theorem.

As an application, we apply the obtained results to the exponential stabilization of a neural network with constant coefficients in the cases where the time-varying delay functions $h(t)$, $\kappa(t)$ satisfy either (D1) or (D2):
$$\dot x(t)=-Ax(t)+W_0f(x(t))+W_1g(x(t-h(t)))+W_2\int_{t-\kappa(t)}^{t}c(x(s))\,ds+Bu(t),\quad t\ge0,\qquad(3.24)$$
$$x(t)=\varphi(t),\quad t\in[-d,0],$$
where $A=\mathrm{diag}\{\bar a_1,\bar a_2,\dots,\bar a_n\}$, $\bar a_i>0$, and $W_0$, $W_1$, $W_2$, $B$ are given constant matrices. We design a state feedback control $u(t)=Kx(t)$; the closed-loop system of (3.24) is then
$$\dot x(t)=(-A+BK)x(t)+W_0f(x(t))+W_1g(x(t-h(t)))+W_2\int_{t-\kappa(t)}^{t}c(x(s))\,ds.\qquad(3.25)$$

When the time-varying delay functions $h(t)$, $\kappa(t)$ satisfy condition (D1), we denote
$$\lambda_1=\lambda_{\max}(X^{-1}),\qquad\lambda_d=e^{-d},$$
$$\Gamma_1=-BB^T+W_0W_0^T+(1-\mu)^{-1}W_1W_1^T+\kappa e^{2\alpha\kappa}W_2W_2^T,\qquad\Xi_1=-A+\alpha I+\lambda_d\Gamma_1,$$
$$Q=2\alpha\lambda_dI+\lambda_d^2\big(W_0W_0^T+(1-\mu)^{-1}W_1W_1^T+\kappa e^{2\alpha\kappa}W_2W_2^T\big)+F^2+G^2+\kappa H^2.$$
Then we have the following corollary.

Corollary 3.3. For a given $\alpha>0$, system (3.24) is $\alpha$-exponentially stabilizable if there exists a symmetric positive definite matrix $X$ satisfying the linear matrix inequality
$$\begin{bmatrix}\Omega_1&X&\lambda_d^2XW_0&\lambda_d^2XW_1&\lambda_d^2XW_2&XF&XG&\kappa XH\\ *&-(2\alpha\lambda_d)^{-1}I&0&0&0&0&0&0\\ *&*&-\lambda_d^2I&0&0&0&0&0\\ *&*&*&-(1-\mu)\lambda_d^2I&0&0&0&0\\ *&*&*&*&-(\kappa e^{2\alpha\kappa})^{-1}\lambda_d^2I&0&0&0\\ *&*&*&*&*&-I&0&0\\ *&*&*&*&*&*&-I&0\\ *&*&*&*&*&*&*&-\kappa I\end{bmatrix}<0,\qquad(3.26)$$
where $\Omega_1=X\Xi_1^T+\Xi_1X+\Gamma_1$. The state feedback control is given by
$$u(t)=-\tfrac12B^T(X^{-1}+\lambda_dI)x(t),\quad t\ge0.$$
Moreover, every solution $x(t,\varphi)$ of the closed-loop system (3.25) satisfies
$$\|x(t,\varphi)\|\le\sqrt{\frac{\Lambda}{\lambda_d}}\,\|\varphi\|e^{-\alpha t},\quad t\ge0,$$
where $\Lambda$ is defined as in Theorem 3.1 with $p_0=\lambda_1$.

Proof. Let $P(t)=X^{-1}$. Then
$$\dot P(t)+\mathcal A^T(t)P(t)+P(t)\mathcal A(t)+P(t)R(t)P(t)+Q(t)=\Xi_1^TP+P\Xi_1+P\Gamma_1P+Q=:L_1.$$
By Theorem 3.1 and Remark 3.1, the closed-loop system (3.25) is exponentially stable if $L_1<0$. Pre- and post-multiplying $L_1$ by $X$ and using the Schur complement lemma, the condition $L_1<0$ is equivalent to condition (3.26). This completes the proof of the corollary.

When the time-varying delay functions $h(t)$, $\kappa(t)$ satisfy condition (D2), we set $a=\min_{1\le i\le n}\bar a_i$ and
$$\hat\theta=2\lambda_da+\lambda_d^2\lambda_{\min}(BB^T),\qquad\lambda_2=\lambda_{\max}(Z^{-1}),$$
$$\Xi_2=-A-\lambda_dBB^T+\sigma I+\lambda_dW_0W_0^T+\delta_1W_1W_1^T+\delta_2W_2W_2^T,$$
$$\Gamma_2=-BB^T+W_0W_0^T+\lambda_d^{-1}\delta_1W_1W_1^T+\lambda_d^{-1}\delta_2W_2W_2^T,$$
$$Q=2\sigma\lambda_dI+F^2+\lambda_d^2W_0W_0^T+\delta_1\lambda_dW_1W_1^T+\delta_2\lambda_dW_2W_2^T.$$
Then we have the following corollary.

Corollary 3.4. System (3.24) is exponentially stabilizable if there exists a symmetric positive definite matrix $Z$ satisfying the linear matrix inequality
$$\begin{bmatrix}\Omega_2&2\sigma\lambda_dZ&\lambda_d^2ZW_0&\delta_1\lambda_dZW_1&\delta_2\lambda_dZW_2&ZF\\ *&-2\sigma\lambda_dI&0&0&0&0\\ *&*&-\lambda_d^2I&0&0&0\\ *&*&*&-\delta_1\lambda_dI&0&0\\ *&*&*&*&-\delta_2\lambda_dI&0\\ *&*&*&*&*&-I\end{bmatrix}<0,\qquad(3.27)$$
where $\Omega_2=Z\Xi_2^T+\Xi_2Z+\Gamma_2$. The state feedback control is given by
$$u(t)=-\tfrac12B^T(Z^{-1}+\lambda_dI)x(t),\quad t\ge0.$$
Moreover, every solution $x(t,\varphi)$ of the closed-loop system (3.25) satisfies
$$\|x(t,\varphi)\|\le\sqrt{\frac{\lambda_2+\lambda_d}{\lambda_d}}\,\|\varphi\|e^{-\hat\alpha t},\quad t\ge0,\qquad\text{where}\quad\hat\alpha=\frac{\hat\theta}{2(\lambda_2+\lambda_d)}.$$

Proof. Let $P(t)=Z^{-1}$. Then
$$\dot P(t)+\mathcal A^T(t)P(t)+P(t)\mathcal A(t)+P(t)R(t)P(t)+Q(t)=\Xi_2^TZ^{-1}+Z^{-1}\Xi_2+Z^{-1}\Gamma_2Z^{-1}+Q=:L_2.$$
By Theorem 3.2 and the relaxation noted in Remark 3.1, the closed-loop system (3.25) is exponentially stable if $L_2<0$. Pre- and post-multiplying $L_2$ by $Z$ and using the Schur complement lemma, the condition $L_2<0$ is equivalent to condition (3.27). This completes the proof of the corollary.
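In practice, the feasibility test of Corollary 3.4 can be run with any SDP solver. The sketch below uses CVXPY and poses the condition in an equivalent single-bordered Schur-complement form: writing $Q=MM^T$ gives $ZQZ=(ZM)(ZM)^T$, so the block layout below follows the reconstruction of (3.27) given above rather than quoting it verbatim. All numerical data are hypothetical placeholders, not those of Example 4.3.

```python
import numpy as np
import cvxpy as cp

# Hypothetical constant data; transcribe the real values from the example of interest.
n  = 2
A  = np.diag([3.0, 3.0]); B = np.eye(n)
W0 = 0.2 * np.eye(n); W1 = 0.1 * np.eye(n); W2 = 0.1 * np.eye(n)
F  = np.diag([0.1, 0.2]); G = np.diag([0.1, 0.1]); H = np.diag([0.1, 0.1])
h_bar, k_bar = 1.0, 1.0

d      = max(h_bar, k_bar)
lam_d  = np.exp(-d)
delta1 = np.max(np.diag(G)) ** 2
delta2 = np.max(np.diag(H)) ** 2
sigma  = (k_bar ** 2 + 1.0) / 2.0

# Corollary 3.4 notation (as reconstructed in the text above).
Xi2  = -A - lam_d * (B @ B.T) + sigma * np.eye(n) + lam_d * (W0 @ W0.T) \
       + delta1 * (W1 @ W1.T) + delta2 * (W2 @ W2.T)
Gam2 = -(B @ B.T) + W0 @ W0.T + (delta1 / lam_d) * (W1 @ W1.T) + (delta2 / lam_d) * (W2 @ W2.T)
# Factor Q = M M^T so that Z Q Z can be handled by a single Schur complement.
M = np.hstack([np.sqrt(2 * sigma * lam_d) * np.eye(n), F, lam_d * W0,
               np.sqrt(delta1 * lam_d) * W1, np.sqrt(delta2 * lam_d) * W2])

Z   = cp.Variable((n, n), symmetric=True)
top = cp.bmat([[Z @ Xi2.T + Xi2 @ Z + Gam2, Z @ M],
               [(Z @ M).T, -np.eye(M.shape[1])]])
lmi   = 0.5 * (top + top.T)                    # symmetric by construction; symmetrised explicitly
n_tot = n + M.shape[1]
prob  = cp.Problem(cp.Minimize(0),
                   [Z >> 1e-6 * np.eye(n), lmi << -1e-9 * np.eye(n_tot)])
prob.solve(solver=cp.SCS)
print("status:", prob.status)
if Z.value is not None:
    Pd = np.linalg.inv(Z.value) + lam_d * np.eye(n)
    print("feedback gain K =", -0.5 * B.T @ Pd)
```

If the problem reports a feasible (optimal) status, the stabilizing gain is read off as $K=-\tfrac12B^T(Z^{-1}+\lambda_dI)$, exactly as in the corollary.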
4. Numerical examples

In this section we give some numerical examples to show the effectiveness of the proposed conditions.

Example 4.1. Consider the non-autonomous cellular neural network (2.1) with $n=2$, where
$$A(t)=\mathrm{diag}\big(a_1(t),a_2(t)\big),\quad W_0(t)=e^{t-1}\bar W_0,\quad W_1(t)=e^{-2t+1}\bar W_1,\quad W_2(t)=e^{t-2}\bar W_2,\quad B(t)=e^{t}\bar B,$$
$$F=\mathrm{diag}\{0.5,0.1\},\qquad G=\mathrm{diag}\{0.2,0.3\},\qquad H=\mathrm{diag}\{0.4,0.5\},$$
in which $\bar W_0$, $\bar W_1$, $\bar W_2$, $\bar B$ are given constant matrices and $a_1(t)$, $a_2(t)$ are given positive continuous functions built from exponentials of $t$. The delay functions are
$$h(t)=\sin^2(t/2),\qquad\kappa(t)=\begin{cases}\sin^2t,&t\in I=\bigcup_{k\ge0}[2k\pi,(2k+1)\pi],\\ 0,&t\in\mathbb R^+\setminus I.\end{cases}$$
We have $h=\kappa=1$ and $\mu=0.5$. For $\alpha=1$ it is easy to verify that $P(t)=e^{t}I_2$ is a solution of the RDE (3.1). By Theorem 3.1, system (2.1) is exponentially stabilizable, and the state feedback controller is given by
$$u(t)=-\big(0.5e^{2t}+0.5e^{t-1}\big)x(t),\quad t\ge0.$$
Moreover, every solution $x(t,\varphi)$ of the closed-loop system satisfies
$$\|x(t,\varphi)\|\le4.0169\,\|\varphi\|e^{-t},\quad t\ge0.$$

Example 4.2. Consider system (2.1) with $n=2$, where
$$A(t)=\mathrm{diag}\big(a_1(t),a_2(t)\big),\quad W_0(t)=e^{-t}\bar W_0,\quad W_1(t)=e^{t-1}\bar W_1,\quad W_2(t)=e^{t+1}\bar W_2,\quad B(t)=e^{2t}\bar B,$$
$$F=\mathrm{diag}\{0.1,0.5\},\qquad G=\mathrm{diag}\{0.2,0.3\},\qquad H=\mathrm{diag}\{0.1,0.2\},$$
where again $\bar W_0$, $\bar W_1$, $\bar W_2$, $\bar B$ are given constant matrices and $a_1(t)$, $a_2(t)$ are given continuous functions built from exponentials of $t$ with $a_i(t)\ge2$. The delay functions are
$$h(t)=|\sin t|,\qquad\kappa(t)=\begin{cases}\cos^2t,&t\in I=\bigcup_{k\ge0}[2k\pi,(2k+1)\pi],\\ 0,&t\in\mathbb R^+\setminus I.\end{cases}$$
We have $h=\kappa=1$, $a=2$, $\delta_1=0.09$, $\delta_2=0.04$ and $\lambda_b=0$. Then one can verify that $P(t)=e^{-2t}I_2$ is a solution of the RDE (3.13). By Theorem 3.2, the system is exponentially stabilizable, and the state feedback controller is given by
$$u(t)=-0.5\big(1+e^{2t-1}\big)x(t),\quad t\ge0.$$
Moreover, every solution of the closed-loop system satisfies
$$\|x(t,\varphi)\|\le1.9283\,\|\varphi\|e^{-0.5379t},\quad t\ge0.$$
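The closed-form solutions $P(t)$ claimed in Examples 4.1 and 4.2 can be checked symbolically. The sketch below does this for a scalar ($n=1$) toy instance of an RDE of the form (3.1); every expression in it is a hypothetical placeholder standing in for the corresponding quantity of Section 3, not the examples' actual data, so the printed residual is in general nonzero, whereas for a true solution it simplifies to 0.

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)

# Scalar placeholders for the quantities of Section 3 (hypothetical, not the paper's data).
a_t   = 2 + sp.exp(-t)            # stands for the self-feedback coefficient a_1(t)
alpha = 1
lam_d = sp.exp(-1)                # lambda_d = e^{-d} with d = 1
S     = sp.exp(-2*t)              # stands for S(t), collapsed to a scalar
B2    = sp.exp(2*t)               # stands for B(t)B(t)^T
R     = S - B2
Acal  = -a_t + alpha + lam_d*R
Q     = 2*alpha*lam_d + lam_d**2*S + sp.Rational(1, 4)   # last term stands for F^2 + G^2 + kappa*H^2

P = sp.exp(t)                     # candidate solution to be tested, as in Example 4.1
residual = sp.simplify(sp.diff(P, t) + 2*Acal*P + R*P**2 + Q)
print(residual)                   # equals 0 exactly when P solves the scalar RDE
```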
Example 4.3. Consider the constant-coefficient neural network (3.24) with $n=3$, in the case where the delay functions $h(t)$, $\kappa(t)$ satisfy condition (D2), with given constant matrices $A=\mathrm{diag}\{\bar a_1,\bar a_2,\bar a_3\}$ ($\bar a_i>0$), $W_0,W_1,W_2\in\mathbb R^{3\times3}$, $B\in\mathbb R^{3\times1}$ and
$$F=\mathrm{diag}\{0.04,0.06,0.09\},\qquad G=\mathrm{diag}\{0.05,0.09,0.04\},\qquad H=\mathrm{diag}\{0.08,0.02,0.05\},$$
$$h(t)=2|\sin t|,\qquad\kappa(t)=\begin{cases}0.9\cos^2t,&t\in I=\bigcup_{k\ge0}[2k\pi,(2k+1)\pi],\\ 0,&t\in\mathbb R^+\setminus I.\end{cases}$$
It is worth noting that the delay functions $h(t)$, $\kappa(t)$ are non-differentiable; therefore the methods used in [10] and [13] are not applicable to this system. We have $h=2$, $\kappa=0.9$, $a=2$, $\delta_1=0.081$, $\delta_2=0.064$. For any initial function $\varphi(t)\in C([-2,0],\mathbb R^3)$, using the Matlab LMI toolbox we find that LMI (3.27) is feasible with the matrix
$$Z=\begin{bmatrix}1.4856&1.1665&0.8294\\ 1.1665&1.5856&0.6996\\ 0.8294&0.6996&1.1805\end{bmatrix}.$$
By Corollary 3.4, system (3.24) is exponentially stabilizable, and the state feedback controller is given by
$$u(t)=\begin{bmatrix}-2.6433&-0.9858&-0.6757\end{bmatrix}x(t),\quad t\ge0.$$
Moreover, every solution $x(t,\varphi)$ of the closed-loop system satisfies
$$\|x(t,\varphi)\|\le4.8282\,\|\varphi\|e^{-0.0858t},\quad t\ge0.$$

Example 4.4. Consider the neural network with time-varying delay (3.24) with the constant data $A$, $W_0$, $W_1$, $W_2$, $B$, $F$, $G$ studied by X. Lou and B. Cui in [10]. Under the assumption that the delay function $h(t)$ is boundedly continuously differentiable with derivative bound strictly less than one, it was claimed in [10] that this system is stabilizable via a constant state feedback control $u(t)=Kx(t)$, and a maximum allowable bound $h$ for which the system is stabilizable by a state feedback controller was reported there. By applying Corollary 3.4 instead, a maximum allowable bound of $h(t)$ is found to be 7.803, with the state feedback controller
$$u(t)=\begin{bmatrix}-0.4600&-0.0701\\ -0.0701&-0.3781\end{bmatrix}x(t).$$
Thus, our result may be less conservative than that of [10].

5. Conclusions

In this paper, the problem of exponential stabilization for a class of non-autonomous neural networks with bounded, possibly non-differentiable time-varying delays has been studied. Based on the Lyapunov-Krasovskii functional method combined with the Razumikhin technique, new delay-dependent exponential stabilization conditions have been established in terms of the solution of Riccati differential equations; these conditions are used to design an exponentially stabilizing state feedback controller and allow the two bounds that characterize the exponential stability rate of the solution to be computed simultaneously. Numerical examples have been given to illustrate the effectiveness of the obtained results.

Acknowledgments

This work was completed while the authors were visiting the Vietnam Institute for Advanced Study in Mathematics (VIASM). The authors would like to gratefully acknowledge VIASM for its support and hospitality. This research was supported by the National Foundation for Science and Technology Development of Vietnam, grant number 101.01-2011.51.
References

[1] T.T. Anh, L.V. Hien and V.N. Phat, Stability analysis for linear non-autonomous systems with continuously distributed multiple time-varying delays and applications, Acta Math. Vietnam., 36 (2011), 129-143.
[2] L.O. Chua and L. Yang, Cellular neural networks: theory, IEEE Trans. Circuits Syst., 35 (1988), 1257-1272.
[3] Y. Dong, X. Wang, S. Mei and W. Li, Exponential stabilization of nonlinear uncertain systems with time-varying delay, J. Eng. Math., 77 (2012), 225-237.
[4] O. Faydasicok and S. Arik, A new robust stability criterion for dynamical neural networks with multiple time delays, Neurocomputing, 99 (2013), 290-297.
[5] K. Gu, V.L. Kharitonov and J. Chen, Stability of Time-Delay Systems, Birkhauser, 2003.
[6] L.V. Hien and V.N. Phat, Delay feedback in exponential stabilization of linear time-varying systems with input delay, IMA J. Math. Control Inf., 26 (2009), 163-177.
[7] H. Jiang and Z. Teng, Global exponential stability of cellular neural networks with time-varying coefficients and delays, Neural Networks, 17 (2004), 1415-1425.
[8] H. Abou-Kandil, G. Freiling, V. Ionescu and G. Jank, Matrix Riccati Equations in Control and Systems Theory, Birkhauser, Basel, 2003.
[9] O.M. Kwon, Ju H. Park, S.M. Lee and E.J. Cha, Analysis on delay-dependent stability for neural networks with time-varying delays, Neurocomputing, 103 (2013), 114-120.
[10] X. Lou and B. Cui, On robust stabilization of a class of neural networks with time-varying delays, in: Proc. IEEE Int. Conf. on Computational Intelligence and Security, 2006, pp. 437-440.
[11] V.N. Phat and L.V. Hien, An application of Razumikhin theorem to exponential stability for linear non-autonomous systems with time-varying delay, Appl. Math. Lett., 22 (2009), 1412-1417.
[12] V.N. Phat and P.T. Nam, Exponential stability of delayed Hopfield neural networks with various activation functions and polytopic uncertainties, Phys. Lett. A, 374 (2010), 2527-2533.
[13] V.N. Phat and H. Trinh, Exponential stabilization of neural networks with various activation functions and mixed time-varying delays, IEEE Trans. Neural Netw., 21 (2010), 1180-1184.
[14] M.V. Thuan and V.N. Phat, New criteria for stability and stabilization of neural networks with mixed interval time-varying delays, Vietnam J. Math., 40 (2012), 79-93.
[15] L.A. Tuan, P.T. Nam and V.N. Phat, New H-infinity controller design for neural networks with interval time-varying delays in state and observation, Neural Process. Lett. (2012), doi: 10.1007/s11063-012-9243-z.
[16] W.T. Reid, Riccati Differential Equations, Academic Press, New York, 1972.
[17] J. Li, F. Zhang and J. Yan, Global exponential stability of non-autonomous neural networks with time-varying delays and reaction-diffusion terms, J. Comput. Appl. Math., 233 (2009), 241-247.



