
ℋ∞ Finite-time Boundedness for Discrete-time Delay Neural Networks via Reciprocally Convex Approach



VNU Journal of Science: Mathematics – Physics, Vol. 36 (2020) 10-23

Original Article

ℋ∞ Finite-time Boundedness for Discrete-time Delay Neural Networks via Reciprocally Convex Approach

Le Anh Tuan*

Department of Mathematics, University of Sciences, Hue University, 77 Nguyen Hue, Hue, Vietnam

Received 25 May 2020; Revised 07 July 2020; Accepted 15 July 2020

* Corresponding author. Email address: latuan@husc.edu.vn. https://doi.org/10.25073/2588-1124/vnumap.4530

Abstract: This paper addresses the problem of ℋ∞ finite-time boundedness for discrete-time neural networks with interval-like time-varying delays. First, a delay-dependent finite-time boundedness criterion under the finite-time ℋ∞ performance index is established by constructing a set of adjusted Lyapunov–Krasovskii functionals and using the reciprocally convex approach. Next, a sufficient condition ensuring the finite-time stability of the corresponding nominal system is derived directly. Finally, numerical examples are provided to illustrate the validity and applicability of the presented conditions.

Keywords: Discrete-time neural networks, ℋ∞ performance, finite-time stability, time-varying delay, linear matrix inequality.

1. Introduction

In recent years, neural networks (NNs) have received remarkable attention because many successful applications have been realised, e.g., in prediction, optimization, image processing, pattern recognition, associative memory, data mining, etc. Time delay is one of the important parameters of NNs, and it can be considered an inherent feature of both biological and artificial NNs. Thus, the analysis and synthesis of NNs with delay are important topics [1-3].

It is worth noting that Lyapunov's classical stability notion deals with the asymptotic behaviour of a system over an infinite time interval and does not usually specify bounds on the state trajectories. In certain situations, finite-time stability, initiated in the first half of the 1950s, is useful for studying the behaviour of a system within a finite (possibly short) time interval. More precisely, these are situations in which the state variables are not allowed to exceed some bounds during a given finite-time interval; for example, large values of the state are not acceptable in the presence of saturation [4, 5]. By using the Lyapunov function approach and linear matrix inequality (LMI) techniques, a variety of results on finite-time stability, finite-time boundedness, finite-time stabilization and finite-time ℋ∞ control have been obtained for continuous- and discrete-time systems in recent years [5-14]. In particular, within the framework of discrete-time NNs, there are two interesting articles [9, 10], which deal with finite-time stability and finite-time boundedness, in that order. To the best of our knowledge, the ℋ∞ finite-time boundedness problem for discrete-time NNs with interval time-varying delay has not received adequate attention in the literature. This motivates our current study. For that purpose, in this paper we first propose conditions which guarantee finite-time boundedness of discrete-time delayed NNs and reduce the effect of the disturbance input on the output to a prescribed level. Afterward, following this scheme, finite-time stability of the nominal system is also obtained. Two numerical examples are presented to show the effectiveness of the achieved results.

Notation: ℤ⁺ denotes the set of all non-negative integers; ℝⁿ denotes the n-dimensional space with the scalar product xᵀy; ℝ^{n×r} denotes the space of (n × r)-dimensional matrices; Aᵀ denotes the transpose of a matrix A; A is positive definite (A > 0) if xᵀAx > 0 for all x ≠ 0; A > B means A − B > 0. The notation diag{·} stands for a block-diagonal matrix. The symmetric term in a matrix is denoted by ∗.
2. Preliminaries

Consider the following discrete-time neural network with time-varying delay and disturbance:

x(k + 1) = Ax(k) + Wf(x(k)) + W₁g(x(k − h(k))) + Cω(k), k ∈ ℤ⁺,
z(k) = A₁x(k) + Dx(k − h(k)) + C₁ω(k),      (1)
x(k) = φ(k), k ∈ {−h₂, −h₂ + 1, …, 0},

where x(k) ∈ ℝⁿ is the state vector; z(k) ∈ ℝᵖ is the observation output; n is the number of neurons; f(x(k)) = [f₁(x₁(k)), f₂(x₂(k)), …, fₙ(xₙ(k))]ᵀ and g(x(k − h(k))) = [g₁(x₁(k − h(k))), g₂(x₂(k − h(k))), …, gₙ(xₙ(k − h(k)))]ᵀ are the activation functions, where fᵢ, gᵢ, i = 1, …, n, satisfy the following conditions:

∃aᵢ > 0: |fᵢ(ξ)| ≤ aᵢ|ξ|, ∀i = 1, …, n, ∀ξ ∈ ℝ,      (2)
∃bᵢ > 0: |gᵢ(ξ)| ≤ bᵢ|ξ|, ∀i = 1, …, n, ∀ξ ∈ ℝ.

The diagonal matrix A = diag{a₁, a₂, …, aₙ} represents the self-feedback terms; the matrices W, W₁ ∈ ℝ^{n×n} are connection weight matrices; C ∈ ℝ^{n×q}, C₁ ∈ ℝ^{p×q} are known matrices; A₁, D ∈ ℝ^{p×n} are the observation matrices; the time-varying delay function h(k) satisfies

0 < h₁ ≤ h(k) ≤ h₂ ∀k ∈ ℤ⁺,      (3)

where h₁, h₂ are given positive integers; φ(k) is the initial function; the external disturbance ω(k) ∈ ℝ^q satisfies

∑_{k=0}^{N} ωᵀ(k)ω(k) < d,      (4)

where d > 0 is a given number.

Definition 2.1 (Finite-time stability). Given positive constants c₁, c₂, N with c₁ < c₂, N ∈ ℤ⁺ and a symmetric positive-definite matrix R, the discrete-time delay neural network

x(k + 1) = Ax(k) + Wf(x(k)) + W₁g(x(k − h(k))), k ∈ ℤ⁺,
x(k) = φ(k), k ∈ {−h₂, −h₂ + 1, …, 0},

is said to be finite-time stable w.r.t. (c₁, c₂, R, N) if

max_{k∈{−h₂, −h₂+1, …, 0}} φᵀ(k)Rφ(k) ≤ c₁ ⟹ xᵀ(k)Rx(k) < c₂ ∀k ∈ {1, 2, …, N}.      (5)

Definition 2.2 (Finite-time boundedness). Given positive constants c₁, c₂, N with c₁ < c₂, N ∈ ℤ⁺ and a symmetric positive-definite matrix R, the discrete-time delay neural network with disturbance

x(k + 1) = Ax(k) + Wf(x(k)) + W₁g(x(k − h(k))) + Cω(k), k ∈ ℤ⁺,      (6)
x(k) = φ(k), k ∈ {−h₂, −h₂ + 1, …, 0},

is said to be finite-time bounded w.r.t. (c₁, c₂, R, N) if

max_{k∈{−h₂, −h₂+1, …, 0}} φᵀ(k)Rφ(k) ≤ c₁ ⟹ xᵀ(k)Rx(k) < c₂ ∀k ∈ {1, 2, …, N}

for all disturbances ω(k) satisfying (4).

Definition 2.3 (ℋ∞ finite-time boundedness). Given positive constants c₁, c₂, γ, N with c₁ < c₂, N ∈ ℤ⁺ and a symmetric positive-definite matrix R, system (1) is ℋ∞ finite-time bounded w.r.t. (c₁, c₂, R, N) if the following two conditions hold:

(i) System (6) is finite-time bounded w.r.t. (c₁, c₂, R, N);

(ii) Under the zero initial condition (i.e., φ(k) = 0 ∀k ∈ {−h₂, −h₂ + 1, …, 0}), the output z(k) satisfies

∑_{k=0}^{N} zᵀ(k)z(k) ≤ γ ∑_{k=0}^{N} ωᵀ(k)ω(k)      (7)

for all disturbances ω(k) satisfying (4).

Next, we introduce some technical propositions that will be used to prove the main results.

Proposition 2.1 (Discrete Jensen inequality, [15]). For any matrix M ∈ ℝ^{n×n}, M = Mᵀ > 0, positive integers r₁, r₂ satisfying r₁ ≤ r₂, and a vector function ω: {r₁, r₁ + 1, …, r₂} → ℝⁿ,

(∑_{i=r₁}^{r₂} ω(i))ᵀ M (∑_{i=r₁}^{r₂} ω(i)) ≤ (r₂ − r₁ + 1) ∑_{i=r₁}^{r₂} ωᵀ(i)Mω(i).

Proposition 2.2 (Reciprocally convex combination lemma, [16, 17]). Let R ∈ ℝ^{n×n} be a symmetric positive-definite matrix. Then for all vectors ζ₁, ζ₂ ∈ ℝⁿ, scalars α₁ > 0, α₂ > 0 with α₁ + α₂ = 1, and a matrix S ∈ ℝ^{n×n} such that

[R S; Sᵀ R] ≥ 0,

the following inequality holds:

(1/α₁) ζ₁ᵀRζ₁ + (1/α₂) ζ₂ᵀRζ₂ ≥ [ζ₁; ζ₂]ᵀ [R S; Sᵀ R] [ζ₁; ζ₂].

Proposition 2.3 (Schur complement lemma, [18]). Given constant matrices X, Y, Z with appropriate dimensions satisfying X = Xᵀ and Y = Yᵀ > 0. Then

X + ZᵀY⁻¹Z < 0 ⟺ [X Zᵀ; Z −Y] < 0.
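Propositions 2.1 and 2.3 are easy to sanity-check numerically. The following sketch (not part of the paper; matrix sizes and random data are arbitrary choices) verifies the discrete Jensen inequality on random vectors and exhibits one instance of the Schur complement equivalence:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r1, r2 = 3, 2, 6  # arbitrary dimension and index range

# Random symmetric positive-definite M
B = rng.standard_normal((n, n))
M = B @ B.T + n * np.eye(n)

# Proposition 2.1 (discrete Jensen inequality)
w = rng.standard_normal((r2 - r1 + 1, n))          # w(r1), ..., w(r2)
s = w.sum(axis=0)
lhs = s @ M @ s
rhs = (r2 - r1 + 1) * sum(v @ M @ v for v in w)
assert lhs <= rhs + 1e-9

# Proposition 2.3: X + Z^T Y^{-1} Z < 0  <=>  [[X, Z^T], [Z, -Y]] < 0
X = -(B @ B.T + 10 * np.eye(n))    # clearly negative definite
Y = M                              # symmetric positive definite
Z = 0.1 * rng.standard_normal((n, n))
small = X + Z.T @ np.linalg.inv(Y) @ Z
big = np.block([[X, Z.T], [Z, -Y]])
# Here both sides of the equivalence hold: both matrices are negative definite
assert np.linalg.eigvalsh(small).max() < 0
assert np.linalg.eigvalsh(big).max() < 0
```

The same eigenvalue test (`eigvalsh` on the symmetric matrix) is how definiteness conditions like (8)-(11) below can be confirmed once candidate matrices are available.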
3. Main results

In this section, we investigate the ℋ∞ finite-time boundedness of discrete-time neural networks of the form (1) with interval time-varying delay. It will be seen from the following theorem that the reciprocally convex approach is employed in our derivation.

Let us define h₁₂ = h₂ − h₁ and y(k) = x(k + 1) − x(k), and assume there exists a real constant τ > 0 such that

max_{k∈{−h₂, −h₂+1, …, −1}} yᵀ(k)y(k) < τ.

Before presenting the main results, we define the following matrices:

F = diag{a₁, …, aₙ}, G = diag{b₁, …, bₙ},
Ω₁₁ = −δ(P + S₁) + (h₁₂ + 1)Q + R₁, Ω₁₂ = δS₁, Ω₁₈ = AᵀP,
Ω₁₉ = h₁²(A − I)ᵀS₁, Ω₁,₁₀ = h₁₂²(A − I)ᵀS₂, Ω₁,₁₁ = A₁ᵀ, Ω₁,₁₂ = F,
Ω₂₂ = δ^{h₁}(−R₁ + R₂ − δS₂) − δS₁, Ω₂₃ = Ω₃₄ = δ^{h₁+1}(S₂ − S), Ω₂₄ = δ^{h₁+1}S,
Ω₃₃ = −δ^{h₁}Q − δ^{h₁+1}(2S₂ − S − Sᵀ), Ω₃,₁₁ = Dᵀ, Ω₃,₁₃ = G,
Ω₄₄ = −δ^{h₂}R₂ − δ^{h₁+1}S₂,
Ω₅₅ = Ω₆₆ = Ω₁₁,₁₁ = Ω₁₂,₁₂ = Ω₁₃,₁₃ = −I,
Ω₅₈ = WᵀP, Ω₅₉ = h₁²WᵀS₁, Ω₅,₁₀ = h₁₂²WᵀS₂,
Ω₆₈ = W₁ᵀP, Ω₆₉ = h₁²W₁ᵀS₁, Ω₆,₁₀ = h₁₂²W₁ᵀS₂,
Ω₇₇ = −(γ/δ^N)I, Ω₇₈ = CᵀP, Ω₇₉ = h₁²CᵀS₁, Ω₇,₁₀ = h₁₂²CᵀS₂, Ω₇,₁₁ = C₁ᵀ,
Ω₈₈ = −P, Ω₉₉ = −h₁²S₁, Ω₁₀,₁₀ = −h₁₂²S₂,
Ωᵢⱼ = 0 for any other i, j with j > i, Ωᵢⱼ = Ωⱼᵢᵀ for i > j,

ρ₁ = c₁(h₁ + h₂)(h₁₂ + 1)δ^{N+h₂}, ρ₂ = τh₁₂(h₁ + h₂ + 1)δ^{N+h₂},
Λ₁₁ = γd − c₂δλ₁, Λ₁₂ = c₁δ^{N+1}λ₂, Λ₁₃ = ρ₁λ₃, Λ₁₄ = c₁h₁δ^{N+h₁}λ₄,
Λ₁₅ = c₁h₁₂δ^{N+h₂}λ₅, Λ₁₆ = τh₁²(h₁ + 1)δ^{N+h₁}λ₆, Λ₁₇ = ρ₂λ₇,
Λ₂₂ = −c₁δ^{N+1}λ₂, Λ₃₃ = −ρ₁λ₃, Λ₄₄ = −c₁h₁δ^{N+h₁}λ₄, Λ₅₅ = −c₁h₁₂δ^{N+h₂}λ₅,
Λ₆₆ = −τh₁²(h₁ + 1)δ^{N+h₁}λ₆, Λ₇₇ = −ρ₂λ₇,
Λᵢⱼ = 0 for any other i, j with j > i, Λᵢⱼ = Λⱼᵢ for i > j.

Theorem 3.1. Given positive constants c₁, c₂, γ, N with c₁ < c₂, N ∈ ℤ⁺, and a symmetric positive-definite matrix R, system (1) is ℋ∞ finite-time bounded w.r.t. (c₁, c₂, R, N) if there exist symmetric positive-definite matrices P, Q, R₁, R₂, S₁, S₂ ∈ ℝ^{n×n}, a matrix S ∈ ℝ^{n×n} and positive scalars λᵢ, i = 1, …, 7, δ ≥ 1, such that the following matrix inequalities hold:

λ₁R < P < λ₂R, Q < λ₃R, R₁ < λ₄R, R₂ < λ₅R, S₁ < λ₆I, S₂ < λ₇I,      (8)
Ξ = [S₂ S; Sᵀ S₂] > 0,      (9)
Ω = [Ωᵢⱼ]₁₃ₓ₁₃ < 0,      (10)
Λ = [Λᵢⱼ]₇ₓ₇ < 0.      (11)
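Once candidate matrices P, Q, R₁, R₂, S₁, S₂, S and scalars λᵢ, δ have been produced (e.g., by an SDP solver), each condition of Theorem 3.1 reduces to an eigenvalue test on a symmetric matrix. A minimal sketch with hypothetical 2×2 data (illustrative values only, not a feasible solution from the paper):

```python
import numpy as np

def is_pd(M, tol=1e-9):
    """Eigenvalue test for positive definiteness of the symmetric part of M."""
    Ms = (M + M.T) / 2
    return np.linalg.eigvalsh(Ms).min() > tol

# Hypothetical candidates standing in for solver output
n = 2
R  = np.eye(n)
P  = 2.0 * np.eye(n)
S2 = np.eye(n)
S  = 0.2 * np.eye(n)
lam1, lam2 = 1.5, 3.0

# Condition (8), first chain: lam1*R < P < lam2*R
assert is_pd(P - lam1 * R) and is_pd(lam2 * R - P)

# Condition (9): Xi = [[S2, S], [S^T, S2]] > 0
Xi = np.block([[S2, S], [S.T, S2]])
assert is_pd(Xi)
```

Conditions (10) and (11) are checked the same way after assembling the 13×13 matrix Ω and the 7×7 matrix Λ from the entries defined above.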
Proof. Consider the following Lyapunov–Krasovskii functional:

V(k) = ∑_{i=1}^{4} Vᵢ(k),

where

V₁(k) = xᵀ(k)Px(k),
V₂(k) = ∑_{s=−h₂+1}^{−h₁+1} ∑_{t=k−1+s}^{k−1} δ^{k−1−t} xᵀ(t)Qx(t),
V₃(k) = ∑_{s=k−h₁}^{k−1} δ^{k−1−s} xᵀ(s)R₁x(s) + ∑_{s=k−h₂}^{k−h₁−1} δ^{k−1−s} xᵀ(s)R₂x(s),
V₄(k) = ∑_{s=−h₁+1}^{0} ∑_{t=k−1+s}^{k−1} h₁ δ^{k−1−t} yᵀ(t)S₁y(t) + ∑_{s=−h₂+1}^{−h₁} ∑_{t=k−1+s}^{k−1} h₁₂ δ^{k−1−t} yᵀ(t)S₂y(t).

Denoting

η(k) := [xᵀ(k) fᵀ(x(k)) gᵀ(x(k − h(k))) ωᵀ(k)]ᵀ, Γ := [A W W₁ C],

and taking the difference of Vᵢ(k), i = 1, …, 4, we have

V₁(k + 1) − δV₁(k) = xᵀ(k + 1)Px(k + 1) − δxᵀ(k)Px(k) = ηᵀ(k)ΓᵀPΓη(k) − δxᵀ(k)Px(k),      (12)

V₂(k + 1) − δV₂(k) = ∑_{s=−h₂+1}^{−h₁+1} [ ∑_{t=k+s}^{k} δ^{k−t} xᵀ(t)Qx(t) − ∑_{t=k−1+s}^{k−1} δ^{k−t} xᵀ(t)Qx(t) ]
= ∑_{s=−h₂+1}^{−h₁+1} [ xᵀ(k)Qx(k) − δ^{1−s} xᵀ(k − 1 + s)Qx(k − 1 + s) ]
= (h₂ − h₁ + 1)xᵀ(k)Qx(k) − ∑_{s=k−h₂}^{k−h₁} δ^{k−s} xᵀ(s)Qx(s)
≤ (h₁₂ + 1)xᵀ(k)Qx(k) − δ^{k−(k−h(k))} xᵀ(k − h(k))Qx(k − h(k))
≤ (h₁₂ + 1)xᵀ(k)Qx(k) − δ^{h₁} xᵀ(k − h(k))Qx(k − h(k)),      (13)

V₃(k + 1) − δV₃(k) = ∑_{s=k+1−h₁}^{k} δ^{k−s} xᵀ(s)R₁x(s) − ∑_{s=k−h₁}^{k−1} δ^{k−s} xᵀ(s)R₁x(s)
+ ∑_{s=k+1−h₂}^{k−h₁} δ^{k−s} xᵀ(s)R₂x(s) − ∑_{s=k−h₂}^{k−h₁−1} δ^{k−s} xᵀ(s)R₂x(s)
= xᵀ(k)R₁x(k) + xᵀ(k − h₁)[δ^{h₁}(−R₁ + R₂)]x(k − h₁) − δ^{h₂} xᵀ(k − h₂)R₂x(k − h₂),      (14)

V₄(k + 1) − δV₄(k) = ∑_{s=−h₁+1}^{0} h₁ [yᵀ(k)S₁y(k) − δ^{1−s} yᵀ(k − 1 + s)S₁y(k − 1 + s)]
+ ∑_{s=−h₂+1}^{−h₁} h₁₂ [yᵀ(k)S₂y(k) − δ^{1−s} yᵀ(k − 1 + s)S₂y(k − 1 + s)]
= yᵀ(k)[h₁²S₁ + h₁₂²S₂]y(k) − h₁ ∑_{s=k−h₁}^{k−1} δ^{k−s} yᵀ(s)S₁y(s) − h₁₂ ∑_{s=k−h₂}^{k−h₁−1} δ^{k−s} yᵀ(s)S₂y(s)
≤ yᵀ(k)[h₁²S₁ + h₁₂²S₂]y(k) − h₁δ ∑_{s=k−h₁}^{k−1} yᵀ(s)S₁y(s) − h₁₂δ^{h₁+1} ∑_{s=k−h₂}^{k−h₁−1} yᵀ(s)S₂y(s).      (15)
By Proposition 2.1,

−h₁δ ∑_{s=k−h₁}^{k−1} yᵀ(s)S₁y(s) ≤ −(h₁δ / ((k − 1) − (k − h₁) + 1)) [∑_{s=k−h₁}^{k−1} y(s)]ᵀ S₁ [∑_{s=k−h₁}^{k−1} y(s)]
= −δ[x(k) − x(k − h₁)]ᵀ S₁ [x(k) − x(k − h₁)],      (16)

and

−h₁₂δ^{h₁+1} ∑_{s=k−h₂}^{k−h₁−1} yᵀ(s)S₂y(s) = −h₁₂δ^{h₁+1} [ ∑_{s=k−h(k)}^{k−h₁−1} yᵀ(s)S₂y(s) + ∑_{s=k−h₂}^{k−h(k)−1} yᵀ(s)S₂y(s) ]
≤ δ^{h₁+1} ( −(h₁₂ / ((k − h₁ − 1) − (k − h(k)) + 1)) [∑_{s=k−h(k)}^{k−h₁−1} y(s)]ᵀ S₂ [∑_{s=k−h(k)}^{k−h₁−1} y(s)]
− (h₁₂ / ((k − h(k) − 1) − (k − h₂) + 1)) [∑_{s=k−h₂}^{k−h(k)−1} y(s)]ᵀ S₂ [∑_{s=k−h₂}^{k−h(k)−1} y(s)] )
= δ^{h₁+1} ( −(1/((h(k) − h₁)/h₁₂)) ζ₁ᵀS₂ζ₁ − (1/((h₂ − h(k))/h₁₂)) ζ₂ᵀS₂ζ₂ ),

where ζ₁ = x(k − h₁) − x(k − h(k)) and ζ₂ = x(k − h(k)) − x(k − h₂). Noting that

(h(k) − h₁)/h₁₂ ≥ 0, (h₂ − h(k))/h₁₂ ≥ 0, (h(k) − h₁)/h₁₂ + (h₂ − h(k))/h₁₂ = 1,
ζ₁ = 0 if (h(k) − h₁)/h₁₂ = 0 and ζ₂ = 0 if (h₂ − h(k))/h₁₂ = 0,

and using hypothesis (9), Proposition 2.2 gives us

−h₁₂δ^{h₁+1} ∑_{s=k−h₂}^{k−h₁−1} yᵀ(s)S₂y(s) ≤ −δ^{h₁+1} [ζ₁; ζ₂]ᵀ [S₂ S; Sᵀ S₂] [ζ₁; ζ₂]
= −δ^{h₁+1}[ζ₁ᵀS₂ζ₁ + ζ₁ᵀSζ₂ + ζ₂ᵀSᵀζ₁ + ζ₂ᵀS₂ζ₂].      (17)

Substituting (16), (17) into (15), combining with (12)-(14), adding and subtracting (γ/δ^N)ωᵀ(k)ω(k) − zᵀ(k)z(k), and expanding z(k) via (1), we get

V(k + 1) − δV(k) ≤ ηᵀ(k)ΓᵀPΓη(k) + xᵀ(k)[−δP + (h₁₂ + 1)Q + R₁ − δS₁ + A₁ᵀA₁]x(k)
+ xᵀ(k)[2δS₁]x(k − h₁) + xᵀ(k)[2A₁ᵀD]x(k − h(k)) + xᵀ(k)[2A₁ᵀC₁]ω(k)
+ xᵀ(k − h₁)[δ^{h₁}(−R₁ + R₂) − δS₁ − δ^{h₁+1}S₂]x(k − h₁)
+ xᵀ(k − h₁)[2δ^{h₁+1}(S₂ − S)]x(k − h(k)) + xᵀ(k − h₁)[2δ^{h₁+1}S]x(k − h₂)
+ xᵀ(k − h(k))[−δ^{h₁}Q − δ^{h₁+1}(2S₂ − S − Sᵀ) + DᵀD]x(k − h(k))
+ xᵀ(k − h(k))[2δ^{h₁+1}(S₂ − S)]x(k − h₂) + xᵀ(k − h(k))[2DᵀC₁]ω(k)
+ xᵀ(k − h₂)[−δ^{h₂}R₂ − δ^{h₁+1}S₂]x(k − h₂)
+ ωᵀ(k)[−(γ/δ^N)I + C₁ᵀC₁]ω(k) + yᵀ(k)[h₁²S₁ + h₁₂²S₂]y(k)
+ (γ/δ^N)ωᵀ(k)ω(k) − zᵀ(k)z(k).      (18)
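The reciprocally convex bound applied in (17) can be sanity-checked numerically. The sketch below (random data, not taken from the paper) verifies the inequality of Proposition 2.2 for several convex weights α₁ + α₂ = 1, after confirming the block-matrix hypothesis:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# Symmetric positive-definite R and a small S so that [[R, S], [S^T, R]] >= 0
B = rng.standard_normal((n, n))
R = B @ B.T + n * np.eye(n)
S = 0.1 * rng.standard_normal((n, n))
block = np.block([[R, S], [S.T, R]])
assert np.linalg.eigvalsh((block + block.T) / 2).min() >= 0  # hypothesis of Prop. 2.2

z1 = rng.standard_normal(n)
z2 = rng.standard_normal(n)
for a1 in (0.1, 0.5, 0.9):           # alpha1 + alpha2 = 1
    a2 = 1.0 - a1
    lhs = z1 @ R @ z1 / a1 + z2 @ R @ z2 / a2
    zz = np.concatenate([z1, z2])
    rhs = zz @ block @ zz
    assert lhs >= rhs - 1e-9
```

This is exactly the mechanism that lets (17) replace the delay-dependent weights (h(k) − h₁)/h₁₂ and (h₂ − h(k))/h₁₂ by a single delay-independent block matrix.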
Besides, from (2), it can be verified that

0 ≤ −fᵀ(x(k))f(x(k)) + xᵀ(k)F²x(k),
0 ≤ −gᵀ(x(k − h(k)))g(x(k − h(k))) + xᵀ(k − h(k))G²x(k − h(k)).      (19)

Moreover, by setting

ξ(k) := [xᵀ(k) xᵀ(k − h₁) xᵀ(k − h(k)) xᵀ(k − h₂) fᵀ(x(k)) gᵀ(x(k − h(k))) ωᵀ(k)]ᵀ,
Υ := [PA 0 0 0 PW PW₁ PC; h₁²S₁(A − I) 0 0 0 h₁²S₁W h₁²S₁W₁ h₁²S₁C; h₁₂²S₂(A − I) 0 0 0 h₁₂²S₂W h₁₂²S₂W₁ h₁₂²S₂C],

we can rewrite

ηᵀ(k)ΓᵀPΓη(k) + yᵀ(k)[h₁²S₁ + h₁₂²S₂]y(k) = ξᵀ(k) Υᵀ diag{P, h₁²S₁, h₁₂²S₂}⁻¹ Υ ξ(k).      (20)

Consequently, combining (18), (19) and (20) gives

V(k + 1) − δV(k) ≤ ξᵀ(k)(Φ + Υᵀ diag{P, h₁²S₁, h₁₂²S₂}⁻¹ Υ)ξ(k) + (γ/δ^N)ωᵀ(k)ω(k) − zᵀ(k)z(k),      (21)

where Φ = [Φᵢⱼ]₇ₓ₇ agrees with the leading 7×7 block of Ω except for

Φ₁₁ = Ω₁₁ + A₁ᵀA₁ + F², Φ₁₃ = A₁ᵀD, Φ₁₇ = A₁ᵀC₁,
Φ₃₃ = Ω₃₃ + DᵀD + G², Φ₃₇ = DᵀC₁, Φ₇₇ = Ω₇₇ + C₁ᵀC₁.

Next, by using Proposition 2.3, condition (10) is seen to be equivalent to

Φ + Υᵀ diag{P, h₁²S₁, h₁₂²S₂}⁻¹ Υ < 0.

This, together with (21), gives

V(k + 1) − δV(k) ≤ (γ/δ^N)ωᵀ(k)ω(k) − zᵀ(k)z(k).
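The finite-time boundedness property of Definition 2.2 can also be checked empirically by simulating system (6). The sketch below uses hypothetical system data (chosen for illustration; these are not the paper's numerical examples) with tanh activations, which satisfy (2) with aᵢ = bᵢ = 1, an admissible delay, and a disturbance satisfying (4):

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, h1, h2 = 2, 20, 1, 3
c1, c2, d = 1.0, 10.0, 0.5
R = np.eye(n)

# Hypothetical system data
A  = np.diag([0.3, 0.4])                 # self-feedback terms
W  = 0.1 * np.ones((n, n))               # connection weights
W1 = 0.1 * np.ones((n, n))
C  = 0.1 * np.eye(n)
f = g = np.tanh                          # |tanh(s)| <= |s|, so (2) holds

# Initial function with max phi^T R phi <= c1, admissible delay, disturbance with sum w^T w < d
phi = {k: rng.uniform(-0.6, 0.6, n) for k in range(-h2, 1)}
h = lambda k: h1 + (k % (h2 - h1 + 1))   # stays in [h1, h2]
w = rng.standard_normal((N + 1, n))
w *= np.sqrt(0.9 * d / (w ** 2).sum())   # enforce condition (4)

x = dict(phi)
for k in range(N):
    x[k + 1] = A @ x[k] + W @ f(x[k]) + W1 @ g(x[k - h(k)]) + C @ w[k]

# Empirical check of finite-time boundedness w.r.t. (c1, c2, R, N)
assert max(phi[k] @ R @ phi[k] for k in phi) <= c1
assert all(x[k] @ R @ x[k] < c2 for k in range(1, N + 1))
```

Such a simulation only checks one disturbance realisation, of course; Theorem 3.1 is what guarantees the bound for all disturbances satisfying (4).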
