
Duan and Peng Advances in Difference Equations (2017) 2017:54
DOI 10.1186/s13662-017-1108-3

RESEARCH  Open Access

Finite-time reliable filtering for T-S fuzzy stochastic jumping neural networks under unreliable communication links

Huiling Duan* and Tao Peng

*Correspondence: huilingduan@sohu.com. College of Mathematics and Statistics, Chongqing Three Gorges University, Wanzhou, Chongqing 404130, P.R. China

Abstract

This study is concerned with the problem of finite-time state estimation for T-S fuzzy stochastic jumping neural networks, where the communication links between the stochastic jumping neural network and its estimator are imperfect. By introducing the fuzzy technique, both the nonlinearities and the stochastic disturbances are represented by a T-S model. Stochastic variables subject to Bernoulli white sequences are employed to determine the nonlinearities occurring in different sector bounds. Some sufficient conditions for the existence of the state estimator are given in terms of linear matrix inequalities, and their effectiveness is illustrated with the aid of simulation results.

Keywords: finite-time boundedness; discrete-time; T-S fuzzy model; Lyapunov-Krasovskii functional; stochastic jumping neural networks

© The Author(s) 2017. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Introduction

Over the past decades, an enormous number of works have addressed various neural networks because of their wide applications, such as signal processing, pattern recognition, and solving nonlinear algebraic equations. Accordingly, numerous works have considered the stability analysis and performance behavior of neural networks [–]. The stochastic jumping neural network (SJNN) in particular has been widely investigated because of random changes in the interconnections of dynamic network nodes, and many works have been devoted to its study [–]. It is well recognized that time delays are frequently encountered in many practical systems, such as communication systems, neural networks, and engineering systems, and they are a main source of poor performance. Moreover, because of the finite processing speed, information processing always involves a discrete delay. Therefore, the stability problem for the discrete-time stochastic jumping neural network (DTSJNN) has attracted a considerable amount of attention; see [–] and the references therein.

Complexity, uncertainty, and vagueness exist in dynamic systems, and these can be described by fuzzy theory. T-S fuzzy systems give a local linear representation of the considered nonlinear dynamic system by means of a set of IF-THEN rules, so it is reasonable to model nonlinear systems by a set of linear sub-models with the aid of a T-S fuzzy model [–]. Originally, the linear models were introduced to represent the local dynamics of state-space regions. Recently, various results have been obtained on the stability and other dynamical behaviors of T-S fuzzy neural networks [–] and stochastic differential equations with fuzzy rules [–]. However, the state estimator design methods proposed in the existing literature were based on the assumption that communication between the neural network and the estimator is perfect. In many practical systems the communication may be only partly available, so it is necessary to study such general SJNNs with unreliable communication links (UCLs) [–], which were neglected in the aforementioned literature. On the other hand, most obtained
results concern an infinite-time interval. Compared with the infinite-time case, finite-time analysis captures the performance of fast convergence and achieves better transient properties. Therefore, many scholars have devoted their studies to the finite-time stability problem for nonlinear systems without delay [] and with delay [–]. From [–, –] and the references therein, it is apparent that researchers in this field have not yet established the estimation problem for fuzzy DTSJNNs with UCLs. A natural question is how to cope with the finite-time state estimation problem for T-S fuzzy DTSJNNs with UCLs. To the best of our knowledge, this question has not been fully studied.

Motivated by the above discussion, we present a new and more relaxed technique to study finite-time state estimation for T-S fuzzy stochastic jumping neural networks subject to UCLs. This paper exploits more information on the large and small activation functions, covering some existing activation functions as special cases. A new random process is introduced to model the phenomenon of signal transmission, and some delay-dependent sufficient conditions are given by constructing a Lyapunov functional. Finally, a numerical example is offered to show the effectiveness of the proposed approach.

Notation: R^n denotes the n-dimensional Euclidean space; the superscripts −1 and T denote the inverse and the transpose, respectively. E{·} denotes the expectation operator with respect to some probability measure. The symbol He(Q) is used to represent Q + Q^T. ∗ is employed to represent a term that is induced by symmetry, and ⊗ denotes the Kronecker product. e_s denotes the block row matrix whose s-th block is the identity and whose other blocks are zero, e_s = [0_{n×(s−1)n}  I_n  0_{n×(9−s)n}] (s = 1, 2, ..., 9).

Preliminaries

Given a probability space (Ω, F, ρ), where Ω is the sample space, F is the σ-algebra of events, and ρ is the probability measure defined on F, the T-S fuzzy DTSJNNs over (Ω, F, ρ) are given by the following model.

Plant rule i: IF ξ_1 is M_{i1} and ... and ξ_p is M_{ip}, THEN

x(k+1) = A_i(r_k)x(k) + B_i(r_k)f(x(k)) + C_i(r_k)g(x(k−τ(k))) + D_i(r_k)ω(k),
y(k) = C_{1i}(r_k)x(k) + C_{2i}(r_k)x(k−τ(k)),    (1)

where x(k) ∈ R^n and y(k) ∈ R^p represent the state and the measured output vector, respectively. The external disturbance ω(k) ∈ R^q is a disturbance signal that belongs to l_2[0, ∞) and satisfies

E{ Σ_{k=0}^{N} ω^T(k)ω(k) } ≤ d²,  d ≥ 0.    (2)

The stochastic jump process {r_k, k ≥ 0} is a discrete-time, discrete-state Markov chain taking values in a finite set L = {1, 2, ..., s}, with transition probabilities π_lm satisfying Σ_{m=1}^{s} π_lm = 1, π_lm > 0, l ∈ L. ξ_j and M_{ij} (i = 1, 2, ..., q, j = 1, 2, ..., p) are, respectively, the premise variables and the fuzzy sets, and q is the number of IF-THEN rules. The fuzzy basis functions are given by

h_i(ξ(k)) = Π_{j=1}^{p} μ_{ij}(ξ_j(k)) / Σ_{i=1}^{q} Π_{j=1}^{p} μ_{ij}(ξ_j(k)),

in which μ_{ij}(ξ_j(k)) represents the grade of membership of ξ_j(k) in μ_{ij}. It is obvious that Σ_{i=1}^{q} h_i(ξ(k)) = 1 with h_i(ξ(k)) > 0. The transmission delay τ(k) is time-varying and satisfies 0 < τ_1 ≤ τ(k) ≤ τ_2, where τ_1 and τ_2 are known constants. f(x(k)) and g(x(k−τ(k))) are the neuron activation functions.

Utilizing the centroid method for defuzzification, the fuzzy system (1) is inferred as follows:

x(k+1) = Σ_{i=1}^{q} h_i(ξ(k)) [A_i(r_k)x(k) + B_i(r_k)f(x(k)) + C_i(r_k)g(x(k−τ(k))) + D_i(r_k)ω(k)],
y(k) = Σ_{i=1}^{q} h_i(ξ(k)) [C_{1i}(r_k)x(k) + C_{2i}(r_k)x(k−τ(k))].    (3)

Throughout the paper, it is understood that the actual input available to the desired estimator is y_as(k). In early research on state estimation for neural networks, signal transmission was assumed to take place over an ideal communication link, that is, y_as(k) = y(k). In the real world, however, the transmission from the sensor to the estimator may fail. The missing-data phenomenon is modeled by a stochastic Bernoulli approach, which is employed to describe the UCL, and the relationship between y_as(k) and y(k)
can be described by

y_as(k) = β(k)y(k),    (4)

where the stochastic variable β(k) is a Bernoulli-distributed white-noise sequence specified by the following law:

Pr{β(k) = 1} = β̄,    (5)

where β̄ ∈ [0, 1] is a known constant. Obviously, β(k) = 0 means the information of the communication link (CL) is not available; similarly, β(k) = 1 means the information of the CL is available. For the stochastic variable β(k), it is easy to see that

E{β(k) − β̄} = 0,  E{(β(k) − β̄)²} = β̄(1 − β̄).    (6)

For the T-S fuzzy SJNN, the state estimator is presented as follows:

x̂(k+1) = Σ_{i=1}^{q} h_i(ξ(k)) { A_i(r_k)x̂(k) + B_i(r_k)f(x̂(k)) + C_i(r_k)g(x̂(k−τ(k))) + K_i(r_k)[ y_as(k) − C_{1i}(r_k)x̂(k) − C_{2i}(r_k)x̂(k−τ(k)) ] }
        = Σ_{i=1}^{q} h_i(ξ(k)) { A_i(r_k)x̂(k) + B_i(r_k)f(x̂(k)) + C_i(r_k)g(x̂(k−τ(k))) + K_i(r_k)[ β(k)y(k) − C_{1i}(r_k)x̂(k) − C_{2i}(r_k)x̂(k−τ(k)) ] },    (7)

where x̂(k) is the estimate of the state x(k) and, for each r_k ∈ L, K_i(r_k) is the estimator parameter to be determined. Let e(k) = (e_1(k), e_2(k), ..., e_n(k))^T = x(k) − x̂(k) be the state error, and let f(e(k)) = f(x(k)) − f(x̂(k)) and g(e(k)) = g(x(k)) − g(x̂(k)). For convenience, we denote A_i(r_k) = A_{i,l}, and the other symbols are represented similarly. The resulting estimation error is governed by

e(k+1) = Σ_{i=1}^{q} Σ_{j=1}^{q} h_i(ξ(k)) h_j(ξ(k)) { (A_{i,l} − K_{i,l}C_{1j,l})e(k) − K_{i,l}C_{2j,l}e(k−τ(k)) + B_{i,l}f(e(k)) + C_{i,l}g(e(k−τ(k))) + (1−β̄)K_{i,l}C_{1j,l}x(k) − (β(k)−β̄)K_{i,l}C_{1j,l}x(k) + (1−β̄)K_{i,l}C_{2j,l}x(k−τ(k)) − (β(k)−β̄)K_{i,l}C_{2j,l}x(k−τ(k)) + D_{i,l}ω(k) }.    (8)

In the following, we introduce the augmented vector η(k) = [x^T(k)  e^T(k)]^T, together with f(η(k)) = [f^T(x(k))  f^T(e(k))]^T and g(η(k−τ(k))) = [g^T(x(k−τ(k)))  g^T(e(k−τ(k)))]^T; the state estimation error dynamics for the SJNN can then be represented as

η(k+1) = Ā_{i,l}η(k) + Â_{i,l}η(k−τ(k)) + B̄_{i,l}f(η(k)) + C̄_{i,l}g(η(k−τ(k))) + D̄_{i,l}ω(k) + (β(k) − β̄)MK_{i,l}C_{1j,l}Nη(k) + (β(k) − β̄)MK_{i,l}C_{2j,l}Nη(k−τ(k)),    (9)

where

Ā_{i,l} = Σ_{i=1}^{q} Σ_{j=1}^{q} h_i(ξ(k)) h_j(ξ(k)) [ A_{i,l}, 0 ; (1−β̄)K_{i,l}C_{1j,l}, A_{i,l} − K_{i,l}C_{1j,l} ],
Â_{i,l} = Σ_{i=1}^{q} Σ_{j=1}^{q} h_i(ξ(k)) h_j(ξ(k)) [ 0, 0 ; (1−β̄)K_{i,l}C_{2j,l}, −K_{i,l}C_{2j,l} ],
B̄_{i,l} = Σ_{i=1}^{q} h_i(ξ(k)) [ B_{i,l}, 0 ; 0, B_{i,l} ],
C̄_{i,l} = Σ_{i=1}^{q} h_i(ξ(k)) [ C_{i,l}, 0 ; 0, C_{i,l} ],
D̄_{i,l} = Σ_{i=1}^{q} h_i(ξ(k)) [ D_{i,l} ; D_{i,l} ],
M = [ 0 ; −I ],  N = [ I  0 ].

Definition 2.1 ([–]). The augmented T-S fuzzy MJNN (9) with ω(k) = 0 is said to be stochastically finite-time stable (SFTS) with respect to (c_1, c_2, R, N), if there exist a positive matrix R and scalars c_2 > c_1 > 0 such that

E{x^T(k_1)Rx(k_1)} ≤ c_1  ⇒  E{x^T(k_2)Rx(k_2)} < c_2,  ∀k_1 ∈ {−τ_2, ..., −1, 0}, k_2 ∈ {1, 2, ..., N}.

Definition 2.2 ([–]). The augmented T-S fuzzy MJNN (9) is said to be stochastically finite-time bounded (SFTB) with respect to (c_1, c_2, R, N, d), if there exist a matrix R > 0 and scalars c_2 > c_1 > 0 such that, for every admissible disturbance ω(k),

E{x^T(k_1)Rx(k_1)} ≤ c_1  ⇒  E{x^T(k_2)Rx(k_2)} < c_2,  ∀k_1 ∈ {−τ_2, ..., −1, 0}, k_2 ∈ {1, 2, ..., N}.

Lemma 2.1 ([]). Let X = X^T, Y and Z be real matrices of appropriate dimensions, and let L satisfy L^T L ≤ I. The inequality X + YLZ + Z^T L^T Y^T < 0 holds if and only if there exists a positive scalar ε > 0 such that X + εYY^T + ε^{−1}Z^T Z < 0.

Remark 1. In [], it is found that the neuron-state-based nonlinear functions f(·) and g(·) are related to η(k) and η(k−τ(k)), respectively, which cannot be handled directly by the Matlab tool. Noting that f(0) = 0 and g(0) = 0, one has

[f(μ) − f(ν) − U_1(μ − ν)]^T [f(μ) − f(ν) − U_2(μ − ν)] ≤ 0,
[g(μ) − g(ν) − V_1(μ − ν)]^T [g(μ) − g(ν) − V_2(μ − ν)] ≤ 0,

where U_1, U_2, V_1, and V_2 are real matrices with compatible dimensions. In this paper, f(·) and g(·) are mode-dependent nonlinear functions:

[f(η(k)) − U_{1l}η(k)]^T [f(η(k)) − U_{2l}η(k)] ≤ 0,
[g(η(k−τ(k))) − V_{1l}η(k−τ(k))]^T [g(η(k−τ(k))) − V_{2l}η(k−τ(k))] ≤ 0,    (10)

where U_{1l}, U_{2l}, V_{1l}, and V_{2l} are real matrices with appropriate dimensions. These conditions will be used in the proof of our results. It is noted that tr(U_{1l}) ≤ tr(U_{2l}) and tr(V_{1l}) ≤ tr(V_{2l}). In such a case, one finds that f(η(k)) ∈ [U_{1l},
Ul ] and g(η(k – τ (k))) ∈ [Vl , Vl ] One has ⎧ ⎨ if f (η(k)) ∈ [U , U ], l l χ (k) = ⎩ if f (η(k)) ∈ [Ul , Ul ], χ (k) + χ (k) = , ⎧ ⎨ if g(η(k – τ (k))) ∈ [V , V ], l l κ (k) = ⎩ if g(η(k – τ (k))) ∈ [Vl , Vl ], κ (k) + κ (k) = , Duan and Peng Advances in Difference Equations (2017) 2017:54 Page of 17 where χ (k) and κ (k) are two independent Bernoulli-distributed sequences satisfying Pr χ (k) =  = χ , Pr χ (k) =  =  – χ , Pr κ (k) =  = κ , Pr κ (k) =  =  – κ , which yields f η(k) – Ul η(k) f η(k) – Ul η(k) ≤ , f η(k) – Ul η(k) f η(k) – Ul η(k) ≤ , g η k – τ (k) – Vl η k – τ (k) g η k – τ (k) – Vl η k – τ (k) ≤ , g η k – τ (k) – Vl η k – τ (k) g η k – τ (k) – Vl η k – τ (k) ≤ , () where ⎧ ⎧ ⎨f (η(k)), χ (k) = , ⎨f (η(k)), χ (k) = ,   f η(k) = f η(k) = ⎩Ul η(k), χ (k) = , ⎩Ul η(k), χ (k) = , ⎧ ⎨g(η(k – τ (k))), κ (k) = ,  g η k – τ (k) = ⎩Vl η(k – τ (k)), κ (k) = , ⎧ ⎨g(η(k – τ (k))), κ (k) = ,  g η k – τ (k) = ⎩Vl η(k – τ (k)), κ (k) =  Therefore, f (η(k)) and g(η(k – τ (k))) can be replaced by f η(k) = χ (k)f η(k) + χ (k)f η(k) , g η k – τ (k) () = κ (k)g η k – τ (k) + κ (k)g η k – τ (k) Main results The following is the main result of this paper Theorem . For given scalars N > , α > , c > , c >  and d > , the system () is SFTB if there exist symmetric matrices Pl > , Qs >  (s = , , ), Rn >  (n = , ), Sl >  and appropriately matrices Hs (s = , , ), Xnl > , Ynl >  (n = , ) such that, for any l ∈ L, the following LMIs hold: ⎡ ⎢ ⎣ l l l ∗ l  ⎤ ∗ ⎥ ∗ ⎦ < , () l ψ c + ψ ρ + λ d < λ c α –N , () Duan and Peng Advances in Difference Equations (2017) 2017:54 Page of 17 where l = e (Q + Q + Q )e – e Q e – e Q e – e Q e + e (τ Q + τ R + τ R )e + H (e – e ) + H (e – e ) + H (e – e ) – (α – )e Pl e – e Sl e – (e – Ul e ) Xl (e – Ul e ) – (e – Ul e ) Xl (e – Ul e ) – (e – Vl e ) Yl (e – Vl 
e ) – (e – Vl e ) Yl (e – Vl e ), l () l = () l () l  √ = [ τ H  ⎡ , l = diag –Pl– , –Pl– , –Pl– , –Pl– , –Pl– , –Pl– , √ τ H τ H ] , l = diag –R , –(R + R ), –R , ⎤ √ √ Ai,l Ai,l   ⎢√ ⎥ () ( – )MKi,l Cj,l N   ⎦ , l = ⎣ √ ( – )MKi,l Cj,l N    ⎡ ⎤ √ Di,l     ⎢ ⎥ () =      ⎦, ⎣ l      ⎡√ ⎤ χ Bi,l     √ ⎢ χ Bi,l   ⎥  ⎢ ⎥ () = √ ⎢ ⎥, l ⎣ κ Ci,l  ⎦   √    κ Ci,l  l √ q Pl = πlm Pm , τ = τ – τ , m=  ψ = λ + τ λ + τM λ + τM λ + τ (τ + τ – )λ ,   ψ = τ (τ + τ – )λ + τ (τ – )λ ,  λ = max λmin (Pl ), l∈L λ = λmax (Q ), Pl = R –  Pl R –  λ = max λmax (Pl ), l∈L λ = λmax (R ), , Qs = R –  Qs R λ = λmax (Q ), λ = λmax (R ), –  (s = , , ), λ = λmax (Q ), λ = max λmax (Sl ), l∈L Rs = R –   Rn R–  (n = , ) Proof Let us construct the following Lyapunov functional of the form  V η(k), rk = Vn (xk , rk ), () n= where V η(k), rk = x (k)Pl x(k), k– V η(k), rk = () k– η (s)Q η(s) + s=k–τ k– η (s)Q η(s) + s=k–τ (t) η (s)Q η(s), s=k–τ () Duan and Peng Advances in Difference Equations (2017) 2017:54 –τ Page of 17 –τ – k– k– V η(k), rk = η (s)Q η(s) + – ς (s)R ς(s) t–τ s=k+t s=t–τ + s=k+t k– ς (s)R ς(s), + () t=–τ s=k+t with ς(k) = η(k + ) – η(k) Let E { V (η(k), rk )} = E {V (η(k + ), rk+ = j) | rk = i – V (η(k), rk = i)} Then we have E V η(k), rk = η (k + )Pl η(k + ) – x (k)Pl x(k) = ϕ (k) [Ai,l e + Ai,l e + Di,l e ] Pl [Ai,l e + Ai,l e + Di,l e ] + [Ai,l e + Ai,l e + Di,l e ] Pl  × Bi,l  χn e+n + Ci,l n= κs e+s s=   χn e+n Bi,l Pl Bi,l e+n + + n= κs e+s Ci,l Pl Ci,l e+s s= + ( – )e (MKi,l Cj,l N ) Pl MKi,l Cj,l N e + ( – )e (MKi,l Cj,l N ) Pl MKi,l Cj,l N e ϕ(k), () where ϕ (k) = [η (k) η (k – τ (k)) η (k – τ ) η (k – τ ) f (η(k)) f (η(k)) g (η(k – τ (k))) g (η(k – τ (k))) ω (k)] Using Lemma .  ϕ (k)[Ai,l e + Ai,l e + Di,l e ] Pl Bi,l χn 
e+n ϕ(k) n= ≤ ϕ (k)[Ai,l e + Ai,l e + Di,l e ] Pl [Ai,l e + Ai,l e + Di,l e ]ϕ(k)  χn e+n Bi,l Pl Bi,l e+n ϕ(k), + ϕ (k) () n=  ϕ (k)[Ai,l e + Ai,l e + Di,l e ] Pl Ci,l κs e+s ϕ(k) s= ≤ ϕ (k)[Ai,l e + Ai,l e + Di,l e ] Pl [Ai,l e + Ai,l e + Di,l e ]ϕ(k)  κs e+s Ci,l Pl Ci,l e+n ϕ(k), + ϕ (k) () s= k E V η(k), rk k– =E η (s)Q η(s) – s=k–τ + k η (s)Q η(s) s=k–τ k– η (s)Q η(s) – + s=k–τ (t)+ η (s)Q η(s) s=k–τ (t) Duan and Peng Advances in Difference Equations (2017) 2017:54 Page of 17 k k– + η (s)Q η(s) – η (s)Q η(s) s=k–τ + s=k–τ ≤ ϕ (k) e (Q + Q + Q )e – e Q e – e Q e – e Q e ϕ(k) k–τ + η (s)Q η(s), () s=k–τ + –τ E V η(k), rk k k– =E η (s)Q η(s) – t=–τm + s=k+t+ –τ – η (s)Q η(s) s=k+t k k– ς (s)R ς(s) – + t=–τm s=k+t+ – ς (s)R ς(s) s=k+t k k– + ς (s)R ς(s) – t=–τm s=k+t+ ς (s)R ς(s) s=k+t k–τm = ϕ (k)e (τ Q + τ R + τ R )e ϕ(k) – η (k)Q η(k) s=k–τ + k–τm – – k– ς (k)R ς(k) – s=k–τ ς (k)R ς(k) () s=k–τ On the other hand, for any appropriately dimensioned matrices Hs (s = , , ), the following inequalities hold: k–  = ϕ (k)H (e – e )ϕ(k) – ς(s) , () ς(s) , () ς(s) () s=k–τ (k) k–τ –  = ϕ (k)H (e – e )ϕ(k) – s=k–τ (k) k–τ (k)–  = ϕ (k)H (e – e )ϕ(k) – s=k–τ From (), for any matrix variables Xnl >  and Ynl >  (n = , ), one has  ≤ – f η(k) – Ul η(k) Xl f η(k) – Ul η(k) ,  ≤ – f η(k) – Ul η(k) Xl f η(k) – Ul η(k) ,  ≤ – g η k – τ (k) – Vl η k – τ (k) Yl g η k – τ (k) – Vl η k – τ (k) ,  ≤ – g η k – τ (k) – Vl η k – τ (k) Yl g η k – τ (k) – Vl η k – τ (k) , which can be rewritten as  ≤ –ϕ (k)(e – Ul e ) Xl (e – Ul e )ϕ(k), ()  ≤ –ϕ (k)(e – Ul e ) Xl (e – Ul e )ϕ(k), () Duan and Peng Advances in Difference Equations (2017) 2017:54 Page 10 of 17  ≤ –ϕ (k)(e – Vl e ) Yl (e – Vl e )ϕ(k), ()  ≤ –ϕ (k)(e – Vl e ) Yl (e – Vl e )ϕ(k) () Combining ()–(), one has E V η(k), rk = E 
ϕ (k)( l + i,l )ϕ(k) k– H ϕ(k) + R λ(s) R–  H ϕ(k) + R λ(s) – s=k–τ (k) k–τ – H ϕ(k) + R λ(s) R–  H ϕ(k) + R λ(s) – s=k–τ (k) k–τ (k)– H ϕ(k) + (R + R )λ(s) (R + R )– H ϕ(k) + (R + R )λ(s) – s=k–τ ≤ E ϕ (k)( l + i,l )ϕ(k) , () where i,l = [Ai,l e + Ai,l e + Di,l e ] Pl [Ai,l e + Ai,l e + Di,l e ] + ( – )e (MKi,l Cj,l N ) Pl MKi,l Cj,l N e + ( – )e (MKi,l Cj,l N ) Pl MKi,l Cj,l N e – – + τ H R–  H + τ H (R + R ) H + τ H R H Let λ = min{ }, then λ >  due to E V η(k), rk Finally from (), we obtain, for any k ≥ , = E V η(k + ), rk+ = j | η(k), rk = i – E V η(k), rk = i ≤ (α – )η (k)Pl η(k) + ω (k)Sl ω(k) ≤ V η(k), rk + ω (k)Sl ω(k) () Taking mathematical expectation on both sides of inequality () and noting that α ≥ , it can be shown from () and () that E V η(k + ), rk+ < αV η(k), rk + λmax (Sl )E ω (k)ω(k) k– < · · · < α k E V η(), r + λmax (Sl )E α k–s– ω (k)ω(k) s= ≤ α k E V η(), r + λmax (Sl )α k d () Duan and Peng Advances in Difference Equations (2017) 2017:54 Page 11 of 17     In view of conditions (), letting Pl = R–  Pl R–  , Qs = R–  Qs R–  (s = , , ) and Rn =  –  R Rn R–  (n = , ), we obtain – E V η(), r = η ()Pl η() + – η (s)Q η(s) + s=–τ –τ – + – η (s)Q η(s) + s=–τ η (s)Q η(s) s=t–τ + s=t –τ – – – – λ (s)R λ(s) + + η (s)Q η(s) s=–τ (k) t–τ s=t λ (s)R λ(s) t=–τ s=t ≤ (λmax (Pl ) + τ λmax (Q ) + τM λmax (Q ) + τM λmax (Q )  + τ (τ + τ – )λmax (Q )]c   + τ (τ + τ – )λmax (R ) + τ (τ – )λmax (R ) ρ  () On the other hand, for all l ∈ L, it can be seen from () that E V η(k), rk ≥ E η (k)Pl η(k) ≥ λmin (Pl )η (k)Rη(k) () From () and (), we get η (k)Rη(k) < (ψ c + ψ ρ + λmax (Sl )d)α N λmin (Pl ) () Noting condition (), it can be derived from () and () that η (k)Rη(k) < c for all k ∈ {, , , N} Remark  To estimate the derivative of the Lyapunov functional, more information is 
needed on the slope of neuron activation functions f (η(k)) and g(η(k – τ (k))) derivative than [–], which yield less conservative results Remark  In this brief contribution, the UCL is introduced to save the communication resource, which was assumed to be perfect in the existing literature Hence, the applicability of SJNN subject to UCL is reasonable and relatively wide Remark  Note that the failures of sensors are mode-dependent and depict that the signal may vary between actuator and controller, which is extended to the filtering for T-S fuzzy stochastic jumping neural networks subject to UCLs Theorem . For given scalars N > , α > , c > , c > , and d > , the system () is SFTB if there exist symmetric matrices Pl = diag{Pl , Pl } > , Qs >  (s = , , ), Rn >  (n = , ), Sl > , and appropriately matrices Hs (s = , , ), Xnl > , Ynl >  (n = , ) such that, for any l ∈ L, the following LMIs hold: ij,l + ji,l <  (i < j), () Duan and Peng Advances in Difference Equations (2017) 2017:54 ii,l Page 12 of 17 < , () ψ  c + ψ ρ + λ d < c α –N , () where ⎡ ij,l l ⎢ =⎣ ij,l ∗ l ⎤ ∗ ⎥ ∗ ⎦, ij,l = () ij,l × () ij,l  () ij,l () ij,l ,  l l ⎤ ⎡√ √ √ πl In πl In · · · πlN In () √ √ ⎥ ⎢√ πl In · · · πlN In ⎦ , ij,l = ⎣ πl In √ √ √ πl In πl In · · · πlN In ⎤ ⎡ √ √ A¯ ij,l A¯ ij,l   () ⎥ ⎢√ ¯ ij,l ( – )M   ⎦ , ij,l = ⎣ √ ( – )N¯ ij,l    ⎡ ⎤ √ D¯ i,l     () ⎢ ⎥  ⎦, ij,l = ⎣         ⎤ ⎡√ χ B¯i,l     √ ⎢ χ B¯i,l   ⎥  () ⎥ ⎢ = √ ⎥, ⎢ ij,l ⎣ κ C¯i,l  ⎦   √ κ C¯i,l     ¯ ij,l = M l  –Zi,l Cj,l  ,  N¯ ij,l =  –Zi,l Cj,l  ,  = diag{–P – Pl , –P – Pl , , –Ph – Pl , –P – Pl , –P – Pl , , –Ph – Pl , h h –P – Pl , –P – Pl , , –Ph – Pl , –P – Pl , –P – Pl , , –Ph – Pl , h h –P – Pl , –P – Pl , , –Ph – Pl , –P – Pl , –P – Pl , , –Ph – Pl , }, h A¯ ij,l = Pl Ai,l ( – )Zi,l Cj,l A¯ i,l =  )Zi,l Cj,l ( – Pl Ci,l C¯i,l =  
h  , Pl Ai,l – Zi,l Cj,l  , –Zi,l Cj,l  , Pl Ci,l Di,l = Pl Bi,l B¯i,l =  Pl Pl  , Pl Bi,l Di,l , Di,l  ψ  = λ + τ λ + τM λ + τM λ + τ (τ + τ – )λ ,   ψ = τ (τ + τ – )λ + τ (τ – )λ ,  λ–  = max λmin (P l ), λmin (P l ) , l∈L λ = λmax (Q ), λ = λmax (Q ), Duan and Peng Advances in Difference Equations (2017) 2017:54 λ = λmax (Q ), Pmml = R –  λ = λmax (R ), Pmml R  Page 13 of 17  Rs = R–  Rn R–  –  (m = , ), λ = λmax (R ), Qs = R –  Qs R λ = max λmax (Sl ), l∈L –  (s = , , ), (n = , ) Moreover, the finite-time state estimator can be constructed by – Zi,l Ki,l = Pl () Proof Letting Pl = diag{Pl , Pl } Pre- and post-multiplying () by the block-diagonal matrix Pl = diag{I, I, I, I, I, I, I, I, I, Pl– , Pl– , Pl– , Pl– , Pl– , Pl– , I, I, I} and using the Schur complement lemma, one has q q hi ξ (k) hj ξ (k) ij,l < , () i= j= where ⎡ ⎢ ij,l = ⎣ ij,l l  l ⎡ ⎤ ∗ ⎥ ∗ ⎦, ∗ l ij,l = l () ij,l () ij,l () ij,l ×  l √ Aij,l  √ ( – )Nij,l √ Aij,l ⎢√ () ( – )Mij,l ij,l = ⎣  Mij,l = () ij,l  –Pl Ki,l Cj,l  ,  Nij,l =    , ⎤  ⎥ ⎦ ,   –Pl Ki,l Cj,l  ,  = diag –Pl P– Pl , –Pl P– Pl , , –Pl PN– Pl ,, –Pl P– Pl , –Pl P– Pl , , –Pl PN– Pl , N N –Pl P– Pl , –Pl P– Pl , , –Pl PN– Pl , –Pl P– Pl , –Pl P– Pl , , –Pl PN– Pl , N N –Pl P– Pl , –Pl P– Pl , , –Pl PN– Pl , –Pl P– Pl , –Pl P– Pl , , –Pl PN– Pl , N N A¯ ij,l = Pl Ai,l ( – )Pl Ki,l Cj,l A¯ i,l =  )Pl Ki,l Cj,l ( –  , Pl Ai,l – Pl Ki,l Cj,l  –Pl Ki,l Cj,l It follows that – [Pm – Pl ] ≥  [Pm – Pl ] Pm (m = , , , N), which leads to – Pm ≤ Pm – Pl –Pl Pm (m = , , , N) Duan and Peng Advances in Difference Equations (2017) 2017:54 Page 14 of 17 It follows from () that q q hi ξ (k) hj ξ (k) ij,l <  () i= j= Furthermore, condition () can be written as q q q hi ξ (k) hj ξ (k) [ i= ij,l + ji,l ] + j>i ii,l <  i= Illustrative 
example Example  Consider the T-S fuzzy Markovian jump neural network () involving two modes with the following parameters: A = .   , . C = . –. A = .  C = –. –. A = .  C = . –. A = .   , . C = . .  , . . , .  , . B = D = Ul = Vl = .   , . .   , . . –. . , . .   , . B = . –. . , . D = .  D = B =  , . .  .  . , . D = C = C = C = C = [ Ul = Vl = . , . . –. B = . , .  , . . –. . , –. . , –. ],  , . C = C = C = C = [–. Ul = Vl = .  . , –. l = ,  Moreover, assume that the transition rate matrix is given by = . . . . .], Duan and Peng Advances in Difference Equations (2017) 2017:54 Page 15 of 17 The nonlinear activation functions f (η(k)) and g(η(k)) are chosen as f η(k) = g η(k) = .η() + tan h .η() + .η() – .η() – tan h .η() and the membership functions h (η(k)) and h (η(k)) are defined as ⎧ ⎨.( – η(k)), |η(k)| < , h η(k) = ⎩, |η(k)| ≥  Given the initial values for R = I, c = , d = , N = , α = , τ =  and τ =  By using the Matlab Toolbox, one has minimum c = . Therefore, the normal augmented fuzzy Markovian jump neural network () is SFTB with respect to (, , I, , .) Remark  In view of the parameters given above, the sector bounds of the activation functions f (η(k)) and g(η(k – τ (k))) are [{Ul , Vl }, {Ul , Vl }] If the lower and upper bounds of the activation functions are introduced instead of the probability distribution information, . that is, χ = κ = , χ = κ =  and letting Ul = Vl = [ . ], the minimum c = .  –. However, if the probability information of the small and large activation functions is employed, one has minimum c = . Conclusions This paper is concerned with the finite-time state estimation problem for T-S fuzzy stochastic jumping neural networks under unreliable 
communication links. Stochastic variables subject to Bernoulli white sequences are employed to govern the nonlinearities occurring in different sector bounds. By employing a suitable Lyapunov-Krasovskii functional and using the Newton-Leibniz formula, sufficient conditions for the existence of the state estimator are given in terms of linear matrix inequalities. Finally, a numerical example has been offered to show the effectiveness of the proposed approach. The main results in this paper may be further extended to well-known dynamical models, such as fuzzy semi-Markovian jump systems, which will be dealt with by the authors in future work.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
All authors drafted the manuscript, and they read and approved the final version.

Acknowledgements
This work was supported by the Scientific and Technological Research Program of Chongqing Municipal Education Commission under Grant no. KJ1601009.

Received: 27 September 2016  Accepted: February 2017

References
1. Zhang, B, Xu, S, Zou, Y: Improved delay-dependent exponential stability criteria for discrete-time recurrent neural networks with time-varying delays. Neurocomputing 72(1-3), 321-330 (2008)
2. Liu, Y, Wang, Z, Liu, X: Robust stability of discrete-time stochastic neural networks with time-varying delays. Neurocomputing 71(4-6), 823-833 (2008)
3. Zhang, D, Shi, P, Zhang, W, Yu, L: Energy-efficient distributed filtering in sensor networks: a unified switched system approach. IEEE Trans. Cybern. (2016). doi:10.1109/TCYB.2016.2553043
4. Zhang, D, Shi, P, Wang, QG, Yu, L: Analysis and synthesis of networked control systems: a survey of recent advances and challenges. ISA Trans. 66, 376-392 (2017)
5. Pan, L, Cao, J: Exponential stability of stochastic functional differential equations with Markovian switching and delayed impulses via Razumikhin method. Adv. Differ. Equ. 2012, 61 (2012)
Wang, J, Yao, F, Shen, H: Dissipativity-based state estimation for Markov jump discrete-time neural networks with unreliable communication links Neurocomputing 139, 107-113 (2014) Shen, H, Huang, X, Zhou, J, Wang, Z: Global exponential estimates for uncertain Markovian jump neural networks with reaction-diffusion terms Nonlinear Dyn 69, 473-486 (2012) Zhang, D, Shi, P, Wang, QG: Energy-efficient distributed control of large-scale systems: a switched system approach Int J Robust Nonlinear Control 26, 3101-3117 (2016) Chen, Y, Zheng, W: Stochastic state estimation for neural networks with distributed delays and Markovian jump Neural Netw 25, 14-20 (2012) 10 Arunkumar, A, Sakthivel, R, Mathiyalagan, K, Park, JH: Robust stochastic stability of discrete-time fuzzy Markovian jump neural networks ISA Trans 53, 1006-1014 (2014) 11 Zhang, Y, Mu, J, Shi, Y, Zhang, J: Finite-time filtering for T-S fuzzy jump neural networks with sector-bounded activation functions Neurocomputing 186, 97-106 (2016) 12 Chang, X, Yang, G: New results on output feedback H∞ control for linear discrete-time systems IEEE Trans Autom Control 59(5), 1355-1359 (2014) 13 Chang, X: Robust nonfragile H∞ filtering of fuzzy systems with linear fractional parametric uncertainties IEEE Trans Fuzzy Syst 20(6), 1001-1011 (2012) 14 Shen, M, Ye, D: Improved fuzzy control design for nonlinear Markovian-jump Fuzzy Sets Syst 217, 80-95 (2013) 15 Wang, X, Fang, J, Dai, A, Zhou, W: Global synchronization for a class of Markovian switching complex networks with mixed time-varying delays in the delay-partition approach Adv Differ Equ 2014, 248 (2014) 16 Zhang, D, Cai, W, Xie, L, Wang, Q: Nonfragile distributed filtering for T-S fuzzy systems in sensor networks IEEE Trans Fuzzy Syst 23(5), 1883-1890 (2015) 17 Su, X, Shi, P, Wu, L, Nguang, SK: Induced l filtering of fuzzy stochastic systems with time-varying delays IEEE Trans Cybern 43(4), 1251-1264 (2013) 18 Su, X, Wu, L, Shi, P, Chen, CLP: Model approximation for fuzzy 
switched systems with stochastic perturbation IEEE Trans Fuzzy Syst 23(5), 1458-1473 (2015) 19 Su, X, Wu, L, Shi, P, Song, Y: A novel approach to output feedback control of fuzzy stochastic systems Automatica 50(12), 3268-3275 (2014) 20 Chang, X, Park, J, Tang, Z: New approach to H∞ filtering for discrete-time systems with polytopic uncertainties Signal Processing 113, 147-158 (2015) 21 Malinowski, MT: Strong solutions to stochastic fuzzy differential equations of Itô type Math Comput Model 55(3), 918-928 (2012) 22 Liu, F, Wu, M, He, Y, Yokoyama, R: New delay-dependent stability criteria for T-S fuzzy systems with time-varying delay Fuzzy Sets Syst 161, 2033-2042 (2010) 23 Pan, Y, Zhou, Q, Lu, Q, Wu, C: New dissipativity condition of stochastic fuzzy neural networks with discrete and distributed time-varying delays Neurocomputing 162, 250-260 (2015) 24 He, S, Liu, F: L2 – L∞ fuzzy control for Markov jump systems with neutral time-delays Math Comput Simul 92, 1-13 (2013) 25 Malinowski, MT: Some properties of strong solutions to stochastic fuzzy differential equations Inf Sci 252, 62-80 (2013) 26 Malinowski, MT, Agarwal, RP: On solutions to set-valued and fuzzy stochastic differential equations J Franklin Inst 352, 3014-3043 (2015) 27 Malinowski, MT: Set-valued and fuzzy stochastic differential equations in M-type Banach spaces Tohoku Math J 67(3), 349-381 (2015) 28 Malinowski, MT: Stochastic fuzzy differential equations of a nonincreasing type Commun Nonlinear Sci Numer Simul 33, 99-117 (2016) 29 Malinowski, MT: Fuzzy and set-valued stochastic differential equations with local Lipschitz condition IEEE Trans Fuzzy Syst 23(5), 1891-1898 (2015) 30 Malinowski, MT: Fuzzy stochastic differential equations of decreasing fuzziness: approximate solutions J Intell Fuzzy Syst 29(3), 1087-1107 (2015) 31 Malinowski, MT: Fuzzy stochastic differential equations of decreasing fuzziness: non-Lipschitz coefficients J Intell Fuzzy Syst 31(1), 13-25 (2016) 32 Shen, H, Zhu, Y, Zhang, L, 
Park, J: Extended dissipative state estimation for Markov jump neural networks with unreliable links IEEE Trans Neural Netw Learn Syst 28(2), 346-358 (2017) 33 Chen, M, Shen, H, Li, F: On dissipative filtering over unreliable communication links for stochastic jumping neural networks based on a unified design method J Franklin Inst 353(17), 4583-4601 (2016) 34 Li, H, Chen, Z, Wu, L, Lam, H: Event-triggered control for nonlinear systems under unreliable communication links IEEE Trans Fuzzy Syst (2016) doi:10.1109/TFUZZ.2016.2578346 35 Cheng, J, Zhu, H, Zhong, S, Zheng, F, Zeng, Y: Finite-time filtering for switched linear systems with a mode-dependent average dwell time Nonlinear Anal Hybrid Syst 15, 145-156 (2015) 36 Cheng, J, Zhu, H, Zhong, S, Zeng, Y, Dong, X: Finite-time H∞ control for a class of Markovian jump systems with mode-dependent time-varying delays via new Lyapunov functionals ISA Trans 52(6), 768-774 (2013) 37 Cheng, J, Park, JH, Zhang, L, Zhu, Y: An asynchronous operation approach to event-triggered control for fuzzy Markovian jump systems with general switching policies IEEE Trans Fuzzy Syst (2016) doi:10.1109/TFUZZ.2016.2633325 38 Shen, H, Park, JH, Wu, ZG, Zhang, Z: Finite-time H∞ synchronization for complex networks with semi-Markov jump topology Commun Nonlinear Sci Numer Simul 24(1-3), 40-51 (2015) 39 Cheng, J, Park, JH, Liu, Y, Liu, Z, Tang, L: Finite-time H∞ fuzzy control of nonlinear Markovian jump delayed systems with partly uncertain transition descriptions Fuzzy Sets Syst (2016) doi:10.1016/j.fss.2016.06.007 40 Tian, E, Yue, D, Wei, G: Robust control for Markovian jump systems with partially known transition probabilities and nonlinearities J Franklin Inst 350, 2069-2083 (2013) Duan and Peng Advances in Difference Equations (2017) 2017:54 Page 17 of 17 41 He, S, Liu, F: Finite-time H∞ fuzzy control of nonlinear jump systems with time delays via dynamic observer-based state feedback IEEE Trans Fuzzy Syst 20(4), 605-614 (2012) 42 Cheng, J, 
Li, G, Zhu, H, Zhong, S, Zeng, Y: Finite-time H∞ control for a class of Markovian jump systems with mode-dependent time-varying delay. Adv. Differ. Equ. 2013, 214 (2013)
43. Xu, Y, Lu, R, Zhou, K, Li, Z: Nonfragile asynchronous control for fuzzy Markov jump systems with packet dropouts. Neurocomputing 175, 443-449 (2016)
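As a numerical aside on the centroid defuzzification used in the preliminaries, the sketch below blends local linear models with normalized membership weights h_i(ξ(k)). The triangular membership shape and the matrices A_1, A_2 are illustrative placeholders, not the paper's example data:

```python
import numpy as np

# Grades of membership mu_ij for two fuzzy rules (illustrative shapes).
def memberships(xi):
    mu1 = max(0.0, 1.0 - abs(xi))   # rule 1 fires near xi = 0
    mu2 = 1.0 - mu1                 # rule 2 covers the rest
    return np.array([mu1, mu2])

def h(xi):
    mu = memberships(xi)
    return mu / mu.sum()            # normalized fuzzy basis, sums to 1

A = [np.array([[0.5, 0.1], [0.0, 0.4]]),   # local model of rule 1 (assumed values)
     np.array([[0.3, 0.0], [0.2, 0.6]])]   # local model of rule 2 (assumed values)

def step(x, xi):
    # Defuzzified dynamics: x(k+1) = sum_i h_i(xi) * A_i x(k)
    w = h(xi)
    return (w[0] * A[0] + w[1] * A[1]) @ x

x = np.array([1.0, -1.0])
for _ in range(5):
    x = step(x, x[0])   # premise variable taken as the first state component
```

At xi = 0 only rule 1 fires, so the blended matrix reduces to A_1; away from zero the dynamics interpolate smoothly between the two local models.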

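The Bernoulli packet-dropout model used above for the unreliable communication link is easy to check by simulation. The sketch below writes the dropout variable as `beta` (our notation, since the extraction lost the paper's symbol) and verifies the two moment identities E{β(k) − β̄} = 0 and E{(β(k) − β̄)²} = β̄(1 − β̄):

```python
import random

random.seed(0)
beta_bar = 0.8   # probability that the link delivers y(k) (assumed value)

def link(y, beta):
    # y_as(k) = beta(k) * y(k): the measurement is either received intact
    # (beta = 1) or lost entirely (beta = 0)
    return beta * y

N = 200_000
samples = [1 if random.random() < beta_bar else 0 for _ in range(N)]
mean = sum(samples) / N
var = sum((b - beta_bar) ** 2 for b in samples) / N
# mean should be close to beta_bar, and var close to beta_bar * (1 - beta_bar)
```

With 200,000 draws the empirical mean and variance match the stated moments to within about one percent, which is exactly the statistical information the filter design exploits.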