
STABILITY OF POSITIVE SOLUTIONS OF NONLINEAR DIFFERENTIAL EQUATIONS WITH DELAYS IN NEURAL NETWORKS


Contents

  • A. Motivation
  • B. Literature review
  • C. Methodology
  • D. Research topics
  • E. Main contributions
  • F. Thesis outline
    • 1. PRELIMINARIES
      • 1.1. Nonnegative matrix and M-matrix
      • 1.2. Functional differential equations and Lyapunov stability
      • 1.3. Positive systems and stability of nonlinear systems
        • 1.3.1. Linear positive systems
        • 1.3.2. Exponential stability of positive Hopfield neural networks
      • 1.4. Conformable fractional derivative
      • 1.5. Some other auxiliary results
    • 2. STABILITY OF NONLINEAR POSITIVE TIME-DELAY SYSTEMS IN BAM-COHEN-GROSSBERG NEURAL NETWORKS
      • 2.1. Model description and preliminaries
        • 2.1.1. Existence and uniqueness of solutions
        • 2.1.2. Positive solutions and equilibrium points
      • 2.2. Positive solutions
      • 2.3. Positive equilibria
      • 2.4. Exponential stability of positive equilibrium point
      • 2.5. Simulations
    • 3. POSITIVE SOLUTIONS AND EXPONENTIAL STABILITY OF CONFORMABLE BAM NEURAL NETWORKS
      • 3.1. Model description and preliminaries
      • 3.2. Positive solutions
      • 3.3. Positive equilibria
      • 3.4. Fractional exponential stability
      • 3.5. Applications and simulations
        • 3.5.1. An application to fractional linear systems with delay
        • 3.5.2. Simulations
    • 4. EXPONENTIAL STABILITY OF INERTIAL BAM NEURAL NETWORKS
      • 4.1. Model description
      • 4.2. Global existence of solutions
      • 4.3. State transformations and positive solutions
      • 4.4. Exponential stability of positive equilibrium point of inertial BAM neural networks
        • 4.4.1. Equilibrium
        • 4.4.2. Exponential stability of positive EP of INNs
      • 4.5. Simulations


STABILITY OF POSITIVE SOLUTIONS OF NONLINEAR DIFFERENTIAL EQUATIONS WITH DELAYS IN NEURAL NETWORKS

Motivation

Stability theory is one of the top-priority research topics in the qualitative theory of differential equations and, more generally, in systems and control theory. Dating back to the pioneering work of Lyapunov [51], stability theory has been extensively developed. Its intrinsic interest and relevance have been found in a variety of disciplines such as mechanics, physics, chemistry, ecology and artificial intelligence.

Appearing naturally in practice, many models in population dynamics, economic growth or labor migration are described by dynamic systems whose states are always nonnegative when the initial states and inputs are nonnegative. Such models are described by the so-called positive systems [18]. Applications of positive systems can be found in various disciplines, from physics, chemistry, ecology and epidemiology to economics, control engineering and telecommunication networks [4, 35, 69, 73]. Research on positive systems shows that, besides a wide range of applications, positive systems also possess many properties that are not found in general systems. For example, based on the monotonicity and robustness induced by positivity, positive systems are employed in the design of interval observers, state estimation and stability analysis of nonlinear systems. Thus, owing to these theoretical and practical features, the theory of positive systems has received ever-increasing interest in the past few years. While the theory of positive systems has been intensively studied for various kinds of linear systems, this area is still considerably less well-developed for nonlinear systems, in particular for models arising in artificial and/or biological neural networks. Typically, the dynamics of such a network is represented by a system of nonlinear differential equations with or without delay.

In the past two decades, the study of the qualitative behavior of nonlinear systems describing various types of neural networks has attracted significant research attention due to a wide range of applications [7, 21–24, 30, 41, 45, 58, 77].

The terminology of neural networks, which appeared in the late 1800s, was introduced by scientists studying the function of the human brain, with the desire to design computers that could work like the human brain: capable of learning from databases, remembering experiences and using them in appropriate situations. Over a history of more than 200 years, with the advent of computers, the study of neural networks has undergone extensive development and has obtained many important results, widening the ability of the computer technology industry to recognize and adapt. Using ideas from experimental results in studies of the human brain, many intelligent computers were invented, with components acting like neurons or sets of neurons and connections between those components acting like the synapses of neurons.

There are many kinds of neural networks mentioned in the literature. Due to their specific structure and practical applicability, some popular models of neural networks, such as Hopfield neural networks (HNNs), Cohen-Grossberg neural networks (CGNNs), inertial neural networks (INNs) and bidirectional associative memory networks (BAMs), have been widely studied [5, 14, 52, 56]. However, very little attention has been devoted to the study of positive nonlinear systems in neural networks. For neural systems, the nonlinearity of the neuron activation functions makes the study of positive neural networks more complicated and challenging, which requires in-depth knowledge and specific techniques.

In industrial processes, engineering systems and biological processes, time delay is often encountered. For example, in the Mackey-Glass model of blood cell production, control of cell production at a given time is based on the number of blood cells at that time. However, there is a significant delay between the start of cell production and the release of mature cells into the blood. This means that the change in blood volume at one time depends not only on the instantaneous blood volume but also on the delayed one. Time delay can be defined as the period it takes for a signal, after being transmitted through a system, to reach the operating element. Most importantly, it is rooted in many human-made and natural systems, such as industrial and engineering applications, where delays typically describe the transmission of materials or energy in physical systems or data transmission in communication systems. In fact, the occurrence of time delay is inevitable because delays arise spontaneously during data transmission or due to equipment and technology limitations. Many examples of time-delay systems can be found in transmission lines and telecommunication networks [76]. The presence of time delay normally changes the long-term behavior of solutions and heavily affects the stability of the system, which is an important universal property of application models. The stability analysis and control of such systems is important both theoretically and practically in order to overcome instability and poor performance. Therefore, the field of control engineering cannot be separated from research on systems with time delays [76]. The analysis of delay systems has attracted considerable interest, particularly in the past two decades. A frequent research topic is stability or strong stability, and it has undergone remarkable development both conceptually and computationally. In addition, because of the infinite-dimensional nature of the phase space, the qualitative study of time-delay systems is much more complicated than that of ordinary differential equation systems. A very rich literature with numerous important results concerning the systems and control theory of time-delay systems has been reported. For a few references, we refer the reader to [26, 27, 46, 48], the survey paper [81] and the references therein.

Along with the development of mathematics, research results on time-delay systems have been obtained not only for integer-order derivatives but also for fractional-order derivatives. The subject of fractional calculus is known as one of the most powerful and efficient mathematical tools due to its recognized applications in many fields. Formed shortly after the development of conventional calculus, in the late 17th century, fractional calculus has great relevance to the dynamics of complicated real-world problems. Many mathematical models are accurately governed by fractional-order differential equations. The first systematic studies were attributed to Riemann, Liouville, Caputo and others. In early times, fractional calculus was regarded as a purely mathematical realm without real applications, but this state of affairs has changed in recent decades. Fractional calculus and its applications are undergoing rapid development with numerous convincing applications. The multidisciplinary applicability of fractional calculus has been pointed out in many contexts, such as continuum mechanics, signal analysis, quantum mechanics, biomedicine, bioengineering, financial systems and other branches of pure and applied mathematics.

After a long period of evolution and advancement, artificial intelligence, which has had a profound impact on almost all fields of science and engineering, has gradually revolutionized all aspects of today's life. Neural networks, which form a stepping stone in the search for artificial intelligence, hold a very important position in this regard. It has been verified that, under some conditions, neural networks can exhibit disorderly behavior and complicated dynamics. The development of novel methods and new models, as well as the extension of existing techniques for the analysis of chaotic neural networks, along with their broadband applications and random-like behavior, are of particular importance. Thus, more studies need to be carried out in this field. One of the major challenges for scientists is finding reliable tools to model real-world phenomena [36]. Fortunately, fractional calculus provides an appropriate facility to address this problem. In describing the long-term memory of systems, fractional-order derivatives have been shown to have many advantages over integer-order derivatives. Furthermore, studies of many physical models show that memory properties change over time, so the dynamics of such systems are more accurately described through variable-order fractional derivatives. Considering the role of neural networks as an indispensable part of artificial intelligence, as well as the potential of fractional calculus in various research fields, considerable attention has been devoted to the application of fractional-order calculus in neural networks.

Research on the qualitative behavior and stability of nonlinear neuronal systems with delays remains a significant area of interest in systems and control theory. Despite progress in understanding positive systems, challenges persist in studying the stability of positive solutions in ecological models with multiple delays, particularly in models with realistic and complex structures. These difficulties stem from technical limitations of current approaches, motivating research on positive solutions and stability in nonlinear differential equations with delays.

Literature review

B1 Stability of positive BAM neural networks

The bidirectional associative memory (BAM) model of neural networks, introduced by Kosko in 1988 [44], was first used to study the stability and encoding properties of two-layer nonlinear feedback neural networks. Specifically, a BAM network is constructed from neurons arranged into two layers, called layer X and layer Y. Neurons are connected in such a way that each neuron in one layer is fully connected to the neurons in the other layer, while there are no connections between neurons in the same layer. This structure represents a bidirectional search process between pairs of bipolar data and is a generalization of Hebbian self-correlation from one layer to heterogeneously coupled two-layer circuits. The model has many application prospects in the areas of pattern recognition and signal and image processing, which has attracted considerable research attention. Many problems in systems and control theory, network analysis and design have been studied for the BAM network model with delays and its variants. Another model of neural networks, namely the Cohen-Grossberg model [15], has also been extensively investigated in the past few decades [25, 46]. Cohen-Grossberg neural networks (CGNNs), which have been widely applied in various scientific and engineering fields, such as computing technology, population biology and neurobiology, include many ecological models and neural networks, such as the Lotka-Volterra model in population dynamics and Hopfield neural networks. In such applications, the design problem that ensures stability of the network is of prime importance. However, most existing works in the literature have been devoted to specific types of BAM neural networks described by the Hopfield model [6, 47, 54, 71]. There have been only a few results concerning stability analysis of BAM neural networks in the Cohen-Grossberg model. For example, in [3], the problem of asymptotic stability of neutral-type BAM-Cohen-Grossberg neural networks with mixed discrete and distributed time-varying delays was studied via the Lyapunov-Krasovskii functional (LKF) method and the linear matrix inequality (LMI) approach. The fixed-time stabilization problem was investigated for impulsive BAM-Cohen-Grossberg neural networks without delay in [49]. Based on comparison techniques via differential inequalities, fixed-time stability and stabilization conditions were derived in terms of matrix inequalities involving the settling time and the frequency of impulses.

Despite the elegant properties and potential applications of positive BAM neural networks with delays, studies concerning the stability and control of such systems are quite scarce. In [31], the problem of exponential stability was first studied for positive nonlinear systems describing a model of BAM-Hopfield neural networks with delays. Based on extended comparison techniques via differential inequalities combined with M-matrix theory and Brouwer's fixed point theorem, the existence and global exponential stability of a unique positive equilibrium of the system were derived through tractable linear programming (LP) conditions. Over the past decade, the appearance of research on the BAM network model with delays in prestigious journals shows the special interest of mathematicians in this model. However, the problems of positivity and global exponential stability of the positive equilibrium of BAM-Cohen-Grossberg neural networks with time-varying delays have not been addressed in the literature. In addition, it should be mentioned that the methods and existing results developed for Hopfield-type BAM neural networks cannot simply be extended to BAM-CG neural networks with delays, owing to the nature of their structure. This requires further investigation.

B2 Stability of positive conformable BAM neural networks

The theory of fractional calculus is one of the most active research areas of the past few years due to its demonstrated applications in numerous practical models such as data analysis, intelligent control, associative memory and optimization [40]. It has been recognized that fractional calculus and fractional differential equations (FrDEs) can describe many physical phenomena more adequately than integer-order models. Several different approaches to fundamental concepts such as fractional derivatives (FrDs) and fractional integrals (FrIs) have been developed, in the senses of Riemann-Liouville, Caputo or Grünwald-Letnikov [40, 68]. For example, the Riemann-Liouville approach is constructed by iterating the integral operator n times and combining this with the Cauchy formula, while the Grünwald-Letnikov approach is based on iterating the derivative n times and using the Gamma function in the binomial coefficients. The concepts of FrDs formulated in this direction are quite complicated in applications and have some common drawbacks. For instance, some basic properties of ordinary derivatives, like the product rule or the chain rule, are not preserved for FrDs. In addition, the monotonicity of a function typically cannot be determined from its FrDs in certain senses.

To overcome some drawbacks of existing FrDs, the authors of [39] proposed a new well-behaved simple derivative called the conformable fractional derivative (CFD).

Basic results in the calculus of functions subject to the CFD were also developed in [1, 57]. Compared with the classical fractional derivatives, the conformable fractional derivative has two advantages. First, its definition is natural and it satisfies most of the properties of the classical integer-order derivative, such as linearity, vanishing derivatives for constant functions, Rolle's theorem, the mean value theorem, the product rule, the quotient rule, the power rule and the chain rule. Second, since differential equations with the conformable fractional derivative are easier to solve numerically than those associated with the Riemann-Liouville or Caputo fractional derivatives, the conformable derivative brings a lot of convenience when it is applied to modeling many physical problems. Recently, conformable fractional-order systems have attracted considerable research attention, and a number of interesting results involving various aspects of the analysis and control of dynamical systems described by conformable fractional-order differential equations, with or without delays, have been published. For a few references, we refer the reader to the recent works [10, 17, 59–62, 70, 78].
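As a minimal numerical sketch of these properties, the snippet below assumes the standard definition of the CFD from [39], namely $T_\alpha f(t)=\lim_{\epsilon\to 0}[f(t+\epsilon t^{1-\alpha})-f(t)]/\epsilon$ for $t>0$, and checks two of the facts mentioned above: the identity $T_\alpha f(t)=t^{1-\alpha}f'(t)$ for differentiable $f$, and the product rule. The functions and test point are illustrative choices, not taken from the thesis.

```python
import numpy as np

def conformable_diff(f, t, alpha, eps=1e-6):
    """Finite-difference approximation of the conformable derivative
    T_alpha f(t) = lim_{e->0} (f(t + e*t**(1-alpha)) - f(t)) / e  (t > 0)."""
    return (f(t + eps * t**(1 - alpha)) - f(t)) / eps

alpha, t = 0.7, 2.0
f, g = np.sin, np.exp

# Identity T_alpha f(t) = t**(1-alpha) * f'(t) for differentiable f
lhs = conformable_diff(f, t, alpha)
rhs = t**(1 - alpha) * np.cos(t)
print(abs(lhs - rhs))          # ~1e-6: the identity holds numerically

# Product rule: T_alpha(f*g) = f * T_alpha(g) + g * T_alpha(f)
fg = lambda s: f(s) * g(s)
lhs = conformable_diff(fg, t, alpha)
rhs = f(t) * conformable_diff(g, t, alpha) + g(t) * conformable_diff(f, t, alpha)
print(abs(lhs - rhs))          # small: the product rule is preserved
```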

The research topic of fractional neural networks (FrNNs) has received growing attention in recent years. Some important issues in analysis, such as stability, passivity, dissipativity and identification, have been extensively studied and developed for neural network models with delays [34, 65, 74, 84]. Research on conformable fractional derivatives has also continuously increased in recent years. However, many issues regarding the stability of conformable FrDEs remain open. In particular, the positivity characterization and the existence, uniqueness and exponential stability of solutions of conformable delay systems in the BAM neural network model have not been studied.

B3 Stability of positive inertial BAM neural networks

In their publication in 1986, Babcock and Westervelt showed that the dynamics can be complex when the neural couplings include an inertial nature, by comparing electronic neural networks with the standard resistor-capacitor variety [8].

An inertial neural network (INN) is represented by a second-order differential system, where the inertial terms are described by first-order derivative terms. Inertial neural networks (INNs) have important implications in biology, engineering technology and information systems; see [8, 43, 75]. Over the past few years, many studies on inertial neural networks have been published. For example, by matrix measure strategies, [72] and [13] considered the dissipativity, stability and synchronization of inertial delayed neural networks. By using analysis methods and inequality techniques, [38] and [67] studied the global exponential stability, in the Lyapunov sense, of inertial neural networks.

In [82], the exponential stability of inertial BAM neural networks with time-varying delay was analyzed through intermittent control. Hien and Hai-An [32] explored the positivity of solutions and the exponential stability of the positive equilibrium of INNs with multiple time-varying delays using comparison principles and homeomorphisms. Despite these advancements, the positivity of global solutions and the exponential stability of the positive equilibrium of inertial BAM neural networks with delays remain unexplored, motivating the present study.

Methodology

In this thesis, we utilize a combination of comparison principles via differential and integral inequalities, nonlinear functional analysis and specific tools from control theory.

In particular, we develop novel comparison techniques, through M-matrix theory, to establish stability conditions in the form of tractable LP conditions, which can be effectively solved by various computational tools and algorithms.
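To illustrate how LP-based conditions of this kind can be checked numerically, the following hypothetical sketch (not code from the thesis) uses a standard characterization: a Metzler matrix M is Hurwitz if and only if there exists a vector ξ with positive entries such that Mξ has negative entries, which is a plain linear feasibility problem.

```python
import numpy as np
from scipy.optimize import linprog

def metzler_hurwitz_lp(M, margin=1e-6):
    """Search for xi > 0 with M @ xi < 0, a standard LP test that the
    Metzler matrix M is Hurwitz (and -M is a nonsingular M-matrix)."""
    n = M.shape[0]
    res = linprog(c=np.zeros(n),                 # pure feasibility problem
                  A_ub=M, b_ub=-margin * np.ones(n),
                  bounds=[(margin, None)] * n,
                  method="highs")
    return res.success, (res.x if res.success else None)

# A hypothetical Metzler matrix (nonnegative off-diagonal entries)
M = np.array([[-3.0, 1.0, 0.5],
              [ 0.8, -2.5, 0.4],
              [ 0.2, 0.6, -2.0]])
ok, xi = metzler_hurwitz_lp(M)
print(ok, xi)   # True together with a feasible positive vector xi
```

The same feasibility test scales to large networks, which is the practical appeal of LP-based stability conditions.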

Research topics

The following topics will be taken into consideration in this thesis.

D1 Stability of nonlinear time-delay systems in BAM-Cohen-Grossberg neural networks

Consider the following system of differential equations with delays:
$$\begin{aligned}
x_i'(t) &= \alpha_i(x_i(t))\Big[-\delta_i\varphi_i(x_i(t)) + \sum_{j=1}^{m} a_{ij}f_j(y_j(t)) + \sum_{j=1}^{m} b_{ij}f_j\big(y_j(t-\sigma_j(t))\big) + I_i\Big],\\
y_j'(t) &= \beta_j(y_j(t))\Big[-\rho_j\psi_j(y_j(t)) + \sum_{i=1}^{n} c_{ji}g_i(x_i(t)) + \sum_{i=1}^{n} d_{ji}g_i\big(x_i(t-\tau_i(t))\big) + J_j\Big],
\end{aligned}\qquad t\ge t_0, \tag{1}$$
for $i\in[n]$, $j\in[m]$.

System (1) describes a type of BAM-Cohen-Grossberg neural network with time-varying delays and nonlinear self-excitation rates of $n$ neurons in the X-layer and $m$ neurons in the Y-layer. More details on the description of model (1) will be presented in Chapter 2. The communication delays $\tau_i(t)$ and $\sigma_j(t)$ satisfy
$$0\le \tau_i(t)\le \bar\tau,\qquad 0\le \sigma_j(t)\le \bar\sigma, \tag{2}$$
where $\bar\tau$ and $\bar\sigma$ are known positive constants. $I_i$ and $J_j$ denote the external inputs to the $i$th neuron and the $j$th neuron, respectively. Initial conditions associated with system (1) are specified as follows:
$$x(t_0+\xi) = x_0(\xi),\ \xi\in[-\bar\tau,0],\qquad y(t_0+\theta) = y_0(\theta),\ \theta\in[-\bar\sigma,0], \tag{3}$$
where $x_0\in C([-\bar\tau,0],\mathbb{R}^n)$ and $y_0\in C([-\bar\sigma,0],\mathbb{R}^m)$ are initial functions.

The objective is to establish the positivity of global solutions and the existence, uniqueness and exponential stability of a unique positive equilibrium of system (1). Based on novel comparison techniques via differential inequalities, unified conditions for the existence and exponential stability of a unique positive equilibrium of model (1) are derived in terms of tractable LP-based conditions.
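A simple way to see these two properties numerically is to integrate the delayed system with an explicit Euler scheme. The sketch below assumes the componentwise form of (1) shown above, linear decay functions, constant delays and entirely hypothetical parameter values; it only illustrates that, with nonnegative weights, inputs and history, trajectories stay nonnegative and settle near a positive equilibrium.

```python
import numpy as np

# Hypothetical dimensions and parameters (for illustration only)
n = m = 2
delta = np.array([2.0, 2.5]); rho = np.array([2.2, 2.0])
A = np.array([[0.3, 0.2], [0.1, 0.4]])      # a_ij >= 0
B = np.array([[0.2, 0.1], [0.2, 0.1]])      # b_ij >= 0
C = np.array([[0.25, 0.15], [0.1, 0.3]])    # c_ji >= 0
D = np.array([[0.1, 0.2], [0.15, 0.1]])     # d_ji >= 0
I = np.array([1.0, 0.8]); J = np.array([0.9, 1.1])
tau, sigma = 0.5, 0.8                       # constant delays within the bounds

amp  = lambda u: 1.0 + 0.2 / (1.0 + u**2)   # amplification functions in [1, 1.2]
f = g = np.tanh                             # activation functions, f(0) = g(0) = 0
phi = psi = lambda u: u                     # linear decay rate functions

h, T = 1e-3, 20.0
N = int(T / h); d_tau, d_sig = int(tau / h), int(sigma / h)
x = np.zeros((N + 1, n)); y = np.zeros((N + 1, m))
x[0] = [0.3, 0.1]; y[0] = [0.2, 0.4]        # nonnegative constant initial history

for k in range(N):
    y_del = y[max(k - d_sig, 0)]            # delayed states (constant history before t0)
    x_del = x[max(k - d_tau, 0)]
    x[k+1] = x[k] + h * amp(x[k]) * (-delta * phi(x[k]) + A @ f(y[k]) + B @ f(y_del) + I)
    y[k+1] = y[k] + h * amp(y[k]) * (-rho   * psi(y[k]) + C @ g(x[k]) + D @ g(x_del) + J)

print(x.min(), y.min())    # stays nonnegative: positivity of the trajectory
print(x[-1], y[-1])        # settles near a positive equilibrium
```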

D2 Stability of conformable BAM-Hopfield neural networks with delays

In Chapter 3, we consider a class of differential equations with delays described by the conformable fractional derivative (CFD). This type of differential equation can be used to describe the dynamics of various practical models, including biological and artificial neural networks with heterogeneous time-varying delays. Consider the following system:
$$\begin{aligned}
{}^{c}D^{\alpha}_{t_0}x(t) &= -D_\delta x(t) + Af(y(t)) + Bf\big(y(t-\sigma(t))\big) + I,\\
{}^{c}D^{\alpha}_{t_0}y(t) &= -D_\rho y(t) + Cg(x(t)) + Dg\big(x(t-\tau(t))\big) + J,
\end{aligned}\qquad t > t_0. \tag{4}$$

System (4), described using the conformable fractional derivative of order $\alpha$, models the dynamics of neural networks in the BAM model. For a given initial time $t_0\ge 0$ and positive constants $\bar\sigma$ and $\bar\tau$, the initial conditions (5) specify the values of $x(t)$ and $y(t)$ on the intervals $[-\bar\tau,0]$ and $[-\bar\sigma,0]$, respectively.

By novel comparison techniques via fractional differential and integral inequalities, unified conditions, in terms of tractable LP-based conditions, for the existence and exponential stability of a unique positive equilibrium of the CFD model (4) are derived.

D3 Stability of positive inertial BAM neural network model

In Chapter 4, we consider a model of inertial BAM neural networks with delays described by the following system of second-order differential equations:
$$\begin{aligned}
x_i''(t) &= -a_i x_i'(t) - c_i x_i(t) + \sum_{j=1}^{m} r_{ij}f_j(y_j(t)) + \sum_{j=1}^{m} s_{ij}f_j\big(y_j(t-\sigma_j(t))\big) + I_i,\\
y_j''(t) &= -b_j y_j'(t) - d_j y_j(t) + \sum_{i=1}^{n} p_{ji}g_i(x_i(t)) + \sum_{i=1}^{n} q_{ji}g_i\big(x_i(t-\tau_i(t))\big) + J_j,
\end{aligned}\qquad t\ge t_0, \tag{6}$$
for $i\in[n]$, $j\in[m]$.

The initial conditions of system (6) are given by compatible functions $\varphi$, $\varphi_d$, $\psi$ and $\psi_d$ defined on appropriate domains. Leveraging novel comparison techniques, tractable conditions based on M-matrix theory and the parameters of system (6) are derived to guarantee the positivity of solutions and the existence of a unique equilibrium point (EP). These conditions ensure that the unique EP is positive and globally attractive.

Main contributions

This dissertation investigates the positivity of solutions and exponential stability of positive equilibrium in nonlinear differential equations with delays, focusing on various types of neural networks The main contributions include the analysis of the positivity of solutions and the establishment of the exponential stability conditions for the positive equilibrium of such systems.

1. Addressed the positivity of solutions and exponential stability of the positive equilibrium for delayed BAM-Cohen-Grossberg neural networks incorporating nonlinear self-excitation rates. New conditions, derived via novel comparison techniques, guarantee the positivity of solutions and the exponential stability of the positive equilibrium. These conditions are expressed in terms of linear programming (LP), making them computationally efficient and applicable to practical network design.

2. Proved the positivity of solutions and derived tractable conditions for the global exponential stability of a unique positive equilibrium of conformable BAM neural networks with communication delays.

3. Established LP-based conditions ensuring the positivity of solutions and the global exponential stability of a unique positive equilibrium of inertial BAM neural networks with bounded delays.

The aforementioned results have been published in three papers in international journals indexed in Web of Science/Scopus (ranked Q2, Q3 in the Scimago database) and have been presented at:

  • The weekly seminar on Differential and Integral Equations, Division of Mathematical Analysis, Faculty of Mathematics and Informatics, Hanoi National University of Education.

• PhD Annual Conferences, Faculty of Mathematics and Informatics, Hanoi Na- tional University of Education, 2021-2023.

Thesis outline

STABILITY OF NONLINEAR POSITIVE TIME-DELAY SYSTEMS IN BAM-COHEN-GROSSBERG NEURAL NETWORKS

In this chapter, we consider the problems of positivity and global exponential stability of a unique positive equilibrium of the following BAM-Cohen-Grossberg neural network with multiple time-varying delays and nonlinear self-excitation rates:
$$\begin{aligned}
x_i'(t) &= \alpha_i(x_i(t))\Big[-\delta_i\varphi_i(x_i(t)) + \sum_{j=1}^{m} a_{ij}f_j(y_j(t)) + \sum_{j=1}^{m} b_{ij}f_j\big(y_j(t-\sigma_j(t))\big) + I_i\Big],\\
y_j'(t) &= \beta_j(y_j(t))\Big[-\rho_j\psi_j(y_j(t)) + \sum_{i=1}^{n} c_{ji}g_i(x_i(t)) + \sum_{i=1}^{n} d_{ji}g_i\big(x_i(t-\tau_i(t))\big) + J_j\Big],
\end{aligned}\qquad t\ge t_0. \tag{2.1}$$

By utilizing the Brouwer fixed point theorem and novel comparison techniques via differential and integral inequalities, tractable conditions that guarantee the exponential stability of a unique positive equilibrium of system (2.1) are derived using M-matrix theory. The main results of this chapter are based on the paper [P1] in the List of publications.

Consider system (2.1), where $n$, $m$ represent the numbers of neurons in the X-layer and the Y-layer, respectively, and $i\in[n]$, $j\in[m]$; $x_i(t)$ and $y_j(t)$ represent the state variables of the $i$th cell in field $F_X$ and the $j$th cell in field $F_Y$; $\alpha_i(x_i)$ and $\beta_j(y_j)$ are neural amplification functions; $\varphi_i(x_i)$, $\psi_j(y_j)$ are nonlinear decay rate functions; and $\delta_i>0$, $\rho_j>0$ are self-inhibition coefficients. For linear decay rate functions (that is, $\varphi_i(x_i)=x_i$ and $\psi_j(y_j)=y_j$), $\delta_i$ and $\rho_j$ are the rates at which the $i$th and $j$th neurons reset their potential to the resting state in isolation when disconnected from the network and external inputs. In system (2.1), $f_j$, $g_i$ are neuron activation functions and $a_{ij}$, $b_{ij}$, $c_{ji}$, $d_{ji}$ are connection weights, which represent the strengths of connectivity between the $j$th neuron in $F_Y$ and the $i$th neuron in $F_X$. $I_i$ and $J_j$ are external inputs to the $i$th neuron and the $j$th neuron, respectively. The functions $\tau_i(t)$ and $\sigma_j(t)$ denote communication delays between neurons, which satisfy
$$0\le \tau_i(t)\le \bar\tau,\qquad 0\le \sigma_j(t)\le \bar\sigma, \tag{2.2}$$
where $\bar\tau$ and $\bar\sigma$ are known positive constants. Initial conditions associated with system (2.1) are specified as follows:
$$x(t_0+\xi) = x_0(\xi),\ \xi\in[-\bar\tau,0],\qquad y(t_0+\theta) = y_0(\theta),\ \theta\in[-\bar\sigma,0], \tag{2.3}$$
where $x_0\in C([-\bar\tau,0],\mathbb{R}^n)$ and $y_0\in C([-\bar\sigma,0],\mathbb{R}^m)$ are initial functions. For convenience, we denote the matrices
$$\alpha(x)=\mathrm{diag}\{\alpha_1(x_1),\ldots,\alpha_n(x_n)\},\qquad \beta(y)=\mathrm{diag}\{\beta_1(y_1),\ldots,\beta_m(y_m)\},$$
$$D_\delta=\mathrm{diag}\{\delta_1,\ldots,\delta_n\},\qquad D_\rho=\mathrm{diag}\{\rho_1,\ldots,\rho_m\},$$
and $A=(a_{ij})$, $B=(b_{ij})\in\mathbb{R}^{n\times m}$, $C=(c_{ji})$, $D=(d_{ji})\in\mathbb{R}^{m\times n}$. In addition, we also use the vectors $I=(I_i)\in\mathbb{R}^n$, $J=(J_j)\in\mathbb{R}^m$ and the vector-valued functions
$$\Phi(x(t))=\mathrm{col}(\varphi_i(x_i(t))),\qquad \Psi(y(t))=\mathrm{col}(\psi_j(y_j(t))),$$
$$f(y(t))=\mathrm{col}(f_j(y_j(t))),\qquad f(y(t-\sigma(t)))=\mathrm{col}\big(f_j(y_j(t-\sigma_j(t)))\big),$$
$$g(x(t))=\mathrm{col}(g_i(x_i(t))),\qquad g(x(t-\tau(t)))=\mathrm{col}\big(g_i(x_i(t-\tau_i(t)))\big).$$

Then, system (2.1) can be written in the following vector form:
$$\begin{aligned}
x'(t) &= \alpha(x(t))\big[-D_\delta\Phi(x(t)) + Af(y(t)) + Bf(y(t-\sigma(t))) + I\big],\\
y'(t) &= \beta(y(t))\big[-D_\rho\Psi(y(t)) + Cg(x(t)) + Dg(x(t-\tau(t))) + J\big].
\end{aligned} \tag{2.4}$$

2.1.1 Existence and uniqueness of solutions

Let $\mathcal{D}$ denote the set of continuous functions $\varphi:\mathbb{R}\to\mathbb{R}$ that satisfy $\varphi(0)=0$ and for which there exist positive scalars $c_\varphi$, $\hat c_\varphi$ such that
$$c_\varphi \le \frac{\varphi(u)-\varphi(v)}{u-v} \le \hat c_\varphi \tag{2.5}$$
for all $u,v\in\mathbb{R}$, $u\ne v$. Clearly, the function class $\mathcal{D}$ includes all linear functions $\varphi(u)=c_\varphi u$, where $c_\varphi$ is some positive scalar.

(B1) $\alpha_i(\cdot)$ and $\beta_j(\cdot)$ are continuous and there exist scalars $\underline{\alpha}_i$, $\overline{\alpha}_i$, $\underline{\beta}_j$, $\overline{\beta}_j$ such that
$$0 < \underline{\alpha}_i \le \alpha_i(u) \le \overline{\alpha}_i,\qquad 0 < \underline{\beta}_j \le \beta_j(u) \le \overline{\beta}_j.$$
(B2) $\varphi_i(\cdot)$ and $\psi_j(\cdot)$ belong to the function class $\mathcal{D}$.

(B3) $f_j(\cdot)$, $g_i(\cdot)$ are continuous, $f_j(0)=0$, $g_i(0)=0$, and there exist positive constants $l^f_j$, $l^g_i$ such that $|f_j(u)-f_j(v)|\le l^f_j|u-v|$ and $|g_i(u)-g_i(v)|\le l^g_i|u-v|$ for all $u,v\in\mathbb{R}$.

Remark 2.1.1. It follows from condition (2.5) that any function $\varphi\in\mathcal{D}$ is continuous and strictly increasing. Thus, there exists a continuous inverse function $\varphi^{-1}$ of $\varphi$. Moreover, $\varphi^{-1}$ also belongs to $\mathcal{D}$ with $c_{\varphi^{-1}}=\hat c_\varphi^{-1}$ and $\hat c_{\varphi^{-1}}=c_\varphi^{-1}$.
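The following small numerical illustration of Remark 2.1.1 uses a hypothetical function $\varphi\in\mathcal{D}$ whose difference quotients lie in $[1.7,\,2.3]$; the difference quotients of its inverse then lie in $[1/2.3,\,1/1.7]$, as the remark states.

```python
import numpy as np

# Hypothetical phi in class D: slopes of phi lie in [c_phi, chat_phi] = [1.7, 2.3]
phi = lambda u: 2.0 * u + 0.3 * np.sin(u)

rng = np.random.default_rng(0)
u = rng.uniform(-3.0, 3.0, 5000)
v = u + rng.uniform(0.1, 1.0, 5000)            # keep u and v well separated
print(((phi(u) - phi(v)) / (u - v)).min(),
      ((phi(u) - phi(v)) / (u - v)).max())     # within [1.7, 2.3]

# Invert phi numerically by bisection and check the slopes of phi^{-1}
def phi_inv(w, lo=-100.0, hi=100.0, iters=80):
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = np.where(phi(mid) < w, mid, lo), np.where(phi(mid) < w, hi, mid)
    return 0.5 * (lo + hi)

w1, w2 = phi(u), phi(v)
inv_slopes = (phi_inv(w1) - phi_inv(w2)) / (w1 - w2)
print(inv_slopes.min(), inv_slopes.max())      # within [1/2.3, 1/1.7]
```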

Theorem 2.1.1. Under assumptions (B1)-(B3), for any initial condition defined by $x_0\in C([-\bar\tau,0],\mathbb{R}^n)$ and $y_0\in C([-\bar\sigma,0],\mathbb{R}^m)$, system (2.1) possesses at least one solution $\chi(t)=\mathrm{col}(x(t),y(t))$ on $[t_0,+\infty)$, which is absolutely continuous in $t$.

Proof. We denote the function space
$$\mathcal{C}_d = \Big\{\phi=\mathrm{col}(x_0,y_0) : x_0\in C([-\bar\tau,0],\mathbb{R}^n),\ y_0\in C([-\bar\sigma,0],\mathbb{R}^m)\Big\}$$
and define a function $F:[t_0,+\infty)\times\mathcal{C}_d\to\mathbb{R}^{n+m}$ from the right-hand side of the vector form (2.4). Then, system (2.1) can be written as the following abstract functional differential equation:
$$\chi'(t) = F(t,\chi_t),\quad t\ge t_0,\qquad \chi_{t_0}=\phi\in\mathcal{C}_d, \tag{2.6}$$
where $\chi_t=\mathrm{col}(x_t,y_t)\in\mathcal{C}_d$ and $x_t\in C([-\bar\tau,0],\mathbb{R}^n)$, $y_t\in C([-\bar\sigma,0],\mathbb{R}^m)$ are defined as $x_t(\xi)=x(t+\xi)$, $\xi\in[-\bar\tau,0]$, and $y_t(\theta)=y(t+\theta)$, $\theta\in[-\bar\sigma,0]$.

By assumptions (B1)-(B3), the function $F(t,\phi)$ is continuous and thus the problem described by equation (2.6) possesses a local solution $\chi(t)$, absolutely continuous in $t$, on a maximal interval $[t_0,t_f)$ [2]. On the other hand, it follows from (2.6) that
$$\chi(t) = \chi(t_0) + \int_{t_0}^{t} F(s,\chi_s)\,ds. \tag{2.7}$$

In addition, we can deduce by direct computation from the expression of the function $F(t,\phi)$ that there exist positive scalars $\ell_1$, $\ell_2$ such that $\|F(t,\chi_t)\|\le \ell_1 + \ell_2\big(\|x_t\|_C + \|y_t\|_C\big)$. Therefore, according to (2.7), we have
$$\|\chi(t)\| \le \|\chi(t_0)\| + \ell_1(t-t_0) + \ell_2\int_{t_0}^{t}\hat\chi(s)\,ds,$$
where $\hat\chi(t)=\|x_t\|_C + \|y_t\|_C$, which leads to
$$\hat\chi(t) \le 2\|\chi(t_0)\| + 2\ell_1(t-t_0) + 2\ell_2\int_{t_0}^{t}\hat\chi(s)\,ds. \tag{2.8}$$
By applying the Gronwall inequality to (2.8), we readily obtain
$$\hat\chi(t) \le \frac{\ell_1}{\ell_2}\big(e^{2\ell_2(t-t_0)}-1\big) + 2\|\chi(t_0)\|e^{2\ell_2(t-t_0)},$$
which yields
$$\limsup_{t\to t_f}\|\chi(t)\| \le \frac{\ell_1}{\ell_2}\big(e^{2\ell_2(t_f-t_0)}-1\big) + 2\|\chi(t_0)\|e^{2\ell_2(t_f-t_0)} < +\infty$$
whenever $t_f<+\infty$, whereas a noncontinuable solution must be unbounded as $t$ approaches a finite $t_f$. This contradiction shows that $t_f=+\infty$. The proof is completed.

2.1.2 Positive solutions and equilibrium points

Let $\chi(t)=\mathrm{col}(x(t),y(t))$ be a solution of system (2.4). If the trajectory of $\chi(t)$ is confined within the first orthant, that is, $\chi(t)\in\mathbb{R}^{n+m}_+$ for all $t\ge t_0$, then $\chi(t)$ is said to be a positive solution of (2.4). We define the following admissible set $\mathcal{A}$ of initial conditions for system (2.4).

Definition 2.1.1. System (2.4) is said to be positive if, for any initial function $\phi\in\mathcal{A}$ and nonnegative input vector $\mathrm{col}(I,J)\in\mathbb{R}^{n+m}_+$, the corresponding solution $\chi(t)=\mathrm{col}(x(t),y(t))$ of (2.4) is positive.

Definition 2.1.2. For given input vectors $I\in\mathbb{R}^n$ and $J\in\mathbb{R}^m$, a vector $\chi^*=\mathrm{col}(x^*,y^*)\in\mathbb{R}^{n+m}$, where $x^*\in\mathbb{R}^n$ and $y^*\in\mathbb{R}^m$, is said to be an equilibrium point (EP) of system (2.4) if it satisfies the following algebraic system:
$$\alpha(x^*)\big[-D_\delta\Phi(x^*) + (A+B)f(y^*) + I\big] = 0,\qquad \beta(y^*)\big[-D_\rho\Psi(y^*) + (C+D)g(x^*) + J\big] = 0. \tag{2.10}$$
Moreover, $\chi^*$ is a positive equilibrium point if it is an equilibrium point and $\chi^*\succ 0$.

Definition 2.1.3. A positive EP $\chi^*=\mathrm{col}(x^*,y^*)$ of system (2.4) is said to be globally exponentially stable (GES) if there exist positive scalars $\kappa$ and $\lambda$ such that any solution $\chi(t)=\mathrm{col}(x(t),y(t))$ of (2.4) with initial condition (2.3) satisfies the following inequality:
$$\|\chi(t)-\chi^*\|_\infty \le \kappa\|\phi-\chi^*\|_C\, e^{-\lambda(t-t_0)},\quad t\ge t_0.$$

In this section, we will prove that, under assumptions (B1)-(B3), any solution of system (2.4) with nonnegative initial states is positive provided that the weighted coefficients are nonnegative. First, by extending Lemma 1 in [31], we obtain the following auxiliary result.

Lemma 2.2.1. Let $\varphi\in\mathcal{D}$ and let $\alpha$ be a continuous function such that $0<\alpha(x)\le\bar\alpha$, $x\in\mathbb{R}$, where $\bar\alpha$ is a positive constant. Consider the following problem:
$$x'(t) = -\alpha(x(t))\varphi(x(t)) + w(t),\quad t\ge t_0,\qquad x(t_0)=x_0, \tag{2.11}$$
where $w(t)$ is a continuous function on $[t_0,+\infty)$. If $w(t)\ge 0$ for $t\ge t_0$ and $x_0\ge 0$, then it holds that $x(t)\ge 0$ for $t\ge t_0$.

Proof. By continuous dependence of solutions on the initial data, it suffices to prove the claim in the case $x_0>0$. We will show that $x(t)>0$ for $t\ge t_0$. Assume to the contrary that there exists $t_1>t_0$ such that $x(t_1)=0$ and $x(t)>0$ for $t\in[t_0,t_1)$. By the assumptions of Lemma 2.2.1, we have
$$c_\varphi \le \frac{\varphi(x(t))}{x(t)} \le \hat c_\varphi,\qquad 0<\alpha(x(t))\le\bar\alpha,\quad t\in[t_0,t_1).$$
This, together with (2.11), gives
$$x'(t) \ge -\bar\alpha\hat c_\varphi x(t) + w(t). \tag{2.12}$$
From the linear differential inequality (2.12), we readily obtain
$$x(t) \ge e^{-\bar\alpha\hat c_\varphi(t-t_0)}x_0 + \int_{t_0}^{t} e^{-\bar\alpha\hat c_\varphi(t-s)}w(s)\,ds \ge e^{-\bar\alpha\hat c_\varphi(t-t_0)}x_0. \tag{2.13}$$
Letting $t\uparrow t_1$, it follows from inequality (2.13) that $0 < x_0 e^{-\bar\alpha\hat c_\varphi(t_1-t_0)} \le x(t_1) = 0$, which yields a contradiction. This shows that $x(t)>0$ for $t\in[t_0,+\infty)$. The proof is completed.
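A quick numerical check of Lemma 2.2.1 is given below, using hypothetical data satisfying its assumptions (a bounded positive $\alpha$, a function $\varphi\in\mathcal{D}$ and a nonnegative forcing $w$); the specific choices are illustrative and not taken from the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical data satisfying the assumptions of Lemma 2.2.1:
# 0 < alpha(x) <= 2, phi in D (difference quotients in [0.5, 1.5]), w(t) >= 0.
alpha = lambda x: 1.0 + 1.0 / (1.0 + x**2)
phi   = lambda x: x + 0.5 * np.sin(x)
w     = lambda t: 1.0 + np.cos(t)**2            # nonnegative forcing

rhs = lambda t, x: -alpha(x) * phi(x) + w(t)
sol = solve_ivp(rhs, (0.0, 30.0), [0.0], max_step=1e-2)   # x0 = 0 >= 0
print(sol.y.min())   # >= 0: the solution never leaves the nonnegative half-line
```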

The positivity of BAM-Cohen-Grossberg neural network model (2.4) is presented in the following theorem.

Theorem 2.2.1. Let assumptions (B1)-(B3) hold and assume that the matrix $\mathcal{M}$ collecting the connection weights $A$, $B$, $C$ and $D$ is nonnegative ($\mathcal{M}\succeq 0$). Then, the BAM-CG neural network model described by system (2.4) is positive. Specifically, for any initial condition $\phi\in\mathcal{A}$ and nonnegative input vector $\mathcal{J}=\mathrm{col}(I,J)\in\mathbb{R}^{n+m}_+$, $I\succeq 0$, $J\succeq 0$, the corresponding solution satisfies $\chi(t)=\mathrm{col}(x(t),y(t))\succeq 0$ for all $t\ge t_0$.

Proof. Let $\chi(t)=\mathrm{col}(x(t),y(t))$ be a solution of system (2.4) with initial condition $\phi\in\mathcal{A}$ and input vector $\mathcal{J}=\mathrm{col}(I,J)\in\mathbb{R}^{n+m}_+$. By assumption (B3), $Af(y)$ and $Bf(y)$ are order-preserving vector fields on $\mathbb{R}^m_+$ (see [69] for more details). Thus, if $y(t)\succeq 0$, $t\in[-\bar\sigma,t_1)$, for some $t_1>t_0$, then
$$q_i(t) := \sum_{j=1}^{m} a_{ij}f_j(y_j(t)) + \sum_{j=1}^{m} b_{ij}f_j\big(y_j(t-\sigma_j(t))\big) + I_i \ge 0,\quad t\in[t_0,t_1).$$
In addition, from (2.1), we have $x_i'(t) = -\delta_i\alpha_i(x_i(t))\varphi_i(x_i(t)) + w_i(t)$, where $w_i(t)=\alpha_i(x_i(t))q_i(t)$. Based on Lemma 2.2.1 and assumptions (B1), (B2), we can conclude that $x_i(t)\ge 0$ for $i\in[n]$ and $t\in[t_0,t_1)$. Therefore, $\chi(t)\succeq 0$ for $t\in[t_0,t_1)$.

For sufficiently small $\varepsilon>0$, let $\chi^\varepsilon(t)=\mathrm{col}(x^\varepsilon(t),y^\varepsilon(t))$ be the solution of system (2.4) with initial condition $\phi^\varepsilon=\phi+\varepsilon\mathbf{1}_{n+m}$, where $\mathbf{1}_{n+m}=(1,1,\ldots,1)^\top\in\mathbb{R}^{n+m}$. By the continuous dependence of solutions on initial conditions [2], there exists a $t_1>t_0$ such that $\chi^\varepsilon(t)\succ 0$ for $t\in[t_0,t_1)$. We now show that $y^\varepsilon(t)\succ 0$ for all $t\ge t_0$. By contradiction, suppose that there exist a $\tilde t>t_0$ and an index $j\in[m]$ such that
$$y_j^\varepsilon(\tilde t)=0,\qquad y_j^\varepsilon(t)\ge 0,\ t\in[t_0,\tilde t), \tag{2.14}$$
and $y^\varepsilon(t)\succeq 0$ for $t\in[t_0,\tilde t]$. By arguments similar to those in the first part of the proof, and by Lemma 2.2.1, we have $x^\varepsilon(t)\succeq 0$ for $t\in[-\bar\tau,\tilde t]$. Therefore,
$$\tilde q_j(t) := \sum_{i=1}^{n} c_{ji}g_i\big(x_i^\varepsilon(t)\big) + \sum_{i=1}^{n} d_{ji}g_i\big(x_i^\varepsilon(t-\tau_i(t))\big) + J_j \ge 0,\quad t\in[t_0,\tilde t].$$
By lines similar to those used in the proof of Lemma 2.2.1, we also obtain
$$y_j^\varepsilon(t) \ge e^{-\overline{\beta}_j\rho_j\hat c_{\psi_j}(t-t_0)}\big(y_{j0}(0)+\varepsilon\big) \ge \varepsilon e^{-\overline{\beta}_j\rho_j\hat c_{\psi_j}(t-t_0)},\quad t\in[t_0,\tilde t). \tag{2.15}$$
Letting $t\uparrow\tilde t$, from (2.15) we obtain $y_j^\varepsilon(\tilde t)\ge \varepsilon e^{-\overline{\beta}_j\rho_j\hat c_{\psi_j}(\tilde t-t_0)}>0$, which contradicts (2.14). From this we can conclude that $y^\varepsilon(t)\succ 0$ and thus $x^\varepsilon(t)\succ 0$ for $t\ge t_0$. Letting $\varepsilon\downarrow 0$, we obtain $\chi(t)=\lim_{\varepsilon\downarrow 0}\chi^\varepsilon(t)\succeq 0$ for all $t\in[t_0,+\infty)$. The proof is completed.

In this section, by utilizing the Brouwer fixed point theorem, we derive conditions under which model (2.4) possesses at least one positive EP for a given input vector $\mathcal{J}=\mathrm{col}(I,J)\in\mathbb{R}^{n+m}_+$. First, it can be verified from (2.10) that a vector $\chi^*=\mathrm{col}(x^*,y^*)\in\mathbb{R}^{n+m}$ is an EP of system (2.4) if and only if it satisfies the algebraic system
$$D_\delta\Phi(x^*) = (A+B)f(y^*) + I,\qquad D_\rho\Psi(y^*) = (C+D)g(x^*) + J. \tag{2.16}$$

Motivated by system (2.16), we define a mapping $H:\mathbb{R}^{n+m}\to\mathbb{R}^{n+m}$ by
$$H(\chi) = \mathrm{col}\Big(\Phi^{-1}\big(D_\delta^{-1}[(A+B)f(y)+I]\big),\ \Psi^{-1}\big(D_\rho^{-1}[(C+D)g(x)+J]\big)\Big), \tag{2.17}$$
where $\chi=\mathrm{col}(x,y)$, $x\in\mathbb{R}^n$ and $y\in\mathbb{R}^m$. The mapping $H$ defined by (2.17) can be written componentwise as
$$H_i(\chi) = \varphi_i^{-1}\Big(\frac{1}{\delta_i}\Big[\sum_{j=1}^{m}(a_{ij}+b_{ij})f_j(y_j) + I_i\Big]\Big),\quad i\in[n],$$
$$H_{n+j}(\chi) = \psi_j^{-1}\Big(\frac{1}{\rho_j}\Big[\sum_{i=1}^{n}(c_{ji}+d_{ji})g_i(x_i) + J_j\Big]\Big),\quad j\in[m],$$
where $\varphi_i^{-1}(\cdot)$, $\psi_j^{-1}(\cdot)$ denote the inverse functions of $\varphi_i(\cdot)$ and $\psi_j(\cdot)$, respectively.

In regard to equations (2.16) and (2.17), a vector $\chi^*\in\mathbb{R}^{n+m}$ is an EP of system (2.4) if and only if it is a fixed point of the mapping $H$, that is, $H(\chi^*)=\chi^*$. Based on the Brouwer fixed point theorem, we have the following result.
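In practice, the fixed point can also be approximated by simple iteration. The sketch below assumes linear decay functions ($\varphi_i(x_i)=x_i$, $\psi_j(y_j)=y_j$, so the inverses are trivial) and hypothetical weights; the iteration converges here because the composite gains happen to be contractive, although the existence result in the thesis rests on the Brouwer fixed point theorem rather than on iteration.

```python
import numpy as np

# Hypothetical data; linear decay functions, so the inverses reduce to divisions.
delta = np.array([2.0, 2.5]); rho = np.array([2.2, 2.0])
A = np.array([[0.3, 0.2], [0.1, 0.4]]); B = np.array([[0.2, 0.1], [0.2, 0.1]])
C = np.array([[0.25, 0.15], [0.1, 0.3]]); D = np.array([[0.1, 0.2], [0.15, 0.1]])
I = np.array([1.0, 0.8]); J = np.array([0.9, 1.1])
f = g = np.tanh

def H(x, y):
    # Componentwise fixed-point map implied by the equilibrium equations above
    x_new = ((A + B) @ f(y) + I) / delta
    y_new = ((C + D) @ g(x) + J) / rho
    return x_new, y_new

x, y = np.zeros(2), np.zeros(2)
for _ in range(200):
    x, y = H(x, y)
print(x, y)   # converges to a positive equilibrium (all entries > 0)
```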

Theorem 2.3.1. Let assumptions (B1)-(B3) hold and assume that $\rho(\Lambda)<1$.

3.5.1. An application to fractional linear systems with delay

Consider the following linear conformable fractional system with delay:
$${}^{c}D^{\alpha}_{t_0}x(t) = Ax(t) + A_d x(t-\tau(t)),\quad t>t_0,\qquad x(t_0+s)=\phi(s),\ s\in[-\bar\tau,0], \tag{3.32}$$
where $A, A_d\in\mathbb{R}^{n\times n}$ are given real matrices and $\tau(t)\in[0,\bar\tau]$ is the time-varying delay.

Similar to the proof of Theorem 3.4.1, it can be verified from (3.32) that any solution $x(t,t_0,\phi)$ of system (3.32) satisfies
$$|x(t,t_0,\phi)| \preceq \hat x(t,t_0,|\phi|),\quad t\ge t_0, \tag{3.33}$$
where $\hat x(t,t_0,|\phi|)$ is the corresponding solution of the following problem:
$${}^{c}D^{\alpha}_{t_0}\hat x(t) = A_D\hat x(t) + |A_d|\hat x(t-\tau(t)),\quad t>t_0,\qquad \hat x(t_0+s)=|\phi(s)|,\ s\in[-\bar\tau,0], \tag{3.34}$$
where the matrix $A_D=(a^D_{ij})$ is determined as $a^D_{ii}=a_{ii}$ and $a^D_{ij}=|a_{ij}|$ for $i\ne j$.

Clearly, $\mathcal{M}=A_D+|A_d|$ is a Metzler matrix. Thus, if the matrix $\mathcal{M}$ is Hurwitz, then system (3.34), and hence system (3.32), is GFES. We summarize this result in the following proposition.

Proposition 3.5.1. Assume that one of the following equivalent conditions is satisfied:

(i) The Metzler matrix $\mathcal{M}=A_D+|A_d|$ is Hurwitz.

(ii) There exists a vector $\xi\in\mathbb{R}^n$, $\xi\succ 0$, such that $(A_D+|A_d|)\xi\prec 0$.

Then, system (3.31) is GFES. More precisely, there exists a positive scalar $\delta$ such that any solution $x(t,t_0,\phi)$ of system (3.31) satisfies the fractional exponential estimate
$$\|x(t,t_0,\phi)\|_\infty \le \delta\|\phi\|_C\, E_\alpha(-\lambda, t-t_0),\quad t\ge t_0,$$
where the maximum allowable decay rate $\lambda>0$ can be determined by a generic procedure of the form: maximize $\lambda>0$ subject to componentwise linear constraints on $\lambda$ and $\xi$ involving the entries $a_{ii}$, $|a_{ij}|$ and $|a_{d,ij}|$.
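The estimate can be explored numerically. The sketch below assumes the standard property of the CFD that ${}^{c}D^{\alpha}_{t_0}x(t)=(t-t_0)^{1-\alpha}x'(t)$ for differentiable $x$, so that (3.32) reads $x'(t)=(t-t_0)^{\alpha-1}\big[Ax(t)+A_d x(t-\tau(t))\big]$, and takes $E_\alpha(\lambda,s)=\exp(\lambda s^\alpha/\alpha)$ as the conformable exponential (both are assumptions about notation, consistent with the conformable literature). The matrices, $\lambda$ and $\delta$ are illustrative, not the sharp constants of Proposition 3.5.1; a product-integration Euler step handles the weak singularity at $t_0$.

```python
import numpy as np

# Hypothetical data for the linear conformable delay system (3.32)
alpha = 0.8
A  = np.array([[-3.0, 0.5], [0.4, -2.5]])   # A_D + |A_d| is Metzler and Hurwitz here
Ad = np.array([[ 0.3, 0.2], [0.1,  0.4]])
tau, t0, T, h = 0.5, 0.0, 15.0, 1e-3
N, d = int(T / h), int(tau / h)

x = np.zeros((N + 1, 2))
x[0] = [1.0, -0.5]                          # constant initial history phi(s) = x[0]

# Integrate x'(t) = (t - t0)^(alpha-1) [A x(t) + Ad x(t - tau)] by integrating the
# weight (s - t0)^(alpha-1) exactly over each step (avoids the singularity at t0).
for k in range(N):
    w = ((k + 1) * h) ** alpha / alpha - (k * h) ** alpha / alpha
    x_del = x[max(k - d, 0)]
    x[k + 1] = x[k] + w * (A @ x[k] + Ad @ x_del)

# Fractional exponential envelope delta * ||phi||_C * E_alpha(-lam, t - t0)
t = np.linspace(0, T, N + 1)
lam, delta_c = 1.0, 2.0                     # illustrative decay rate and gain
envelope = delta_c * np.max(np.abs(x[0])) * np.exp(-lam * t**alpha / alpha)
print(np.abs(x).max(axis=1)[-1], envelope[-1])   # final state norm lies under the envelope
```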

Example 3.5.1. Consider system (3.1) with Sigmoidal-Boltzmann nonlinear activation functions $f_j(y_j)=S_{\theta^f_j}(y_j)$, $g_i(x_i)=S_{\theta^g_i}(x_i)$, where $\theta^f_j$, $\theta^g_i$ ($i,j=1,2,3$) are given positive scalars and $S_\theta(x)$ denotes the weighted sigmoid function. It can be verified by simple calculation that assumption (C) is satisfied with $l^f_j=\frac{1}{2\theta^f_j}$ and $l^g_i=\frac{1}{2\theta^g_i}$. To illustrate the obtained theoretical results, we specify the system parameters as $\theta^f_j=1.75$, $\theta^g_i=1.6$ ($i,j=1,2,3$).

By Proposition 1.1.1, $I_3 - H_e$ is a nonsingular M-matrix. Thus, the derived conditions in Theorem 3.4.1 are fulfilled. By Theorem 3.4.1, for a given input vector $\mathcal{J}=\mathrm{col}(I,J)$, where $I,J\in\mathbb{R}^3_+$, system (3.1) has a unique positive EP $\chi^*$, which is GFES for any bounded time-varying delays $\tau_i(t)$, $\sigma_j(t)$.
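The Lipschitz constants used in this example can be checked numerically. The sketch below assumes that the weighted sigmoid takes the common Boltzmann form $S_\theta(x)=\frac{1-e^{-x/\theta}}{1+e^{-x/\theta}}$, which equals $\tanh\!\big(\tfrac{x}{2\theta}\big)$ (an assumption, since the exact definition is not reproduced here); under this assumption its maximal slope is $1/(2\theta)$, matching $l^f_j=1/(2\theta^f_j)$ and $l^g_i=1/(2\theta^g_i)$.

```python
import numpy as np

theta_f, theta_g = 1.75, 1.6     # parameter values from Example 3.5.1

# Assumed Sigmoidal-Boltzmann form (equivalently tanh(x / (2*theta)))
S = lambda x, theta: (1 - np.exp(-x / theta)) / (1 + np.exp(-x / theta))

x = np.linspace(-10, 10, 200001)
for theta in (theta_f, theta_g):
    slopes = np.abs(np.diff(S(x, theta)) / np.diff(x))
    print(slopes.max(), 1 / (2 * theta))   # maximal slope matches l = 1/(2*theta)
```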

For a specific choice of input vector, by solving system (3.5) with the Matlab Symbolic Toolbox, we obtain a unique positive EP.

EXPONENTIAL STABILITY OF INERTIAL BAM NEURAL NETWORKS WITH TIME-VARYING DELAYS

In this chapter, conditions on the damping coefficients and self-excitation rates are established under which a positive state transformation reduces the inertial (second-order) model to a first-order Hopfield-type BAM model with time-varying delays. The Brouwer fixed point theorem and comparison techniques are used to prove the positivity of solutions and the existence of a unique EP, and M-matrix theory ensures the positivity and global attractivity of this EP. These results are supported by numerical simulations.

Consider a model of inertial BAM neural networks with delays described by the following system of second-order differential equations:
$$x_i''(t) = -a_i x_i'(t) - c_i x_i(t) + \sum_{j=1}^{m} r_{ij}f_j(y_j(t)) + \sum_{j=1}^{m} s_{ij}f_j\big(y_j(t-\sigma_j(t))\big) + I_i,\quad t\ge t_0,\ i\in[n], \tag{4.1a}$$
$$y_j''(t) = -b_j y_j'(t) - d_j y_j(t) + \sum_{i=1}^{n} p_{ji}g_i(x_i(t)) + \sum_{i=1}^{n} q_{ji}g_i\big(x_i(t-\tau_i(t))\big) + J_j,\quad t\ge t_0,\ j\in[m], \tag{4.1b}$$
where $n$, $m$ are positive integers representing the numbers of neurons in the X-layer and the Y-layer; $x(t)=(x_i(t))\in\mathbb{R}^n$ and $y(t)=(y_j(t))\in\mathbb{R}^m$ are the state vectors of the neuron fields $F_X$ and $F_Y$, respectively; $a_i>0$, $b_j>0$, $i\in[n]$, $j\in[m]$, are the damping coefficients; and $c_i>0$, $d_j>0$ are self-inhibition coefficients, that is, the rates at which the $i$th and $j$th neurons reset their potential to the resting state in isolation when disconnected from the network and external inputs. In model (4.1), $f_j$ and $g_i$ are the neuron activation functions; $P=(p_{ji})$, $Q=(q_{ji})\in\mathbb{R}^{m\times n}$ and $R=(r_{ij})$, $S=(s_{ij})\in\mathbb{R}^{n\times m}$ are connection weight matrices, which represent the strengths of connectivity between cells in $F_Y$ and $F_X$; $I=(I_i)\in\mathbb{R}^n$, $J=(J_j)\in\mathbb{R}^m$ are external input vectors to the networks. The functions $\tau_i(t)$ and $\sigma_j(t)$ represent the communication delays between neurons, which are assumed to satisfy
$$0\le\tau_i(t)\le\bar\tau\quad\text{and}\quad 0\le\sigma_j(t)\le\bar\sigma\quad\text{for all } t\ge t_0,$$
where $\bar\tau$, $\bar\sigma$ are known positive constants.

The initial condition associated with (4.1), which specifies the initial state of the networks, is defined by
$$x(t_0+\theta)=\varphi(\theta),\quad x'(t_0+\theta)=\varphi_d(\theta),\quad \theta\in[-\bar\tau,0],$$
$$y(t_0+\theta)=\psi(\theta),\quad y'(t_0+\theta)=\psi_d(\theta),\quad \theta\in[-\bar\sigma,0], \tag{4.2}$$
where $\varphi,\varphi_d\in C([-\bar\tau,0],\mathbb{R}^n)$ and $\psi,\psi_d\in C([-\bar\sigma,0],\mathbb{R}^m)$ are compatible initial functions. For convenience, we denote the following matrices and vector-valued functions:
$$A=\mathrm{diag}\{a_1,\ldots,a_n\},\quad C=\mathrm{diag}\{c_1,\ldots,c_n\},\quad B=\mathrm{diag}\{b_1,\ldots,b_m\},\quad D=\mathrm{diag}\{d_1,\ldots,d_m\},$$
$$f(y(t))=\mathrm{col}(f_j(y_j(t))),\qquad f(y(t-\sigma(t)))=\mathrm{col}\big(f_j(y_j(t-\sigma_j(t)))\big),$$
$$g(x(t))=\mathrm{col}(g_i(x_i(t))),\qquad g(x(t-\tau(t)))=\mathrm{col}\big(g_i(x_i(t-\tau_i(t)))\big).$$

Then, system (4.1) can be written in the following vector form:
$$x''(t) = -Ax'(t) - Cx(t) + Rf(y(t)) + Sf(y(t-\sigma(t))) + I, \tag{4.3a}$$
$$y''(t) = -By'(t) - Dy(t) + Pg(x(t)) + Qg(x(t-\tau(t))) + J. \tag{4.3b}$$

Consider the inertial BAM neural network model (4.1). We define a state transformation by
$$\hat x_i(t) = \frac{1}{\eta_i}\big(x_i'(t) + \xi_i x_i(t)\big),\quad i\in[n],\qquad \hat y_j(t) = \frac{1}{\mu_j}\big(y_j'(t) + \zeta_j y_j(t)\big),\quad j\in[m], \tag{4.4}$$
where $\eta_i$, $\xi_i$ and $\mu_j$, $\zeta_j$ are positive scalars, which will be determined later. System (4.3) can then be represented as
$$x'(t) = -D_\xi x(t) + D_\eta\hat x(t), \tag{4.5a}$$
$$\hat x'(t) = -D_\alpha\hat x(t) + D_\gamma x(t) + D_\eta^{-1}\big[Rf(y(t)) + Sf(y(t-\sigma(t))) + I\big], \tag{4.5b}$$
$$y'(t) = -D_\zeta y(t) + D_\mu\hat y(t), \tag{4.5c}$$
$$\hat y'(t) = -D_\beta\hat y(t) + D_\nu y(t) + D_\mu^{-1}\big[Pg(x(t)) + Qg(x(t-\tau(t))) + J\big], \tag{4.5d}$$
where $\hat x(t)=(\hat x_i(t))$, $\hat y(t)=(\hat y_j(t))$, $D_\xi=\mathrm{diag}\{\xi_i\}$, $D_\eta=\mathrm{diag}\{\eta_i\}$, $D_\zeta=\mathrm{diag}\{\zeta_j\}$, $D_\mu=\mathrm{diag}\{\mu_j\}$, and $D_\alpha$, $D_\gamma$, $D_\beta$, $D_\nu$ are diagonal matrices with entries $\alpha_i=a_i-\xi_i$, $\gamma_i=\frac{1}{\eta_i}\big((a_i-\xi_i)\xi_i-c_i\big)$, $\beta_j=b_j-\zeta_j$, $\nu_j=\frac{1}{\mu_j}\big((b_j-\zeta_j)\zeta_j-d_j\big)$.

Assumption (D): The neuron activation functions $f_j(\cdot)$, $g_i(\cdot)$, $i\in[n]$, $j\in[m]$, are continuous, $f_j(0)=0$, $g_i(0)=0$, and there exist positive constants $K^f_j$, $K^g_i$ such that $|f_j(u)-f_j(v)|\le K^f_j|u-v|$ and $|g_i(u)-g_i(v)|\le K^g_i|u-v|$ for all $u,v\in\mathbb{R}$.

Theorem 4.2.1 (Global existence of solutions). Let assumption (D) hold. For any initial functions $\varphi$, $\psi$, $\varphi_d$ and $\psi_d$, the problem governed by system (4.1) and (4.2) possesses a unique solution on the infinite interval $[t_0,+\infty)$, which is absolutely continuous in $t$.

Proof. It is clear that $\mathrm{col}(x(t),y(t))$ is a solution of system (4.1) with respect to the initial condition (4.2) if $\chi(t)=\mathrm{col}(x(t),\hat x(t),y(t),\hat y(t))$ is a solution of system (4.5) with the initial condition defined by
$$x(t_0+\theta)=\varphi(\theta),\quad \hat x(t_0+\theta)=\hat\varphi(\theta),\quad \theta\in[-\bar\tau,0],$$
$$y(t_0+\theta)=\psi(\theta),\quad \hat y(t_0+\theta)=\hat\psi(\theta),\quad \theta\in[-\bar\sigma,0],$$
where $\hat\varphi$ and $\hat\psi$ are the initial functions induced by the transformation (4.4). Consider the function $G:[t_0,+\infty)\times\mathcal{D}\to\mathbb{R}^{2(n+m)}$, $(t,\phi)\mapsto G(t,\phi)$, defined by the right-hand side of (4.5). System (4.5) can be written in the following form of a functional differential equation:
$$\chi'(t) = G(t,\chi_t),\quad t\ge t_0,\qquad \chi_{t_0}=\phi\in\mathcal{D}, \tag{4.9}$$
where $\chi_t=\mathrm{col}(x_t,\hat x_t,y_t,\hat y_t)\in\mathcal{D}$ is defined by the state segments $x_t,\hat x_t\in C([-\bar\tau,0],\mathbb{R}^n)$ and $y_t,\hat y_t\in C([-\bar\sigma,0],\mathbb{R}^m)$, with $x_t(\theta)=x(t+\theta)$, $\hat x_t(\theta)=\hat x(t+\theta)$, $\theta\in[-\bar\tau,0]$, and $y_t(\theta)=y(t+\theta)$, $\hat y_t(\theta)=\hat y(t+\theta)$, $\theta\in[-\bar\sigma,0]$.

It can be verified from (4.6) and (4.8) that there exists a scalar $L_G>0$ such that $\|G(t,\phi_1)-G(t,\phi_2)\|\le L_G\|\phi_1-\phi_2\|_C$ for any $\phi_1,\phi_2\in\mathcal{D}$. Thus, by the fundamental theory of functional differential equations (see [2]), the initial value problem (4.9) possesses a unique local solution $\chi(t)$, which is absolutely continuous in $t$, on a maximal interval $[t_0,t_f)$. We will show that $t_f=+\infty$. Indeed, assume to the contrary that $t_f<+\infty$. For $t\in[t_0,t_f)$, estimating the state segments $x_t$, $\hat x_t$ and, by similar arguments used to derive (4.12), the segments $y_t$, $\hat y_t$, we arrive at a bound of the form (4.14). By the Gronwall inequality, from (4.14) we readily obtain
$$\|\chi_t\|_C \le 2\|\chi_{t_0}\|_C e^{2L_G(t-t_0)} + \frac{\rho}{L_G}\big(e^{2L_G(t-t_0)}-1\big),\quad t\in[t_0,t_f). \tag{4.15}$$
Taking the limit as $t$ approaches $t_f$, it follows from (4.15) that
$$\limsup_{t\uparrow t_f}\|\chi(t)\| \le 2\|\chi_{t_0}\|_C e^{2L_G(t_f-t_0)} + \frac{\rho}{L_G}\big(e^{2L_G(t_f-t_0)}-1\big) < +\infty,$$
which contradicts the unboundedness of a noncontinuable solution at a finite $t_f$. Therefore $t_f=+\infty$, and the proof is completed.

Let the transformation (4.4) be defined by positive scalars $\eta_i>0$, $\xi_i>0$, $i\in[n]$, and $\mu_j>0$, $\zeta_j>0$, $j\in[m]$. We define the admissible set $\mathcal{D}_A$ of initial functions for system (4.1).

Clearly, the admissible set $\mathcal{D}_A$ contains all nonnegative nondecreasing initial functions $\varphi$, $\psi$ with $\varphi_d=\varphi'$ and $\psi_d=\psi'$.

Definition 4.3.1. System (4.1) (or its vector form (4.3)) is said to be positive if, for any initial condition belonging to $\mathcal{D}_A$ and any input vectors $I\in\mathbb{R}^n_+$, $J\in\mathbb{R}^m_+$, the corresponding solution $X(t)$ of (4.1) is positive.

Definition 4.3.2. Given input vectors $I\in\mathbb{R}^n$, $J\in\mathbb{R}^m$, a vector $X^*=\mathrm{col}(x^*,y^*)$, where $x^*\in\mathbb{R}^n$ and $y^*\in\mathbb{R}^m$, is said to be an equilibrium point (EP) of system (4.3) if it satisfies the following algebraic system:
$$Cx^* = (R+S)f(y^*) + I,\qquad Dy^* = (P+Q)g(x^*) + J. \tag{4.16}$$
Moreover, if $X^*\succ 0$, then it is called a positive equilibrium point.

Definition 4.3.3. An EP $X^*$ of (4.3) is said to be globally exponentially stable (GES) if there exist positive scalars $\kappa$ and $\omega$ such that any solution $X(t)$ of (4.3) satisfies the following inequality:
$$\|X(t)-X^*\|_\infty \le \kappa\Psi^* e^{-\omega(t-t_0)},\quad t\ge t_0,$$
where $\Psi^*=\max\{\|\varphi-x^*\|_C,\|\varphi_d\|_C,\|\psi-y^*\|_C,\|\psi_d\|_C\}$.

The main objective of this section is to derive testable LP-based conditions that ensure that the inertial BAM neural network described by system (4.3) is positive and that there exists a unique positive EP which is GES.

To establish the positivity of system (4.1), or its vector form (4.3), via the transformed system (4.5), the existence of a state transformation of the form (4.4) with positive coefficients is essential. This will be shown in the following technical lemma.

Lemma 4.3.1. Given positive damping coefficients $a_i$, $b_j$ and self-inhibition coefficients $c_i$, $d_j$, $i\in[n]$, $j\in[m]$, there exists a transformation (4.4) defined by positive coefficient matrices $D_\xi$, $D_\eta$, $D_\zeta$ and $D_\mu$ such that $\mathrm{diag}\{D_\alpha,D_\gamma,D_\beta,D_\nu\}\succ 0$ if and only if it holds that
$$a_i^2 > 4c_i,\quad i\in[n],\qquad b_j^2 > 4d_j,\quad j\in[m]. \tag{4.17}$$

Proof. The necessity of (4.17) follows from the fact that
$$\xi_i(a_i-\xi_i)-c_i > 0,\quad i\in[n],\qquad \zeta_j(b_j-\zeta_j)-d_j > 0,\quad j\in[m],$$
together with the elementary bound $\xi(a-\xi)\le a^2/4$. By this we obtain condition (4.17).

We now prove the sufficiency. Let condition (4.17) hold. Then, the constants
$$\xi_i^l = \frac{a_i-\sqrt{a_i^2-4c_i}}{2},\quad \xi_i^r = \frac{a_i+\sqrt{a_i^2-4c_i}}{2},\qquad \zeta_j^l = \frac{b_j-\sqrt{b_j^2-4d_j}}{2},\quad \zeta_j^r = \frac{b_j+\sqrt{b_j^2-4d_j}}{2} \tag{4.18}$$
are well-defined and satisfy $0<\xi_i^l<\xi_i^r<a_i$, $0<\zeta_j^l<\zeta_j^r<b_j$. For any $\xi_i\in(\xi_i^l,\xi_i^r)$, $\zeta_j\in(\zeta_j^l,\zeta_j^r)$, we have
$$\xi_i(a_i-\xi_i)-c_i = (\xi_i-\xi_i^l)(\xi_i^r-\xi_i) > 0,\qquad \zeta_j(b_j-\zeta_j)-d_j = (\zeta_j-\zeta_j^l)(\zeta_j^r-\zeta_j) > 0.$$
By choosing positive constants $\eta_i$, $\mu_j$, we obtain
$$\alpha_i = a_i-\xi_i > 0,\quad \beta_j = b_j-\zeta_j > 0,\quad \gamma_i = \frac{1}{\eta_i}\big((a_i-\xi_i)\xi_i-c_i\big) > 0,\quad \nu_j = \frac{1}{\mu_j}\big((b_j-\zeta_j)\zeta_j-d_j\big) > 0,$$
for all $\xi_i\in(\xi_i^l,\xi_i^r)$, $\zeta_j\in(\zeta_j^l,\zeta_j^r)$.

Hereafter, we assume that the coefficient matrices $A$, $B$, $C$ and $D$ satisfy condition (4.17) and consider a fixed transformation (4.4) with the matrices $D_\xi$, $D_\eta$, $D_\zeta$, $D_\mu$ having components satisfying
$$\xi_i\in(\xi_i^l,\xi_i^r),\quad \eta_i>0,\quad i\in[n],\qquad \zeta_j\in(\zeta_j^l,\zeta_j^r),\quad \mu_j>0,\quad j\in[m], \tag{4.19}$$
where the constants $\xi_i^l$, $\xi_i^r$ and $\zeta_j^l$, $\zeta_j^r$ are defined by (4.18).
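The construction in Lemma 4.3.1 is easy to carry out numerically. The short sketch below computes the admissible intervals (4.18) and the transformed coefficients for hypothetical coefficients satisfying (4.17); all numerical values are illustrative.

```python
import numpy as np

# Hypothetical damping and self-inhibition coefficients with a_i^2 > 4 c_i, b_j^2 > 4 d_j
a = np.array([3.0, 2.8]); c = np.array([1.5, 1.2])
b = np.array([2.6, 3.2]); d = np.array([1.3, 2.0])

# Admissible intervals (4.18): xi_i in (xi_l, xi_r), zeta_j in (zeta_l, zeta_r)
xi_l, xi_r = (a - np.sqrt(a**2 - 4*c)) / 2, (a + np.sqrt(a**2 - 4*c)) / 2
zt_l, zt_r = (b - np.sqrt(b**2 - 4*d)) / 2, (b + np.sqrt(b**2 - 4*d)) / 2

# Pick interior points and (hypothetical) positive scalars eta, mu as in (4.19)
xi, zeta = (xi_l + xi_r) / 2, (zt_l + zt_r) / 2
eta = mu = np.ones(2)

alpha_ = a - xi                        # diagonal of D_alpha
beta_  = b - zeta                      # diagonal of D_beta
gamma_ = ((a - xi) * xi - c) / eta     # diagonal of D_gamma
nu_    = ((b - zeta) * zeta - d) / mu  # diagonal of D_nu
print(alpha_, beta_, gamma_, nu_)      # all positive, as required by Lemma 4.3.1
```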

Theorem 4.3.1. Let assumption (D) and condition (4.17) hold, and assume that the connection matrices $P$, $Q$, $R$ and $S$ are nonnegative. Then, system (4.3) is positive for any bounded delays.

Proof. Let $X(t)$ be a solution of (4.3) with respect to an initial function $\phi\in\mathcal{D}_A$ and external input vectors $I\in\mathbb{R}^n_+$, $J\in\mathbb{R}^m_+$. Then, according to the transformation (4.4), $\chi(t)$ is a solution of system (4.5) on the interval $[t_0,+\infty)$. It suffices to show that $\chi(t)$ is a positive solution. For convenience, we denote the vector functions $\mathcal{X}(t)=\mathrm{col}(x(t),\hat x(t))$, $\mathcal{Y}(t)=\mathrm{col}(y(t),\hat y(t))$, $x_\tau(t)=x(t-\tau(t))$, $y_\sigma(t)=y(t-\sigma(t))$, so that system (4.5) can be written in the compact form (4.20), with $\mathcal{F}(y,y_\sigma)$ and $\mathcal{G}(x,x_\tau)$ denoting the corresponding interconnection terms.

First, it can be verified that if $\mathcal{Y}(t)\succeq 0$, $t\in[-\bar\sigma,t_1)$, for some $t_1>t_0$, then $y(t)\succeq 0$ and $y_\sigma(t)\succeq 0$, $t\in[t_0,t_1)$. Since $f$ is an order-preserving vector field, $D_\eta^{-1}\succ 0$ and $R$, $S$ are nonnegative matrices, we have $\mathcal{F}(y(t),y_\sigma(t))\succeq 0$ for $t\in[t_0,t_1)$. By the variation of constants formula, it follows from (4.20a) that $\mathcal{X}(t)\succeq 0$ for $t\in[t_0,t_1)$, which is inequality (4.21).

For any $\varepsilon>0$, let $\chi^\varepsilon(t)=\mathrm{col}(\mathcal{X}^\varepsilon(t),\mathcal{Y}^\varepsilon(t))$ be the solution of system (4.5) with the initial condition $\phi^\varepsilon=\phi+\varepsilon\mathbf{1}_{2(n+m)}\succ 0$, $\phi\in\mathcal{D}_A$. By the continuity of solutions with respect to initial conditions [2], there exists a $t_1>t_0$ such that $\chi^\varepsilon(t)\succ 0$ for $t\in[t_0,t_1)$. We will show that $\mathcal{Y}^\varepsilon(t)\succ 0$ for all $t\ge t_0$. Indeed, if this does not hold, then there exist a $t_f>t_0$ and an index $j$ such that
$$\mathcal{Y}^\varepsilon_j(t_f)=0,\qquad \mathcal{Y}^\varepsilon_j(t)>0,\ t\in[t_0,t_f), \tag{4.22}$$
and $\mathcal{Y}^\varepsilon(t)\succeq 0$ for $t\in[t_0,t_f)$. By (4.21), we have $\mathcal{X}^\varepsilon(t)\succeq 0$ for $t\in[-\bar\tau,t_f)$, which implies $x^\varepsilon(t)\succeq 0$ and $x^\varepsilon_\tau(t)\succeq 0$, $t\in[t_0,t_f)$. It follows from (4.5d) that
$$\hat y^{\varepsilon\prime}(t) \succeq -D_\beta\hat y^\varepsilon(t) + D_\mu^{-1}\big[Pg(x^\varepsilon(t)) + Qg(x^\varepsilon_\tau(t)) + J\big].$$
Thus, in regard to the order-preserving property of the vector fields $Pg(\cdot)$ and $Qg(\cdot)$, the above inequality leads to
$$\hat y^\varepsilon(t) \succeq e^{-D_\beta(t-t_0)}\big(\hat\psi(0)+\varepsilon\mathbf{1}_m\big),\quad t\in[t_0,t_f]. \tag{4.23}$$
In addition, since $\hat y^\varepsilon(t)\succeq 0$, $t\in[t_0,t_f]$, from (4.5c) we also have
$$y^\varepsilon(t) \succeq e^{-D_\zeta(t-t_0)}\big(\psi(0)+\varepsilon\mathbf{1}_m\big),\quad t\in[t_0,t_f).$$
Letting $t\uparrow t_f$, we again have
$$y^\varepsilon(t_f) \succeq e^{-D_\zeta(t_f-t_0)}\big(\psi(0)+\varepsilon\mathbf{1}_m\big) \succeq \varepsilon e^{-D_\zeta(t_f-t_0)}\mathbf{1}_m \succ 0. \tag{4.24}$$
From (4.23) and (4.24) we finally obtain
$$\mathcal{Y}^\varepsilon(t_f) \succeq \varepsilon e^{-\mathrm{diag}\{D_\zeta,D_\beta\}(t_f-t_0)}\mathbf{1}_{2m} \succ 0,$$
which contradicts (4.22). It can be deduced from this contradiction that $\mathcal{Y}^\varepsilon(t)\succ 0$ and thus $\mathcal{X}^\varepsilon(t)\succ 0$ for $t\ge t_0$. Letting $\varepsilon\downarrow 0$, we obtain $\chi(t)=\lim_{\varepsilon\downarrow 0}\mathrm{col}(\mathcal{X}^\varepsilon(t),\mathcal{Y}^\varepsilon(t))\succeq 0$.

4.4 Exponential stability of positive equilibrium point of inertial BAM neural networks

In this section, by utilizing homeomorphism theory, we establish conditions for the existence and uniqueness of an EP for system (4.3) with respect to any input vector $\mathrm{col}(I,J)\in\mathbb{R}^{n+m}$.

Lemma 4.4.1. For given positive matrices $D_\xi$, $D_\eta$, $D_\zeta$ and $D_\mu$, a vector $X^*=\mathrm{col}(x^*,y^*)$ is an EP of system (4.3) if and only if the vector $\chi^*=\mathrm{col}(x^*,\hat x^*,y^*,\hat y^*)$, where $\hat x^*=D_\eta^{-1}D_\xi x^*$ and $\hat y^*=D_\mu^{-1}D_\zeta y^*$, is an EP of system (4.5). In other words, the vector $\chi^*$ satisfies the algebraic system (4.25) obtained by setting the right-hand side of (4.5) to zero. These facts verify the equivalence of (4.16) and (4.25).

Consider a continuous mapping $\mathcal{Q}:\mathbb{R}^{2(n+m)}\to\mathbb{R}^{2(n+m)}$, $\chi=\mathrm{col}(x,\hat x,y,\hat y)\mapsto\mathcal{Q}(\chi)$, defined by (4.26), where $D_{\alpha\xi}=D_\alpha D_\xi$, $D_{\alpha\eta}=D_\alpha D_\eta$, $D_{\beta\zeta}=D_\beta D_\zeta$ and $D_{\beta\mu}=D_\beta D_\mu$. In view of (4.25) and (4.26), for given input vectors $I\in\mathbb{R}^n$ and $J\in\mathbb{R}^m$, an EP of (4.5) exists if and only if the equation $\mathcal{Q}(\chi)=0$ admits at least one solution $\chi^*=\mathrm{col}(x^*,\hat x^*,y^*,\hat y^*)$.

Theorem 4.4.1. Assume that there exist vectors $\Lambda\in\mathbb{R}^{2n}$, $\Upsilon\in\mathbb{R}^{2m}$, $\Lambda\succ 0$ and $\Upsilon\succ 0$, such that
$$\Phi^\top\begin{pmatrix}\Lambda\\ \Upsilon\end{pmatrix} \succ 0. \tag{4.27}$$
Then, for a given input vector $\mathcal{I}=\mathrm{col}(I,J)\in\mathbb{R}^{n+m}$, there exists a unique EP $\chi^*=\mathrm{col}(x^*,\hat x^*,y^*,\hat y^*)$ of system (4.5).

Proof. According to (4.26), we decompose the mapping $\mathcal{Q}(\chi)$ as $\mathcal{Q}(\chi)=(\mathcal{Q}_k(\chi))$, $k=1,2,3,4$. Then, for any two vectors $\chi_1=\mathrm{col}(x_1,\hat x_1,y_1,\hat y_1)$ and $\chi_2=\mathrm{col}(x_2,\hat x_2,y_2,\hat y_2)$, it can be verified that
$$\mathcal{S}(x_1-x_2)\big(\mathcal{Q}_1(\chi_1)-\mathcal{Q}_1(\chi_2)\big) \preceq -D_\xi|x_1-x_2| + D_\eta|\hat x_1-\hat x_2|, \tag{4.28}$$
where $\mathcal{S}(v)=\mathrm{diag}\{\mathrm{sgn}(v_i)\}$ denotes the diagonal matrix formulated by the sign vector of $v=(v_i)$. In addition, since

the difference $\mathcal{Q}_2(\chi_1)-\mathcal{Q}_2(\chi_2)$ contains the term $(R+S)(f(y_1)-f(y_2))$, by assumption (D) and the fact that $D_{\alpha\xi}-C=D_\gamma D_\eta\succ 0$, we obtain a corresponding estimate (4.29). Similar to (4.28) and (4.29), we also obtain analogous estimates (4.30) for the remaining components.

By combining (4.28)-(4.30), we then obtain the estimate (4.33).

Let $0\prec\Lambda\in\mathbb{R}^{2n}$, $0\prec\Upsilon\in\mathbb{R}^{2m}$ be feasible vectors of (4.27) and $E=\mathrm{col}(\Lambda,\Upsilon)$. If $\mathcal{Q}(\chi_1)-\mathcal{Q}(\chi_2)=0$ then, by (4.33), $E^\top\Phi|\chi_1-\chi_2|=0$ and hence $\chi_1=\chi_2$ due to (4.27). This shows that the mapping $\mathcal{Q}$ is injective on $\mathbb{R}^{2(n+m)}$. In addition, for any $\chi\in\mathbb{R}^{2(n+m)}$, we have
$$\|\mathcal{Q}(\chi)\|_\infty \ge \frac{1}{2(n+m)\|E\|_\infty}E^\top\Phi|\chi| - \|\mathcal{Q}(0)\|_\infty.$$
For any sequence $\{\chi_k\}\subset\mathbb{R}^{2(n+m)}$ with $\|\chi_k\|_\infty\to+\infty$, it is clear from the above inequality that $\|\mathcal{Q}(\chi_k)\|_\infty\to+\infty$. Therefore, the mapping $\mathcal{Q}$ is proper. By Lemma 1.5.3, $\mathcal{Q}$ is a homeomorphism on $\mathbb{R}^{2(n+m)}$. Thus, the equation $\mathcal{Q}(\chi)=0$ possesses a unique solution $\chi^*\in\mathbb{R}^{2(n+m)}$, which is the unique EP of system (4.5). The proof is completed.

Remark 4.4.1. Subject to condition (4.17), the feasibility of (4.27) is independent of the choice of the scalars $\eta_i$, $\xi_i$ and $\mu_j$, $\zeta_j$ satisfying (4.19). More specifically, we state this fact in the following proposition.

Proposition 4.4.1. For any scalars $\eta_i$, $\xi_i$ and $\mu_j$, $\zeta_j$ satisfying condition (4.19), condition (4.27) is feasible for a positive vector $0\prec E=\mathrm{col}(\Lambda,\Upsilon)\in\mathbb{R}^{2(n+m)}$ if and only if there exist vectors $K_1\in\mathbb{R}^n$ and $K_2\in\mathbb{R}^m$, $K_1\succ 0$, $K_2\succ 0$, satisfying condition (4.34).

Proof. For the necessity, write $E=\mathrm{col}(\Lambda_1,\Lambda_2,\Upsilon_1,\Upsilon_2)$, where $\Lambda_1,\Lambda_2\in\mathbb{R}^n$ and $\Upsilon_1,\Upsilon_2\in\mathbb{R}^m$. It follows from (4.27) that $\Phi^\top E\succ 0$, which leads to the corresponding componentwise inequalities. This shows that condition (4.34) is feasible for $K_1=\Lambda_2$ and $K_2=\Upsilon_2$.

For the sufficiency, for any $\varepsilon\in\big(0,\min_{i,j}\{\sqrt{a_i^2-4c_i},\sqrt{b_j^2-4d_j}\}\big)$, we define the scalars $\xi_i=\xi_i^r-\varepsilon$ and $\zeta_j=\zeta_j^r-\varepsilon$; then we have $\alpha_i=\xi_i^l+\varepsilon$ and $\beta_j=\zeta_j^l+\varepsilon$. Let $K=\mathrm{col}(K_1,K_2)$, where $0\prec K_1\in\mathbb{R}^n$, $0\prec K_2\in\mathbb{R}^m$, be a feasible solution of (4.34). We now define the vectors $L_1=D^l_\xi K_1$ and $L_2=D^l_\zeta K_2$, where $D^l_\xi=\mathrm{diag}\{\xi_i^l\}$, $D^l_\zeta=\mathrm{diag}\{\zeta_j^l\}$. It can be verified that the corresponding residual terms tend to $0$ as $\varepsilon\to 0$, where $E_n$ denotes the identity matrix in $\mathbb{R}^{n\times n}$. Thus, there exists a sufficiently small $\varepsilon>0$ such that $\xi_i$, $\zeta_j$ satisfy (4.19) and the two required conditions hold. By selecting $E=\mathrm{col}(L_1,K_1,L_2,K_2)$, we have $\Phi^\top E\succ 0$, as required in (4.27).

Remark 4.4.2. Under the assumptions of Theorem 4.3.1, condition (4.34) holds if and only if a certain matrix built from the system parameters is a nonsingular M-matrix. By this fact, the existence of a unique EP of model (4.5) can be checked by various equivalent conditions for nonsingular M-matrices [12].
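As a hypothetical illustration of Remark 4.4.2, two of the standard equivalent tests for a nonsingular M-matrix can be compared numerically; the Z-matrix below is illustrative and not taken from the thesis.

```python
import numpy as np

def is_nonsingular_M_matrix(M):
    """Two standard equivalent tests for a Z-matrix M (nonpositive off-diagonal):
    (i) all leading principal minors are positive;
    (ii) every eigenvalue of M has positive real part."""
    n = M.shape[0]
    minors_ok = all(np.linalg.det(M[:k, :k]) > 0 for k in range(1, n + 1))
    eig_ok = bool(np.all(np.linalg.eigvals(M).real > 0))
    return minors_ok, eig_ok

# Hypothetical Z-matrix: identity minus nonnegative interconnection gains
M = np.eye(3) - np.array([[0.0, 0.3, 0.2],
                          [0.1, 0.0, 0.4],
                          [0.2, 0.2, 0.0]])
print(is_nonsingular_M_matrix(M))   # (True, True): both tests agree
```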

4.4.2 Exponential stability of positive EP of INNs

The result of Theorem 4.4.1 guarantees that, for any input vector $\mathcal{I}=\mathrm{col}(I,J)\in\mathbb{R}^{n+m}$,
