
Ebook Control theory and systems biology: Part 2


DOCUMENT INFORMATION

Number of pages: 202
File size: 4.9 MB

Contents

Part 2 of the book Control Theory and Systems Biology includes the following contents: a control-theoretic interpretation of metabolic control analysis, structural robustness of biochemical networks, robustness of oscillations in biological systems, a theory of approximation for stochastic biochemical processes, and other topics.

8 A Control-Theoretic Interpretation of Metabolic Control Analysis

Brian P. Ingalls

In this chapter, the main results of metabolic control analysis (MCA) are reinterpreted from the point of view of engineering control theory. To begin, the standard model of metabolic systems is identified as redundant in both state dynamics and input effects. A key feature of these systems is that, whereas the dynamics are typically nonlinear, these redundancies appear linearly, through the stoichiometry matrix. This means that the effect of the input can be linearly decomposed into a component driving the state and a component driving the output. A statement of this separation principle is shown to be equivalent to the main theorems of MCA. Presenting a control-theoretic treatment of stoichiometric systems, the chapter arrives at an alternative derivation of some of the fundamental results in the theory of control of biochemical systems.

8.1 Background

Biochemical mechanisms for implementation of feedback control were first discovered in the biosynthetic pathways of metabolism (Pardee and Reddy, 2003), and it was within the study of metabolism that a quantitative theory of the control and regulation of biochemical networks was first developed. In the 1970s, researchers on both sides of the Atlantic, led by Michael Savageau in the United States and by Henrik Kacser and Reinhart Heinrich in Europe, elucidated theoretical frameworks for addressing issues of regulation in metabolic networks. A fundamental tool used by both groups was local parametric sensitivity analysis, applied primarily at steady state. The European camp, whose theory was dubbed metabolic control analysis (MCA), or sometimes metabolic control theory (MCT), made use of a standard linearization technique in addressing steady-state behavior (Heinrich and Rapoport, 1974a,b; Kacser and Burns, 1973). Savageau's work, known as biochemical systems theory (BST), makes use of a more sophisticated log linearization that provides an improved approximation of nonlinear dynamics (Savageau, 1976). With respect to local parametric sensitivity analysis, the two approaches yield identical results.

The analysis in the present chapter follows the linearization method used in metabolic control analysis, which provides a direct connection between these biochemical studies and the general theory of local parametric sensitivity analysis. Moreover, linearization leaves intact the stoichiometric relationships that are exploited in studies of these networks. Indeed, as will be shown below, it is this stoichiometric nature that distinguishes the mathematics of metabolic control analysis from that of standard sensitivity analysis. As first shown by Reder (1988), an application of some basic linear algebra provides an extension of sensitivity analysis that captures the features of stoichiometry. Beyond these mathematical underpinnings, the field of metabolic control analysis deals with myriad intricacies of application to biochemical networks that demand careful interpretation of experimental and theoretical results (surveyed in Fell, 1992, 1997; Heinrich and Schuster, 1996).

Local parametric sensitivity analysis addresses the behavior of dynamical systems under small perturbations in system parameters. Such analysis plays an important role in control theory, and several texts on sensitivity analysis have been written with control applications in mind (see, for example, Frank, 1978; Rosenwasser and Yusupov, 2000; Tomović, 1963; and Varma et al., 1999).
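The sketch below (not taken from the chapter) illustrates local parametric sensitivity analysis at steady state for a hypothetical two-step pathway with made-up rate laws and parameter values: the steady-state sensitivities follow from the implicit function theorem as ds*/dp = -(df/ds)^{-1} (df/dp).

```python
import numpy as np

# Hypothetical two-species pathway:  -> S1 -> S2 ->
# with rates v1 = p1, v2 = p2*S1, v3 = p3*S2 (illustrative assumptions only).
p = np.array([1.0, 2.0, 4.0])            # p1, p2, p3

N = np.array([[1, -1,  0],
              [0,  1, -1]])

def f(s, p):
    """Right-hand side ds/dt = N v(s, p)."""
    v = np.array([p[0], p[1]*s[0], p[2]*s[1]])
    return N @ v

# Steady state (known analytically for this toy model)
s_ss = np.array([p[0]/p[1], p[0]/p[2]])
print("residual at steady state:", f(s_ss, p))   # ~ zero

# Jacobians of f with respect to state and parameters, at the steady state
dfds = np.array([[-p[1],   0.0],
                 [ p[1], -p[2]]])
dfdp = np.array([[1.0, -s_ss[0],      0.0],
                 [0.0,  s_ss[0], -s_ss[1]]])

# Implicit-function theorem: ds*/dp = -(df/ds)^{-1} (df/dp)
S = -np.linalg.solve(dfds, dfdp)
print(S)   # each column: sensitivity of (S1*, S2*) to one parameter
```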
The analysis in this chapter is based on the standard ordinary differential equation-based description of biochemical systems (chapter 1), in which the states are the concentrations of the chemical species involved in the network and the inputs are parameters influencing the reaction rates. In addressing metabolic systems, researchers commonly take enzyme activity as the parameter input. This choice of input channel typically results in an overactuated system—with more inputs than states. Additionally, the reaction rates are important outputs. Because they depend directly on the parameter inputs, these rates enjoy some autonomy from the state dynamics and can, to a degree, be manipulated separately.

The discussion that follows highlights a procedure for making explicit the separation between manipulating metabolite concentrations, on the one hand, and reaction rates, on the other, which complements investigations of metabolic "redesign" that have appeared in the literature (Dean and Dervakos, 1998; Hatzimanikatis et al., 1996; Torres and Voit, 2002). Within the metabolic control analysis community, a significant step in this direction was taken by Kacser and Acerenza (1993), who described a "universal method" for altering pathway flux. Later, the goal of increasing specific metabolite concentrations was taken up by Kacser and Small (1994). A local description of the combined problem was given by Westerhoff and Kell (1996). These results can all be seen as contained within the "metabolic design" approach described by Kholodenko et al. (1998, 2000). In the sections that follow, equivalent results are derived from a control-engineering viewpoint, culminating in a control-theoretic interpretation of the main results of metabolic control analysis: the summation and connectivity theorems.

8.2 Redundancy in Control Engineering

The results presented here are a consequence of redundancies that appear in stoichiometric systems. Before addressing these, let us briefly review the standard manner in which such redundancies are treated in control engineering.

8.2.1 State Redundancy: Nonminimal Realizations

Recall from an earlier chapter the standard description of a linear, time-invariant system:

  d/dt x(t) = A x(t) + B u(t),    (8.1a)
  y(t) = C x(t) + D u(t),    (8.1b)

where x ∈ R^{n0}, u ∈ R^{m0}, y ∈ R^{p0}, and A, B, C, and D are constant matrices of the appropriate dimensions. In systems theory, one is often interested primarily in the input-output behavior associated with this system, characterized by the output trajectories that arise from various choices of the input u(·) with initial condition x(0) = 0. Given a particular system of the form (8.1), the associated input-output behavior can be equally generated from a whole class of systems of this form. That is, the representation, or realization, of these input-output behaviors is not unique. A realization is said to be minimal if there are no alternative systems of smaller order that represent the same behavior. Nonminimal realizations exhibit redundancy (typically due to a symmetry or to decoupled behavior); they can be improved by removal of the redundant components. A simple instance of nonminimality is when there is a redundancy among the state variables, regardless of the input or output structure. Biochemical systems typically exhibit such simple redundancies, as will be seen in section 8.3.
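As a concrete illustration of nonminimality, the sketch below builds a hypothetical three-state LTI system in which the third state is simply the sum of the first two, and applies the standard Kalman rank tests. The matrices are invented for illustration; the rank deficiencies confirm that a two-state realization reproduces the same input-output behavior.

```python
import numpy as np

# Sketch (not from the chapter): a 3-state LTI system whose third state
# satisfies x3 = x1 + x2, i.e. a redundant state variable.
A = np.array([[-1.0,  0.0, 0.0],
              [ 1.0, -2.0, 0.0],
              [ 0.0, -2.0, 0.0]])
B = np.array([[1.0],
              [0.0],
              [1.0]])
C = np.array([[0.0, 1.0, 0.0]])

n = A.shape[0]

# Kalman tests: a realization is minimal iff it is controllable and observable.
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

print("controllability rank:", np.linalg.matrix_rank(ctrb))   # 2 < 3
print("observability rank:  ", np.linalg.matrix_rank(obsv))   # 2 < 3
# Both ranks fall short of n = 3, so this realization is nonminimal: the same
# input-output behavior can be generated by a 2-state system.
```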
8.2.2 Input Redundancy: Overactuation

In control engineering, much effort has gone into the analysis of system (8.1) in the underactuated case (n0 > m0), where one attempts to manipulate a system for which there are fewer input channels than degrees of freedom. In the case that the number of input channels equals the number of degrees of freedom (n0 = m0), the system is fully actuated, and much of that analysis is trivial. Finally, if n0 < m0, the system is overactuated, in which case a redundancy in the control inputs presents an embarrassment of riches to the control designer; the state dynamics can be controlled without completely specifying the input. The additional degrees of freedom in the input can then be used to meet further performance criteria (Härkegård and Glad, 2005).

In the overactuated case, system (8.1) can be treated as follows. For simplicity, take the case that B has rank n0 (so there are exactly m0 − n0 redundancies among the inputs). Because B does not have full column rank, it can be factored as B = B_0 B_1, where B_0 is n0 × n0 and has full rank, while B_1 is n0 × m0 and has rank n0. The control input u can then be mapped to a virtual control input ũ ∈ R^{n0} by ũ = B_1 u, resulting in the fully actuated system

  d/dt x(t) = A x(t) + B_0 ũ(t),

where two different control inputs u_1 and u_2 whose difference lies in the nullspace of B_1 (and hence of B) have an identical effect on the state dynamics, because they give rise to the same virtual input ũ. This redundancy can be made explicit by writing u as the sum of two terms that lie inside and outside of the nullspace of B_1, respectively:

  u(t) = K a_1(t) + M a_2(t),

where the columns of matrix K form a basis for the nullspace of B_1 and the columns of M are linearly independent of one another and of the columns of K. Through this decomposition, the state dynamics can be manipulated by the choice of a_2(·), while a_1(·) can be chosen to satisfy other design criteria. In particular, if the system output involves a feedthrough term (that is, D in system (8.1) is nonzero), then the choice of a_1 may reveal itself in the output. Stoichiometric systems, as defined in the next section, have this property, allowing the separate design of strategies for controlling state and output behavior.

8.3 Stoichiometric Systems

Consider n chemical species involved in m reactions in a fixed volume. The concentrations of the species make up the n-dimensional vector s. The rates of the reactions are the elements of the m-vector v. These rates depend on the species concentrations and on a set of parameter inputs that are collected into vector p. The network topology is described by the n × m stoichiometry matrix N, whose (i, j)th element indicates the net number of molecules of species i produced in reaction j (negative values indicate consumption). The system dynamics are described by

  d/dt s(t) = N v(s(t), p(t)),  for all t ≥ 0.    (8.2)

In addition to the state, s(·), and the input p(·), the variables of primary interest in this system are the reaction rates v(s, p). Thus, in interpreting (8.2) as a control system, we will choose the vector of reaction rates as the system output:

  y(s, p) = v(s, p).    (8.3)

Systems of the form of equations (8.2) and (8.3) can be defined as stoichiometric systems precisely because the reaction rates v in (8.2) are outputs of interest. As will be shown below, the structure of the stoichiometry matrix can be exploited to yield insights into the behavior of the concentration and reaction rate variables.
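A minimal sketch of a system in the form of equations (8.2)–(8.3) follows, using a hypothetical four-reaction chain with made-up mass-action rate laws; it is intended only to show how the stoichiometry matrix N and the rate vector v(s, p) define the state dynamics and the rate output.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical pathway ( -> S1 -> S2 -> S3 -> ) as a stoichiometric system
# ds/dt = N v(s, p) with rate output y = v(s, p).
N = np.array([[1, -1,  0,  0],
              [0,  1, -1,  0],
              [0,  0,  1, -1]])          # n = 3 species, m = 4 reactions

p = np.array([1.0, 2.0, 1.5, 3.0])       # parameter inputs (enzyme activities)

def v(s, p):
    """Reaction rates: constant supply, then first-order consumption steps."""
    return np.array([p[0], p[1]*s[0], p[2]*s[1], p[3]*s[2]])

def rhs(t, s):
    return N @ v(s, p)

sol = solve_ivp(rhs, (0.0, 20.0), y0=[0.0, 0.0, 0.0])

s_final = sol.y[:, -1]
print("concentrations:", s_final)        # approaches the steady state
print("rate output y :", v(s_final, p))  # all rates settle to the supply flux p1
```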
The key to exploiting the stoichiometric structure of (8.2) is to describe how dependencies among the rows and columns of N have consequences for the input-output behavior of the system.

Linearly dependent rows within the stoichiometry matrix correspond to integrals of motion of the system: quantities that do not change with time. Each redundant row identifies a chemical species whose dynamics are completely determined by the behavior of other species in the system. Biochemically, such structural constraints most often appear as conserved moieties, where the concentration of some species is a function of the concentration of others due to a chemical conservation. (A simple example is a system that models the interconversion of two chemical species A and B, but does not incorporate the production or consumption of either species. In this case, the total concentration [A] + [B] is conserved.) An extensive theory has been developed to determine preferred conservation relations from algebraic descriptions of the system network (section 3.1 in Heinrich and Schuster, 1996).

The consequences of linear dependence among the columns of N will be explored below. If the stoichiometry matrix has full column rank, then steady state can only be attained when v(s, p) = 0. Biochemical systems typically admit steady states in which there is a nonzero flux through the network. These correspond to reaction rate vectors v that lie in the nullspace of N. The dimension of this nullspace determines the number of degrees of freedom in these steady-state reaction profiles.

8.4 Rank Deficiencies

Networks that describe metabolic systems often have highly redundant stoichiometries. As an example, consider a metabolic map from Escherichia coli published by Reed et al. (2003) that has a 770 × 931 stoichiometry matrix of rank 733. Clearly, in attempting an analysis of such a system, it is worthwhile to begin with a reduction afforded by linear dependence.

8.4.1 Deficiencies in Row Rank

As mentioned, structural conservations in the reaction network reveal themselves as linear dependencies among the rows of the stoichiometry matrix N. Let r denote the rank of N. Following Reder (1988), we relabel the species so that the first r rows of N are independent. The species concentration vector can then be partitioned as

  s = [s_i; s_d],

where s_i ∈ R^r is the vector of independent species and s_d ∈ R^{n−r} contains the dependent species. Next, we partition N into two submatrices. Calling the first r rows N_R, we can write N = L N_R, where the matrix L, referred to as the row link matrix, has the form

  L = [I_r; L_0].
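The row partition can be computed mechanically. The sketch below (using the textbook Michaelis–Menten mechanism as a stand-in network, not an example from the chapter) finds r = rank N, selects a set of independent species, and builds the row link matrix L from N = L N_R; the dependent block then encodes the conservation relations.

```python
import numpy as np

# Hypothetical network: E + S <-> ES -> E + P, species ordered (E, S, ES, P),
# reactions (binding, unbinding, catalysis).
N = np.array([[-1,  1,  1],    # E
              [-1,  1,  0],    # S
              [ 1, -1, -1],    # ES
              [ 0,  0,  1]])   # P
n, m = N.shape
r = np.linalg.matrix_rank(N)                 # here r = 2 < n = 4

# Greedily pick r independent rows (the "independent species" s_i);
# the remaining rows are the dependent species s_d.
indep = []
for i in range(n):
    if np.linalg.matrix_rank(N[indep + [i], :]) > len(indep):
        indep.append(i)
dep = [i for i in range(n) if i not in indep]

NR = N[indep, :]                             # r x m, full row rank
# Row link matrix L with N = L @ NR (rows reordered as [independent; dependent])
L = np.vstack([np.eye(r), N[dep, :] @ np.linalg.pinv(NR)])
L0 = L[r:, :]

print("independent species:", indep, " dependent:", dep)
print("L0 =\n", L0)
# Conservation relations follow from d/dt (s_d - L0 @ s_i) = 0; for this
# network the familiar constants of motion are E + ES and S + ES + P.
```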
System (8.2) can then be written as

  d/dt [s_i(t); s_d(t)] = [I_r; L_0] N_R v(s(t), p(t)).

It follows that

  d/dt s_d(t) = L_0 d/dt s_i(t),  for all t ≥ 0.

Integrating gives s_d(t) = L_0 s_i(t) + T̃ for all time, where T̃ = s_d(0) − L_0 s_i(0). Finally, concatenating T̃ with 0_r ∈ R^r, we define T = [0_r^T, T̃^T]^T, and write

  s(t) = L s_i(t) + T.    (8.4)

As a consequence of this decomposition, attention can be restricted to a reduced version of (8.2), namely,

  d/dt s_i(t) = N_R v(L s_i(t) + T, p(t)).    (8.5)

It follows that the n-dimensional state enjoys only r degrees of freedom, because the n − r dependent species are fixed by the behavior of the r independent species. From an input-output perspective, we conclude that, provided r < n, the original description in terms of n state variables is a nonminimal realization of the system's input-output behavior, regardless of the form of the reaction rates.

8.4.2 Deficiencies in Column Rank

Recalling that r denotes the rank of the stoichiometry matrix N, we relabel the reactions so that the first m − r columns of N are linearly dependent on the remaining r. We partition the vector of reaction rates v correspondingly into m − r independent (v_i) and r dependent (v_d) rates as

  v = [v_i; v_d].

Following the procedure outlined above, one might hope to reach a reduced description of the system dynamics in which some of these reaction rates are eliminated, but this is an impossible task. Such an elimination could, for instance, decouple an input channel from the dynamics. As with the construction of the row link matrix, we let N_C denote the submatrix of N consisting of the last r columns, from which N can be recovered as N = N_C P, where the column link matrix P is of the form

  P = [P_0  I_r].

The column link matrix can be determined by constructing a matrix of the form

  K = [I_{m−r}; −P_0],

whose columns form a basis for the nullspace of P, and hence of N. To realize an alternative system description, we write

  d/dt s(t) = N v(s(t), p(t)) = N_C P v(s(t), p(t)) = N_C [P_0  I_r] v(s(t), p(t)).    (8.6)

At steady state, this factored description reveals a dependence among the reaction rates. Denoting the steady-state rate vector by v^ss = J (for system flux), we have a partitioning of J into dependent and independent components:

  J = [J_i; J_d].

From equation (8.6), steady state occurs when J_d = −P_0 J_i. As described by, for example, Heinrich and Schuster (1996), this steady-state dependence can be written as

  J = K J_i = [I_{m−r}; −P_0] J_i.    (8.7)

Note that Heinrich and Schuster (1996) refer to the submatrix −P_0 as K_0. The notation proposed here is dual to the notation used in addressing row redundancy.

The partitioning of reaction rates is nonunique, and the advantages of one choice over another are not addressed here. A straightforward procedure for choosing independent reaction rates as the "entry" and "exit" points from the network is outlined by Westerhoff et al. (1994).
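The following sketch constructs P_0 and the nullspace matrix K for a hypothetical branched pathway whose columns are already ordered so that the last r are independent, and then uses equation (8.7) to generate a steady-state flux profile from a choice of independent fluxes. The network and numbers are assumptions made for illustration.

```python
import numpy as np

# Hypothetical branched pathway:
#   v1: -> S1,  v2: S1 -> S2,  v3: S1 -> S3,  v4: S2 ->,  v5: S3 ->
# Columns ordered so the last r columns of N are independent (v = [v_i; v_d]).
N = np.array([[1, -1, -1,  0,  0],
              [0,  1,  0, -1,  0],
              [0,  0,  1,  0, -1]])
n, m = N.shape
r = np.linalg.matrix_rank(N)          # r = 3, so m - r = 2 independent fluxes

NC = N[:, m - r:]                     # last r columns (invertible here)
P0 = np.linalg.solve(NC, N[:, :m - r])
P  = np.hstack([P0, np.eye(r)])       # column link matrix: N = NC @ P
K  = np.vstack([np.eye(m - r), -P0])  # basis for the nullspace of P (and of N)

print("N @ K =\n", N @ K)             # zero matrix: columns of K are steady-state modes

# Any steady-state flux J is determined by the independent fluxes J_i (eq. 8.7)
J_i = np.array([1.0, 0.4])            # choose v1 and v2
J = K @ J_i
print("steady-state flux J =", J)     # J_d = -P0 @ J_i fills in v3, v4, v5
```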
8.4.3 Complete Reduction

The two types of dependence described above lead to complementary system decompositions. Reducing the system by eliminating redundancies in rows and columns leads to an alternative description of the dynamics:

  d/dt s(t) = L N_RC P v(s(t), p(t)) = [I_r; L_0] N_RC [P_0  I_r] v(s(t), p(t)),    (8.8)

where the factored form of the original n × m stoichiometry matrix involves the invertible N_RC, defined as the upper right r × r submatrix of N.

8.5 Overactuation

We now consider the consequence of these linear dependencies on input-output behavior. To begin with, observe that, if the reaction rates were considered as inputs (that is, u = v), then, restricting to the nonredundant dynamics, system (8.5) would be an overactuated system of the form (8.1) (with A = 0, referred to as a driftless system). Identifying B with N_R, B_0 with N_RC, and B_1 with P, we could define the corresponding virtual input as ũ = Pv, and any input satisfying Pv = 0 would have no effect on the state dynamics.

Of course, because the reaction rates depend on the species concentrations, they cannot be treated directly as inputs. Nevertheless, the behavior resulting from this supposition can be realized from both biochemical and control design viewpoints. One is often interested in the case where the system inputs (to be manipulated by an experimenter or through inherent regulation) are the activity levels of the enzymes associated with the reactions in the network. In most kinetic models, each reaction rate varies linearly with the activity of the corresponding enzyme, and there is one specific enzyme associated with each reaction. In such cases, we may write for each reaction

  v_k(s, p) = p_k w_k(s),

where the function w_k is referred to as the turnover rate for reaction k. In this framework, the parameter inputs can be identified directly with the reaction rates in two ways. If one is interested in the effect of relative changes in reaction rates, then changes in the input are equivalent to changes in the reaction rate; for example, a 1% change in p_k amounts to a 1% change in v_k. Alternatively, one can follow a standard procedure in control engineering known as input redefinition by setting

  ũ_k(t) = p_k(t) w_k(s(t)),

so the system dynamics become simply

  d/dt s(t) = N ũ(t).

The system overactuation can then be analyzed as follows. Any change in ũ that lies in the nullspace of the stoichiometry matrix N, or equivalently of the column link matrix P, will have no effect on state dynamics; the redefined input can be decomposed into a component that lies in the nullspace of P and another that does not, as discussed in section 8.2.2. Recall that the columns of K form a basis for the nullspace of P. We take M to be an independent extension of the columns of K to a basis for R^m. Then we can decompose

  ũ(t) = K a_v(t) + M a_s(t),    (8.9)

where a_v ∈ R^{m−r} and a_s ∈ R^r, and where an input ũ has no effect on the state exactly when a_s = 0. Because this holds regardless of the choice of M, the question arises as to which form of M will make the decomposition most useful. We will consider two alternatives.

8.5.1 Input Decomposition: General Dynamics

We first take

  M = M̄ = [0_{(m−r)×r}; (N_RC)^{−1}].

With this decomposition in place, the independent state dynamics take the form

  d/dt s_i(t) = N_R ũ(t)
             = N_RC P (K a_v(t) + M̄ a_s(t))
             = N_RC P M̄ a_s(t)
             = N_RC [P_0  I_r] [0_{(m−r)×r}; (N_RC)^{−1}] a_s(t)
             = N_RC (N_RC)^{−1} a_s(t)
             = a_s(t),    (8.10)

indicating that concentration dynamics are manipulated directly by the choice of the coefficients of a_s(·).
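Continuing the same hypothetical branched pathway (which happens to have no row redundancy, so N_R = N), the sketch below forms K and M̄ and verifies numerically that the a_v component of the redefined input is invisible to the state dynamics, while the a_s component drives the independent concentrations one-for-one as in equation (8.10).

```python
import numpy as np

# Continuation of the branched-pathway sketch (hypothetical network).
N = np.array([[1, -1, -1,  0,  0],
              [0,  1,  0, -1,  0],
              [0,  0,  1,  0, -1]])
n, m = N.shape
r = np.linalg.matrix_rank(N)

NRC  = N[:, m - r:]                   # rank(N) = n here, so N_R = N
P0   = np.linalg.solve(NRC, N[:, :m - r])
K    = np.vstack([np.eye(m - r), -P0])                   # null(P) = null(N)
Mbar = np.vstack([np.zeros((m - r, r)), np.linalg.inv(NRC)])

# Decompose a redefined input u~ = K a_v + Mbar a_s (eq. 8.9)
a_v = np.array([0.7, -0.2])           # shapes only the reaction rates
a_s = np.array([0.1, 0.0, -0.3])      # drives the independent concentrations
u_tilde = K @ a_v + Mbar @ a_s

print("ds/dt = N u~ :", N @ u_tilde)  # equals a_s, as in eq. (8.10)
print("N @ K =\n", N @ K)             # zero: a_v is invisible to the state
print("rate output v = u~ :", u_tilde)  # a_v reappears here, since v_i = a_v
```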
The dynamics of the reaction rates, though also of interest, do not appear in such a simple form. The decomposition with M̄ leads to

  v(t) = [v_i(t); v_d(t)] = [a_v(t); −P_0 a_v(t) + (N_RC)^{−1} a_s(t)],    (8.11)

which provides a dynamic generalization of equation (8.7) and confirms that manipulation of the state variables has been decoupled from the independent reactions' rates, which can be manipulated directly through the coefficients of a_v(·).

Equations (8.10) and (8.11) indicate that, once outputs are taken into consideration, it is inappropriate to refer to the system as overactuated. Since the number of input channels (m) corresponds exactly to the number of degrees of freedom of the system (r for the independent species dynamics and m − r for the independent reaction rates), the system can be interpreted as fully actuated. An equivalent conclusion can be reached when attention is restricted to local analysis, as we next consider.

8.5.2 Input Decomposition: Local Steady-State Analysis

Fixing a particular parameter input value p^0 and a corresponding steady state s^0, which is assumed asymptotically stable, we can describe the local effect of the input on concentrations and fluxes through a linearization around this steady state. The treatment of local input response is equivalent to a local parametric sensitivity analysis. For an arbitrary input parameter vector p, if ∂v/∂p is invertible at the steady state, then changes in the parameter input can be identified with changes in the reaction rates by redefining the input as

  ũ = (∂v/∂p)(p − p^0),    (8.12)

where the derivatives are evaluated at the steady state. We require that ∂v/∂p be invertible so that we can recover p from ũ. This redefined input realizes the direct connection between rate and input in a local sense since ...

References

Vaudry, D., Stork, P. J. S., Lazarovici, P., and Eiden, L. E. (2002). Signaling pathways for PC12 cell differentiation: Making the right connections. Science 296:1648–1649.
Vayttaden, S. J., Ajay, S. M., and Bhalla, U. S. (2004). A spectrum of models of signaling pathways. ChemBioChem 5 (10): 1365–1374.
Vershik, A. M. (2006). Kantorovich metric: Initial history and little-known applications. Journal of Mathematical Sciences 133 (4): 1410–1417.
Vilar, J. M. G., Kueh, H. Y., Barkai, N., and Leibler, S. (2002). Mechanisms of noise-resistance in genetic oscillators. Proceedings of the National Academy of Sciences USA 99:5988–5992.
Villa-Komaroff, L., Efstratiadis, A., Broome, S., Lomedico, P., Tizard, R., Naber, S. P., Chick, W. L., and Gilbert, W. (1978). A bacterial clone synthesizing proinsulin. Proceedings of the National Academy of Sciences USA 75:3727–3731.
Wagner, A. (2005). Circuit topology and the evolution of robustness in two-gene circadian oscillators. Proceedings of the National Academy of Sciences USA 102:11775–11780.
Wang, L., and Sontag, E. D. (2008). Singularly perturbed monotone systems and an application to double phosphorylation cycles. Journal of Nonlinear Science 18 (5): 527–550.
Wang, X., Hao, N., Dohlman, H. G., and Elston, T. C. (2006). Bistability, stochasticity, and oscillations in the mitogen-activated protein kinase cascade. Biophysical Journal 90 (6): 1961–1978.
Westerhoff, H. V., and Chen, Y.-D. (1984). How do enzyme activities control metabolite concentrations? An additional theorem in the theory of metabolic control. European Journal of Biochemistry 142:425–430.
Westerhoff, H. V., Hofmeyr, J.-H., and Kholodenko, B. N. (1994). Getting to the inside of cells using metabolic control analysis. Biophysical Chemistry 50:273–283.
Westerhoff, H. V., and Kell, D. B. (1996). What biotechnologists knew all along...? Journal of Theoretical Biology 182:411–420.
Whittle, P. (1957). On the use of the normal approximation in the treatment of stochastic processes. Journal of the Royal Statistical Society B 19:268–281.
Widmann, C., Gibson, S., Jarpe, M. B., and Johnson, G. L. (1999). Mitogen-activated protein kinase: Conservation of a three-kinase module from yeast to human. Physiological Reviews 79 (1): 143–180.
Wiener, N. (1948). Cybernetics; or, Control and Communication in the Animal and the Machine. New York: Wiley.
Wilhelm, T., Behre, J., and Schuster, S. (2004). Analysis of structural robustness of metabolic networks. Systems Biology 1 (1): 114–120.
Willems, J. C. (1999). Behaviors, latent variables, and interconnections. Systems, Control and Information 43 (9): 453–464.
Williams, R. S. B., Boeckeler, K., Graf, R., Muller-Taubenberger, A., Li, Z., Isberg, R. R., Wessels, D., Soll, D. R., Alexander, H., and Alexander, S. (2006). Towards a molecular understanding of human diseases using Dictyostelium discoideum. Trends in Molecular Medicine 12 (9): 415–424.
Winfree, A. T. (2001). The Geometry of Biological Time. 2nd ed. New York: Springer.
Wolpert, L. (1969). Positional information and the spatial pattern of cellular differentiation. Journal of Theoretical Biology 25:1–47.
Wong, W. W., Tsai, T. Y., and Liao, J. C. (2007). Single-cell zeroth-order protein degradation enhances the robustness of synthetic oscillator. Molecular Systems Biology 3:130.
Xia, X., and Moog, C. H. (2003). Identifiability of nonlinear systems with application to HIV/AIDS models. IEEE Transactions on Automatic Control 48 (2): 330–336.
Xia, X., and Zeitz, M. (1997). On nonlinear continuous observers. International Journal of Control 66:943–954.
Xiong, W., and Ferrell, J. E., Jr. (2003). A positive-feedback-based bistable memory module that governs a cell fate decision. Nature 426:460–465.
Yen, J., Randolph, D., Liao, J. C., and Lee, B. (1995). A hybrid approach to modeling metabolic systems using genetic algorithm and simplex method. In Proceedings of the 11th Conference on Artificial Intelligence for Applications, 277–283. Benalmadena, Spain: IEEE Computer Society Press.
Yoda, M., Ushikubo, T., Inoue, W., and Sasai, M. (2007). Roles of noise in single and coupled multiple genetic oscillators. Journal of Chemical Physics 126:115101.
Zak, D. E., Gonye, G. E., Schwaber, J. S., and Doyle, F. J., III (2003). Importance of input perturbations and stochastic gene expression in the reverse engineering of genetic regulatory networks: Insights from an identifiability analysis of an in silico network. Genome Research 13 (11): 2396–2405.
Zevedei-Oancea, I., and Schuster, S. (2003). Topological analysis of metabolic networks based on Petri net theory. In Silico Biology 3:0029.

Contributors

David Angeli, Department of Systems and Information, University of Florence, angeli@dsi.unifi.it
Declan G. Bates, Control and Instrumentation Research Group, Department of Engineering, University of Leicester, dgb3@leicester.ac.uk
Eric Bullinger, Industrial Control Centre, Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow, eric.bullinger@eee.strath.ac.uk
Peter S. Chang, Department of Chemical Engineering, University of California, Santa Barbara, psochang@engr.ucsb.edu
Domitilla Del Vecchio, Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, ddv@umich.edu
Francis J. Doyle III, Department of Chemical Engineering, University of California, Santa Barbara, doyle@engineering.ucsb.edu
Hana El-Samad, Department of Biochemistry and Biophysics, University of California, San Francisco,
helsamad@biochem.ucsf.edu Dirk Fey Industrial Control Centre Department of Electronic and Electrical Engineering University of Strathclyde, Glasgow dirk.fey@eee.strath.ac.uk Rolf Findeisen Institute for Automation Engineering Otto-von-Guericke University, Magdeburg rolf.findeisen@ovgu.de Simone Frey Systems Biology and Bioinformatics Group Department of Computer Science University of Rostock frey@informatik.uni-rostock.de Jorge Gonc¸alves Control Group Department of Engineering University of Cambridge jmg77@cam.ac.uk Pablo A Iglesias Department of Electrical and Computer Engineering Johns Hopkins University, Baltimore pi@jhu.edu Brian P Ingalls Department of Mathematics University of Waterloo bingalls@math.uwaterloo.ca 336 Elling W Jacobsen Automatic Control Lab School of Electrical Engineering, KTH Royal Institute of Technology, Stockholm jacobsen@s3.kth.se Mustafa Khammash Department of Mechanical Engineering University of California, Santa Barbara khammash@engineering.ucsb.edu Jongrae Kim Department of Aerospace Engineering University of Glasgow jkim@aero.gla.ac.uk Eric Klavins Department of Electrical Engineering University of Washington, Seattle klavins@ee.washington.edu Eric C Kwei Department of Chemical Engineering University of California, Santa Barbara kwei@engineering.ucsb.edu Thomas Millat Systems Biology and Bioinformatics Group Department of Computer Science University of Rostock thomas.millat@informatik.unirostock.de Jason E Shoemaker Department of Chemical Engineering University of California, Santa Barbara jshoe@engineering.ucsb.edu Eduardo D Sontag Department of Mathematics Rutgers, State University of New Jersey, Piscataway sontag@math.rutgers.edu Stephanie R Taylor Department of Chemical Engineering University of California, Santa Barbara staylor@cs.ucsb.edu David Thorsley Department of Electrical Engineering Contributors University of Washington, Seattle thorsley@u.washington.edu Camilla Trane´ Automatic Control Lab School of Electrical Engineering, KTH Royal Institute of Technology, Stockholm camilla.trane@ee.kth.se Sean Warnick Department of Computer Science Brigham Young University, Provo, Utah sean.warnick@gmail.com Olaf Wolkenhauer Systems Biology and Bioinformatics Group Department of Computer Science University of Rostock ow@informatik.uni-rostock.de Index A-optimality, 171, 173 ACA (adenlylyl cyclase of aggregation), 227, 235, 238 Activator, 58, 102, 105, 108–110, 114, 285 Activator-inhibitor system, 58–59, 61–63 Activator-repressor clock, 102–103, 108–114 Actuation, 89–90, 164 Actuator, 89–90 Adaptation, 85, 93 Adaptation precision, 178 Adaptation time, fragile, 178 Adenlylyl cyclase of aggregation (ACA), 227, 235, 238 Adenylyl cyclase, 226 ADP, 158–159 Advection, 47 A‰ne propensity, 39 subspace, 130 A‰nity, 102, 106, 109, 118, 119 Amplification, 70, 73, 95, 103, 104, 119, 120, 201, 206, 208, 210 Amplification factor, 73 Amplification flux, 93 Amplification gain, 119–120 Amplifier, 118, 120, 308 electronic, 119 feedback, 120 negative feedback, viii non-inverting, 101 operational or OPAMP, 103, 118, 124 Angelfish (Pomacanthus semicirculatus), 61 Apoptosis, 75, 178, 191, 195 Association, 104 constant, 76, 78, 117, 297 rate, 115 Asymptotic stability See Stability, asymptotic ATP, 15, 76, 158–160 Attractivity, 10 Attractor, 191, 195 Attractor limit cycle, 191, 195 Autocatalytic, 198 Autocorrelation, 66–67 Autorepressed circuit, 102–103 Avogadro’s number, 237 Bacteria, 48, 52, 85–86, 98–99, 102, 103, 191, 225 chemotaxis, 28 heat-shock response, viii Basin of 
attraction, 11 Bayesian network, 244, 245, 246 Becker-Weimann model, 175, 185 Bessel functions, 66 Bicoid, 49 Bifurcation, 17–19, 110, 194–196, 200, 215, 223 diagram, 17–18, 204, 205, 212, 216217, 219, 220 diÔusion-driven, 55 Hopf, 17, 111, 112, 195–200, 205–208, 211–212, 216–223 parameter, 205, 219 point, 17, 195–197, 200, 205, 211, 219 saddle node, 195–197, 215–216, 218 supercritical, 205 Bijective correspondence, 142 Bimodal probability distribution, 30 Biochemical Systems Theory, 145 BioSens, 172 Bipartite graph, 125–127, 131 Bipolar junction transistor, 103 Birhythmicity, 112 Bisimulation metric, 252–253 Bistability, 11, 30, 75, 191, 195, 214, 216, 218, 224 Black, Harold, viii bmal1, 176, 221 Bode plot, 24–25 Boltzmann’s constant, 67 Boolean component, 272, 275 network, 244, 269, 272, 274, 276, 283, 293–294 reconstruction, 280, 287, 290, 291, 292 representation, 272, 273 structure, 269, 272, 273, 276, 281–283, 290, 293 338 Boundary conditions, 47–48, 60, 66 Dirichlet, 47, 60 Neumann, 47, 60 Robin, 47 Boveri, Theodor, 49 Brownian motion, 37, 46, 67 13 C labeling, 302 cAMP, 225–228, 238–240 cAMP oscillations, 194, 226–228, 236 cAMP receptors (cARs), 226–227, 238 cAR1, 227, 238–239 Carbohydrates, 267 cARs (cAMP receptors), 226–227 CellDesigner, 74 Central limit theorem, 37 Channel communication, 90 input, 129, 146, 147, 151, 154 input-output, 180, 181 output, 112 reaction, 32 Chaotic attractors, 140 Chaperones, 86–96, 98 Characteristic equation, 12 Characteristic of a system, 134–143 Characteristic time, 70 Chemical Langevin equation, 36–37, 246 Chemical master equation (CME), 32–33, 36, 38, 43, 44, 237, 252, 267 Chemical reaction network (CRN), 127–130, 131 Chemotaxis, 28, 178, 191, 226 Chick embryo, 55 Chromatin, 54 Circadian rhythm, 30, 186, 221–223 Clock, 114 activator-repressor, 103, 108–114 behavior, 102 circadian, 175–176, 186–187, 189, 191, 195, 221– 224 control of VPAC2, 186 synthetic, 116 Cloning, 103 CME (chemical master equation), 32–33, 36, 38, 43, 44, 237, 252, 267 Coe‰cient of variation, 41 Cocci, 48 Cochaperone, 87 Coherence resonance, 31 Cold-shock response, 98 Collins toggle switch See Switch, toggle Compartment model, 45 Connectivity Theorem, 146, 158, 162, 165, 167– 168 Conservation law(s), 130, 133–134 Continuation diagram, 17 Continuous-time Markov process (CTMP), 247– 249, 253 Index Continuum hypothesis, Control coe‰cient, 164 Control law, 20 Controllability, 26–27, 142 algebraic test, 26 Cooperativity, 108 Covariance, 39, 44 matrix, 39, 41 measurement, 170 CREB, 185 CRN (chemical reaction network), 127–130, 131 Cross-talk, 72, 76, 90, 98 cry, 221 Cry1, 176 Cry2, 176 CTMP (continuous-time Markov process), 247– 249, 253 Cumulative distribution function (CDF), 250–254, 262 Cybernetics, vii Cytoplasm, 45, 54, 86, 98, 203, 221 Cytoplasmic heat-shock response, 90, 91, 98 DAEs (diÔerential-algebraic equations), 9091 Dalton, 68 DASPK, 91 DASSL, 91 DC gain, 134 Decapentaplegic (Dpp), 54 Decay, 13, 53, 105, 106, 115, 117, 124, 182, 183, 236, 257 Decay rate, 105–106, 110, 115, 116, 257 Degradation, 1, 87, 89, 90, 92–93, 95–97, 105, 205 Degradation rate, 52–53, 93 Degradation tags, 106 Delay, 93, 95, 177, 194, 206, 213, 219, 220, 224 Delay time, 70 Determinant, 12, 56–57, 60 Dickkopf (DKK), 56 Dictyostelium discoideum, ix, 225227, 237, 239, 241 DiÔerential-algebraic equations (DAEs), 90, 91 DiÔerential equations nonlinear, ordinary, viii, 1, 3, 33, 45, 47, 76, 105, 125–126, 146, 174, 190, 192, 213, 218, 228, 236–237, 267, 276, 298 partial, 46, 47 periodic, 232 stochastic, 34, 
3637, 244245 DiÔusion, 2, 37, 4547, 48, 5457, 6466, 218 coe‰cient, 46, 54–55, 57, 64–65, 67, 68, 218 equation, 4748, 6566 approximation, 3637 DiÔusive instability (Turing instability), 5556, 60, 61, 63 Dilution, 105 Dirac delta function, 48 Index Direction field See Vector field Dirichlet’s boundary conditions, 47, 60 Dispersion, 50, 52, 54, 58 Dissociation, 104 constant, 76, 78, 117, 297 rate, 115 Dizzy, 240 DKK (Dickkopf ), 56 DNA, 102, 103, 285 binding to transcription factor, 105 recombinant technology, 103 DnaJ, 87 DnaK, 87, 90, 92–95 Double phosphorylation, 73, 80–83, 84 Dpp (decapentaplegic), 54 Drag coe‰cient, 67 Drosophila melagonaster (fruit fly), 49, 52 Dual, 151, 252 Dublin, 297 Earth movers distance, 251 EÔectors, 48, 49 EGF (epidermal growth factor), 69 Eigenvalues, 10, 12–14, 17, 27, 28, 57, 62, 70, 178, 180, 195–198, 200, 229, 233 Eigenvectors, 12 Einstein, Albert, 67 Einstein-Stokes relationship, 67–68 Elasticity, 164 Energy, 14, 32, 86, 97 Envelope stress response (ESR), 98 Enzyme kinetics, 76–77, 79, 84 Enzymes, 1, 17, 19, 54, 76, 86, 102, 103, 129, 146, 152, 157, 164, 165, 166, 226 Epidermal growth factor (EGF), 69 Equations chemical master, 32–33, 36, 38, 43, 44, 237, 252, 267 diÔerential-algebraic, 9091 nonlinear diÔerential, ordinary diÔerential, viii, 1, 33, 45, 47, 76, 105, 125126, 146, 174, 190, 192, 213, 218, 228, 236– 237, 267, 276, 298 partial diÔerential, 46, 47 periodic diÔerential, 232 stationary covariance, 40 stochastic diÔerential, 34, 3637, 244245 Equilibrium, 34, 914, 1718, 22, 25, 32, 56, 57, 64, 105, 107–111, 132, 138, 191, 195, 235, 274, 280, 288, 293 mathematical, thermal, 245 ERK2, 227, 238 ERKs (extracellular signal-regulated kinases), 69– 70, 227, 238, 308–312 Escherichia coli, 86, 98, 102–103, 178 cell cycle, 255 cell division, 106 339 heat shock response, 86 metabolic map, 149 periplasmic compartment, 98 piliation, 30 Eukaryotic cells, 213 Euler’s formula, 13 Extracellular signal-regulated kinases (ERKs), 69, 227, 238, 308–312 Extrinsic noise, 29, 95 FasL, 178 Fast Fourier transform, 240 Feedback, vii, 20–21, 30, 72, 76, 85, 101, 104, 134, 135, 137, 140–145, 190, 196–199, 202, 203, 208, 209, 212, 214, 221, 223, 225, 230, 246, 270, 293, 305 amplifier, 120 dynamic output, 21 gene regulatory, 185, 221 high gain, 97 insulin signaling, 171–173 negative, 75, 119–121, 141–143, 172, 176, 178, 214, 227 output, 21, 119 positive, 15, 16, 75, 136, 142, 172, 176, 178, 227 reducing sensitivity, viii regulated degradation of s 32 , 87 in response systems, 85–87, 91–93, 95, 99 state, 20, 27 static output, 21 through sequestration, 93–94 unity, 26, 137 Feedforward, 72, 76, 85, 87, 91–93 Feinberg-Horn-Jackson deficiency theory, 126 Fick’s law of diÔusion, 46 Filtering, measurement, 308 Filter, 26, 28 Kalman, 28 low pass, 24–26 noise, 29, 308 FIM (Fisher information matrix), 169–174 Finite State Projection method (FSP), 30, 43–44, 252 Fisher information matrix (FIM), 169–174 Fixed point, Fluorescence correlation spectroscopy, 66–67 Fluorescence recovery after photobleaching, 64–66 Fluorescence techniques, 218 Fluorophore, 64, 65 Flux, 46–47, 50, 53, 131, 146, 149, 151, 154–160, 163, 167, 302 Flux Connectivity Theorem (Connectivity Theorem), 146, 158, 162, 165, 167–168 Flux control coe‰cients, 167 Flux, metabolic pathway, 19 Flux response coe‰cients, 163 Forward Kolmogorov equation, 32 Fourier analysis, 23 340 Fourier transform, 23–24, 240 Fragility, 189–190, 203, 213, 215, 218, 219, 224 French flag model, 49 Frequency domain analysis, 22–26, 178 Frequency 
response, 24 Fruit fly (Drosophila melagonaster), 49, 52 FSP (Finite State Projection method), 30, 43–44, 252 FtsH, 87, 90, 93, 95 G protein, 226 GA (genetic algorithm), 235 Gain, 24, 51 Gardner-Cantor-Collins toggle switch See Switch, toggle Gap metric, 246 Gaussian distribution, 48, 65, 170 Gaussian noise, 286 GDP, 54 Gene expression, 1, 30, 31, 40–41, 75, 86, 102, 106, 122, 203, 248, 255–264, 284 Gene knockout, 287 Gene networks, 31, 44, 85, 90, 94, 98, 141, 221, 257 design of, 106–110 Goodwin oscillator and, 203–210 Gene silencing, 266, 284, 288 Gene, 102 coding region, 102, 114 heat shock, 86–87 mutations, 192, 193 overexpression, 266, 285, 288 regulatory network, 175 Genetic algorithm (GA), 235 GFP See Green fluorescent protein (GFP) Gierer, Alfred, 61 Gillespie’s stochastic simulation algorithm See Stochastic simulation algorithm (SSA) GLUT4, 172 Glycolytic pathway (chain), 15–16, 158 Glycolytic reaction scheme, 158 Goodwin oscillator, 190, 203–213 G6P, 158 Gradient, 46, 48–50, 54, 55, 192, 235, 245, 257, 259 Graph, bipartite, 125–127, 131 Graph-theoretic analysis, 125, 131, viii Green fluorescent protein (GFP), 68, 103, 114, 248 GroEL, 87 GroES, 87 GrpE, 87 GTP, 54 GTPase, 54 Guanylyl cyclase, 226 Hair follicle spacing, 56 Hard excitation, 112 Index Heat shock proteins (HSPs), 86–87, 93, 96 Heat shock response (HSR), 86–99, 191 Heinrich, Reinhart, 145 Heinrich MAPK model (Hg MAPK model), 76– 78, 80–83 Heteroclinic connection, 139 HF MAPK model (Huang and Ferrell MAPK model), 76, 77, 80–83, 213, 218 Hg MAPK model (Heinrich MAPK model), 76– 78, 80–83 Hg model, 77 Hill kinetics, 105, 109, 192, 267, 299, 302, 304 Hirsch’s Generic Convergence Theorem, 140 Hodgkin, Alan, 297 Homeostasis, 85, 98, 192, viii Hopf bifurcation, 17, 111, 112, 195–200, 205–206, 208, 211–212, 216–223 HSPs (heat shock proteins), 86, 87, 93, 96 HSR (heat shock response), 86–99 Huang and Ferrell MAPK model (HF MAPK model), 76, 77, 80–83, 213, 218 Hurwitz matrix, 13, 56, 306 Hurwitz polynomial, 13, 306 Huxley, Andrew, 297 Hybrid optimization techniques, 194, 235, 241 Hydra tentacle formation, 61 Hyperosmotic shock, 86 Hypothalamus, 185 Identifiability, 312–315 of model parameters, 170, 173, 313 practical, 314 practical time-varying, 315 Identification of fragilities, 189 network, 188 parameter, 172, 173, 299–300 system, 177, 275–277, 279–280, 282, 287, 288, 293, 297–315 target, 177, 188 Impedance, 104, 114, 118, 124, 127 Importin-b, 54 Inactivation, 53 Inhibitor, 56, 58, 313 Input selection, 171–173 International Genetically Engineered Machines competition, 143 Instability, 11, 55, 107, 208, 55, 56, 60, 61, 63 Insulation device, 104, 118–124 Insulin, 102, 171–174 Insulin receptors, 172 Intrinsic noise, 29, 95 Ising spin-glass model, 141 j, 13 Jacobian, 8, 14, 22, 38, 56, 58, 61, 64, 70, 136, 155, 195–197, 200, 216 time-varying, 195 Index K MAPK model (Kholodenko MAPK model), 76, 78, 79, 80 Kacser, Henrik, 145 Kalman filters, 28 Kantorovich metric, 251 Kholodenko MAPK model (K model), 76, 78, 79, 80 Kinases, 69–72, 74–76, 78, 120, 122, 129, 178, 182, 184, 185, 213, 227 See also MAPK cascade; MAPK models Kinetic parameters, 78 Kinetics, 299, 308 enzyme, 76–77, 79, 84 Hill, 105, 109, 192, 267, 299, 302, 304 mass action, 2, 31, 32, 33, 90, 128, 129, 267, 299 Michaelis-Menten, 2, 78, 79, 192, 267, 299, 302, 304 Monod, 299 power law, 299 Knockdown, 188 Knockout, 185, 188, 287 Kullback-Leibler divergence, 246 Lac operon, 143 Lambda phage (l-phage), 30, 244 Langevin leaping formula, 37 Laplace transform, 24, 196, 198, 
229–230, 289 Laub-Loomis model, 226, 238 Law of Large numbers, 33 Leap condition, 35–36 Leibler, Stanislas, ix LFT (linear fractional transformation), 181, 229– 231 Lifting, 202, 232–233 Limb development, 55 Limit cycle, 15, 106, 139, 174, 191, 195, 205, 232 Linear fractional transformation (LFT), 181, 229– 231 Linear kinase-phosphatase cascade, 71 Linear time-invariant system (LTI), 21–26, 147, 232–233, 274–276, 279 Linearization, 8–9, 14, 17, 21, 25, 56, 60, 145, 154, 163, 190, 195–197, 199, 202, 232, 274 Lipids, 267 Linear noise approximation (LNA), 37–38 Lon, 87 Low pass filters, 25 LTI (linear time-invariant system), 21–26, 147, 232–233, 274–276, 279 Lyapunov, Aleksandr Mikhailovich, 14 Lyapunov equation, 40, 41 Lyapunov function, 14, 132 Lyapunov indirect method, 14 Lysis-lysogeny decision, 30, 244 Mallows distance, 251 MAPK cascade, 69, 71, 74–84, 120, 189, 143, 213–218, 308 341 MAPK models, 75–84, 216, 221, 224, 312 Heinrich or Hg, 76–78, 80–83 Huang and Ferrell or HF, 76, 77, 80–83, 213, 218 Kholodenko or K, 76, 78, 79, 80 Markov process (chain), 42, 43, 247–249, 253 Markov property, 33, 247 Mass action kinetics, 2, 31–33, 90, 128–129, 267, 299 Mathematica, Matlab, 3, 42, 240 Matrix exponential, Maxwell-Boltzmann distribution, 245 MCA (metabolic control analysis), 145–168 Means, stationary, 40 Measurement filtering, 308 Measurement selection, 171, 173–174 Meinhardt, Hans, 61 MEK, 308, 309, 310 Memoryless input-output map, 20 Mesoscopic scale, 31, 34 Metabolic control analysis (MCA), 145–168 Metabolic cost, 93 Metabolic network, 145 Metabolic pathways, 19, 70, 149 Metabolites, 1, 126, 146, 161, 166, 218–220 Michaelis-Menten kinetics, 2, 78, 79, 192, 267, 299, 302, 304 Microarray, 268 Microtubule, 47 MIMO (multiple-input, multiple-output system), 20 Model approximation, 71, 75, 76, 78, 84 Model comparison, 243–244, 255–257 Model invalidation, 243, 259–264 Model reduction, 244 Modularity, viii, 101, 104–124 Modules, 75, 88–90, 104–106, 112, 117, 124, 126 Moiety, conserved, 149, 159, 213 Moment closure function, 42, 44 Monge-Kantorovich transportation problem, 251 Monod kinetics, 299 Monotonicity, 127, 134–143 Monte Carlo simulation, 34, 44, 228, 240, 241 Morgan, Thomas Hunt, 48 Morphogen, 48–55 Morphogenesis, 55 mRNA, 40–41, 87, 96, 105–112, 115, 126, 176, 203, 206, 221, 222, 248, 257, 284–286 Bmall, 176 Per, 175, 186 rpoH, 87, 88 VPAC2, 186 m-analysis, 228, 234, 236, 241 Nerve growth factor (NGF), 69 Networks conservative, 132 gene regulatory, 175 342 Networks (cont.) 
identification, 188 metabolic, 145 stoichiometry, viii subnetwork, 223, 224, 274 topology, 148 transcriptional, 102, 104, 118 Neurons, 185–186, 297 Neutrophil model, 218–220 Neutrophils, 218 Neumann’s boundary conditions, 47, 60 NGF (nerve growth factor), 69 Noble, Denis, 297 Noise filters, 29, 308 NP-hard problem, 180 Nuclear envelope, 46 Nucleic acid, 267 Nullcline, 7, 136 Nyquist plot, 25–26, 178–179, 206–207 Nyquist stability, 178–179, 180, 199 Nyquist theorem, 199 Observability, 26–28, 142, 305–308, 311, 313 algebraic test, 27 Observer, 28, 297–298, 300–308, 311–313 based design, ix canonical form, 305 definition of, 28 design, 302, 304 high-gain, 306 nonlinear, 301, 305 ODEs See Ordinary diÔerential equations o-limit set, 132 Oncogenes, 75 OPAMP (operational amplifier), 103, 118, 124 Orbit, 112, 174 existence, 139 limit cycle, 177 periodic, 107, 112, 139140, 143 Ordinary diÔerential equations (ODEs), viii, 1, 33, 45, 47, 76, 105, 125–126, 146, 174, 190, 192, 213, 218, 228, 236–237, 267, 276, 298 Orthant cone as, 140 positive, 132, 137 Orthant-monotone systems, 127, 140 Oscillations, ix, 14–18, 75, 106–109, 111, 113, 114, 116, 139, 140, 174–188, 218, 225–242 amplitude, 24 circadian (see Circadian rhythm) damped, 15, 112 metabolic, 189, 218 phase, 24 robustness, 30, 108, ix spatial, 57 Oscillators, 102, 106, 108, 115, 118 circadian, 175, 185, 189 Goodwin, 190, 203–213 Index phase sensitivity, 174–177 relaxation, 30, 108, 112, 139 synthetic, 30, 101, 104 Osmolarity, 85 Overactuation, 147, 152–153 P-semiflow, 131–133 Parameter estimation, 243–244, 257–259, 297–315 Parametric impulse phase response curve (pIPRC), 174–177 Partial order, 139 Passivity, 126 Pathway, uncertain, 79 PCR (polymerase chain reaction), 103 PDE See Equations, partial diÔerential Per1, 176 Per2, 176 Per/Cry, 176 per gene, 221–223 Performance filter, 183 Periplasm, 98 Petri net, 125–127, 131–134 conservative, 132–134 species-reaction or SR, 131 Phage, l, 30, 244 Phagosome, 218, 219–220 Phase, 185, 186 advance, 186 delay, 186 sensitivity, 188 shift, 24 Phase advance, 186 Phase delay, 186 Phase plane, 4–7, 9, 137–138 Phase response curve (PRC), 176–177, 186–187 Phosphatase, 69, 70–72, 75, 77–79, 120–122, 129, 182 Phosphodiesterase, 227 Phosphorylation, 182, 184 pIPRC (parametric impulse phase response curve), 174–177 PKA (protein kinase A), 227, 238 Plant, 20, 88, 89, 293 Plasma membrane, 46 Plasmids, 103 Poincare´-Bendixson Theorem, 15 Poisson distribution, 42 Poisson process, 53 Polarity, 45, 48 Polymerase chain reaction (PCR), 103 Pomacanthus semicirculatus (angel fish), 61 Power law kinetics, 299 PRC (phase response curve), 176, 177, 186, 187 Probability distribution, bimodal, 30 Production, 50, 58, 59, 87, 91, 93, 102, 105, 117, 124, 149, 165, 227, 238 Promoter, 87, 102, 103, 104, 106, 109, 114, 115, 117, 121, 122, 255 Index Propensity, 32, 35, 39, 41, 44, 237, 247–248 Protein folding, 87, 89, 90, 91, 98 delayed, 91, 92 Protein kinase A (PKA), 227, 238 Protein substrate, 129 Proteins, 1, 30, 40–41, 45, 54, 68, 69, 71–79, 86, 96, 102, 105, 106, 109–112, 115–117, 119, 121, 126, 128, 129, 176, 203, 221, 226, 248, 250, 257, 260–261, 265–267, 285–286, 308 activation, 76, 78 activation state, 76 adaptor, 76 denatured, 87 half life, 116 mass, 68 misfolded, 98 phosphorylated, 77, 83, 121 synthesis, 98, 269 synthesis rate, 96 unfolded, 87, 91, 92, 93, 95, 98 Pseudometric, 249 Quasi-steady-state, 77, 84 QSSRP (quasi-steady state reduction principle), 127, 134–143 theorems, 142–143 Quantitative measure, 72 Raf, 308, 309, 311 
Ran, 54 RanGTP, 54 RanBP1, 54 Random variable reporter, 250, 255, 261, 263 RanGAP, 54 RanGDP, 54 RCC1, 54 Reaction chain, 162, 270, 273, 274 complex, 269 linear, 162 unbranched, 165, 167 Reaction-diÔusion, 49, 50, 64 Realization, 211, 275–277 minimal, 147 nonminimal, 147, 150 Receptors, 70, 71, 74, 77, 125, 134, 182, 185, 186, 239 cAMP (see CAR1) deactivation, 77, 182 growth-factor, 76 Reconstruction, 275–277 Redundancy, 147–148 column, 150–152, 159 input, 147–148 row, 149–150, 159 state, 147 RegA, 227, 238 Relaxation time, 70 Repressilator, 30, 102, 103, 106–109, 112 343 Repressor, 102, 103, 105, 106, 108–110 Response coe‰cient, 163 Restriction enzymes, 102 Retroactivity, viii, 101, 104, 112, 114–124, 127 to the input, 116, 118, 119, 123, 124 modeling, 114 to the output, 115, 116, 117–119, 121, 123–124 quantification, 117 Reynolds number, 67 Rhythm, 112, 185, 186 Ribosome, 87, 255 Rise time, 70 RMS (root mean square), 251 RNA interference (RNAi), 284 RNA polymerase (RNAP), 29, 86, 94, 102, 255, 285 Robin’s boundary conditions, 47 Robust performance (RP), 86, 177–182, 191–193 Robust stability, 181, 191 Robust stability (RS), 178–181, 191 Robustness, viii, 13, 17, 19, 93–94, 97, 107–108, 118, 169, 177, 189–242, 282 definitions, 177, 190–193 heat-shock response, 91–96 noise enhanced, 30 oscillations, ix oscillatory, 225–242 stochastic, 236–241 Root mean square (RMS), 252 RP (robust performance), 86, 177–182, 192 RS (robust stability), 178–181, 191 Saddle node bifurcation, 195–197, 215, 216, 218 Savageau, Michael, 145 Schroădinger, Erwin, 297 SCN (suprachiasmatic nucleus), 185 Sea urchin, 49 Self-degradation, 198, 227 Self-dynamics, 198, 205 Self-repression, 103 Sensing, 89, 90, 93, 97 Sensitivity analysis, viii, 91, 94, 146, 165, 169, 188 frequency, 124 parametric, 145–146, 154, 162 Sensitivity invariants, 165 Sensors, 87, 89–91, 96 Separation of variables, 48 Separation principle, 143, 157, 168 Separatrix, 11 Sequestration, 87, 93, 94, 97 Settling time, 70 s 32 , 86–97 activity regulation, 87 degradation, 88, 96 mRNA, 89 sequestration, 87 stability, 95 stabilization, 87 synthesis, 87, 91 344 s E , 98 Signal amplitude, 73, 183 Signal duration, 70, 71–72, 73, 80, 83, 84, 183 Signal strength, 71–72, 73 Signaling time, 71–72, 80, 83, 84, 183 Single-input, single output systems (SISO), 20 Singular values, 12–13, 201, 231, 312 Sinusoids, 23, 106, 112, 176, 177, 206, 230 Siphon, 132–133 siPRC (state impulse phase response curve), 174– 175 SISO (single-input, single output systems), 20 Skewed-m, 181, 185 Slowly-varying functions, 23 Sonic hedgehog, 54–55 Species-reaction Petri net (SR Petri net), 131 SSA (stochastic simulation algorithm), 34–35, 91, 95, 237, 239–240, 253, 255, 257 Stability, 10–14 asymptotic, 10, 13, 155, 163 definition, 10 eigenvalue test, 13 global asymptotic, 13–14, 134 local, 13–14 neutral, 10 nonlinear systems, 13 robust, 191 State impulse phase response curve (sIPRC), 174– 175 Stationary behavior, 191, 195 Steady state See Equilibrium Stochastic disturbances, 28 Stochastic focusing, 31, 38 Stochastic gradient, approximate, 257–259 Stochastic models, ix, viii, 2, 29–44, 264, 267 Stochastic processes, 264 Stochastic simulation algorithm (SSA), 34–35, 91, 95, 237, 239–240, 253, 255, 257 Stoichiometry, viii, 146, 297–298, 301–302, 308 Stoichiometry coe‰cients, 128 Stoichiometry matrix, 32, 40, 126, 128, 130, 145, 148–153, 158, 167–168, 273, 299, 301 Stress response, 85–86, 98–99 Structured singular value (SSV), 169, 178–185, 188, 201, 214, 228, 231 Subnetwork, 223, 224, 274 
Substrate-depletion system, 59, 63–64 Summation Theorem, 146, 158, 162, 165–168 Suprachiasmatic nucleus (SCN), 185 Switch biochemical, 75 bistable, 191 genetic, 30 irreversible, 191 stochastic, 30 toggle, 30, 102–103 translational, 91, 93 Switching probabilistic scheme, 235 Index Synchronization, 113, 185–187, 241 Synchronized neurons, 185–186 Synthetic biology, 101–104, 114, 124, 143 System fragile, 96–97, 215 input-output, 19–22 linear, discrete-time, 232, 233 linear, time-invariant, 21–26, 147, 232–233, 274– 276, 279 multiple-input, multiple-output, 20 single-input, single output, 20 T-semiflow, 132 Tau-leaping, 35–36, 240 Taylor series, Temperature sensing, 93, 97 Thermal energy, 32 Thermodynamic limit, 34 Thiele modulus, 50–52 Time-evolution of probability, 32 Time-scale separation, 104, 111, 112, 117, 135 Time-series data, 266, 276, 285, 287, 297, 298, vii plot, Token-passing systems, 132 Topology circuit, 112 network, 148 ring, 102 Trace, 12, 56–57, 63 Trajectory, 4, 138, 249 Transcription, 87, 89, 94, 96, 102, 104–105, 109– 110, 115, 119, 124, 176, 213, 259–261, 285, 286, 308 Transcription factor, 1, 49, 75, 102, 104–106, 117, 182, 185, 203, 204, 206–209, 211–213, 285–287 Transcriptional circuit, 101, 103, 104, 117 Transcriptional module, 105, 106, 108, 112, 115 Transcriptional network, 102, 104, 118 Transcriptional regulation, 102 Transfer function, 24, 179, 202, 274–283, 285, 288–292, 294 Translation rate, time-varying, 261 Transport, 45–47, 185, 189, 192, 193, 203, 221 Transportation metrics, 251 Transversality, 142 Turbo-charged, 14 Turing, Alan, 55, 61 Turing instability (diÔusive instability), 5556, 60, 61, 63 Turing pattern, 55–61 Turing theory, 55 Turnover rate, 153 Ultrasensitivity, 75, 214 Uncertainty, viii, 177–180, 183, 184 block, 179 environmental, 177 Index feedback, 179 matrix, 181, 231, 233 model, 192, 193 model structure, 192 parameter, 178, 180, 183, 185, 189, 193, 224, 229–231, 233 –235, 240, 299 parametric, 93 structural, 189, 193, 224 system, 177, 179, 180, 181 weight, 181 Universal method, 146 Van Kampen’s linear noise approximation (LNA), 37–38 Vector field, 5–7, 12 VIP, 185–187 VIP receptor (VPAC2), 185–187 Viscosity, 67–68 Viscous flow, 67 VPAC2 (VIP receptor), 185–187 Wasserstein pseudometric, 243, 246, 249–264 Wave numbers, 60, 62 Wavelength, 57 Well-mixed assumption, Wiener, Norbert, vii Wing size determination, 52 WNT, 55–56 Wolpert, Lewis, 49 XPP, 172 Yeast genome, 177 Zero-order hold, 233 345 ... metabolic control analysis theorems (8 .22 ) and (8 .23 ) describe the result of postmultiplying the control coe‰cients with specific matrices Referring to the partitioned-response equation (8 .20 ), these... 9 .2 Parameter identification from optimized measurement selections X 1:96spj =pj Np j State measurement Parameters x2 ; x3 ; ; x21 21 x15 ; x17 ; x19 ; x20 ; x21 21 11.7% x15 ; x17 ; x19 ; x20... (Cornish-Bowden and Cardenas, 1999; Kholodenko and WesterhoÔ, 20 04; Stephanopoulos et al., 1998) and has been used in rational drug design (Cascante et al., 20 02; Cornish-Bowden and Ca´rdenas,

Posted: 22/01/2020, 02:31
