DATA HANDLING IN SCIENCE AND TECHNOLOGY - VOLUME 4
Advanced Scientific Computing in BASIC with Applications in Chemistry, Biology and Pharmacology


DATA HANDLING IN SCIENCE AND TECHNOLOGY - VOLUME 4

Advisory Editors: B.G.M. Vandeginste, O.M. Kvalheim and L. Kaufman

Volumes in this series:
Volume 1  Microprocessor Programming and Applications for Scientists and Engineers by R.R. Smardzewski
Volume 2  Chemometrics: A Textbook by D.L. Massart, B.G.M. Vandeginste, S.N. Deming, Y. Michotte and L. Kaufman
Volume 3  Experimental Design: A Chemometric Approach by S.N. Deming and S.N. Morgan
Volume 4  Advanced Scientific Computing in BASIC with Applications in Chemistry, Biology and Pharmacology by P. Valkó and S. Vajda

Advanced scientific computing in BASIC with applications in chemistry, biology and pharmacology

P. VALKÓ
Eötvös Loránd University, Budapest, Hungary

S. VAJDA
Mount Sinai School of Medicine, New York, NY, U.S.A.

ELSEVIER
Amsterdam - Oxford - New York - Tokyo  1989

ELSEVIER SCIENCE PUBLISHERS B.V.
Sara Burgerhartstraat, P.O. Box 211, 1000 AE Amsterdam, The Netherlands

Distributors for the United States and Canada:
ELSEVIER SCIENCE PUBLISHING COMPANY INC.
655 Avenue of the Americas, New York, NY 10010, U.S.A.

ISBN 0-444-87270-1 (Vol. 4)  (software supplement 0-444-87217-X)
ISBN 0-444-42408-3 (Series)

© Elsevier Science Publishers B.V., 1989

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the publisher, Elsevier Science Publishers B.V. / Physical Sciences & Engineering Division, P.O. Box 330, 1000 AH Amsterdam, The Netherlands.

Special regulations for readers in the USA: This publication has been registered with the Copyright Clearance Center Inc. (CCC), Salem, Massachusetts. Information can be obtained from the CCC about conditions under which photocopies of parts of this publication may be made in the USA. All other copyright questions, including photocopying outside of the USA, should be referred to the publisher.

No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Although all advertising material is expected to conform to ethical (medical) standards, inclusion in this publication does not constitute a guarantee or endorsement of the quality or value of such product or of the claims made of it by its manufacturer.

Printed in The Netherlands

CONTENTS

1 COMPUTATIONAL LINEAR ALGEBRA
1.1 Basic concepts and methods
1.1.1 Linear vector spaces
1.1.2 Vector coordinates in a new basis
1.1.3 Solution of matrix equations by Gauss-Jordan elimination
1.1.4 Matrix inversion by Gauss-Jordan elimination
1.2 Linear programming
1.2.1 Simplex method for normal form
1.2.2 Reducing general problems to normal form. The two phase simplex method
1.3 LU decomposition
1.3.1 Gaussian elimination
1.3.2 Performing the LU decomposition
1.3.3 Solution of matrix equations
1.3.4 Matrix inversion
1.4 Inversion of symmetric, positive definite matrices
1.5 Tridiagonal systems of equations
1.6 Eigenvalues and eigenvectors of a symmetric matrix
1.7 Accuracy in algebraic computations. Ill-conditioned problems
1.8 Applications and further problems
1.8.1 Stoichiometry of chemically reacting species
1.8.2 Fitting a line by the method of least absolute deviations
1.8.3 Fitting a line by the minimax method
1.8.4 Analysis of spectroscopic data for mixtures with unknown background absorption
1.8.5 Canonical form of a quadratic response function
1.8.6 Euclidean norm and condition number of a square matrix
1.8.7 Linear dependence in data
1.8.8 Principal component and factor analysis
References

2 NONLINEAR EQUATIONS AND EXTREMUM PROBLEMS
2.1 Nonlinear equations in one variable
2.1.1 Cardano method for cubic equations
2.1.2 Bisection
2.1.3 False position method
2.1.4 Secant method
2.1.5 Newton-Raphson method
2.1.6 Successive approximation
2.2 Minimum of functions in one dimension
2.2.1 Golden section search
2.2.2 Parabolic interpolation
2.3 Systems of nonlinear equations
2.3.1 Wegstein method
2.3.2 Newton-Raphson method in multidimensions
2.3.3 Broyden method
2.4 Minimization in multidimensions
2.4.1 Simplex method of Nelder and Mead
2.4.2 Davidon-Fletcher-Powell method
2.5 Applications and further problems
2.5.1 Analytic solution of the Michaelis-Menten kinetic equation
2.5.2 Solution equilibria
2.5.3 Liquid-liquid equilibrium calculation
2.5.4 Minimization subject to linear equality constraints: chemical equilibrium composition in gas mixtures
References

3 PARAMETER ESTIMATION
3.1 Fitting a straight line by weighted linear regression
3.2 Multivariable linear regression
3.3 Nonlinear least squares
3.4 Linearization, weighting and reparameterization
3.5 Ill-conditioned estimation problems
3.5.1 Ridge regression
3.5.2 Overparametrized nonlinear models
3.6 Multiresponse estimation
3.7 Equilibrating balance equations
3.8 Fitting error-in-variables models
3.9 Fitting orthogonal polynomials
3.10 Applications and further problems
3.10.1 On different criteria for fitting a straight line
3.10.2 Design of experiments for parameter estimation
3.10.3 Selecting the order in a family of homologous models
3.10.4 Error-in-variables estimation of van Laar parameters from vapor-liquid equilibrium data
References

4 SIGNAL PROCESSING
4.1 Classical methods
4.1.1 Interpolation
4.1.2 Smoothing
4.1.3 Differentiation
4.1.4 Integration
4.2 Spline functions in signal processing
4.2.1 Interpolating splines
4.2.2 Smoothing splines
4.3 Fourier transform spectral methods
4.3.1 Continuous Fourier transformation
4.3.2 Discrete Fourier transformation
4.3.3 Application of Fourier transform techniques
4.4 Applications and further problems
4.4.1 Heuristic methods of local interpolation
4.4.2 Processing of spectroscopic data
References

5 DYNAMICAL MODELS
5.1 Numerical solution of ordinary differential equations
5.1.1 Runge-Kutta methods
5.1.2 Multistep methods
5.1.3 Adaptive step size control
5.2 Stiff differential equations
5.3 Sensitivity analysis
5.4 Quasi steady state approximation
5.5 Estimation of parameters in differential equations
5.6 Identification of linear systems
5.7 Determining the input of a linear system by numerical deconvolution
5.8 Applications and further problems
5.8.1 Principal component analysis of kinetic models
5.8.2 Identification of a linear compartmental model
References

SUBJECT INDEX

INTRODUCTION

This book is a practical introduction to scientific computing and offers BASIC subroutines, suitable for use on a personal computer, for solving a number of important problems in the areas of chemistry, biology and pharmacology. Although our text is advanced in its category, we assume only that you have the normal mathematical preparation associated with an undergraduate degree in science, and that you have some familiarity with the BASIC programming language. We obviously do not try to persuade you to perform quantum chemistry or molecular dynamics calculations on a PC; these topics are not even considered here. There are, however, important information handling needs that can be served very effectively. A PC can be used to model many experiments and provide information on what should be expected as a result. In the observation and analysis stages of an experiment it can acquire raw data and, by exploring various assumptions, aid the detailed analysis that turns raw data into timely information. The information gained from the data can be easily manipulated,
correlated and stored for further use. Thus the PC has the potential to be the major tool used to design and perform experiments, capture results, analyse data and organize information.

Why do we use BASIC? Although we disagree with strong proponents of one or another programming language who challenge the use of anything else on either technical or purely emotional grounds, most BASIC dialects certainly have limitations. First, owing to the lack of local variables it is not easy to write multilevel, highly segmented programs. For example, in FORTRAN you can use subroutines as "black boxes" that perform some operations in a largely unknown way, whereas programming in BASIC requires you to open these black boxes up to a certain degree. We do not think, however, that this is a disadvantage for the purpose of a book supposed to teach you numerical methods. Second, BASIC is an interpretive language, not very efficient for programs that do a large amount of "number crunching" or programs that are to be run many times. But most of the loss of execution speed is compensated by the interpreter's ability to let you interactively enter a program, immediately execute it and see the results without stopping to compile and link the program. There exists no more convenient language for understanding how a numerical method works. BASIC is also superb for writing relatively small, quickly needed programs of less than 1000 program lines with a minimum of programming effort. Errors can be found and corrected in seconds rather than in hours, and the machine can be immediately quizzed for a further explanation of questionable answers or for exploring further aspects of the problem. In addition, once the program runs properly, you can use a BASIC compiler to make it run faster. It is also important that on most PCs BASIC is usually very powerful for using all resources, including graphics, color, sound and communication devices, although such aspects will not be discussed in this book.

Why do we claim that our text is advanced? We believe that the methods and programs presented here can handle a number of realistic problems with the power and sophistication needed by professionals and with simple, step-by-step introductions for students and beginners. In spite of their broad range of applicability, the subroutines are simple enough to be completely understood and controlled, thereby giving more confidence in results than software packages with unknown source code.

Why do we call our subject scientific computing? First, we assume that you, the reader, have particular problems to solve, and we do not want to teach you chemistry or biology. The basic task we consider is extracting useful information from measurements via modelling, simulation and data evaluation, and the methods you need are very similar whatever your particular application is. More specific examples are included only in the last sections of each chapter to show the power of some methods in special situations and to promote a critical approach leading to further investigation. Second, this book is not a course in numerical analysis, and we disregard a number of traditional topics such as function approximation, special functions and numerical integration of known functions. These are discussed in many excellent books, frequently with BASIC subroutines included. You will find here, however, efficient and robust numerical methods that are well established in important scientific applications. For each class of problems we give an introduction to the relevant theory and techniques that should enable you to recognize and use the appropriate methods. Simple test examples are chosen for illustration. Although these examples naturally have a numerical bias, the dominant theme in this book is that numerical methods are no substitute for poor analysis. Therefore, we give due consideration to problem formulation and exploit every opportunity to emphasize that this step not only facilitates your calculations, but may help you to
avoid questionable results. There is nothing more alien to scientific computing than the use of highly sophisticated numerical techniques for solving very difficult problems that have been made so difficult only by the lack of insight when casting the original problem into mathematical form.

What is in this book? It consists of five chapters. The purpose of the preparatory Chapter 1 is twofold. First, it gives a practical introduction to basic concepts of linear algebra, enabling you to understand the beauty of a linear world. A few pages will lead to comprehending the details of the two phase simplex method of linear programming. Second, you will learn efficient numerical procedures for solving simultaneous linear equations, inverting matrices and performing eigenanalysis. The corresponding subroutines are extensively used.

The deconvolution method we propose here is also parametric and is based on direct integral parameter estimation (ref. 27). We consider a "hypothetical" linear system S* whose input function u* is the known weighting function h of the real system S, and whose response function y* is the known response y of S. Since y* = y and u* = h, comparison of equations (5.66) and (5.71) shows that the weighting function h* of S* equals the input function u which is being sought. Thus h* can be estimated by identifying the weighting function of a linear model of the form (5.65), as described in the previous section. The same program can be used for input determination if the role of the variables is properly understood.

Example 5.7 Determining the absorption curve for a given response function

We continue solving the test example of Cutler (see refs. 20 and 21). In Example 5.6 we identified the weighting function of the system. Now we consider the second half of the data set generated by Cutler, shown in Table 5.4. The "true" input u(t) = 1.2exp(-2t) and the "true" weighting function
were used by Cutler to generate the "true" response; then 1% random error was added to obtain the "observed" response (i.e., the observed drug concentration in the plasma). Our goal is to find the input (i.e., the absorption curve), making use of the weighting function identified in the previous example and the "observed" response.

Table 5.4
Data to determine the absorption curve (entries marked ? are not clearly legible in this copy)

Time, t   "True" input   "True" response   "Observed" response (1% error)
0.0       1.2            -                 -
0.1       0.9825         0.180(?)          0.181
0.2       0.8044         0.293             0.291
0.3       0.6586         0.360(?)          0.361
0.4       0.5392         0.394             0.388
0.6       0.3614         ?                 0.399
0.8       0.2423         0.368             0.372
1.0       0.1624         0.327             0.328
1.2       0.1089         0.288             0.286
1.4       0.0733         0.252             0.249
1.6       0.0489         0.211             0.210
2.0       0.0220(?)      0.155             0.153

When identifying the hypothetical system S*, the weighting function found in Example 5.6 is substituted for the input of the hypothetical system. This input does not contain an impulse or a unit step component, and hence we set M3 = 0 and US = 0. The response of the hypothetical system equals the "observed" response. The program is the one used in Example 5.6, only the data lines are changed accordingly. The assumed model order is ND = … . We list here only the essential parts of the output:

[output listing not legible in this copy]
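The subroutines of the book are written in BASIC; purely to illustrate the point above (that once the convolution integral is discretized, system identification and input determination reduce to the same linear least squares problem with the roles of the two functions swapped), here is a minimal Python sketch. It uses a naive discrete quadrature of the convolution integral rather than the book's direct integral method, and the function and variable names, as well as the one-exponential weighting function, are our own assumptions.

```python
import numpy as np

def deconvolve(h, y, dt):
    """Recover the input u from the response y(t) = int_0^t h(t-s) u(s) ds,
    given samples of the weighting function h on a uniform grid of step dt.
    The integral is discretized as y = dt * T @ u with a lower triangular
    Toeplitz matrix T, and u is recovered by linear least squares."""
    n = len(h)
    T = np.zeros((n, n))
    for i in range(n):
        T[i, :i + 1] = h[i::-1]          # T[i, j] = h[i - j]
    return np.linalg.lstsq(dt * T, y, rcond=None)[0]

# Synthetic check in the spirit of Example 5.7: the "true" input is
# u(t) = 1.2 exp(-2t); the weighting function below is an arbitrary
# assumption of ours, not the one identified in the book.
dt = 0.01
t = np.arange(0.0, 2.0, dt)
u_true = 1.2 * np.exp(-2.0 * t)
h = np.exp(-0.5 * t)
y = dt * np.convolve(h, u_true)[:len(t)]   # simulated response samples
u_est = deconvolve(h, y, dt)
print("max abs error:", float(np.max(np.abs(u_est - u_true))))
```

Because the discretized relation is symmetric in h and u, calling deconvolve(u_true, y, dt) instead returns the weighting function; this is exactly the role swap that lets the same program perform both identification and input determination.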
The "weighting function" we found is that of the hypothetical system; therefore it is the absorption curve we were looking for. It is useful to compare it with the "true" input given in Table 5.4. In this special case the input function found and the "true" input are of the same analytical form, so we can compare the parameters of the two functions as well. In realistic applications, however, we are not interested in the "analytical form" of the input function; rather, the table of computed values is of primary interest.

The direct integral approach to numerical deconvolution preserves the symmetry of system identification and input determination, similarly to the point-area method. By (5.71) the input function u = h* is restricted to the class of weighting functions generated by a single-input, single-output, time invariant system (5.65). This class includes polyexponentials, polynomials and trigonometric functions, so the constraint on the form of the input is relatively mild. This constraint may in fact have a physical meaning in pharmacokinetics. For example, in the problem studied in Example 5.7 the hypothetical system S* may be a real linear system whose response is the bioavailability of the drug following an impulse administration via an extravascular route.

Exercise

Repeat the input identification experiment with the model order MD = … . Compare the linear regression residual errors for the two cases. Select the "best" model order on the basis of the Akaike Information Criterion (see Section 3.10.3 and ref. 27).

5.8 APPLICATIONS AND FURTHER PROBLEMS

5.8.1 Principal component analysis of kinetic models

The researcher usually looks for a model that not only fits the data well, but also describes the mechanism of action of the chemical or biological process. Such detailed models are, however, frequently overparameterized with respect to the available data, leading to ill-conditioned problems of parameter estimation. In Section 3.5.2 you have learned that
principal component analysis of the normalized cross-product matrix J^T(p) W J(p) is a standard method of detecting ill-conditioned parameter estimation problems. In Section 5.3 we introduced the matrix S of normalized sensitivity coefficients. It plays the same role for dynamical models as J(p) does in algebraic parameter estimation problems. Therefore, principal component analysis of S^T S (or of S^T W S, if weighting is necessary) offers a convenient tool for extracting information from the sensitivity coefficients, and it reveals whether or not there is any hope of identifying the parameters of the model. Although we need initial parameter estimates to perform the calculation, such estimates are usually available in the literature, at least in the form of some order of magnitude guesses. In this section we reconsider the sensitivity coefficients obtained in Example 5.3.

Example 5.8.1 Practical identifiability of the parameters of the microbial growth process

As shown by Holmberg (ref. 3), the four parameters Vm, Ks, Kd and Y are theoretically identifiable if both the concentration of the microorganism (y1) and that of the substrate (y2) are observed. Practical identifiability of the parameters is, however, a much more difficult issue. In the following, four cases are investigated:

(i) Both concentrations, y1 and y2, are observed. The error variance is small: σ² = 0.01.
(ii) Both y1 and y2 are observed. The error variance is large: σ² = 1.
(iii) Only the substrate, y2, is observed. The error variance is σ²
= 0.01.
(iv) Only y2 is observed. The error variance is large: σ² = 1.

To investigate cases (i) and (ii), the matrix S obtained in Example 5.3 is used directly. Forming S^T S and applying eigenvalue-eigenvector decomposition (by the module M18), we obtain the results shown in Table 5.5.

Table 5.5
Principal component analysis of the normalized sensitivity matrix; both concentrations observed

Eigenvalue   Eigenvector components corresponding to
             Vm        Ks        Kd        Y
69429        0.957     0.230     0.042     0.172
12304       -0.095    -0.137     0.846     0.37
2.583       -0.134     0.020    -0.518     0.845
1.724       -0.239     0.963     0.121     0.013

In case (i), 100σ² = 1, and hence the problem is not ill-conditioned: all the parameters can be identified. Unfortunately, we can hardly hope for such a small error variance in biotechnical applications. In the more realistic case (ii), 100σ² = 100, and thus two eigenvalues are below the threshold. As discussed in Section 3.5, the eigenvectors corresponding to the small eigenvalues show that there is no hope of identifying the parameters Ks and Kd with reasonable accuracy.

To investigate cases (iii) and (iv), we include only every second row of the matrix S obtained in Example 5.3 when forming S^T S. Applying eigenvalue-eigenvector decomposition again, the results shown in Table 5.6 are obtained.

Table 5.6
Principal component analysis of the normalized sensitivity matrix; only the substrate y2 is observed (entries marked ? are not clearly legible in this copy)

Eigenvalue   Eigenvector components corresponding to
             Vm        Ks        Kd        Y
51599        0.912     0.334     0.212     0.105
19.225      -0.137    -0.225     0.964    -0.041
0.409       -0.081    -0.097     ?         0.991
?           -0.378     ?         0.162     0.057

As seen from the table, in case (iii) we can identify Vm and Y, but neither Ks nor Kd can be estimated. In the (unfortunately) more realistic case (iv) one can hope for a reasonable parameter estimate only for Vm. It is then advantageous to fix all the other parameters at some nominal value, thereby avoiding the inherent difficulties of the parameter estimation process.

Practical identifiability is not the only problem that can be addressed by principal component
analysis of the sensitivity matrix: in (refs. 29-30) several examples of model reduction based on this technique are discussed. Computing the sensitivities is time consuming. Fortunately, the direct integral approximation of the sensitivity matrix and its principal component analysis can offer almost the same information whenever the direct integral method of parameter estimation applies.

5.8.2 Identification of a linear compartmental model

Assuming that a small dose of drug does not move the organism far from its equilibrium state, linear differential equations are frequently used to describe the kinetics of drug distribution among different organs and its elimination from the body. Since they give some insight into the mechanism of action, linear compartmental models are particularly important and more popular than models of the form (5.65). In Example 2.2.1 a very simple compartmental model was used to describe the concentration of a certain drug in blood. Jennrich and Bright (ref. 31) estimated the parameters of the linear compartmental model shown in Fig. 5.7 from the data of Table 5.7.

Table 5.7
Sulphate kinetics data (entries marked ? are not clearly legible in this copy)

Time, ti   Activity, yi
?          200000
?          151117
?          113601
?          97652
?          90935
10         88042
15         74991
20         73234
25         75309
30         60974
?          64313
50         61554
60         59094
60(?)      57689
90         54600(?)
110        53915
132        50930(?)
150        40717
160        45996
170        49846
160(?)     46730
?          46826

The experiment consists of applying an intravenous bolus of sulphate traced by a radioactive isotope and measuring the activity of blood samples. The compartmental model in Fig. 5.7 leads to the differential equations (5.72).

Fig. 5.7. Compartmental model of sulphate distribution kinetics. [figure and equations (5.72) not reproduced in this copy]

In this model x1 is the activity in Compartment 1, representing the blood plasma volume; x2 and x3 are unobserved activities; and k1, k2, ..., k5 are the rate constants to be determined. The initial values x10 = 2×10^5, x20 = x30 = 0 are assumed to be known exactly. The only observed variable is y = x1. Jennrich and Bright (ref. 31) used the indirect approach to parameter
estimation and solved the equations (5.72) numerically in each iteration of a Gauss-Newton type procedure, exploiting the linearity of (5.72) only in the sensitivity calculation. They used relative weighting. Although a similar procedure is too time consuming on most personal computers, this does not mean that we are unable to solve the problem. In fact, linear differential equations can be solved by analytical methods, and the solutions of the most important linear compartmental models are listed in pharmacokinetics textbooks (see e.g., ref. 33). For the three compartment model of Fig. 5.7 the solution is of the form

y = A1 exp(λ1 t) + A2 exp(λ2 t) + A3 exp(λ3 t),    (5.73)

where the parameters A1, A2, A3, λ1, λ2 and λ3 are given as functions of the rate constants k1, k2, ..., k5 and the initial conditions. In addition, evaluating (5.73) at t = 0 shows that

A1 + A2 + A3 = x10,    (5.74)

thereby eliminating one of the parameters of (5.73).

Now we can proceed in two different ways: either by estimating the parameters k1, k2, ..., k5 directly, using the analytical solution and the module M45, or by first estimating the parameters in (5.73). In this latter case we can use the very simple peeling method, also known as the method of residuals. Although the peeling procedure is of approximate character and does not take into account the available constraints such as (5.74), it still gives useful initial estimates for the least squares method.

The peeling method is based on the observation that for compartmental models the exponents λi in solutions of the form (5.73) are negative. In addition, the exponents are not close to each other, since otherwise we would be unable to separate the terms of (5.73) and would have to lump several compartments. Assume that the inequalities λ1 < λ2 < λ3 < 0 hold; then the peeling consists of the following steps:

(i) Divide the time interval into three subintervals, containing n1, n2 and n3 sample points, respectively, where n1 + n2 + n3 = n, the total number of sample points.

(ii) Since λ1 and λ2 are smaller than λ3, we may assume that in the last subinterval the contribution from the first two exponentials is small. Therefore

log yi ≈ log A3 + λ3 ti,   i = n1 + n2 + 1, ..., n,    (5.75)

and A3 and λ3 can be found by fitting a straight line to the last n3 points of the data.

(iii) In the second subinterval only the first term of (5.73) is assumed to be small, but A3 exp(λ3 ti) is already known from (ii). Thus again a straight line is fitted, now to the data

log[yi - A3 exp(λ3 ti)] ≈ log A2 + λ2 ti,   i = n1 + 1, ..., n1 + n2,    (5.76)

thereby estimating A2 and λ2.

(iv) Finally, a straight line is fitted to the first n1 points of the data in order to estimate A1 and λ1.

The critical point in the peeling technique is the right choice of n3 and n2. By (5.75) the logarithmized observations are close to a straight line in the last subinterval, and hence a semi-logarithmic plot of the data helps to find the value of n3. A similar plot of the corrected and logarithmized values log[yi - A3 exp(λ3 ti)] may help to choose n2.

For the data of Table 5.7 we select n1 = 6, n2 = … and n3 = … . Since relative error is assumed in the original data, unit weights are used when fitting the logarithmized data (see Section 3.4), and hence the module M40 applies. The resulting initial estimates include A1 ≈ 1.0×10^5 and λ1 = -0.313, together with values for A2, λ2, A3 and λ3 (partly illegible in this copy). These values are further refined by the module M45, applying relative weighting wi = 1/yi² and eliminating A3 by (5.74). The weighted residual sum of squares of the refined fit is close to
the value obtained by Jennrich and Bright. Thus the fit is satisfactory, and the peeling method is seen to give surprisingly good initial estimates. The only remaining problem is to find the values of the original parameters k1, k2, ..., k5. This can be done via the formulas listed in (ref. 32). The final estimates

k1 = 0.0754, k2 = 0.1754, k3 = 0.1351, k4 = 0.0156 and k5 = …

agree well with the ones of Jennrich and Bright (ref. 31).

Exercises

Carry out numerical experiments with other choices of n1, n2 and n3 in the peeling method. Try to construct a heuristic rule for subinterval selection which can be used in a computer without human interaction.

Compute approximate standard errors of the parameters k1, k2, ..., k5 using the error propagation law.

REFERENCES

1 P. Henrici, Discrete Variable Methods in Ordinary Differential Equations, John Wiley, New York, 1962.
2 R.L. Johnston, Numerical Methods: A Software Approach, John Wiley, New York, 1982.
3 A. Holmberg, On the practical identifiability of microbial growth models incorporating Michaelis-Menten type nonlinearities, Mathematical Biosciences, 62 (1982).
4 B. Carnahan and J.O. Wilkes, Digital Computing and Numerical Methods, John Wiley, New York, 1973.
5 E. Fehlberg, Klassische Runge-Kutta-Formeln fünfter und siebenter Ordnung mit Schrittweiten-Kontrolle, Computing (1969) 93-106.
6 C.W. Gear, The automatic integration of ordinary differential equations, Communications of the ACM, 14 (1971) 176-180.
7 P. Seifert, Computational experiments with algorithms for stiff ODEs, Computing, 38 (1987) 163-176.
8 B.A. Gottwald and G. Wanner, A reliable Rosenbrock-integrator for stiff differential equations, Computing (1981) 335-357.
9 R.J. Field and R.M. Noyes, Oscillations in chemical systems, J. Chemical Physics, 60 (1974) 1877-1884.
10 H. Rabitz, Sensitivity analysis: Theory with applications to molecular dynamics and kinetics, Computers and Chemistry (1980) 167-180.
11 R.P. Dickinson and R.J. Gelinas, Sensitivity analysis of ordinary differential equation systems, J. Comp. Physics, 21 (1978) 123-143.
12 A.M. Dunker, The decoupled direct method for calculating sensitivity coefficients in chemical kinetics, J. Chem. Phys., 81 (1984).
13 P. Valkó and S. Vajda, An extended ODE solver for sensitivity calculations, Computers and Chemistry, 8 (1984) 255-271.
14 R.A. Alberty and F. Daniels, Physical Chemistry, 5th ed., John Wiley, New York, 1980.
15 Y. Bard, Nonlinear Parameter Estimation, Academic Press, New York, 1974.
16 D.M. Himmelblau, C.R. Jones and K.B. Bischoff, Determination of rate constants for complex kinetic models, Ind. Eng. Chem. Fundamentals, 6 (1967) 539-546.
17 A. Yermakova, S. Vajda and P. Valkó, Direct integral method via spline approximation for estimating rate constants, Applied Catalysis, 2 (1982).
18 S. Vajda, P. Valkó and K.R. Godfrey, Direct and indirect least squares methods in continuous-time parameter estimation, Automatica, 23 (1987) 707-718.
19 F.K. Uno, H.L. Ralston, R.I. Jennrich and P.F. Sampson, Test problems from the pharmacokinetic literature requiring fitting models defined by differential equations, Technical Report No. 61, BMDP Statistical Software, Los Angeles, 1979.
20 D.J. Cutler, Numerical deconvolution by least squares: Use of prescribed input functions, J. Pharmacokinetics and Biopharmaceutics, 6 (1978) 227-242.
21 D.J. Cutler, Numerical deconvolution by least squares: Use of polynomials to represent the input function, J. Pharmacokinetics and Biopharmaceutics, 6 (1978) 243-263.
22 F. Langenbucher, Numerical convolution/deconvolution as a tool for correlating in vitro with in vivo drug availability, Pharm. Ind., 44 (1982) 1166-1172.
23 C.T. Chen, Introduction to Linear System Theory, Holt, Rinehart and Winston, New York, 1970.
24 D.P. Vaughan and M. Dennis, Mathematical basis for the point-area deconvolution method for determining in vivo input functions, J. Pharm. Sci., Part I, 69 (1980) 298-305; Part II, 69 (1980) 663-665.
25 B.R. Hunt, Biased estimation for nonparametric identification of linear systems, Math. Biosciences, 10 (1971) 215-237.
26 P. Veng-Pedersen, Novel deconvolution method for linear pharmacokinetic systems with polyexponential impulse response, J. Pharm. Sci., 69 (1980) 312-318.
27 S. Vajda, K.R. Godfrey and P. Valkó, Numerical deconvolution using system identification methods, J. Pharmacokinetics and Biopharmaceutics, 16 (1988) 85-107.
28 P. Veng-Pedersen, An algorithm and computer program for deconvolution in linear pharmacokinetics, J. Pharmacokin. Biopharm. (1980) 463-481.
29 S. Vajda, P. Valkó and T. Turányi, Principal component analysis of kinetic models, Int. J. Chem. Kinet., 17 (1985) 55-81.
30 S. Vajda and T. Turányi, Principal component analysis for reducing the Edelson-Field-Noyes model of the Belousov-Zhabotinsky reaction, J. Phys. Chem., 90 (1986) 1664.
31 R.I. Jennrich and P.B. Bright, Fitting systems of linear differential equations using computer generated exact derivatives, Technometrics, 18 (1976) 385-399.
32 M. Gibaldi and D. Perrier, Pharmacokinetics, Marcel Dekker, New York, 1975.

SUBJECT INDEX

absorption curve 306
abstract factors 65
accelerating factor 99
acid-catalysed reaction 158, 179
activity coefficient 127
addition of zeros 253
affine linear relationship 62, 186
Aitken form
Akaike's Information Criterion 213, 306
Akima method 257
aliasing of the spectrum 250
Almásy indicator 189, 192
Antoine equation 214
Arrhenius dependence 173, 182
artificial variables 20
atom matrix 48, 131
-, virtual 48, 133
background absorption 56
backsubstitution 28, 32
balance equation, linear 188
-, nonlinear 193
base line correction 253
basic variable 11
Belousov-Zhabotinsky reaction 277
bisection method 74
blending problem 13, 24
Box-Draper method 184
boxcar function 246
bracketing interval 74
Brent method 96
broadening of the spectrum
Broyden method 107, 119, 128
canonical basis 5, 7
canonical form 59
Cardano method 71
charge balance equation 125
Chebyshev approximation
chemical reaction 47, 102
chi square distribution 154, 189
Cholesky method 35, 197
compartmental model 91, 313
condition number 45, 60
confidence interval 147
confidence region 144, 154, 178
convergence, monotonic 86
-, oscillating 86
conversion
convolution 247, 253, 298
coordinate transformation
correlation coefficients 153
covariance matrix 63, 153, 163
cross product matrix, normalized 164, 182, 311
curtosis 210
cut-off method 88
cycling
damped iteration 99
Davidon-Fletcher-Powell method 119
deconvolution 298, 307
determinant 29, 31
- criterion 184
diagonal dominance 39
diagonal matrix 42
dimension
Dirac impulse 248, 300
direct integral method 284
direct search method 112
discriminant
dissociation reaction 125
distribution kinetics 302
divergence, monotonic 86
-, oscillating 86
divided difference 225
drug dosing 91
Durbin-Watson D-statistics 152
eigenanalysis
electronic absorption spectrum 258
enthalpy 226, 239
enzyme reaction 123, 177, 283
equilibrium condition 128
equilibrium relations 102, 125
equivalence point, detection of
error measure 189
error propagation 317
estimation criterion 140
Euler method, explicit 263
-, implicit 265
experiment design, A-optimal 211
-, D-optimal 211, 212
-, E-optimal 211
extent of reaction 48
extract 127
extrapolation 228
F-test 146, 152
false position method 77
Fast Fourier Transformation
feasible solution 15
Fibonacci search 96
FLEPOMIN program 119
Forsythe polynomials 205
Fourier transformation 245
-, continuous 247
-, discrete 249, 298
free variable 11
frequency domain 240
full pivoting 13
r q i m 144, 154, 178 convergence, motonic 86 -, oscillating 86 conversion convolution 247, 253, 298 coordinate transformation correlation coefficients 1.53 covariance matrix 63, 153, 163 cross product matrix, normalized 164, 182, 311 curtosis 210 cut-off method 88 cycling damped iteration 99 Davidon-Fletcher-Powell method 119 deconvolution 298, 307 determinant 29, 31 - criterion 184 diagonal dominance 39 diagonal matrix 42 dimension Dirac impulse 248, 300 direct integral method 284, 3aM direct search method 112 discriminant dissociation reaction 125 distribution kinetics 302 divergence, motonic 86 -, oscillating 86 divided difference 225 drug dosing 91 Durbin - Wattson D-statistics 152 eigenanalysis electronic absorption spectrum 258 enthalpy 226, 239 enzyme reaction 123, 177, 283 equilibrium condition 128 equilibrium relations 102, 125 equivalence point, detection of error measure 189 error propagation 317 estimation criterion 140 Euler method, explicit 263 -, implicit 265 experiment design, A optimal 211 -, D - optimal 211, 212 -, E - optimal 211 extent of reaction 48 extract 127 extrapolation 228 - F-test 146, 152 false position method 77 Fast Fourier Transformation 2321 feasible solution 15 Fibonacci search 96 FLEPaVlIN program 119 Forsythe polynomials 205 Fourier transformation 45 -, cmtinuous 247 -, discrete 249, 298, 3BB free variable 11 frequmcy d m i n 240 full pivoting 13 320 Gauss-Newton-Marquardt method 195, 164 Gaussian elimination 27, 36, 39 Gaussian function 223, 254, 258 Gear program 273 Gibbs free energy 127, 131 Newton-Raphson method 82, 104, 130 normal distribution 144, 210 normal form 15 normalized eigenvectors 41 Nyquist critical frequency 2% golden section 68, 98 gradient method 112 odd multiplicity 75 ordinary differential equation 261 Hausholder formula 108, 111 Hessian matrix 112, 173 Hilbert matrix 37, 61 Oregmator model 277 orthogonal polynomials 225, 228 orthonormal eigenvectors 41 outlier 55, 210 overrelaxation 99 ill-conditioned 
problem 45, 178, 282, 3e)6 indicator variable inverse matrix 12 inverse transform 247, 249 iterative improvement 46 iterative reweighting 196 isomerization of alpha-pinene 61, 185 Jacobi method 42 Jacobian matrix 105, 162, 274, 288 Lagrange formula 224 Lagranqe multiplier 132, 188, 241 least absolute deviations 51 least squares 58, 140, 258, 289 Levenberg-Marquardt modification 163 linear combination linear dependence linear interpolation 210 linear system 297 linearization, Eadie-Hofstee 176 -, Hanes 176 -, Lineweaver Wlrk 176 -, Scatchard 176 linearly dependent vectors linearly independent vectors lower triangular matrix 27 LU decomposition 28, 131 Oregonator model 277 orthogonal polynomials 205, 228 orthonormal eigenvectors 41 outlier 55, 210 overrelaxation 99 partial pivoting 13 peeling method 315 Peng-Robinson equation of state 72 permutation matrix 27, 29 phax equilibrium 129 pivot element 6, 9, 3 point-area method 299, 307 polynomial equation 126 positive definite matrix 35, 119 potenticmetric titration 232, 254 practical identifiability 311 predictor-corrector method 269 principal component analysis 65, 183, 282, 311 quadratic form 35, 188 quasi Newton method 107 quasi steady state approximation 124, 283 Milne method minimax criterion 54, 210 multiresmse estimatim 61 radiographic investigation 2(2w raffinate 127 RAM) algorithm 133 random number 144 reaction invariants 51, 133 reaction matrix 47 -, virtual 48 regression line 145 residual 45, 143 response function 59, 139 -, virtual 197 restricted equilibrium 133 ridge parameter 155, 179 ridge regression 179 Rosmbrock function 117, 121 Rosenbrock method 273 Newton formula 2251 Newton method 112, 241 Runge-Kutta method 265 -, m i implicit 273 Margules equation 127, 164 Marquardt parameter 163, 179 mass balance equation 125 material balance 128 matrix inverse 2, 12, 34 maximum likelihood principle 141, 194 method of residuals 315 Michaelis - Mmten equation 123, 176, 268, 294 m A 273 321 , saddle point 59 
Savitzky - Golay formu1a 229, 231, 253 scalar product scaling of the parameters 155 secant method m sensitivity coefficient -, semi-logarithmic 281 sensitivity equation sensitivity matrix 281 stladow price 24, 137 signal-to-noise ratio 221 similarity transformation 41 simplex 113 simplex method o f Nelder and Mead 113, 187 simplex tableau 19 Simpson rule singular value 61 singularity 37 slack variables 15, i0 solving a matrix equation J3 spectroscopy 56 spectrum, amplitude -, phase 248 -, power 248 3, spline, cubic -, interpolating 235, S02 -, natural 236, 241, 287 -, smothing 240 stability 265 standard error 146 steepest descent method 112 step size 2 stiff differential equation stoichimtric coefficient rjtoichiwnetric number of freedom stoichimtric subspace stoichimtry 47 Student's t distributim 57, 147 subspace successive approximation 85, 99 symmetric matrix 35, 41 Taylor series 265 Thomas algorith trapezium rule 234, tridiagmal matrix equation unimdal function unit vector updating formla lB , 119 upper triangular matrix user supplied subrwtine van Laar parameters 215 0, vapor pressure 73, 214 vapor-liquid equilibrium 214 vector coordinates Weigstein method 99 weighting coefficients 145, 174 weighting function 298, 339 weighting matrix 187 weighting, Poisson 161 -, relative 148, 155, 169 window function 258 This Page Intentionally Left Blank .. .DATA HANDLING IN SCIENCE AND TECHNOLOGY -VOLUME Advanced scientific computing in BASIC with applications in chemistry, biology and pharmacology DATA HANDLING IN SCIENCE AND TECHNOLOGY. .. vapor- 139 145 151 161 173 178 179 182 1 84 24 241 242 25 251 252 253 2 54 liquid equilibrium data References 41 41 1 41 2 41 3 41 4 4. 2 42 1 42 2 43 43 1 43 2 43 3 44 44 1 44 2 ... Applications in Chemistry, Biology and Pharmacology by P Valk6 and S.Vajda DATA HANDLING IN SCIENCE AND TECHNOLOGY -VOLUME Advisory Editors: B.G.M Vandeginste, O.M Kvalheim and L Kaufman Advanced scientific

Date posted: 22/03/2014, 23:20

Table of Contents

  • Advanced Scientific Computing in BASIC with Applications in Chemistry, Biology and Pharmacology

  • Copyright Page

  • CONTENTS

  • INTRODUCTION

  • CHAPTER 1. COMPUTATIONAL LINEAR ALGEBRA

    • 1.1 Basic concepts and methods

    • 1.2 Linear programming

    • 1.3 LU decomposition

    • 1.4 Inversion of symmetric, positive definite matrices

    • 1.5 Tridiagonal systems of equations

    • 1.6 Eigenvalues and eigenvectors of a symmetric matrix

    • 1.7 Accuracy in algebraic computations. Ill-conditioned problems

    • 1.8 Applications and further problems

    • References

    • CHAPTER 2. NONLINEAR EQUATIONS AND EXTREMUM PROBLEMS

      • 2.1 Nonlinear equations in one variable

      • 2.2 Minimum of functions in one dimension

      • 2.3 Systems of nonlinear equations

      • 2.4 Minimization in multidimensions

      • 2.5 Applications and further problems

      • References

      • CHAPTER 3. PARAMETER ESTIMATION

        • 3.1 Fitting a straight line by weighted linear regression
