SPECIAL FEATURES OF LONGITUDINAL MODELS AND FUNCTIONAL DATA


Let us now have a look at modeling longitudinal data, with the possibility that the number of observations n_i may be rather larger than for the data sets considered by CH and R. These remarks are selected from material in Ramsay and Silverman (1997).

Within-Subject or Curve Modeling

There are two general approaches to modeling curves:

1. Parametric: A family of nonlinear curves defined by a specific model is chosen. For example, test theorists are fond of the three-parameter logistic model, P(θ) = c + (1 - c)/{1 + exp[-a(θ - b)]}, for modeling the probability of getting a test item correct as a function of ability level θ.

2. Nonparametric: A set of K standard fixed functions φ_k, called basis functions, is chosen, and the curve is expressed in the form x(t) = a_1 φ_1(t) + a_2 φ_2(t) + ... + a_K φ_K(t).

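To make the contrast concrete, here is a minimal sketch in Python, with simulated data and illustrative parameter values that are not taken from the text, fitting a curve both ways: the three-parameter logistic model by nonlinear least squares, and a small basis expansion by ordinary linear least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

# --- Parametric: three-parameter logistic item response model ------------
def p3l(theta, a, b, c):
    """P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(0)
theta = np.linspace(-3, 3, 40)                      # ability levels
p_obs = p3l(theta, a=1.5, b=0.2, c=0.2) + rng.normal(0, 0.03, theta.size)
(a_hat, b_hat, c_hat), _ = curve_fit(p3l, theta, p_obs, p0=[1.0, 0.0, 0.1])

# --- Nonparametric: basis expansion x(t) = sum_k a_k * phi_k(t) ----------
t = np.linspace(0, 1, 30)                           # measurement times
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.1, t.size)

K = 4                                               # number of basis functions
Phi = np.column_stack([t**k for k in range(K)])     # monomial basis: 1, t, t^2, t^3
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # linear in the coefficients a_k
y_hat = Phi @ coef                                  # fitted curve values
```
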
The parametric approach is fine if one has a good reason for proposing a curve model of a specific form, but this classic approach obviously lacks the flexibility that is often called for when good data are available.

The so-called nonparametric approach can be much more flexible given the right choice of basis functions and a sufficient number of them, although it is, of course, also parametric in the sense that the coefficients a_k must be estimated from the data. The specific functional form of the basis functions, however, tends not to change from one application to another. Moreover, this approach is linear in the parameters and is therefore well suited to multilevel analysis. Here are some common choices of basis functions:

1. Polynomials: φ_k(t) = t^(k-1). These well-known basis functions (or equivalent bases such as centered monomials, orthogonal polynomials, and so on) are great for curves with only one or two events, as defined previously. A quadratic function, for example, with K = 3, is fine for modeling a one-bump curve with 3 to 7 or more data points. These bases are not much good for describing more complex curves, though, and have been more or less superseded by the B-spline basis described subsequently.

2. Polygons: φ_k(t_j) = 1 if j = k, and 0 otherwise. For this simple basis, coefficient a_k is simply the height of the curve at time t_k, and there is a basis function for each time at which a measurement is taken. Thus there is no data compression, and there are as many parameters to estimate as observations.

3. B-splines: This is now considered the basis of choice for nonperiodic data with sufficient resolution to define several events. A B-spline consists of polynomials of a specified degree joined together at junction points called knots, and is required to join smoothly in the sense that a specified set of derivatives are required to match. The B-spline basis from which these curves are constructed has the great advantage of being local, that is, nonzero only over a small number of adjacent intervals. This brings important advantages at both the modeling and computational levels.

4. Fourier series: No list of bases can omit these classics. However, they are more appropriate for periodic data, where the period is known, than for unconstrained curves of the kind considered by CH and R. They, too, are local in a sense, but in the frequency domain rather than in the time domain.

5. Wavelets: These basis functions, which are the subject of a great deal of research in statistics, engineering, and pure mathematics, are local simultaneously in both the time and frequency domains and are especially valuable if the curves have sharp local features at unpredictable points in time.

Which basis system to choose depends on (a) whether the process is periodic or not and (b) how many events or features the curve is assumed to have that are resolvable by the data at hand. The trick is to keep the total number K of basis functions as small as possible while still being able to fit events of interest.
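As a small illustration of a local basis with few functions, the following sketch (simulated data; the knot placement and number of basis functions are arbitrary illustrative choices, not values from the text) fits a cubic B-spline by least squares using scipy.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 60)                       # measurement times
y = np.sin(t) + 0.5 * np.exp(-(t - 7)**2) + rng.normal(0, 0.15, t.size)

k = 3                                            # cubic polynomial pieces
interior = np.array([2.5, 5.0, 7.5])             # a few interior knots
# Full knot vector: boundary knots repeated k+1 times, as B-splines require.
knots = np.r_[[t[0]] * (k + 1), interior, [t[-1]] * (k + 1)]

spline = make_lsq_spline(t, y, knots, k=k)       # least-squares B-spline fit
y_hat = spline(t)                                # smooth fitted curve
# Total number of basis functions: len(knots) - k - 1 = 7 here, far fewer
# than the 60 observations.
```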

Basis functions can be chosen so that they separate into two orthogonal groups, one group measuring the low-frequency or smooth part of the curve and the other measuring the high-frequency and presumably more variable component of within-curve variation. This opens up the possibility of a within-curve separation of levels of the model. The large literature on smoothing splines follows this direction and essentially uses multilevel analysis, but refers to this as regularization.
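The regularization idea can be sketched as penalized least squares: a roughness penalty shrinks the high-frequency part of the fitted coefficients while leaving the smooth part nearly untouched. A minimal sketch, using a polygon-type basis and a second-difference penalty (a discrete stand-in for the smoothing-spline penalty, chosen here for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 50)
y = np.cos(3 * np.pi * t) + rng.normal(0, 0.2, t.size)

# Polygon basis: one coefficient per time point, so Phi is the identity.
Phi = np.eye(t.size)

# Second-difference operator: penalizes curvature, i.e., the "rough" part.
D = np.diff(np.eye(t.size), n=2, axis=0)

lam = 5.0                                        # smoothing parameter
# Penalized least squares: solve (Phi'Phi + lam * D'D) a = Phi'y.
a = np.linalg.solve(Phi.T @ Phi + lam * (D.T @ D), Phi.T @ y)
y_smooth = Phi @ a                               # smoothed curve
```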

Variance Components

We have here the three variance components: (a) R, containing the within-curve variances and covariances; (b) D, containing the second-level variances and covariances; and (c) B, containing the variances and covariances among the whole-sample parameters.

The behavioral science applications that I've encountered tend to regard components in R as of little direct interest. For nonlongitudinal data, it is usual to use R = σ²I, and, as Diggle et al. (1994) suggest, this may even be a wise choice for longitudinal data because least squares estimation is known to be insensitive to misspecification of the data's variance-covariance structure. However, in any case, the counsel tends to be to keep this part of the model as simple as possible, perhaps using a single first-order autocorrelation model if this seems needed. More elaborate serial correlation models can burn up precious degrees of freedom very quickly and are poorly specified unless there are large numbers of observations per curve.
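For concreteness, a first-order autocorrelation structure keeps R down to two parameters, a within-curve variance σ² and an autocorrelation ρ, with covariance decaying geometrically with the lag between measurement occasions. A small sketch with illustrative values only:

```python
import numpy as np

def ar1_cov(n_times, sigma2, rho):
    """Within-subject covariance R with R[i, j] = sigma2 * rho**|i - j|."""
    lags = np.abs(np.subtract.outer(np.arange(n_times), np.arange(n_times)))
    return sigma2 * rho**lags

R = ar1_cov(n_times=5, sigma2=1.0, rho=0.4)
# R reduces to sigma2 * I when rho = 0, the simple choice discussed above.
```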

It is the structure of D that is often the focus of modeling strategies. If, as is usual, between-subject covariances are assumed to be 0, then, as we noted previously, D is block-diagonal and the m submatrices C on the diagonal will be of order K and all equal. Both CH and R consider saturated models in which the entire diagonal submatrix is estimated from the data.
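Under that assumption, D is simply m copies of the same K × K submatrix C placed along the diagonal. A short sketch with illustrative values:

```python
import numpy as np
from scipy.linalg import block_diag

m, K = 3, 2                              # subjects and basis coefficients per subject
C = np.array([[1.0, 0.3],
              [0.3, 0.5]])               # common level-2 covariance submatrix
D = block_diag(*[C] * m)                 # block-diagonal D of order m * K
```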

Of course, as the number of basis functions K increases, the number of variances and covariances, namely K(K + 1)/2, increases rapidly. The great virtue of local basis systems, such as B-splines, Fourier series, and wavelets, is that all covariances sufficiently far away from the diagonal of this submatrix can be taken as 0, so that the submatrix is band-structured. The PROC MIXED procedure in SAS (1995) supports a number of specialized variance-covariance structures.
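Banding can be pictured as setting every covariance more than a chosen number of positions away from the diagonal of the K × K submatrix to zero. The sketch below uses illustrative numbers (in practice the band structure is imposed during estimation, not by zeroing an estimate afterward) and shows how the parameter count shrinks:

```python
import numpy as np

K = 8                                   # number of basis coefficients per subject
full_params = K * (K + 1) // 2          # saturated submatrix: 36 parameters

bandwidth = 2                           # covariances more than 2 apart set to 0
idx = np.arange(K)
in_band = np.abs(np.subtract.outer(idx, idx)) <= bandwidth
banded_params = int(np.triu(in_band).sum())   # 21 parameters here

# Applying the band pattern to an estimated submatrix C (placeholder values):
C = np.eye(K)
C_banded = np.where(in_band, C, 0.0)
```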

The contents of B will be of interest only if the group parameters in β are considered random. This seems not often to be the case for behavioral science applications.

CONCLUSIONS

Multilevel analysis requires a substantial investment in statistical technology. Programs such as SAS PROC MIXED can be difficult to use because of unfriendly documentation, poor software design, and options not well adapted to the structure of the data at hand. Although this is not the place to comment on the algorithms used to fit multilevel models, these are far from being bulletproof, and failures to converge, extremely long computation times, and even solutions that are in some respect absurd are real hazards. Indeed, the same may be said for most variance components estimation problems, and there is a large and venerable literature on computational strategies in this field. The best known variance components model for behavioral scientists is probably factor analysis, where computational problems and the sample sizes required for stable estimates of unique variances have led most users with moderate sample sizes to opt for the simpler and more reliable principal components analysis.

The emphasis that CH and R place on testing the fit of the various extensions of their basic models might persuade one that fitting the data better is always a good thing. It is, of course, if what is added to the model to improve the fit is of scientific or practical interest. In the case of multilevel models, what are added are the random components and whatever parameters are used to define the variance-covariance structures in D and R.

On the other hand, adding fitting elements may not be wise if the effects of interest can be reasonably well estimated with simpler methods and fewer parameters. For example, if there is little variability in the curve shapes across individuals, then adding random coefficients burns up precious degrees of freedom for error. The resulting instability of fixed effect parameter estimates, and the loss of power in hypothesis tests for these effects, will more than offset the modest decrease in bias achieved relative to that for a simple fixed effects model. As for variance components, it is worth saying again, along with Diggle et al. (1994) and others, that modeling the covariance structure in D and R will usually not result in any big improvement in the estimation and testing of the fixed effect vector β. KBK have added an important message of caution in this regard.

If you are specifically interested in the structure of D and/or R, and have the sample size to support the investigation, then multilevel analysis is definitely for you. If you want to augment sparse or noisy data for a single individual by borrowing information from other individuals, so that you can make better predictions for that person, then this technique has much to offer. However, if it is the fixed effects in β that are your focus, then I wish we knew more about when multilevel analysis really pays off. If you have longitudinal data, it might be worth giving some consideration to the new analyses, such as curve registration and differential models, emerging in functional data analysis. To sum up, which method to use very much depends on your objectives.

REFERENCES

Brown, K. W., & Moskowitz, D. S. (1998). Dynamic stability of behavior: The rhythms of our interpersonal lives. Journal of Personality, 66, 105-134.

Diggle, P. J., Liang, K.-Y., & Zeger, S. L. (1994). Analysis of longitudinal data. Oxford: Clarendon Press.

Jennrich, R., & Schluchter, M. (1986). Unbalanced repeated-measures models with structured covariance matrices. Biometrics, 42, 809-820.

Little, R. J. A., & Rubin, D. B. (1987). Statistical analysis with missing data. New York: John Wiley and Sons.

Ramsay, J. O., & Dalzell, C. (1991). Some tools for functional data analysis (with discussion). Journal of the Royal Statistical Society, Series B, 53, 539-572.

Ramsay, J. O., & Dalzell, C. (1995). Incorporating parametric effects into functional principal components analysis. Journal of the Royal Statistical Society, Series B, 57, 673-689.

Ramsay, J. O., Heckman, N., & Silverman, B. (1997). Spline smoothing with model-based penalties. Behavior Research Methods, 29(1), 99-106.

Ramsay, J. O., & Li, X. (1998). Curve registration. Journal of the Royal Statistical Society, Series B, 60, 351-363.

Ramsay, J. O., & Silverman, B. W. (1997). Functional data analysis. New York: Springer.

SAS Institute. (1995). Introduction to the MIXED procedure course notes (Tech. Rep.). Cary, NC: SAS Institute Inc.

Searle, S. R., Casella, G., & McCulloch, C. E. (1992). Variance components. New York: Wiley.

Chapter 5

Analysis of Repeated Measures Designs with Linear Mixed Models

Dennis Wallace
University of Kansas Medical Center

Samuel B. Green
Arizona State University

Repeated measures or longitudinal studies are broadly defined as studies for which a response variable is observed on at least two occasions for each experimental unit. These studies generate serial measurements indexed by some naturally scaled characteristic such as age of experimental unit, length of time in study, calendar date, trial number, student's school grade, individual's height as a measure of physical growth, or cumulative exposure of a person to a risk factor. The experimental units may be people, white rats, classrooms, school districts, or any other entity appropriate to the question that a researcher wishes to study. To simplify language, we will use the term "subject" to reference the experimental units under study and the term "time" to reference the longitudinal indexing characteristic.

In most general terms, the objectives of repeated measures studies are to characterize patterns of the subjects' response over time and to define relationships between these patterns and covariates (Ware, 1985). Repeated measures studies may focus simply on change in a response variable for subjects over time. Examples include examining changes in satisfaction with health as a function of time since coronary bypass surgery, assessing the pattern of change in self-concept for adolescents as a function of height, and characterizing the growth of mathematical ability in elementary school children as a function of grade level.

Studies may incorporate simple, non-time-varying covariates that differentiate subjects into two or more subpopulations. The primary emphasis in these studies is on characterizing change over time for each subpopulation and assessing whether the patterns of change differ across subpopulations. For example, an objective of a study might be to assess whether the growth rate of emotional maturity across grade levels differs between male and female students. Besides having an interest in differences between average growth rates for subpopulations, we could also be interested in differences in the variability of growth rates for these subpopulations. The non-time-varying covariate could also be a continuous variable, such as family income at the beginning of the study, and the focus could be on growth rate as a function of income.

Alternatively, a repeated measures design might include a covariate that varies over time. For example, a study examining weekly ratings by knee-surgery patients of their health satisfaction might include weekly ratings by physicians of the inflammation level of the patients' knees. Regardless of whether a study includes non-time-varying covariates and/or time-varying covariates, we may choose not to include time as a variable in our analysis but to have time act simply as an index for defining different observations.

One example might be a study in which we explore the relationship between body image and sexual self-confidence over time. We could ignore time as a predictor in the analysis and investigate the hypothesis that the slope coefficients in predicting sexual self-confidence from body image vary more greatly for obese men than for normal-weight men.

As suggested by the previous research questions, we may want to use repeated measures designs to answer questions about variability in behavior as well as average behavior. The linear mixed model allows us to model both means and variances so that we can investigate questions about both of these components. In addition, because the linear mixed model allows accurate depiction of the within-subject correlation inherently associated with repeated measurements, we can make better statistical inferences about the mean performances of subjects. This chapter introduces the linear mixed model as a tool for analyzing repeated measures designs.
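As a preview, the following sketch fits a simple linear mixed model, with a random intercept and a random slope for time, to simulated data using Python's statsmodels. The variable names and simulation settings are invented for illustration; the chapter's own analyses use SAS PROC MIXED.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_subjects, n_times = 30, 6
rows = []
for s in range(n_subjects):
    b0 = rng.normal(0, 1.0)                 # subject-specific intercept deviation
    b1 = rng.normal(0, 0.3)                 # subject-specific slope deviation
    for t in range(n_times):
        y = 2.0 + b0 + (0.5 + b1) * t + rng.normal(0, 0.5)
        rows.append({"subject": s, "time": t, "y": y})
data = pd.DataFrame(rows)

# Fixed effects: intercept and time; random effects: intercept and slope per subject.
model = smf.mixedlm("y ~ time", data, groups=data["subject"], re_formula="~time")
result = model.fit()
print(result.summary())
```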

Basic terminology for repeated measures studies (Helms, 1992) is introduced to simplify the subsequent discussion. A repeated measures study has a regularly timed schedule if measurements are scheduled at uniform time intervals on each subject; it has regularly timed data if measurements are actually obtained at uniform time intervals on each subject. Note that the uniform intervals can be unique for each subject, but the time intervals between observations are constant for any given subject. A repeated measures study has a consistently timed schedule if all subjects are scheduled to be measured at the same times (regardless of whether the intervals between times are constant) and has consistently timed data if the measurements for all subjects are actually made at the same times. If studies are designed with a regular and consistent schedule and data are actually collected regularly and consistently, analyses can be accomplished relatively easily with general linear models techniques. Although many experimental studies are designed to have regularly and consistently timed data, they often yield mistimed data or data with missing observations for some subjects. In nonexperimental, observational studies, data are rarely designed and collected in this manner. Consequently, we need flexible, analytic methods for analyzing data that are not regularly and consistently timed, and the linear mixed model meets this need.
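These definitions translate directly into simple checks on the observed measurement times. A minimal sketch, assuming a hypothetical data layout in which each subject's observation times are stored in a dictionary:

```python
import numpy as np

times = {
    "s1": [0, 2, 4, 6],      # uniform 2-unit spacing
    "s2": [0, 3, 6, 9],      # uniform 3-unit spacing (regular, but different times)
    "s3": [0, 1, 4, 9],      # irregular spacing
}

def regularly_timed(times):
    """Each subject's own intervals are constant (they may differ across subjects)."""
    return all(len(set(np.diff(t))) <= 1 for t in map(np.asarray, times.values()))

def consistently_timed(times):
    """All subjects are measured at exactly the same times."""
    schedules = [tuple(t) for t in times.values()]
    return len(set(schedules)) == 1

print(regularly_timed(times))     # False, because s3 is irregular
print(consistently_timed(times))  # False, schedules differ across subjects
```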

Although many articles have been published throughout the history of statistics discussing methods for the analysis of repeated measures data, the literature has expanded rapidly in the past 15 years, as evidenced by the publications of Goldstein (1987), Lindsey (1993), Diggle et al. (1994), Bryk and Raudenbush (1992), and Littell, Milliken, Stroup, and Wolfinger (1996). Some authors have focused on the analysis of repeated measures data from studies with rigorous experimental designs and discuss analysis of variance (ANOVA) techniques using univariate and multivariate procedures. Procedures using an ANOVA approach are described in detail by Lindquist (1953), Winer (1971), Kirk (1982a, 1982b), and Milliken and Johnson (1992). Although these methods work well for some experimental studies, they encounter difficulties if studies do not have regularly and consistently timed data. Consequently, researchers faced with observational data have approached repeated measures data through the use of regression models, including hierarchical linear models (HLM) and random coefficients models. The discussions of HLM by Bryk and Raudenbush (1992) and of random coefficients models by Longford (1993) and Diggle et al. (1994) take this approach. As we illustrate in this chapter, linear mixed models provide a general approach to modeling repeated measures data that encompasses both the ANOVA and HLM/random-coefficients approaches.

The remaining sections of the chapter outline how the mixed model can be used to analyze data from repeated measures studies. We first introduce the linear mixed model and discuss it in the context of a simple example. Next we describe methods for estimating model parameters and conducting statistical inference in the mixed model. We also briefly introduce how to implement these methods with the SAS procedure MIXED. Then we outline some of the issues that must be considered in constructing models and discuss strategies for building models. In the last section, we present an example analysis illustrating the use of the MIXED procedure and the strategies for building models.
