Ebook Design and analysis of experiments (Volume 2: Advanced experimental design): Part 2

CHAPTER 10

Designs with Factors at Three Levels

10.1 INTRODUCTION

In discussing the 2^n factorial design in Chapter 7 we saw that main effects and interactions can be defined simply as linear combinations of the true responses, more specifically as the average response of one set of 2^(n-1) treatment combinations minus the average response of the complementary set of 2^(n-1) treatment combinations. And even more specifically, the main effect of a certain factor is the average response with that factor at the 1 level minus the average response with that factor at the 0 level.

Turning now to the situation where each factor has three levels, which we shall refer to as the 0 level, 1 level, and 2 level, such a simple definition of main effects and interactions no longer exists. We can no longer talk about the main effect of a factor or the interaction between two or more factors but shall talk instead about main effect components or comparisons belonging to a certain factor and about interaction components. We shall see how all this can be developed as a generalization of the formal approach described for the 2^n experiment in Section 7.4.

10.2 DEFINITION OF MAIN EFFECTS AND INTERACTIONS

10.2.1 The 3^2 Case

To introduce the concepts we shall consider first the simplest case, namely that of two factors, A and B say, each having three levels, denoted by 0, 1, 2. A treatment combination of this 3^2 factorial is then represented by x' = (x1, x2), where xi = 0, 1, 2 (i = 1, 2), with x1 referring to factor A and x2 to factor B.

We now partition the set of nine treatment combinations into three sets of three treatment combinations each according to the levels of factor A:

    set I:   {(0, 0), (0, 1), (0, 2)}
    set II:  {(1, 0), (1, 1), (1, 2)}
    set III: {(2, 0), (2, 1), (2, 2)}

More formally, we can define these three sets by the three equations

    set I: x1 = 0     set II: x1 = 1     set III: x1 = 2                        (10.1)

Comparisons among the mean true responses for these three sets are then said to belong to main effect A. Since there are three sets, there are two linearly independent comparisons among these three sets (i.e., their mean responses), and these comparisons represent the 2 d.f. for main effect A. For example, the comparisons could be (set I - set II) and (set I - set III), or (set I - set II) and (set I + set II - 2 set III).

Similarly, we can divide the nine treatment combinations into three sets corresponding to the levels of factor B or, equivalently, corresponding to the equations

    x2 = 0: {(0, 0), (1, 0), (2, 0)}
    x2 = 1: {(0, 1), (1, 1), (2, 1)}
    x2 = 2: {(0, 2), (1, 2), (2, 2)}                                            (10.2)

Comparisons among the mean responses of these three sets then constitute main effect B.

As in the 2^n case, the interaction between factors A and B will be defined in terms of comparisons of sets (of treatment combinations), which are determined by equations involving both x1 and x2. One such partitioning is given by

    set I:   x1 + x2 = 0 mod 3: {(0, 0), (1, 2), (2, 1)}
    set II:  x1 + x2 = 1 mod 3: {(1, 0), (0, 1), (2, 2)}
    set III: x1 + x2 = 2 mod 3: {(2, 0), (0, 2), (1, 1)}                        (10.3)

Comparisons among these three sets account for 2 of the 4 d.f. for the A × B interaction. The remaining 2 d.f. are accounted for by comparisons among the sets based on the following partition:

    set I:   x1 + 2x2 = 0 mod 3: {(0, 0), (1, 1), (2, 2)}
    set II:  x1 + 2x2 = 1 mod 3: {(1, 0), (0, 2), (2, 1)}
    set III: x1 + 2x2 = 2 mod 3: {(2, 0), (0, 1), (1, 2)}                       (10.4)
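The four partitions (10.1)–(10.4) can be generated mechanically from their defining equations. The following sketch (not from the book; a minimal Python illustration) groups the nine treatment combinations of the 3^2 factorial by the value of a linear form α1 x1 + α2 x2 mod 3:

```python
from itertools import product
from collections import defaultdict

def partition(coeffs, n=2, p=3):
    """Group all p^n treatment combinations x by the value of
    sum(coeffs[i] * x[i]) mod p; returns {0: [...], 1: [...], 2: [...]}."""
    sets = defaultdict(list)
    for x in product(range(p), repeat=n):
        sets[sum(c * xi for c, xi in zip(coeffs, x)) % p].append(x)
    return dict(sets)

print(partition((1, 0)))   # main effect A:    x1        = 0, 1, 2       (10.1)
print(partition((0, 1)))   # main effect B:    x2        = 0, 1, 2       (10.2)
print(partition((1, 1)))   # AB   component:   x1 +  x2  = 0, 1, 2       (10.3)
print(partition((1, 2)))   # AB^2 component:   x1 + 2x2  = 0, 1, 2       (10.4)
```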
To see what our development so far means with respect to the usual factorial representation, we consider (see also Section I.11.8.1)

    τij = μ + Ai + Bj + (AB)ij                                                  (10.5)

with

    Σ_i Ai = 0,   Σ_j Bj = 0,   Σ_i (AB)ij = 0 for each j,   Σ_j (AB)ij = 0 for each i,

where τij is the true response for the treatment combination (x1 = i, x2 = j). With the model (10.5), a contrast among the sets (10.1), that is,

    Σ_i ci τ̄_i·    (Σ_i ci = 0)

is the corresponding contrast Σ_i ci Ai among the A main effects. A contrast among the sets (10.2), that is,

    Σ_j cj τ̄_·j    (Σ_j cj = 0)

is the corresponding contrast Σ_j cj Bj among the B main effects. A contrast among the sets (10.3) can be written as

    c1 (τ00 + τ12 + τ21) + c2 (τ10 + τ01 + τ22) + c3 (τ20 + τ02 + τ11)    (c1 + c2 + c3 = 0)

which, using (10.5), reduces to the same contrast in the (AB)ij's. The same is true for comparisons among the sets (10.4), that is,

    c1 (τ00 + τ11 + τ22) + c2 (τ10 + τ02 + τ21) + c3 (τ20 + τ01 + τ12)    (c1 + c2 + c3 = 0)

The reader will notice that the last two comparisons have no particular meaning or interpretation for any choice of the ci's, except that they each belong to the 2-factor interaction A × B and that each represents 2 d.f. of that interaction. This is in contrast to the parameterization given in Section I.11.8.1 in terms of orthogonal polynomials. One difference, of course, is that the parameterization given there is in terms of single-degree-of-freedom parameters, and a second difference is that it is meaningful only for quantitative factors, whereas the definitions in terms of the partitions as summarized in (10.6) below hold for quantitative and qualitative factors. But the most important point is that the current definitions of effects and interactions will prove to be important in the context of systems of confounding (see Section 10.5) and fractional factorials (see Section 13.4).

To sum up our discussion so far, the effects and interactions for a 3^2 experiment are given in pairs of degrees of freedom by comparisons among three sets of treatment combinations as follows:

    A:      x1 = 0, 1, 2
    B:      x2 = 0, 1, 2
    A × B:  x1 +  x2 = 0, 1, 2  mod 3
            x1 + 2x2 = 0, 1, 2  mod 3                                           (10.6)

It is convenient to denote the pair of degrees of freedom corresponding to x1 + x2 = 0, 1, 2 by the symbol AB and the pair corresponding to x1 + 2x2 = 0, 1, 2 by AB^2. It is easy to see that the groups given by the symbols AB^2 and A^2 B are the same. It is, therefore, convenient, in order to obtain a complete and unique enumeration of the pairs of degrees of freedom, to adopt the rule that an order of the letters is to be chosen in advance and that the power of the first letter in a symbol must be unity. This latter is obtained by taking the square of the symbol, with the rule that the cube of any letter is to be replaced by unity; that is, if the initial letter of the symbol occurs raised to the power 2, for example A^2 B, we then obtain A^2 B ≡ (A^2 B)^2 ≡ A^4 B^2 ≡ AB^2. This procedure follows from the fact that the partitioning produced by 2x1 + x2 = 0, 1, 2 is the same as that produced by 2·2x1 + 2x2 = 0, 2, 4, that is, x1 + 2x2 = 0, 2, 1, which is the partitioning denoted by AB^2.

Table 10.1 Effects and Interactions for the 3^3 Experiment

    Effect/Interaction              Left-Hand Side of Defining Equation
    A                               x1
    B                               x2
    A × B        AB                 x1 + x2
                 AB^2               x1 + 2x2
    C                               x3
    A × C        AC                 x1 + x3
                 AC^2               x1 + 2x3
    B × C        BC                 x2 + x3
                 BC^2               x2 + 2x3
    A × B × C    ABC                x1 + x2 + x3
                 ABC^2              x1 + x2 + 2x3
                 AB^2 C             x1 + 2x2 + x3
                 AB^2 C^2           x1 + 2x2 + 2x3
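The equivalence A^2 B ≡ AB^2 noted above is easy to check by enumeration. The following snippet (an illustration, not from the book) verifies that 2x1 + x2 and x1 + 2x2 split the nine treatment combinations into the same three sets:

```python
from itertools import product

def sets(a1, a2, p=3):
    """The three solution sets of a1*x1 + a2*x2 = 0, 1, 2 (mod p), as a set of frozensets."""
    return {frozenset(x for x in product(range(p), repeat=2)
                      if (a1 * x[0] + a2 * x[1]) % p == r) for r in range(p)}

# A^2 B and A B^2 induce the same partition, hence the same pair of d.f.
print(sets(2, 1) == sets(1, 2))   # True
```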
10.2.2 General Case

The procedure of formally defining effects and interactions, illustrated for the 3^2 experiment, can be extended easily to the 3^n case. We shall have then (3^n − 1)/2 symbols, each representing 2 d.f. For example, for the 3^3 experiment there will be 13 symbols, as given in Table 10.1 together with their defining equations of the form α1 x1 + α2 x2 + α3 x3 = 0, 1, 2 mod 3.

For the general case of the 3^n experiment, denoting the factors by A1, A2, ..., An, the (3^n − 1)/2 symbols can be written as A1^α1 A2^α2 ··· An^αn with αi = 0, 1, 2 (i = 1, 2, ..., n) and the convention that (1) any letter Ai with αi = 0 is dropped from the expression, (2) the first nonzero α is equal to one (this can always be achieved by multiplying each αi by 2), and (3) any αi = 1 is not written explicitly in the expression. (This is illustrated in Table 10.1 by replacing A by A1, B by A2, and C by A3.) The n-tuple α' = (α1, α2, ..., αn) associated with A1^α1 A2^α2 ··· An^αn is referred to as a partition of the 3^n treatment combinations into three sets according to the equations

    α1 x1 + α2 x2 + ··· + αn xn = 0, 1, 2 mod 3                                 (10.7)

We now list some properties of such partitions:

1. Each partition leads to three sets of 3^(n−1) treatment combinations each, as is evident from Eqs. (10.7).

2. If α' = (α1, α2, ..., αn) and β' = (β1, β2, ..., βn) are two distinct partitions, then the two equations

    α1 x1 + α2 x2 + ··· + αn xn = δ1 mod 3                                      (10.8)

and

    β1 x1 + β2 x2 + ··· + βn xn = δ2 mod 3                                      (10.9)

are satisfied by exactly 3^(n−2) treatment combinations x' = (x1, x2, ..., xn). This implies that the set of treatment combinations determined by α'x = δ1 has exactly 3^(n−2) treatment combinations in common with each of the three sets determined by the equations β'x = 0, 1, 2 mod 3, respectively. It is in this sense that the two partitions α and β are orthogonal to each other.

3. If a treatment combination x' = (x1, x2, ..., xn) satisfies both Eqs. (10.8) and (10.9) for a particular choice of δ1, δ2, then x also satisfies the equation

    (α1 + β1) x1 + (α2 + β2) x2 + ··· + (αn + βn) xn = δ1 + δ2 mod 3            (10.10)

Equation (10.10) is, of course, one of the three equations associated with the partition α' + β' = (α1 + β1, α2 + β2, ..., αn + βn), in which each component is reduced mod 3, and hence with the interaction A1^(α1+β1) A2^(α2+β2) ··· An^(αn+βn). In agreement with the definition in Section 7.4 we refer to E^(α+β) = A1^(α1+β1) A2^(α2+β2) ··· An^(αn+βn) as the generalized interaction (GI) of E^α = A1^α1 A2^α2 ··· An^αn and E^β = A1^β1 A2^β2 ··· An^βn. In addition to satisfying (10.10), the treatment combination x, which satisfies (10.8) and (10.9), also satisfies the equation

    (α1 + 2β1) x1 + (α2 + 2β2) x2 + ··· + (αn + 2βn) xn = δ1 + 2δ2 mod 3        (10.11)

which is associated with the partition α' + 2β' and hence the interaction E^(α+2β) = A1^(α1+2β1) A2^(α2+2β2) ··· An^(αn+2βn). This interaction is therefore another GI of E^α and E^β.

To summarize then, any two interactions E^α and E^β have two GIs, E^(α+β) and E^(α+2β), where α + β and α + 2β are formed mod 3 and are subject to the rules stated earlier. We illustrate this by the following example.

Example 10.1 Consider AB and ABC^2 in the 3^3 case, that is, α' = (1, 1, 0) and β' = (1, 1, 2). Then (α + β)' = (2, 2, 2) ≡ (1, 1, 1) and (α + 2β)' = (3, 3, 4) ≡ (0, 0, 1), and hence the GIs of AB and ABC^2 are ABC and C. Another way of obtaining this result is through formal multiplication and reduction mod 3, that is,

    (AB)(ABC^2) = A^2 B^2 C^2 = (A^2 B^2 C^2)^2 = ABC

and

    (AB)(ABC^2)^2 = A^3 B^3 C^4 = C.
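The symbol arithmetic of Example 10.1 — reduce exponents mod 3 and rescale so that the first nonzero exponent is 1 — is easy to automate. The sketch below (an illustration only; the helper names are my own) computes the two generalized interactions of any pair of components and reproduces the example:

```python
def normalize(alpha, p=3):
    """Reduce an exponent vector mod p and rescale so that its first nonzero
    entry equals 1 (e.g., (2, 1, 0) -> (1, 2, 0), i.e., A^2 B -> A B^2)."""
    alpha = [a % p for a in alpha]
    first = next((a for a in alpha if a != 0), None)
    if first is None:
        return tuple(alpha)
    inv = pow(first, -1, p)               # for p = 3 the inverse of 2 is 2
    return tuple((a * inv) % p for a in alpha)

def symbol(alpha, letters="ABC"):
    """Render an exponent vector as the textbook symbol, e.g., (1, 2, 0) -> 'AB^2'."""
    return "".join(L if a == 1 else f"{L}^{a}"
                   for L, a in zip(letters, alpha) if a != 0)

def generalized_interactions(alpha, beta, p=3):
    """The two GIs E^(alpha+beta) and E^(alpha+2*beta) as normalized exponent vectors."""
    return (normalize([a + b for a, b in zip(alpha, beta)], p),
            normalize([a + 2 * b for a, b in zip(alpha, beta)], p))

# Example 10.1: AB = (1,1,0) and ABC^2 = (1,1,2) have GIs ABC and C
print([symbol(g) for g in generalized_interactions((1, 1, 0), (1, 1, 2))])   # ['ABC', 'C']
```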
10.3 PARAMETERIZATION IN TERMS OF MAIN EFFECTS AND INTERACTIONS

The symbols used in the previous section to denote pairs of degrees of freedom will also be used to denote the magnitude of effects and interactions in the following way (see also Section 10.4). Each symbol represents a division of the set of 3^n treatment combinations into three sets of 3^(n−1) treatment combinations each. The symbol, with a subscript that is the right-hand side of the equation determining the particular one of the three sets in which the treatment combinations lie, will denote the mean response of that set as a deviation from the overall mean, M. If E^α = A1^α1 A2^α2 ··· An^αn represents an interaction, then

    E_i^α = (A1^α1 A2^α2 ··· An^αn)_i
          = {mean of treatment combinations satisfying α'x = i mod 3} − M       (10.12)

We shall also use the notation E^α_(α'x) for given α and x to denote one of the quantities E_0^α, E_1^α, E_2^α, depending on whether α'x = 0, 1, 2 mod 3, respectively. We note that a comparison belonging to E^α is, of course, given by

    c0 E_0^α + c1 E_1^α + c2 E_2^α      (c0 + c1 + c2 = 0)                      (10.13)

Also, it follows from (10.12) that

    E_0^α + E_1^α + E_2^α = 0                                                   (10.14)

so that any comparison of the form (10.13) could be expressed in terms of only two E_i^α. Such a procedure was, in fact, adopted for the 2^n factorial, but as we shall see below, in the present situation this would only lead to a certain amount of asymmetry.

As an extension of (7.42) we can now state and prove the following result, which expresses the response a(x) of a treatment combination x as a linear combination of interaction components. This parameterization of a(x) is given by

    a(x) = M + Σ_α E^α_(α'x)                                                    (10.15)

where summation is over all α' = (α1, α2, ..., αn) ≠ (0, 0, ..., 0), subject to the rule that the first nonzero αi equals one, and α'x is reduced mod 3. The proof of (10.15) follows that of (7.42) and will be given for the general case in Section 11.5. We illustrate (10.15) with the following example.

Example 10.2 Consider the 3^3 factorial with factors A, B, C and denote the true response of the treatment combination (i, j, k) by a_i b_j c_k. Then (10.15) can be written as

    a_i b_j c_k = M + A_i + B_j + AB_(i+j) + AB^2_(i+2j) + C_k + AC_(i+k) + AC^2_(i+2k)
                  + BC_(j+k) + BC^2_(j+2k) + ABC_(i+j+k) + ABC^2_(i+j+2k)
                  + AB^2C_(i+2j+k) + AB^2C^2_(i+2j+2k)

For i = 1, j = 1, k = 2, for example, this becomes

    a1 b1 c2 = M + A_1 + B_1 + AB_2 + AB^2_0 + C_2 + AC_0 + AC^2_2
               + BC_0 + BC^2_2 + ABC_1 + ABC^2_0 + AB^2C_2 + AB^2C^2_1

We emphasize again that the parameterization (10.15), which because of (10.14) is a non-full-rank parameterization, becomes important in connection with systems of confounding (Section 10.5) and fractional factorials (Section 13.4).

10.4 ANALYSIS OF 3^n EXPERIMENTS

Suppose that each treatment combination is replicated r times in an appropriate error control design, such as a CRD or an RCBD. Comparisons of treatments are then achieved by simply comparing the observed treatment means, and tests for main effects and interactions are done in an appropriate ANOVA.

Table 10.2 ANOVA for 3^3 Experiment in Randomized Complete Block Design

    Source         d.f.                SS
    Blocks         r − 1               27 Σ_{i=1}^{r} (ȳ_i··· − ȳ····)^2
    Treatments     3^3 − 1 = 26
      A            2
      ⋮            ⋮
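Because (10.12) defines each E_i^α directly as a mean deviation, the identity (10.15) can be checked numerically for any set of true responses. The following sketch (not from the book; arbitrary responses are generated purely for the check) does this for the 3^3 case:

```python
import numpy as np
from itertools import product

p, n = 3, 3
rng = np.random.default_rng(1)
combos = list(product(range(p), repeat=n))            # the 27 treatment combinations
tau = {x: rng.normal() for x in combos}               # arbitrary "true" responses
M = np.mean(list(tau.values()))

# the 13 admissible partitions: nonzero alpha whose first nonzero entry is 1
alphas = [a for a in product(range(p), repeat=n)
          if any(a) and next(v for v in a if v) == 1]

def E(alpha, i):
    """E_i^alpha: mean response over {x : alpha'x = i (mod 3)} minus the overall mean M."""
    members = [tau[x] for x in combos
               if sum(a * xi for a, xi in zip(alpha, x)) % p == i]
    return np.mean(members) - M

for x in combos:                      # verify a(x) = M + sum_alpha E^alpha_(alpha'x), eq. (10.15)
    rhs = M + sum(E(a, sum(ai * xi for ai, xi in zip(a, x)) % p) for a in alphas)
    assert np.isclose(tau[x], rhs)
print("parameterization (10.15) reproduces all 27 responses")
```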
The RHS of (18.36)–(18.39) can be computed following (11.93) with the proper modifications, that is, computing the required quantities from the squares as indicated in each expression and then modifying c(γ') and u(γ') accordingly [this includes replacing s by c_r(γ') + u(γ') in (18.38) and by c_c(γ') + u(γ') in (18.39)].

With regard to estimating w_r and w_c, the sums of squares (18.38) and (18.39) and their expected values are needed. Again, these are easily obtained from (11.94) as

    E[MS(X_r | I, X_s, X_c, X_τ)] = σ_e^2 + κ_r K σ_r^2                         (18.40)

where κ_r is given by (18.41) in terms of s, u(γ'), and c_r(γ'), the summation Σ_{E_2,r} being taken over all E^γ' ∈ E_2 that are confounded with rows, and

    E[MS(X_c | I, X_s, X_r, X_τ)] = σ_e^2 + κ_c K σ_c^2                         (18.42)

where κ_c is given analogously by (18.43), with c_c(γ') in place of c_r(γ') and the summation Σ_{E_2,c} over all E^γ' ∈ E_2 that are confounded with columns. We then obtain

    σ̂_e^2 = 1/ŵ = MS(I | I, X_s, X_r, X_c, X_τ)                                (18.44)

from (18.40) and (18.44)

    σ̂_e^2 + K σ̂_r^2 = 1/ŵ_r = (1 − 1/κ_r) MS(I | I, X_s, X_r, X_c, X_τ) + (1/κ_r) MS(X_r | I, X_s, X_c, X_τ)

and from (18.42) and (18.44)

    σ̂_e^2 + K σ̂_c^2 = 1/ŵ_c = (1 − 1/κ_c) MS(I | I, X_s, X_r, X_c, X_τ) + (1/κ_c) MS(X_c | I, X_s, X_r, X_τ)

We illustrate the above by the following example.

Example 18.2 Suppose we have t = 5^2 treatments in an unbalanced lattice square with s = 4 squares. We can use a system of confounding in which the rows and the columns of each of the four squares confound components chosen from A, B, AB, AB^2, AB^3, AB^4 (eight assignments in all, so that each of the six components is confounded at least once). The experimental plan (apart from randomization of rows and columns) is then as given in Table 18.11, and the relevant parts of the R | C, T-ANOVA and the C | R, T-ANOVA are given in Tables 18.12 and 18.13, respectively.

Table 18.11 Experimental Plan for Lattice Square with 25 Treatments in 4 Squares (each of Squares I–IV is a 5 × 5 array of the treatment pairs (x1, x2); within a square the rows hold the treatments satisfying one defining relation α1 x1 + α2 x2 = const mod 5 and the columns those satisfying another)

Table 18.12 R | C, T-ANOVA for Example 18.2

    Source                          d.f.      E(MS)
    X_s | I                         3
    X_τ | I, X_s                    24
    X_c | I, X_s, X_τ               16
    X_r | I, X_s, X_c, X_τ          16        four 4-d.f. contrasts, each σ_e^2 + const · 5σ_r^2
    I | I, X_s, X_r, X_c, X_τ       40        σ_e^2

The 16 d.f. for X_r | I, X_s, X_c, X_τ are partitioned into four contrasts of 4 d.f. each, one for each row-confounded component, comparing its estimate from the square(s) in which it is confounded with rows with its estimate from the remaining squares. Table 18.13, the C | R, T-ANOVA for Example 18.2, is the exact analogue with X_r and X_c interchanged and E(MS) entries of the form σ_e^2 + const · 5σ_c^2; the constants follow from (18.41) and (18.43).

Concerning treatment comparisons such as, for example, (0, 0) versus (1, 1), we have, using the parameterization of treatment responses in terms of effect components,

    τ(0, 0) − τ(1, 1) = [A_0 − A_1] + [B_0 − B_1] + [AB_0 − AB_2] + [AB^2_0 − AB^2_3] + [AB^3_0 − AB^3_4]

(the AB^4 components cancel, since x1 + 4x2 ≡ 0 mod 5 for both treatment combinations), and correspondingly for the estimate τ̂(0, 0) − τ̂(1, 1), with [see (18.34)] a variance consisting of σ_e^2 together with terms in ρ_r^{-1} and ρ_c^{-1} for those components whose estimates draw on row- and column-confounded information, respectively.

18.9.2 Lattice Squares for General K

We now turn to the case where K is not a prime or prime power. Suppose there exist q MOLS of order K. Then the row and column classifications and the classifications with respect to the q languages, say L1, L2, ..., Lq, can be used in pairs to construct various numbers of squares; that is, if we denote the orthogonal classifications by R = L_(q+1), C = L_(q+2), L1, ..., Lq, respectively, then any pair (Li, Lj) with i ≠ j (i, j = 1, 2, ..., q + 2) defines a suitable square arrangement of the t = K^2 treatments, where the levels of Li determine the rows and the levels of Lj determine the columns of the square.

Example 18.3 For K = 6 we have q = 1. A suitable arrangement would be as follows:

                       Square
                  I       II      III
    Rows          L2      L1      L3
    Columns       L3      L2      L1
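For K a prime (or prime power) the squares themselves are easy to generate: a square whose rows are to confound one component and whose columns another is filled by solving the two defining equations for each cell. A small sketch (an illustration only; the particular pair of components is chosen arbitrarily, since the exact assignment used in Table 18.11 is not legible in this extract):

```python
K = 5

def lattice_square(alpha, beta, K=5):
    """Lay out the K^2 treatments (x1, x2) so that row i holds the treatments with
    alpha[0]*x1 + alpha[1]*x2 = i (mod K) and column j those with
    beta[0]*x1 + beta[1]*x2 = j (mod K); alpha, beta must be non-proportional mod K."""
    square = [[None] * K for _ in range(K)]
    for x1 in range(K):
        for x2 in range(K):
            i = (alpha[0] * x1 + alpha[1] * x2) % K
            j = (beta[0] * x1 + beta[1] * x2) % K
            square[i][j] = (x1, x2)
    return square

# a square whose rows confound A (alpha = (1, 0)) and whose columns confound AB (beta = (1, 1));
# the other squares of a lattice square design use other pairs of components
for row in lattice_square((1, 0), (1, 1)):
    print(row)
```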
Example 18.4 For K = 10 we have q = 2. A suitable arrangement would be as follows:

                       Square
                  I       II      III     IV
    Rows          L1      L2      L3      L4
    Columns       L2      L3      L4      L1

Other arrangements are, of course, possible.

18.10 LATTICE RECTANGLES

The concept of the lattice square as a design that enables elimination of heterogeneity in two directions can be extended to situations where the rows and columns do not contain the same number of experimental units. The notion of double confounding presented in Section 9.8 may be used to develop designs for K^n treatments, K being a prime or prime power, with replicates having a rectangular pattern with rows of size K^r and columns of size K^c, where r + c = n. There are, obviously, many possible configurations. To describe the general case leads to rather complicated notation. For this reason we shall illustrate this type of design by the following example.

Example 18.5 Suppose we have t = 3^3 = 27 treatments and 3 replicates with rows of size 3 and columns of size 9. A possible system of row and column confounding would be as follows:

    Replicate I:     Rows: A, B, AB, AB^2        Columns: ABC
    Replicate II:    Rows: A, C, AC, AC^2        Columns: AB^2C
    Replicate III:   Rows: B, C, BC, BC^2        Columns: ABC^2

This leaves each effect or interaction unconfounded in at least one replicate.

With proper modifications, the analysis proceeds as outlined in Section 18.9 using model (18.32) and the weights

    w_r = σ_e^2 + K^r σ_r^2        w_c = σ_e^2 + K^c σ_c^2

Estimation of the weights is accomplished easily by following the procedures outlined in Section 11.12 in combination with (18.41) and (18.43). We merely illustrate this for Example 18.5.

Example 18.5 (Continued) The partitioning of SS(X_r | I, X_s, X_c, X_τ) and SS(X_c | I, X_s, X_r, X_τ) is indicated in Table 18.14 together with the associated E(MS). With these, w_r and w_c can be estimated in the usual way.

Table 18.14 Partial ANOVA for Lattice Rectangle of Example 18.5

    Source: X_r | I, X_s, X_c, X_τ (24 d.f.), partitioned into 2-d.f. contrasts
        A_{I,II} vs A_{III};  B_{I,III} vs B_{II};  C_{II,III} vs C_{I}                 each σ_e^2 + const · 3σ_r^2
        AB_I vs AB_{II,III};  AB^2_I vs AB^2_{II,III};  AC_II vs AC_{I,III};
        AC^2_II vs AC^2_{I,III};  BC_III vs BC_{I,II};  BC^2_III vs BC^2_{I,II}         each σ_e^2 + const · 3σ_r^2
        A_I vs A_II;  B_I vs B_III;  C_II vs C_III                                      each σ_e^2 + 3σ_r^2

    Source: X_c | I, X_s, X_r, X_τ (6 d.f.), partitioned into 2-d.f. contrasts
        ABC_I vs ABC_{II,III};  AB^2C_II vs AB^2C_{I,III};  ABC^2_III vs ABC^2_{I,II}   each σ_e^2 + const · 9σ_c^2

    The constants in the E(MS) column follow from (18.41) and (18.43).
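The claim that this system of double confounding leaves every one of the 13 effects and interactions of the 3^3 factorial unconfounded in at least one replicate is easy to check by enumeration. A sketch (an illustration only; the superscripts in the confounding system are as reconstructed above):

```python
from itertools import product

def normalize(a, p=3):
    """Reduce an exponent vector mod p so that its first nonzero entry is 1."""
    a = [x % p for x in a]
    first = next((x for x in a if x), 0)
    return tuple(a) if first == 0 else tuple((x * pow(first, -1, p)) % p for x in a)

effects = {normalize(a) for a in product(range(3), repeat=3) if any(a)}      # 13 components

confounded = {   # (A, B, C) exponent vectors confounded with rows or columns per replicate
    "I":   [(1,0,0), (0,1,0), (1,1,0), (1,2,0), (1,1,1)],   # rows A, B, AB, AB^2; column ABC
    "II":  [(1,0,0), (0,0,1), (1,0,1), (1,0,2), (1,2,1)],   # rows A, C, AC, AC^2; column AB^2C
    "III": [(0,1,0), (0,0,1), (0,1,1), (0,1,2), (1,1,2)],   # rows B, C, BC, BC^2; column ABC^2
}
assert all(any(e not in conf for conf in confounded.values()) for e in effects)
print("every effect/interaction is unconfounded in at least one replicate")
```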
18.11 RECTANGULAR LATTICES

The class of two-dimensional lattices is applicable when the number of treatments is a perfect square, that is, t = K^2. Obviously, this limits the number of cases in which such a lattice design can be used. To remedy this deficiency Harshbarger (1947, 1949, 1951) introduced designs for t = K(K − 1) treatments in blocks of size K − 1, which are referred to as rectangular lattice designs. Actually, this is a special case of a more general class of designs for t = K(K − L) treatments in blocks of size K − L (K > L). However, only the case L = 1 is really useful from a practical point of view, since (K − 1)^2 < K(K − 1) < K^2, thus accommodating intermediate values of t. In contrast to the lattice designs discussed so far there does not now exist a correspondence between the treatments and factorial combinations, a fact that will be reflected in the analysis of these designs.

18.11.1 Simple Rectangular Lattices

We denote the treatments by ordered pairs (x, y) with x, y = 1, 2, ..., K, x ≠ y. We then form two replicates, each with K blocks of size K − 1, as follows:

    Replicate I:  Treatments with the same value for x form the xth block.
    Replicate II: Treatments with the same value for y form the yth block.

Thus each treatment (x, y) appears together in a block with 2(K − 2) treatments, all of which have one digit in common with (x, y). The remaining K^2 − 3K + 3 treatments have either 0, 1, or 2 digits in common with (x, y). Nair (1951) has shown that these groups of treatments define an association scheme for a PBIB(4) design for K ≥ 4, with the actual resolved PBIB(4) design as given above with t = K(K − 1), k = K − 1, r = 2, b = 2K. More formally, the association scheme and the parameters of the design (see Section 4.3) are as follows: Two treatments (x, y) and (x', y') are said to be

    1st associates if (x = x', y ≠ y') or (y = y', x ≠ x')
    2nd associates if x ≠ x', x ≠ y', y ≠ x', y ≠ y'
    3rd associates if (x = y', y ≠ x') or (y = x', x ≠ y')
    4th associates if x = y', y = x'

Thus n1 = 2(K − 2), n2 = (K − 2)(K − 3), n3 = 2(K − 2), n4 = 1, with λ1 = 1, λ2 = λ3 = λ4 = 0. Furthermore,

    P1 = | K−3      K−3           1       0 |
         | K−3      (K−3)(K−4)    K−3     0 |
         | 1        K−3           K−3     1 |
         | 0        0             1       0 |

    P2 = | 2         2(K−4)       2         0 |
         | 2(K−4)    (K−4)(K−5)   2(K−4)    1 |
         | 2         2(K−4)       2         0 |
         | 0         1            0         0 |

    P3 = | 1        K−3           K−3     1 |
         | K−3      (K−3)(K−4)    K−3     0 |
         | K−3      K−3           1       0 |
         | 1        0             0       0 |

    P4 = | 0          0             2(K−2)    0 |
         | 0          (K−2)(K−3)    0         0 |
         | 2(K−2)     0             0         0 |
         | 0          0             0         0 |

For K = 3 this design reduces to a PBIB(3) design, since then n2 = 0. The analysis of this design proceeds as outlined in Section 4.5.
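A quick enumeration confirms the block structure and the concurrence pattern of the simple rectangular lattice (a sketch, not from the book; K = 4 is used only as an example):

```python
from itertools import product, combinations
from collections import Counter

K = 4
treatments = [(x, y) for x, y in product(range(1, K + 1), repeat=2) if x != y]

rep1 = [[t for t in treatments if t[0] == x] for x in range(1, K + 1)]   # blocks by x
rep2 = [[t for t in treatments if t[1] == y] for y in range(1, K + 1)]   # blocks by y

conc = Counter()
for block in rep1 + rep2:
    for pair in combinations(sorted(block), 2):
        conc[pair] += 1

print(len(treatments), len(rep1 + rep2))        # t = K(K-1) treatments, b = 2K blocks
print(max(conc.values()))                       # every concurrence is at most lambda_1 = 1
```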
18.11.2 Triple Rectangular Lattices

We now denote the treatments by triplets (x, y, z) with x, y, z = 1, 2, ..., K and x ≠ y ≠ z ≠ x. The triplets are chosen in the following way. We take a Latin square of order K, replacing the Latin letters A, B, C, ... by the "Latin" numbers 1, 2, 3, ..., respectively, arranged in such a way that the diagonal contains the numbers 1, 2, ..., K. Then, leaving out the diagonal, the remaining K(K − 1) cells are identified by the row number, x, the column number, y, and the "Latin" number, z. Each such cell corresponds to a treatment (x, y, z), and the treatments are allocated to the blocks in the three replicates as follows:

    Replicate I:   Treatments with the same x value form the xth block.
    Replicate II:  Treatments with the same y value form the yth block.
    Replicate III: Treatments with the same z value form the zth block.

This gives a resolved design with parameters t = K(K − 1), k = K − 1, b = 3K, r = 3. Nair (1951) has shown that for K = 3 and K = 4 the resulting design is a PBIB design, but that this is no longer true for K ≥ 5.

For K = 3 we have a PBIB(2) design with the following association scheme: Two treatments (x, y, z) and (x', y', z') are said to be 1st associates if x = x' or y = y' or z = z', and 2nd associates otherwise. It then follows that n1 = 3, n2 = 2, λ1 = 1, λ2 = 0 and

    P1 = | 0  2 |        P2 = | 3  0 |
         | 2  0 |             | 0  1 |

For K = 4 the association is as follows: Two treatments (x, y, z) and (x', y', z') are said to be 1st associates if x = x' or y = y' or z = z', 2nd associates if they have all three digits alike, and 3rd associates otherwise. This leads to a PBIB(3) design with parameters n1 = 6, n2 = 2, n3 = 3, λ1 = 1, λ2 = λ3 = 0 and

    P1 = | 2  1  2 |      P2 = | 3  0  3 |      P3 = | 4  2  0 |
         | 1  0  1 |           | 0  1  0 |           | 2  0  0 |
         | 2  1  0 |           | 3  0  0 |           | 0  0  2 |

The analysis of these triple rectangular lattices follows the procedures given in Section 4.5. For K ≥ 5 these designs can be analyzed according to the methods developed in Chapter 1 for the general incomplete block design. It should be mentioned, however, that for K = 5 treatments are compared with five different variances, depending on whether or not they appear together in the same block and on how many digits they have alike. It appears then that this design has a structure that is more general than that of a PBIB design (see also Nair, 1951) but that is unknown.

The method used to construct triple rectangular lattices can be generalized to construct rectangular lattices with more than three replicates by using several MOLS (where available) to label the treatments appropriately. For K − 2 MOLS (if they exist) we obtain the near-balance rectangular lattices with K replicates (Harshbarger, 1951).

18.12 EFFICIENCY FACTORS

It should be clear from our discussion in the preceding sections that the one-restrictional lattice designs are certain types of incomplete block designs, and that the two-restrictional lattice designs are resolvable row–column designs. For both of these types of designs we have discussed in Sections 1.12 and 6.6.7, respectively, how comparisons among competing designs can be made by computing their efficiency factors, and by providing upper bounds for the efficiency factors. These results can, of course, be used here, too. More specifically, however, Patterson and Williams (1976) give the efficiency factor for a square lattice as

    E = (K^2 − 1)(s − 1) / [(K^2 − 1)(s − 1) + s(K − 1)]                        (18.45)

where s is the number of replicates or number of different systems of confounding used. For other one-restrictional lattices (18.45) represents an upper bound.

For a two-restrictional lattice John and Williams (1995) give an upper bound

    E_ργ = (t − ρ − γ + 1) / (t − 1)                                            (18.46)

where t = number of treatments, ρ = number of rows, and γ = number of columns. Alternatives to (18.46) were derived by John and Street (1992, 1993).
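Both bounds are one-line computations. A sketch follows (an illustration; the squared term in (18.45) is as reconstructed here — with s = K + 1 the formula then reduces to the familiar K/(K + 1) of the balanced lattice):

```python
def efficiency_square_lattice(K, s):
    """Average efficiency factor of a square lattice with s replicates, eq. (18.45)."""
    return (K**2 - 1) * (s - 1) / ((K**2 - 1) * (s - 1) + s * (K - 1))

def upper_bound_two_restrictional(t, rho, gamma):
    """Upper bound (18.46) for a two-restrictional lattice with rho rows and gamma columns."""
    return (t - rho - gamma + 1) / (t - 1)

print(efficiency_square_lattice(5, 2))           # simple lattice, 25 treatments in blocks of 5
print(efficiency_square_lattice(5, 6))           # balanced lattice: 5/6
print(upper_bound_two_restrictional(25, 5, 5))
```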
CHAPTER 19

Crossover Designs

19.1 INTRODUCTION

We have mentioned earlier (e.g., I.9.1) that a very important aspect of experimental design is the reduction of error in order to improve the inference from experiments. This is particularly true if an experiment involves biological entities, such as animals or humans, since such entities typically exhibit rather large variability. One way to reduce this natural variability is to use each animal or human, which we shall refer to as subjects, as a block rather than as an experimental unit. Different treatments are then applied successively, that is, in different time periods, to each subject, so that each subject–period combination now represents the experimental unit. This is often referred to as each subject acting as its own control, since now comparisons between treatments can be made within rather than between subjects. Obviously, for this procedure to be of any value certain conditions have to be fulfilled: (1) a subject reacts to the treatment soon after the treatment has been applied, (2) the treatment effect only lasts for a limited time, (3) after this time the subject is restored to its original state, and (4) the treatment effects are the same in each period.

If these conditions are satisfied, we may use some form of block design where the subjects are the blocks and the treatments to be administered to the subject are applied at random. If, however, period effects are suspected, that is, the subjects change systematically over the time of the trial, then we employ some sort of row–column design, with rows representing the periods and columns representing the subjects. This situation may occur, for example, in a dairy cattle feeding trial, where the treatments are applied during the cow's lactation period and where it is known that changes occur during the lactation period regardless of the treatments (Cochran, Autrey, and Cannon, 1941).

It is this latter type of situation that we are concerned with in this chapter. The designs suitable for these situations are called crossover designs or changeover designs, and we shall discuss some of their features in the following sections. We should mention that these designs are sometimes referred to as repeated measurements designs, but we shall reserve this term for designs where repeated measurements are obtained on a subject following a single treatment application (see I.13.7).

Crossover designs were first considered in principle in the context of agricultural experiments. Cochran (1939), for example, alluded to their special features in connection with rotation experiments. Further developments came with applications in animal feeding trials (e.g., Lucas, 1957; Patterson, 1951; Patterson and Lucas, 1959, 1962), biological assay (e.g., Finney, 1956), pharmaceutical and clinical trials (e.g., Senn, 1993), psychology (e.g., Cotton, 1998), and industrial research (e.g., Raghavarao, 1990). For a brief history of this subject see Hedayat and Afsarinejad (1975).

19.2 RESIDUAL EFFECTS

We have pointed out above that one of the advantages of crossover designs is that certain treatment contrasts may be estimated more precisely on a within-subject basis as compared to designs where only between-subject information is available. There are, however, also disadvantages, such as the length of time required for a trial. Another major disadvantage may arise if the treatments exhibit effects beyond the period in which they are applied. These lingering effects are referred to as residual effects or carryover effects. If these effects cannot be accounted for, they may bias the estimates of contrasts among treatment effects or, as they also are referred to, direct effects. Wash-out periods (with either no treatment or a standard treatment) have been used to eliminate this problem, but that may in some cases be unethical and it also prolongs the duration of the trial even more. There may be situations in which one is interested in estimating the residual effects, but generally we are interested only in the direct effects. It is therefore important to construct designs that allow us to estimate the direct effects separately from the residual effects.
19.3 THE MODEL

We have argued before (see I.2.2) that the development of the experimental design and the formulation of an appropriate statistical model are intimately connected in that the structures of the treatment design, the error control design, and the sampling and observation design determine essentially the complexity of the statistical model for purposes of analyzing the data. From our discussion so far it is clear that a model for crossover designs follows that for a row–column design, that is, it contains period, subject, and treatment (direct) effects. In addition, however, we also need to include residual effects, and that requires that we have to make assumptions about the nature of the residual effects: Over how many periods do they extend? Do they interact with the treatments applied in these periods? Do they change over time? The commonly used assumptions are that the carryover effects last only for one period and that they are constant over time and do not depend on the treatment applied in that period (but see also Section 19.8.6).

Thus, if we denote by y_ij the observation in period i (i = 1, 2, ..., p) on subject j (j = 1, 2, ..., n), we write the model as

    y_ij = μ + π_i + β_j + τ_d(i,j) + ρ_d(i−1,j) + e_ij                         (19.1)

where π_i represents the ith period effect, β_j the jth subject effect, τ_d(i,j) the treatment effect, with d(i, j) denoting the treatment applied to subject j in period i using the design d, ρ_d(i−1,j) the carryover effect associated with the treatment assigned to subject j in period i − 1, and e_ij the error with mean 0 and variance σ_e^2.

We shall rewrite (19.1) in matrix notation as follows (see Stufken, 1996):

    y = 1μ + X_1 π + X_2 β + X_d3 τ + X_d4 ρ + e                                (19.2)

with y' = (y_11, y_21, ..., y_pn), π' = (π_1, π_2, ..., π_p), β' = (β_1, β_2, ..., β_n), τ' = (τ_1, τ_2, ..., τ_t), ρ' = (ρ_1, ρ_2, ..., ρ_t),

    X_1 = | I_p |                    | 1_p  0_p  ···  0_p |
          | I_p |  = 1_n ⊗ I_p,  X_2 = | 0_p  1_p  ···  0_p |  = I_n ⊗ 1_p
          |  ⋮  |                    |  ⋮    ⋮         ⋮  |
          | I_p |                    | 0_p  0_p  ···  1_p |

and the pn × t design-dependent matrices

    X_d3 = | X_d31 |          X_d4 = | X_d41 |
           | X_d32 |                 | X_d42 |
           |   ⋮   |                 |   ⋮   |
           | X_d3n |                 | X_d4n |                                  (19.3)

where the p × t matrix X_d3j denotes the period–treatment incidence matrix for subject j and where X_d4j = L* X_d3j denotes the p × t period–residual-effect incidence matrix for subject j, with the p × p matrix L* defined as

    L* = | 0'_(p−1)   0       |
         | I_(p−1)    0_(p−1) |                                                 (19.4)

The form of L* in (19.4) implies that ρ_d(0,j) = 0.
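For a given design d the matrices in (19.2) can be assembled directly from the p × n array of treatment assignments; in particular, X_d4j = L* X_d3j simply shifts each subject's treatment incidence down by one period. A small sketch (an illustration only; the function name and the 2 × 2 example design are mine):

```python
import numpy as np

def crossover_matrices(design):
    """Design matrices of model (19.2) for a crossover design given as a p x n
    array design[i][j] = treatment (0-based label) applied to subject j in period i."""
    d = np.asarray(design)
    p, n = d.shape
    t = d.max() + 1
    X1 = np.tile(np.eye(p), (n, 1))                              # period effects: 1_n (x) I_p
    X2 = np.kron(np.eye(n), np.ones((p, 1)))                     # subject effects: I_n (x) 1_p
    Lstar = np.zeros((p, p)); Lstar[1:, :-1] = np.eye(p - 1)     # one-period lag, eq. (19.4)
    X3 = np.vstack([np.eye(t)[d[:, j]] for j in range(n)])           # direct-effect incidence
    X4 = np.vstack([Lstar @ np.eye(t)[d[:, j]] for j in range(n)])   # carryover incidence
    return X1, X2, X3, X4

# a 2-period, 2-sequence (AB / BA) crossover with one subject per sequence
X1, X2, X3, X4 = crossover_matrices([[0, 1],
                                     [1, 0]])
print(X4)   # the first-period rows are zero: no carryover in period 1
```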

Ngày đăng: 07/07/2023, 01:14

Xem thêm: