
Semiparametric varying coefficient model for interval censored data with a cured proportion


SEMIPARAMETRIC VARYING-COEFFICIENT MODEL FOR INTERVAL CENSORED DATA WITH A CURED PROPORTION

SHAO FANG
(M.Sc., National University of Singapore; B.Sc., Suzhou University)

Supervised by A/P Li Jialiang

A thesis submitted for the degree of Doctor of Philosophy, Department of Statistics and Applied Probability, National University of Singapore, 2013.

CONTENTS

Acknowledgements
Summary
List of Notations
Chapter 1  Introduction
Chapter 2  Modelling Methodology
  2.1 Two-part models with varying-coefficients
  2.2 Estimation under mixed case interval censoring
  2.3 Computation
    2.3.1 Estimations
    2.3.2 Algorithm
Chapter 3  Inference
  3.1 Asymptotic theory
  3.2 Estimation of asymptotic variance
  3.3 Bandwidth and model selection
    3.3.1 Cross-validation
    3.3.2 BIC
    3.3.3 Algorithm for bandwidth selection
    3.3.4 Algorithm for model selection
Chapter 4  Simulations and Data Analysis
  4.1 Simulations
  4.2 Data analysis
    4.2.1 Data and statistical models
    4.2.2 Normal distribution
    4.2.3 Logistic distribution
    4.2.4 Gumbel distribution
Chapter 5  Discussion and Further Research Topics
  5.1 Discussion
  5.2 Further research topics
    5.2.1 Modelling survival times of non-cured subjects with the inverse Gaussian distributions
    5.2.2 Bayesian two-part models with varying-coefficients using adaptive regression splines
    5.2.3 Two-part models with varying-coefficients and random effects
    5.2.4 Other topics
Appendix A  Proofs of Theorems
  A.1 Notations
  A.2 Conditions
  A.3 Proofs of theorems

LIST OF FIGURES

Figure 4.1  Estimated varying-coefficients with median performance using normal distributions (setting I), n = 500.
Figure 4.2  Estimated varying-coefficients with median performance using normal distributions (setting I), n = 1000.
Figure 4.3  Estimated varying-coefficients with median performance using logistic distributions (setting I), n = 500.
Figure 4.4  Estimated varying-coefficients with median performance using logistic distributions (setting I), n = 1000.
Figure 4.5  Estimated varying-coefficients with median performance using Gumbel distributions (setting I), n = 500.
Figure 4.6  Estimated varying-coefficients with median performance using Gumbel distributions (setting I), n = 1000.
Figure 4.7  Estimated varying-coefficients with median performance using normal distributions (setting II).
Figure 4.8  Estimated varying-coefficients with median performance using logistic distributions (setting II).
Figure 4.9  Estimated varying-coefficients with median performance using Gumbel distributions (setting II).
Figure 4.10 Estimated varying-coefficients with median performance using normal distributions (setting III).
Figure 4.11 Estimated varying-coefficients with median performance using logistic distributions (setting III).
Figure 4.12 Estimated varying-coefficients with median performance using Gumbel distributions (setting III).
Figure 4.13 Estimated varying-coefficients with median performance using normal distributions (setting IV).
Figure 4.14 Estimated varying-coefficients with median performance using logistic distributions (setting IV).
Figure 4.15 Estimated varying-coefficients with median performance using Gumbel distributions (setting IV).
Figure 4.16 Typical estimated varying-coefficients in 150 simulations using normal distributions (setting V).
Figure 4.17 Typical estimated varying-coefficients in 150 simulations using logistic distributions (setting V).
Figure 4.18 Typical estimated varying-coefficients in 150 simulations using Gumbel distributions (setting V).
Figure 4.19 Estimated varying-coefficients using normal distributions in Model for HDSD.
Figure 4.20 Estimated varying-coefficients using normal distributions in Model for HDSD.
Figure 4.21 Estimated varying-coefficients using logistic distributions in Model for HDSD.
Figure 4.22 Estimated varying-coefficients using Gumbel distributions in Model for HDSD.

LIST OF TABLES

Table 4.1 Simulation results using normal distributions under three cases (setting I), n = 500.
Table 4.2 Simulation results using normal distributions under three cases (setting I), n = 1000.
Table 4.3 Simulation results using logistic distributions under three cases (setting I), n = 500.
Table 4.4 Simulation results using logistic distributions under three cases (setting I), n = 1000.

A.3 Proofs of theorems

If we ignore the difference between $\mu_i(u;\beta_0,\hat a^*)$ and $\mu_i(U_i;\beta_0,\hat\alpha)$, and similarly the difference between $\tilde\mu_i(u;\theta_0,\hat b^*)$ and $\tilde\mu_i(U_i;\theta_0,\hat\gamma)$, by Taylor's expansions and
Lemma 3, then $\hat a^*$ and $\hat b^*$ satisfy the following equations:
\[
n^{-1}\sigma_0^{-1}\sum_{i=1}^n W_{i,p}X_i\left[\delta_i\,
\frac{f\!\big(\tfrac{\log R_i-\mu_i(u;\beta_0,\hat a^*)}{\sigma_0}\big)-f\!\big(\tfrac{\log L_i-\mu_i(u;\beta_0,\hat a^*)}{\sigma_0}\big)}
{F\!\big(\tfrac{\log R_i-\mu_i(u;\beta_0,\hat a^*)}{\sigma_0}\big)-F\!\big(\tfrac{\log L_i-\mu_i(u;\beta_0,\hat a^*)}{\sigma_0}\big)}
+(\delta_i-1)\,
\frac{f\!\big(\tfrac{\log L_i-\mu_i(u;\beta_0,\hat a^*)}{\sigma_0}\big)}
{\exp\{-\tilde\mu_i(u;\theta_0,\hat b^*)\}+1-F\!\big(\tfrac{\log L_i-\mu_i(u;\beta_0,\hat a^*)}{\sigma_0}\big)}
\right]K_{h_1}(U_i-u)=O_p(n^{-1/2}),
\]
\[
n^{-1}\sum_{i=1}^n W_{i,r}M_i\left[\frac{\delta_i}{1+\exp\{\tilde\mu_i(u;\theta_0,\hat b^*)\}}
+(\delta_i-1)\,
\frac{F\!\big(\tfrac{\log L_i-\mu_i(u;\beta_0,\hat a^*)}{\sigma_0}\big)}
{\big[1+\exp\{\tilde\mu_i(u;\theta_0,\hat b^*)\}\big]\big[\exp\{-\tilde\mu_i(u;\theta_0,\hat b^*)\}+1-F\big(\tfrac{\log L_i-\mu_i(u;\beta_0,\hat a^*)}{\sigma_0}\big)\big]}
\right]K_{h_2}(U_i-u)=O_p(n^{-1/2}),
\]
where $\mu_i(u;\beta_0,\hat a^*)=X_i^TW_{i,p}^T\hat a^*(u)+Z_i^T\beta_0$ and $\tilde\mu_i(u;\theta_0,\hat b^*)=M_i^TW_{i,r}^T\hat b^*(u)+N_i^T\theta_0$.

Denote by $\hat G(u;\rho_0,\hat\kappa^*)$ the left-hand side of the above equations; then
\[
\hat G(u;\rho_0,\hat\kappa^*)=o_p(1/\sqrt{nh_1}).
\]
By Taylor's expansion, we have
\[
\hat G(u;\rho_0,\kappa^*)+D_{\kappa^*}\{\hat G(u;\rho_0,\tilde\kappa^*)\}(\hat\kappa^*-\kappa^*)=o_p(1/\sqrt{nh_1}),\tag{A.5}
\]
where $\tilde\kappa^*$ lies between $\hat\kappa^*$ and $\kappa^*$, and hence $\tilde\kappa^*\to\kappa^*$ in probability.

In the following, we take steps to finish the proof.

Step 1. We first consider the asymptotic properties of $D_{\kappa^*}\{\hat G(u;\rho_0,\kappa^*)\}$ and $\hat G(u;\rho_0,\kappa^*)$. Firstly, $D_{\kappa^*}\{\hat G(u;\rho_0,\kappa^*)\}$ is expressed as follows. Denote
\[
K_{h_1,h_2}(U_i-u)=\begin{pmatrix}K_{h_1}(U_i-u)I_{2p}&\\&K_{h_2}(U_i-u)I_{2r}\end{pmatrix},\qquad
K_\nu(w)=\begin{pmatrix}K(w)1_{2p\times 2p}&K(w)1_{2p\times 2r}\\ K(\nu w)1_{2r\times 2p}&K(w)1_{2r\times 2r}\end{pmatrix},
\]
where $1_{m\times n}$ denotes the $m\times n$ matrix whose elements are all $1$; recall that since $h_1=O(h_2)$, we suppose $\lim_{h_2\to 0}h_1/h_2=\nu$. Denote $J_1=\begin{pmatrix}w_p\\ \nu w_r\end{pmatrix}$. Then by the law of large numbers, Lemma 2 and Lemma 3, we have
\[
D_{\kappa^*}\{\hat G(u;\rho_0,\kappa^*)\}
=p(u)\int_{-\infty}^{\infty}J_1\,E\!\left\{\frac{\partial^2 l(\varrho_0)}{(\partial k_0)(\partial k_0)^T}\,\Big|\,U=u\right\}J_1^T\circ K_\nu(w)\,dw+o_p(1)
\]
\[
=-p(u)\int_{-\infty}^{\infty}J_1\,E\!\left[\Big(\frac{\partial l(\varrho_0)}{\partial k_0}\Big)^{\otimes 2}\Big|\,U=u\right]J_1^T\circ K_\nu(w)\,dw+o_p(1)
=p(u)S_0(u;\rho_0,k_0)+o_p(1),\tag{A.6}
\]
where $\circ$ denotes the entrywise ("Hadamard") product of two matrices with the same dimensions.
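The kernel factors $K_{h_1}(U_i-u)$ and $K_{h_2}(U_i-u)$ in the local estimating equations simply downweight subjects whose index $U_i$ lies far from the target point $u$. As a hedged illustration only (the proof does not fix a specific kernel at this point; the Epanechnikov choice and the function names below are assumptions), such localising weights can be computed as:

```python
import numpy as np

def epanechnikov(t):
    """Epanechnikov kernel K(t) = 0.75 (1 - t^2) on [-1, 1], else 0."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= 1.0, 0.75 * (1.0 - t * t), 0.0)

def local_weights(U, u, h):
    """Localising weights K_h(U_i - u) = K((U_i - u) / h) / h that
    downweight subjects whose index U_i lies far from the target u."""
    return epanechnikov((np.asarray(U, dtype=float) - u) / h) / h
```

Each subject's score contribution is multiplied by such a weight before the local equations are solved, which is what makes the resulting coefficient estimates functions of $u$.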
Secondly, we express $\hat G(u;\rho_0,\kappa^*)$ as follows, using the same trick as above:
\[
\hat G_1(u;\rho_0,\kappa^*)=\frac{1}{n}\sum_{i=1}^nK_{h_1,h_2}(U_i-u)\begin{pmatrix}d_{i1}\\ d_{i2}\end{pmatrix}
=p(u)\begin{pmatrix}\tilde d_1\\ \tilde d_2\end{pmatrix}+o_p(1)
=p(u)\int_{-\infty}^{\infty}J_2\,E\!\left[\frac{\partial l(\varrho_0)}{\partial k_0}\,\Big|\,U=u\right]K(w)\,dw+o_p(1),\tag{A.7}
\]
where $J_2=\begin{pmatrix}w_p\\ w_r\end{pmatrix}$.

Step 2. In the following, we establish the asymptotic normality of $\hat G(u;\rho_0,\kappa^*)$. We first compute the mean and the variance of $\hat G(u;\rho_0,\kappa^*)$. Recall that $\mu_i(U_i;\beta_0,\alpha_0)=X_i^T\alpha_0(U_i)+Z_i^T\beta_0$ and $\tilde\mu_i(U_i;\theta_0,\gamma)=M_i^T\gamma(U_i)+N_i^T\theta_0$. Then by Taylor's expansion,
\[
\mu_i(U_i;\beta_0,\alpha_0)-\mu_i(u;\beta_0,\hat a^*)=\frac{(U_i-u)^2}{2}\,X_i^T\ddot\alpha(u)\{1+o_p(1)\},
\]
\[
\tilde\mu_i(U_i;\theta_0,\gamma)-\tilde\mu_i(u;\theta_0,\hat b^*)=\frac{(U_i-u)^2}{2}\,M_i^T\ddot\gamma(u)\{1+o_p(1)\},
\]
where $\ddot f(\cdot)$ denotes the second derivative of the univariate function $f(\cdot)$. For any bivariate smooth function $\phi(\cdot,\cdot)$ on $\mathbb R^2$, by Taylor's expansion again, we have
\[
\phi\big(\mu_i(U_i;\beta_0,\alpha_0),\tilde\mu_i(U_i;\theta_0,\gamma_0)\big)-\phi\big(\mu_i(u;\beta_0,\hat a^*),\tilde\mu_i(u;\theta_0,\hat b^*)\big)
=\frac{(U_i-u)^2}{2}\,D_{\mu_i,\tilde\mu_i}\{\phi\}\begin{pmatrix}X_i^T\ddot\alpha(u)\\ M_i^T\ddot\gamma_0(u)\end{pmatrix}\{1+o_p(1)\},
\]
where $\dot f(\cdot)$ denotes the first derivative of the univariate function $f(\cdot)$.

Denote by $\hat G(u;\rho_0,\alpha_0,\gamma_0)$ the quantity obtained from $\hat G(u;\rho_0,\kappa^*)$ by changing all the $\mu_i(u;\beta_0,\hat a^*)$ and $\tilde\mu_i(u;\theta_0,\hat b^*)$ into $\mu_i(U_i;\beta_0,\alpha_0)$ and $\tilde\mu_i(U_i;\theta_0,\gamma)$, and let $d_{ij,k_0}$ be obtained from $d_{ij}$ by the same substitution, for $j=1,2$. By the property of the MLE, we have $E\big[\frac{\partial l(\varrho_0)}{\partial k_0}\big|U=u\big]=0$. Therefore, $E[\hat G(u;\rho_0,k_0)]=0$.
By the lemmas and the regularity conditions,
\[
E[\hat G(u;\rho_0,\kappa^*)]=E[\hat G(u;\rho_0,\kappa^*)]-E[\hat G(u;\rho_0,k_0)]
=E\,K_{h_1,h_2}(U_i-u)\begin{pmatrix}d_{i1}-d_{i1,k_0}\\ d_{i2}-d_{i2,k_0}\end{pmatrix}
\]
\[
=p(u)h_1^2\begin{pmatrix}E\{\phi_{1,0}|U=u\}\int w_pw^2K(w)\,dw\\[2pt] \nu^{-2}\,E\{\phi_{2,0}|U=u\}\int w_rw^2K(w)\,dw\end{pmatrix}+o_p(1)
=p(u)\tau_n(u)+o_p(1).\tag{A.8}
\]
Similarly,
\[
\operatorname{Var}\{\hat G(u;\rho_0,\kappa^*)\}
=n^{-1}E\left[K_{h_1,h_2}(U_i-u)\begin{pmatrix}d_{i1}\\ d_{i2}\end{pmatrix}\right]^{\otimes 2}+O_p(n^{-1}h_1^4)
=n^{-1}E\left[K_{h_1,h_2}(U_i-u)\begin{pmatrix}d_{i1,k_0}\\ d_{i2,k_0}\end{pmatrix}\right]^{\otimes 2}+o_p(n^{-1})
\]
\[
=n^{-1}h_1^{-1}p(u)S_1(u;\rho_0,k_0)+o_p(n^{-1}h_1^{-1}),\tag{A.9}
\]
where
\[
S_1(u;\rho_0,k_0)=\int_{-\infty}^{\infty}J_3\,E\!\left[\Big(\frac{\partial l(\varrho_0)}{\partial k_0}\Big)^{\otimes 2}\Big|\,U=u\right]dw,\qquad
J_3=\begin{pmatrix}K(w)I_{2p}&\\&\nu K(\nu w)I_{2r}\end{pmatrix}.
\]
To prove the asymptotic normality, we use the Cramér–Wold device. For any constant vector $b\neq 0$, we need to show
\[
\sqrt{nh_1}\,\big\{b^T\hat G(u;\rho_0,\kappa^*)-b^TE[\hat G(u;\rho_0,\kappa^*)]\big\}\xrightarrow{d}N\{0,\,p(u)\,b^TS_1(u;\rho_0,k_0)b\}.\tag{A.10}
\]
Note that the left-hand side of the above formula admits the form
\[
\sqrt{nh_1}\,n^{-1}\sum_{i=1}^n\{K_{h_1,h_2}(U_i-u)Y_i-E\,K_{h_1,h_2}(U_i-u)Y_i\}.
\]
To establish the asymptotic normality, we only need to verify the Lyapunov condition
\[
\sum_{i=1}^nE\big|\sqrt{nh_1}\,n^{-1}\{K_{h_1,h_2}(U_i-u)Y_i-E\,K_{h_1,h_2}(U_i-u)Y_i\}\big|^{2+t}\to 0
\]
for some $t>0$. By the assumed condition (vii), the left-hand side of the above expression is bounded by
\[
2^{2+t}(n^{-1}h_1)^{1+t/2}\,n\,E|YK_{h_1,h_2}(U_i-u)|^{2+t}=O\{(nh_1)^{-t/2}\}\to 0.
\]
This verifies (A.10). Consequently, by (A.5), (A.8) and (A.9),
\[
\sqrt{nh_1}\,\{\hat\kappa^*-\kappa^*-S_0(u;\rho_0,k_0)^{-1}\tau_n(u)\}
=\{p(u)S_0(u;\rho_0,k_0)+o_p(1)\}^{-1}\big\{\hat G(u;\rho_0,\hat\kappa^*)-E[\hat G(u;\rho_0,\hat\kappa^*)]\big\}+o_p(\sqrt{nh_1}\,h_1^2)
\]
\[
\xrightarrow{d}N\{0,\,p^{-1}(u)S_0(u;\rho_0,k_0)^{-1}S_1(u;\rho_0,k_0)S_0(u;\rho_0,k_0)^{-1}\}.
\]
Denote $b_n=S_0(u;\rho_0,k_0)^{-1}\tau_n(u)$ and $\Sigma_1=p^{-1}(u)S_0(u;\rho_0,k_0)^{-1}S_1(u;\rho_0,k_0)S_0(u;\rho_0,k_0)^{-1}$. Therefore,
\[
\sqrt{nh_1}\left\{\begin{pmatrix}\tilde H_1(\hat a(u)-a(u))\\ \tilde H_2(\hat b(u)-b(u))\end{pmatrix}-b_n\right\}\xrightarrow{d}N[0,\Sigma_1].
\]
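The limiting covariance $\Sigma_1=p^{-1}(u)S_0^{-1}S_1S_0^{-1}$ has the usual sandwich form, so once consistent plug-in estimates of $S_0$, $S_1$ and the density $p(u)$ are available (as in Section 3.2), the computation reduces to a few matrix products. A minimal sketch (the matrix names follow the proof; the inputs are assumed to be precomputed estimates):

```python
import numpy as np

def sandwich_cov(S0, S1, p_u):
    """Sigma_1 = p(u)^{-1} S0^{-1} S1 S0^{-1}: S0 is the limiting slope
    matrix, S1 the score-variance matrix, p_u a density estimate at u."""
    S0_inv = np.linalg.inv(S0)
    return S0_inv @ S1 @ S0_inv / p_u
```

In practice $S_0$ and $S_1$ would themselves be kernel-weighted empirical averages evaluated at the same target point $u$.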
Proof of Theorem 1. We established the local consistency of the one-step estimator in Lemma 2. We can use the one-step estimators as the initial estimators for the fully-iterated estimators; following the discussion of Carroll et al. (1997), the fully-iterated estimators $\hat\kappa(u)$ and $\hat\rho$ are then expected to be locally consistent as well.

Given the consistency of the estimators $\hat\kappa$ and $\hat\rho$, we now establish the following asymptotic representation of $\hat\rho$:
\[
n^{1/2}(\hat\rho-\rho_0)=-n^{1/2}(\Lambda_1-\Lambda_2)^{-1}\{\hat G(\rho_0,k_0)+\Phi_4\}+o_p(1),\tag{A.11}
\]
where $\Lambda_1,\Lambda_2$ are defined later. Assuming (A.11) holds, then by the central limit theorem and the given regularity conditions, $n^{1/2}(\hat\rho-\rho_0)$ converges in distribution to a normal random vector with mean zero and variance–covariance matrix $(\Lambda_1-\Lambda_2)^{-1}\Phi\{(\Lambda_1-\Lambda_2)^{-1}\}^T$ as $n$ goes to infinity. In the following, we prove (A.11) in steps.

Step 1. Denote by $\hat G(u;\rho,\kappa^*)$ the left-hand side of the local estimating equations defined in the proof of Theorem 2. Similarly, denote by $\hat G(\rho,k)$ the left-hand side of the global estimating equations below, which are obtained from the first-order optimality condition of maximizing the global likelihood function (2.5) defined in Section 2.2 of this thesis:
\[
n^{-1}\sigma^{-1}\sum_{i=1}^nZ_i\left[\delta_i\,
\frac{f\big(\tfrac{\log R_i-\mu_i}{\sigma}\big)-f\big(\tfrac{\log L_i-\mu_i}{\sigma}\big)}{F\big(\tfrac{\log R_i-\mu_i}{\sigma}\big)-F\big(\tfrac{\log L_i-\mu_i}{\sigma}\big)}
+(\delta_i-1)\,
\frac{f\big(\tfrac{\log L_i-\mu_i}{\sigma}\big)}{\exp(-\tilde\mu_i)+1-F\big(\tfrac{\log L_i-\mu_i}{\sigma}\big)}
\right]=0,
\]
\[
n^{-1}\sum_{i=1}^nN_i\left[\frac{\delta_i}{1+\exp(\tilde\mu_i)}
+(\delta_i-1)\,
\frac{F\big(\tfrac{\log L_i-\mu_i}{\sigma}\big)}{[1+\exp(\tilde\mu_i)]\big[\exp(-\tilde\mu_i)+1-F\big(\tfrac{\log L_i-\mu_i}{\sigma}\big)\big]}
\right]=0,
\]
\[
n^{-1}\sigma^{-2}\sum_{i=1}^n\left[\delta_i\,
\frac{f\big(\tfrac{\log R_i-\mu_i}{\sigma}\big)(\log R_i-\mu_i)-f\big(\tfrac{\log L_i-\mu_i}{\sigma}\big)(\log L_i-\mu_i)}{F\big(\tfrac{\log R_i-\mu_i}{\sigma}\big)-F\big(\tfrac{\log L_i-\mu_i}{\sigma}\big)}
+(\delta_i-1)\,
\frac{f\big(\tfrac{\log L_i-\mu_i}{\sigma}\big)(\log L_i-\mu_i)}{\exp(-\tilde\mu_i)+1-F\big(\tfrac{\log L_i-\mu_i}{\sigma}\big)}
\right]=0.
\]
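The three global estimating equations above are the score equations of the observed-data log-likelihood of the two-part model: with non-cure probability $\pi_i=e^{\tilde\mu_i}/(1+e^{\tilde\mu_i})$, a subject with $\delta_i=1$ contributes $\pi_i\{F(\cdot_R)-F(\cdot_L)\}$ and a right-censored subject contributes $1-\pi_iF(\cdot_L)$, and differentiating the log of these terms produces the $\exp(-\tilde\mu_i)+1-F$ denominators. A hedged one-subject sketch (the normal choice of $F$ and all function names are illustrative assumptions; the thesis also considers logistic and Gumbel $F$):

```python
from math import erf, exp, log, sqrt

def F(x):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def loglik_i(delta, logL, logR, mu, mu_tilde, sigma):
    """Log-likelihood contribution of one subject under interval censoring
    with a cured proportion: delta = 1 means the event is bracketed in
    (L, R]; delta = 0 means right-censored at L (possibly cured)."""
    pi = 1.0 / (1.0 + exp(-mu_tilde))      # non-cure probability
    FL = F((logL - mu) / sigma)
    if delta == 1:
        FR = F((logR - mu) / sigma)
        return log(pi * (FR - FL))         # event observed in the interval
    return log(1.0 - pi * FL)              # survived past L, or cured
```

Differentiating this contribution with respect to $\mu$, $\tilde\mu$ and $\sigma$ recovers, term by term, the bracketed expressions in the three displayed equations.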
Then by the law of large numbers and the lemmas, together with the property of the MLE, we have
\[
D_\rho\{\hat G(\rho,k)\}\big|_{\rho=\rho_0,k=k_0}
=E\!\left[\frac{\partial^2 l(\varrho_0)}{(\partial\rho_0)(\partial\rho_0)^T}\right]+o_p(1)
=-E\!\left[\Big(\frac{\partial l(\varrho_0)}{\partial\rho_0}\Big)\Big(\frac{\partial l(\varrho_0)}{\partial\rho_0}\Big)^T\right]+o_p(1)
=\Lambda_1+o_p(1),\tag{A.12}
\]
and
\[
D_k\{\hat G(\rho,k)\}\big|_{\rho=\rho_0,k=k_0}
=\frac{1}{n}\sum_{i=1}^n\Lambda_{21}^i
=E\!\left[\frac{\partial^2 l(\varrho_0)}{(\partial\rho_0)(\partial k_0)^T}\right]+o_p(1)
=E\{\tilde\Lambda_{21}\}+o_p(1).\tag{A.13}
\]

Step 2. Denote by $\hat G(u;\rho,\kappa^*)$ the left-hand side of the local estimating equations as defined in the proof of Theorem 2; the local estimating equations are obtained from the first-order optimality condition of maximizing the combined local likelihood function (2.4) defined in Section 2.2 of this thesis. Let $\hat\kappa^*$ be the solution of the local estimating equations at convergence. Then $\hat G(u;\hat\rho,\hat\kappa^*)=0$ for any $u$, where $\hat\rho$ is the solution of the global estimating equations $\hat G(u;\rho,k)$ at convergence. By Taylor's expansion, we have
\[
\hat G(u;\hat\rho,\hat\kappa^*)
=\hat G(u;\rho_0,\kappa_0^*)+D_\rho\big(\hat G(u;\rho_0,\kappa^*)\big)(\hat\rho-\rho_0)
+D_{\kappa^*}\big(\hat G(u;\rho_0,\kappa^*)\big)(\hat\kappa^*-\kappa_0^*)+o_p(n^{-1/2}).\tag{A.14}
\]
By the law of large numbers and Lemma 3, we have
\[
D_\rho\big(\hat G(u;\rho_0,\kappa^*)\big)
=p(u)\int_{-\infty}^{\infty}J_2\,E\!\left[\frac{\partial^2 l(\varrho_0)}{(\partial k_0)(\partial\rho_0)^T}\,\Big|\,U=u\right]K(w)\,dw+o_p(1)
=p(u)E\{\Phi_1|U=u\}+o_p(1).
\]
Recall (A.6):
\[
D_{\kappa^*}\big(\hat G(u;\rho_0,\kappa^*)\big)=p(u)S_0(u;\rho_0,k_0)+o_p(1).
\]
Combining all the equations above, we have
\[
p(u)S_0(u;\rho_0,k_0)(\hat\kappa^*-\kappa_0^*)
=-\hat G(u;\rho_0,\kappa^*)-D_\rho\big(\hat G(u;\rho_0,\kappa^*)\big)(\hat\rho-\rho_0)+o_p(n^{-1/2}),\tag{A.15}
\]
and hence
\[
\hat\kappa^*-\kappa_0^*
=-p^{-1}(u)S_0^{-1}(u;\rho_0,k_0)\hat G(u;\rho_0,\kappa^*)
-p^{-1}(u)S_0^{-1}(u;\rho_0,k_0)D_\rho\big(\hat G(u;\rho_0,\kappa^*)\big)(\hat\rho-\rho_0)+o_p(n^{-1/2}).
\]
Therefore,
\[
\hat k-k_0=-\Phi_2-\Phi_3(\hat\rho-\rho_0)+o_p(n^{-1/2}),\tag{A.16}
\]
where $\Phi_2,\Phi_3$ are the corresponding blocks of the corresponding matrices, i.e.
\[
\Phi_2=p^{-1}(u)QS_0^{-1}(u;\rho_0,k_0)\hat G(u;\rho_0,\kappa^*),\qquad
\Phi_3=p^{-1}(u)QS_0^{-1}(u;\rho_0,k_0)D_\rho\big(\hat G(u;\rho_0,\kappa^*)\big),
\]
with
\[
Q=\begin{pmatrix}(1\;\;0)\otimes I_p&\\&(1\;\;0)\otimes I_r\end{pmatrix}.
\]
Note that
\[
\Phi_2(u)=p^{-1}(u)QS_0^{-1}(u;\rho_0,k)\left\{p(u)\begin{pmatrix}\tilde d_1\\ \tilde d_2\end{pmatrix}+o_p(1)\right\}
=QS_0^{-1}(u;\rho_0,k)\begin{pmatrix}\tilde d_1\\ \tilde d_2\end{pmatrix}+o_p(1)
=\tilde\Phi+o_p(1),
\]
\[
\Phi_3(u)=p^{-1}(u)QS_0^{-1}(u;\rho_0,k)\big\{p(u)E\{\Phi_1|U=u\}+o_p(1)\big\}
=E\{\tilde\Phi|U=u\}+o_p(1).
\]
Let us rewrite $\Phi_2(u)$ as follows:
\[
\Phi_2(u)=\frac{1}{p(u)n}\sum_{i=1}^nK_{h_1,h_2}(U_i-u)\hat\Phi_{i2},
\qquad\text{where }
K_{h_1,h_2}(U_i-u)=\begin{pmatrix}K_{h_1}(U_i-u)I_{2p}&\\&K_{h_2}(U_i-u)I_{2r}\end{pmatrix}.
\]

Step 3. $\hat\rho$ solves the global estimating equations $\hat G(\hat\rho,\hat k)=0$. Therefore, by Taylor's expansion (using that $nh_1^4\to 0$ and $nh_2^4\to 0$), we have
\[
0=\hat G(\hat\rho,\hat k)
=\hat G(\rho_0,k_0)+D_\rho\big(\hat G(u;\rho_0,\kappa^*)\big)(\hat\rho-\rho_0)
+\frac{1}{n}\sum_{i=1}^n\Lambda_{21}^i\big(\hat k(U_i)-k_0(U_i)\big)+o_p(n^{-1/2}),\tag{A.17}
\]
where
\[
D_\rho\{\hat G(\rho,k)\}\big|_{\rho=\rho_0,k=k_0}=\Lambda_1+o_p(1),\qquad
D_{(\alpha^T,\gamma^T)^T}\{\hat G(\rho,\alpha,\gamma)\}\big|_{\rho=\rho_0,k=k_0}
=\frac{1}{n}\sum_{i=1}^n\Lambda_{21}^i=E\{\tilde\Lambda_{21}\}+o_p(1).
\]

Step 4. We now plug the representation of $\hat k-k_0$ given in (A.16) into (A.17), and obtain
\[
(\Lambda_1-\Lambda_2)(\hat\rho-\rho_0)=-\hat G(\rho_0,k_0)-\Phi_4+o_p(n^{-1/2}),
\]
where $\Lambda_2=E\{\tilde\Lambda_{21}E(\tilde\Phi|U)\}$ and
\[
\Phi_4=\frac{1}{n}\sum_{i=1}^n\Lambda_{21}^i\Phi_2
=\frac{1}{n}\sum_{i=1}^n\Lambda_{21}^i\,\frac{1}{p(U_i)n}\sum_{j=1}^nK_{h_1,h_2}(U_j-U_i)\hat\Phi_{j2}.
\]
Then, proceeding in the same manner as before, let $\hat G(\rho_0,k_0)=\frac{1}{n}\sum_{i=1}^nc_i$. By arguments similar to those in the corresponding proof of Carroll et al. (1997), with a trick of interchanging summations, we have
\[
\Phi_4=\frac{1}{n}\sum_{i=1}^nE\{\tilde\Lambda_{21}|U=U_i\}\Phi_{i2}+o_p(n^{-1/2})=\hat\Phi_4+o_p(n^{-1/2}).
\]
Therefore,
\[
(\Lambda_1-\Lambda_2)(\hat\rho-\rho_0)=-\hat G(\rho_0,k_0)-\hat\Phi_4+o_p(n^{-1/2}).\tag{A.18}
\]
One may verify that
\[
E\big(\hat G(\rho_0,k_0)+\hat\Phi_4\big)=0,\qquad
\operatorname{Var}\big(\hat G(\rho_0,k_0)+\hat\Phi_4\big)
=\frac{1}{n}E\big(c-E\{\tilde\Lambda_{21}|U\}\Phi_2\big)^{\otimes 2}+o_p(n^{-1})
=\frac{1}{n}\Phi+o_p(n^{-1}),
\]
where $c$ is $c_i$ without the specification of $i$, and $\Phi_2$ is $\Phi_{i2}$ without the specification of $i$. Using the Cramér–Wold device and the Lyapunov central limit theorem, we can prove the asymptotic normality of $\hat G(\rho_0,k_0)+\hat\Phi_4$. Therefore,
\[
\sqrt n\,(\hat\rho-\rho_0)\xrightarrow{d}N\big[0,\,(\Lambda_1-\Lambda_2)^{-1}\Phi\{(\Lambda_1-\Lambda_2)^{-1}\}^T\big].
\]
Proof of Theorem 3. Theorem 3 can be derived similarly to Theorem 1, using a technique similar to that described in Jin et al. (2001). The proof is as follows.

We use the symbol "$\sim$" to denote the estimates, estimating equations, etc. corresponding to the resampling scheme described in this thesis; for example, $\tilde\rho=(\tilde\beta^T,\tilde\theta^T,\tilde\sigma)^T$. Therefore, based on arguments similar to the proofs of Theorems 1 and 2, we have that
\[
(\tilde\Lambda_1-\tilde\Lambda_2)(\tilde\rho-\hat\rho)=-\tilde G(\hat\rho,\hat k)-\tilde\Phi_4+o_p(n^{-1/2}),\tag{A.19}
\]
where $\tilde\Phi_4=\frac{1}{n}\sum_{i=1}^nE\{\tilde{\tilde\Lambda}_{21}|U=U_i\}\tilde\Phi_{i2}$.

In fact, $\Lambda_1$, $\Lambda_2$ and $\tilde\Lambda_{21}$ are the same as $\tilde\Lambda_1$, $\tilde\Lambda_2$ and $\tilde{\tilde\Lambda}_{21}$, respectively. The reason is that the $\xi_i$'s are i.i.d. exponential random variables with mean 1 and variance 1; therefore, for example, $E(\hat G)=E(\tilde G)$, and we then have $\tilde\Lambda_1=\Lambda_1$ and $\tilde\Lambda_2=\Lambda_2$.

Now it suffices to show that, for every realization of $\{\xi_i,\,i=1,\dots,n\}$, the conditional distribution of $\tilde G(\hat\rho,\hat k)+\tilde\Phi_4$ converges to the same limiting distribution as that of $\hat G(\rho_0,k_0)+\hat\Phi_4$.

We follow the same argument as in the proof of Proposition A3 of Jin et al. (2001), and assume that the assumption of Proposition A1 in Jin et al. (2001) is satisfied. Since $\hat\rho$, $\hat\alpha$, $\hat\gamma$ are minimizers, it follows that $\|\hat G(\hat\rho,\hat k)+\hat\Phi_4\|=o(n^{-1})$ almost surely in the space of $\{\xi_i,\,i=1,\dots,n\}$. Thus, up to an almost surely negligible term,
\[
\tilde G(\hat\rho,\hat k)+\tilde\Phi_4
=\big(\tilde G(\hat\rho,\hat k)+\tilde\Phi_4\big)-\big(\hat G(\hat\rho,\hat k)+\hat\Phi_4\big).\tag{A.20}
\]
By the strong law of large numbers for U-processes (Arcones and Giné (1993), Theorem 3.1) and Proposition A1 in Jin et al. (2001), the mean of (A.20) converges almost surely to 0, and the covariance matrix of (A.20) converges almost surely to $\frac{1}{n}\Phi+o_p(n^{-1})$. Using the Cramér–Wold device and the Lyapunov central limit theorem, we can prove the asymptotic normality of $\tilde G(\hat\rho,\hat k)+\tilde\Phi_4$. Therefore, with probability 1,
\[
\sqrt n\,(\tilde\rho-\hat\rho)\xrightarrow{d}N\big[0,\,(\Lambda_1-\Lambda_2)^{-1}\Phi\{(\Lambda_1-\Lambda_2)^{-1}\}^T\big],
\]
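Theorem 3 justifies a perturbation-resampling variance estimator: re-solving the weighted estimating equations with i.i.d. Exp(1) multipliers $\xi_i$ reproduces, conditionally on the data, the limiting law of the original estimator. A hedged sketch of that recipe in the spirit of Jin et al. (2001) (the solver interface is an assumption; any consistent weighted solver could be plugged in):

```python
import numpy as np

rng = np.random.default_rng(0)

def resampling_variance(solve_weighted, n, B=200):
    """Perturbation-resampling variance estimate: re-solve the weighted
    estimating equations with i.i.d. Exp(1) multipliers (mean 1,
    variance 1, matching the xi_i of the proof) and take the empirical
    covariance of the perturbed estimates over B replicates."""
    draws = np.array([solve_weighted(rng.exponential(1.0, size=n))
                      for _ in range(B)])
    return np.atleast_2d(np.cov(draws, rowvar=False))
```

As a toy check, taking `solve_weighted` to be a weighted sample mean makes the resampling variance track the usual $s^2/n$; in the thesis setting the solver would instead re-maximize the $\xi_i$-weighted likelihood.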
i.e., with probability 1, the conditional distribution of $\sqrt n\,(\tilde\rho-\hat\rho)$ given the observed data converges to the asymptotic distribution of $\sqrt n\,(\hat\rho-\rho_0)$.

[…] my NUS Graduate Research Scholarship. Finally, I thank my family for their love and support.

SUMMARY

Varying-coefficient models make up an increasing portion of statistical research and are now applied to censored data analysis in medical studies. This research incorporates such flexible semiparametric regression tools for interval censored data with a cured proportion. A two-part model is adopted to […] nonproportional hazards. This approach included a data augmentation method which avoided complicated posterior distributions and made it more tractable in the Bayesian framework. Cancho et al. (2011) employed a Bayesian analysis using MCMC methods for right-censored survival data suitable for populations with a cure rate. The authors modeled the cure rate under the negative binomial distribution as a special case […] general hazard model which generalized a large number of families of cure rate models; the estimation procedure was based on the maximum likelihood estimation method. Bayesian ideas can also be applied to analyze interval-censored data: HDSD was analyzed using Bayesian models in Thompson and Chhikara (2003), and Pennel et al. (2009) used Markov chain Monte Carlo (MCMC) to fit TR models with random effects and nonproportional […] interval censored data. In general, the theoretical justification for interval censored data analysis may be difficult because there is a lack of basic tools as simple and elegant as the partial likelihood theory and the martingale theory for right censored data. However, for likelihood-based estimation under a known class of distributions, we can still obtain $\sqrt n$-consistency and asymptotic normality. For […]
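The Summary above describes the data structure the thesis targets: each subject is either cured (never fails) or has a failure time observed only up to an inspection interval. To make that structure concrete, here is a hedged toy generator (the cure fraction, normal log-times and uniform visit process are my illustrative choices, not the thesis' simulation settings):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n, cure_prob=0.3, n_visits=4, tmax=3.0):
    """Toy mixed-case interval-censored data with a cured proportion:
    cured subjects never fail; non-cured log-failure-times are normal;
    each subject is inspected at random visit times, so the failure time
    is only known to lie between adjacent visits (delta = 1), or to
    exceed the last visit before dropout (delta = 0)."""
    cured = rng.random(n) < cure_prob
    T = np.exp(rng.normal(0.0, 0.5, size=n))   # latent failure times
    T[cured] = np.inf                          # cured: no event, ever
    L = np.zeros(n)
    R = np.full(n, np.inf)
    delta = np.zeros(n, dtype=int)
    for i in range(n):
        visits = np.sort(rng.uniform(0.0, tmax, size=n_visits))
        below = visits[visits < T[i]]
        above = visits[visits >= T[i]]
        L[i] = below[-1] if below.size else 0.0   # last visit before failure
        if above.size:                            # failure bracketed in (L, R]
            R[i] = above[0]
            delta[i] = 1
    return L, R, delta
```

Note that cured subjects always end up right-censored ($\delta_i=0$), which is exactly why the cure indicator is latent and must be modelled through the two-part likelihood.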
[…] estimators. Also relevant to censored data are the following close-knit publications. Fan et al. (1997) considered the proportional hazards regression model with a nonparametric risk effect; maximum local likelihood estimation for parametric baseline hazard functions and maximum local partial likelihood estimation for nonparametric baseline hazard functions were studied. Cai et al. (2008) and Lu and Zhang […] procedures for interval censoring data, only $n^{1/3}$-consistency can be achieved. For more details, one may consult Sun (2006). Because of this benefit, parametric models, such as accelerated failure time models, are usually preferred in lieu of the Cox model for interval censored data when the distribution is suitable. In Zhang and Sun (2010a), clustered interval-censored failure time data with informative cluster […] demonstrates the two-part model with varying-coefficients, applications to interval censored data, and the estimation method. Chapter 3 is the inference chapter, where the asymptotic theorems are provided; a particular resampling method is applied to estimate the asymptotic variance, and a modified BIC version of the model selection method is also introduced. Simulations and real data analysis are provided in Chapter 4 and […] used AIC, the likelihood ratio test and the score test. The author's study showed that AIC was informative and that both the likelihood ratio test and the score test had adequate power for model selection when the sample size was large. Well-developed methods for right-censored data may be extended to analyze interval-censored data: in Chen and Sun (2010), the authors employed an additive hazards model.
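The local (partial) likelihood idea running through these papers rests on the local linear approximation $\alpha(U_i)\approx a+b\,(U_i-u)$ near a target point $u$, which turns a varying-coefficient term $X_i^T\alpha(U_i)$ into an ordinary linear predictor over an augmented design. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def local_linear_design(X, U, u):
    """Augment the design for a local linear fit at u:
    X_i^T alpha(U_i) ~ X_i^T a + (U_i - u) X_i^T b, i.e. the row
    [X_i, (U_i - u) X_i] against the stacked coefficients (a, b)."""
    X = np.asarray(X, dtype=float)
    U = np.asarray(U, dtype=float)
    return np.hstack([X, (U - u)[:, None] * X])
```

Fitting this augmented design with kernel weights centred at $u$, and sweeping $u$ over a grid, traces out the estimated coefficient curves.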
[…] survival time for studying interval-censored failure time data. A multiple imputation approach was used for inference: the algorithm imputes censoring times by sampling from the current estimate of the conditional distribution of the error, which converts interval-censored data to right-censored data. The authors then used the ready estimation method for right-censored data for inference. This approach can […] size were analyzed using regression. Two methods were proposed. One was a weighted estimating equation-based procedure; another was a within-cluster resampling procedure, which samples a single subject from each cluster and transforms the data to the usual univariate failure time data, which can then be analyzed with a generalized linear model. Since the observations in the resampled data are independent, […]
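The multiple-imputation idea described above can be sketched as follows. This is a hedged illustration of the generic step only (the truncated-normal draw and the bisection quantile inversion are my choices, not the cited authors' exact algorithm): each interval-censored event time is replaced by a draw from the current fitted error distribution truncated to its interval, after which standard right-censored-data methods apply.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

def _Phi(x):
    # standard normal cdf
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def impute_event_times(L, R, delta, mu, sigma):
    """One imputation pass: subjects with an event bracketed in (L, R]
    (delta = 1) get an exact time drawn from the fitted log-normal error
    model truncated to their interval (inverse-cdf sampling, with the
    normal quantile obtained by bisection to stay dependency-free);
    right-censored subjects (delta = 0) keep their censoring time L."""
    T = np.where(delta == 0, L, 0.0).astype(float)
    for i in np.flatnonzero(delta == 1):
        a = _Phi((np.log(L[i]) - mu[i]) / sigma) if L[i] > 0 else 0.0
        b = _Phi((np.log(R[i]) - mu[i]) / sigma)
        p = rng.uniform(a, b)                    # cdf value inside the interval
        lo, hi = -10.0, 10.0
        for _ in range(80):                      # bisection for Phi^{-1}(p)
            mid = 0.5 * (lo + hi)
            if _Phi(mid) < p:
                lo = mid
            else:
                hi = mid
        T[i] = np.exp(mu[i] + sigma * 0.5 * (lo + hi))
    return T
```

Iterating this step with refitting, and pooling across several imputed datasets, is what turns the interval-censored problem into repeated right-censored analyses.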

Ngày đăng: 10/09/2015, 09:32

TỪ KHÓA LIÊN QUAN

TÀI LIỆU CÙNG NGƯỜI DÙNG

TÀI LIỆU LIÊN QUAN

w