On Computational Techniques for Bayesian Empirical Likelihood and Empirical Likelihood Based Bayesian Model Selection

Document information
Pages: 180
File size: 1.66 MB

Contents

ON COMPUTATIONAL TECHNIQUES FOR BAYESIAN EMPIRICAL LIKELIHOOD AND EMPIRICAL LIKELIHOOD BASED BAYESIAN MODEL SELECTION

YIN TENG (B.Sc., Wuhan University)

A THESIS SUBMITTED FOR THE DEGREE OF DOCTOR OF PHILOSOPHY
DEPARTMENT OF STATISTICS AND APPLIED PROBABILITY
NATIONAL UNIVERSITY OF SINGAPORE
2014

DECLARATION

I hereby declare that this thesis is my original work and that it has been written by me in its entirety. I have duly acknowledged all the sources of information which have been used in the thesis. This thesis has also not been submitted for any degree in any university previously.

Yin Teng, 14th Aug 2014

Thesis Supervisor

Sanjay Chaudhuri, Associate Professor, Department of Statistics and Applied Probability, National University of Singapore, Singapore 117546, Singapore.

ACKNOWLEDGEMENTS

I owe a lot to Professor Sanjay Chaudhuri. I am truly grateful to have him as my supervisor; this thesis would not have been possible without him. He is truly a great mentor. I would like to thank him for his guidance, time, encouragement, patience and, most importantly, his enlightening ideas and valuable advice. What I learned from him will benefit me for my whole life. I am thankful to Professors Ajay Jasra and David Nott on my prequalifying committee for providing critical insights and suggestions. I am also thankful to Professor Debasish Mondal for his kind help with the second chapter. I would also like to thank Mr. Zhang Rong for kindly providing IT help, the school for the scholarship, and the secretarial staff in the department, especially Ms Su Kyi Win, for all the prompt assistance during my study. Last but not least, I would like to thank all my friends in the department for their company and encouragement. Special appreciation goes to my parents and my boyfriend Lu Fei for their deep love, considerable understanding and continuous support in my life.

Contents

Declaration
Thesis Supervisor
Acknowledgements
Summary
List of Tables
List of Figures

1 Introduction
  1.1 Introduction of Bayesian empirical likelihood
  1.2 Literature review
  1.3 Problems and our studies
    1.3.1 Computational techniques
    1.3.2 Bayesian model selection

2 Hamiltonian Monte Carlo in BayEL Computation
  2.1 Bayesian empirical likelihood and its non-convexity problem
  2.2 Properties of log empirical likelihood
  2.3 Hamiltonian Monte Carlo for Bayesian empirical likelihood
  2.4 The gradient of log empirical likelihood for generalized linear models
  2.5 Illustrative Applications
    2.5.1 Simulation study: Example
    2.5.2 Real data analysis: Job satisfaction survey in US
    2.5.3 Real data analysis: Rat population growth data
  2.6 Discussion

3 A Two-step Metropolis Hastings for BayEL Computation
  3.1 BayEL and maximum conditional empirical likelihood estimate
    3.1.1 Bayesian empirical likelihood
    3.1.2 A maximum conditional empirical likelihood estimator
  3.2 Markov chain Monte Carlo for Bayesian empirical likelihood
    3.2.1 A two-step Metropolis Hastings method for fixed dimensional state space
    3.2.2 A two-step reversible jump method for varying dimensional state space
  3.3 Simulation study
    3.3.1 Linear model example
    3.3.2 Reversible jump Markov chain Monte Carlo
  3.4 Illustrative applications
    3.4.1 Rat population growth data
    3.4.2 Gene expression data
  3.5 Discussion

4 Empirical Likelihood Based Deviance Information Criterion
  4.1 Empirical likelihood based deviance information criterion
    4.1.1 Deviance information criterion
    4.1.2 Empirical likelihood based deviance information criterion
    4.1.3 Properties of BayEL
  4.2 Some properties of ELDIC
  4.3 Some properties of BayEL model complexity
  4.4 An alternative definition of BayEL model complexity
  4.5 Simulation studies and real data analysis
    4.5.1 Priors and pEL_D
    4.5.2 ELDIC for variable selection
    4.5.3 Analysis of gene expression data
  4.6 Discussion

Bibliography

SUMMARY

Empirical likelihood based methods have seen many applications. They inherit the flexibility of non-parametric methods while keeping the interpretability of parametric models. In recent times, many researchers have begun to consider using such methods in the Bayesian paradigm. The posterior derived from the BayEL lacks an analytical form and its support has a complex geometry. Efficient Markov chain Monte Carlo techniques are therefore needed for sampling from the BayEL posterior. In this thesis, two computational techniques are considered. We first consider the Hamiltonian Monte Carlo method, which takes advantage of the gradient of the log Bayesian empirical likelihood posterior to guide the sampler in the non-convex posterior support. Owing to the nature of this gradient, the Hamiltonian Monte Carlo sampler automatically draws samples within the support and rarely jumps out of it. The second method is a two-step Metropolis Hastings method, which is efficient for both fixed and varying dimensional parameter spaces. The proposal in our method is based on the maximum conditional empirical likelihood estimates. Since such estimates usually lie deep in the interior of the support, candidates proposed close to them are more likely to lie in the support. Furthermore, when the sampler jumps to a new model, with the help of this proposal the BayEL posteriors in both models are close, so the move and its inverse both have a good chance of being accepted. Another aspect considered in this thesis is BayEL based Bayesian model selection. We propose an empirical likelihood based deviance information criterion (ELDIC), which has a similar form to the classical deviance information criterion, but the deviance is now defined through the empirical likelihood. The validity of using ELDIC as a criterion for Bayesian model selection is discussed. Illustrative examples are presented to show the advantages of our methods.
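Both samplers described in the summary must evaluate the BayEL posterior, and hence the profile empirical likelihood, at every proposed parameter value. The following is a minimal illustrative sketch of that inner optimisation, not the implementation used in the thesis; the function name log_el, the tolerances, and the convention that the estimating-function values g_i(θ) are supplied as an n × q array are assumptions made here for concreteness.

```python
import numpy as np

def log_el(g, max_iter=50, tol=1e-9):
    """Profile log empirical likelihood ratio, sum_i log(n * w_i), given an
    n x q array g whose rows are the estimating-function values g_i(theta).

    The weights solve max prod_i w_i subject to sum_i w_i = 1 and
    sum_i w_i g_i = 0; by Lagrange duality w_i = 1 / (n (1 + lam' g_i)),
    where lam maximises the concave dual sum_i log(1 + lam' g_i).
    Returns -inf if no feasible multiplier is found (e.g. the origin lies
    outside the convex hull of the g_i, so the constraints are infeasible).
    """
    n, q = g.shape
    lam = np.zeros(q)
    for _ in range(max_iter):
        r = 1.0 + g @ lam                        # must stay strictly positive
        score = (g / r[:, None]).sum(axis=0)     # gradient of the dual in lam
        if np.max(np.abs(score)) < tol:
            break
        gr = g / r[:, None]
        # Newton ascent step: (sum_i g_i g_i' / r_i^2)^{-1} score, with a tiny
        # ridge so the solve does not fail on degenerate g.
        step = np.linalg.solve(gr.T @ gr + 1e-12 * np.eye(q), score)
        t = 1.0
        while np.any(1.0 + g @ (lam + t * step) <= 1e-10):
            t *= 0.5                             # backtrack to keep feasibility
            if t < 1e-12:
                return -np.inf
        lam = lam + t * step
    return -np.sum(np.log(1.0 + g @ lam))        # equals sum_i log(n w_i)
```

For the Hamiltonian Monte Carlo sampler, the extra ingredient is the gradient of the log empirical likelihood with respect to θ. Because the optimal multiplier satisfies the dual stationarity condition, a standard envelope-theorem argument reduces this gradient to -Σ_i (∂g_i/∂θ)ᵀ λ / (1 + λᵀ g_i), evaluated at the optimal λ; this is the quantity the sampler uses to remain inside the non-convex support.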
List of Tables

2.1 Average absolute autocorrelations of β0 and β1 for various lags obtained from HMC and random walk MCMC (RW MCMC) chains. The averages were taken over 100 replications. Starting points in each replication were random.
2.2 Estimates for the North-west, South-west and Pacific regions of the US in the job satisfaction survey.
2.3 Posterior means, standard deviations, 2.5% quantile, median and 97.5% quantile of α0, βc and σ simulated from BayEL by HMC and corresponding results obtained from a fully parametric formulation using a Gaussian likelihood via WinBugs (WB).
3.1 The two-step Metropolis Hastings algorithm.
3.2 The two-step reversible jump algorithm.
3.3 Coverage (%) of two-sided 95% credible intervals for µi (i = 1, ..., 9), µc, σu² and σ. Data are generated from normal and t distributions, the latter with 5 degrees of freedom.
3.4 Posterior model probabilities (PMP) above 1% for two-step reversible jump Markov chain Monte Carlo (RJMCMC). The posterior model probabilities are estimated by empirical frequencies.

Appendix

\[
P\Big(\cdots + o_p(n)\,\mathrm{tr}\big\{(nJ(\hat\theta_n) + J_{0,n})^{-1}\big\} > \varepsilon\Big) \to 0.
\]

Case 2: Prior \(\pi_2(\theta)\). It is equivalent to show that
\[
P\big(p^{EL}_D \ge p - \varepsilon\big) \to 0. \tag{A-26}
\]
By (A-23) we know that \(o_p(n)\,\mathrm{tr}\{(nJ(\hat\theta_n)+J_{0,n})^{-1}\} = o_p(1)\). Therefore, by (A-22),
\[
p^{EL}_D = \mathrm{tr}\big\{(nJ(\hat\theta_n)+J_{0,n})^{-1}\, nJ(\hat\theta_n)\big\} + o_p(1),
\]
so it suffices to show that
\[
P\Big(\mathrm{tr}\big\{(nJ(\hat\theta_n)+J_{0,n})^{-1}\, nJ(\hat\theta_n)\big\} \ge p - \varepsilon\Big) \to 0.
\]
By (A-24), this is equivalent to showing that
\[
P\Big(p - \mathrm{tr}\big\{(nJ(\hat\theta_n)+J_{0,n})^{-1}\, nJ(\hat\theta_n)\big\} \le \varepsilon\Big)
= P\Big(\mathrm{tr}\big\{\big(J(\hat\theta_n)+\tfrac{1}{n}J_{0,n}\big)^{-1}\,\tfrac{1}{n}J_{0,n}\big\} \le \varepsilon\Big) \to 0.
\]
Note that
\[
\mathrm{tr}\Big\{\big(J(\hat\theta_n)+\tfrac{1}{n}J_{0,n}\big)^{-1}\,\tfrac{1}{n}J_{0,n}\Big\}
= \mathrm{tr}\Big\{\big(\tfrac{1}{n}J_{0,n}\big)^{1/2}\big(J(\hat\theta_n)+\tfrac{1}{n}J_{0,n}\big)^{-1}\big(\tfrac{1}{n}J_{0,n}\big)^{1/2}\Big\}
\ge \frac{\mathrm{tr}(J_{0,n})/n}{\lambda_{(1)}\big(J(\hat\theta_n)+\tfrac{1}{n}J_{0,n}\big)}
\ge \frac{\tfrac{1}{n}\lambda_{(1)}(J_{0,n})}{\lambda_{(1)}(J(\hat\theta_n)) + \tfrac{1}{n}\lambda_{(1)}(J_{0,n})}.
\]
Since \(J_{0,n} = O(n)\), \(\tfrac{1}{n}\lambda_{(1)}(J_{0,n}) \to K\), where \(K\) is some bounded constant. Then we get
\[
\frac{\tfrac{1}{n}\lambda_{(1)}(J_{0,n})}{\lambda_{(1)}(J(\hat\theta_n)) + \tfrac{1}{n}\lambda_{(1)}(J_{0,n})}
\;\xrightarrow{p}\; \frac{K}{\lambda_{(1)}(\Lambda(\theta_0))^{-1} + K} > 0.
\]
Therefore
\[
P\Big(\mathrm{tr}\big\{\big(J(\hat\theta_n)+\tfrac{1}{n}J_{0,n}\big)^{-1}\,\tfrac{1}{n}J_{0,n}\big\} \le \varepsilon\Big) \to 0.
\]

Case 3: Prior \(\pi_3(\theta)\).
\[
P\big(p^{EL}_D > \varepsilon\big)
= P\Big(\mathrm{tr}\big\{(nJ(\hat\theta_n)+J_{0,n})^{-1}\, nJ(\hat\theta_n)\big\} + o_p(1) > \varepsilon\Big)
\le P\Big(\lambda_{(1)}\big\{(nJ(\hat\theta_n)+J_{0,n})^{-1}\big\}\,\mathrm{tr}\big(nJ(\hat\theta_n)\big) > \varepsilon\Big)
\le P\Big(n\,\lambda_{(1)}\big(J_{0,n}^{-1}\big)\,\mathrm{tr}\big(J(\hat\theta_n)\big) > \varepsilon\Big)
= P\bigg(\frac{n\,\mathrm{tr}(J(\hat\theta_n))}{\lambda_{(p)}(J_{0,n})} > \varepsilon\bigg).
\]
Under \(\pi_3(\theta)\), since \(J_{0,n} = O_p(n^{1+\alpha})\), \(\lambda_{(p)}(J_{0,n}) = O_p(n^{1+\alpha})\). Thus \(n/\lambda_{(p)}(J_{0,n}) = O_p(n^{-\alpha})\), so
\[
P\bigg(\frac{n\,\mathrm{tr}(J(\hat\theta_n))}{\lambda_{(p)}(J_{0,n})} > \varepsilon\bigg) \to 0.
\]
Therefore \(p^{EL}_D \to 0\).

Bibliography

Aitkin, M. (1997). The calibration of p-values, posterior bayes factors and the aic from the posterior distribution of the likelihood. Statistics and Computing (4), 253–261.
Akaike, H. (1974). A new look at the statistical model identification. Automatic Control, IEEE Transactions on 19 (6), 716–723.
Al-Awadhi, F., M. Hurn, and C. Jennison (2004). Improving the acceptance rate of reversible jump mcmc proposals. Statistics & Probability Letters 69 (2), 189–198.
Bayarri, M. and J. O. Berger (2000). P values for composite null models. Journal of the American Statistical Association 95 (452), 1127–1142.
Berg, A., R. Meyer, and J. Yu (2004). Deviance information criterion for comparing stochastic volatility models. Journal of Business & Economic Statistics 22 (1), 107–120.
Berger, J. O., J. K. Ghosh, and N. Mukhopadhyay (2003). Approximations and consistency of bayes factors as model dimension grows. Journal of Statistical Planning and Inference 112 (1), 241–258.
Berger, J. O. and L. R. Pericchi (1998). Accurate and stable bayesian model selection: the median intrinsic bayes factor. Sankhyā: The Indian Journal of Statistics, Series B, 1–18.
Berger, Y. and O. De La Riva Torres (2012). A unified theory of empirical likelihood ratio confidence intervals for survey data with unequal probabilities and non negligible sampling fractions. Southampton Statistical Sciences Research Institute, http://eprints.soton.ac.uk/337688.
Bergsma, W., M. Croon, L. A. van der Ark, et al. (2012). The empty set and zero likelihood problems in maximum empirical likelihood estimation. Electronic Journal of Statistics 6, 2356–2361.
Besag, J. (2004). Markov chain monte carlo methods for statistical inference.
Birdsall, C. K. and A. B. Langdon (2004). Plasma physics via computer simulation. CRC Press.
Boyd, S. P. and L. Vandenberghe (2004). Convex optimization. Cambridge University Press.
Brooks, S. P., P. Giudici, and G. O. Roberts (2003). Efficient construction of reversible jump markov chain monte carlo proposal distributions. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 65 (1), 3–39.
Casella, G. and E. I. George (1992). Explaining the gibbs sampler. The American Statistician 46 (3), 167–174.
Celeux, G., F. Forbes, C. P. Robert, and D. M. Titterington (2006). Deviance information criteria for missing data models. Bayesian Analysis (4), 651–673.
Chaudhuri, S., M. Drton, and T. S. Richardson (2007). Estimation of a covariance matrix with zeros. Biometrika 94 (1), 199–216.
Chaudhuri, S. and M. Ghosh (2009). Empirical likelihood for small area estimation. National University of Singapore, Technical Report.
Chaudhuri, S. and M. Ghosh (2011). Empirical likelihood for small area estimation. Biometrika 98 (2), 473–480.
Chaudhuri, S., M. S. Handcock, and M. S. Rendall (2008). Generalized linear models incorporating population level information: an empirical-likelihood-based approach. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 70 (2), 311–328.
Chaudhuri, S. and T. Yin (2014). Two step metropolis hastings methods for bayesian empirical likelihood. Unpublished manuscript, Department of Statistics and Applied Probability, National University of Singapore.
Chen, J. and J. Qin (1993). Empirical likelihood estimation for finite populations and the effective usage of auxiliary information. Biometrika 80 (1), 107–116.
Chen, J. and R. Sitter (1999). A pseudo empirical likelihood approach to the effective use of auxiliary information in complex surveys. Statistica Sinica (2), 385–406.
Chen, J., A. Variyath, and B. Abraham (2008). Adjusted empirical likelihood and its properties. Journal of Computational and Graphical Statistics 17 (2), 426–443.
Chen, M.-H., D. K. Dey, P. Müller, D. Sun, and K. Ye (2010). Objective bayesian inference with applications. Frontiers of Statistical Decision Making and Bayesian Analysis, 31–68.
Chen, S. X. and H. Cui (2007). On the second-order properties of empirical likelihood with moment restrictions. Journal of Econometrics 141 (2), 492–516.
Corcoran, S. A. (1998). Bartlett adjustment of empirical discrepancy statistics. Biometrika 85 (4).
Dawid, A. P. (1979). Conditional independence in statistical theory. Journal of the Royal Statistical Society. Series B (Methodological), 1–31.
Del Moral, P., A. Doucet, and A. Jasra (2006). Sequential monte carlo samplers. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 68 (3), 411–436.
Dellaportas, P., J. Forster, and I. Ntzoufras (2002). On bayesian model and variable selection using mcmc. Statistics and Computing 12 (1), 27–36.
DiCiccio, T., P. Hall, and J. Romano (1991). Empirical likelihood is bartlett-correctable. The Annals of Statistics 19 (2), 1053–1061.
Drton, M. and M. D. Perlman (2007). Multiple testing and error control in gaussian graphical model selection. Statistical Science, 430–449.
Earl, D. J. and M. W. Deem (2005). Parallel tempering: Theory, applications, and new perspectives. Physical Chemistry Chemical Physics (23), 3910–3916.
Emerson, S. C., A. B. Owen, et al. (2009). Calibration of the empirical likelihood method for a vector mean. Electronic Journal of Statistics 3, 1161–1192.
Fan, Y. and S. Sisson (2010). Reversible jump mcmc. Handbook of Markov Chain Monte Carlo: Methods and Applications, 67.
Fang, K.-T. and R. Mukerjee (2006). Empirical-type likelihoods allowing posterior credible sets with frequentist validity: Higher-order asymptotics. Biometrika 93 (3), 723–733.
Fowlkes, E. B., A. E. Freeny, and J. M. Landwehr (1988). Evaluating logistic models for large contingency tables. Journal of the American Statistical Association 83 (403), 611–622.
François, O. and G. Laval (2011). Deviance information criteria for model selection in approximate bayesian computation. Statistical Applications in Genetics and Molecular Biology 10 (1).
Gamerman, D. (1997). Sampling from the posterior distribution in generalized linear mixed models. Statistics and Computing (1), 57–68.
Geisser, S. and W. F. Eddy (1979). A predictive approach to model selection. Journal of the American Statistical Association 74 (365), 153–160.
Gelfand, A., S. Hills, A. Racine-Poon, and A. Smith (1990). Illustration of bayesian inference in normal data models using gibbs sampling. Journal of the American Statistical Association 85 (412), 972–985.
Gelfand, A. E. and S. K. Ghosh (1998). Model choice: a minimum posterior predictive loss approach. Biometrika 85 (1), 1–11.
Gelman, A., J. Carlin, H. Stern, and D. Rubin (1995). Bayesian data analysis. Chapman & Hall, London.
Gelman, A., J. B. Carlin, H. S. Stern, and D. B. Rubin (2003). Bayesian data analysis. Chapman & Hall/CRC.
Gelman, A., X.-L. Meng, and H. Stern (1996). Posterior predictive assessment of model fitness via realized discrepancies. Statistica Sinica (4), 733–760.
Gelman, A., C. P. Robert, and J. Rousseau (2013). Inherent difficulties of non-bayesian likelihood-based inference, as revealed by an examination of a recent book by aitkin. Statistics & Risk Modeling with Applications in Finance and Insurance 30 (2), 105–120.
Geman, S. and D. Geman (1984). Stochastic relaxation, gibbs distributions, and the bayesian restoration of images. Pattern Analysis and Machine Intelligence, IEEE Transactions on (6), 721–741.
Geyer, C. J. and E. A. Thompson (1995). Annealing markov chain monte carlo with applications to ancestral inference. Journal of the American Statistical Association 90 (431), 909–920.
Ghosh, M. and K. Natarajan (1999). Small area estimation: a Bayesian perspective. In Multivariate analysis, design of experiments, and survey sampling, pp. 91–102. New York: Marcel Dekker.
Ghosh, M., K. Natarajan, T. Stroud, and B. P. Carlin (1998). Generalized linear models for small-area estimation. Journal of the American Statistical Association 93 (441), 273–282.
Girolami, M. and B. Calderhead (2011). Riemann manifold langevin and hamiltonian monte carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 73 (2), 123–214.
Givens, G. and J. Hoeting (2005). Computational statistics, Volume 483. Wiley-Interscience.
Green, P. (1995). Reversible jump markov chain monte carlo computation and bayesian model determination. Biometrika 82 (4), 711–732.
Green, P. J. and D. I. Hastie (2009). Reversible jump mcmc. Genetics 155 (3), 1391–1403.
Grendár, M. and G. Judge (2009a). Asymptotic equivalence of empirical likelihood and bayesian map. The Annals of Statistics 37 (5A), 2445–2457.
Grendár, M. and G. Judge (2009b). Empty set problem of maximum empirical likelihood methods. Electronic Journal of Statistics 3, 1542–1555.
Haario, H., E. Saksman, and J. Tamminen (1999). Adaptive proposal distribution for random walk metropolis algorithm. Computational Statistics 14 (3), 375–396.
Haario, H., E. Saksman, J. Tamminen, et al. (2001). An adaptive metropolis algorithm. Bernoulli (2), 223–242.
Hartley, H. O. and J. Rao (1968). A new estimation theory for sample surveys. Biometrika 55 (3), 547–557.
Heidelberger, P. and P. Welch (1983). Simulation run length control in the presence of an initial transient. Operations Research 31 (6), 1109–1144.
Hoffman, M. D. and A. Gelman (2014). The no-u-turn sampler: Adaptively setting path lengths in hamiltonian monte carlo. Journal of Machine Learning Research 15, 1351–1381.
Ibrahim, J. G., M.-H. Chen, and D. Sinha (2001). Criterion-based methods for bayesian model assessment. Statistica Sinica 11 (2), 419–444.
Jasra, A., D. A. Stephens, and C. C. Holmes (2007). Population-based reversible jump markov chain monte carlo. Biometrika 94 (4), 787–807.
Kitamura, Y. et al. (1997). Empirical likelihood methods with weakly dependent processes. The Annals of Statistics 25 (5), 2084–2102.
Kolaczyk, E. (1994). Empirical likelihood for generalized linear models. Statist. Sinica (1), 199–218.
Kolaczyk, E. D. (1995). An information criterion for empirical likelihood with general estimating equations. Unpublished manuscript, Department of Statistics, University of Chicago.
Lazar, N. (2003). Bayesian empirical likelihood. Biometrika 90 (2), 319–326.
Leimkuhler, B. and S. Reich (2004). Simulating hamiltonian dynamics, Volume 14. Cambridge University Press.
Li, Y., T. Zeng, and J. Yu (2012). Robust deviance information criterion for latent variable models.
Linde, A. (2005). Dic in variable selection. Statistica Neerlandica 59 (1), 45–56.
Liu, Y., J. Chen, et al. (2010). Adjusted empirical likelihood with high-order precision. The Annals of Statistics 38 (3), 1341–1362.
McCullagh, P. and J. Nelder (1989). Generalized linear models, Volume 37. Chapman & Hall/CRC.
Mengersen, K. L., P. Pudlo, and C. P. Robert (2013). Bayesian computation via empirical likelihood. Proceedings of the National Academy of Sciences 110 (4), 1321–1326.
Monahan, J. and D. Boos (1992). Proper likelihoods for bayesian analysis. Biometrika 79 (2), 271–278.
Mosteller, F. and J. W. Tukey (1977). Data analysis and regression: a second course in statistics. Addison-Wesley Series in Behavioral Science: Quantitative Methods. Reading, Mass.: Addison-Wesley.
Mukerjee, R. et al. (2008). Data-dependent probability matching priors for empirical and related likelihoods. In Pushing the Limits of Contemporary Statistics: Contributions in Honor of Jayanta K. Ghosh, pp. 60–70. Institute of Mathematical Statistics.
Neal, R. (2011). Mcmc using hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 113–162.
Neal, R. M. (1994). An improved acceptance procedure for the hybrid monte carlo algorithm. Journal of Computational Physics 111 (1), 194–203.
Nott, D. J. and R. Kohn (2005). Adaptive sampling for bayesian variable selection. Biometrika 92 (4), 747–763.
Nott, D. J. and D. Leonte (2004). Sampling schemes for bayesian variable selection in generalized linear models. Journal of Computational and Graphical Statistics 13 (2).
Owen, A. (1988). Empirical likelihood ratio confidence intervals for a single functional. Biometrika 75 (2), 237–249.
Owen, A. (1991). Empirical likelihood for linear models. The Annals of Statistics 19 (4), 1725–1747.
Peng, H., A. Schick, et al. (2013). Empirical likelihood approach to goodness of fit testing. Bernoulli 19 (3), 954–981.
Pérez, J. M. and J. O. Berger (2002). Expected-posterior prior distributions for model selection. Biometrika 89 (3), 491–512.
Qin, J. and J. Lawless (1994). Empirical likelihood and general estimating equations. The Annals of Statistics, 300–325.
Qin, J., B. Zhang, and D. H. Leung (2009). Empirical likelihood in missing data problems. Journal of the American Statistical Association 104 (488).
Rao, J. and C. Wu (2010). Bayesian pseudo-empirical-likelihood intervals for complex surveys. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 72 (4), 533–544.
Rao, J. N. (2005). Small area estimation, Volume 331. John Wiley & Sons.
Robert, C., T. Ryden, and D. Titterington (2002). Bayesian inference in hidden markov models through the reversible jump markov chain monte carlo method. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 62 (1), 57–75.
Rockafellar, R. T. (1993). Lagrange multipliers and optimality. SIAM Review 35 (2), 183–238.
Rubin, D. B. (1981). The bayesian bootstrap. The Annals of Statistics, 130–134.
Schennach, S. (2005). Bayesian exponentially tilted empirical likelihood. Biometrika 92 (1), 31–46.
Sexton, J. and D. Weingarten (1992). Hamiltonian evolution for the hybrid monte carlo algorithm. Nuclear Physics B 380 (3), 665–677.
Sisson, S. A. (2005). Transdimensional markov chains: A decade of progress and future perspectives. Journal of the American Statistical Association 100 (471), 1077–1089.
Spiegelhalter, D. J., N. G. Best, B. P. Carlin, and A. Linde (2014). The deviance information criterion: 12 years on. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 76 (3), 485–493.
Spiegelhalter, D. J., N. G. Best, B. P. Carlin, and A. Van Der Linde (2002). Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 64 (4), 583–639.
Thomas, D. R. and G. L. Grunkemeier (1975). Confidence interval estimation of survival probabilities for censored data. Journal of the American Statistical Association 70 (352), 865–871.
Tsao, M. (2013). Extending the empirical likelihood by domain expansion. Canadian Journal of Statistics 41 (2), 257–274.
Tsao, M., F. Wu, et al. (2013). Empirical likelihood on the full parameter space. The Annals of Statistics 41 (4), 2176–2196.
Variyath, A. M., J. Chen, and B. Abraham (2010). Empirical likelihood based variable selection. Journal of Statistical Planning and Inference 140 (4), 971–981.
Wang, Q., J. Rao, et al. (2002). Empirical likelihood-based inference under imputation for missing response data. The Annals of Statistics 30 (3), 896–924.
Wedderburn, R. (1974). Quasi-likelihood functions, generalized linear models, and the Gauss–Newton method. Biometrika 61 (3), 439–447.
Wille, A., P. Zimmermann, E. Vranová, A. Fürholz, O. Laule, S. Bleuler, L. Hennig, A. Prelic, P. von Rohr, L. Thiele, et al. (2004). Sparse graphical gaussian modeling of the isoprenoid gene network in arabidopsis thaliana. Genome Biol (11), R92.
Wright, S. J. (1997). Primal-dual interior-point methods, Volume 54. SIAM.
Wu, C. and J. Rao (2006). Pseudo-empirical likelihood ratio confidence intervals for complex surveys. Canadian Journal of Statistics 34 (3), 359–375.
Yang, Y. and X. He (2012). Bayesian empirical likelihood for quantile regression. The Annals of Statistics 40 (2), 1102–1131.
Yuan, A., J. Xu, and G. Zheng (2009). Some results on empirical likelihood estimation. Technical report.
Zhou, M. (2014). emplik: Empirical likelihood ratio for censored/truncated data. R package version 0.9-9-2.
Zhou, Y., A. M. Johansen, and J. A. Aston (2013). Towards automatic model comparison: an adaptive sequential monte carlo approach. arXiv preprint arXiv:1303.3123.

[...] conditional empirical likelihood estimates. This method is useful in both fixed and varying dimensional parameter spaces. In this thesis, we also consider the BayEL based model selection problem. To the best of our knowledge, the Bayesian selection of moment condition models through empirical likelihood remains an open problem. Therefore an empirical likelihood based deviance information criterion (ELDIC) [...] BayEL based model selection is not sufficiently studied. These problems will be discussed in detail in the next section. Additionally, our proposed methods are introduced as well.

1.3 Problems and our studies

In this thesis, two aspects of BayEL procedures are mainly considered: the computational techniques and Bayesian model selection. For the computational techniques, we first discuss the reasons for the computational difficulty, and then briefly introduce our proposed methods that can overcome such difficulties. For Bayesian model selection, we propose an information criterion based on the Bayesian empirical likelihood, which is presented at the end of this section.

1.3.1 Computational techniques

The empirical likelihood is solved numerically. Thus [...] the analytic form of the posterior, the Bayes factors are not easy to calculate for the BayEL posterior. Another group of methods for model selection is based on the posterior predictive distribution (Geisser and Eddy, 1979). Based on this distribution, many criteria and statistics were proposed for Bayesian model selection (Gelman et al., 1996; Gelfand and Ghosh, 1998; [...] procedures. In Chapter 4, we propose an empirical likelihood based deviance information criterion (ELDIC) for Bayesian model selection and comparison. By analogy, ELDIC is the sum of measures for BayEL model fit and BayEL model complexity. The decision theoretic justification of ELDIC is shown, which requires asymptotic normality of the BayEL posterior and consistency of the posterior mean. We
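The excerpts above describe proposals centred at maximum conditional empirical likelihood estimates, which sit deep inside the posterior support. As a hedged illustration of that idea only (not the thesis's actual two-step algorithm, whose precise blocking of the parameters is not reproduced in this extract), a single Metropolis Hastings update with an independence-type Gaussian proposal centred at such an estimate might look as follows; log_post, theta_hat and the proposal covariance cov are placeholders introduced here.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def mh_step(theta, log_post, theta_hat, cov, rng):
    """One Metropolis Hastings update with an independence-type proposal
    centred at a point estimate theta_hat (e.g. a maximum conditional
    empirical likelihood estimate).  log_post returns log prior + log EL,
    and -inf outside the BayEL posterior support, so infeasible proposals
    are rejected automatically.
    """
    prop = rng.multivariate_normal(theta_hat, cov)
    log_ratio = (log_post(prop) - log_post(theta)
                 + mvn.logpdf(theta, mean=theta_hat, cov=cov)   # q(theta)
                 - mvn.logpdf(prop, mean=theta_hat, cov=cov))   # q(prop)
    if np.log(rng.uniform()) < log_ratio:
        return prop
    return theta
```

Because the proposal density does not depend on the current state, candidates keep landing near the interior point theta_hat, which is the property the two-step construction exploits when moving between models of different dimensions.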
similar form to the classical deviance information criterion, but the definition now is based on empirical likelihood. The rest of this chapter is organised as follows. In section 1.1, we formally construct the empirical likelihood with some known estimating equations as its constraints. We also derive the Bayesian empirical likelihood based posterior through the Bayes theorem. The main results of BayEL techniques [...] approach for small area estimation. Their approach did not depend on parametric assumptions. Both continuous and discrete data could be handled in a unified manner. In particular, they considered one-parameter exponential family models where the linear predictors included random effects. Based on the first and second Bartlett identities, the empirical likelihood was then constructed for both area and unit [...] satisfaction survey in the US described by Fowlkes et al. (1988) and studied by Ghosh et al. (1998). The second one is the famous rat population growth data presented in Gelfand et al. (1990).

2.1 Bayesian empirical likelihood and its non-convexity problem

The construction of the empirical likelihood has been introduced in detail in section 1.1. Here, for convenience, we briefly review it again. [...] minimum ELDIC and DIC in each iteration is boldfaced. Xa,b,c is denoted as Xa, Xb, Xc.
4.3 Forward selection sequence based on ELDIC and DIC.
4.4 Comparison of ELDIC(1), ELDIC(2), DIC(1) and DIC(2) based on the % of time the model selected (1) is the true model (TM); (2) contains the true model with one additional covariate (TM+1); (3) contains the true model with at most [...] model was not correctly specified. In more recent times, the BayEL procedures have been constructed for various models in various problems. Rao and Wu (2010) considered empirical likelihood for complex sampling designs in Bayesian settings. Based on design features such as unequal probabilities of selection and clustering, they proposed a Bayesian pseudo-empirical likelihood. It is in the same form
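For reference, the classical deviance information criterion that ELDIC mirrors (Spiegelhalter et al., 2002) is built from the posterior distribution of the deviance. The empirical likelihood version written below is the analogue suggested by the description above, with the parametric log likelihood replaced by the log empirical likelihood; the thesis's exact definition (for instance, the plug-in estimator used) may differ in detail.
\[
D(\theta) = -2\log f(y\mid\theta),\qquad
\bar D = \mathrm{E}_{\theta\mid y}\!\left[D(\theta)\right],\qquad
p_D = \bar D - D(\bar\theta),\qquad
\mathrm{DIC} = \bar D + p_D,
\]
where \(\bar\theta\) is the posterior mean of \(\theta\). Replacing the parametric likelihood by the empirical likelihood gives, by analogy,
\[
D_{EL}(\theta) = -2\log \mathrm{EL}(\theta),\qquad
p^{EL}_D = \bar D_{EL} - D_{EL}(\bar\theta),\qquad
\mathrm{ELDIC} = \bar D_{EL} + p^{EL}_D,
\]
so that, as in the parametric case, ELDIC trades off a measure of BayEL model fit (\(\bar D_{EL}\)) against a measure of BayEL model complexity (\(p^{EL}_D\)).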
