
Generalized linear models and extensions




DOCUMENT INFORMATION

Basic information

Format
Pages: 789
Size: 27.15 MB
Attached file: 25. Generalized Linear Models and Extensions.rar (26 MB)

Content

Generalized Linear Models and Extensions
Fourth Edition

James W. Hardin
Department of Epidemiology and Biostatistics, University of South Carolina

Joseph M. Hilbe
Statistics, School of Social and Family Dynamics, Arizona State University

A Stata Press Publication
StataCorp LLC
College Station, Texas

Copyright © 2001, 2007, 2012, 2018 by StataCorp LLC. All rights reserved.
First edition 2001
Second edition 2007
Third edition 2012
Fourth edition 2018

Published by Stata Press, 4905 Lakeway Drive, College Station, Texas 77845
Typeset in LaTeX
Printed in the United States of America

Print ISBN-10: 1-59718-225-7
Print ISBN-13: 978-1-59718-225-6
ePub ISBN-10: 1-59718-226-5
ePub ISBN-13: 978-1-59718-226-3
Mobi ISBN-10: 1-59718-227-3
Mobi ISBN-13: 978-1-59718-227-0
Library of Congress Control Number: 2018937959

No part of this book may be reproduced, stored in a retrieval system, or transcribed, in any form or by any means—electronic, mechanical, photocopy, recording, or otherwise—without the prior written permission of StataCorp LLC.

Stata, Stata Press, Mata, and NetCourse are registered trademarks of StataCorp LLC. Stata and Stata Press are registered trademarks with the World Intellectual Property Organization of the United Nations. NetCourseNow is a trademark of StataCorp LLC. LaTeX is a trademark of the American Mathematical Society.

In all editions of this text, this dedication page was written as the final deliverable to the editor—after all the work and all the requested changes and additions had been addressed. In every previous edition, Joe and I co-dedicated this book to our wives and children. This time, I write the dedication alone.

In memory of Joseph M. Hilbe, who passed away after responding to the editor's final change requests, but before the specification of our dedication. Joe was a dear friend and colleague. We worked closely on this new edition, and he was as cheerful and tireless as always. We worked a long time to put this latest edition together, and he would want the reader to know that he is very proud of our collaboration, but even more proud of his family: Cheryl, Heather, Michael, and Mitchell.

Contents

Figures
Tables
Listings
Preface

1 Introduction
1.1 Origins and motivation; 1.2 Notational conventions; 1.3 Applied or theoretical?
1.4 Road map; 1.5 Installing the support materials

I Foundations of Generalized Linear Models

2 GLMs
2.1 Components; 2.2 Assumptions; 2.3 Exponential family; 2.4 Example: Using an offset in a GLM; 2.5 Summary

3 GLM estimation algorithms
3.1 Newton–Raphson (using the observed Hessian); 3.2 Starting values for Newton–Raphson; 3.3 IRLS (using the expected Hessian); 3.4 Starting values for IRLS; 3.5 Goodness of fit; 3.6 Estimated variance matrices; 3.6.1 Hessian; 3.6.2 Outer product of the gradient; 3.6.3 Sandwich; 3.6.4 Modified sandwich; 3.6.5 Unbiased sandwich; 3.6.6 Modified unbiased sandwich; 3.6.7 Weighted sandwich: Newey–West; 3.6.8 Jackknife (Usual jackknife; One-step jackknife; Weighted jackknife; Variable jackknife); 3.6.9 Bootstrap (Usual bootstrap; Grouped bootstrap); 3.7 Estimation algorithms; 3.8 Summary

4 Analysis of fit
4.1 Deviance; 4.2 Diagnostics; 4.2.1 Cook's distance; 4.2.2 Overdispersion; 4.3 Assessing the link function; 4.4 Residual analysis; 4.4.1 Response residuals; 4.4.2 Working residuals; 4.4.3 Pearson residuals; 4.4.4 Partial residuals; 4.4.5 Anscombe residuals; 4.4.6 Deviance residuals; 4.4.7 Adjusted deviance residuals; 4.4.8 Likelihood residuals; 4.4.9 Score residuals; 4.5 Checks for systematic departure from the model; 4.6 Model statistics; 4.6.1 Criterion measures (AIC; BIC); 4.6.2 The interpretation of R² in linear regression (Percentage variance explained; The ratio of variances; A transformation of the likelihood ratio; A transformation of the F test; Squared correlation); 4.6.3 Generalizations of linear regression R² interpretations (Efron's pseudo-R²; McFadden's likelihood-ratio index; Ben-Akiva and Lerman adjusted likelihood-ratio index; McKelvey and Zavoina ratio of variances; Transformation of likelihood ratio; Cragg and Uhler normed measure); 4.6.4 More R² measures (The count R²; The adjusted count R²; Veall and Zimmermann R²; Cameron–Windmeijer R²); 4.7 Marginal effects; 4.7.1 Marginal effects for GLMs; 4.7.2 Discrete change for GLMs

II Continuous Response Models

5 The Gaussian family
5.1 Derivation of the GLM Gaussian family; 5.2 Derivation in terms of the mean; 5.3 IRLS GLM algorithm (nonbinomial); 5.4 ML estimation; 5.5 GLM log-Gaussian models; 5.6 Expected versus observed information matrix; 5.7 Other Gaussian links; 5.8 Example: Relation to OLS; 5.9 Example: Beta-carotene

6 The gamma family
6.1 Derivation of the gamma model; 6.2 Example: Reciprocal link; 6.3 ML estimation; 6.4 Log-gamma models; 6.5 Identity-gamma models; 6.6 Using the gamma model for survival analysis

7 The inverse Gaussian family
7.1 Derivation of the inverse Gaussian model; 7.2 Shape of the distribution; 7.3 The inverse Gaussian algorithm; 7.4 Maximum likelihood algorithm; 7.5 Example: The canonical inverse Gaussian; 7.6 Noncanonical links

8 The power family and link
8.1 Power links; 8.2 Example: Power link; 8.3 The power family

III Binomial Response Models

9 The binomial–logit family
9.1 Derivation of the binomial model; 9.2 Derivation of the Bernoulli model; 9.3 The binomial regression algorithm; 9.4 Example: Logistic regression; 9.4.1 Model producing logistic coefficients: The heart data; 9.4.2 Model producing logistic odds ratios; 9.5 GOF statistics; 9.6 Grouped data; 9.7 Interpretation of parameter estimates

10 The general binomial family
10.1 Noncanonical binomial models; 10.2 Noncanonical binomial links (binary form); 10.3 The probit model; 10.4 The clog-log and log-log models; 10.5 Other links; 10.6 Interpretation of coefficients; 10.6.1 Identity link; 10.6.2 Logit link; 10.6.3 Log link; 10.6.4 Log complement link; 10.6.5 Log-log link; 10.6.6 Complementary log-log link; 10.6.7 Summary
10.7 Generalized binomial regression; 10.8 Beta binomial regression; 10.9 Zero-inflated models

11 The problem of overdispersion
11.1 Overdispersion; 11.2 Scaling of standard errors; 11.3 Williams' procedure; 11.4 Robust standard errors

IV Count Response Models

12 The Poisson family
12.1 Count response regression models; 12.2 Derivation of the Poisson algorithm; 12.3 Poisson regression: Examples; 12.4 Example: Testing overdispersion in the Poisson model; 12.5 Using the Poisson model for survival analysis; 12.6 Using offsets to compare models; 12.7 Interpretation of coefficients

13 The negative binomial family
13.1 Constant overdispersion; 13.2 Variable overdispersion; 13.2.1 Derivation in terms of a Poisson–gamma mixture; 13.2.2 Derivation in terms of the negative binomial probability function; 13.2.3 The canonical link negative binomial parameterization; 13.3 The log-negative binomial parameterization; 13.4 Negative binomial examples; 13.5 The geometric family; 13.6 Interpretation of coefficients

14 Other count-data models
14.1 Count response regression models; 14.2 Zero-truncated models; 14.3 Zero-inflated models; 14.4 General truncated models; 14.5 Hurdle models; 14.6 Negative binomial(P) models; 14.7 Negative binomial(Famoye); 14.8 Negative binomial(Waring); 14.9 Heterogeneous negative binomial models; 14.10 Generalized Poisson regression models; 14.11 Poisson inverse Gaussian models; 14.12 Censored count response models; 14.13 Finite mixture models; 14.14 Quantile regression for count outcomes; 14.15 Heaped data models

V Multinomial Response Models

15 Unordered-response family
15.1 The multinomial logit model; 15.1.1 Interpretation of coefficients: Single binary predictor; 15.1.2 Example: Relation to logistic regression; 15.1.3 Example: Relation to conditional logistic regression; 15.1.4 Example: Extensions with conditional logistic regression; 15.1.5 The independence of irrelevant alternatives; 15.1.6 Example: Assessing the IIA; 15.1.7 Interpreting coefficients; 15.1.8 Example: Medical admissions—introduction; 15.1.9 Example: Medical admissions—summary; 15.2 The multinomial probit model; 15.2.1 Example: A comparison of the models; 15.2.2 Example: Comparing probit and multinomial probit; 15.2.3 Example: Concluding remarks

16 The ordered-response family
16.1 Interpretation of coefficients: Single binary predictor; 16.2 Ordered outcomes for general link; 16.3 Ordered outcomes for specific links; 16.3.1 Ordered logit; 16.3.2 Ordered probit; 16.3.3 Ordered clog-log; 16.3.4 Ordered log-log; 16.3.5 Ordered cauchit; 16.4 Generalized ordered outcome models; 16.5 Example: Synthetic data; 16.6 Example: Automobile data; 16.7 Partial proportional-odds models

…

Author index

… 21.2.2 Neuhaus, J M., 4 , 18.6 Newey, W K., 3.6.7 Newson, R., 3.6 Nyquist, H., 2.3 O Oakes, D., 3.6 O'Hara Hines, R J., 4.4.4 Olkin, I., 19.7 , 19.8 Olmo-Jiménez, M J., 14.6 P Pan, W., 22.1 Papke, L E., 9.6 Parzen, E., 3.6.7 Patterson, H D., 10.1 Piantadosi, S., 3.6.3 Pickles, A., 18.4.1 Pierce, D A., 4.4 Pitblado, J S., 3 , 3.1 , 10.7 , 19.5 Poi, B P., 3 , 3.1 , 10.7 , 19.5 Poisson, S D., 12.1 Pregibon, D., 4.2 , 4.3 , 17.1 Preisser, J S., 14.3 Q Quenouille, M H., 3.6.8 Quinn, B G., 4.6.1 R Rabe-Hesketh, S., 18.4.1 , 18.7 Raftery, A E., 4.6.1 Rasch, G., 10.1 Robert, C P., 19.4 Rodríguez-Avi, J., 14.6 Roeder, K., 4.2.2 Rogers, W., 11.4 Royall, R M., 3.6.3 Ruppert, D., 3.6.4 , 3.6.5 S Sáez-Castillo, A J., 14.6 Sambamoorthi, N., 4.6.1 Santos Silva, J M C., 14.14 Schafer, D W., 4.2.2 , 4.4 Schwarz, G., 4.6.1 Simpson, D G., 3.6.4 , 3.6.5 Sklar, A., 19.2 Skrondal, A., 18.7 Smith, P J., 4.2.2 Snell, E J., 
4.4 Stromberg, A J., 3.6.4 , 3.6.5 Stuart, Alan, 3 Sturdivant, R X., 5.8 Sugiura, N., 4.6.1 Szczotka, F., 9.7 T Tan, W Y., 3.6.3 Tanner, M A., 20.1.1 Taylor, C., 18.4.1 Tibshirani, R., 17.4 Trivedi, P K., 4.7.2 , 12.4 , 14.6 , 14.12 , 14.13 Tsai, C.-L., 4.6.1 Turlach, B A., 6.2 , 13 U Uhler, R S., 4.6.3 V Veall, M R., 4.6.4 von Bortkiewicz, L., 12.1 Vuong, Q H., 14.10 W Wacholder, S., 10.1 , 10.5 Wahrendorf, J., 4.1 Wang, S., 3.6.4 , 3.6.5 Wedderburn, R W M., 2 , 3.3 , 10.1 , 17.2 Welsch, R E., 3.6.5 West, K D., 3.6.7 Wheatley, G A., 10.7 White, H., 3.6.3 , 3.6.5 , 11.4 Williams, D A., 4.2 , 11.3 Williams, R., 16.3.5 , 16.4 , 16.6 Windmeijer, F A G., 4.6.4 , A Wingood, G M., 14.15 Winkelmann, R., 14.6 , 16 , 16.2 Wolfe, R., 16.8 Wong, W H, 20.1.1 Wooldridge, J M., 9.6 Wu, C F J., 3.6.5 , 3.6.8 X Xekalaki, E., 14.6 Xu, X., 19.8 Y Yang, Z., 14.3 , 14.10 Z Zavoina, W., 4.6.3 Zeger, S L., 18.2 , 18.4.2 , 18.6 Zhao, L C., 4.6.1 Zhu, R., 14.10 Zimmermann, K F., 4.6.4 , 14.6 Subject index A AIC, 4.6.1 , 4.6.1 , 7.6 , 8.1 , 8.2 AICc, 4.6.1 AIChq, 4.6.1 algorithm IRLS, 3.7 gamma, 6.1 Gaussian, 5.3 Gaussian (reduced), 5.3 geometric, 13.5 inverse Gaussian, 7.3 log-geometric, 13.5 log-negative binomial, 13.3 negative binomial, 13.2.3 nonbinomial, 5.3 Poisson, 12.2 ML, 3.7 , 5.6 log-inverse Gaussian, 7.6 log-negative binomial, 13.3 lognormal, 5.6 ancillary, 1.4 , 3 , 3.3 , 5.3 , 6 , 11.2 , 13 , 13.4 , 13.5 , 18.6 , 21.4.4 Atkinson’s , 9.7 autocorrelation, 20.1.2 B Bernoulli deviance, 9.2 log likelihood, 9.2 model, 9.2 , 9.2 probability function, 9.2 BHHH, 3.6.2 , 3.6.2 BIC, 4.6.1 , 4.6.1 , 7.6 , 8.1 , 8.2 binary data, 9 binomial data, 4.6.3 deviance, 9.1 log likelihood, 9.1 model, 9.1 , 9.1 binsq command, 17.2 bivcnto, 19.8 bootstrap, 3.6.9 , 3.6.9 , 4.1 Box–Cox transformation, 4.3 , 4.3 brant command, 16.7 C canonical link, 3 canonical parameter, 2.3 , 4.1 censored count response model, 14.12 , 14.15 censored data, 2.1 , 4.6.3 , 6 , 12.5 chi-bar squared, 10.7 classification table, 9.5 clogit command, 18.3.2 cloglog command, 21.4.1 cluster data, 18 , 18.7 conditional logistic regression models, 15.1.3 , 15.1.4 confusion matrix, 9.5 conjugate, 20.1.1 contagion distribution, 13 continuation-ratio model, 16.8 , 16.8 Cook’s distance, 4.2.1 , 4.2.1 copula, 19.2 copula function, 19.1 count-data regression models, 12.1 , 12.1 , 14.1 , 14.1 criterion measure, 4.6.1 , 4.6.1 cutpoint, 16.5 D data binomial, 4.6.3 censored, 2.1 , 4.6.3 , 6 , 12.5 continuous, 6 grouped-response, 9 ordinal, 4.6.3 truncated, 2.1 dataset auto, 16.6 azdrg112, 14.10 , 14.13 cancer, 6.4 , 12.5 claims, 6.2 , 6.3 , 8.2 doll, 13.4 heart, 15.1.2 heart01, 9.4 , 10.3 , 11.2 medpar, 7.6 , 14.9 medparc, 14.12 warsaw, 9.7 , 10.3 , 10.4 deviance, 4.1 , 4.1 diagnostics, 4.2 , 4.2.2 DIC, 20.1.1 discrete change, average, 4.7.2 E effect, marginal, 4.7.1 EIM, 5.6 elasticity, 4.7.2 error sum of squares, 5.3 ESS, 20.2.2 estat command, 4.6 exploding logit, 18.3.2 exponential regression, 6.4 , 6.6 exposure, 12.2 F finite mixture models, 14.13 , 14.13 fitstat command, 4.6 fixed effects, 18.3 , 18.3.2 conditional, 18.3.2 , 18.3.2 unconditional, 18.3.1 , 18.3.1 G GAM, 17.4 , 17.4 gamma, 6 , 6.6 density function, 6.1 deviance, 6.1 distribution, 6 heterogeneity, 13.2.1 model, 6.1 , 6.1 regression, 6.6 , 6.6 Gaussian, 5 , 5.9 IRLS derivation, 5.1 ML derivation, 5.2 , 5.2 probability density function, 5.1 OLS, 5.8 , 5.8 GEE, 11.2 , 18.6 , 18.6 generalized binomial, 10.7 generalized Poisson regression model, 14.10 , 14.10 GENSTAT macro, 13.2.1 geometric 
distribution, 13 family, 13.5 , 13.5 Gibbs sampling, 18.4.2 , 18.4.2 , 20.2.2 gllamm command, 18.7 glm command, 21 , 21.4.5 gnbreg command, 14.9 gologit2 command, 16.3.5 , 16.4 , 16.7 goodness of fit, 9.5 , 9.5 grouped-response data, 9 H hat diagonals, 4.2 hazard rate, 12.5 health ratio, 10.6.4 heapcr command, 14.15 Hessian, 3.6.1 , 3.6.1 heterogeneity, 13.2.1 heterogeneous negative binomial model, 14.9 , 14.9 Hosmer–Lemeshow, 9.5 HPD interval, 20.1 hurdle model, 14.5 , 14.6 hypergeometric function, 4.4.5 I ICD code, 7.6 identity link, 5.1 , 5.5 , 5.7 identity-gamma model, 6.5 , 6.5 IIA, 15.1.5 , 15.1.6 incidence-rate ratio, 12.7 , 13.4 , 13.6 individual data, 18.1 , 18.1 interactions, 6.2 , 6.3 invcloglog() function, 16.3.3 , 16.3.4 inverse Gaussian model, 7 , 7.6 IRLS algorithm, 7.3 , 7.3 log link, 7.6 invlogit() function, 16.3.1 IRLS algorithm, 3.7 , 5.3 , 5.3 J jackknife, 3.6.8 , 3.6.8 , 4.2.1 jittering, 14.14 K -system, 13.2.1 L Lagrange multiplier, 12.4 latent variable models, 18.4.1 leaf blotch data, 17.2 leverage, 4.2 lgamma command, 6.4 likelihood-ratio test, 6.2 link identity, 5.1 , 5.5 log, 5.5 power, 8 , 8.3 reciprocal, 6.1 , 6.2 , 6.2 , 6.4 log link, 5.5 log-gamma model, 6.4 , 6.4 log-Gaussian, 5.5 , 5.5 logistic command, 9.5 logistic regression, 9.4 , 9.4.2 logistic regression models, 15.1.2 , 15.1.2 logit command, 21.4.1 lognormal, 5.5 , 5.5 LOS, 7.6 , 12.1 , 12.3 lrtest command, 2.4 lstand command, 9.7 M marginal effect, 4.7.1 Markov chain, 20.1.1 MCMC, 20.1.1 measurement error, 18.7 Metropolis–Hastings, 20.1.1 MH, 20.1.1 mixed command, 18.7 mixed models, 18.7 mixed-effect models, 18.5 mixture distribution, 13 , 13.1 mlogit command, 18.3.2 model Bayesian regression, 20.1 , 20.9.2 censored count response, 14.12 , 14.15 conditional logistic regression, 15.1.3 , 15.1.4 continuation-ratio, 16.8 , 16.8 count-data regression, 12.1 , 12.1 , 14.1 , 14.1 finite mixture, 14.13 , 14.13 generalized Poisson regression, 14.10 , 14.10 heaped regression, 14.15 , 14.15 heterogeneous negative binomial, 14.9 , 14.9 hurdle, 14.5 , 14.6 identity-gamma, 6.5 , 6.5 inverse Gaussian, 7 , 7.6 latent variable, 18.4.1 log-gamma, 6.4 , 6.4 logistic regression, 15.1.2 , 15.1.2 multinomial logit, 15.1 , 15.1.9 multinomial probit, 15.2 , 15.2.3 negbin(p), 14.6 , 14.6 ordered response, see ordered response Poisson inverse Gaussian, 14.11 , 14.11 quantile regression, 14.14 , 14.14 zero-inflated, 10.9 , 10.9 zero-inflated binomial, 10.9 , 10.9 zero-inflated generalized negative binomial, 14.3 , 14.3 zero-inflated generalized Poisson, 14.3 , 14.3 zero-inflated negative binomial, 14.3 , 14.3 zero-inflated Poisson, 14.3 , 14.3 zero-truncated negative binomial, 14.2 , 14.2 zero-truncated Poisson, 12.2 , 14.2 , 14.2 modified sandwich, 3.6.4 , 3.6.4 modified unbiased sandwich, 3.6.6 , 3.6.6 Monte Carlo, 20.1.1 mprobit command, 15.2.2 MSE, 5.3 multinomial logit models, 15.1 , 15.1.9 multinomial probit models, 15.2 , 15.2.3 N NB-1, 13.1 , 13.2.3 NB-2, 13.2 nbreg command, 13 , 13.1 , 13.2.3 , 21.4.1 negative binomial, 13 , 13.6 canonical link, 13.2.3 , 13.2.3 zero-truncated, 13.4 , 13.4 negative binomial(Famoye), 14.7 negative binomial(P), 14.6 , 14.6 negative binomial(Waring), 14.8 newey command, 21.4.1 Newton–Raphson algorithm, 3.7 normal, 5 , 5.9 normal() function, 16.3.2 normalden() function, 16.3.2 O ocratio command, 16.8 odds ratios, 9.4.2 , 9.4.2 , 9.7 , 9.7 , 10.6.2 offset, 2.4 oglm command, 16.5 OIM, 5.6 , 5.6 , 6.1 ologit command, 16.3.5 , 16.5 OLS, 5.7 , 5.8 , 5.8 OPG, 3.6.2 , 3.6.2 oprobit command, 16.3.5 
ordered response cauchit, 16.3.5 , 16.3.5 clog-log, 16.3.3 , 16.3.3 generalized ordered logit, 16.4 , 16.4 log-log, 16.3.4 , 16.3.4 ordered logit, 16.3.1 , 16.3.1 ordered probit, 16.3.2 , 16.3.2 ordered-response model, 16 , 16.9 ordinal data, 4.6.3 overdispersion, 4.2.2 , 4.2.2 , 12.2 , 12.3 constant, 13.1 , 13.1 test, 4.2.2 , 12.4 variable, 13.2 , 13.2.3 P panel data, 18 , 18.7 parallel-lines assumption, 16.3.1 parameter ancillary, 5.3 canonical, 2.3 , 4.1 partial correlation, 9.7 Plackett–Luce, 18.3.2 Poisson, 12 , 12.2 , 12.7 inverse Gaussian response model, 14.11 , 14.11 zero-inflated, see zero-inflated Poisson models zero-inflated generalized, see zero-inflated generalized Poisson models zero-truncated, see zero-truncated Poisson models zero-truncated generalized, see zero-truncated generalized Poisson models poisson command, 12.2 , 21.4.1 pooled estimator, 18.2 , 18.2 power link, 8 , 8.3 relationships, 8.1 predict command, 6.2 , 21.2.1 , 21.2.2 prior, Cauchy, 20.2.3 probit command, 15.2.2 , 21.4.1 proportional-odds assumption, 16.2 , 16.4 , 16.5 prvalue command, 15.2.3 Q quantile regression for count data, 14.14 , 14.14 quasideviance, 8.3 , 17.2 quasilikelihood, 17.1 , 17.1 qvf command, 18.7 R , 4.6.2 , 4.6.4 McFadden’s measure, 16.5 percentage variance explained, 4.6.2 , 4.6.2 pseudo adjusted count measure, 4.6.4 , 4.6.4 Ben-Akiva and Lerman’s measure, 4.6.3 Cameron and Windmeijer’s measure, 4.6.4 , 4.6.4 count measure, 4.6.4 , 4.6.4 Cragg and Uhler’s measure, 4.6.3 , 4.6.3 Efron’s measure, 4.6.3 , 4.6.3 McFadden’s measure, 4.6.3 , 4.6.3 McKelvey and Zavoina’s measure, 4.6.3 , 4.6.3 transformation of likelihood ratio, 4.6.3 , 4.6.3 Veall and Zimmermann’s measure, 4.6.4 , 4.6.4 ratio of variances, 4.6.2 , 4.6.2 squared correlation, 4.6.2 , 4.6.2 transformation of -test, 4.6.2 , 4.6.2 transformation of likelihood ratio, 4.6.2 , 4.6.2 random effects, 11.2 , 18.4 random utility, 15.2 rcal command, 18.7 reciprocal link, 6.1 , 6.2 , 6.2 , 6.4 regress command, 21.4.1 regression calibration, 18.7 regression for heaped count data, 14.15 , 14.15 rejection sampling, 19.4 relative-risk ratio, 15.1.7 residual, 4.4 , 4.4.9 adjusted, 4.4 adjusted deviance, 4.4.7 , 4.4.7 , 4.4.8 Anscombe, 4.4 , 4.4.5 , 4.4.5 , 4.5 , 6.2 , 6.2 , 10.3 , 10.4 , 13.4 , 21.2.2 , A deviance, 4.4 , 4.4.6 , 4.4.6 , 4.5 likelihood, 4.4.8 , 4.4.8 modified, 4.4 name, 4.4 partial, 4.4.4 , 4.4.4 Pearson, 4.4 , 4.4.3 , 4.4.3 , 4.4.6 , 4.4.8 , 17.2 response, 4.4.1 score, 4.4.9 , 4.4.9 standardized, 4.4 studentized, 4.4 working, 4.4.2 , 4.4.2 risk difference, 10.6.1 risk ratio, 10.6.3 rologit command, 18.3.2 RSS, 5.3 S sandwich, 3.6.3 , 3.6.3 saturated model, 5.2 scaling, 12.3 simex command, 18.7 simulation extrapolation, 18.7 SPost, 16.4 streg command, 6.6 , 21.4.1 T test likelihood-ratio, 6.2 Wald, 6.2 thinning, 20.2.2 tnbreg command, 13.4 tpoisson command, 13.4 trials, Bernoulli, 9 , 13.2 , 13.2.2 , 21.4.3 trncregress command, 14.4 truncated data, 2.1 truncation, 14.4 , 14.4 tweedie command, 17.3 U unbiased sandwich, 3.6.5 , 3.6.5 underdispersion, 14.9 unordered response, 15 , 15.2.3 V variance bootstrap, 3.6.9 , 3.6.9 , 11.2 grouped bootstrap, 3.6.9 , 3.6.9 , 18.2 Hessian, 3.6.1 , 3.6.1 jackknife, 3.6.8 , 3.6.8 , 11.2 modified sandwich, 3.6.4 , 3.6.4 , 18.2 modified unbiased sandwich, 3.6.6 , 3.6.6 Newey–West, 11.2 one-step jackknife, 3.6.8 , 3.6.8 OPG, 3.6.2 , 3.6.2 sandwich, 3.6.3 , 3.6.3 , 11.2 unbiased sandwich, 3.6.5 , 3.6.5 usual bootstrap, 3.6.9 , 3.6.9 usual jackknife, 3.6.8 , 3.6.8 variable jackknife, 3.6.8 , 3.6.8 , 
18.2 weighted jackknife, 3.6.8 , 3.6.8 weighted sandwich, 3.6.7 , 3.6.7 W Wald test, 6.2 weighted sandwich, 3.6.7 , 3.6.7 Anderson, 3.6.7 Gallant, 3.6.7 Newey–West, 3.6.7 Tukey–Hanning, 3.6.7 with-zeros Poisson, 14.3 Z zero-inflated binomial models, 10.9 , 10.9 zero-inflated generalized negative binomial models, 14.3 , 14.3 zero-inflated generalized Poisson models, 14.3 , 14.3 zero-inflated models, 10.9 , 10.9 zero-inflated negative binomial models, 14.3 , 14.3 zero-inflated Poisson models, 14.3 , 14.3 zero-truncated negative binomial models, 14.2 , 14.2 zero-truncated Poisson models, 12.2 , 14.2 , 14.2 ziheapr command, 14.15

…

Finally, part VI is about extensions to GLMs. In particular, we examine the following models: fixed-effects models, random-effects models, quasilikelihood models, GEEs, and generalized additive models. We give the reader a thorough outline or overview of GLMs. …

… We have added several new models to the discussion of extended generalized linear models (GLMs). We have included new software and discussion of extensions to negative binomial regression due to Waring and Famoye.
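For readers who want a feel for fitting the model families listed above, the following is a minimal, hypothetical Stata sketch. It is not taken from the book; the outcome y, predictors x1 and x2, and panel identifier id are placeholder names, and the commands shown (glm, nbreg, xtgee) are standard Stata estimation commands for these model classes.

    * Poisson GLM fit by IRLS, reporting incidence-rate ratios with robust (sandwich) SEs
    glm y x1 x2, family(poisson) link(log) eform vce(robust)

    * Negative binomial (NB-2) regression for an overdispersed count outcome
    nbreg y x1 x2

    * Population-averaged (GEE) Poisson model for clustered or panel counts
    xtset id
    xtgee y x1 x2, family(poisson) link(log) corr(exchangeable)

The choice of family() and link() in glm mirrors the book's organization by response family (Gaussian, gamma, inverse Gaussian, binomial, Poisson, and so on) and link function.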

Date posted: 01/09/2021, 08:17
