
Maximum Likelihood Estimation with Stata




DOCUMENT INFORMATION

Basic information

Format
Number of pages: 376
File size: 1.79 MB
Attachment: 10. Maximum Likelihood.rar (2 MB)

Content

Maximum Likelihood Estimation with Stata
Fourth Edition

WILLIAM GOULD, StataCorp
JEFFREY PITBLADO, StataCorp
BRIAN POI, StataCorp

A Stata Press Publication
StataCorp LP, College Station, Texas

Copyright © 1999, 2003, 2006, 2010 by StataCorp LP. All rights reserved.
First edition 1999. Second edition 2003. Third edition 2006. Fourth edition 2010.
Published by Stata Press, 4905 Lakeway Drive, College Station, Texas 77845.
Typeset in LaTeX 2ε. Printed in the United States of America.

ISBN-10: 1-59718-078-5
ISBN-13: 978-1-59718-078-8
Library of Congress Control Number: 2010935284

No part of this book may be reproduced, stored in a retrieval system, or transcribed, in any form or by any means (electronic, mechanical, photocopy, recording, or otherwise) without the prior written permission of StataCorp LP.

Stata, Mata, NetCourse, and Stata Press are registered trademarks of StataCorp LP. LaTeX 2ε is a trademark of the American Mathematical Society.


Contents

List of tables
List of figures
Preface to the fourth edition
Versions of Stata
Notation and typography

1  Theory and practice
   1.1  The likelihood-maximization problem
   1.2  Likelihood theory
        1.2.1  All results are asymptotic
        1.2.2  Likelihood-ratio tests and Wald tests
        1.2.3  The outer product of gradients variance estimator
        1.2.4  Robust variance estimates
   1.3  The maximization problem
        1.3.1  Numerical root finding
               Newton's method
               The Newton–Raphson algorithm
        1.3.2  Quasi-Newton methods
               The BHHH algorithm
               The DFP and BFGS algorithms
        1.3.3  Numerical maximization
        1.3.4  Numerical derivatives
        1.3.5  Numerical second derivatives
   1.4  Monitoring convergence

2  Introduction to ml
   2.1  The probit model
   2.2  Normal linear regression
   2.3  Robust standard errors
   2.4  Weighted estimation
   2.5  Other features of method-gf0 evaluators
   2.6  Limitations

3  Overview of ml
   3.1  The terminology of ml
   3.2  Equations in ml
   3.3  Likelihood-evaluator methods
   3.4  Tools for the ml programmer
   3.5  Common ml options
        3.5.1  Subsamples
        3.5.2  Weights
        3.5.3  OPG estimates of variance
        3.5.4  Robust estimates of variance
        3.5.5  Survey data
        3.5.6  Constraints
        3.5.7  Choosing among the optimization algorithms
   3.6  Maximizing your own likelihood functions

4  Method lf
   4.1  The linear-form restrictions
   4.2  Examples
        4.2.1  The probit model
        4.2.2  Normal linear regression
        4.2.3  The Weibull model
   4.3  The importance of generating temporary variables as doubles
   4.4  Problems you can safely ignore
   4.5  Nonlinear specifications
   4.6  The advantages of lf in terms of execution speed

5  Methods lf0, lf1, and lf2
   5.1  Comparing these methods
   5.2  Outline of evaluators of methods lf0, lf1, and lf2
        5.2.1  The todo argument
        5.2.2  The b argument
               Using mleval to obtain values from each equation
        5.2.3  The lnfj argument
        5.2.4  Arguments for scores
        5.2.5  The H argument
               Using mlmatsum to define H
        5.2.6  Aside: Stata's scalars
   5.3  Summary of methods lf0, lf1, and lf2
        5.3.1  Method lf0
        5.3.2  Method lf1
        5.3.3  Method lf2
   5.4  Examples
        5.4.1  The probit model
        5.4.2  Normal linear regression
        5.4.3  The Weibull model

6  Methods d0, d1, and d2
   6.1  Comparing these methods
   6.2  Outline of method d0, d1, and d2 evaluators
        6.2.1  The todo argument
        6.2.2  The b argument
        6.2.3  The lnf argument
               Using lnf to indicate that the likelihood cannot be calculated
               Using mlsum to define lnf
        6.2.4  The g argument
               Using mlvecsum to define g
        6.2.5  The H argument
   6.3  Summary of methods d0, d1, and d2
        6.3.1  Method d0
        6.3.2  Method d1
        6.3.3  Method d2
   6.4  Panel-data likelihoods
        6.4.1  Calculating lnf
        6.4.2  Calculating g
        6.4.3  Calculating H
               Using mlmatbysum to help define H
   6.5  Other models that do not meet the linear-form restrictions

7  Debugging likelihood evaluators
   7.1  ml check
   7.2  Using the debug methods
        7.2.1  First derivatives
        7.2.2  Second derivatives
   7.3  ml trace

8  Setting initial values
   8.1  ml search
   8.2  ml plot
   8.3  ml init

9  Interactive maximization
   9.1  The iteration log
   9.2  Pressing the Break key
   9.3  Maximizing difficult likelihood functions

10  Final results
    10.1  Graphing convergence
    10.2  Redisplaying output

11  Mata-based likelihood evaluators
    11.1  Introductory examples
          11.1.1  The probit model
          11.1.2  The Weibull model
    11.2  Evaluator function prototypes
          Method-lf evaluators
          lf-family evaluators
          d-family evaluators
    11.3  Utilities
          Dependent variables
          Obtaining model parameters
          Summing individual or group-level log likelihoods
          Calculating the gradient vector
          Calculating the Hessian
    11.4  Random-effects linear regression
          11.4.1  Calculating lnf
          11.4.2  Calculating g
          11.4.3  Calculating H
          11.4.4  Results at last

12  Writing do-files to maximize likelihoods
    12.1  The structure of a do-file
    12.2  Putting the do-file into production

13  Writing ado-files to maximize likelihoods
    13.1  Writing estimation commands
    13.2  The standard estimation-command outline
    13.3  Outline for estimation commands using ml
    13.4  Using ml in noninteractive mode
    13.5  Advice
          13.5.1  Syntax
          13.5.2  Estimation subsample
          13.5.3  Parsing with help from mlopts
          13.5.4  Weights
          13.5.5  Constant-only model
          13.5.6  Initial values
          13.5.7  Saving results in e()
          13.5.8  Displaying ancillary parameters
          13.5.9  Exponentiated coefficients
          13.5.10 Offsetting linear equations
          13.5.11 Program properties

14  Writing ado-files for survey data analysis
    14.1  Program properties
    14.2  Writing your own predict command

15  Other examples
    15.1  The logit model
    15.2  The probit model
    15.3  Normal linear regression
    15.4  The Weibull model
    15.5  The Cox proportional hazards model
    15.6  The random-effects regression model
    15.7  The seemingly unrelated regression model

A  Syntax of ml
B  Likelihood-evaluator checklists
   B.1  Method lf
   B.2  Method d0
   B.3  Method d1
   B.4  Method d2
   B.5  Method lf0
   B.6  Method lf1
   B.7  Method lf2


Appendix C  Listing of estimation commands
C.7  The seemingly unrelated regression model

(The preview resumes partway through mysureg_d1.ado; only the end of that listing is visible.)

    capture matrix drop `g'
    quietly {
        local e = `p'+1
        forval i = 1/`p' {
            mlvecsum `lnf' `gi' = `g`i'', eq(`i')
            matrix `g' = nullmat(`g'), `gi'
            replace `g`e'' = 0.5*(`g`i''*`g`i'' - `iS'[`i',`i'])
            mlvecsum `lnf' `gi' = `g`e'', eq(`e')
            matrix `gs' = nullmat(`gs'), `gi'
            local ++e
            forval j = `=`i'+1'/`p' {
                replace `g`e'' = `g`i''*`g`j'' - `iS'[`i',`j']
                mlvecsum `lnf' `gi' = `g`e'', eq(`e')
                matrix `gs' = nullmat(`gs'), `gi'
                local ++e
            }
        }
        matrix `g' = `g', `gs'
    }   // quietly
end
end mysureg_d1.ado

begin mysuregc_d2.ado
program mysuregc_d2             // concentrated likelihood for the SUR model
    version 11
    args todo b lnf g H

    local p : word count $ML_y
    tempname W sumw
    // get residuals and build variance matrix
    forval i = 1/`p' {
        tempvar r`i'
        mleval `r`i'' = `b', eq(`i')
        quietly replace `r`i'' = ${ML_y`i'} - `r`i''
        local resids `resids' `r`i''
    }
    quietly matrix accum `W' = `resids' [iw=$ML_w] if $ML_samp, noconstant
    scalar `sumw' = r(N)
    matrix `W' = `W'/`sumw'
    scalar `lnf' = -0.5*`sumw'*(`p'*(ln(2*c(pi))+1)+ln(det(`W')))
    if (`todo'==0 | missing(`lnf')) exit

    // compute gradient
    tempname gi iW iWi
    tempvar scorei
    quietly generate double `scorei' = 0
    capture matrix drop `g'
    matrix `iW' = invsym(`W')
    forval i = 1/`p' {
        matrix `iWi' = `iW'[`i',1...]
        matrix colnames `iWi' = `resids'
        matrix score `scorei' = `iWi', replace
        mlvecsum `lnf' `gi' = `scorei', eq(`i')
        matrix `g' = nullmat(`g'), `gi'
    }
    if (`todo'==1 | missing(`lnf')) exit

    // compute the Hessian, as if we were near the solution
    tempname hij
    local k = colsof(`b')
    matrix `H' = J(`k',`k',0)
    local r 1
    forval i = 1/`p' {
        local c `r'
        mlmatsum `lnf' `hij' = -1*`iW'[`i',`i'], eq(`i')
        matrix `H'[`r',`c'] = `hij'
        local c = `c' + colsof(`hij')
        forval j = `=`i'+1'/`p' {
            mlmatsum `lnf' `hij' = -1*`iW'[`i',`j'], eq(`i',`j')
            matrix `H'[`r',`c'] = `hij'
            matrix `H'[`c',`r'] = `hij''
            local c = `c' + colsof(`hij')
        }
        local r = `r' + rowsof(`hij')
    }
end
end mysuregc_d2.ado

begin mysureg_lf0.ado
program mysureg_lf0
    version 11
    args todo b lnfj

    local p : word count $ML_y
    local k = `p'*(`p'+1)/2 + `p'
    tempname S iS sij isij isi
    tempvar ip
    matrix `S' = J(`p',`p',0)
    quietly {
        // get residuals and build variance matrix
        local e 1
        forval i = 1/`p' {
            tempvar r`i'
            mleval `r`i'' = `b', eq(`i')
            replace `r`i'' = ${ML_y`i'} - `r`i''
            local resids `resids' `r`i''
            mleval `sij' = `b', eq(`=`p'+`e'') scalar
            matrix `S'[`i',`i'] = `sij'
            local ++e
            forval j = `=`i'+1'/`p' {
                mleval `sij' = `b', eq(`=`p'+`e'') scalar
                matrix `S'[`i',`j'] = `sij'
                matrix `S'[`j',`i'] = `sij'
                local ++e
            }
        }
        matrix `iS' = invsym(`S')
        // get score variables
        tempvar scorei
        gen double `scorei' = 0
        gen double `ip' = 0
        forval i = 1/`p' {
            matrix `isi' = `iS'[`i',1...]
            matrix colnames `isi' = `resids'
            matrix score `scorei' = `isi', replace
            replace `ip' = `ip' + `r`i''*`scorei'
        }
        replace `lnfj' = -0.5*(`p'*ln(2*c(pi))+ln(det(`S'))+`ip')
    }   // quietly
end
end mysureg_lf0.ado
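The listings above come from the book's appendix; the short session below is not part of the book. It is a minimal, hedged sketch of how evaluators like these are driven from ml, added for orientation: the dataset is unspecified, and the variable names (y1, y2, x1, x2), equation labels, and starting values are illustrative assumptions. The concentrated d2 evaluator declares only the mean equations because it recovers the error covariance from the residuals at each step, while the lf0 evaluator expects the variance and covariance terms (here s11, s12, s22, in the order the evaluator reads them) as additional constant-only equations.

* Sketch only: y1, y2, x1, x2 are placeholder variables; adjust to your data.

* Concentrated SUR likelihood, method d2: mean equations only.
ml model d2 mysuregc_d2 (eq1: y1 = x1 x2) (eq2: y2 = x1 x2)
ml maximize

* Direct SUR likelihood, method lf0: mean equations plus variance and
* covariance parameters declared as constant-only equations.
ml model lf0 mysureg_lf0 (eq1: y1 = x1 x2) (eq2: y2 = x1 x2) /s11 /s12 /s22
ml init s11:_cons=1 s22:_cons=1    // keep S positive definite at the start
ml maximize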
References

Berndt, E. K., B. H. Hall, R. E. Hall, and J. A. Hausman. 1974. Estimation and inference in nonlinear structural models. Annals of Economic and Social Measurement 3/4: 653–665.
Binder, D. A. 1983. On the variances of asymptotically normal estimators from complex surveys. International Statistical Review 51: 279–292.
Breslow, N. E. 1974. Covariance analysis of censored survival data. Biometrics 30: 89–99.
Broyden, C. G. 1967. Quasi-Newton methods and their application to function minimization. Mathematics of Computation 21: 368–381.
Cleves, M., W. Gould, R. G. Gutierrez, and Y. V. Marchenko. 2010. An Introduction to Survival Analysis Using Stata. 3rd ed. College Station, TX: Stata Press.
Davidon, W. C. 1959. Variable metric method for minimization. Technical Report ANL-5990, Argonne National Laboratory, U.S. Department of Energy, Argonne, IL.
Davidson, R., and J. G. MacKinnon. 1993. Estimation and Inference in Econometrics. New York: Oxford University Press.
Fletcher, R. 1970. A new approach to variable metric algorithms. Computer Journal 13: 317–322.
———. 1987. Practical Methods of Optimization. 2nd ed. New York: Wiley.
Fletcher, R., and M. J. D. Powell. 1963. A rapidly convergent descent method for minimization. Computer Journal 6: 163–168.
Fuller, W. A. 1975. Regression analysis for sample survey. Sankhyā, Series C 37: 117–132.
Gail, M. H., W. Y. Tan, and S. Piantadosi. 1988. Tests for no treatment effect in randomized clinical trials. Biometrika 75: 57–64.
Goldfarb, D. 1970. A family of variable-metric methods derived by variational means. Mathematics of Computation 24: 23–26.
Gould, W. 2001. Statistical software certification. Stata Journal 1: 29–50.
Greene, W. H. 2008. Econometric Analysis. 6th ed. Upper Saddle River, NJ: Prentice Hall.
Grunfeld, Y., and Z. Griliches. 1960. Is aggregation necessarily bad? Review of Economics and Statistics 42: 1–13.
Huber, P. J. 1967. The behavior of maximum likelihood estimates under nonstandard conditions. In Vol. 1 of Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, 221–233. Berkeley: University of California Press.
Kent, J. T. 1982. Robust properties of likelihood ratio tests. Biometrika 69: 19–27.
Kish, L., and M. R. Frankel. 1974. Inference from complex samples. Journal of the Royal Statistical Society, Series B 36: 1–37.
Korn, E. L., and B. I. Graubard. 1990. Simultaneous testing of regression coefficients with complex survey data: Use of Bonferroni t statistics. American Statistician 44: 270–276.
Lin, D. Y., and L. J. Wei. 1989. The robust inference for the Cox proportional hazards model. Journal of the American Statistical Association 84: 1074–1078.
Lütkepohl, H. 1996. Handbook of Matrices. New York: Wiley.
Marquardt, D. W. 1963. An algorithm for least-squares estimation of nonlinear parameters. Journal of the Society for Industrial and Applied Mathematics 11: 431–441.
Nash, J. C. 1990. Compact Numerical Methods for Computers: Linear Algebra and Function Minimization. 2nd ed. New York: Adam Hilger.
Peto, R. 1972. Contribution to the discussion of paper by D. R. Cox. Journal of the Royal Statistical Society, Series B 34: 205–207.
Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. 2007. Numerical Recipes in C: The Art of Scientific Computing. 3rd ed. Cambridge: Cambridge University Press.
Rogers, W. H. 1993. sg17: Regression standard errors in clustered samples. Stata Technical Bulletin 13: 19–23. Reprinted in Stata Technical Bulletin Reprints, vol. 3, pp. 88–94. College Station, TX: Stata Press.
Royall, R. M. 1986. Model robust confidence intervals using maximum likelihood estimators. International Statistical Review 54: 221–226.
Shanno, D. F. 1970. Conditioning of quasi-Newton methods for function minimization. Mathematics of Computation 24: 647–656.
Stuart, A., and J. K. Ord. 1991. Kendall's Advanced Theory of Statistics, Volume 2: Classical Inference. 5th ed. London: Arnold.
Welsh, A. H. 1996. Aspects of Statistical Inference. New York: Wiley.
White, H. 1980. A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica 48: 817–838.
———. 1982. Maximum likelihood estimation of misspecified models. Econometrica 50: 1–25.
Williams, R. L. 2000. A note on robust variance estimation for cluster-correlated data. Biometrics 56: 645–646.
Wooldridge, J. M. 2002. Econometric Analysis of Cross Section and Panel Data. Cambridge, MA: MIT Press.


Author index

B
Berndt, E. K. 18; Binder, D. A. 12; Breslow, N. E. 145; Broyden, C. G. 18
C
Cleves, M. 265
D
Davidon, W. C. 18; Davidson, R.
F
Flannery, B. P. 19; Fletcher, R. 18, 19; Frankel, M. R. 11; Fuller, W. A. 12
G
Gail, M. H. 12; Goldfarb, D. 18; Gould, W. 223, 265; Graubard, B. I. 300; Greene, W. H. 132, 275, 276; Griliches, Z. 275, 277; Grunfeld, Y. 275, 277; Gutierrez, R. G. 265
H
Hall, B. H. 18; Hall, R. E. 18; Hausman, J. A. 18; Huber, P. J. 11
K
Kent, J. T. 12; Kish, L. 11; Korn, E. L. 300
L
Lütkepohl, H. 273; Lin, D. Y. 12
M
MacKinnon, J. G.; Marchenko, Y. V. 265; Marquardt, D. W. 17
N
Nash, J. C. 23
O
Ord, J. K.
P
Peto, R. 145; Piantadosi, S. 12; Powell, M. J. D. 18; Press, W. H. 19
R
Rogers, W. H. 12; Royall, R. M. 12
S
Shanno, D. F. 18; Stuart, A.
T
Tan, W. Y. 12; Teukolsky, S. A. 19
V
Vetterling, W. T. 19
W
Wei, L. J. 12; Welsh, A. H.; White, H. 12; Williams, R. L. 12; Wooldridge, J. M. 12

[Page references for a few entries (Davidson, MacKinnon, Ord, Stuart, Welsh) are not legible in the preview.]


Subject index

A
algorithms: BFGS 18–19, 57; BHHH 18, 57–60; DFP 18–19, 57; Newton's method 13–15; Newton–Raphson 15–17, 57–60
ancillary parameters 240–242
arguments: b 79–82, 111; g 83–84, 116–118, 132–136, 207; H 84–87, 118–119, 136–144, 208–209; lnf 64, 112–114, 128–132, 206–207; lnfj 65, 82–83; todo 79, 111
aweights, see weights, aweights

B
b 46
b argument, see arguments, b
backed-up message, see messages, backed-up
BFGS algorithm, see algorithms, BFGS
BHHH algorithm, see algorithms, BHHH
Break key 182–184
by prefix command 218
byable 218

C
censoring 104, 262
coefficient vector argument, see arguments, b
concentrated log likelihood 276
constraints() option, see options, constraints()
continue option, see options, continue
convergence 25–27, 187–188
Cox regression, see models, Cox regression

D
d0, see methods, d0
d1, see methods, d1
d1debug, see methods, d1debug
d2, see methods, d2
d2debug, see methods, d2debug
derivatives, see numerical derivatives
DFP algorithm, see algorithms, DFP
difficult option, see options, difficult
diparm 242
diparm() option, see options, diparm()
do-files 213–215

E
e(cmd) 240
e(k_aux) 240
e(k_eq) 240
e(predict) 252
empirical variance estimator, see variance estimators, robust
equation names 44, 46, 173–175
equation notation 40–48
estimation commands 217–248, 255–283
evaluators, see methods

F
fweights, see weights, fweights

G
g argument, see arguments, g
gf, see methods, gf
gradient vector 5, 203–204
gradient vector argument, see arguments, g

H
H argument, see arguments, H
Hessian argument, see arguments, H
Hessian matrix 5, 204–205
heteroskedasticity 141

I
initial values 171–179, 237–239
iweights, see weights, iweights

L
lf, see methods, lf
lf0, see methods, lf0
lf1, see methods, lf1
lf1debug, see methods, lf1debug
lf2, see methods, lf2
lf2debug, see methods, lf2debug
likelihood evaluators, see methods
likelihood evaluators, Mata-based 193–212
likelihood theory 4–12
likelihood-ratio test 9–10
linear regression, see models, linear regression
linear-form restrictions 40, 64
linearization method, see variance estimators, robust
lnf argument, see arguments, lnf
lnfj argument, see arguments, lnfj
logit model, see models, logit

M
manipulable 10
markout 225–227
marksample 225
Mata 193–212
maximization, see numerical, maximization
messages: backed-up 20; not concave 19–20, 71–72
methods 48–51: d0 109–122, 200–201, 308–309; d1 109–118, 122–124, 200–201, 309–311; d1debug 153–165; d2 109–118, 124–126, 200–201, 311–314; d2debug 153–154, 165–168; gf 50; lf 63–76, 193–195, 199, 307–308; lf0 90–92, 199–200, 314–315; lf1 92–93, 199–200, 315–317; lf1debug 153–165; lf2 94–95, 196–200, 317–319; lf2debug 153–154, 165–168
missing option, see options, missing
ml check 151–153, 285
ml clear 285
ml count 286
ml display 188–191, 218, 240–242, 286
ml footnote 286
ml graph 187–188, 286, 304
ml init 177–179, 279–281, 285
ml maximize 181–185, 286, 303–304
ml model 41–62, 285
ml plot 175–177, 285
ml query 173–175, 183, 285
ml report 184, 286
ml score 252, 286, 306
ml search 172–175, 285
ml trace 168–169, 286
ML_samp 73, 308, 310, 312, 314, 316, 318
ML_w 73, 112, 130, 308, 310, 313, 314, 316, 318
ML_y1 63, 307, 308, 310, 312, 314, 316, 318
mleval 80–82, 111, 290, 297, 305
mlmatbysum 136–138, 141–144, 204–205, 290, 297, 306
mlmatsum 86–87, 118–119, 290, 297, 306
mlopts 229–232
mlsum 114–116, 290, 297, 306
mlvecsum 116–118, 290, 297, 306
models: Cox regression 144–150, 265–267, 330–331; linear regression 66–69, 98–104, 218, 259–262, 325–327; logit 255–257, 321–323; normal, see models, linear regression; panel-data 126–144, 205–212; probit 65–66, 87, 96–98, 115, 118–119, 193–195, 257–259, 323–325; random-effects regression 126–144, 205–212, 268–271, 332–335; seemingly unrelated regression 271–283, 335–342; Weibull 69–70, 104–107, 196–198, 262–265, 327–329
moptimize_init_by 204–205
moptimize_util_depvar 194, 202
moptimize_util_eq_indices 202–203
moptimize_util_matbysum 204–205
moptimize_util_matsum 197, 204
moptimize_util_sum 203
moptimize_util_vecsum 203–204
moptimize_util_xb 194, 202
mycox 266, 330–331
mylogit 256–257, 321–323
mynormal 261–262, 325–327
myprobit 258–259, 323–325
myrereg 270–271, 332–335
mysureg 281, 335–342
myweibull 264, 327–329

N
Newton–Raphson algorithm, see algorithms, Newton–Raphson
Newton's method, see algorithms, Newton's method
noconstant option, see options, noconstant
nonconcavity 15–17
noninteractive mode 221, 285, 297–301
nonlinear specifications 74–75
not concave message, see messages, not concave
numerical: derivatives 20–25; maximization 13–25; root finding 13–17

O
OPG, see variance estimators, outer product of gradients
options: constraints() 57, 287, 297–298; continue 233, 299; difficult 17, 184–185; diparm() 240–242; eform() 242–244, 289, 305; exposure() 244–246; missing 225–226; noconstant 44, 302; offset() 244–246; subpop() 57; svy 56–57, 249–254, 300; technique() 57–60, 287, 300; vce() 34–35, 53–56, 297
outer product, see variance estimators, outer product of gradients

P
panel-data models, see models, panel-data
probit model, see models, probit
program properties 246–252: svyb 250; svyj 250; svyr 250; swml 246–247
properties() option, see program properties
proportional hazards model, see models, Cox regression
pweights, see weights, pweights

Q
qualifiers, subsample 51–52

R
random-effects regression, see models, random-effects regression
restricted parameters 68
robust option, see options, vce()
root finding, see numerical root finding

S
sandwich estimator, see variance estimators, robust
scalars 87–90
scores, see variance estimators, robust
seemingly unrelated regression, see models, seemingly unrelated regression
speed 75–76
starting values, see initial values
subpop() option, see options, subpop()
subsample 225–229
survey data 56–57, 251, 254, 259, 261
svy option, see options, svy
svy prefix 249–254
svyb, see program properties, svyb
svyj, see program properties, svyj
svyr, see program properties, svyr
svyset 56–57
swml, see program properties, swml

T
Taylor-series linearization method, see variance estimators, robust
technique() option, see options, technique()
todo argument, see arguments, todo

V
variance estimators: outer product of gradients 10–11, 53–54, 116–118; robust 11–12, 34–35, 54–56, 116–118, 297
vce() option, see options, vce()

W
Wald test 9–10
Weibull model, see models, Weibull
weights 35–36, 52–53: aweights 35–36, 52–53, 122, 309, 315; fweights 35–36, 52–53, 122, 309, 315; iweights 35–36, 52–53, 122, 250, 309, 315; pweights 35–36, 52–53, 56, 232–233
White estimator, see variance estimators, robust
[Iteration-log fragment from the preview; only the first values are recoverable.]

Iteration 0:   log likelihood = -45.03321
Iteration 1:   log likelihood = -27.990675
Iteration 2:   log likelihood = -23.529093
...
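A log like this is what ml maximize prints at each step. As a point of reference only, the short session below is a hedged sketch, not taken from the preview: the evaluator name myprobit_lf, the auto dataset, and the regressors are illustrative assumptions, so its logged values need not match the fragment above.

* Sketch: a method-lf probit evaluator and the commands that produce an
* iteration log of this form.
program myprobit_lf
    version 11
    args lnfj xb
    quietly replace `lnfj' = ln(normal( `xb')) if $ML_y1 == 1
    quietly replace `lnfj' = ln(normal(-`xb')) if $ML_y1 == 0
end

sysuse auto, clear
ml model lf myprobit_lf (foreign = mpg weight)
ml maximize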

Date posted: 31/08/2021, 16:28
