Maximum Likelihood Estimation with Stata, Third Edition
Overview
Authors: William Gould, Jeffrey Pitblado, William Sribney
Publisher: Stata Press
Copyright: 2006
ISBN-10: 1-59718-012-2
ISBN-13: 978-1-59718-012-2
Pages: 290; paperback
Price: $43.00
Comment from the Stata technical group
Maximum Likelihood Estimation with Stata, Third Edition, is written for researchers in all disciplines who need to fit models using maximum likelihood estimation. This edition offers a wealth of material on Stata's ml command, updated throughout to reflect the new features introduced in Stata 9.
Noteworthy features in ml include (see the usage sketch after this list):
constraints() — linear constraints
technique() — four optimization algorithms (Newton–Raphson, DFP, BFGS, and BHHH)
vce(oim) — observed information matrix variance estimator
vce(opg) — outer product of gradients variance estimator
vce(robust) — Huber/White/sandwich/robust variance estimator
svy — complete and automatic support for survey data analysis
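For readers unfamiliar with ml's syntax, here is a minimal sketch of how some of these options attach to an ml model statement. It assumes Stata 9, variables from Stata's auto dataset, and a user-written method-lf evaluator named myprobit_lf (like the one sketched at the end of this overview); none of these names come from the book itself.

    sysuse auto, clear
    * hypothetical evaluator myprobit_lf assumed to be defined already
    ml model lf myprobit_lf (foreign = mpg weight), vce(robust) technique(bfgs)
    ml maximize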
In addition, the authors give advice on developing your own estimation command and show how to write it so that it supports the new svy prefix introduced in Stata 9.
In the final chapter, the authors illustrate the major steps required to get from a log-likelihood function to a fully operational estimation command. This is done using several different models: logit and probit, linear regression, Weibull regression, the Cox proportional hazards model, random-effects regression, and seemingly unrelated regression.
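To give a flavor of those steps, the following is a minimal sketch of a method-lf probit evaluator together with the ml calls that fit it. The program name myprobit_lf and the use of Stata's auto dataset are illustrative choices, not taken from the book.

    program myprobit_lf
            version 9
            args lnf xb
            * observation-level log likelihood for the probit model
            quietly replace `lnf' = ln(normal( `xb')) if $ML_y1 == 1
            quietly replace `lnf' = ln(normal(-`xb')) if $ML_y1 == 0
    end

    sysuse auto, clear
    ml model lf myprobit_lf (foreign = mpg weight)
    ml maximize

With method lf, the evaluator only computes each observation's log likelihood; ml supplies numerical derivatives and handles the maximization, which is why the book recommends this method as the starting point.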
Table of contents
Preface
Versions of Stata
Notation and Typography
1 Theory and practice
1.1 The likelihood-maximization problem
1.2 Likelihood theory
1.2.1 All results are asymptotic
1.2.2 Variance estimates and hypothesis tests
1.2.3 Likelihood-ratio tests and Wald tests
1.2.4 The outer product of gradients variance estimator
1.2.5 Robust variance estimates
1.3 The maximization problem
1.3.1 Numerical root finding
Newton's method
The Newton–Raphson algorithm
1.3.2 Quasi-Newton methods
The BHHH algorithm
The DFP and BFGS algorithms
1.3.3 Numerical maximization
1.3.4 Numerical derivatives
1.3.5 Numerical second derivatives
1.4 Monitoring convergence
2 Overview of ml
2.1 The jargon of ml
2.2 Equations in ml
2.3 Likelihood-evaluator methods
2.4 Tools for the ml programmer
2.5 Common ml options
2.5.1 Subsamples
2.5.2 Weights
2.5.3 OPG estimates of variance
2.5.4 Robust estimates of variance
2.5.5 Survey data
2.5.6 Constraints
2.5.7 Choosing among the optimization algorithms
2.6 Maximizing your own likelihood functions
3 Method lf
3.1 The linear-form restrictions
3.2 Examples
3.2.1 The probit model
3.2.2 The normal model: linear regression
3.2.3 The Weibull model
3.3 The importance of generating temporary variables as doubles
3.4 Problems you can safely ignore
3.5 Nonlinear specifications
3.6 The advantages of lf in terms of execution speed
3.7 The advantages of lf in terms of accuracy
4 Methods d0, d1, and d2
4.1 Comparing these methods
4.2 Outline of method d0, d1, and d2 evaluators
4.2.1 The todo argument
4.2.2 The b argument
Using mleval to obtain values from each equation
4.2.3 The lnf argument
Using lnf to indicate that the likelihood cannot be calculated
Using mlsum to define lnf
4.2.4 The g argument
Using mlvecsum to define g
Scores for robust and OPG variance estimates (optional)
4.2.5 The negH argument
Using mlmatsum to define negH
4.2.6 Aside: Stata's scalars
4.3 Summary of methods d0, d1, and d2
4.3.1 Method d0
4.3.2 Method d1
4.3.3 Method d2
4.4 Linear-form examples
4.4.1 The probit model
4.4.2 The normal model: linear regression
4.4.3 The Weibull model
4.5 Panel-data likelihoods
4.5.1 Calculating lnf
4.5.2 Calculating g
4.5.3 Calculating negH
Using mlmatbysum to help define negH
4.6 Likelihoods other than linear form
5 Debugging likelihood evaluators
5.1 ml check
5.2 Using methods d1debug and d2debug
5.2.1 Method d1debug
5.2.2 Method d2debug
5.3 ml trace
6 Setting initial values
6.1 ml search
6.2 ml plot
6.3 ml init
7 Interactive maximization
7.1 The iteration log
7.2 Pressing the Break key
7.3 Maximizing difficult likelihood functions
8 Final results
8.1 Graphing convergence
8.2 Redisplaying output
9 Writing do-files to maximize likelihoods
9.1 The structure of a do-file
9.2 Putting the do-file into production
10 Writing ado-files to maximize likelihoods
10.1 Writing estimation commands
10.2 The standard estimation-command outline
10.3 Outline for estimation commands using ml
10.4 Using ml in noninteractive mode
10.5 Advice
10.5.1 Syntax
10.5.2 Estimation subsample
10.5.3 Parsing with help from mlopts
10.5.4 Weights
10.5.5 Constant-only model
10.5.6 Initial values
10.5.7 Saving results in e()
10.5.8 Displaying ancillary parameters
10.5.9 Exponentiated coefficients
10.5.10 Offsetting linear equations
10.5.11 Program properties
11 Writing ado-files for survey data analysis
11.1 Program properties
11.2 Writing your own predict command
12 Other examples
12.1 The logit model
12.2 The probit model
12.3 The normal model: linear regression
12.4 The Weibull model
12.5 The Cox proportional hazards model
12.6 The random-effects regression model
12.7 The seemingly unrelated regression model
A Syntax of ml
B Likelihood evaluator checklists
B.1 Method lf
B.2 Method d0
B.3 Method d1
B.4 Method d2
C Listing of estimation commands
C.1 The logit model
C.2 The probit model
C.3 The normal model
C.4 The Weibull model
C.5 The Cox proportional hazards model
C.6 The random-effects regression model
C.7 The seemingly unrelated regression model
References