Transcription

Econ 312
Wednesday, April 22
Limited Dependent Variables: Probit and Logit
Readings: Wooldridge, Section 17.1
Class notes: 154-159

Today’s Far Side offering
How we’re all feeling at this time of year!
[Far Side cartoon not reproduced]

Context and overview
- The final major section of the course deals with dependent variables that have limited ranges, not $-\infty$ to $+\infty$
- This class looks in detail at models of a dummy dependent variable
- The linear probability model is simple
- Probit and logit models are more statistically reasonable, but require careful interpretation of the coefficients
- In the next few classes we will examine other situations in which the dependent variable is limited in range or discontinuous

Linear probability model
- $y = 0$ or $1$, so $E(y_i \mid x_i) = \Pr(y_i = 1 \mid x_i)$
- The linear probability model (LPM) just applies OLS by making this a linear function of the x variables: $\Pr(y_i = 1 \mid x_i) = E(y_i \mid x_i) = \beta_0 + \beta_1 x_i$
- Problem #1: the line doesn’t fit the data well
[Figure: straight LPM line plotted against 0/1 data; y on the vertical axis (0 to 1), x on the horizontal axis]

Error term in LPM
- Problem #2: Since y can only be 0 or 1, the error term can only be $1 - \beta_0 - \beta_1 x_i$ or $0 - \beta_0 - \beta_1 x_i$
- u is discrete, not continuous: Bernoulli distribution, not normal
- $\Pr(u_i = 1 - \beta_0 - \beta_1 x_i) = \beta_0 + \beta_1 x_i$ and $\Pr(u_i = -\beta_0 - \beta_1 x_i) = 1 - \beta_0 - \beta_1 x_i$
- The sum of Bernoulli variables is normal in the limit, so coefficient estimates may still be normal

Prediction in LPM
- Problem #3: For extreme values of x we always predict $\Pr(y = 1 \mid x) < 0$ or $\Pr(y = 1 \mid x) > 1$
- Also has heteroskedasticity
- Bottom line: the LPM is simple
- Might be usable for x close to the sample mean
- Simply not the best model!
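As an illustration (my addition, not from the lecture; the data and variable names are simulated and hypothetical), a minimal Python sketch with statsmodels fits the LPM by OLS on a 0/1 dummy and counts how many fitted "probabilities" fall outside [0, 1], which is exactly Problem #3:

```python
# Minimal LPM sketch on simulated data (illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(0, 2, size=n)                   # hypothetical regressor
p_true = 1 / (1 + np.exp(-(0.5 + 1.5 * x)))    # true Pr(y=1|x), S-shaped
y = rng.binomial(1, p_true)                    # 0/1 dummy dependent variable

X = sm.add_constant(x)
lpm = sm.OLS(y, X).fit()                       # the LPM is just OLS on the dummy
fitted = lpm.fittedvalues

print("LPM coefficients:", lpm.params)
print("share of fitted values outside [0, 1]:",
      np.mean((fitted < 0) | (fitted > 1)))
```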

Alternative models: Probit and logit
- $E(y_i \mid x) = \Pr(y_i = 1 \mid x) = G(\beta_0 + \beta_1 x_i) = G(z_i)$
- Probit fits the cumulative normal distribution function: $G(z_i) = \Phi(z_i) = \int_{-\infty}^{z_i} \frac{1}{\sqrt{2\pi}}\, e^{-v^2/2}\, dv$
- Logit fits the cumulative logistic distribution function: $G(z_i) = \Lambda(z_i) = \frac{e^{z_i}}{1 + e^{z_i}} = \frac{1}{1 + e^{-z_i}}$
- Similar S shapes that fit the 0/1 data better than a straight line
[Figure: S-shaped G(z) curves against the 0/1 data; y on the vertical axis (0 to 1), x on the horizontal axis]
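A quick numerical check (my addition, not part of the slide) of how similar the two link functions are at a few values of z:

```python
# Compare the probit and logit link functions G(z).
import numpy as np
from scipy.stats import norm

z = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
probit_G = norm.cdf(z)                  # Phi(z): cumulative standard normal
logit_G = np.exp(z) / (1 + np.exp(z))   # Lambda(z): cumulative logistic

for zi, p, l in zip(z, probit_G, logit_G):
    print(f"z = {zi:+.1f}   Phi(z) = {p:.3f}   Lambda(z) = {l:.3f}")
```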

Estimation of probit and logit
- Nonlinear maximum likelihood
- Discrete density function: $\Pr(y_i = 1 \mid x_i, \beta) = G(x_i \beta)$, $\Pr(y_i = 0 \mid x_i, \beta) = 1 - G(x_i \beta)$
- Or $f(y_i \mid x_i, \beta) = G(x_i \beta)^{y_i} \left[1 - G(x_i \beta)\right]^{1 - y_i}$, $y_i \in \{0, 1\}$
- Log-likelihood function with an IID sample: $\ln L(\beta \mid y, x) = \sum_{i=1}^{n} \left\{ y_i \ln G(x_i \beta) + (1 - y_i) \ln\left[1 - G(x_i \beta)\right] \right\}$
- Choose $\beta$, evaluate $\ln L$, then search over $\beta$ to find the maximum
- Estimator is consistent, asymptotically normal/efficient
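To make the "search over beta" idea concrete, here is a hedged sketch (simulated data, illustrative names, not the lecture's code) that maximizes the probit log-likelihood above with a general-purpose optimizer and checks the answer against statsmodels' built-in Probit estimator:

```python
# Hand-rolled probit MLE on simulated data, checked against statsmodels.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=n)
y = (0.3 + 0.8 * x + rng.normal(size=n) > 0).astype(int)   # probit-style DGP
X = np.column_stack([np.ones(n), x])

def neg_loglik(beta):
    # ln L = sum_i [ y_i ln G(x_i b) + (1 - y_i) ln(1 - G(x_i b)) ], with G = Phi
    G = np.clip(norm.cdf(X @ beta), 1e-12, 1 - 1e-12)       # avoid log(0)
    return -np.sum(y * np.log(G) + (1 - y) * np.log(1 - G))

opt = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print("hand-rolled MLE:   ", opt.x)
print("statsmodels Probit:", sm.Probit(y, X).fit(disp=0).params)
```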

Goodness of fit and hypothesis tests
- Goodness of fit always looks bad:
  - Fraction predicted correctly, predicting 1 when $G(x_i \hat\beta) \geq 0.5$
  - Pseudo-$R^2$: $1 - \dfrac{\ln L(\hat\beta;\, x, y)}{\ln L(\tilde\beta;\, x, y)}$, where $\tilde\beta = (\tilde\beta_0, 0, 0, \dots, 0)$ is the constant-only model
- Likelihood-ratio test: $2\left(\ln L_u - \ln L_r\right) \sim \chi^2_q$
- Can also do the standard t test: $t = \dfrac{\hat\beta_j - c}{\operatorname{se}(\hat\beta_j)} \sim t_{n-k-1}$
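Continuing the simulated example (my sketch, not the lecture's), these measures can be read off a statsmodels fit; llf, llnull, prsquared, and llr are statsmodels' names for the quantities above:

```python
# Fit measures for a probit on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
x = rng.normal(size=n)
y = (0.3 + 0.8 * x + rng.normal(size=n) > 0).astype(int)
X = sm.add_constant(x)

fit = sm.Probit(y, X).fit(disp=0)
pred = (fit.predict(X) >= 0.5).astype(int)    # predict 1 when G(x*b_hat) >= 0.5

print("fraction predicted correctly: ", np.mean(pred == y))
print("pseudo-R2 = 1 - lnL_u/lnL_0:  ", 1 - fit.llf / fit.llnull, fit.prsquared)
print("LR statistic 2(lnL_u - lnL_r):", fit.llr)
```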

Interpretation of coefficients
- In OLS (or LPM): $\beta_j = \dfrac{\partial E(y_i \mid x_i)}{\partial x_j}$
- In probit or logit: $\beta_j = \dfrac{\partial z}{\partial x_j}$, which has no logical interpretation on its own
- Remember that we define $z = x\beta$
- For a continuous regressor, we want $\dfrac{\partial \Pr(y = 1 \mid x)}{\partial x_j} = \dfrac{d G(z)}{dz} \cdot \dfrac{\partial z}{\partial x_j} = g(z)\, \beta_j$
- $\beta_j$ measures the effect of $x_j$ on $z$; $g(z)$ measures the effect of $z$ on $\Pr(y = 1)$

Geometric interpretation of coefficients
- Put $z = x\beta$ on the horizontal axis
- An increase of $\Delta x$ units in $x$ gives a change of $\Delta z = \beta_j\, \Delta x$ units in $z$
- A change of $\Delta z$ units in $z$ gives a change of $\Delta G(z) \approx g(z)\, \Delta z$ units in $E(y \mid x)$
- So the partial effect of $x_j$ on $\Pr(y = 1)$ is $\dfrac{\partial \Pr(y = 1)}{\partial x_j} = g(z)\, \beta_j$
[Figure: G(z) rising from 0 to 1, plotted against z = xβ on the horizontal axis]
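As a numerical illustration (simulated data; get_margeff is statsmodels' counterpart to the output of Stata's margins or dprobit, and is an assumption of this sketch rather than something from the slides), the chain-rule effect $g(z)\beta_j$ evaluated at the sample means can be computed by hand and checked against the packaged routine:

```python
# Partial effect g(z)*beta_j at the means of the regressors, probit case.
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=n)
y = (0.3 + 0.8 * x + rng.normal(size=n) > 0).astype(int)
X = sm.add_constant(x)

fit = sm.Probit(y, X).fit(disp=0)
beta = fit.params
z_bar = np.array([1.0, x.mean()]) @ beta        # z = x*beta at the sample means
print("by hand, phi(z)*beta_j:", norm.pdf(z_bar) * beta[1])
print("statsmodels margins:   ", fit.get_margeff(at="mean").margeff)
```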

Odds ratio in logit
- $\Pr(y_i = 1 \mid x_i) = \Lambda(x_i \beta) = \dfrac{e^{x_i \beta}}{1 + e^{x_i \beta}}$
- The odds ratio is $\dfrac{\Pr(y_i = 1 \mid x_i)}{\Pr(y_i = 0 \mid x_i)} = \dfrac{e^{x_i \beta} / (1 + e^{x_i \beta})}{1 / (1 + e^{x_i \beta})} = e^{x_i \beta}$
- $\beta$ is the effect of $x$ on the “log odds ratio”: $\ln\left[\dfrac{\Pr(y_i = 1 \mid x_i)}{\Pr(y_i = 0 \mid x_i)}\right] = x_i \beta$
- The Stata table reports $e^{\beta}$ as the proportional effect of $x$ on the odds
- Always $> 0$: $e^{\beta} > 1$ means $x$ increases the odds, $e^{\beta} < 1$ means $x$ decreases the odds
- Depends on x: set x to its mean
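A short sketch (my addition, simulated data) showing how the logit coefficients, which act on the log odds, relate to the exponentiated odds ratios described above:

```python
# Logit coefficients vs. odds ratios exp(beta) on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))
y = rng.binomial(1, p)
X = sm.add_constant(x)

fit = sm.Logit(y, X).fit(disp=0)
print("coefficients (effect on the log odds):", fit.params)
print("odds ratios exp(beta):                ", np.exp(fit.params))
# exp(beta) > 1: x raises the odds; exp(beta) < 1: x lowers the odds.
```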

Partial effects in probit
- Probit uses the normal rather than the logistic distribution
- $\dfrac{\partial \Pr(y = 1)}{\partial x_j} = \phi(z)\, \beta_j = \dfrac{1}{\sqrt{2\pi}}\, e^{-(x_i \beta)^2 / 2}\, \beta_j$
- Again, this depends on x, so we typically evaluate the effect at the means of the regressors
- Partial effects in both probit and logit have the same sign as the coefficient and can be tested by $H_0\!: \beta_j = 0$
- If $x_j$ is a dummy, we want $\Pr(y = 1 \mid x_j = 1) - \Pr(y = 1 \mid x_j = 0)$
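For the dummy-regressor case, a minimal sketch (simulated data, illustrative names) computes the discrete change described in the last bullet, holding the other regressor at its sample mean:

```python
# Discrete-change partial effect of a dummy regressor in a probit model.
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1000
x = rng.normal(size=n)
d = rng.binomial(1, 0.4, size=n)                      # hypothetical dummy
y = (0.2 + 0.6 * x + 0.9 * d + rng.normal(size=n) > 0).astype(int)
X = sm.add_constant(np.column_stack([x, d]))

fit = sm.Probit(y, X).fit(disp=0)
b0, bx, bd = fit.params
p1 = norm.cdf(b0 + bx * x.mean() + bd * 1)            # Pr(y=1 | d=1, x=x_bar)
p0 = norm.cdf(b0 + bx * x.mean() + bd * 0)            # Pr(y=1 | d=0, x=x_bar)
print("partial effect of the dummy:", p1 - p0)
```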

Probit and logit in Stata
Probit:
- probit reports coefficients
- dprobit reports partial effects (or effects of 0 to 1 for dummies) evaluated at the means of all x
- Use the margins command to evaluate at other x
- Same t statistics and tests from both
Logit:
- logit reports coefficients
- logistic reports the effect of x on the odds ratio ($e^{\beta}$) at the means
- Remember that a zero effect means $e^{\beta} = 1$
- Tricky to interpret; see notes

Issues in probit and logit estimation
- Nonlinearity makes finding the best estimator less reliable
- The algorithm can break down in cases of high multicollinearity
- Complex models can take a long time to converge
- Omitted-variable bias affects all coefficients, even if $x_j$ is not correlated with the omitted variable
- Heteroskedasticity makes the estimator inconsistent
- White’s robust standard error fixes the standard error, but not the coefficient
- Try to rescale the model to reduce the probability of heteroskedasticity

Review and summary
- When the dependent variable is a 0/1 dummy we have several choices of estimators
- The linear probability model is simple, but unrealistic
- Probit and logit are better suited to the situation
- Both probit and logit use cumulative probability distribution functions to approximate the relationship instead of a straight line
- Must be estimated by nonlinear maximum likelihood
- Coefficients no longer have the usual interpretations
- Stata can transform coefficients into meaningful “partial effects”

Another bad economist joke
“Let us remember the unfortunate econometrician who, in one of the major functions of his system, had to use a proxy for risk and a dummy for sex.”
-- Fritz Machlup
Taken from Jeff Thredgold, On the One Hand: The Economist’s Joke Book

What’s next?
In the next class, we discuss estimators appropriate to other unusual dependent variables:
- Generalizing probit/logit to more than two choices (0/1/2, for example)
- Models for ordered dependent variables (A, B, C)
- Models for count dependent variables (0, 1, 2, 3, …)