Models as Approximations -- A Conspiracy of Random Regressors and Model Deviations Against Classical Inference in Regression

WP 2015-9.0

Authors: Andreas Buja, Richard Berk, Lawrence Brown, Edward George, Emil Pitkin, Mikhail Traskin, Linda Zhao, and Kai Zhang

More than thirty years ago Halbert White inaugurated a "model-robust" form of statistical inference based on the "sandwich estimator" of standard error. It is asymptotically correct even under "model misspecification," that is, when models are approximations rather than generative truths. It is well known to be "heteroskedasticity-consistent," but it is less well known to be "nonlinearity-consistent" as well. Nonlinearity, however, raises fundamental issues: when fitted models are approximations, conditioning on the regressors is no longer permitted because the ancillarity argument that justifies it breaks down. Two effects follow: (1) parameters become dependent on the regressor distribution; (2) the sampling variability of parameter estimates no longer derives from the conditional distribution of the response alone. Additional sampling variability arises when the nonlinearity conspires with the randomness of the regressors to generate a 1/sqrt(N) contribution to standard errors. Asymptotically, standard errors from "model-trusting" fixed-regressor theories can deviate from those of "model-robust" random-regressor theories by arbitrary magnitudes. In the case of linear models, a test is proposed for comparing the two types of standard errors.
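The contrast at issue can be made concrete with a small simulation. The sketch below (all variable names hypothetical, not taken from the paper) fits a linear working model to a nonlinear mean function with random regressors and compares the classical standard errors, which condition on the regressors and trust the model, against White's HC0 sandwich standard errors, (X'X)^{-1} X' diag(r_i^2) X (X'X)^{-1}. In a design like this the sandwich standard error for the slope exceeds the classical one, illustrating the extra 1/sqrt(N) variability that nonlinearity and regressor randomness jointly contribute.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
x = rng.uniform(-1.0, 1.0, N)            # random regressors
y = x**2 + rng.normal(0.0, 0.1, N)       # nonlinear truth; the linear model is an approximation

X = np.column_stack([np.ones(N), x])     # working model: y ~ b0 + b1*x
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ (X.T @ y)               # OLS fit of the best linear approximation
r = y - X @ beta                         # residuals

# Classical, model-trusting variance: sigma^2 * (X'X)^{-1}
sigma2 = (r @ r) / (N - 2)
se_classical = np.sqrt(np.diag(sigma2 * XtX_inv))

# White's HC0 sandwich: (X'X)^{-1} X' diag(r_i^2) X (X'X)^{-1}
Xr = X * r[:, None]
se_sandwich = np.sqrt(np.diag(XtX_inv @ (Xr.T @ Xr) @ XtX_inv))

print("slope SE, classical:", se_classical[1])
print("slope SE, sandwich :", se_sandwich[1])
```

The sketch only eyeballs the discrepancy between the two standard errors; the test proposed in the paper formalizes this comparison for linear models.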

PDF: 2015-9.0_Berk_ModelsAsApproximations(1).pdf