Sunday, April 21, 2013

More on statistics in research

One last good point to come out of the whole Excel-error affair in the 2010 Reinhart and Rogoff paper:
This raises another issue. Programming is getting easier and easier, but it’s hard to do well. Economics these days depends heavily on programming. It seems problematic to me that we rely on economists to also be programmers; surely there are people who are good economists but mediocre programmers (especially since the best programmers don’t become economists). If you crawl through a random sample of econometric papers and try to reproduce their results, I’m sure you will find bucketloads of errors, whether the analysis was done in R, Stata, SAS, or Excel. But people only find them when the stakes are high, as with the Reinhart and Rogoff paper, which has been cited all around the globe (not necessarily with their approval) as an argument for austerity.
This is the most generally applicable piece of advice, in that it might well have prevented many of the other errors. But responsibility does not run in only one direction. A careful analyst should have done a sensitivity analysis for things like outliers and discovered how critically the results depend on the inclusion of New Zealand, or compared the different weighting schemes. If you are going to try to understand a small dataset, you need to be very thorough in examining all of the ways the data itself could be summarized. Both checks are sketched below.
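To make the point concrete, here is a minimal sketch, in Python with pandas, of the two checks just described: comparing weighting schemes and a leave-one-out sensitivity analysis over countries. The file name and column names (`high_debt_episodes.csv`, `country`, `growth`) are hypothetical stand-ins for data shaped like Reinhart and Rogoff's country-year growth observations, not their actual dataset.

```python
import pandas as pd

# Hypothetical long-format dataset: one row per country-year in a
# high-debt episode, with columns "country" and "growth".
df = pd.read_csv("high_debt_episodes.csv")

# Weighting scheme 1 (country-weighted): average each country's growth
# first, then average across countries. A country contributing one
# high-debt year counts as much as one contributing nineteen.
country_weighted = df.groupby("country")["growth"].mean().mean()

# Weighting scheme 2 (episode-weighted): pool all country-years, so
# every observation counts equally.
episode_weighted = df["growth"].mean()

print(f"country-weighted mean:  {country_weighted:.2f}")
print(f"episode-weighted mean:  {episode_weighted:.2f}")

# Leave-one-out sensitivity: drop each country in turn and recompute
# the country-weighted mean. A large swing from dropping one country
# (e.g. New Zealand) flags a result driven by an outlier.
for country in sorted(df["country"].unique()):
    loo = (df[df["country"] != country]
           .groupby("country")["growth"].mean().mean())
    print(f"without {country}: {loo:.2f} "
          f"(shift {loo - country_weighted:+.2f})")
```

On a dataset this small, both loops are cheap, so there is little excuse not to run them; if the two weighting schemes or the leave-one-out means disagree substantially, the headline number is an artifact of a modeling choice rather than a robust feature of the data.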

So statisticians also have a burden of care here, even those not directly involved in this analysis: to counsel researchers on how to approach these difficult data problems. In the same spirit, it would not be a bad structural change for researchers to consult more with methodologists, who can look at the problem outside the lens of strong priors and might be quicker to question a surprisingly good result.
