Wednesday, March 4, 2009

Publication Bias

The more I look at empirical results, the clearer the flaws of a data-centric worldview appear.  Data never speak for themselves, and the process of interpreting them is every bit as political as other forms of analysis.  But when you analyze data, you focus on the minutiae of your dataset, so you're further divorced from the subject matter.  You can make all sorts of crazy claims that would never fly if you were forced to present your views plainly to a lay audience remotely familiar with the subject matter.

The problem is that empirical claims are generally accepted or rejected on the basis of statistical tests meeting a certain threshold.  But setting such a threshold gives researchers a bar to clear; when they manage to clear it, they publish, and when they don't, they don't.  Instead of statistical tests revealing anything about the state of the world, they only reveal what researchers could torture the data into saying.
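To make that filtering mechanism concrete, here's a quick simulation sketch of my own, with arbitrary numbers: many small studies of a weak but real effect, where only the studies that clear the p < 0.05 bar with a positive sign get "published."  The average published estimate comes out several times larger than the truth, even though nobody fabricated anything.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.1     # small real effect (arbitrary, for illustration)
N_PER_STUDY = 30      # small samples per arm, typical of the studies discussed
N_STUDIES = 5000
ALPHA = 0.05          # the publication "bar"

all_estimates, published = [], []
for _ in range(N_STUDIES):
    treatment = rng.normal(TRUE_EFFECT, 1.0, N_PER_STUDY)
    control = rng.normal(0.0, 1.0, N_PER_STUDY)
    _, p = stats.ttest_ind(treatment, control)
    estimate = treatment.mean() - control.mean()
    all_estimates.append(estimate)
    if p < ALPHA and estimate > 0:   # only significant, positive results get written up
        published.append(estimate)

print(f"true effect:                {TRUE_EFFECT}")
print(f"mean estimate, all studies: {np.mean(all_estimates):.2f}")  # close to the truth
print(f"mean published estimate:    {np.mean(published):.2f}")      # several times larger
print(f"fraction published:         {len(published) / len(all_estimates):.1%}")
```

This is the selection effect alone; it assumes nothing worse of researchers than quietly filing away their null results.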

Brad DeLong makes the case that virtually all economic hypotheses are wrong, and the results from other meta-analyses are also worrying.  The medical literature is a clear example--because of publication bias, only positive results are published, so we have little idea how effective compounds or treatments really are.

Genetic association studies, from my limited information, seem to have a similar problem.  You pick the same loci everyone else studies, slice and dice your data until you find a correlation in some subgroup, and then herald your results as significant.  Look, we've found the gene that makes you fat/gay/popular, and we can pretend that people have no agency.
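Here's a sketch of the slice-and-dice problem, with entirely made-up data and arbitrary subgroup labels: test a hundred loci against a trait that has no genetic component at all, separately in a handful of subgroups, and a pile of "significant" associations falls out anyway, at roughly the nominal false-positive rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

N_PEOPLE = 500
N_LOCI = 100          # candidate markers; none has any real effect on the trait
ALPHA = 0.05

# Purely random genotypes (minor-allele counts 0/1/2) and a trait with zero genetic signal.
genotypes = rng.integers(0, 3, size=(N_PEOPLE, N_LOCI))
trait = rng.normal(size=N_PEOPLE)

# A few overlapping subgroups to slice and dice over (hypothetical labels).
sex = rng.random(N_PEOPLE) < 0.5
age = rng.integers(18, 60, N_PEOPLE)
subgroups = {
    "all": np.ones(N_PEOPLE, dtype=bool),
    "men": sex,
    "women": ~sex,
    "under_30": age < 30,
    "over_30": age >= 30,
}

tests, hits = 0, 0
for name, mask in subgroups.items():
    for locus in range(N_LOCI):
        _, p = stats.pearsonr(genotypes[mask, locus], trait[mask])
        tests += 1
        if p < ALPHA:
            hits += 1   # a headline waiting to be written

print(f"{tests} tests, {hits} 'significant' hits (~{hits / tests:.0%}), true associations: 0")
```

Correcting for the number of tests (Bonferroni, false-discovery rates) makes most of these hits disappear, which is exactly the step the publication filter discourages.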

Psychology/behavioral economics has a big problem with this, as sample sizes are small and it's easy for experimenters to induce effects through expectations.  Consider Dan Ariely's latest book, which is a compendium of human foibles and mistakes.  Is he so amazing an experimenter that every sample design yields a unique and interesting result?  Is it in the nature of these experiments that it's very easy to reject a strawman "rational economic agent"?  Or is he covering up the results of studies that failed to work out?  Inquiring minds want to know!

The fact that behaviorists have taken this line of study is personally very disturbing.  Clearly economists make horrible assumptions and need to make better ones.  But on the one hand you have people who care about the economy--and they all use bad assumptions--and on the other you have people working with more reasonable assumptions, but they're only interested in talking to hung-over undergraduates and finding that people are "not rational," whatever that means.  I'm not sure what facts about the real world I've learned from reading this literature.  People are crazy and can't be trusted with anything more sophisticated than a butter knife?  I've always suspected so, but they nonetheless seem to be doing a decent job anyway.  You just can't explain anything interesting about the dynamics of economic systems, or variation across time and space, without leaving behind both the assumption of constant irrationality and the assumption of constant rationality; no one seems to be interested in doing this.

This doesn't even get into the issues with relying on brain scans to communicate human truths, or the various other problems plaguing statistical studies (bad data, weak instruments, omitted variables, correlation mistaken for causation, general equilibrium issues, the fact that "hard" statistical evidence is just as flimsy as the ordinary kind).  Or the fact that the urge to appear scientific is leaving important areas orphaned because they cannot be analyzed "rigorously."  Or that "revisionism" is always in vogue, and I don't know what to think about history anymore.  In a world in which entire academic fields exist purely because of publication bias, and academics are driven (almost) entirely by the urge to gain status, it's hard to trust anything or anyone.



