Tyler Cowen refers us to some studies:
How much do returns to education differ across different natural experiment methods? To test this, we estimate the rate of return to schooling in Australia using two different instruments for schooling: month of birth and changes in compulsory schooling laws. With annual pre-tax income as our measure of income, we find that the naïve ordinary least squares (OLS) return to an additional year of schooling is 13%. The month of birth IV approach gives an 8% rate of return to schooling, while using changes in compulsory schooling laws as an IV produces a 12% rate of return. We then compare our results with a third natural experiment: studies of Australian twins that have been conducted by other researchers. While these studies have tended to estimate a lower return to education than ours, we believe that this is primarily due to the better measurement of income and schooling in our data set. Australian twins studies are consistent with our findings insofar as they find little evidence of ability bias in the OLS rate of return to schooling. Together, the estimates suggest that between one-tenth and two-fifths of the OLS return to schooling is due to ability bias. The rate of return to education in Australia, corrected for ability bias, is around 10%, which is similar to the rate in Britain, Canada, the Netherlands, Norway and the United States.

These are all basically examples of an Instrumental Variable (IV) approach. Let's take them one at a time. First, we have the month of birth evidence. Yet this is the classic example of a weak instrument, an issue which is well discussed elsewhere. The month or quarter of birth has only a very weak impact on final educational outcomes, and parents with children born in different months are not identical either.
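Before getting to the instruments, it's worth checking the ability-bias arithmetic the abstract reports. Taking the quoted 13% naïve OLS return and the roughly 10% corrected return at face value, the implied bias share is a quick calculation:

```python
# Worked check of the ability-bias share implied by the quoted numbers.
ols_return = 0.13        # naive OLS return per year of schooling (quoted)
corrected_return = 0.10  # return corrected for ability bias (quoted, approximate)

bias_share = (ols_return - corrected_return) / ols_return
print(f"Implied ability-bias share of the OLS return: {bias_share:.0%}")
# 3/13, about 23%: comfortably inside the quoted one-tenth to two-fifths range
```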
Next, there is the compulsory schooling evidence, which Arnold Kling takes on as well, though only in the US context. He observes that variation across states in when a student can legally leave school doesn't seem to predict actual graduation behavior. This, too, is basically a weak instrument, at least in the US case.
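Both objections turn on the same mechanics: OLS is biased upward when ability raises both schooling and wages, a valid instrument strips out the ability-driven variation, and a weak instrument has so little first-stage bite that the estimate becomes erratic. A toy simulation of my own (all numbers invented for illustration, not taken from any of these papers) shows each piece:

```python
import numpy as np

# Toy model: ability raises both schooling and wages, so naive OLS overstates
# the return to schooling. An instrument shifts schooling but does not affect
# wages directly; 2SLS uses only that instrument-driven variation.
rng = np.random.default_rng(0)
n = 200_000
true_return = 0.10  # assumed true causal return per year of schooling

ability = rng.normal(size=n)
z_strong = rng.normal(size=n)  # strong instrument: large first-stage effect
z_weak = rng.normal(size=n)    # weak instrument: tiny first-stage effect
schooling = 12 + ability + 1.0 * z_strong + 0.02 * z_weak + rng.normal(size=n)
log_wage = (true_return * schooling + 0.05 * ability
            + rng.normal(scale=0.3, size=n))

def ols_slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x)

def iv_slope(z, x, y):
    # Wald/2SLS estimate with a single instrument: cov(z, y) / cov(z, x)
    return np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"OLS:       {ols_slope(schooling, log_wage):.3f}")  # biased above 0.10
print(f"Strong IV: {iv_slope(z_strong, schooling, log_wage):.3f}")  # near 0.10
print(f"Weak IV:   {iv_slope(z_weak, schooling, log_wage):.3f}")    # unstable
```

With the tiny first-stage coefficient on `z_weak`, the denominator of the Wald ratio is barely distinguishable from sampling noise, which is exactly why quarter-of-birth style instruments produce such fragile estimates.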
Finally, we have the twins evidence. The idea here is to observe one twin going to College and compare her with her twin sister who did not. The assumption is that these two individuals shared identical environmental (and, for identical twins, genetic) background factors, so any resulting differences can be causally attributed to one twin's College attendance. While this is a clever trick, it requires you to believe that twins are interchangeable humans. What if a family can only afford to send one twin to College, and so sends its more able child? What about all the cognitive and non-cognitive differences that come up between children growing up in the same house? What about the possibility of twin interactions? Though interesting and suggestive, I don't believe this evidence is causally definitive.
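The statistical content of the twins design is within-pair differencing: subtracting one twin's outcome from the other's removes everything the pair shares, but anything that differs *within* the pair and also drives who gets the schooling survives. Another toy simulation of my own (invented parameters, not from the Australian twins studies) makes the point:

```python
import numpy as np

# Toy twin-pairs model: differencing within a pair removes the shared
# family/ability component, but a within-pair ability gap that also affects
# schooling choices leaves some bias in the differenced estimate.
rng = np.random.default_rng(1)
pairs = 200_000
true_return = 0.10  # assumed true causal return per year of schooling

shared = rng.normal(size=pairs)          # family background + shared ability
gap = rng.normal(scale=0.5, size=pairs)  # twin 1's ability edge over twin 2
a1, a2 = shared + gap / 2, shared - gap / 2

def simulate(ability):
    schooling = 12 + ability + rng.normal(scale=0.5, size=pairs)
    log_wage = (true_return * schooling + 0.05 * ability
                + rng.normal(scale=0.3, size=pairs))
    return schooling, log_wage

s1, w1 = simulate(a1)
s2, w2 = simulate(a2)

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

pooled = slope(np.concatenate([s1, s2]), np.concatenate([w1, w2]))
within = slope(s1 - s2, w1 - w2)
print(f"Pooled OLS:       {pooled:.3f}")  # full ability bias
print(f"Within-pair diff: {within:.3f}")  # shared part removed, some bias left
```

Differencing pulls the estimate most of the way back toward the true return, but it stays biased as long as the within-pair ability gap influences who gets more schooling, which is precisely the "send the more able twin" worry above.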
It’s easy enough to knock holes in any body of literature, even one (as here) which does purport to establish identification. So here’s some evidence that points in the other direction.
1. Cognitive abilities have limited scope for educational intervention in developed economies beyond age ~5.
The evidence for this actually comes from James Heckman. He argues that we simply do not have access to any educational treatment that can reliably boost IQ over an extended period of time. Even the lauded Head Start and Perry Preschool programs can't do that. And if spending tens of thousands of dollars per pupil on a pilot program can't produce results, it's difficult to imagine what would.
An education proponent could now say something like, "Fine, but cognitive skills aren't everything. Heckman supports Head Start because it boosts noncognitive skills like impulse control." Suppose I even grant that point, though I note that these noncognitive skills are something of a black box. They're defined entirely as a residual, whatever can't be a cognitive skill, and are inferred largely from lower crime rates among treated populations.
But think about what that would mean. Everything we do in schools, the teaching, the homework, and so on, has limited value when it comes to actually improving mental functions. Rather, it may or may not be effective in domesticating children to function in a modern post-industrial economy. At the very least, that would suggest education ought to shift its focus radically away from cognitive tasks toward those aspects of behavior modification. Maybe we'd get the same results as school from a program forcing children to dig holes every day and fill them back up. Also note that Heckman's estimated payoffs to education based on these noncognitive skills decline rapidly with age. Are Head Start, prenatal care, or child nutrition policies worthwhile? Very likely. What about pushing unprepared children to attend College? Less clear.
2. Other estimates of the marginal return to education are low.
Heckman has another paper with coauthors in which he attempts to estimate more rigorously the marginal impact of more education, in particular of more College. Even the Instrumental Variable estimates discussed above may be misleading: they measure the impact on the individuals whom that particular instrument induced into treatment (i.e., more education), and this need not be the same population as those who would be induced to get more education by some given policy change.
Instead, Heckman constructs an estimate designed to get exactly at the impact of more College on outcomes. The basic logic of his approach is to find individuals who had a low ex ante likelihood of attending College but who went anyway. These serve as a proxy for the individuals targeted by, say, a program to induce more people to attend College.
For a sample of white males from the NLSY, we establish that marginal expansions in college attendance attract students with lower returns than those enjoyed by persons currently attending college. The contrast between what conventional IV measures and the marginal return to a policy can be stark. For example, while the conventional IV estimate is 0.0951, the estimated marginal return to a policy that expands each individual’s probability of attending college by the same proportion is only 0.0148. This policy induces students who should not attend college to attend it. Too many people go to college. [Emphasis added]
Note in particular that his estimates are consistent with high "IV" estimates based on the set of instruments he was able to use here. So even if the "identified" estimates in the IV literature are correct, they do not necessarily serve as useful diagnostics of whether College expansion programs are worthwhile.
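The gap between the two numbers in the quote (0.0951 versus 0.0148) is what "selection on gains" produces mechanically: if the people with the highest returns to College are already the most likely to attend, then the students a policy pulls in at the margin have systematically lower returns than the current attendee pool. A toy simulation of my own (invented parameters, only qualitatively matching Heckman's contrast) illustrates the mechanism:

```python
import numpy as np

# Toy model of selection on gains: each person has their own return to
# College, and people with higher returns are more likely to attend.
rng = np.random.default_rng(2)
n = 500_000

beta = rng.normal(loc=0.06, scale=0.05, size=n)      # individual return
latent = 2.0 * beta + rng.normal(scale=0.1, size=n)  # propensity to attend
threshold = np.quantile(latent, 0.60)                # top 40% attend today

attendees = beta[latent > threshold]
# A policy nudging attendance upward pulls in people just below the current
# threshold: the "marginal" students (here, the 50th-60th percentile band).
marginal = beta[(latent <= threshold) & (latent > np.quantile(latent, 0.50))]

print(f"Average return, current attendees: {attendees.mean():.3f}")
print(f"Average return, marginal entrants: {marginal.mean():.3f}")
```

The attendee average lands well above the true population mean return while the marginal entrants sit much closer to it, which is why an IV estimate driven by current attendees can look healthy even when the policy-relevant marginal return is small.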
I’ll acknowledge that there’s substantial uncertainty about this question and much that we don’t know. I’m not particularly on one “side” in this debate. But this is such a difficult question to answer because people who seek more education would likely have done well anyway.
What I can say is that the most effective policy interventions here have little to do with simply broadening access to education. All sorts of early childhood intervention techniques seem to yield positive results. A number of charter school, school choice, and voucher experiments have produced institutional improvements in the quality of education while lowering its cost.