My suspicion is that instead of clear walls between treatments that "work" and those that don't, things are more complicated and difficult for the government to figure out. There's a great piece in the New York Review of Books detailing how comparative effectiveness studies don't have a great history, partly due to the biases of the investigators themselves:
With other experts, I performed a "meta-analysis," i.e., a study bringing together data from multiple clinical trials. We concluded that erythropoietin significantly improved the health of cancer patients and we recommended it to them as their default option. But our analysis and guidelines were wrong. The benefits ultimately were shown to be minor and the risks of treatment sometimes severe, including stroke and heart attack.

It closes with:
After this failure, I came to realize that I had suffered from a "Pygmalion complex." I had fallen in love with my own work and analytical skills. In behavioral economics, this is called "overconfidence bias," by which we overestimate our ability to analyze information, make accurate estimates, and project outcomes. Experts become intoxicated with their past success and fail to be sufficiently self-critical.
The care of patients is complex, and choices about treatments involve difficult tradeoffs. That the uncertainties can be erased by mandates from experts is a misconceived panacea, a "focusing illusion."

Well worth reading. The departures from a perfect world of rational economic agents are real; but they don't have unambiguous lessons for the optimal balance between markets and government.