Ironic debunking

Who Will Debunk The Debunkers? Daniel Engber asks in a fascinating piece at FiveThirtyEight. He tells the story of “meta-skeptic” Mike Sutton, who has made it his specialty to doubt other doubters’ explanations. The first few paragraphs, about the iron content of spinach, make the point that a good debunking story is often too clever to be true. Likewise with Semmelweis – the received version is probably too simple. Finally, Sutton comes off as something of a megalomaniac when it comes to his work on Darwin, adding yet another layer to the story.

Ragnar Frisch on economic planning

In the early 1960s, Ragnar Frisch had high hopes for future Soviet economic development:

The blinkers will fall once and for all at the end of the 1960s (perhaps before). At this time the Soviets will have surpassed the US in industrial production. But then it will be too late for the West to see the truth. (Frisch 1961a)

That is from an article by Sæther and Eriksen in the new Econ Journal Watch. The paper contains much more than this angle.

It must be said that it was quite common for economists at the time to believe that the Soviet system was sustainable. Paul Samuelson, for instance, repeatedly pushed his prediction of when Soviet GNP would overtake American GNP further into the future. If anyone knows of any modern Norwegian debate about this, I would be interested to hear about it.

H/t: MR, Arnold Kling.


“Oslo is the cradle of rigorous causal inference”

Those are the words of James Heckman, from a lecture (slides, paper) at the University of Oslo last week. In particular, it is Trygve Haavelmo’s 1943 paper The Statistical Implications of a System of Simultaneous Equations (pdf) that gets the honor of being “the first rigorous treatment of causality”. A summary:

[Figure: Heckman 2013, Haavelmo’s contributions to causality]

According to Heckman, Haavelmo built on Marshall’s general idea of ceteris paribus to define fixing (“an abstract operation that assigns independent variation to the variable being fixed” (p. 8)), which is to be distinguished from classical statistical conditioning (“a statistical operation that accounts for the dependence structure in the data” (p. 8)). This fixing occurs hypothetically, so causality becomes defined in terms of thought experiments, in line with the earlier ideas of Ragnar Frisch. In Heckman’s words: “Causal effects are not empirical statements or descriptions of actual worlds, but descriptions of hypothetical worlds obtained by varying – hypothetically – the inputs determining outcomes.” (pp. 2-3)
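
To make the fixing/conditioning distinction concrete, here is a minimal simulation sketch of my own (it is not from Heckman’s lecture; the linear model, coefficient, and variable names are assumptions chosen purely for illustration). In data where X and Y share an unobserved cause, conditioning on X yields a slope that mixes the structural effect with the dependence structure in the data, while fixing X – assigning it independent variation, as in an experiment – recovers the structural coefficient:

```python
# Illustration (mine, not Heckman's or Haavelmo's): in a linear model with a
# common cause U, conditioning on X gives a slope contaminated by the X-U
# dependence, while fixing X (independent variation) recovers beta.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
beta = 1.0                      # structural (causal) effect of X on Y

# Observational world: X and Y share the unobserved cause U.
U = rng.normal(size=n)
X = U + rng.normal(size=n)
Y = beta * X + U + rng.normal(size=n)

# Conditioning: the regression slope E[Y | X] reflects both beta and
# the dependence structure (here beta + Cov(U, X)/Var(X) = 1.5).
slope_conditioning = np.cov(X, Y)[0, 1] / np.var(X)

# Fixing: X is assigned independent variation, cutting the link to U.
X_fixed = rng.normal(size=n)    # hypothetical/experimental assignment
Y_fixed = beta * X_fixed + U + rng.normal(size=n)
slope_fixing = np.cov(X_fixed, Y_fixed)[0, 1] / np.var(X_fixed)

print(f"conditioning: {slope_conditioning:.2f}")  # ~1.5
print(f"fixing:       {slope_fixing:.2f}")        # ~1.0 = beta
```

With a million draws the two slopes come out near 1.5 and 1.0 respectively – the gap between describing the actual world and describing a hypothetical one.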

Much of the lecture and paper is a polemic against Pearl’s do-calculus. Those interested in that debate can read Heckman and Pinto’s paper and Pearl’s comments on it, watch a conference discussion they had last year, or read what more able people than me have blogged about before. Not debatable, though, is that Heckman knows how to please his hosts.

Measurement is about learning and improvement, not control

Chris Blattman rants against the resistance to estimating cost-effectiveness that he has encountered in the aid world. One thing he writes about is the “we do not experiment on people” argument (counter: there are always some who get the stuff and some who do not).

Another expression of reluctance towards measurement that I have encountered is: “We understand that we should be held accountable to donors, but why the need for such tight control? Don’t they trust us?” But this gets the rationale for measuring wrong: the primary reason is to learn about the effects of what we do in order to do it better. Even if you are not accountable to anyone, measurement can help you learn what you do best and how to improve.