The dark art of science policy
Governments around the world invest heavily in science and innovation, but what types of spending work best?
Would it be better to provide R&D tax credits for all firms, or should we fund specific projects in strategically important industries?
Surprisingly, for the most part, we just can't answer these questions. Science and innovation policy is a dark art.
Medicine was once a similarly murky affair, but the invention of the randomised controlled trial sixty years ago means that when your doctor prescribes you a new drug, you can be confident of its effectiveness.
In such a trial, some patients in a group are randomly selected to receive a treatment, while the remainder receive a placebo. Only at the end of the trial is it revealed who received the treatment and who didn't, so that the treatment's effect can be compared to that of the placebo without bias.
Why couldn't we apply this sort of test to innovation investments? We can. In fact, in mid-2012 the newly formed Ministry of Business, Innovation and Employment inadvertently conducted such an experiment.
When it assessed the quality of the funding proposals that it had received, the Ministry did not ensure that each proposal received an equal number of external reviews. This potentially made their funding decisions subject to bias: even if two proposals were equally good, the proposal that by chance received more reviews would be more likely to have at least one negative review.
A cautious Ministry might be reluctant to fund proposals that received a negative review, even if all others were positive. Proposals that received more reviews would then be less likely to be funded than equally good proposals that received fewer.
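The arithmetic behind this worry is simple: if each review independently has some chance of being negative, then the chance of drawing at least one negative review grows with the number of reviews. A minimal sketch, assuming a hypothetical 20% chance that any single reviewer is negative (a figure chosen purely for illustration, not from the Ministry's data):

```python
def p_at_least_one_negative(p: float, n_reviews: int) -> float:
    """Chance of one or more negative reviews: 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n_reviews

p = 0.2  # hypothetical probability that a single review is negative
for n in (1, 2, 3):
    print(f"{n} review(s): {p_at_least_one_negative(p, n):.1%}")
```

Under this assumption a proposal with one review faces a 20% chance of a negative verdict, but an equally good proposal with three reviews faces nearly 49% — so a "no negative reviews" rule would penalise proposals simply for being reviewed more often.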
We now know that more than a third of the proposals that received one review were funded, while only a quarter of those that received two or more were successful. Was the Ministry too conservative in its funding decisions?
In this case the difference in success rates was not large enough to draw a solid statistical conclusion. While unintentional experiments like this can reveal much about the quality of our decision-making, it would no doubt be better to undertake such studies purposefully rather than by accident.
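The kind of check involved here is a standard two-proportion test. The sketch below uses hypothetical counts consistent with the fractions quoted above (the article does not give the actual numbers of proposals, so 21 of 60 versus 20 of 80 are illustrative assumptions only):

```python
import math

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Two-sided z-test for equality of two success proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 35% of 60 single-review proposals funded
# vs 25% of 80 multi-review proposals funded.
z, p = two_proportion_z_test(21, 60, 20, 80)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With samples of this size the p-value comes out well above the conventional 0.05 threshold, illustrating how a difference between "more than a third" and "a quarter" can fail to be statistically conclusive.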
And there are methods for studying the effectiveness of our investments in innovation that are fairer than the random allocation of external reviews. This new approach has become known as "the science of science policy".
It was recently used to test the quality of decision-making by the US National Institutes of Health (NIH), which invests billions of dollars every year in medical research. The conclusion? The projects rated poorly by the NIH (but funded nonetheless) produced just as much impact as those that were rated the best.
This is valuable information for an organisation that spends more than US$100 million annually on reviewing proposals. It seems that by taking a leaf out of medicine's book we may soon be able to settle many debates about innovation policy with hard data and the application of statistics rather than resorting to abstract argument.