Topic: Grant Review
Comments by Donald Forsdyke
Queen’s University, Canada
6/30/2015

Proposed Actions

Broadly, there are a variety of ways of funding biomedical research; let us call them system A, system B, system C, and so on. Each system is likely to display distinct selectivities, filtering out some researchers and embracing others. That the same people will always “rise to the top,” whatever the system, is unlikely. Since we aim to optimize progress in biomedical research, we need to choose the system that selects the researchers best able to contribute to this goal.

In the 1940s, when the US funding system began to assume the form it has today, there was little thought that a choice needed to be made between alternative systems (1). To most it seemed obvious that, as with so many problems that confront our society, we should choose a committee of well-meaning “experts,” ensure that they have the relevant information (i.e. the grant proposals), and then follow their advice. In other words, one system (say system A) was adopted without much thought about possible alternatives.

Of course, certain people “rose to the top,” progress was made and, by some strangely circular reasoning, this was trumpeted as showing the wisdom of the original system “designers.” We in Canada adopted a similar system, but, funds being more limited, the crunch now so painfully acknowledged in the USA came much earlier (in the 1980s). A group of us (“Canadians for Responsible Research Funding”) explored various alternatives. When we went back to first principles, it became obvious why system A was not working. From those principles, it was also apparent that there existed systems B and C that would probably work much better. There were many papers, and I wrote a book on the topic (2).

Unfortunately, the decision whether to explore the latter systems was very much in the hands of those who had benefitted most from system A. Like the Varmus group in the USA, they wrung their hands and deplored the problems, but called for something virtually impossible: carefully designed data-gathering to compare the relative merits of the systems. Half-hearted, small-scale studies kept statisticians busy, and that was all.

What is needed is a proper study of the abundant literature on the subject, a careful description of the alternatives and how they might come into operation, and then their large-scale adoption, for better or worse. In Canada I came up with one such system, Bicameral Review, and others came up with other alternatives. None of these has been properly implemented. The reason why we just go round and round in bureaucratic circles was anticipated by Jevons (3): “Asking researchers about research evaluation is like asking a bird about aerodynamics.” This was why we took great pains to work from first principles, unhampered as much as possible by our professional blinkers. Unfortunately, those whom we have tried to convince do not seem so liberated. More details may be found in my peer review web-pages (4).

1. Forsdyke DR (1993) On giraffes and peer review. FASEB J 7, 619-621.
2. Forsdyke DR (2000) Tomorrow’s Cures Today? How to Reform the Health Research System. Harwood Academic, Amsterdam.
3. Jevons FR (1973) Science Observed. Allen & Unwin, London.
4. Forsdyke DR. Peer Review web-pages (http://post.queensu.ca/~forsdyke/peerrev.htm).