Toward a reform of the way science is published

Talk of a “crisis of reproducibility” in psychology began in 2011: on the one hand, research showed how easy it was for a researcher in this discipline to find “false positives,” that is, data that seems to “prove” the researcher right; on the other hand, a heated debate arose over a “classic” 1996 experiment, cited more than 2,000 times since then, but never reproduced.
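To make the “false positives” problem concrete, here is a minimal simulation sketch (ours, not the protocol of the 2011 research; the sample sizes and the five-outcome scenario are illustrative assumptions). It shows that a researcher who measures several outcomes and reports whichever one crosses the usual p < 0.05 threshold will “discover” a nonexistent effect far more often than the nominal 5% of the time, which is exactly the flexibility that the pre-registration discussed below is meant to remove.

```python
# A minimal sketch of why false positives are easy to produce: every
# simulated "experiment" below compares two groups drawn from the SAME
# distribution, so any significant result is spurious. Testing a single
# pre-chosen outcome stays near the nominal 5% error rate; picking the
# best of several outcomes after the fact inflates it sharply.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_experiments = 10_000  # simulated studies with no true effect
n_per_group = 20        # participants per group
n_outcomes = 5          # outcomes measured in each study

single_hits = 0  # "pre-registered": one outcome chosen in advance
best_hits = 0    # "flexible": report whichever outcome looks best
for _ in range(n_experiments):
    p_values = [
        ttest_ind(rng.normal(size=n_per_group),
                  rng.normal(size=n_per_group)).pvalue
        for _ in range(n_outcomes)
    ]
    single_hits += p_values[0] < 0.05
    best_hits += min(p_values) < 0.05

print(f"pre-registered outcome: {single_hits / n_experiments:.1%}")  # ~5%
print(f"best of {n_outcomes} outcomes:      {best_hits / n_experiments:.1%}")  # ~23%
```

With five independent outcomes, the chance that at least one crosses the threshold by luck alone is 1 − 0.95^5, or roughly 23%, which is what the second printed figure approximates.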

The word “reproducibility” refers to a principle that has underpinned scientific progress for centuries: to be accepted, a “discovery” must be reproducible by other researchers. In other words, a single study making a “revolutionary” claim is not enough.

But the idea of a reproducibility crisis has since spread to disciplines other than psychology. Nutrition studies, for instance, have lost much of their lustre over the last decade: too many of them seem tailor-made to attract public and media attention, to the detriment of their scientific quality.

Finally, although it had long been suspected that much of the so-called “preliminary” biomedical research would not lead to promising advances once examined more deeply, the magnitude of the problem was staggering, notes a New Scientist journalist: in 2011, an internal study by the company Bayer concluded that two-thirds of the leads the company had pursued on the basis of university research were dead ends. In 2012, the company Amgen added that of 53 studies considered important, only 6 could be reproduced. This is partly why, over the last decade, science journalists have been reminded more often than before that great care must be taken before reporting on a study that involves “only” mice, or, worse, before highlighting its “encouraging” results.

Last but not least, there are the pressures placed on researchers by the university system and the granting agencies to publish as often as possible — the famous “publish or perish.” For example, there has long been a bias toward “positive” results, because researchers whose study produced negative results are less likely to take the time to write it up. And for some time now, there have been complaints about the temptation of some journals to favour attention-grabbing titles. One possible solution: impose “pre-registration” of research, which would force researchers to describe their hypotheses and objectives in advance, and would prevent them from changing these along the way to fit the criteria of certain journals.

Another possible solution: stop giving so much weight, in researchers’ evaluations, to the “impact factor” of journals. At present, a publication in a journal with a high impact factor earns more “points” for career advancement, increasing the temptation to publish research on “catchier” topics.
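For reference, a sketch of the arithmetic behind the term (this is the standard definition used by citation databases, not something spelled out in the article): a journal’s impact factor for year $y$ is

$$\mathrm{IF}_y = \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}},$$

where $C_y(t)$ is the number of citations received in year $y$ by items the journal published in year $t$, and $N_t$ is the number of citable items it published in year $t$. Nothing in this formula measures the quality of any individual article, which is why its use in evaluating individual researchers is contested.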

The “pre-registration” approach is promoted in particular by the Reproducibility Network, a British organization that has set itself the task of proposing reforms to the way the scientific community works. The Center for Open Science, an American organization, offers a “badge” identifying research that has been pre-registered, and another for studies whose researchers have agreed to share all of their data. And the National Institutes of Health, the largest health research granting agency in the United States, will require all of its grant recipients to share their data starting next year.

In the recent New Scientist report, science journalist Clare Wilson concludes that everyone involved in these reforms seems to agree that progress has been made in responding to the alarm bells sounded over the past decade: “Almost everyone said it had started well, but there was still a long way to go.”
