Speaking of science, and supplementing the post below, this:
The NIH claims it is going to conduct a replication study*
Via Science.org
Earlier this year, the National Institutes of Health (NIH) made an unusual offer to many of its 37,500 principal investigators: If you have a laboratory study you think could have a major impact on health—such as a mouse experiment testing a possible heart disease drug—we may pay for a contract lab to repeat the work to make sure it’s solid.
Only a few people applied to the pilot phase of NIH’s initiative [surprising no one — ED], which is finalizing its picks for the first handful of studies this month. But its leader at NIH says it has enough participants to study the feasibility of the program, which has the support of Congress and President-elect Donald Trump’s nominee to head NIH, Stanford University health economist Jay Bhattacharya. He recently told The Wall Street Journal that replication studies should be “a centerpiece of what the NIH does.”
For years, concerns have mounted that many basic biomedical experiments don’t hold up when another lab attempts them, casting doubt on plans to translate the work into a treatment. Cases of apparent scientific fraud, such as work underlying Alzheimer’s disease drugs that Science investigated, have added to worries about the integrity of these preclinical studies.
Sounds good, right? But then comes the (silent) kicker:
The initiative comes with a big caveat: The agency has no plans to make the resulting data public. That “limits the appeal and value,” says Tim Errington of the Center for Open Science (COS), a nonprofit that supports replication studies. Still, he says, the pilot “is a step in the right direction.”
PJMedia’s Ben Bartee has his doubts about the sincerity of the NIH in this matter:
NIH Exposed in Massive CYA Operation Ahead of Trump Takeover?
In what world is a publicly-funded agency like the NIH allowed to hide publicly-funded research from the public that would potentially expose its malfeasance — and this from the Most Transparent Administration in History™?
Good question.
*How much of a problem is this? It’s huge, and longstanding: here’s a BBC article dated February 22, 2017:
Most scientists 'can't replicate studies by their peers'
Science is facing a "reproducibility crisis" where more than two-thirds of researchers have tried and failed to reproduce another scientist's experiments, research suggests.
This is frustrating clinicians and drug developers who want solid foundations of pre-clinical research to build upon.
From his lab at the University of Virginia's Centre for Open Science, immunologist Dr Tim Errington runs The Reproducibility Project, which attempted to repeat the findings reported in five landmark cancer studies.
"The idea here is to take a bunch of experiments and to try and do the exact same thing to see if we can get the same results."
You could be forgiven for thinking that should be easy. Experiments are supposed to be replicable.
The authors should have done it themselves before publication, and all you have to do is read the methods section in the paper and follow the instructions.
Sadly nothing, it seems, could be further from the truth.
After meticulous research involving painstaking attention to detail over several years (the project was launched in 2011), the team was able to confirm only two of the original studies' findings.
Two more proved inconclusive and in the fifth, the team completely failed to replicate the result.
"It's worrying because replication is supposed to be a hallmark of scientific integrity," says Dr Errington.
Concern over the reliability of the results published in scientific literature has been growing for some time.
According to a survey published in the journal Nature last summer, more than 70% of researchers have tried and failed to reproduce another scientist's experiments.
Marcus Munafo is one of them. Now professor of biological psychology at Bristol University, he almost gave up on a career in science when, as a PhD student, he failed to reproduce a textbook study on anxiety.
"I had a crisis of confidence. I thought maybe it's me, maybe I didn't run my study well, maybe I'm not cut out to be a scientist."
The problem, it turned out, was not with Marcus Munafo's science, but with the way the scientific literature had been "tidied up" to present a much clearer, more robust outcome.
"What we see in the published literature is a highly curated version of what's actually happened," he says.
"The trouble is that gives you a rose-tinted view of the evidence because the results that get published tend to be the most interesting, the most exciting, novel, eye-catching, unexpected results.
"What I think of as high-risk, high-return results."
The reproducibility difficulties are not about fraud, according to Dame Ottoline Leyser, director of the Sainsbury Laboratory at the University of Cambridge.
That would be relatively easy to stamp out. Instead, she says: "It's about a culture that promotes impact over substance, flashy findings over the dull, confirmatory work that most of science is about."
She says it's about the funding bodies that want to secure the biggest bang for their bucks, the peer review journals that vie to publish the most exciting breakthroughs, the institutes and universities that measure success in grants won and papers published, and the ambition of the researchers themselves.
"Everyone has to take a share of the blame," she argues. "The way the system is set up encourages less than optimal outcomes."
“Less than optimal outcomes” — that’s a nice euphemism for fraud and sloppy science. And that’s for hard-science studies; the reproducibility record for psychology and other social science studies gives a new meaning to “dismal science”.