Thursday, September 19, 2013

Medical research often exaggerates studies' success


Researchers seeking new medical treatments need to be much stricter in their animal studies and make their results -- and especially their failures -- more accessible if they are to improve the chances of finding drugs for Alzheimer's disease and other conditions that have been notoriously stubborn to treat, according to a new paper from Stanford.

An analysis of nearly 4,500 animal studies found almost twice as many reports of positive results -- meaning the treatment being studied showed a statistically significant benefit -- as one would expect given the parameters of the study designs, said Dr. John Ioannidis, a professor of medicine at Stanford who was lead author of the paper.

In other words, based on statistics alone, it's clear that many animal studies are biased to produce positive results, which then are used to push treatments into human clinical trials, and in some cases put drugs on the market.
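To get a rough feel for that kind of "excess significance" argument, here is a minimal Python sketch. The study count echoes the article, but the assumed statistical power and the observed number of positive results are hypothetical placeholders, and the paper's actual analysis is more sophisticated than this simple binomial comparison.

# Hypothetical sketch: expected vs. observed "positive" animal studies.
# All numbers except the study count are made up for illustration.
from scipy.stats import binom

n_studies = 4445           # roughly the number of animal studies analyzed
assumed_power = 0.20       # assumed average chance each study detects a real effect
observed_positive = 1719   # hypothetical count of studies reporting significance

expected_positive = n_studies * assumed_power
print(f"Expected positives: ~{expected_positive:.0f}")
print(f"Observed positives:  {observed_positive}")

# Probability of seeing at least this many positives if each study really
# had only a 20% chance of success -- a simple binomial tail, effectively zero here.
p_excess = binom.sf(observed_positive - 1, n_studies, assumed_power)
print(f"P(observed or more | assumed power) = {p_excess:.2e}")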

That bias -- which is usually subtle and probably unintentional, Ioannidis said -- helps explain why drugs to treat stroke and Alzheimer's and a host of other challenging conditions often seem so promising in mice and other animal studies, only to fail in human experiments.

"For so many diseases we have dozens of interventions that seem to work in animals, or at least this is what reading the published literature would tell you," said Ioannidis, who is an expert in clinical trial design and statistical analysis.

But that supposed success in animal studies hasn't translated into drugs to treat Alzheimer's, for example. "For stroke, we have maybe one intervention that works. For Parkinson's disease we have a few," Ioannidis said. "There's a lot of discrepancy."

One basic flaw of animal studies is that on a purely biological basis, what works in a rat doesn't always work in a human being. Still, animal studies are critical to medical research, said Ioannidis. Studying rodents and other mammals is a key step in the process of moving a potential drug treatment from the laboratory into human subjects, to make sure therapies are safe and have at least some chance of working.

But professional pressure to produce positive results means that many scientists, perhaps without even being aware of it, report only the data that support their work. The problem is that when scientists then try to move their research into human experiments, the results don't hold up.

Human clinical trials operate under much higher standards and many more guidelines than animal studies, mostly to promote the safety of patients, but also to prevent accidental or intentional bias.

The "gold standard" for human research is the randomized, double-blinded, placebo-controlled trial. That means research subjects are randomly divided into a group that gets the treatment and a control group that gets a placebo. Both the subjects and the scientists are blinded, so that no one knows who's getting the treatment and it's nearly impossible to let bias enter into the results.

Also, human clinical trials must be registered with the U.S. National Institutes of Health, which means that the goals and the parameters of the study are made public before the trial even begins.

But no such universal registration exists for animal studies.

Bias can enter into the research in several ways, Ioannidis pointed out in his paper. A scientist might run dozens of statistical analyses on his data but publish only the positive results and ignore the neutral ones. Or he might change the parameters of a study after the research has been completed to put a more positive spin on the results.

Scientists are not necessarily trying to quash negative results, and they may not even be aware of their subtle bias, Ioannidis said. For example, if a drug therapy seems to be effective in a mouse after 12 hours, but not after 16 hours, it's understandable the scientist would be more interested in reporting -- and further studying -- the 12-hour effect.

"There is flexibility in the way data can be analyzed," Ioannidis said. "There may be 50 different outcomes being studied and five different statistical models that could be applied. One of those outcomes may be statistically significant, but if that's all that gets reported, and everything else is silenced, the message may be very misleading.

"In scientific studies there should be a clear understanding up front in what's going to happen with the data," he said.

Not helping matters is that medical journals often are reluctant to publish negative study results. So what ends up published is almost never an accurate representation of the actual research being done, said John Huguenard, a neuroscientist who was not part of Ioannidis' group.

"There's a bias in the publication industry against negative results. It's not exciting to publish them," Huguenard said. But he's optimistic that the research community is open to changing the way animal studies are conducted and making results more accessible.

___

http://www.thedenverchannel.com/lifestyle/health/medical-research-often-exaggerates-studies-success-091713?
