Science is flawed. It’s time we embraced that.

by Julia Belluz and Steven Hoffman on May 13, 2015

In his book Derailed, about his fall from academic grace, the Dutch psychologist Diederik Stapel explained his preferred method for manipulating scientific data in detail that would make any nerd’s jaw drop:

“I preferred to do it at home, late in the evening… I made myself some tea, put my computer on the table, took my notes from my bag, and used my fountain pen to write down a neat list of research projects and effects I had to produce…. Subsequently I began to enter my own data, row for row, column for column…3, 4, 6, 7, 8, 4, 5, 3, 5, 6, 7, 8, 5, 4, 3, 3, 2. When I was finished, I would do the first analyses. Often, these would not immediately produce the right results. Back to the matrix and alter data. 4, 6, 7, 5, 4, 7, 8, 2, 4, 4, 6, 5, 6, 7, 8, 5, 4. Just as long until all analyses worked out as planned.”

In 2011, when Stapel was suspended over research fraud allegations, he was a rising star in social psychology at Tilburg University in the Netherlands. He had conducted attention-grabbing experiments on social behavior, looking at, for example, whether litter in an environment encouraged racial stereotyping and discrimination. Yet that paper — and at least 55 others, as well as 10 dissertations written by students he supervised — were built on falsified data.

Stories like Stapel’s are what most people think of when they think about how science goes wrong: an unethical researcher methodically defrauding the public.

But outright fraud is just one potential derailment from truth. And it’s actually a relatively rare occurrence.

Recently, the conversation about science’s wrongness has gone mainstream. You can read, in publications like Vox, the New York Times, or the Economist, about how the research process is far from perfect — from the inadequacies of peer review to the fact that many published results simply can’t be replicated. The crisis has gotten so bad that Richard Horton, editor of the medical journal The Lancet, recently lamented, “Much of the scientific literature, perhaps half, may simply be untrue.”

When people talk about flaws in science, they’re often focusing on medical and life sciences, as Horton is. But that might simply be because these fields are furthest along in auditing their own problems. Many of the structural problems in medical science could well apply to other fields, too.

That science can fail, however, shouldn’t come as a surprise to anyone. It’s a human construct, after all. And if we simply accepted that science often works imperfectly, we’d be better off. We’d stop considering science a collection of immutable facts. We’d stop assuming every single study has definitive answers that should be trumpeted in over-the-top headlines. Instead, we’d start to appreciate science for what it is: a long and grinding process carried out by fallible humans, involving false starts, dead ends, and, along the way, incorrect and unimportant studies that only grope at the truth, slowly and incrementally.

Acknowledging that fact is the first step toward making science work better for us all.

From study design to dissemination of research, there are dozens of ways science can go off the rails. Many of the scientific studies that are published each year are poorly designed, redundant, or simply useless. Researchers looking into the problem have found that more than half of studies fail to take steps to reduce biases, such as blinding whether participants receive the treatment or a placebo.

In an analysis of 300 clinical research papers about epilepsy — published in 1981, 1991, and 2001 — 71 percent were categorized as having no enduring value. Of those, 55.6 percent were classified as inherently unimportant and 38.8 percent as not new. All told, according to one estimate, about $200 billion — or the equivalent of 85 percent of global spending on research — is routinely wasted on flawed and redundant studies.

After publication, there’s the well-documented irreproducibility problem — the fact that researchers often can’t validate findings when they go back and run experiments again. Just last month, a team of researchers published the findings of a project to replicate 100 of psychology’s biggest experiments. They were only able to replicate 39 of the experiments, and one observer — Daniele Fanelli, who studies bias and scientific misconduct at Stanford University in California — told Nature that the reproducibility problem in cancer biology and drug discovery may actually be even more acute.

Indeed, another review found that researchers at Amgen were unable to reproduce 89 percent of landmark cancer research findings for potential drug targets. (The problem even inspired a satirical publication called the Journal of Irreproducible Results.)

So why aren’t these problems caught prior to publication of a study? Consider peer review, in which scientists send their papers to other experts for vetting prior to publication. The idea is that those peers will detect flaws and help improve papers before they are published as journal articles. Peer review won’t guarantee that an article is perfect or even accurate, but it’s supposed to act as an initial quality-control step.

Yet there are flaws in this traditional “pre-publication” review model: it relies on the goodwill of scientists who are increasingly pressed for time and may not properly critique a work; it’s subject to the biases of a select few; and it’s slow. So it’s no surprise that peer review sometimes fails. These factors raise the odds that even in the highest-quality journals, mistakes, flaws, and even fraudulent work will make it through. (“Fake peer review” reports are also now a thing.)

And that’s not the only way science can go awry. In his seminal paper “Why Most Published Research Findings Are False,” Stanford professor John Ioannidis developed a mathematical model to show how broken the research process is. Researchers run badly designed and biased experiments, too often chasing sensational and unlikely theories instead of more plausible ones. That ultimately distorts the evidence base — and what we think we know to be true in fields like health care and medicine.
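To give a rough sense of how that model works (a simplified sketch of the idea, not the paper’s full analysis): Ioannidis frames the question as the chance that a claimed positive finding is actually true. If R is the prior odds that a tested relationship is real, α is the false-positive rate (conventionally 0.05), and 1 − β is the study’s statistical power, then that chance, the positive predictive value, is roughly

\[
\mathrm{PPV} = \frac{(1-\beta)\,R}{R - \beta R + \alpha}.
\]

Plug in numbers typical of a long-shot hypothesis tested by an underpowered study, say R = 0.1 and power of 20 percent, and PPV works out to about 0.02/0.07, or under 30 percent: most such positive findings would be false.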

All of these problems can be further exacerbated through the dissemination of research, when university press offices, journals, or research groups push out their findings for public consumption through press releases and news stories.

One recent British Medical Journal study looked at 462 press releases about human health studies that came from 20 leading UK research universities in 2011. The authors compared these press releases with both the actual studies and the resulting news coverage. What they wanted to find out was how overblown claims got made.

Take, for example, the notion that coffee can prevent cancer. Did that come from the study itself, or from the press release, or was it a figment of the journalist’s imagination? The researchers discovered that university press offices were a major source of overhype: more than one-third of press releases contained either exaggerated claims of causation (when the study itself only suggested correlation), unwarranted implications about animal studies for people, or unfounded health advice.

These exaggerated claims then seeped into news coverage. When a press release included actual health advice, 58 percent of the related news articles would do so, too (even if the actual study provided no such advice). When a press release confused correlation with causation, 81 percent of related news articles would, too. And when press releases made unwarranted inferences about animal studies, 86 percent of the journalistic coverage did, too. Therefore, the study authors concluded, “The odds of exaggerated news were substantially higher when the press releases issued by the academic institutions were exaggerated.”

Worse, the scientists were usually present during the spinning process, the researchers wrote: “Most press releases issued by universities are drafted in dialogue between scientists and press officers and are not released without the approval of scientists and thus most of the responsibility for exaggeration must lie with the scientific authors.”

In another 2012 study, also published in the BMJ, researchers examined press releases from major medical journals and compared them with the newspaper articles generated. They found a direct link between the scientific rigor in the press release and rigor in the related news stories. “High quality press releases issued by medical journals seem to make the quality of associated newspaper stories better,” they wrote, “whereas low quality press releases might make them worse.”

Meanwhile, it’s difficult for many people to access a great deal of scientific research — impeding the free flow of information.

Sometimes the problem manifests rather innocuously: in an analysis of more than a million hyperlinks in research papers published between 1997 and 2012, researchers found that between 13 percent and nearly 25 percent of hyperlinks in the scientific journals they looked at were broken.

Other times, it’s less innocuous. Right now, taxpayers fund a lot of the science that gets done, yet journals charge users ludicrous sums of money to view the finished product. American universities and government groups spend $10 billion each year to access science. The British commentator George Monbiot once compared academic publishers to the media tycoon Rupert Murdoch, concluding that the former were more predatory. “The knowledge monopoly is as unwarranted and anachronistic as the corn laws,” he wrote. “Let’s throw off these parasitic overlords and liberate the research that belongs to us.”

Despite this outrageous setup and all the attention to it over the past 20 years, the status quo is still firmly entrenched, especially when it comes to health research. All of us — physicians, policymakers, journalists, curious patients — can’t access many of the latest research findings, unless we fork over a hefty sum or it happens to be published in an open-access journal.

Because of these now well-known problems, it’s not unusual to hear statements like those from The Lancet editor Richard Horton that “Much of the scientific literature, perhaps half, may simply be untrue.” He continued: “Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.”


On the bright side, these troubles, and the crisis of confidence in science that has followed, have given rise to an unprecedented push to fix broken systems. An open-data movement — which seeks to share or publish the raw data on which scientific publications are based — has gained traction around the world. So has the open-access movement, which is pushing to put all research findings in the public domain rather than leave them languishing behind paywalls.

In recent years, there has also arisen a “post-publication peer review” culture. For example, a new website, PubPeer, allows scientists to comment on each other’s articles, critiquing and discussing works anonymously, as soon as they’ve been published in journals — kind of like a comments section on a news site. This has opened up the space for criticism beyond the traditional peer review process. It has also helped uncover science fraud and weed out problematic studies.

“Meta-research” is becoming increasingly prominent and unified across scientific disciplines. Last year, Stanford launched the Meta-Research Innovation Center to bring together, in one place, researchers who study research itself. The center is guided by this mission statement: “Identifying and minimizing persistent threats to medical-research quality.” These meta-researchers apply the scientific method to study science itself and find out where it falters.

With the growth of research on research has come another important insight: that we need to stop giving too much credence to single studies, and instead rely more on syntheses of many studies, which bring together all findings on a given topic and minimize the biases inherent within each particular study.

More and more fields are also working on reproducibility projects, like the one we noted in psychology. (Some have even dubbed it a “reproducibility revolution.”) Hopefully this revolution will fix some of science’s flaws.

In the meantime, recognizing that these flaws are frequent and often inevitable might actually give us a healthier appreciation for how science works — and help us think about more ways to improve it.

Long before it was mainstream to criticize science, Sheila Jasanoff, a Harvard professor, was arguing that science — and scientific facts — are socially constructed, shaped more by power, politics, and culture (the “prevailing paradigm”) than by societal need or the pursuit of truth. “Scientific knowledge, in particular, is not a transcendent mirror of reality,” she writes in her book States of Knowledge. “It both embeds and is embedded in social practices, identities, norms, conventions, discourses, instruments and institutions — in short, in all the building blocks of what we term the social.” In a more recent conversation, she cautioned, “There is something terribly the matter with projecting an idealistic view of science.”

Whether or not you believe in the social constructivist argument, the underlying assumption it makes is one that people too often fail to appreciate about science: it is carried out by people, and people are flawed; therefore, science will, inevitably, be flawed. Or, as Jasanoff puts it, “Science is a human system.”

A failure to appreciate how science works, its faults and limitations, breeds mistrust. At a meeting at the National Academy of Sciences this month, health law professor and author Tim Caulfield pointed out that one of the things readers often use against his pro-science arguments is that “science is wrong” anyway, so why bother. In other words, people hear about research misconduct or fraud, see the contradictory studies out there, and conclude that they can’t trust science.

Instead, if people saw science as a human construction — the result of a tedious, incremental process that can be imperfect in its pursuit of truth — both science and the public understanding of science would be better off. We could learn to trust science for what it is and avoid misunderstandings around what it is not.

While it may seem that critics like Jasanoff scoff at science, that’s not the case. She, for one, has actually made criticizing science her life’s work — a testament to her reverence for science and her desire to improve its methods. Over the past 20 years, she’s helped build a little-known field called “science and technology studies,” or just STS, that is now starting to gain wider prominence. Politics has political science to study its functioning. There are literary studies for fiction and poetry. STS studies science itself — how it’s carried out, what it gets right, where it goes wrong, its harms and benefits.

“We need to change what the starting assumption ought to be,” Jasanoff explains. “If it’s provisionality rather than truth, we need to build in the checks and balances around that.” As such efforts — like the reproducibility projects or post-publication peer review — gain traction, the scientific community is waking up to that fact.

Now the rest of us need to.


One Comment on “Science is flawed. It’s time we embraced that.”

  1. Peter

    Science is "done" by human beings, who desire a stable income and job, attention and compliments for their work… and sometimes a bit more: a big ego that needs a lot of "food," that fears someone might one day discover its world view is based on illusions, and that therefore reacts with dogmatism, manipulating data and trying to divert attention.

    In the case of Diederik Stapel, I can understand that he was under some kind of pressure to justify his position as professor. But then the responsibility can be split 50% to Diederik and 50% to the system, which should alleviate the pressure and stress of such "creative jobs." Indeed, Diederik was really creative in "asking questions." Now he exposes the lies and illusions so many people are ready to believe.

    And that is another problem. So many people who are not even scientists proclaim themselves "skeptics" and attack open-minded people on the largely anonymous internet. These pseudoskeptics do a lot of harm, because they camouflage themselves as "experts" on certain topics and seem to have some scientific background knowledge, but when confronted with facts they attack and even resort to swearwords.

    I think science is misused just as religion is misused. Some people are really honest and do a great job, but there are so many unscientific people who misuse science as a facade for promoting their ego-driven desires. Sad, but I have well-founded hope that it will soon change.
