"Why most published research findings are false" is not, as the title of an academic paper, likely to win friends in the ivory tower. But it has certainly influenced people. The paper it introduced was published in 2005 by John Ioannidis, an epidemiologist who was then at the University of Ioannina, in Greece, and is now at Stanford. It exposed the ways, most notably the overinterpreting of statistical significance in studies with small sample sizes, that scientific findings can end up being "irreproducible" — or, as a layman might put it, wrong.

Ioannidis has been waging war on sloppy science ever since, helping to develop a discipline called meta-research (i.e., research about research). Later this month, that battle will be institutionalized, with the launch of the Meta-Research Innovation Center at Stanford.

METRICS, as the new laboratory is to be known for short, will connect enthusiasts of the nascent field in such corners of academia as medicine, statistics and epidemiology, with the aim of solidifying the young discipline. Ioannidis and the lab's co-founder, Steven Goodman, will (for this is, after all, science) organize conferences at which acolytes can meet in the world of atoms, rather than just online. They will create a "journal watch" to monitor scientific publishers' work and to shame laggards into better behavior. And they will spread the message to policymakers, governments and other interested parties, in an effort to stop them from making decisions on the basis of flaky studies.

All this in the name of the center's nerdishly valiant mission statement: "Identifying and minimizing persistent threats to medical-research quality."

Irreproducibility is one such threat — so much so that there is an (admittedly tongue-in-cheek) publication called the Journal of Irreproducible Results. Some fields are making progress, though. In psychology, the Many Labs Replication Project, supported by the Center for Open Science, an institute of the University of Virginia, has rerun 13 experiments testing widely accepted theories. Only 10 were validated. The center has also launched what it calls the Cancer Biology Reproducibility Project, to look at 50 recent oncology studies.

Until now, however, according to Ioannidis, no one has tried to find out whether such attempts at revalidation actually have had any impact on the credibility of research. METRICS will try to do this, and will make recommendations about how future work might be improved and better coordinated — for the study of reproducibility should, like any branch of science, be based on evidence of what works and what does not.

Wasted effort is another scourge of science that the lab will look into. A recent series of articles in the Lancet noted that, in 2010, about $200 billion (an astonishing 85 percent of the world's spending on medical research) was squandered on studies that were flawed in their design, redundant, never published or poorly reported. METRICS will support efforts to tackle this extraordinary inefficiency, and will itself update research about the extent to which randomized-controlled trials acknowledge the existence of previous investigations of the same subject. If the situation has not improved, METRICS and its collaborators will try to design new publishing practices that discourage bad behavior among scientists.

There is also Ioannidis' pet offender: publication bias. Not all studies that are conducted get published, and the ones that do tend to be those that have significant results. That leaves a skewed impression of the evidence.

Researchers have been studying publication bias for years, using various statistical tests. Again, though, there has been little reflection on these methods and their comparative effectiveness. They may, according to Ioannidis, be giving both false negatives and false positives about whether or not publication bias exists in a particular body of studies.
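A toy simulation makes the mechanism concrete. The sketch below is only an illustration of the problem, not one of the detection tests under study: it runs many small trials of a modest real effect, "publishes" only the statistically significant positive ones, and the published record ends up overstating the effect several times over.

    # Toy simulation of publication bias (illustrative only; not a METRICS method).
    # Many small trials of a modest real effect are run, but only the statistically
    # significant positive ones are "published", so the published record exaggerates
    # the effect.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_effect, n_per_arm, n_studies = 0.2, 20, 2000   # small real effect, tiny samples

    published = []
    for _ in range(n_studies):
        treated = rng.normal(true_effect, 1.0, n_per_arm)
        control = rng.normal(0.0, 1.0, n_per_arm)
        t, p = stats.ttest_ind(treated, control)
        if p < 0.05 and t > 0:                           # only "positive" results see print
            published.append(treated.mean() - control.mean())

    print(f"true effect: {true_effect}")
    print(f"mean published effect: {np.mean(published):.2f}")   # several times larger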

Ioannidis plans to run tests on the methods of meta-research itself, to make sure he and his colleagues do not fall foul of the very criticisms they make of others. "I don't want," he says, "to take for granted any type of meta-research is ideal and efficient and nice.

"I don't want to promise that we can change the world — although this is probably what everybody has to promise to get funded nowadays."

Copyright 2013 The Economist Newspaper Limited, London. All Rights Reserved. Reprinted with permission.