A new slogan has emerged in the culture: "Do your own research." On internet forums and social media platforms, people arguing about hotly contested topics like vaccines, climate change and voter fraud sometimes bolster their point or challenge their interlocutors by slipping in the acronym "DYOR."
"Two days after getting the jab, a friend of mine's friend had a heart attack," a Reddit user wrote recently in a discussion about COVID-19 vaccines. "I'm not saying they're connected, but D.Y.O.R."
The slogan, which appeared in conspiracy theory circles in the 1990s, has grown in popularity over the past decade as conflicts over the reliability of expert judgment have become more pronounced. It promotes an individualistic, freethinking approach to understanding the world: Don't be gullible — go and find out for yourself what the truth is.
That may seem to be sound advice. Isn't it always a good idea to gather more information before making up your mind about a complex topic?
In theory, perhaps. But in practice the idea that people should investigate topics on their own, instinctively skeptical of expert opinion, is often misguided. As psychological studies have repeatedly shown, when it comes to technical and complex issues like climate change and vaccine efficacy, novices who do their own research often end up more misled than informed, the exact opposite of what DYOR is supposed to accomplish.
Consider what can happen when people begin to learn about a topic. They may start out appropriately humble, but they can quickly become unreasonably confident after just a small amount of exposure to the subject. Researchers have called this phenomenon the "beginner's bubble."
In a 2018 study, for example, one of us (Professor Dunning) and the psychologist Carmen Sanchez asked people to try their hand at diagnosing certain diseases. (All the diseases in question were fictitious, so no one had any experience diagnosing them.) The participants attempted to determine whether hypothetical patients were healthy or sick, using symptom information that was helpful but imperfect, and they received feedback after every case about whether they were right or wrong. Given the limited symptom information provided, the participants should have rendered their judgments with some uncertainty.
How did these would-be doctors fare? At the start, they were appropriately cautious, offering diagnoses without much confidence in their judgments. But after only a handful of correct diagnoses, their confidence shot up, far beyond what their actual rates of accuracy justified. Only later, as they made more mistakes, did their confidence level off to a degree more in line with their proficiency.