A key issue on the stage throughout the first two rounds of Democratic debates has been health care reform, with much focus on universal access. The United States is, after all, the "only major country on Earth" that does not offer health care to all of its citizens, as Sen. Bernie Sanders, I-Vt., has put it. But this critical question remains unaddressed: Can universal access to health care really solve the problem of health inequality in the United States?
The answer is no, the disappointing lesson learned by most European countries with universal access. Perhaps the most telling example is Britain's National Health Service (NHS), created on the heels of World War II. The NHS's promise of "equal access for equal need" was meant, in part, to tackle the vastly unequal health outcomes and life expectancies of Brits across the socioeconomic spectrum. But over subsequent decades, it became clear that the NHS had left health inequalities basically untouched.
That's because health care systems are designed around the care of the individual. They were never designed to manage some of the most critical causes of poor health, which have socioeconomic origins. Poverty, homelessness, and inadequate housing and nutrition are improved by greater access to resources, not by more doctor visits. In other words, universal access is not enough. To address health inequality, we must revise a fundamental and deep-rooted misunderstanding of what good health is and what it requires.
In the U.S., we owe this misunderstanding chiefly to the activities of professional medical organizations selling a new idea of health care to potential patients in the 1920s. Before the 1920s, a motley crew of practitioners dominated medical work in the United States. Their training ranged from a few months of casual lecture attendance to many years of intensive hospital work, and they practiced in clinical facilities that varied from state-of-the-art hospitals to dilapidated structures packed to the rafters with beds. The public was not especially impressed by the vastly varying practices and outcomes that resulted.
This inconsistency began to decline, though, in the early 20th century. The Carnegie Foundation's famous Flexner Report of 1910 began to push "irregular" medical schools out of existence and standardize medical training. In the 1920s, organizations such as the American College of Surgeons began to do the same for hospitals, establishing a minimum standard that hospitals had to meet to gain the ACS's coveted stamp of approval.
For this new standardized health care to succeed, it needed a more consistent stream of "consumers": That is, these new "regular" doctors needed patients. The ACS partly solved this problem by extracting promises from major industrial organizations to use only ACS-approved hospitals for workers injured on the job. But a question remained: How could doctors expand their patient base and encourage individuals to voluntarily show up to hospitals and trust a stranger to address their health woes?
Customer loyalty had to be built, as burgeoning industrial titans during the same period, such as Henry Ford, understood. And this is exactly what medical organizations did, in the process creating a new understanding of health as personal, private and individually focused.
In 1923, the American Medical Association launched "Hygeia," a journal that aimed to endear medical practitioners to the public by teaching readers to appreciate how medicine could improve the lives of individuals. Likewise, over this same period, the ACS organized a series of "health rallies" across the country, showcasing its standardized, regulated brand of health care as one that would revolutionize personal well-being.