One of the lasting consequences of the COVID-19 pandemic has been a decline of trust in public health experts and institutions. It is not hard to see why: America botched COVID testing, kept the schools closed for far too long, failed to vaccinate enough people quickly enough, and inflicted far more economic damage than was necessary — and through all this, public health experts often had the dominant voice.

In their defense, public health officials are trained to prioritize public safety above all else. And to their credit, many now recognize that any response to a public health crisis needs to consider the trade-offs inherent in any intervention. As Dr. Anthony Fauci recently told the New York Times, "I'm not an economist."

As it happens, I am. And my fear is that we are about to make the same mistake again — that is, trusting the wrong experts — on artificial intelligence.

Some of the greatest minds in the field, such as Geoffrey Hinton, are speaking out against AI developments and calling for a pause in AI research. Hinton left his AI work at Google, declaring that he was worried about misinformation, mass unemployment and future risks of a more destructive nature.

Anecdotally, I know from talking to people working on the frontiers of AI that many other researchers are worried too.

What I do not hear, however, is a more systematic cost-benefit analysis of AI progress. Such an analysis would have to consider how AI might fend off other existential risks — deflecting that incoming asteroid, for example, or developing better remedies against climate change — or how AI might cure cancer or otherwise improve our health. Predictions of doom often fail to take into account the risks to America and the world if we pause AI development.

I also do not hear much engagement with the economic arguments that, while labor market transitions are costly, freeing up labor has been one of the major modes of material progress throughout history. The U.S. economy has a remarkable degree of automation already, not just from AI, and currently stands at full employment. If need be, the government could extend social protections to workers in transition rather than halt labor-saving innovations.

Each of these topics is so complicated that there are no simple answers (even if we ask an AI!). Still, within that complexity lies a lesson: True expertise on the broader implications of AI does not lie with the AI experts themselves. If anything, Hinton's remarks about AI's impact on unemployment — "it takes away the drudge work," he said, and "might take away more than that" — make me downgrade his judgment.

Yet Hinton is acknowledged to be the most important figure behind recent developments in AI neural nets, and he has won the equivalent of a Nobel Prize in his field. And he is now doubting whether he should have done his research at all. Who am I to question his conclusions?

To be clear, I am not casting doubt on either his intentions or his expertise. But I would ask a different question: Who, today, is an expert in modeling how different AI systems will interact with each other to create checks and balances, much as decentralized human institutions do? These analyses would require an advanced understanding of the social sciences and political science, not just AI and computer science.

It almost goes without saying that there are different kinds of expertise. Albert Einstein helped to create the framework for mobilizing nuclear energy, and in 1939 he wrote President Franklin Roosevelt urging him to build nuclear weapons. He later famously recanted, saying in 1954 that the world would be better off without the bomb.

He may yet be proved right, but so far most Americans see the trade-offs as acceptable, in part because nuclear weapons have created an era of U.S. hegemony and ensured that U.S. leaders cannot easily escape the costs of major wars. Nuclear disarmament still exists as a movement, but it has the support of no major political party in any nuclear nation. (If anything, Ukraine regrets having given up its nuclear weapons.)

The lesson is clear: Experts from other fields often turn out to be more correct than experts in the "relevant" field — with the qualification, as the Einsteins of 1939 and 1954 show, that all such judgments are provisional.

As with many issues, people's views on AI are usually based on their prior beliefs. So I will declare mine: decentralized social systems are fairly robust; the world has survived some major technological upheavals in the past; national rivalries will always be with us (thus the need to outrace China); and intellectuals can too easily talk themselves into a sense of impending doom.

All of this leads me to the belief that the best way to create safety is by building and addressing problems along the way, sometimes even in a hurried fashion, rather than by having abstract discussions on the internet.

So I am relatively sympathetic to AI progress. I am skeptical of arguments that, if applied consistently, also would have hobbled the development of the printing press or electricity.

I also believe that intelligence is by no means the dominant factor in social affairs, and that it is multidimensional to an extreme. So even very impressive AIs probably will not possess all the requisite skills for destroying or enslaving us. We also tend to anthropomorphize non-sentient entities and to attribute hostile intent where none is present.

Many AI critics, unsurprisingly, don't share my priors. They see coordination across future AIs as relatively simple; risk-aversion and fragility as paramount; and potentially competing intelligences as dangerous to humans. They de-emphasize competition among nations, such as with China, and they have a more positive view of what AI regulation might accomplish. Some are extreme rationalists, valuing the idea of pure intelligence, and thus they see the future of AI as more threatening than I do.

So who exactly are the experts in debating which set of priors is more realistic or useful? The question isn't quite answerable, I admit, but neither is it irrelevant. Because the AI debate, when it comes down to it, is still largely about priors. At least when economists debate the effects of the minimum wage, we sling around broadly commensurable models and empirical studies. The AI debates are nowhere close to this level of rigor.

No matter how the debates proceed, however, there is no way around the genuine moral dilemma that Hinton has identified. Let's say you contributed to a technological or social advance that had major implications and a benefit-to-cost ratio of 3 to 1. The net gain would be very high, but so would the (gross) costs: harms amounting to a third of such vast benefits would still be vast in absolute terms.

How easily would you sleep knowing that your work, of which you had long been justifiably proud, was leading to so many cyberattacks and job losses and suffering? Would seeing the offsetting gains make you feel better? What if the ratio of benefit to cost were 10 to 1? How about 1.2 to 1?

There are no objective answers. How you respond probably depends on your personality type. But the question of how you feel about your work is not the same as the question of how it affects society and the economy. Progress shouldn't feel like working in the triage ward, but sometimes it does.

Tyler Cowen is a professor of economics at George Mason University and writes for the blog Marginal Revolution. He is co-author of "Talent: How to Identify Energizers, Creatives, and Winners Around the World."