California cardiologist and author Dr. Eric Topol is known as the "dean of digital medicine" for his longtime work in highlighting the future promise of computing in health care. He once diagnosed a heart attack on an airplane using an EKG device that worked through his smartphone, and this year he diagnosed his own kidney stone with an ultrasound probe that plugs into a phone. Such applications show just a glimmer of what is possible when true artificial intelligence is linked to medical devices. In a keynote address Wednesday at the Medical Design & Manufacturing (MD&M) conference at the Minneapolis Convention Center, Topol is expected to talk about the future of "deep learning" and AI in medicine. Following is an edited transcript of a Nov. 1 conversation with Topol.

Q: As you look at medical technology, what is the application that is really going to prove the value of artificial intelligence in this field?

A: There are quite a number. I think we're at the point where you'll be able to diagnose a heart rhythm through your watch, and that is one of the early ones. Also managing your diabetes, your glucose levels, through your watch or your phone, and getting all that data processed. High blood pressure is next. These are three of the most common medical conditions, and they all [have] data that will flow from you and be fully processed, ingested and learned, to give feedback, so that you have surveillance of an important condition and much better management.

Q: And how is that different from the algorithms that we see today, like in Medtronic's 670G insulin pump, which can self-adjust insulin doses while the user is sleeping?

A: This is much deeper. An algorithm is a step in this process. But an example in the diabetes world, or the glucose world, would be taking many dimensions of data, like exercise, nutrition, sleep and the [gut] microbiome, and analyzing that multidimensional data to help promote far better glucose regulation. So it's a much more complex input and output, and that is really the essence of what deep learning is. It's all these layers of data being processed between the input and the output.

Q: Are any of these 'deep learning' applications close to reality?

A: Today it's really supervised deep learning, with narrow applications like any of the ones I just mentioned and many more. In the past week, we've learned of deep learning for diagnosing colon polyps, for predicting suicidal ideation, and a couple more. Basically these are narrow uses of that input and output for predicting, and it is exceptionally powerful. Oftentimes, through machine processing of data, it's better than a human or an expert could do.

Q: Are there new risks that are created with the introduction of AI into health care?

A: Yes, no question. As you pointed out, an algorithm is the basis of it. If it's wrong, or it's off, or it's biased, then it can be propagated and amplified, and it can cause mistakes and problems. There are also, of course, privacy and security issues that are fundamental, and that exist with any medical data. So those are some of the big ones. But another one is that the more we rely on it, the more we could worsen health inequities, which are already bad.

Oh, the other application I was going to mention, because it's so common, is being able to predict a migraine before it happens. Every time you have a migraine, the system picks up all the dimensions of data that predicted it, so that after you've had 15 episodes, hopefully you won't have any more. That's the goal. But hopefully it won't take 15 in the future.

Q: Has any device ever needed Food and Drug Administration approval for an AI?

A: Yes. One of the ones that was approved was HeartFlow. It's a CT-based technology that assesses coronary artery disease. They were one of the first to be FDA approved with AI. [San Francisco-based Arterys announced in January that it had received Food and Drug Administration clearance for a cloud-based medical imaging application, becoming the first company to get FDA clearance for a technology that "leverages cloud computing and deep learning in a clinical setting," a news release said.]

Q: Are there regulatory issues that could slow the adoption of AI in medicine? For example, how would the FDA handle an AI when the underlying computer code is updated?

A: Dr. Scott Gottlieb, the FDA commissioner, has come up with a very progressive way to reduce the burden of all these updates. There is a move afoot at the FDA to try to promote digital health, and with that, data processing, machine learning and AI. We are seeing some very favorable support at the FDA. They've published new guidelines and are lowering the risk categories, all sorts of good things.

Q: Does the solution have to do with giving companies more autonomy in making changes after approval? Or speeding the approval process for changes?

A: All levels: more autonomy, and less slowness, which has always been a problem. They're just trying to open things up. When it's tied to risk, obviously, that will be factored in. But a lot of these things are not putting people at risk. ...

For example, the migraine thing. If this really works, if the data are very supportive, then you would think that's not putting people at risk. And to hold that up for years, or however long it might take, would be unhelpful for so many people who suffer that problem.

Q: So you're hopeful that the FDA can nurture this field of AI in medicine and not stifle it?

A: Yeah. I spoke to Scott last month and I mentioned that in just five months, he's done more to help this field than was done in the past five years, that's for sure.