“What’s a doctor doing at Google?”
Long-time readers of this blog, and folks who read it because they know me in real life, know that my husband was a practicing clinician for about 20 years before taking a job at Google Health. It was great, except for the long hours and the way they cut into our family time.
One of the minor social benefits of being married to an ICU doctor is that it was really easy to explain to people what my husband does. Everyone has a mental image of an ICU doctor, so all I had to do in response to the question “What does your husband do?” was say “He’s an ICU doctor.” Their internal preconceptions of what that meant generally took it from there.
But since my husband started working at Google it’s been a bit trickier. “He’s an ICU doctor, but he’s working at Google now” is typically not a response that people have a pre-existing frame of reference for. But I can’t just ignore his ICU background. He uses his clinical expertise every day at Google.
“But how? What’s a doctor doing at Google?”

I’m not great (yet) at answering this question, but you know who is? Michael. He’s done several interviews lately in which this question has come up.
Why Michael went to Google in the first place

In an interview for the podcast Raise the Line, Michael describes his motivation for working at Google Health this way:
“When my dad gets sick, he has a Harvard-trained physician looking over his shoulder, helping him know what to type in and what queries to ask. I just want that for the world.”

- Michael Howell, in an August 2023 interview on the Raise the Line podcast
The 41-minute interview with Michael covers a lot of ground: why Michael became an ICU doctor in the first place, his work on improving healthcare quality and patient safety during his clinical practice, what he’s doing at Google Health, and how he sees AI changing medical practice going forward.
AI in medicine? Tell me more about that.

As you might imagine, Michael’s been interviewed a lot lately about his work with generative AI in healthcare.
In September, Michael was interviewed by JAMA Editor in Chief Kirsten Bibbins-Domingo, PhD, MD, MAS, about the potential for AI in healthcare and possible pitfalls in the form of AI hallucinations and gaslighting. (The published interview is approximately 25 minutes long, and there is a transcript.)

In the interview, Michael was asked about how he views this moment in medical history:
"...understanding the [AI's] capabilities are really important. And those are totally different than what came before. I read a bunch of old papers and I've thought sometimes what it must have been like to be in practice when penicillin showed up. You're like, okay, that's different. I don't know all the things. I may overuse it a little bit, but it's a marked moment."

- Michael Howell, MD, MPH in an interview with JAMA Editor in Chief Kirsten Bibbins-Domingo, PhD, MD, MAS for JAMA/HIMSS
Although the title of this interview makes it seem like it would be quite technical and hard to follow, it’s actually not that bad. After all, Michael wasn’t just an ICU doc back in the day. He was also a teacher, and as a result he’s pretty good at explaining things. For example, here’s how he explains what AI hallucinations are, why they happen, and how AI engineers are working to correct them going forward.
"I'll add that in any domain, but in healthcare in particular, there's a concept called automation bias [where] people trust the thing that comes out of the machine. And this is a really important patient safety issue. Like with [Electronic Health Records], they reduced many kinds of medical errors, like no one dies of handwriting anymore, right? Which they used to do with some regularity, but they increased the likelihood of other kinds of errors.

And so the automation bias is a really important thing. And when the model is responding and sounds like a person might sound, it's an even bigger risk. So hallucinations are really important and what they are is the model is just predicting the next word.
And if there's one thing for people who are watching this to remember it's that the model doesn't go look things up in PubMed.
[....] It just remembers stuff out of that embedding space or the concept space. And so it'll be reading along it'll be predicting next word, doing a good job and then it'll say, oh, this looks like it should be a medical journal citation. That's the kind of thing that comes next. Here are words that are plausible for a medical journal citation and then that will look just like a medical journal citation. [....] It was a big problem in the earlier versions of them.
There are a few ways from a technical standpoint that this is getting better but it remains an important issue. One example, it turns out that these things are bad at math. They're good at two plus two equals four because there's like a lot of that on the internet. But if you give it, you know, whatever 13,127 plus 18,123, they say, oh that looks like it should be a five digit number. Let me get a plausible five digit number. They don't ask. So what folks are doing to mitigate that is to say, oh this looks like a math problem. Ask a calculator and the calculator will get the answer. And then to put that in. Or this looks like you should do a journal citation. Go look it up in the source of record and then report back. And so that's one area. And for folks who want to look at more research in this the evolving areas are called grounding, consistency, and attribution."
- Michael Howell, in an interview with JAMA Editor in Chief Kirsten Bibbins-Domingo, PhD, MD, MAS for JAMA/HIMSS
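The "ask a calculator" mitigation Michael describes can be sketched in a few lines of Python. This is purely a toy illustration of the routing idea, not Google's actual system, and all names here are my own invention: if a prompt looks like arithmetic, hand it to real code that computes the exact answer instead of letting a language model predict a plausible-looking number.

```python
import re

def answer(prompt: str) -> str:
    """Toy router: send arithmetic to exact code rather than letting a
    language model guess a plausible-looking number (a hallucination)."""
    match = re.fullmatch(r"\s*(\d+)\s*\+\s*(\d+)\s*", prompt)
    if match:
        # "Ask a calculator and then put that in": exact arithmetic,
        # not next-word prediction.
        return str(int(match.group(1)) + int(match.group(2)))
    # Anything else would go to the model (stubbed out in this sketch).
    return "(model's best guess)"

print(answer("13127 + 18123"))  # exact answer: 31250, never a made-up number
```

The same routing idea underlies the grounding and attribution work he mentions: recognize when a response needs a source of record (a calculator, a citation database) and fetch the real thing rather than generate something that merely looks right.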
The entire interview is here if you are interested in watching/reading it.
Wait! I thought you told me that Michael went to Google to tackle the problem of medical misinformation?

Yep, he’s definitely still doing that. Here’s a three-minute interview that Michael and his colleague Dr. Garth Graham of YouTube recently did with Yahoo Finance about how they are dealing with medical misinformation on YouTube.
References & Related Links
Bibbins-Domingo, K. (2023, September 20). AI and Clinical Practice—AI Gaslighting, AI Hallucinations, and GenAI Potential [Video]. JAMA/JN Learning Network.

Gaglani, S., Carrese, M., Acer, H., & Apanovitch, D. (2023, August 23). What AI’s Rapid Progress Means for Healthcare and Health Information – Dr. Michael Howell, Chief Clinical Officer at Google [Audio podcast episode]. Raise the Line.

Jacobino, N. (2023, November 13). How YouTube Is Addressing Medical Misinformation [Video]. Yahoo Finance.