Sector News

Why Google, Amazon, and Nvidia are all building AI notetakers for doctors

November 15, 2020
Borderless Future

Automated medical transcription that’s actually accurate could save doctors a huge amount of time, and the tech giants are getting in on the action.

For doctors, taking notes and inputting them into electronic medical records is so cumbersome that they often have to use human medical scribes to do it for them. That’s changing as more hospital systems turn to artificial intelligence-based transcription tools.

However, some doctors feel the tools available today are simply not accurate enough. “If there were a really smart voice transcription service that was 99% accurate, I would definitely use it,” says Bon Ku, an emergency room doctor at Thomas Jefferson University Hospital and director of the university’s Health Design Lab. “A lot of times, I feel like I’m a data-entry clerk.”

For the last several years, big tech companies have been jockeying to be the first to deliver the kinds of tools doctors have been craving.

This week, Google launched cloud-based machine learning tools to help doctors make sense of patient medical records. The offering is composed of two services. One, an API for healthcare-related natural language processing, scans medical documents for key information about a patient’s journey, puts it into a standard format, and summarizes it for the doctor. It can pull from multiple sources of information, such as medical records and transcribed doctors’ notes. The goal is to give doctors an easy way to review a patient’s past care. The second, called AutoML Entity Extraction for Healthcare, is a low-code toolkit that helps doctors pull specific data out of a patient’s record, such as information about a genetic mutation. Both tools will be available for free to doctors, insurers, and biomedical companies until December 10, 2020.
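For readers curious about what that natural-language API looks like in practice: Google’s public documentation describes a REST endpoint, `nlp:analyzeEntities`, that accepts raw clinical text and returns structured medical entities. The sketch below only builds the request URL and JSON body; the project and location values are placeholders, and an actual call would also require Google Cloud credentials.

```python
import json


def build_analyze_entities_request(project, location, text):
    """Build the URL and JSON body for a Healthcare Natural Language
    API call. The endpoint shape follows Google's public REST docs;
    the project and location arguments are placeholders."""
    url = (
        "https://healthcare.googleapis.com/v1/"
        f"projects/{project}/locations/{location}/"
        "services/nlp:analyzeEntities"
    )
    body = {"documentContent": text}
    return url, json.dumps(body)


url, payload = build_analyze_entities_request(
    "my-project", "us-central1",
    "Patient reports chest pain; family history of BRCA1 mutation.",
)
# A real call would POST `payload` with an OAuth bearer token; the
# response lists entity mentions (conditions, medications, and so on)
# in a standard format a doctor-facing tool could then summarize.
```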

Much of Big Tech’s enthusiasm for medicine is focused on building a better way for doctors to record their interactions with patients without having to type into a computer. Amazon, Microsoft, and Google have all created software to this effect and are increasingly creating tools for healthcare settings, likely in a quest for new sources of recurring revenue.

Even Nvidia, which has traditionally focused more on medical imaging technology, has started offering medical transcription. Earlier this year, Nvidia launched BioMegatron, a language model built to recognize conversational speech. The model is trained on more than six billion words of medical text and is 92% accurate. A host of smaller players, including Nuance’s Dragon, MModal, Suki AI, and Saykara, also provide transcription for doctors.

AI-powered transcription is the latest push toward automating medical processes. Much of doctors’ work is already electronic: many use a computer system to pull up patient data. A 2013 paper found that emergency room doctors made as many as 4,000 clicks over the course of a busy 10-hour shift. Doctors who use the Epic electronic health record system also have a feature called “dot phrases” that speeds up writing notes and pulling information about patients (Epic also has an AI transcription module). The problem some doctors have with dot phrases is that they insert quick, pre-written entries about an ailment or symptom into patient records. Such shorthand is fine for medical billing, but it leaves patient records overly generic. As a result, doctors reviewing a patient’s history often don’t get the context surrounding the patient’s last visit.

“Most of patient records are garbage—they’re full of templates,” says Ku. “Ninety percent of our diagnoses come from the interview; it doesn’t come from diagnostic imaging or lab tests. It’s about me being able to get the story from my patient—but that becomes hindered because there’s this insane pressure to enter data into a computer.”

Doctors also spend an enormous amount of time entering data into the electronic health record. Ku says doctors speak of “pajama time,” the hours they spend at home entering patient information into the system. This is why doctors would love a notetaking experience more akin to talking to Alexa. A system that extracts patient data from a conversation, or the ability to order tests by voice, would be a game changer, says Ku: he’d be able to spend far more time with patients. First, though, the technology needs to become accurate enough that doctors don’t spend more time correcting what the AI got wrong.

“There has to be some safety mechanism,” says Ku.

By Ruth Reader

