Google works with UCSF to develop medical AI system that predicts 75% of doctors’ prescriptions

Researchers at Google and the University of California, San Francisco (UCSF) have developed an AI system that predicts a doctor’s prescribing decision in 75 percent of cases. Deployed in a hospital system, it could flag patients or prescriptions that look unusual, much like the fraud-detection systems credit card companies use to spot anomalous charges.


The results were published in the journal Clinical Pharmacology and Therapeutics.

Research scientist Kathryn Rough and Google Health physician Alvin Rajkomar wrote in a report that while no doctor, nurse or pharmacist wants to make a mistake that harms a patient, studies show that 2 percent of hospitalized patients experience serious preventable medication-related incidents. These events can be life-threatening, cause permanent harm, or result in death.

To build the system, the researchers trained it on a dataset of de-identified electronic health records covering roughly three million medication orders from more than 100,000 hospitalized patients. Dates were randomly shifted and identifying information was removed, including names, addresses, contact details, record numbers, physician names, free-text notes and images. Importantly, the dataset was not restricted to a specific disease or medical specialty, which makes the task more challenging but helps ensure the model can handle a wider range of conditions.
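For a concrete picture, here is a minimal sketch of what the date-shifting and identifier-removal steps described above might look like; the field names and shift range are hypothetical assumptions for illustration, not the study’s actual pipeline.

```python
import random
from datetime import timedelta

# Hypothetical identifier fields to drop; assumed for illustration only.
IDENTIFYING_FIELDS = {"name", "address", "contact_info", "record_number",
                      "physician_name", "free_text_notes", "images"}

def deidentify(record: dict, max_shift_days: int = 365) -> dict:
    """Drop direct identifiers and shift all dates by one random per-patient offset.

    Assumes date fields end in "_date" and hold datetime objects.
    """
    shift = timedelta(days=random.randint(-max_shift_days, max_shift_days))
    cleaned = {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
    for key, value in cleaned.items():
        if key.endswith("_date"):
            # Applying the same offset to every date preserves relative timing
            # within a patient's record while hiding the true calendar dates.
            cleaned[key] = value + shift
    return cleaned
```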

The researchers evaluated two models: a long short-term memory (LSTM) recurrent neural network, which can model long-range dependencies in a patient’s record, and a logistic regression model of the kind commonly used in clinical research. Both were compared against a baseline that ranked the most frequently ordered medications based on the patient’s hospital service (e.g., general medicine, general surgery, obstetrics, cardiology) and the time of admission.
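The study does not publish its model code, but a rough sketch of the two kinds of models described above might look like the following in PyTorch; the event vocabulary size, embedding width and hidden size are assumptions, and only the 990-drug output size comes from the article.

```python
import torch
import torch.nn as nn

NUM_DRUGS = 990         # size of the candidate medication list (from the article)
NUM_EVENT_CODES = 5000  # hypothetical size of the clinical-event vocabulary
EMBED_DIM, HIDDEN_DIM = 128, 256  # assumed dimensions for illustration

class MedicationLSTM(nn.Module):
    """LSTM over a patient's sequence of clinical events, scoring the next drug order."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_EVENT_CODES, EMBED_DIM)
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.head = nn.Linear(HIDDEN_DIM, NUM_DRUGS)

    def forward(self, event_ids):            # event_ids: (batch, seq_len) of int codes
        x = self.embed(event_ids)
        _, (h_n, _) = self.lstm(x)            # h_n: (1, batch, HIDDEN_DIM)
        return self.head(h_n[-1])             # logits over the 990 candidate drugs

class MedicationLogistic(nn.Module):
    """Simpler baseline-style model: a logistic layer over a bag-of-events vector."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(NUM_EVENT_CODES, NUM_DRUGS)

    def forward(self, event_counts):          # event_counts: (batch, NUM_EVENT_CODES)
        return self.linear(event_counts)
```

The key design difference the article points to is that the LSTM consumes the patient’s events as an ordered sequence, while the logistic model sees only an aggregate summary of them.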

For each medication order in the retrospective data, the models produced a ranked list of 990 possible medications, and the researchers assessed whether a model assigned high probability to the drug the doctor actually ordered. Each model’s performance was measured by comparing its ranked predictions against the medications physicians actually prescribed.

The best performer was the LSTM model: 93 percent of its top-10 lists contained at least one medication that clinicians would order for the given patient within the next day. In 55 percent of cases, the model placed the drug a doctor actually prescribed among its 10 most likely medications, and in 75 percent of cases among its top 25.
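As a rough illustration of the kind of top-k ranking evaluation described above (the study’s exact metric definitions may differ), a hit-rate calculation could look like this:

```python
import numpy as np

def top_k_hit_rate(scores: np.ndarray, ordered_drug_ids: np.ndarray, k: int) -> float:
    """Fraction of orders where the actually prescribed drug falls in the model's top k.

    scores:           (num_orders, 990) model scores over the candidate drugs
    ordered_drug_ids: (num_orders,) index of the drug the clinician actually ordered
    """
    top_k = np.argsort(-scores, axis=1)[:, :k]               # k highest-scoring drugs per order
    hits = (top_k == ordered_drug_ids[:, None]).any(axis=1)  # was the real drug among them?
    return float(hits.mean())

# The article's figures would correspond roughly to
# top_k_hit_rate(scores, ordered, 10) ≈ 0.55 and top_k_hit_rate(scores, ordered, 25) ≈ 0.75.
```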

“It is important to note that models trained in this way reproduce physicians’ behavior as it appears in historical data, without learning optimal prescribing patterns, how these drugs might work, or what side effects might occur,” the researchers wrote. “In the next phase of the study, we will examine the circumstances under which these models could be used to identify medication errors that could harm patients.”

“We look forward to working with doctors, pharmacists, other clinicians and patients, and to continuing to evaluate this model’s ability to catch errors and keep patients safe in hospitals.”

It is worth noting that Google has done extensive work on AI in healthcare.

As previously reported, Google had earlier developed a model that can classify chest X-rays with “human-level” accuracy. Last year, Google said its lung cancer detection AI outperformed six human radiologists, and that its skin condition model could identify 26 skin conditions as accurately as dermatologists.

A more recent Google study used an AI model to identify breast cancer in mammograms with fewer false positives. Google is also working with Aravind Eye Hospital in Madurai, India, to diagnose eye disease from retinal images.