“AI-driven medicine” is no longer a novel concept. The prevention, monitoring, and treatment of many diseases are now closely tied to computer technology: computers process data quickly, cheaply, and reproducibly, at a scale no human team can match.
Today, on World Diabetes Day, more than 420 million people worldwide are living with the disease, more than a quarter of them in China. In 2017, China’s diabetes prevalence rate was 10.9%, second only to the United States. Diabetes is an incurable chronic disease, but most of its harm to the body comes through its many complications; if those are properly controlled, patients can coexist with the disease for a long time. Google is now using its deep learning technology to help large numbers of people with diabetes fight these complications and live better while managing the disease.
Diabetic retinopathy
Retinopathy is one of the most serious complications of diabetes.
Long-term high blood sugar damages the inner walls of the retinal blood vessels, causing a series of fundus lesions such as microaneurysms, hemorrhages, and hard exudates, and can even lead to retinal detachment. On average, a patient develops eye lesions, which can eventually lead to blindness, after about 10 years of diabetes.
At the same time, diabetic retinopathy is not untreatable. If it is detected early, blood sugar is controlled, and targeted treatment is given promptly, retinal lesions can be prevented or even reversed. A fundus scan image can show whether a patient’s retina has lesions. The problem is that for people with diabetes in remote, poor areas, there are few nearby doctors with the expertise to interpret such scans.
Detection of retinal lesions | Google
In response, Google’s AI team developed a set of deep learning algorithms. Working with hospitals and doctors in India and the United States, Google collected 128,000 fundus scan images and used them as a dataset to train a neural network, with 880,000 labeled features marking signs of retinopathy in the scans. Google then tested the trained network against a panel of seven professional ophthalmologists: both graded the same scans, and the algorithm’s accuracy was even slightly higher than the doctors’.
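The workflow described here (labeled scans in, a graded prediction out) is the standard supervised-learning pattern. A minimal sketch in plain Python, using a toy logistic-regression classifier; the two features, all numbers, and the synthetic data are invented for illustration and are not Google's pipeline, which learns features directly from pixels with a convolutional network:

```python
import math
import random

random.seed(0)

# Synthetic stand-ins for per-image findings a grader might record
# (e.g. microaneurysm count, hemorrhage area). Purely illustrative.
def make_example():
    sick = random.random() < 0.5
    micro = random.gauss(8.0 if sick else 2.0, 1.5)
    hemor = random.gauss(5.0 if sick else 1.0, 1.0)
    return (micro, hemor), 1 if sick else 0

data = [make_example() for _ in range(200)]

# Logistic regression trained by stochastic gradient descent
w = [0.0, 0.0]
b = 0.0
lr = 0.05
for _ in range(300):
    for (x1, x2), y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        g = p - y                    # gradient of the log-loss
        w[0] -= lr * g * x1
        w[1] -= lr * g * x2
        b -= lr * g

def predict(x1, x2):
    """1 = signs of retinopathy, 0 = clear (toy labels)."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

acc = sum(predict(x1, x2) == y for (x1, x2), y in data) / len(data)
print(f"training accuracy: {acc:.2f}")
```

The evaluation Google ran is the same idea at scale: compare the model's predictions and the ophthalmologists' grades against a shared reference standard on a held-out test set.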
These research results have not stayed in the laboratory; they have entered clinical use. Google’s partner Verily has been approved by European regulators to use the technology in hospitals to screen for eye complications of diabetes. It is now being deployed at India’s Aravind Eye Hospital, and the hope is to introduce it in more countries where doctors are in short supply and diabetes is widespread.
Algorithmic interpretation of medical images
Detecting diabetic retinopathy is only a small part of what algorithms can do in disease detection.
Google’s medical AI team has also created a more widely applicable tool called LYNA, which performs deep analysis of medical records and medical images and mines their data features, freeing pathologists from repetitive slide-reading work so they can focus on other clinical diagnostics.
LYNA can now analyze lymph node tissue slides to determine whether a patient’s breast cancer has spread, and assist doctors in follow-up treatment decisions. In the 2016 ISBI Camelyon Challenge for detecting cancerous regions, LYNA’s cancer detection rate was significantly higher than previous methods. Although LYNA cannot yet fully replace a physician reading the slides and still needs testing on far more cases, its capabilities are improving rapidly.
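Pathology slides are far too large to feed to a model whole, so systems of this kind typically score small patches of the slide and aggregate the patch scores into a slide-level decision. A toy sketch of that pattern; the patch size, the stand-in "model", and the threshold are all invented here:

```python
# Toy patch-based slide scoring: split a "slide" (a 2-D grid of
# values) into patches, score each patch, and flag the slide if any
# patch exceeds a threshold. All numbers are invented.
def patches(slide, size):
    rows, cols = len(slide), len(slide[0])
    for r in range(0, rows, size):
        for c in range(0, cols, size):
            yield [row[c:c + size] for row in slide[r:r + size]]

def patch_score(patch):
    # Placeholder "model": mean intensity stands in for what would
    # really be a CNN's per-patch tumor probability.
    cells = [v for row in patch for v in row]
    return sum(cells) / len(cells)

def flag_slide(slide, size=2, threshold=0.8):
    return max(patch_score(p) for p in patches(slide, size)) >= threshold

slide = [
    [0.1, 0.2, 0.9, 0.9],
    [0.1, 0.1, 0.9, 0.8],
    [0.2, 0.1, 0.1, 0.2],
    [0.1, 0.2, 0.2, 0.1],
]
print(flag_slide(slide))  # the bright top-right patch trips the flag
```

Aggregating per-patch scores is also what makes the output reviewable: a pathologist can be pointed to exactly the patches that triggered the flag rather than re-reading the whole slide.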
In addition to the retina and lymph nodes, Google is using deep learning to diagnose skin diseases and lung cancer.
In dermatology, because general practitioners are far less accurate at diagnosing skin conditions than professional dermatologists, Google researchers have developed an AI tool to assist GPs. Through deep learning, the algorithm analyzes not only medical images of the patient’s skin but also clinical data such as medical history, age, sex, and symptoms, which can greatly improve the accuracy of GPs’ skin-disease diagnoses.
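Combining an image with structured patient data is a multimodal input: features derived from the image are joined with encoded metadata before the model scores the case. A toy sketch of that step; every feature, field, and encoding below is invented for illustration:

```python
# Toy multimodal input: concatenate image-derived features with
# encoded patient metadata into a single feature vector. All fields
# and values are invented.
def encode_metadata(age, sex, history):
    # Simple numeric encodings: scaled age, one-hot-ish sex flag,
    # boolean relevant-history flag.
    return [age / 100.0, 1.0 if sex == "F" else 0.0, float(history)]

def combine(image_features, age, sex, history):
    return image_features + encode_metadata(age, sex, history)

# Hypothetical features already extracted by some vision model
image_features = [0.42, 0.13, 0.77]
x = combine(image_features, age=54, sex="F", history=True)
print(x)  # [0.42, 0.13, 0.77, 0.54, 1.0, 1.0]
```

The combined vector then feeds a single classifier, which is why adding history, age, and sex can lift accuracy over an image-only model: the same rash can mean different things in different patients.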
For lung cancer, Google’s artificial intelligence model can analyze CT scans of a patient’s lungs holistically, identify subtle malignant tissue, and predict lung cancer risk. The model’s diagnostic accuracy is as high as 94%, addressing the high cost, low accuracy, and frequent false negatives and false positives of current lung cancer screening.
Google AI Scans 3D Images of the Lungs
Beyond algorithms, Google has also created “rehabilitation hardware” for Alzheimer’s patients, letting them revisit familiar places on a “virtual ride” that helps them recall the past. Geek Park (ID: geekpark) has previously covered this Google technology in its reporting on Alzheimer’s.
As medicine advances, human living standards and life expectancy keep rising.
This also means the professional bar for doctors keeps rising, work pressure keeps mounting, and the imbalance in medical resources keeps widening. Even a good doctor can treat only so many patients, yet the sense of mission and responsibility to heal drives them to carry all that pressure on their own shoulders.
This is the greatest significance of “AI-driven medicine”: technology can extend doctors’ capabilities, relieve their pressure, and enable them to help more ordinary people. AI-based medical image recognition and disease screening are only the first step; this field holds far more potential waiting to be discovered.