Computers were as good as or better than doctors at detecting tiny lung cancers on CT scans, a study by researchers from Google and several medical centres found.
The technology is a work in progress, not ready for widespread use, but the new report, published last week in the journal Nature Medicine, offers a glimpse of the future of artificial intelligence in medicine.
One of the most promising areas is recognising patterns and interpreting images – the same skills that humans use to read microscope slides, X-rays, MRIs and other medical scans.
By feeding huge amounts of data from medical imaging into systems called artificial neural networks, researchers can train computers to recognise patterns linked to a specific condition – pneumonia, cancer or a wrist fracture – that would be hard for a person to see. The system follows an algorithm, or set of instructions, and learns as it goes. The more data it receives, the better it becomes at interpretation.
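That learning loop can be sketched in miniature. What follows is a hedged toy illustration, not Google's system: a single-layer model adjusts its weights on synthetic "scan" vectors, and every number, dimension and label here is invented for the example.

```python
import numpy as np

# Toy version of the training process described above: a single-layer
# model learns, from labelled examples alone, to separate two synthetic
# "scan" classes. Real systems use many layers and real CT images.
rng = np.random.default_rng(0)

n, d = 200, 64                        # 200 fake scans, 64 features each
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)    # synthetic cancer / no-cancer labels

w = np.zeros(d)                       # weights start blank and are learned
for _ in range(500):                  # more passes over the data -> better fit
    p = 1 / (1 + np.exp(-(X @ w)))    # predicted probability per scan
    w -= 0.1 * X.T @ (p - y) / n      # gradient step on the logistic loss

preds = (1 / (1 + np.exp(-(X @ w))) > 0.5).astype(float)
accuracy = (preds == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is the shape of the process – predict, compare with the known answer, nudge the weights – repeated over large amounts of data.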
The process, known as deep learning, is already being used in many applications, such as enabling computers to understand speech and identify objects so that a self-driving car will recognise a stop sign and distinguish a pedestrian from a telephone pole. In medicine, Google has already created systems to help pathologists read microscope slides to diagnose cancer, and to help ophthalmologists detect eye disease in people with diabetes.
"We have some of the biggest computers in the world," said Dr Daniel Tse, a product manager at Google and an author of the journal article. "We started wanting to push the boundaries of basic science to find interesting and cool applications to work on."
In the new study the researchers applied artificial intelligence to CT scans used to screen people for lung cancer, which caused 1.7 million deaths worldwide last year. The scans are recommended for people at high risk because of a long history of smoking.
Risk of dying
Studies have found that screening can reduce the risk of dying from lung cancer. In addition to finding definite cancers, the scans can also identify spots that might later become cancer, so that radiologists can sort patients into risk groups and decide whether they need biopsies or more frequent follow-up scans to keep track of the suspect regions.
But the test has pitfalls: it can miss tumours, or mistake benign spots for malignancies and push patients into invasive, risky procedures such as lung biopsies or surgery. And radiologists looking at the same scan may have different opinions about it.
The researchers thought computers might do better. They created a neural network, with multiple layers of processing, and trained it by giving it many CT scans from patients whose diagnoses were known: some had lung cancer, some did not, and some had nodules that later turned cancerous.
Then, they began to test its diagnostic skill.
“The whole experimentation process is like a student in school,” Tse said. “We’re using a large data set for training, giving it lessons and pop quizzes so it can begin to learn for itself what is cancer, and what will or will not be cancer in the future. We gave it a final exam on data it’s never seen after we spent a lot of time training, and the result we saw on final exam – it got an A.”
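Tse's "lessons, pop quizzes and final exam" analogy maps onto the standard practice of splitting labelled data into training, validation and test sets. A minimal sketch, with invented proportions and a made-up pool of 1,000 scan identifiers:

```python
import numpy as np

# The "lessons / pop quizzes / final exam" analogy as a data split.
# All numbers are illustrative, not the study's actual counts.
rng = np.random.default_rng(1)
scans = np.arange(1000)              # stand-ins for 1,000 labelled scans
rng.shuffle(scans)

train = scans[:700]                  # lessons: the model learns from these
val   = scans[700:850]               # pop quizzes: check progress during training
test  = scans[850:]                  # final exam: data the model has never seen

# The exam only means something if none of its questions appeared in class.
assert len(set(train) & set(test)) == 0
print(len(train), len(val), len(test))
```

Holding the test set back until the end is what makes the "A" on the final exam credible.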
Tested against 6,716 cases with known diagnoses, the system was 94 per cent accurate. Pitted against six expert radiologists, when no prior scan was available, the deep-learning model beat the doctors: it had fewer false positives and false negatives. When an earlier scan was available, the system and the doctors were neck and neck.
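The comparison above rests on a few standard tallies: accuracy, false positives (benign spots flagged as cancer) and false negatives (cancers missed). A small sketch of how they are counted – the labels and predictions below are invented for illustration, not data from the study:

```python
# How accuracy, false positives and false negatives are tallied from
# predictions against known diagnoses. Toy data, not the study's.
truth = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]   # 1 = cancer, 0 = no cancer
preds = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]   # the model's calls

tp = sum(t == 1 and p == 1 for t, p in zip(truth, preds))  # correct catches
fp = sum(t == 0 and p == 1 for t, p in zip(truth, preds))  # false alarms
fn = sum(t == 1 and p == 0 for t, p in zip(truth, preds))  # missed cancers
tn = sum(t == 0 and p == 0 for t, p in zip(truth, preds))  # correct all-clears

accuracy = (tp + tn) / len(truth)
print(f"accuracy={accuracy:.0%}, false positives={fp}, false negatives={fn}")
```

Two systems can share the same accuracy yet differ sharply on false positives and false negatives, which is why the study reports all three.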
The ability to process vast amounts of data may make it possible for artificial intelligence to recognise subtle patterns that humans simply cannot see.
“It may start out as something we can’t see, but that may open up new lines of inquiry,” said Dr Mozziyar Etemadi, a research assistant professor of anaesthesiology at Northwestern University Feinberg School of Medicine, and an author of the study.
Dr Eric Topol, director of the Scripps Research Translational Institute in California, who has written extensively about artificial intelligence in medicine, said, "I'm pretty confident that what they've found is going to be useful, but it's got to be proven." Topol was not involved in the study.
Given the high rate of false positives and false negatives on the lung scans as currently performed, he said, “lung CT for smokers, it’s so bad that it’s hard to make it worse.”
Asked if artificial intelligence would put radiologists out of business, Topol said: "Gosh, no!"
The idea is to help doctors, not replace them.
“It will make their lives easier,” he said. “Across the board, there’s a 30 per cent rate of false negatives, things missed. It shouldn’t be hard to bring that number down.”
There are potential hazards, though. A radiologist who misreads a scan may harm one patient, but a flawed AI system in widespread use could injure many, Topol warns. Before they are unleashed on the public, he said, the systems should be studied rigorously, with the results published in peer-reviewed journals and tested in the real world to make sure they work as well there as they did in the lab. – New York Times News Service