The Algorithm Will See You Now: How AI is Helping Doctors Diagnose and Treat Patients
Artificial intelligence researchers are building tools to quickly and accurately turn data into diagnoses. But practical limitations and ethical concerns mean humans should remain in charge.

Don't expect a robot to treat you at your next doctor's visit. But AI algorithms are playing a bigger role in diagnosis and other aspects of healthcare. Image credit: Getty Images
In a children’s hospital overlooking the U.S. Capitol, researchers are working to solve a global problem: how to spot genetic disorders in newborns.
About eight million children are born with a chromosomal abnormality every year, but at least a third aren’t diagnosed until much later. That’s largely because, while there are over 6,000 genetic disorders, common newborn DNA tests look for only about 20 of them. Many of those undiagnosed babies go on to develop health complications; such conditions are responsible for 25 percent of all infant deaths.
AI could help catch more of these conditions, and faster. The key? About half of all genetic disorders have facial markers—telltale structural features or patterns—which computer vision algorithms can be trained to spot, no matter how subtle.
At Children's National in Washington D.C., researchers created an application called mGene. The app lets physicians feed a photo of a baby’s face to an algorithm that detects facial landmarks, takes measurements such as the size of the nose and the angle of the eyes, and then determines whether a genetic condition is present. So far, the tool can recognize four serious, sometimes life-threatening syndromes—Down, DiGeorge, Williams, and Noonan—with accuracy rates well over 90 percent.
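The team hasn’t published the app’s internals, but the landmark-and-measurement step described above can be sketched in a few lines of plain Python. The coordinates and feature names below are hypothetical; a real pipeline would obtain landmarks from a face-detection model and feed the resulting features to a classifier trained per syndrome.

```python
import math

def distance(p, q):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def slant_deg(inner, outer):
    """Tilt of the eye axis relative to horizontal, in degrees."""
    return math.degrees(math.atan2(inner[1] - outer[1], inner[0] - outer[0]))

# Hypothetical landmark coordinates (in pixels) from a face-detection step.
landmarks = {
    "nose_left":  (100, 150),
    "nose_right": (140, 150),
    "eye_outer":  (60, 100),
    "eye_inner":  (95, 105),
}

# Geometric features of the kind the article describes: nose size, eye angle.
features = {
    "nose_width": distance(landmarks["nose_left"], landmarks["nose_right"]),
    "eye_slant_deg": slant_deg(landmarks["eye_inner"], landmarks["eye_outer"]),
}
print(features)  # in a real system, these numbers would go to a trained classifier
```

The point of reducing a face to a handful of measurements is that the downstream classifier sees the same compact description for every baby, regardless of photo size or lighting.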
While mGene is still in its early stages of testing, the researchers behind it are eager to get the tool deployed. Beyond finding cases that clinicians miss, the easy-to-use app can assist in places—all too many of them, unfortunately—that have few or no geneticists. For example, nurses could be trained to use the tool, then direct parents to follow-up care as needed, potentially saving lives.
This research is also expanding what geneticists know about chromosomal abnormalities. mGene is discovering facial landmarks for syndromes previously unknown to doctors, according to Marius George Linguraru, a principal investigator at the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National.
Importantly, researchers trained mGene on newborns from 20 different countries, making it more likely to spot conditions in a wider range of the world’s population than most practitioners can.

mGene, an app created by researchers at Washington D.C.'s Children's National, analyzes infants' facial landmarks, allowing physicians to identify and diagnose genetic conditions.
“Geneticists are mainly trained on textbooks with examples of northern European ancestry,” said Linguraru. But that doesn’t always work, he adds, since humans look so different from one another, both within and among ethnic groups. For example, certain eye shapes can be a sign of Down syndrome in white babies, but are not a meaningful signal in Asian populations.
The team is also working on gathering more data to train mGene on other genetic disorders. Recognizing thousands of conditions—many of which are so rare that there are only a handful of cases in the world—is impossible for any one clinician. AI could fill in the gaps.
“There is only so much that a clinician can look at and understand,” said Antonio R. Porras, a researcher working on mGene.
From Data to Diagnosis
More than in most other fields, AI is being put to use in healthcare to see whether it can find new solutions to old problems in the data. So far, it has shown promise in improving medical procedures, expanding services, and finding novel ways of offering care.
As much as AI is bound to change the industry, however, doctors and nurses will remain a crucial part of the healthcare experience. Daniel K. Sodickson, a radiology professor at New York University working with Facebook on a project to speed up imaging from MRIs, says the most prevalent misperception that he encounters is that a doctor’s job is to look through the data and find the problem.
“It’s a very small part of what physicians do,” Sodickson says.
Analyzing medical images has proven to be a particularly productive area of AI research. Again and again, technologists have shown that algorithms are able to spot abnormalities in X-rays as well as, if not better than, radiologists. In a now-infamous New Yorker story, computer scientist Geoffrey Hinton, one of the people most responsible for the current AI boom, was quoted as saying that “we should stop training radiologists right now.”
But the narrative is evolving. The AI actually being applied to real-world problems is acting more like a talented assistant than auditioning for the role of a new and improved robotic doctor.
“Replacing the human doctor is not our business,” says Jeff Nadler, the CIO for Teladoc Health, a virtual health provider using AI to advance its systems.
With its telehealth services, Teladoc saves patients money and gets them in front of a doctor faster, says Nadler. Machine learning reduces wait times and determines how many doctors need to be available at any given time. As a result, telehealth can give clinicians more ways to see patients and shorten waits, removing common barriers to getting healthcare.
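Teladoc hasn’t published how its staffing models work, but deciding how many doctors need to be available at a given time is a classic queueing problem, and the Erlang-C formula gives a rough illustration of it. The demand numbers below are invented; a production system would forecast arrivals with machine learning rather than take them as fixed inputs.

```python
import math

def erlang_c(servers, load):
    """Erlang-C probability that an arriving patient must wait,
    given `servers` clinicians and offered load in Erlangs."""
    if load >= servers:
        return 1.0  # system is overloaded; everyone waits
    s = sum(load**k / math.factorial(k) for k in range(servers))
    top = load**servers / math.factorial(servers) * servers / (servers - load)
    return top / (s + top)

def clinicians_needed(arrivals_per_hr, avg_visit_min, max_wait_prob):
    """Smallest staff count keeping the chance of any wait under target."""
    load = arrivals_per_hr * avg_visit_min / 60.0  # offered load in Erlangs
    n = max(1, math.ceil(load))
    while erlang_c(n, load) > max_wait_prob:
        n += 1
    return n

# Assumed forecast: 30 visits per hour, 20-minute visits, and a goal of
# keeping the probability that a patient waits at all under 20 percent.
print(clinicians_needed(30, 20, 0.20))
```

Even this toy version shows the trade-off a real scheduler manages: the offered load here is 10 clinician-hours per hour, but meeting the wait target requires noticeably more than 10 doctors online.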
“We're creating a more long-term relationship between the clinician and patient,” Nadler said.
The power of combining human and artificial intelligence is becoming evident in other research as well. In 2016, a team of pathologists at Harvard created an AI-powered program to identify breast cancer cells. Pathologists accurately spotted breast cancer 96 percent of the time, beating the algorithm’s rate of 92 percent. But the combination of the two found 99.5 percent of cancerous cells. With nearly 1.7 million new cases of breast cancer diagnosed every year, as many as 130,000 more patients could receive accurate diagnoses through such collaborative efforts between humans and computers.
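The arithmetic behind that estimate is worth making explicit; a quick check with the figures quoted above shows the “as many as 130,000” compares the combined human-plus-AI rate against the algorithm working alone.

```python
cases_per_year = 1_700_000   # new breast cancer diagnoses annually
algorithm_alone = 0.92       # algorithm's detection rate
pathologist_alone = 0.96     # pathologists' detection rate
combined = 0.995             # humans and AI working together

# Additional accurate diagnoses versus the algorithm working alone ...
vs_algorithm = (combined - algorithm_alone) * cases_per_year
# ... and versus pathologists working alone.
vs_pathologist = (combined - pathologist_alone) * cases_per_year

print(round(vs_algorithm))    # about 127,500 — the "as many as 130,000"
print(round(vs_pathologist))  # about 59,500 over unaided pathologists
```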

In a recent study, AI algorithms helped pathologists identify breast cancer cells (like those pictured here) in lymph node tissue samples with 99.5 percent accuracy, up from 96 percent. Image credit: National Cancer Institute, Wikimedia Commons
Wrestling with the Ethics of AI in Medicine
The field of healthcare seems uniquely poised to deal with problems, like privacy and ethics, that have plagued other industries through their AI revolutions. HIPAA, the Health Insurance Portability and Accountability Act, is American legislation that has set out data-privacy rules for medical information and guided healthcare since 1996. The Hippocratic Oath that all physicians take, with its promise to “do no harm,” adds another layer of consideration for doctors looking to use AI. Given the industry’s stringent rules and the amount of research being done in healthcare, providers might end up establishing best practices for uses of AI in all fields, not just medicine.
“We’re in this wild west phase with AI right now, everyone is trying everything,” Sodickson says. “We need a sheriff’s office.”
With so many applications and more in the pipeline, the integration of AI in healthcare will not be easy. There are practical considerations, including the technical act of rolling AI out. It won’t be just one or two algorithms in play, but hundreds or thousands that could possibly disagree, according to Parsa Mirhaji, director of the Center for Health Data Innovations and CTO of the NYC Clinical Data Research Network.
“It becomes a headache. Which one do you believe and why?” he says.
Mirhaji and his team are working to mentor and train healthcare professionals on how to use and think about AI. But it's going to be a sea change that will take some time to sort out, he stresses. For all the help that AI can offer, integrating it will bring growing pains that humans will have to work through on their own.
Mirhaji concludes: “I am not aware of an AI algorithm I can build that will fix that.”