A physician may find themselves on a diagnostic odyssey when a patient’s symptoms or phenotypes point toward a rare genetic disease. But an artificial intelligence-based program that suggests the most likely genetic disorder based on facial phenotypes is giving physicians a place to start. The program can be downloaded as an app on a phone and requires nothing more than a photo of the patient’s face.

The community-driven phenotyping platform Face2Gene is used by 70% of the world’s geneticists across 2,000 clinical sites in 130 countries. Developed by the Boston-based company FDNA, Face2Gene is built on a novel facial image analysis framework, DeepGestalt™, which uses computer vision and deep-learning algorithms to highlight the facial phenotypes of hundreds of diseases and genetic variations.

A team of scientists led by researchers at FDNA published a study in Nature Medicine on January 7, titled “Identifying Facial Phenotypes of Genetic Disorders Using Deep Learning,” on the use of facial analysis in detecting genetic disorders.

The paper outlines how the technology transforms phenotyping, that is, the capture, structuring, and analysis of complex human physiological data. The underlying model was trained on a dataset of over 150,000 patients; for the purposes of this study, 17,000 patient images representing more than 200 syndromes were used. The paper reports that DeepGestalt achieves 91% top-10 accuracy in identifying the correct syndrome on a test set of 502 images and that it outperformed expert clinicians in three experiments. For a deeper explanation of how the deep convolutional neural network (DCNN) technology works, see the figure below.
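That headline metric, top-10 accuracy, simply asks whether the correct syndrome appears anywhere among the model’s ten highest-ranked suggestions. Here is a minimal sketch of how such a metric can be computed; the array names, shapes, and random scores are illustrative assumptions, not code from the study.

```python
import numpy as np

def top_k_accuracy(scores, true_labels, k=10):
    """Fraction of cases whose correct syndrome appears among the
    k highest-scoring predictions.

    scores: (n_cases, n_syndromes) array of model similarity scores
    true_labels: (n_cases,) array of correct syndrome indices
    """
    # Rank syndromes per case from highest to lowest score and keep the top k.
    top_k = np.argsort(scores, axis=1)[:, ::-1][:, :k]
    # A case counts as a hit if its true label appears anywhere in its top k.
    hits = (top_k == true_labels[:, None]).any(axis=1)
    return hits.mean()

# Illustrative shapes only: 502 test images scored against 216 candidate syndromes.
rng = np.random.default_rng(0)
scores = rng.random((502, 216))
labels = rng.integers(0, 216, size=502)
print(f"top-10 accuracy: {top_k_accuracy(scores, labels):.1%}")
```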

Paul Kruszka, MD, MPH, a clinical geneticist in the medical genetics branch of the National Human Genome Research Institute, tells GEN that Face2Gene is useful for diagnosing dysmorphology syndromes, which typically affect roughly 1 in 30,000 people. He notes that he uses the Face2Gene program regularly and that, when he goes to meetings with his colleagues, almost everyone has the app on their cell phone.

Karen Gripp, MD, chief of the division of medical genetics at Nemours/Alfred I. duPont Hospital for Children, chief medical officer at FDNA, and co-author of the paper, tells GEN that the importance of this paper lies in its detailed description of how the algorithm was trained and how it functions. There are other systems out there, but none that analyze as many cases and conditions. The paper creates a standard against which other systems can be compared and a reference for other work that uses the tool. She adds that it emphasizes how AI can advance precision medicine and uses facial phenotyping as a launchpad for approaches that could be applied to other imaging systems.

Kruszka has seen firsthand how well this type of technology performs. At a meeting last summer in Banff, Charles Swartz, PhD, senior research scholar at the Greenwood Genetic Center in South Carolina, gave a demonstration during his presentation. In a room full of clinical geneticists, he projected the faces of many people who had been diagnosed with various dysmorphic diseases and asked the physicians to assign a genetic disorder to each face. Suffice it to say, the physicians performed poorly, far worse than the computer. Kruszka notes that this is not surprising because the computer does not face the same limitations as a physician and has seen and analyzed thousands of training cases.

One of the most exciting aspects of the program, adds Kruszka, is its ease of use. After a physician asks the patient just one question, “Is it okay if I take a picture?” and snaps a quick photo, the program returns results within seconds. “You can’t rest a diagnosis on this,” adds Kruszka, as the standard of care is still a molecular diagnosis. But a tool that gives you a place to start is very valuable.

The program is not age-limited and can be used on very young babies, explains Gripp. “The structure of the face does not change so much and once the program learns to recognize a face, and place the landmarks over, it can apply that to any age.” She adds that a big challenge with very young babies is obtaining a good enough photo because they are often “wrinkly.” Also, the babies she sees in her practice typically have lots of tubes or other interfering objects on their faces.

Kruszka notes that it is very advantageous to know about genetic syndromes early, for multiple reasons. There may be therapies available. Parents worried about recurrence risk in subsequent pregnancies can seek genetic counseling. Many patients with rare genetic diseases have other health concerns or complications. And a diagnosis can alleviate a lot of mystery and stress along the way, from not meeting developmental benchmarks to facing challenges in school.

The program is portable, inexpensive, and performs better than a physician. So, what else does it need? Gripp would love the ability to analyze the profile view of a face, as that can provide useful information for making diagnoses. She also would like more data on different ethnic backgrounds, as the overwhelming majority of faces uploaded are of European descent. However, she notes that the program “performs quite well in different ethnicities” and that there are no ethnicities where it does not work.

Kruszka’s team has published four papers applying a similar, though by his account inferior, algorithm to diverse populations. Their results show that the technology becomes more accurate for a specific ethnic population when that population is separated out from other groups, suggesting that a focus on diversity is incredibly important.

James Lupski, MD, PhD, DSc (hon), professor, department of molecular and human genetics and professor of pediatrics, Baylor College of Medicine, tells GEN that one of the reasons why this technology is so important is because “the field of clinical genetics and dysmorphology is a ‘dying art’—few people truly develop this ‘skill-set’.” Kruszka adds that only about 20 people are trained in clinical genetics each year.

“More importantly,” adds Lupski, “the number of genetic conditions being defined through genomics is exploding. Moreover, there is a growing recognition that 1 in 20 molecularly diagnosed cases have pathogenic variation at two loci, resulting in a ‘blended phenotype’ that can be difficult to recognize clinically.” He adds that “having an objective and better way to quantify a ‘facial gestalt’ is very important for moving forward in a precision medicine era.”

It is important, notes Gripp, to recognize that this paper shows the potential AI can bring to precision medicine in general, beyond facial recognition, and that it is just one example of the tools FDNA is actively working on to bring this technology to a larger scale. Although Face2Gene is the main product built on the DeepGestalt technology, FDNA is developing embedded solutions based on the same technology that can be licensed to other healthcare and tech organizations, allowing them to integrate it within their own platforms.


DeepGestalt: High-level flow and network architecture. (a) A new input image is first preprocessed to achieve face detection, landmarks detection, and alignment. After preprocessing, the input image is cropped into facial regions. Each region is fed into a deep convolutional neural network (DCNN) to obtain a softmax vector indicating its correspondence to each syndrome in the model. The output vectors of all regional DCNNs are then aggregated and sorted to obtain the final ranked list of genetic syndromes. The histogram on the right-hand side represents DeepGestalt’s output syndromes, sorted by the aggregated similarity score. (b) The DCNN architecture of DeepGestalt. A snapshot of an image passing through the network. The network consists of ten convolutional layers, and all but the last are followed by batch normalization and a rectified linear unit (ReLU). After each pair of convolutional (CONV) layers, a pooling layer is applied (maximum pooling after the first four pairs and average pooling after the fifth pair). This is then followed by a fully connected layer with dropout (0.5) and a softmax layer. A sample feature map is shown after each pooling layer. It is interesting to compare the low-level features of the first layers with respect to the high-level features of the final layers; the latter identify more complex features in the input image, and distinctive facial traits tend to emerge while identity-related features disappear. The photograph is published with parental consent and this figure is published with the author’s permission. [Gurovich et al.]
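For readers who want a concrete picture of the layout this caption describes, here is a hedged PyTorch sketch: ten convolutional layers in five pairs, batch normalization and ReLU after every convolution except the last, max pooling after the first four pairs and average pooling after the fifth, then a fully connected layer with dropout (0.5) and a softmax over syndromes. Channel widths, kernel sizes, input resolution, the number of facial regions, and the mean-based aggregation are assumptions for illustration; the paper’s exact hyperparameters may differ.

```python
import torch
import torch.nn as nn

class DeepGestaltLikeDCNN(nn.Module):
    """Sketch of the per-region DCNN described in the figure caption.

    Ten conv layers in five pairs; BN + ReLU after all but the last conv;
    max pooling after the first four pairs, average pooling after the fifth;
    then a fully connected layer with dropout (0.5) and a softmax over
    syndromes. Channel widths and the 100x100 grayscale input are assumed.
    """

    def __init__(self, n_syndromes: int = 216):
        super().__init__()
        layers = []
        channels = [1, 32, 32, 64, 64, 128, 128, 256, 256, 512, 512]  # assumed widths
        for i in range(10):
            layers.append(nn.Conv2d(channels[i], channels[i + 1], kernel_size=3, padding=1))
            if i < 9:  # all but the last conv are followed by BN + ReLU
                layers.append(nn.BatchNorm2d(channels[i + 1]))
                layers.append(nn.ReLU(inplace=True))
            if i % 2 == 1:  # pool after each pair of conv layers
                layers.append(nn.MaxPool2d(2) if i < 9 else nn.AvgPool2d(2))
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),
            nn.LazyLinear(n_syndromes),  # fully connected layer, sized on first call
        )

    def forward(self, x):
        # Softmax vector: the region's similarity score for each syndrome.
        return torch.softmax(self.classifier(self.features(x)), dim=1)

# Aggregation across facial regions, as in panel (a): average the regional
# softmax vectors, then sort to get the ranked list of syndromes (illustrative).
model = DeepGestaltLikeDCNN().eval()  # eval mode disables dropout for inference
regions = [torch.randn(1, 1, 100, 100) for _ in range(4)]  # cropped face regions
with torch.no_grad():
    agg = torch.stack([model(r) for r in regions]).mean(dim=0)
ranked = torch.argsort(agg, dim=1, descending=True)  # top of the list = best match
```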