Scientists have developed a high-throughput artificial intelligence-based imaging technique that can automatically screen for subtle morphological changes associated with genetic manipulation in Caenorhabditis elegans nematodes. The Georgia Institute of Technology team claims the platform is over 100 times faster than manual screening approaches, is far more sensitive, and could dramatically improve the way high-throughput, high-content screening using the model system is carried out. The platform was devised by Hang Lu, Ph.D., and colleagues as a means to speed their research focused on the study of genes involved in the development of synapses.
The emergence of automated cell microscopy, sophisticated sample-handling technologies, and analytical software means large-scale, high-throughput imaging and drug screening studies in cultured cells are already possible. However, the researchers point out, while automated screening of whole organisms using modified FACS or multiwell-plate methods has increased imaging and screening throughput, the low resolution of existing technologies makes it hard to pick out subtle morphological changes of interest in complex organisms. “A primary obstacle is that automated screening requires equipment that can robustly handle large sample numbers and a system for extracting and interpreting high-content imaging data,” they state.
Dr. Lu’s team had previously developed a microfluidic worm-sorting device to speed up the process of examining the organisms under the microscope, but to date, evaluating morphological differences between worms had to be carried out either manually or using a simple computer algorithm that couldn’t pick out subtle changes. The new system combines the microfluidic device, computer-vision tools, and a statistical framework to classify animals. Essentially, the microfluidic device allows animals expressing a fluorescent reporter to be imaged rapidly and sorted. The images are processed using a computer-vision algorithm, and animals displaying even subtle differences from the wild type are sorted for further study. When the team applied the system to their studies of synaptogenesis, it not only dramatically increased throughput, but also detected phenotypic changes that could not be discerned by eye.
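In outline, the image-then-sort loop the article describes could be sketched as follows. Every function name here is hypothetical; the actual platform couples a microfluidic chip with the team’s own computer-vision software, and the toy “features” below stand in for real image analysis.

```python
# Illustrative sketch of the image-and-sort loop: each animal is imaged in
# the microfluidic device, its image is reduced to features, and a classifier
# routes it either to further study or back to the general population.
# All names are hypothetical stand-ins, not the authors' actual code.

def screen_population(images, extract_features, differs_from_wild_type):
    """Route each imaged animal to 'keep' (phenotype of interest) or 'passed'."""
    keep, passed = [], []
    for image in images:                        # each animal imaged in the chip
        features = extract_features(image)      # computer-vision step
        if differs_from_wild_type(features):    # statistical classification
            keep.append(image)                  # sorted out for further study
        else:
            passed.append(image)
    return keep, passed

# Usage with toy stand-ins: the "feature" is a single brightness value.
keep, passed = screen_population(
    [9.9, 10.0, 14.2],                  # two wild-type-like animals, one outlier
    extract_features=lambda img: img,   # identity stand-in for image analysis
    differs_from_wild_type=lambda f: abs(f - 10.0) > 1.0,
)
print(keep)    # the outlier
print(passed)  # the wild-type-like animals
```

The value of automating this loop is that the classification step runs at the speed of the sorter rather than of a human observer, which is where the reported throughput gain comes from.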
“We feed the program wild-type images, and it teaches itself to recognize what differentiates the wild type,” explains co-researcher Matthew Crane. “We don’t have to show the computer every possible mutant, and that is very powerful. And the computer never gets bored.”
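The approach Crane describes is a form of one-class, or novelty, detection: the system models only the wild type and flags anything that falls outside that model. A minimal sketch of the idea, assuming a single morphometric feature and a simple z-score threshold (the real system uses richer image features and statistics), might look like this:

```python
# One-class novelty detection in miniature: learn the distribution of a
# morphometric feature (e.g. reporter brightness) from wild-type animals
# only, then flag any animal that deviates strongly from it.
# Feature values and the cutoff are illustrative assumptions.
import statistics

def train_wild_type_model(features):
    """Learn the mean and spread of a feature from wild-type examples."""
    return statistics.mean(features), statistics.stdev(features)

def is_mutant_candidate(feature, model, z_cutoff=3.0):
    """Flag animals whose feature lies far outside the wild-type range."""
    mean, std = model
    return abs(feature - mean) / std > z_cutoff

# Usage: train on wild-type brightness values, then screen new animals.
wild_type = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]
model = train_wild_type_model(wild_type)
print(is_mutant_candidate(10.05, model))  # within the wild-type range
print(is_mutant_candidate(14.0, model))   # far outside it
```

This is what makes the method powerful: no mutant examples are needed for training, so the classifier can flag phenotypes nobody anticipated.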
Importantly, this approach could be applied to study a huge range of biological processes using the Caenorhabditis elegans model system, Dr. Lu continues. “Our automated technique can be generalized to anything that relies on detecting a morphometric—or shape, size, or brightness difference. We can apply this to anything that can be detected visually.”