Scientists from Ecole Polytechnique Fédérale de Lausanne (EPFL) and Harvard report the development of an AI method to track neurons inside moving and contorted animals. The study “Automated neuron tracking inside moving and deforming animals using deep learning and targeted augmentation,” published in Nature Methods, was led by Sahand Jamal Rahi, PhD, at EPFL’s School of Basic Sciences.
“Reading out neuronal activity from three-dimensional (3D) functional imaging requires segmenting and tracking individual neurons. This is challenging in behaving animals if the brain moves and deforms. The traditional approach is to train a convolutional neural network with ground-truth (GT) annotations of images representing different brain postures,” write Rahi and his colleagues.
“For 3D images, this is very labor intensive. We introduce targeted augmentation, a method to automatically synthesize artificial annotations from a few manual annotations. Our method (‘Targettrack’) learns the internal deformations of the brain to synthesize annotations for new postures by deforming GT annotations. This reduces the need for manual annotation and proofreading.”
“A graphical user interface allows the application of the method end-to-end. We demonstrate Targettrack on recordings where neurons are labeled as key points or 3D volumes. Analyzing freely moving animals exposed to odor pulses, we uncover rich patterns in interneuron dynamics, including switching neuronal entrainment on and off.”
“The breakthrough has the potential to accelerate research in brain imaging and deepen our understanding of neural circuits and behaviors,” says Rahi.
Convolutional neural network
The new technique is based on a convolutional neural network (CNN), a type of AI trained to recognize patterns in images. The core operation, called “convolution,” examines small parts of the image at a time, picking up local features such as edges, colors, or shapes, and then combines that information to identify objects or patterns.
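To make the convolution operation concrete, here is a minimal sketch in NumPy (an illustration of the general principle, not the authors' code): a small kernel slides across an image, and the weighted sum at each position produces a strong response wherever the local pattern matches the kernel, in this case a left-to-right brightness edge.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small kernel over the image, taking a weighted sum at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A simple vertical-edge filter: responds where intensity changes left to right.
edge_kernel = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]])

# Synthetic 6x6 image: dark left half, bright right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

response = convolve2d(image, edge_kernel)
# The response is nonzero only in the columns straddling the edge.
```

A CNN learns many such kernels from data rather than using hand-designed ones, and stacks them in layers so that later layers detect increasingly complex patterns.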
The problem is that identifying and tracking neurons across a movie of an animal’s brain requires many hand-labeled images, because the animal’s appearance changes over time as its body contorts. Given the diversity of postures, manually generating enough annotations to train a CNN can be daunting.
To address this, the researchers developed an enhanced CNN featuring targeted augmentation. The approach automatically synthesizes reference annotations from only a limited set of manual ones: the CNN learns the internal deformations of the brain and then uses them to create annotations for new postures, drastically reducing the need for manual annotation and proofreading.
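The underlying idea can be sketched as follows. This is a deliberately simplified illustration, not the authors' implementation: a smooth sinusoidal bend stands in for the learned internal deformation of the brain, and the function name and parameters are hypothetical. Applying such a deformation to a manually annotated set of neuron key points yields a synthetic annotation for an artificial new posture.

```python
import numpy as np

def deform_points(points, amplitude=2.0, wavelength=40.0):
    """Apply a smooth synthetic bending deformation to 2D key points.

    In targeted augmentation, a deformation mapping (here an illustrative
    sinusoidal bend) transfers annotations from a reference posture
    to a new posture, creating extra training data from one manual label.
    """
    deformed = points.copy().astype(float)
    # Shift each point's y-coordinate by an amount that varies smoothly
    # along x, mimicking a body bend.
    deformed[:, 1] += amplitude * np.sin(deformed[:, 0] / wavelength * 2 * np.pi)
    return deformed

# One manually annotated posture: neuron key points as (x, y) pairs.
gt_points = np.array([[0.0, 10.0], [5.0, 10.0], [10.0, 10.0], [15.0, 10.0]])

# Synthesize an annotation for a new, deformed posture from the single GT set.
augmented = deform_points(gt_points)
```

In the actual method, the deformation is learned from the recording itself rather than prescribed, and the same warp is applied to image and annotation together so the pair remains consistent training data.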
The new method is reportedly able to identify neurons whether they are represented in images as individual points or as 3D volumes. The researchers tested it on the roundworm Caenorhabditis elegans, whose 302 neurons have made it a popular model organism in neuroscience.
Using the enhanced CNN, the scientists measured activity in some of the worm’s interneurons and found that these neurons exhibit complex behaviors, such as changing their response patterns when exposed to different stimuli, including periodic bursts of odors.
The team made its CNN accessible through a user-friendly graphical user interface that integrates targeted augmentation, streamlining the process into a comprehensive pipeline from manual annotation to final proofreading, according to Rahi.