This two-photon imaging video shows neurons firing in a mouse brain. Recordings like this let researchers track which neurons fire and how that activity may correspond to different behaviors. [Duke University]

Biomedical scientists at Duke University say they have developed an automated process that can trace the shapes of active neurons as accurately as human researchers can, but in a fraction of the time. The new technique, which uses artificial intelligence to interpret video images, addresses a critical roadblock in neuron analysis, allowing researchers to rapidly gather and process neuronal signals for real-time behavioral studies, according to the team, whose study ("Fast and Robust Active Neuron Segmentation in Two-Photon Calcium Imaging Using Spatiotemporal Deep Learning") appears in PNAS.

“Calcium imaging records large-scale neuronal activity with cellular resolution in vivo. Automated, fast, and reliable active neuron segmentation is a critical step in the analysis workflow of utilizing neuronal signals in real-time behavioral studies for the discovery of neuronal coding properties. Here, to exploit the full spatiotemporal information in two-photon calcium imaging movies, we propose a 3D convolutional neural network to identify and segment active neurons. By utilizing a variety of two-photon microscopy datasets, we show that our method outperforms state-of-the-art techniques and is on a par with manual segmentation,” the investigators wrote.

“Furthermore, we demonstrate that the network trained on data recorded at a specific cortical layer can be used to accurately segment active neurons from another layer with different neuron density. Finally, our work documents significant tabulation flaws in one of the most cited and active online scientific challenges in neuron segmentation. As our computationally fast method is an invaluable tool for a large spectrum of real-time optogenetic experiments, we have made our open-source software and carefully annotated dataset freely available online.”

While these recordings are useful for behavioral studies, identifying individual neurons in them is a painstaking process, according to Sina Farsiu, PhD, the Paul Ruffin Scarborough Associate Professor of Engineering in Duke's Department of Biomedical Engineering (BME). Currently, the most accurate method requires a human analyst to circle every "spark" they see in the recording, often stopping and rewinding the video until the targeted neurons are identified and saved. To further complicate the process, investigators are often interested in identifying only a small subset of active neurons that overlap in different layers within the thousands of neurons that are imaged.

This process, called segmentation, is slow. A researcher can spend anywhere from four to 24 hours segmenting neurons in a 30-minute video recording, and that’s assuming they’re fully focused for the duration and don’t take breaks to sleep, eat, or use the bathroom.

In contrast, a new open-source automated algorithm developed by image-processing and neuroscience researchers in Duke BME can accurately identify and segment neurons in minutes, Farsiu noted.

“As a critical step towards complete mapping of brain activity, we were tasked with the formidable challenge of developing a fast automated algorithm that is as accurate as humans for segmenting a variety of active neurons imaged under different experimental settings,” he continued.

“The data analysis bottleneck has existed in neuroscience for a long time—data analysts have spent hours and hours processing minutes of data, but this algorithm can process a 30-minute video in 20 to 30 minutes,” added Yiyang Gong, PhD, an assistant professor in Duke BME. “We were also able to generalize its performance, so it can operate equally well if we need to segment neurons from another layer of the brain with different neuron sizes or densities.”

“Our deep learning-based algorithm is fast, and is demonstrated to be as accurate as (if not better than) human experts in segmenting active and overlapping neurons from two-photon microscopy recordings,” said Somayyeh Soltanian-Zadeh, a PhD student in Duke BME and first author on the paper.

Deep-learning algorithms allow researchers to quickly process large amounts of data by sending it through multiple layers of nonlinear processing units, which can be trained to identify different parts of a complex image. In their framework, the team created an algorithm that processes both the spatial and the temporal information in the input videos. They then trained the algorithm to mimic the segmentation of a human analyst while improving on its accuracy.
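The core idea, pooling evidence across both space and time, can be illustrated with a minimal NumPy sketch. This is not the authors' network: their method uses a trained 3D convolutional neural network, whereas the toy example below slides a single hand-set 3D averaging filter over a synthetic calcium-imaging movie and thresholds the response to flag pixels belonging to a simulated transient. All array sizes, the kernel, and the threshold are illustrative assumptions.

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 'valid' 3D correlation of a T x H x W movie with a small
    spatiotemporal kernel. Real networks use learned filters and
    optimized GPU kernels; this loop version is purely illustrative."""
    t, h, w = kernel.shape
    T, H, W = volume.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i + t, j:j + h, k:k + w] * kernel)
    return out

# Synthetic "movie": background noise plus one small region that
# brightens for several frames, mimicking a calcium transient.
rng = np.random.default_rng(0)
movie = rng.normal(0.0, 0.1, size=(10, 16, 16))
movie[3:7, 8:11, 8:11] += 1.0  # the simulated active neuron

# A 4-frame x 3x3-pixel averaging kernel pools evidence over space
# and time -- the operation a 3D CNN layer generalizes with learning.
kernel = np.ones((4, 3, 3)) / (4 * 3 * 3)
response = conv3d_valid(movie, kernel)

# Thresholding the peak response over time gives a crude
# segmentation mask that localizes the transient.
mask = response.max(axis=0) > 0.5
print("pixels flagged active:", int(mask.sum()))
```

Because the filter integrates over frames as well as pixels, brief noise flickers average out while sustained transients stand out, which is the intuition behind using spatiotemporal rather than frame-by-frame processing.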

Gong is already using the new method to more closely study the neural activity associated with different behaviors in mice. By better understanding which neurons fire for different activities, Gong hopes to learn how researchers can manipulate brain activity to modify behavior.

“This improved performance in active neuron detection should provide more information about the neural network and behavioral states, and open the door for accelerated progress in neuroscience experiments,” said Soltanian-Zadeh.
