Mice are always in motion: sweeping their whiskers, sniffing their environment, and grooming themselves. So too are neurons in the brain, even in the absence of external sensory stimuli or a behavioral task. Spontaneous actions activate neurons across many different regions of the brain, providing a moment-by-moment neural representation of what the animal is doing. But how the brain uses these persistent, widespread signals remains unknown.
Now, scientists have developed a tool that could advance the understanding of brain-wide signals driven by spontaneous behaviors. The tool, called Facemap, uses deep neural networks to relate information about a mouse’s eye, whisker, nose, and mouth movements to neural activity in the brain. More specifically, Facemap is a “framework consisting of a keypoint tracker and a deep neural network encoder for predicting neural activity.”
This research is published in Nature Neuroscience in the paper, “Facemap: a framework for modeling neural activity based on orofacial tracking.”
Atika Syeda, a graduate student in the lab of Carsen Stringer, PhD, a group leader at HHMI Janelia Research Campus, explained: “The goal is [to uncover]: What are those behaviors that are being represented in those brain regions? And, if a lot of that information is in the facial movements, then how can we track that better?”
Stringer and colleagues had previously found that activity in many different areas across a mouse’s brain, long thought to be background noise, is in fact driven by these spontaneous behaviors. Still unclear, however, was how the brain uses this information.
“The first step in really answering that question is understanding what are the movements that are driving this activity, and what exactly is represented in these brain areas,” Stringer said. “All of these different brain areas are driven by these movements, which is why we think it is really important to get a better handle on what these movements actually are because our previous techniques really couldn’t tell us what they were,” she explained.
To address this shortcoming, the team looked at 2,400 video frames and labeled distinct points on the mouse face corresponding to different facial movements associated with spontaneous behaviors. They homed in on 13 key points on the face that represent individual behaviors, like whisking, grooming, and licking.
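For illustration, labeled training data of this kind amounts to one (x, y) pixel coordinate per tracked point per frame. Below is a minimal sketch in Python; the keypoint names are hypothetical placeholders grouped by face region, not the exact label set used in the paper:

```python
import numpy as np

# Hypothetical keypoint names for illustration only; the paper tracks
# 13 points on the eye, whiskers, nose, and mouth, but these labels
# are placeholders, not Facemap's actual label set.
KEYPOINTS = [
    "eye_front", "eye_back", "eye_top", "eye_bottom",
    "whisker_1", "whisker_2", "whisker_3",
    "nose_tip", "nose_top", "nose_bottom", "nose_side",
    "mouth_corner", "lower_lip",
]

n_frames = 2400  # number of labeled video frames reported above

# One (x, y) pixel coordinate per keypoint per labeled frame.
labels = np.zeros((n_frames, len(KEYPOINTS), 2), dtype=np.float32)
```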
Here, a video of a mouse face has been edited to label 13 key points that correspond to different facial movements associated with individual spontaneous behaviors, like whisking, grooming, and licking. [Atika Syeda/HHMI Janelia Research Campus]
The team first developed a neural network-based model that could identify these key points in videos of mouse faces collected in the lab under various experimental setups. They then developed a second deep neural network model to relate this keypoint data, which represents mouse movement, to neural activity, allowing them to see how a mouse’s spontaneous behaviors drive neural activity in a particular brain region.
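The core idea of the second step can be sketched with a toy model: regress recorded neural activity onto a short window of tracked keypoint coordinates. The PyTorch snippet below is purely illustrative, a generic fully connected network with made-up dimensions, not the deep network encoder described in the paper:

```python
import torch
import torch.nn as nn

# Illustrative dimensions, not taken from the paper.
n_keypoints, window, n_neurons = 13, 10, 500

# Toy encoder: a window of keypoint coordinates in, predicted
# activity for each recorded neuron out.
model = nn.Sequential(
    nn.Flatten(),  # (batch, window, 13, 2) -> (batch, window * 26)
    nn.Linear(window * n_keypoints * 2, 256),
    nn.ReLU(),
    nn.Linear(256, n_neurons),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# keypoints: tracked (x, y) coordinates over a short time window
# activity:  simultaneously recorded neural activity (random here)
keypoints = torch.randn(32, window, n_keypoints, 2)
activity = torch.randn(32, n_neurons)

optimizer.zero_grad()
loss = loss_fn(model(keypoints), activity)
loss.backward()
optimizer.step()
```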
Facemap is more accurate and faster than previous methods used to track orofacial movements and behaviors in mice. It is also designed specifically for mouse faces and comes pretrained to track many different mouse movements. As a result, the model can predict twice as much neural activity in mice as prior methods.
Facemap is freely available and easy to use. Hundreds of researchers around the world have already downloaded the tool since it was released last year.
“This is something that if anyone wanted to get started, they could download Facemap, run their videos, and get their results on the same day,” Syeda said. “It just makes research, in general, much easier.”
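For readers who want to try it, the route documented in the Facemap repository (github.com/MouseLand/facemap) is to install the Python package and launch its graphical interface. A minimal sketch follows, wrapped in Python here for illustration; the two commands are normally typed directly into a terminal and may change between versions:

```python
import subprocess

# Install the package (documented install route: pip install facemap).
subprocess.run(["python", "-m", "pip", "install", "facemap"], check=True)

# Launch the graphical interface (documented as: python -m facemap),
# where videos can be loaded, tracked, and processed.
subprocess.run(["python", "-m", "facemap"], check=True)
```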