Our senses constantly inundate the brain with a flood of information, and the brain must continuously tune out distractions to focus on what matters. How organisms represent the external world internally is a key question for neuroscientists, psychologists, and computer scientists, but the strategies the brain uses to represent the world efficiently remain understudied.

A paper published on June 6, 2022, reports a study conducted on mice that claims sensory systems compress representations of the external world in the brain while preserving information. By modeling dopamine neuron activity and behavior during a time-restricted decision-making task, the scientists demonstrate that cognitive systems in the brain compress representations if overall rewards are preserved.

The study, published in Nature Neuroscience under the title “Efficient coding of cognitive variables underlies dopamine responses and choice behavior,” resulted from a collaboration of scientists from the neuroscience program at the Champalimaud Foundation in Lisbon, Portugal; the department of electrical and computer engineering at Carnegie Mellon University; and Harvard Medical School in Boston.

“Our goal was to try and understand the form of internal cognitive representations in the brain. The world as you see it is a construction that your brain actively creates. We were trying to understand what this construction looks like and whether its form could help us to derive general principles by which the brain constructs our internal sense of the world,” said co-senior author of the study, Joe Paton, PhD, director of the Champalimaud Neuroscience Research Program.

Another co-senior author of the study, Christian Machens, PhD, head of the theoretical neuroscience lab at the Champalimaud Foundation, said, “Compressing the representations of the external world is akin to eliminating all irrelevant information and adopting temporary ‘tunnel vision’ of the situation.”

The authors claim these findings have broad implications for neuroscience and artificial intelligence (AI). Paton said, “While the brain has clearly evolved to process information efficiently, AI algorithms often solve problems by brute force, using lots of data and parameters. Our work provides a set of principles to guide future studies on how internal representations of the world may support intelligent behavior in the context of biology and AI.”

Lead author of the study, Asma Motiwala, PhD, said, “By modelling dopamine neuron activity and behavior in a time-based decision-making task, we reveal signatures of a core principle that may shape internal representations for behavior and cognition.”

The researchers used a task in which mice had to determine whether two tones were separated by an interval longer or shorter than 1.5 seconds to receive a reward, while the activity of their dopamine neurons was recorded. Machens said, “It’s well known that dopamine neurons play a key role in learning the value of actions. If the animal wrongly estimated the duration of the interval on a given trial, then the activity of these neurons would produce a ‘prediction error’ that should help improve performance on future trials.”

Midbrain dopamine neurons act like teaching signals in AI algorithms. Studying dopamine activity therefore offers a route to uncovering principles of information representation in neural circuits.
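The teaching signal referred to here is the temporal-difference (TD) reward prediction error familiar from reinforcement learning. The following minimal sketch illustrates the general idea only; the function names and numbers are illustrative assumptions, not the paper's actual model.

```python
# Sketch of a temporal-difference (TD) reward prediction error, the kind of
# "teaching signal" that midbrain dopamine activity is thought to resemble.
# All parameter values here are illustrative, not taken from the study.

def td_error(reward, value_next, value_current, gamma=0.9):
    """delta = r + gamma * V(s') - V(s): positive when the outcome is
    better than predicted, negative when it is worse."""
    return reward + gamma * value_next - value_current

def update_value(value_current, delta, alpha=0.1):
    """Nudge the value estimate toward the observed outcome."""
    return value_current + alpha * delta

value = 0.0
for _ in range(50):  # repeated rewarded trials
    delta = td_error(reward=1.0, value_next=0.0, value_current=value)
    value = update_value(value, delta)

# As the reward becomes predicted, the prediction error shrinks toward zero,
# mirroring the dampening of dopamine responses to fully expected rewards.
```

In this toy loop the value estimate converges toward the reward, and the prediction error fades, which is the sense in which dopamine activity can act as a learning signal.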

Paton explains, “Imagine you wanted to know the formation of a football team on the field, but you only had access to a video of the crowd at the stadium. Because you know that the crowd tends to look at the ball, if you track where the crowd is looking, you can infer where the players are, and thus the team’s formation. Now imagine you want to know the principles that coaches use to choose formations. You might be able to show that the formations used tend to maximize goals scored in relation to goals suffered, while at the same time minimizing the distance that players need to run.”

Motiwala built different computational reinforcement learning models to test which best captured both the neuronal activity and the animals’ behavior. The models shared common principles but differed in how they represented the task information. The researchers found that only models that compressed the task representation could account for the data.

Machens said, “The brain seems to eliminate all irrelevant information. Curiously, it also apparently gets rid of some relevant information, but not enough to take a real hit on how much reward the animal collects overall. It clearly knows how to succeed in this game.”

The information represented captured variables of the task and the animal’s actions. “Previous research has focused on the features of the environment independent of the individual’s behavior. But we found that only compressed representations that depended on the animal’s actions fully explained the data,” said Motiwala. “Our study is the first to show that the way representations of the external world are learnt, especially in taxing tasks, may depend on and interact in unusual ways with how animals choose to act.”

The team also found that a key behavioral signature of this interaction, in both their model and the mice, was procrastination on a subset of challenging decisions. Paton said, “When the mice were most uncertain about the correct choice, they tended to procrastinate in making their decision, until their limited representation of the task fooled them into thinking they were more likely to get the correct answer.”

Paton believes this work, which helps clarify how the brain transforms the external world into internal representations, also provides a set of principles by which AI algorithms might profitably do the same.

Motiwala said, “Our work shows an important set of interactions between actions and representations that may come about only during end-to-end training with reward-based signals. This is likely not only to shed light on how these networks operate, but also to open new perspectives on how neural representations may be learnt by different interacting brain systems.”

In their future studies, the scientists intend to investigate the representations of cognitive variables in other areas of the brain that drive dopamine neuron activity in this timing task, and extend the computational model to incorporate additional factors.
