Recognizing faces is an ability one may consider elemental, but it involves complex and enigmatic neural processing. Understanding how the brain processes facial information about those close to us versus those we have merely seen offers an opportunity to parse this neural crosstalk in typical individuals, and it offers clues about neuropsychiatric disorders such as autism, in which facial recognition and interpersonal interactions are compromised.
A study led by scientists at Dartmouth College shows that while faces of those we know and those we have just seen activate the same visual processing regions of the brain, only faces of those we know activate regions that process nonvisual cues such as social, semantic, and emotional information.
The study is published in the journal Proceedings of the National Academy of Sciences in an article titled, “Shared neural codes for visual and semantic information about familiar faces in a common representational space.” The study suggests that individually distinctive information about personally familiar faces is embedded in a neural code that is not specific to a given set of faces but is shared across brains.
“Within visual processing areas, we found that information about personally familiar and visually familiar faces is shared across the brains of people who have the same friends and acquaintances,” said Matteo Visconti di Oleggio Castello, PhD, first author of the study. Visconti di Oleggio Castello conducted this research as a graduate student in psychological and brain sciences at Dartmouth and is now a neuroscience postdoctoral scholar at the University of California, Berkeley. “The surprising part of our findings was that the shared information about personally familiar faces also extends to areas that are nonvisual and important for social processing, suggesting that there is shared social information across brains.”
In the study, the researchers used between-subject linear classifiers trained on hyperaligned brain data to decipher the processing of visual and nonvisual information of faces familiar to the test participants. The scientists observed that while the identity of both visually and personally familiar faces could be decoded across participants from activity in the brain’s visual processing areas, only the identity of personally familiar faces could be decoded in areas involved in social cognition.
“Our current study demonstrates the existence of a shared neural code within areas involved in social cognition,” Visconti di Oleggio Castello said. “We hypothesize that this shared neural code might encode a shared person knowledge conceptual space which might help us communicate with our close others about mutual friends and acquaintances.”
The ability to recognize familiar faces is critical in shaping appropriate social behaviors. This involves not simply processing visual information but also social and personal knowledge about a familiar person.
Visconti di Oleggio Castello said, “Our findings and methodological approach might help elucidate impairments in social interactions for some classes of disorders. For example, hyperalignment could be used to create a common model that can predict what brain responses to familiar individuals would look like in healthy individuals. By comparing these predictions to brain responses in individuals with impaired social processing, it could be possible to localize brain areas that might suffer from functional disorders and characterize functional patterns of brain responses that deviate from the range of healthy responses.”
The current study used two new methods to study face and identity perception: hyperalignment and between-subject classifiers. Earlier work from the group showed the feasibility of using the hyperalignment approach in predicting face-responsive areas.
“Hyperalignment allowed us to align the participants’ brain responses to familiar faces into a common representational space. In previous research, we showed that hyperalignment outperforms common alignment methods (such as anatomical alignment), while still preserving fine-scale, detailed information about brain responses of individual participants,” said Visconti di Oleggio Castello. “Between-subject classifiers allowed us to decode what stimulus a participant was looking at based on the brain responses of other participants. This approach constituted a direct test for the existence of shared information across the brains of different individuals.”
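The rotation at the heart of hyperalignment can be illustrated with a minimal sketch. The code below is not the authors' implementation; it is a simplified, one-step orthogonal Procrustes alignment on synthetic data (the actual method iterates over many subjects and uses fMRI response patterns), showing how one subject's responses can be rotated into another's representational space. All variable names and data here are illustrative assumptions.

```python
import numpy as np

def procrustes_align(source, target):
    """Return the orthogonal rotation R minimizing ||source @ R - target||.

    Hyperalignment applies transforms of this kind to rotate each
    subject's voxel space into a shared representational space.
    """
    # SVD of the cross-covariance gives the optimal rotation
    u, _, vt = np.linalg.svd(source.T @ target)
    return u @ vt  # orthogonal matrix, shape (voxels, voxels)

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 50, 20
reference = rng.standard_normal((n_timepoints, n_voxels))

# Simulate a second subject whose responses are a rotated version of the
# reference plus a little noise -- the situation hyperalignment targets.
true_rotation = np.linalg.qr(rng.standard_normal((n_voxels, n_voxels)))[0]
subject = reference @ true_rotation + 0.01 * rng.standard_normal((n_timepoints, n_voxels))

R = procrustes_align(subject, reference)
aligned = subject @ R

err_before = np.linalg.norm(subject - reference)
err_after = np.linalg.norm(aligned - reference)
print(err_before, err_after)  # alignment should shrink the mismatch
```

Because the rotation is orthogonal, it reorganizes the voxel dimensions without destroying the fine-scale response geometry within each subject, which is the property the quote above emphasizes.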
Hyperalignment creates a common representational space for understanding how brain activity is similar between participants. The team used data from three fMRI tasks with 14 graduate students who had known each other for at least two years. In two tasks, participants were presented with images of four other personally familiar graduate students and four visually familiar people unknown to them. In a third task, participants watched parts of the movie The Grand Budapest Hotel. Hyperalignment was then applied to these fMRI data to align participants’ responses into a common representational space, enabling the team to use machine learning classifiers to predict what stimuli a participant was looking at based on the brain activity of the other participants.
The results showed that the identity of visually familiar and personally familiar faces is decoded with accuracy across the brain in areas that are mostly involved in visual processing of faces. For visually familiar identities, participants only know what the stimuli looks like and have no other information about them.
In decoding personally familiar identities, the results showed more shared information across the brains of the participants. There is high decoding accuracy in four other areas outside of the visual system including the dorsal medial prefrontal cortex involved in social processing, the precuneus involved in processing personally familiar faces, the insula involved in emotional processing, and the temporal parietal junction, important in social cognition and in representing the mental states of others, a phenomenon popularly known as the “theory of the mind.”
“This shared conceptual space for the personal knowledge of others allows us to communicate with people that we know in common,” said Maria Ida Gobbini, MD, PhD, a research associate professor in the cognitive science program at Dartmouth, associate professor in the department of experimental, diagnostic and specialty medicine at the University of Bologna, and senior author on the study. “When we see someone we know, we activate immediately who that person is. This is what allows us to interact in the most appropriate way with someone who is familiar.”
Co-author James Haxby, PhD, professor of psychological and brain sciences at Dartmouth said, “It would’ve been quite possible that everybody has their own private code for what people are like but this is not the case. Our research shows that processing familiar faces really has to do with general knowledge about people.”
The team intends to explore two lines of investigation in its future work, added Visconti di Oleggio Castello, “First, we will investigate the dimensions of the shared person knowledge space and how they map onto psychological dimensions. Second, we will investigate the role of individual differences and how they map onto the shared representational space.”