Just as cartographers have created manageable maps of our planet, enabling travel and development, our brain maps diverse sensory inputs onto the credit-card-sized cerebral cortex to enable perception and understanding. For reasons that remain unclear, different facets of visual stimuli are mapped in the primary visual cortex in patterns that differ across species.

In a new study published on April 28, 2022, in Nature Communications (“A theory of cortical map formation in the visual brain”), scientists from the SUNY College of Optometry in New York, Caltech in Pasadena, and Charité-Universitätsmedizin Berlin analyzed electrophysiological recordings of neuronal responses to visual stimuli to develop a computational model that formulates a general theory of how the cerebral cortex maps the visual world.

“In a series of papers that started with my postdoctoral studies, my laboratory has been providing increasingly stronger evidence that the orientation maps described by Hubel and Wiesel emerge from the cortical mapping of spatial position provided by two major visual pathways that signal stimulus light-dark polarity, the ON (light) and OFF (dark) pathways,” said senior author Jose-Manuel Alonso, PhD, MD, professor of biological and vision sciences at the SUNY College of Optometry. “As the evidence became stronger, we started working on a theory of cortical map formation that could fully explain a large body of experimental data, including data already available in the scientific literature and new data that we obtained to specifically test the theory.”

The theory of cortical map formation proposed in the paper suggests that interspecies diversity in these patterns emerges from species-to-species variations in the density of thalamic afferents that sample sensory space.

Each point in the visual field is mapped onto the two-dimensional cerebral cortex by neurons with circular receptive fields that sample the input’s spatial position, eye of origin (right or left), polarity (light or dark), and orientation. Increasing the number of inputs per visual point therefore improves visual sampling, but it also demands larger visual cortical areas and, in turn, larger brains.
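To make that trade-off concrete, the back-of-the-envelope sketch below (our illustration, not a calculation from the paper) assumes each thalamic afferent claims a roughly fixed patch of cortex, so the cortical territory devoted to one visual point grows in proportion to how many afferents sample it; every number in it is hypothetical.

```python
# Back-of-the-envelope sketch (not from the paper): if each thalamic afferent
# claims a roughly fixed patch of cortex, the cortical territory devoted to a
# single visual point scales with afferent sampling density.
# All numbers below are hypothetical, chosen only to illustrate the scaling.
afferents_per_visual_point = {"species_A": 5, "species_B": 50}  # hypothetical densities
cortex_per_afferent_mm2 = 0.01                                  # hypothetical constant

for species, density in afferents_per_visual_point.items():
    area_mm2 = density * cortex_per_afferent_mm2
    print(f"{species}: ~{area_mm2:.2f} mm^2 of cortex per visual point")
```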

Through analysis of electrophysiological recordings across species, the authors found that, over evolution, increases in visual sampling (afferent sampling density) enlarge the area of visual cortex that represents the same visual point. This enlargement allows input signals and cortical areas to segregate along multiple dimensions, and it allows the inputs to combine into clusters of cortical neurons that maximize the diversity of stimuli extracted from each visual point. This maximization process, based on the sorting and mixing of input signals in the visual cortex, creates a pinwheel pattern for stimulus orientation and forms rich, multi-dimensional maps of our visual world.
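The sketch below illustrates the flavor of this sorting process with a toy construction that is not the authors’ published model: each simulated cortical unit inherits its nearest ON and OFF afferent centers (the placement, counts, and nearest-afferent rule are all simplifying assumptions) and takes its preferred orientation from the ON-OFF offset, which produces smooth orientation domains broken by pinwheel-like singularities.

```python
# Toy sketch (not the authors' model): orientation preference derived from the
# spatial offset between ON and OFF afferent inputs on a cortical sheet.
# Grid size and afferent counts are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
grid = 64                                  # cortical sheet, grid x grid units
n_afferents = 40                           # hypothetical afferents per polarity

# Scatter ON and OFF afferent centers across the cortical sheet
on_centers = rng.uniform(0, grid, size=(n_afferents, 2))
off_centers = rng.uniform(0, grid, size=(n_afferents, 2))

ys, xs = np.mgrid[0:grid, 0:grid]
points = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)

def nearest(centers, pts):
    # For each cortical point, return the coordinates of its closest afferent center
    d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
    return centers[d.argmin(axis=1)]

# Each cortical unit inherits its nearest ON and nearest OFF afferent;
# its preferred orientation is set by the ON-OFF offset vector.
on_near = nearest(on_centers, points)
off_near = nearest(off_centers, points)
offset = on_near - off_near
orientation = np.degrees(np.arctan2(offset[:, 1], offset[:, 0])) % 180.0
orientation_map = orientation.reshape(grid, grid)

print(orientation_map.shape)  # (64, 64) map of preferred orientations, 0-180 deg
```

Plotting `orientation_map` with a cyclic colormap (for example, matplotlib’s hsv) shows orientation domains converging on point singularities, a qualitative analogue of the pinwheels described in the study.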

A simulated cortical map (Credit: Sohrab Najafian).

The authors noted, “We illustrate the theory with an afferent-density model that accurately replicates the maps of different species through afferent segregation followed by thalamocortical convergence pruned by visual experience.”

The significance of the proposed model is that it explains the formation of maps for multiple stimulus dimensions (spatial position, eye dominance, light/dark polarity, orientation, spatial resolution, and low-pass spatial-frequency filtering) in the primary visual cortex across species. The work uses computational modeling and electrophysiological recordings to test multiple predictions of the theory.

“The theory/model replicates with exquisite accuracy a large body of neuronal data collected by multiple labs across the world including our own over several decades,” said Alonso. “Our theory has three important features that greatly increase its biological significance: First, the theory relies on the sorting of thalamic afferents by function, an extremely well-preserved principle of afferent organization in the mammalian brain; second, it follows very closely all major developmental stages of the primary visual cortex; and third, it predicts a close relation among cortical topographies for multiple stimulus dimensions that can be used to simulate very accurate visual maps in the future.”

“The authors have discovered that the specific organization of a cortical two-dimensional map arises as a function of the interplay between the density of synaptic boutons and the nature of the information arriving to the cortical map,” said Stephen Macknik, PhD, a professor of ophthalmology, neurology, and physiology & pharmacology at SUNY Downstate Health Sciences University, who was not involved in the current study. “The visual cortex is arranged for four dimensions. This new discovery describes how the organization of these four dimensions follows from the density of synaptic boutons arriving into the cortex to describe each dimension.”

The authors speculate that the model proposed in this study may also explain sensory mapping in other areas of the brain, since neural tracts use similar mechanisms for axon segregation and pruning.

Macknik said, “What is truly exciting about this fundamental discovery is that it will generalize to all regions of the cortex, including those that are not already well understood. In the case of the visual cortex, we already knew the map, but with this new discovery we can predict the map of any cortical region, even when we do not know its function, by determining the density of the boutons entering the region. It is relatively easy microscopy work to determine this density number, compared to the years or decades of neuroscientific research necessary to determine the precise function of a cortical region, so this discovery stands as one of the most important discoveries ever made in cortical research in the brain.”

The model’s ability to predict topographic relations among all stimulus dimensions represented in a visual map may facilitate accurate reconstructions of the multi-dimensional maps needed for cortical implants. In future studies, Alonso’s team and collaborators will focus on the functional implications of the model for other brain maps, human visual perception, and the diagnosis and treatment of visual disorders.
