How the brain develops: a new way to shed light on cognition

Overview: A new computational neuroscience study sheds light on how the brain’s cognitive abilities develop and could help shape new AI research.

Source: University of Montreal

A new study introduces a neurocomputational model of the human brain that could shed light on how the brain develops complex cognitive skills and advance neural AI research.

The study, published Sept. 19, was conducted by an international group of researchers from the Institut Pasteur and Sorbonne Université in Paris, CHU Sainte-Justine, Mila – Quebec Artificial Intelligence Institute and Université de Montréal.

The model, which made the cover of the journal Proceedings of the National Academy of Sciences of the United States of America (PNAS), describes neural development across three hierarchical levels of information processing:

  • the first, sensorimotor level examines how the brain’s intrinsic activity learns patterns from perception and associates them with action;
  • the cognitive level examines how the brain combines those patterns contextually;
  • finally, the conscious level takes into account how the brain distances itself from the outside world and manipulates learned patterns (via memory) that are no longer accessible to perception.

The team’s research provides clues to the core mechanisms underlying cognition thanks to the model’s focus on the interplay between two fundamental types of learning: Hebbian learning, which is associated with statistical regularity (i.e. repetition), or, as neuropsychologist Donald Hebb put it, “neurons that fire together, wire together”; and reinforcement learning, which is associated with reward and the neurotransmitter dopamine.
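In toy form, the two learning rules differ mainly in a reward gate applied to the same co-activity term. The sketch below is purely illustrative (the function names and constants are made up, not the paper's implementation):

```python
import numpy as np

# Toy Hebbian rule: a weight grows when its pre- and postsynaptic
# units are active at the same time ("fire together, wire together").
def hebbian_update(w, pre, post, lr=0.1):
    return w + lr * np.outer(post, pre)

# Toy reward-modulated (reinforcement) rule: the same co-activity term,
# but gated by a dopamine-like scalar reward signal.
def reward_modulated_update(w, pre, post, reward, lr=0.1):
    return w + lr * reward * np.outer(post, pre)

w = np.zeros((2, 3))
pre = np.array([1.0, 0.0, 1.0])   # presynaptic activity
post = np.array([1.0, 0.0])       # postsynaptic activity

w_hebb = hebbian_update(w, pre, post)                       # co-active pairs strengthen
w_rl = reward_modulated_update(w, pre, post, reward=0.0)    # no reward, no change
```

With zero reward the reinforcement rule leaves the weights untouched, while the Hebbian rule strengthens exactly the co-active pre/post pairs.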

The model solves three tasks of increasing complexity at those levels, from visual recognition to cognitive manipulation of conscious perceptions. Each time, the team introduced a new core mechanism to progress.

The results highlight two fundamental mechanisms for the development of multi-level cognitive skills in biological neural networks:

  • synaptic epigenesis, with Hebbian learning at the local scale and reinforcement learning at the global scale;
  • and self-organized dynamics, through spontaneous activity and a balanced excitatory/inhibitory ratio of neurons.
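The second mechanism can be illustrated with a toy rate network in which inhibitory weights are scaled to roughly cancel excitation, so noise-driven spontaneous activity fluctuates without saturating. This is a minimal sketch under assumed parameters (an 80/20 excitatory/inhibitory split is a common cortical approximation), not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_exc, n_inh = 100, 80, 20  # assumed 4:1 excitatory/inhibitory split

# Random positive weights; inhibitory columns are made negative and scaled
# so total inhibition roughly offsets total excitation (the E/I balance).
w = rng.random((n_units, n_units))
w[:, n_exc:] *= -n_exc / n_inh

# Spontaneous dynamics: no external input, only noise and recurrence.
rate = rng.random(n_units)
for _ in range(50):
    rate = np.tanh(w @ rate / n_units + rng.normal(0.0, 0.1, n_units))
```

Because excitation and inhibition nearly cancel, the summed recurrent input stays small and the activity keeps fluctuating in a bounded range rather than exploding or dying out.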

“Our model shows how the convergence of neuro-AI highlights biological mechanisms and cognitive architectures that could fuel the development of the next generation of artificial intelligence and even eventually lead to artificial consciousness,” said team member Guillaume Dumas, an assistant professor of computational psychiatry at UdeM and principal investigator at the CHU Sainte-Justine Research Centre.

Reaching this milestone may require integrating the social dimension of cognition, he added. The researchers are now looking at the integration of biological and social dimensions that play a role in human cognition. The team has already pioneered the first simulation of two whole brains interacting.

Anchoring future computing models in biological and social realities will not only continue to shed light on the core mechanisms underlying cognition, the team believes, but will also help build a unique bridge from artificial intelligence to the only known system with advanced social consciousness: the human brain.

About this computational neuroscience research news

Author: Julie Gazaille
Source: University of Montreal
Contact: Julie Gazaille – University of Montreal
Image: The image is in the public domain

Original research: Open access.
“Multi-level development of cognitive skills in an artificial neural network” by Guillaume Dumas et al. PNAS



Multi-level development of cognitive skills in an artificial neural network

Several neuronal mechanisms have been proposed to explain the formation of cognitive skills through postnatal interactions with the physical and socio-cultural environment.

Here we introduce a computational model spanning three levels of information processing and cognitive skill acquisition. We propose minimal architectural requirements to build these levels and show how the model’s parameters affect their performance and relationships.

The first, sensorimotor level handles local, unconscious processing, here during a visual classification task. The second, cognitive level globally integrates information from multiple local processors over long-distance connections and synthesizes it in a global, but still unconscious, way. The third and highest cognitive level processes information both globally and consciously; it is based on the global neuronal workspace (GNW) theory and is called the conscious level.

We use trace and delay conditioning tasks to challenge the second and third levels, respectively. The results first emphasize the need for epigenesis, through the selection and stabilization of synapses at both local and global scales, to enable the network to solve the first two tasks.

At the global scale, dopamine appears necessary for proper credit assignment despite the temporal delay between perception and reward. At the third level, the presence of interneurons becomes necessary to maintain a self-sustained representation within the GNW in the absence of sensory input.
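One standard way to bridge a delay between activity and reward, sketched here only as an assumption-laden illustration (the paper may implement credit assignment differently), is an eligibility trace: co-activity leaves a decaying memory at the synapse, and a later dopamine-like reward converts whatever trace remains into a weight change.

```python
# Hypothetical eligibility-trace sketch; the decay constant is made up.
def trace_step(trace, coactivity, decay=0.9):
    return decay * trace + coactivity

trace = 0.0
for t in range(5):
    coactivity = 1.0 if t == 0 else 0.0  # pre/post co-activity only at t = 0
    trace = trace_step(trace, coactivity)

reward = 1.0          # dopamine-like reward arrives 4 steps later
dw = 0.1 * reward * trace  # the surviving trace gates the weight change
```

Even though the reward arrives several steps after the co-activity, the decayed trace still attributes some credit to the synapse that was active earlier.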

Finally, while balanced spontaneous intrinsic activity facilitates epigenesis at both local and global scales, a balanced excitatory/inhibitory ratio enhances performance. We discuss the plausibility of the model in both neuroscience and artificial intelligence terms.
