Large-scale model of the primary visual cortex can accurately solve multiple visual processing tasks

HBP researchers have trained a large-scale model of the primary visual cortex of the mouse to solve visual tasks in a highly robust way. The model provides the basis for a new generation of neural network models. Due to their versatility and energy-efficient processing, these models can contribute to advances in neuromorphic computing.

Modeling the brain can have a major impact on artificial intelligence (AI): because the brain processes images far more energy-efficiently than artificial networks, scientists take inspiration from neuroscience to build neural networks that work more like biological ones and thus consume considerably less energy.

In this way, brain-inspired neural networks are likely to influence future technology by serving as blueprints for visual processing in more energy-efficient neuromorphic hardware. A study by Human Brain Project (HBP) researchers at Graz University of Technology (Austria) has now shown how a large, data-based model can reproduce a number of the brain's visual processing capabilities in a versatile and accurate way. The results were published in the journal Science Advances.

With the help of the PCP Pilot Systems at the Jülich Supercomputing Centre, developed in a collaboration between the HBP and the technology company NVIDIA, the team analysed a biologically detailed large-scale model of the mouse primary visual cortex that can solve multiple visual processing tasks. The model provides the largest integration of anatomical detail and neurophysiological data currently available for cortical area V1, the first cortical region to receive and process visual information.

The model is built with a different architecture from the deep neural networks used in current AI, and the researchers found that it offers clear advantages over commonly used AI models in both learning speed and visual processing performance.

The model solved all five visual tasks presented by the team with high accuracy. These tasks included, for example, classifying images of hand-written digits and detecting visual changes in a long sequence of images. Strikingly, the virtual model matched the brain's high performance even when the researchers subjected it to noise, both in the images and within the network itself, that it had not encountered during training.
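
The general protocol described here, training on clean inputs and then testing under noise never seen during training, can be sketched in a few lines. The snippet below is a hypothetical toy illustration in PyTorch rather than the authors' biologically detailed model: the small feed-forward network, the synthetic digit-like data, and the noise levels are all assumptions chosen only to show how this kind of robustness test is typically set up.

```python
# Toy sketch (assumed setup, not the study's code): train a classifier on
# clean inputs only, then measure accuracy under input noise it never saw.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for an image-classification task (e.g. hand-written digits):
# 28x28 "images" drawn from class-specific templates plus a little jitter.
n_classes, n_train, n_test = 10, 1000, 500
templates = torch.randn(n_classes, 28 * 28)

def make_data(n):
    labels = torch.randint(0, n_classes, (n,))
    images = templates[labels] + 0.3 * torch.randn(n, 28 * 28)
    return images, labels

x_train, y_train = make_data(n_train)
x_test, y_test = make_data(n_test)

# Small feed-forward classifier as a placeholder; the real model is a large,
# biologically detailed network of area V1.
model = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, n_classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training uses clean images only -- no noise augmentation.
for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    opt.step()

# Robustness test: add Gaussian pixel noise at levels unseen during training.
with torch.no_grad():
    for sigma in [0.0, 0.5, 1.0, 2.0]:
        noisy = x_test + sigma * torch.randn_like(x_test)
        acc = (model(noisy).argmax(dim=1) == y_test).float().mean().item()
        print(f"input-noise sigma={sigma:.1f}  test accuracy={acc:.2f}")
```

In the study itself, the perturbations were not limited to the images: noise was also injected into the network's internal activity, a condition the toy sketch above does not cover.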

One reason for the superior robustness of the model – or its ability to cope with errors or unexpected input, such as the noise in the images – is that it reproduces several characteristic coding properties of the brain.

Having developed a unique tool for studying brain-style visual processing and neural coding, the authors describe their new model as providing an "unprecedented window into the dynamics of this brain area".

Journal reference:

Chen, G., et al. (2022) A data-based large-scale model for primary visual cortex enables brain-like robust and versatile visual processing. Science Advances. doi.org/10.1126/sciadv.abq7592.
