Predicting behavior from image features: Insights from cortical tuning

In a recent study published in Nature Communications, researchers investigate whether the human occipital-temporal cortex (OTC) co-represents the semantic and affective content of visual stimuli to guide behavior.


The neurophysiology of responding to stimuli

Recognizing and responding to emotionally salient stimuli is crucial for evolutionary success, as it supports survival and reproductive behaviors. Adaptive responses vary by context: for example, avoidance strategies differ for a large bear versus a weaker animal, and approach responses differ for infants versus potential mates.

While emotional stimuli activate various brain regions, including the amygdala and OTC, the neural mechanisms that contribute to these behavioral choices remain unclear. Thus, further research is needed to clarify how the integrated representation of semantic and affective features in the OTC translates into specific and context-dependent behavioral responses.

About the study 

The current study protocol was approved by the University of California, Berkeley Committee for the Protection of Human Subjects, and all participants provided informed consent. Data were collected from six healthy adults with a mean age of 24 years and normal or corrected-to-normal vision.

Study participants viewed 1,620 natural images obtained from the International Affective Picture System (IAPS), the Lotus Hill image set, and internet searches; four raters assigned these images to 23 semantic categories.

Each participant completed six functional magnetic resonance imaging (fMRI) sessions, one to obtain retinotopy scans and five for the main task, during which images were projected onto a screen. All images were presented for one second, followed by a three-second interval. Estimation scans involved pseudo-random image presentations with null trials, while validation scans used controlled sequences.

After the scan, study participants rated the valence of each image as negative, neutral, or positive, and the arousal it evoked on a nine-point scale. fMRI data were collected on a 3 Tesla Siemens TIM Trio scanner and pre-processed in MATLAB with Statistical Parametric Mapping version 8 (SPM8), including conversion to Neuroimaging Informatics Technology Initiative (NIfTI) format, time-series cleaning, realignment, and slice-timing correction.

Design matrices were constructed for data modeling, and L2-penalized (ridge) regression was used to estimate feature weights. Model validation used voxel-wise prediction accuracy, whereas principal components analysis (PCA) identified patterns of co-tuning to image features.
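To make this modeling approach concrete, the minimal sketch below illustrates voxel-wise ridge-regression encoding and correlation-based validation in Python with scikit-learn. The study's actual analyses were performed in MATLAB, and all array shapes, names, and the penalty value here are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of voxel-wise ridge-regression encoding and validation.
# Not the authors' MATLAB/SPM pipeline; shapes and names are illustrative.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_test, n_features, n_voxels = 300, 100, 40, 500

# Design matrices: one row per fMRI time point, one column per model feature
X_train = rng.standard_normal((n_train, n_features))
X_test = rng.standard_normal((n_test, n_features))
true_w = rng.standard_normal((n_features, n_voxels))
Y_train = X_train @ true_w + rng.standard_normal((n_train, n_voxels))
Y_test = X_test @ true_w + rng.standard_normal((n_test, n_voxels))

# L2-penalized (ridge) regression estimates one weight per feature per voxel
model = Ridge(alpha=10.0).fit(X_train, Y_train)
Y_pred = model.predict(X_test)

# Voxel-wise prediction accuracy: correlate predicted and observed time-courses
def columnwise_corr(a, b):
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

accuracy = columnwise_corr(Y_pred, Y_test)  # one Pearson r per voxel
print(f"median voxel-wise r = {np.median(accuracy):.2f}")
```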

Study findings 

The current study utilized a multi-feature encoding modeling approach to investigate how natural image semantic and affective features are represented in the brain. The experimental stimuli included 1,620 images varying widely in semantic categories and affective content.

Ridge regression was used to fit multi-feature encoding models to fMRI data acquired as subjects viewed these images. Six subjects each completed fifty fMRI scans over six two-hour sessions, with thirty training scans used for model estimation and twenty test scans for validation.

The Combined Semantic, Valence, and Arousal (CSVA) model described each image using a combination of semantic categories, valence and arousal judgments, and additional compound features. fMRI data from the model estimation runs were concatenated, and ridge regression was used to fit the CSVA model to each subject's blood oxygen level dependent (BOLD) data.
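As a rough illustration of how one row of a CSVA-style design matrix might be assembled, the hypothetical function below combines semantic category indicators with valence and arousal ratings, plus their products as compound (interaction) features. The category names and the exact form of the compound features are assumptions for illustration, not the paper's specification.

```python
# Illustrative construction of a CSVA-style feature vector for one image:
# semantic category indicators, valence/arousal ratings, and their products
# as compound (interaction) features. Category names are hypothetical.
import numpy as np

categories = ["human_face", "animal", "landscape", "food"]  # 23 in the study

def csva_features(category: str, valence: float, arousal: float) -> np.ndarray:
    semantic = np.array([float(category == c) for c in categories])
    affective = np.array([valence, arousal])
    # Compound features: each category indicator scaled by valence and arousal
    compound = np.concatenate([semantic * valence, semantic * arousal])
    return np.concatenate([semantic, affective, compound])

vec = csva_features("animal", valence=-0.8, arousal=0.9)
print(vec.shape)  # (14,) with this toy four-category list
```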

Voxel-wise weights were estimated for each model feature and applied to the values of feature regressors for images viewed during validation scans to generate predicted BOLD time-courses for each voxel. These predicted time-courses were correlated with observed validation BOLD time-courses to obtain estimates of model prediction accuracy.

The CSVA model was found to accurately predict BOLD time-courses across the OTC. Additionally, the model outperformed simpler models containing only semantic features or only valence and arousal features.

Comparison using a bootstrap procedure revealed that the CSVA model outperformed the valence-by-arousal and semantic-only models at both the group and individual levels. The superiority of the CSVA model was particularly apparent in OTC regions with known semantic selectivity, such as the occipital face area (OFA) and fusiform face area (FFA).
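A bootstrap comparison of this kind can be sketched as follows: resample voxels with replacement, recompute the mean accuracy difference between models, and read a confidence interval off the resulting distribution. The resampling scheme and accuracy values below are illustrative stand-ins, not the paper's actual procedure or data.

```python
# Sketch of a bootstrap comparison of two models' voxel-wise accuracies.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 500
acc_csva = rng.normal(0.30, 0.10, n_voxels)      # stand-in accuracy values
acc_semantic = rng.normal(0.25, 0.10, n_voxels)  # stand-in accuracy values

n_boot = 10_000
diffs = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, n_voxels, n_voxels)  # resample voxels with replacement
    diffs[i] = np.mean(acc_csva[idx] - acc_semantic[idx])

ci_low, ci_high = np.percentile(diffs, [2.5, 97.5])
print(f"mean accuracy gain: {diffs.mean():.3f}, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
```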

Variance partitioning showed that many voxels responsive to the full CSVA model maintained significant prediction accuracy when only the variance explained by semantic-category-by-affective-feature interactions was retained. Furthermore, coding stimulus affective features differentially improved model fit for animate versus inanimate stimuli, with a significantly greater increase for animate stimuli.
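The logic of variance partitioning can be illustrated with nested models: the variance uniquely attributable to the interaction features is the drop in explained variance when those features are removed. The toy example below uses in-sample R² and invented feature splits for brevity; a real analysis of this kind would use held-out data.

```python
# Toy variance partitioning: the variance uniquely attributable to the
# interaction features is the drop in R^2 when they are removed.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
n, k_main, k_inter = 400, 10, 5
X_main = rng.standard_normal((n, k_main))    # semantic + affective features
X_inter = rng.standard_normal((n, k_inter))  # interaction features
y = (X_main @ rng.standard_normal(k_main)
     + X_inter @ rng.standard_normal(k_inter)
     + rng.standard_normal(n))

X_full = np.hstack([X_main, X_inter])
r2_full = r2_score(y, Ridge(alpha=1.0).fit(X_full, y).predict(X_full))
r2_main = r2_score(y, Ridge(alpha=1.0).fit(X_main, y).predict(X_main))

print(f"variance unique to interactions: {r2_full - r2_main:.3f}")
```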

PCA of the CSVA model feature weights revealed consistent patterns of OTC tuning to stimulus animacy, valence, and arousal across subjects. The top three principal components (PCs) accounted for significantly more variance than stimulus features alone, and their structure was consistent across subjects. These PCs captured dimensions including stimulus animacy, arousal, and valence, with spatial transitions in tuning delineating distinct, selectively responding cortical patches in each subject.
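In code, this step corresponds to running PCA on the voxel-by-feature weight matrix, so each voxel receives a score along each shared tuning dimension. The synthetic weights below stand in for the fitted CSVA weights.

```python
# Sketch of PCA on the voxel-by-feature weight matrix to find shared
# tuning dimensions (e.g., animacy, arousal, valence). Data are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_voxels, n_features = 500, 40
weights = rng.standard_normal((n_voxels, n_features))  # ridge weights per voxel

pca = PCA(n_components=3).fit(weights)
scores = pca.transform(weights)       # each voxel's position along each PC
print(pca.explained_variance_ratio_)  # variance captured by the top 3 PCs
```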

OTC tuning to the affective and semantic features of emotional images predicted associated behavioral responses, explaining more variance in behavior than low-level image structure or simpler models did.
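One hedged way to picture this analysis: project each image into the recovered tuning space, regress behavioral ratings on those coordinates, and compare cross-validated variance explained against a low-level image baseline. All data and feature choices below are simulated assumptions, not the study's measurements or its exact regression setup.

```python
# Sketch: predict a behavioral rating per image from its position in the
# tuning space, versus a low-level baseline. Everything here is simulated.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_images = 1620
tuning_proj = rng.standard_normal((n_images, 3))  # images in PC tuning space
lowlevel = rng.standard_normal((n_images, 8))     # e.g., hypothetical Gabor energy
behavior = tuning_proj @ np.array([1.0, -0.5, 0.3]) + rng.standard_normal(n_images)

r2_tuning = cross_val_score(LinearRegression(), tuning_proj, behavior,
                            cv=5, scoring="r2").mean()
r2_lowlevel = cross_val_score(LinearRegression(), lowlevel, behavior,
                              cv=5, scoring="r2").mean()
print(f"tuning R^2: {r2_tuning:.2f} vs low-level R^2: {r2_lowlevel:.2f}")
```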

Conclusions 

Using voxel-wise modeling of fMRI data from subjects viewing over 1,600 emotional images, the researchers found that many OTC voxels represented both semantic categories and affective value, especially for animate stimuli. A separate group of raters identified the behaviors best suited to each image.

Regression analyses showed that OTC tuning to these combined features predicted behaviors better than tuning to either feature alone or low-level image structures, thus suggesting that OTC efficiently processes behaviorally relevant information.

Journal reference:
  • Abdel-Ghaffar, S.A., Huth, A.G., Lescroart, M.D., et al. (2024). Occipital-temporal cortical tuning to semantic and affective features of natural images predicts associated behavioral responses. Nature Communications. doi:10.1038/s41467-024-49073-8

Written by

Vijay Kumar Malesu

Vijay holds a Ph.D. in Biotechnology and possesses a deep passion for microbiology. His academic journey has allowed him to delve deeper into understanding the intricate world of microorganisms. Through his research and studies, he has gained expertise in various aspects of microbiology, which includes microbial genetics, microbial physiology, and microbial ecology. Vijay has six years of scientific research experience at renowned research institutes such as the Indian Council for Agricultural Research and KIIT University. He has worked on diverse projects in microbiology, biopolymers, and drug delivery. His contributions to these areas have provided him with a comprehensive understanding of the subject matter and the ability to tackle complex research challenges.    
