New 4D imaging technique enhances AI audiovisual analysis

medicalxpress.com

Researchers at Western University have developed a new imaging technique to understand how the brain processes sights and sounds simultaneously. The method, called 4D imaging, adds time as a fourth dimension to traditional three-dimensional brain imaging.

The team, led by computer science professor Yalda Mohsenzadeh and Ph.D. student Yu (Brandon) Hu, recorded how participants responded to 60 video clips paired with sounds, using functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) to analyze brain activity. The results showed that the brain's primary visual cortex responds to both visual and basic auditory inputs, while the primary auditory cortex responds only to sounds.

Mohsenzadeh said this asymmetry in how the brain processes information could improve how artificial intelligence (AI) systems understand audiovisual data. Many current AI programs focus mainly on visual information and struggle to analyze sounds alongside images. The new technique provides a detailed 4D map of brain responses, revealing that the visual system plays a dominant role in processing multimodal information, an insight that could lead to AI models that better mimic the brain's natural processes.

The findings were published in the journal Communications Biology. The research team aims to use this understanding to enhance deep learning models for tasks involving both sound and vision.
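One common way to combine fMRI's spatial detail with EEG's millisecond timing into a space-plus-time ("4D") map is representational similarity fusion: build a dissimilarity matrix over the stimuli from each EEG timepoint and from each fMRI region, then correlate them. The article does not describe the authors' exact pipeline, so the sketch below is only an illustration of that general idea, using synthetic data; the function names (`rdm_upper`, `fusion_map`), region labels, and array sizes are all hypothetical.

```python
# Hedged sketch of fMRI-EEG fusion via representational similarity analysis.
# Synthetic data throughout; not the study's actual code or parameters.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_stimuli = 60  # the study used 60 audiovisual clips

def rdm_upper(patterns):
    """Condensed representational dissimilarity matrix (1 - correlation),
    taking only the upper triangle of the stimulus-by-stimulus matrix."""
    corr = np.corrcoef(patterns)  # patterns: (stimuli, features)
    return 1.0 - corr[np.triu_indices(len(patterns), k=1)]

# Stand-ins: EEG gives time-resolved patterns, fMRI gives per-region patterns.
eeg = rng.normal(size=(100, n_stimuli, 64))        # (timepoints, stimuli, channels)
fmri = {"V1": rng.normal(size=(n_stimuli, 500)),   # (stimuli, voxels) per region
        "A1": rng.normal(size=(n_stimuli, 500))}

def fusion_map(eeg, fmri):
    """For each brain region, correlate its fMRI RDM with the EEG RDM at
    every timepoint, yielding a time course of spatial-temporal match."""
    roi_rdms = {roi: rdm_upper(x) for roi, x in fmri.items()}
    out = {roi: np.empty(eeg.shape[0]) for roi in fmri}
    for t in range(eeg.shape[0]):
        eeg_rdm = rdm_upper(eeg[t])
        for roi, roi_rdm in roi_rdms.items():
            out[roi][t] = spearmanr(eeg_rdm, roi_rdm)[0]
    return out

result = fusion_map(eeg, fmri)  # e.g. result["V1"] is a 100-point time course
```

With real data, peaks in a region's time course would indicate when its spatial representation emerges, which is the kind of space-and-time readout a 4D map provides.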



