A black-and-white movie has been extracted almost perfectly from the brain signals of mice using an artificial intelligence tool.
Mackenzie Mathis at the Swiss Federal Institute of Technology Lausanne and her colleagues examined brain activity data from around 50 mice while they watched a 30-second movie clip nine times. The researchers then trained an AI to link this data to the 600-frame clip, in which a man runs to a car and opens its trunk.
The data had previously been collected by other researchers, who inserted metal probes that record electrical pulses from neurons into the mice’s primary visual cortices, the brain region involved in processing visual information. Some of the brain activity data was also collected by imaging the mice’s brains with a microscope.
Next, Mathis and her team tested the ability of their trained AI to predict the order of frames within the clip using brain activity data that was collected from the mice as they watched the movie for the tenth time.
This revealed that the AI could predict the correct frame within one second 95 per cent of the time.
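That accuracy metric – counting a prediction as correct if it lands within one second of the true frame – can be sketched as follows. This is an illustrative reconstruction, not the team’s actual code: the frame rate of 20 frames per second is inferred from the 600-frame, 30-second clip described above, and the function name is hypothetical.

```python
import numpy as np

# Inferred from the article: a 600-frame clip lasting 30 seconds
# gives 20 frames per second, so "within one second" means
# within 20 frames of the true frame.
FPS = 600 / 30

def accuracy_within_one_second(true_frames, predicted_frames, fps=FPS):
    """Fraction of predictions within one second of the true frame."""
    true_frames = np.asarray(true_frames)
    predicted_frames = np.asarray(predicted_frames)
    hits = np.abs(predicted_frames - true_frames) <= fps
    return hits.mean()

# Toy usage with made-up predictions that are at most 5 frames off:
true_f = np.arange(600)
pred_f = true_f + np.random.randint(-5, 6, size=600)
print(accuracy_within_one_second(true_f, pred_f))  # all within 20 frames -> 1.0
```

On this metric, a prediction can miss the exact frame yet still count as correct, which is why accuracy is reported "within one second" rather than frame-perfect.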
Other AI tools that are designed to reconstruct images from brain signals work better when they are trained on brain data from the individual mouse they are making predictions for.
To test whether the same held for their AI, the researchers also trained it on brain data from individual mice. Trained this way, it predicted the movie frames being watched with an accuracy of between 50 and 75 per cent.
“Training the AI on data from multiple animals actually makes the predictions more robust, so you don’t need to train the AI on data from specific individuals for it to work for them,” says Mathis.
By uncovering links between brain activity patterns and visual inputs, the tool could eventually point to ways of generating visual sensations in people who are visually impaired, says Mathis.
“You can imagine a scenario where you might actually want to help someone who is visually impaired see the world in interesting ways by playing in neural activity that would give them that sensation of vision,” she says.
This advance could be a useful tool for understanding the neural codes that underlie our behaviour, and it should be applicable to human data, says Shinji Nishimoto at Osaka University, Japan.