- Researchers around the world are training AI to re-create images seen by humans using only their brain waves. Experts say the technology is still in its infancy, but it heralds a new brain-analysis industry.
Singapore (News Agencies): Zijiao Chen can read your mind, with a little help from powerful artificial intelligence and an fMRI machine.
Chen, a doctoral student at the National University of Singapore, is part of a team of researchers that has shown they can decode human brain scans to tell what a person is picturing in their mind, according to a paper released in November.
Their team, made up of researchers from the National University of Singapore, the Chinese University of Hong Kong and Stanford University, did this by using brain scans of participants as they looked at more than 1,000 pictures — a red firetruck, a gray building, a giraffe eating leaves — while inside a functional magnetic resonance imaging machine, or fMRI, which recorded the resulting brain signals over time. The researchers then sent those signals through an AI model to train it to associate certain brain patterns with certain images.
Later, when the subjects were shown new images in the fMRI, the system detected the participant’s brain activity, generated a shorthand description of what it thought those signals corresponded to, and used an AI image generator to produce a best-guess facsimile of the image the participant saw.
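The two-stage process described above can be sketched in miniature. The toy below stands in a simple ridge regression for the study's deep decoding model, with made-up sizes for the fMRI and embedding dimensions; the real pipeline then conditions Stable Diffusion on the predicted embedding, which is only noted in a comment here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the real study used over 1,000 viewed images and
# far higher-dimensional fMRI recordings.
n_images, n_voxels, embed_dim = 300, 100, 16

# Stage 1 training data: simulated fMRI responses X and the embeddings Y
# of the images the participant viewed inside the scanner.
true_map = rng.normal(size=(n_voxels, embed_dim)) / np.sqrt(n_voxels)
X = rng.normal(size=(n_images, n_voxels))
Y = X @ true_map + 0.01 * rng.normal(size=(n_images, embed_dim))

# Fit a ridge regression from brain activity to embedding space
# (a deliberately simple stand-in for the study's learned model).
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)

# Stage 2 (not implemented here): the predicted embedding would condition
# an image generator such as Stable Diffusion to render a best-guess image.
x_new = rng.normal(size=(1, n_voxels))
predicted_embedding = (x_new @ W).ravel()
print(predicted_embedding.shape)  # (16,)
```

With enough training pairs, the fitted map closely recovers the simulated brain-to-embedding relationship, which is the sense in which "training the model on each participant" pays off.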
The results are startling and dreamlike. An image of a house and driveway resulted in a similarly colored amalgam of a bedroom and living room. An ornate stone tower shown to a study participant generated images of a similar tower, with windows situated at unreal angles. A bear became a strange, shaggy, doglike creature.
The generated images matched the attributes (color, shape, etc.) and semantic meaning of the original images roughly 84% of the time.
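Scores like this are commonly computed by comparing embeddings of the original and reconstructed images; the paper's exact metric may differ, but a standard version is two-way identification, where a trial counts as a match if the reconstruction is closer to its true image than to a randomly paired distractor. A minimal sketch, with made-up embedding vectors:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def two_way_identification(recon, target, distractor):
    """True if the reconstruction's embedding is closer to its true
    target image's embedding than to an unrelated distractor's."""
    return cosine(recon, target) > cosine(recon, distractor)

rng = np.random.default_rng(1)
target = rng.normal(size=32)                 # embedding of the shown image
recon = target + 0.3 * rng.normal(size=32)   # noisy reconstruction embedding
distractor = rng.normal(size=32)             # embedding of an unrelated image
print(two_way_identification(recon, target, distractor))  # True
```

Averaging this True/False outcome over many trials and distractors yields an accuracy figure of the kind quoted above.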
While the experiment requires training the model on each individual participant’s brain activity over the course of roughly 20 hours before it can deduce images from fMRI data, researchers believe that in just a decade the technology could be used on anyone, anywhere.
“It might be able to help disabled patients to recover what they see, what they think,” Chen said. In the ideal case, Chen added, humans won’t even have to use cellphones to communicate. “We can just think.”
The results involved only a handful of study subjects, but the findings suggest the team’s noninvasive brain recordings could be a first step toward decoding images more accurately and efficiently from inside the brain.
Researchers have been working on technology to decode brain activity for over a decade. And many AI researchers are currently working on various neuro-related applications of AI, including similar projects such as those from Meta and the University of Texas at Austin to decode speech and language.
University of California, Berkeley scientist Jack Gallant began studying brain decoding over a decade ago using a different algorithm. He said the pace at which this technology develops depends not only on the model used to decode the brain — in this case, the AI — but the brain imaging devices and how much data is available to researchers. Both fMRI machine development and the collection of data pose obstacles to anyone studying brain decoding.
“It’s the same as going to Xerox PARC in the 1970s and saying, ‘Oh, look, we’re all gonna have PCs on our desks,’” Gallant said.
While he could see brain decoding used in the medical field within the next decade, he said using it on the general public is still several decades away.
Even so, it’s the latest in an AI technology boom that has captured the public imagination. AI-generated media, from images and voices to Shakespearean sonnets and term papers, has demonstrated some of the leaps that the technology has made in recent years, especially since so-called transformer models have made it possible to feed vast quantities of data to AI such that it can learn patterns quickly.
The team from the National University of Singapore used image-generating AI software called Stable Diffusion, which has been embraced around the world to produce stylized images of cats, friends, spaceships and just about anything else a person could ask for.
The software allows associate professor Helen Zhao and her colleagues to summarize an image using a vocabulary of color, shape and other variables, and have Stable Diffusion produce an image almost instantly.
The images the system produces are thematically faithful to the original image, but not a photographic match, perhaps because each person’s perception of reality is different, she said.
“When you look at the grass, maybe I will think about the mountains and then you will think about the flowers and other people will think about the river,” Zhao said.
Human imagination, she explained, can cause differences in image output. But the differences may also be a result of the AI, which can spit out distinct images from the same set of inputs.
The AI model is fed visual “tokens” in order to produce images from a person’s brain signals. So instead of a vocabulary of words, it’s given a vocabulary of colors and shapes that come together to create the picture.
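The idea of a visual vocabulary can be illustrated with a toy lookup: given a decoded brain-signal vector, rank a handful of made-up tokens by similarity. The token list and orthogonal embeddings here are invented for illustration; the actual system learns its visual vocabulary rather than hand-coding one.

```python
import numpy as np

# Hypothetical visual vocabulary of color/shape attributes.
vocab = ["red", "gray", "round", "tall", "striped", "leafy"]
token_embeddings = np.eye(len(vocab))  # one orthogonal embedding per token

def nearest_tokens(decoded, k=2):
    """Rank vocabulary tokens by cosine similarity to a decoded vector."""
    sims = token_embeddings @ decoded / (
        np.linalg.norm(token_embeddings, axis=1) * np.linalg.norm(decoded))
    order = np.argsort(-sims)
    return [vocab[i] for i in order[:k]]

# A decoded vector that mixes "red" and "tall" surfaces those two tokens.
decoded = token_embeddings[vocab.index("red")] + token_embeddings[vocab.index("tall")]
print(sorted(nearest_tokens(decoded)))  # ['red', 'tall']
```

In the real pipeline, the selected visual features would then condition the image generator rather than being printed as words.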
But the system has to be arduously trained on a specific person’s brain waves, so it’s a long way from wide deployment.
“The truth is that there is still a lot of room for improvement,” Zhao said. “Basically, you have to enter a scanner and look at thousands of images, then we can actually do the prediction on you.”
It’s not yet possible to bring in strangers off the street to read their minds, “but we’re trying to generalize across subjects in the future,” she said.
Like many recent AI developments, brain-reading technology raises ethical and legal concerns. Some experts say in the wrong hands, the AI model could be used for interrogations or surveillance.
“I think the line is very thin between what could be empowering and oppressive,” said Nita Farahany, a Duke University professor of law and ethics in new technology. “Unless we get out ahead of it, I think we’re more likely to see the oppressive implications of the technology.”
She worries that AI brain decoding could lead to companies commodifying the information or governments abusing it, and described brain-sensing products already on the market, or just about to reach it, that might bring about a world in which we are not just sharing our brain readings but being judged for them.
“This is a world in which not just your brain activity is being collected and your brain state — from attention to focus — is being monitored,” she said, “but people are being hired and fired and promoted based on what their brain metrics show.”
“It’s already going widespread and we need governance and rights in place right now before it becomes something that is truly part of everyone’s everyday lives,” she said.
The researchers in Singapore continue to develop their technology, hoping to first decrease the number of hours a subject will need to spend in an fMRI machine. Then, they’ll scale the number of subjects they test.
“We think it’s possible in the future,” Zhao said. “And with [a larger] amount of data available, a machine learning model will achieve even better performance.”