Researchers Reconstruct Movies Using Scans of Brain Activity
Image courtesy Shinji Nishimoto.
Mind reading, recording dreams … both the makings of great movies. But what once was science fiction is now one step closer to real science.
A group of researchers from the University of California at Berkeley was able to reconstruct movie clips using scans of their subjects’ brain activity.
The report, published last week in Current Biology along with images that soon went viral online, could open the door to future advances in health and medicine.
“A therapist might have a patient who says, ‘I am feeling horrible and I don’t know why.’ And he could say, ‘Let’s look at the images generated in your brain,’” said Thomas Naselaris, one of the study’s co-authors and a postdoctoral fellow at U.C. Berkeley. “Now I think it’s safe to say that the study we just published is a step in that direction, but we don’t know how big a step it may be. It could be very small or may be very big.”
The report details an arduous process in which subjects were asked to lie completely still for hours at a time in fMRI machines, watching movie previews as the magnet scanned their brains. The scientists then used data from the scans to reconstruct hazy images that could be retraced to the original movie clips.
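The basic idea behind the reconstruction, as the report describes it, is to learn how brain responses relate to what a subject is watching, then run that mapping in reverse. The following is only a loose, hypothetical sketch of that idea using simulated data and a simple linear model; the names, sizes, and method here are illustrative assumptions, not the researchers’ actual pipeline.

```python
# Hypothetical sketch of the decoding idea (NOT the study's actual code):
# 1) learn how stimulus features drive simulated "voxel" responses,
# 2) invert that mapping to reconstruct features from new responses.
import numpy as np

rng = np.random.default_rng(0)

# Simulated "training movie": 200 time points x 10 stimulus features.
features = rng.normal(size=(200, 10))

# Simulated brain: each of 50 voxels responds as a noisy linear
# combination of the stimulus features (an assumed toy model).
weights = rng.normal(size=(10, 50))
responses = features @ weights + 0.1 * rng.normal(size=(200, 50))

# Fit a linear decoder (ordinary least squares) that maps brain
# responses back to stimulus features.
decoder, *_ = np.linalg.lstsq(responses, features, rcond=None)

# Decode a held-out "clip" from its simulated brain activity.
new_features = rng.normal(size=(1, 10))
new_response = new_features @ weights
reconstruction = new_response @ decoder

# With enough training data and low noise, the reconstruction
# approximates the original stimulus features.
error = float(np.max(np.abs(reconstruction - new_features)))
print(error)
```

In the actual study the "features" were derived from moving images and the responses came from hours of fMRI data, which is what makes the real problem so much harder than this toy version.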
One day, this technology could be used in hospitals and nursing homes — perhaps to help doctors understand patients who have lost the ability to communicate through speech or sign language.
Medical advances aside, if the thought of someone eavesdropping on your daydreams makes you nervous, it’s too early to panic. Study co-author Jack Gallant, a U.C. Berkeley neuroscientist, predicts that a portable, more affordable way to perform brain scans will need to be developed before the technology can become widespread. And that, he said, could take decades to hit the market.
We spoke with Gallant and Naselaris about the potential impact — and ethics — of their research. And stay tuned in the weeks ahead for the full report about the study on the NewsHour.
NEWSHOUR: How can this type of technology be used in the future for technological or health advances?
GALLANT: There are a lot of potential uses. In the immediate future, there are no applications. But in the intermediate future, you could imagine using this in medicine. For example, if someone had a stroke, you could use this to determine what damage the stroke had caused, and you might also be able to use it to communicate with the stroke patient if he can’t speak. Also with things like neurodegenerative diseases, you could use this to communicate.
NEWSHOUR: How are the findings applicable to science now?
NASELARIS: I think the scientific importance right now is that it’s a demonstration of the validity of current models of how the visual system works. The model we are using to do this decoding is inspired by decades of research on how the brain responds to moving images. It’s really a recap of current knowledge.
NEWSHOUR: Are there any ethical implications with applying this science? Some people speculate that someday we might be able to record dreams or people’s thoughts.
GALLANT: Right now, to build a dictionary for someone’s brain requires putting them in a magnet for several hours and presumably you need to get consent to do that. In the future, there will be better models of the brain, better and more portable methods for measuring brain activity, and in that time frame, serious ethical issues will need to be addressed.
NASELARIS: I think they are complicated. Let’s say you have a technology that could decode dreams. I think we should approach it as an invasive medical procedure, and there are strict codes of ethics for those. You aren’t going to go in and take a tumor out of a brain unless it benefits the patient, the patient has full knowledge of what’s happening, and you have their consent. There may be other more complicated issues with this particular procedure, assuming that it exists in the future. It should be thought of as an invasive medical procedure and approached in the same way.
NEWSHOUR: How will you build off of this technology in the future?
NASELARIS: One of the most important things to realize is that the model we are using to do reconstructions only captures a small part of the total amount of processing that goes on in the visual system. It deals with how the visual system sees an image in edges and textures. But of course the human visual system can extract much more interesting information from an image than texture; its whole role is to tell you what objects are in an image, to give you a meaningful interpretation. Building new models that can tell us about the higher-level processing going on will be really critical.