A year and a half ago, we published a great feature on the current state of the quest to read the human mind. It included some then in-progress work from Jack Gallant, a neuroscientist at UC Berkeley, in which Gallant was attempting to reconstruct a video by reading the brain scans of someone who watched that video, essentially pulling experiences directly from someone's brain. Now, Gallant and his team have published a paper on the subject in the journal Current Biology.
This is the first taste we've gotten of what the study actually produces. Here's a video of the reconstruction in action:
The reconstruction (on the right, obviously) was, according to Gallant, "obtained using only each subject's brain activity and a library of 18 million seconds of random YouTube video that did not include the movies used as stimuli. Brain activity was sampled every one second, and each one-second section of the viewed movie was reconstructed separately."
Don't forget to check out our original feature on this work for some more background into what the researchers would really prefer we call "brain decoding" rather than "mind-reading."
Watching those clips brings to mind the Animus from Assassin's Creed.
Why learn from your own mistakes when you could learn from the mistakes of others?
When can I start recording my dreams?
What's funny to me is how Steve Martin had a collar but it looked like he was in a t-shirt, the woman had no collar but looked like she had one, and the parrot looks like Heath Ledger as the Joker for a second, lol. And the writing looked like it was in Japanese or an alien language.
The potentially sad part is that they may have reconstructed the entire picture from the person's brain, but the person's brain doesn't need a perfectly vivid image to remember what it thinks it remembers.
To the brain of the person looking at Steve Martin, that may be all the information it needs to remember that scene.
Other than that, this is pretty amazing.
It's kind of like a dream in reverse. Like random firings of your brain make the image on the right and your unconscious mind says, "Hey! It's Steve Martin!" then it creates the dark blur across the middle of your vision and you think/dream "Hey! Elephants! No... It's an airplane!"
I want to see this tech with a bigger library!
Bored? Let's go mine the stars... ^^
...coming to a courtroom near you...
What's with the words embedded in the video on the right, especially during the elephant? Probably added afterward? They don't seem relevant to the video.
@ultratrollbot9: As per the article: "obtained using only each subject's brain activity and a library of 18 million seconds of random YouTube video that did not include the movies used as stimuli."
You're seeing a mixture of YouTube clips blended together that fit with what the person was seeing.
I'm trying to really get a handle on what is going on here and where the videos come in, and this is what I think it's saying. 18 million seconds of video is just a more impressive way of saying 5,000 hours of video. So you take this video library, stick one or more (most likely many more) people in a machine, and image their brains as they watch it. The hope is to find a mapping from brain-scan slices to the video-image database with some kind of fancy algorithm.
Now, after you create this database, you put a test subject in the scanner, take the scans that come off of it, run them through the algorithm to find a match, and then show that matched video image as the reconstruction.
So in fact you need two things for this to look like a "match" to us. First, there have to be enough shapes and whatnot in the 5,000 hours of video that the system can even produce something resembling the original. Second, the brain-scan-to-image matching algorithm has to actually find that correct image.
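If that reading is right, the pipeline is roughly: predict what brain response each library clip would evoke, rank clips by how well that prediction matches the observed scan, and average the best matches. Here's a toy sketch of that idea in Python. Everything here is made up for illustration — the sizes, the random "encoding model," and the `rank_clips` helper are all hypothetical stand-ins, nothing like the actual study's data or methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes, nothing like the real study's 18 million seconds of video.
n_clips, n_voxels, n_pixels = 500, 50, 64
library_clips = rng.standard_normal((n_clips, n_pixels))    # stand-in "video frames"
encoding_model = rng.standard_normal((n_pixels, n_voxels))  # clip features -> scan
predicted_activity = library_clips @ encoding_model          # predicted response per clip

def rank_clips(observed_activity):
    """Rank library clips by how well their predicted brain response
    correlates with the observed scan (best match first)."""
    scores = np.array([np.corrcoef(pred, observed_activity)[0, 1]
                       for pred in predicted_activity])
    return np.argsort(scores)[::-1]

# Simulate a subject viewing library clip 42: scan = predicted response + noise.
true_clip = 42
observed = predicted_activity[true_clip] + 0.1 * rng.standard_normal(n_voxels)

ranking = rank_clips(observed)
reconstruction = library_clips[ranking[:10]].mean(axis=0)  # blurry average of matches
```

Averaging the top matches like this would also explain why the reconstructions look like ghostly blends of many clips rather than crisp frames.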
It is a pretty interesting idea, because it says that what you see in the video has nothing really to do with what the "brain sees." It is simply saying that some pattern in one brain, when repeated in another person's brain, means the same thing.
What this says is that it takes a lot of "creative manipulation" and "computer enhancement" to make moving pictures out of random brain scans.
If this is actually true science, the subject matter being "recorded" is being recorded from a transmission point and not a storage point. Like a network sniffer captures packets on the way to the computer, this may be picking up information moving along the nervous system's Ethernet cable. It is still not reading the brain-computer's "hard drive".
-Just a 'bot that is hot in a 'lectronic world.
Interesting question. It might be either a "transmission" or it might be short term memory. Highly unlikely that it has anything to do with long term memory.
It would be interesting to see if they could do the same with people just thinking about something, or if it requires the person actually seeing it. That would help indicate whether it is a transmission or a "thought." If it worked for subjects merely thinking about something, then you could use it to control things: think "push button," and the computer pushes the button.
So this means we now not only can talk to the animals but listen too?!
Just an idea!!
Now, what would make this worthy of news is if he could take the brain-scan algorithms and convert them to an actual image, even a half-assed image. Also, seeing someone take the patterns of a subject's thoughts and match them to audio frequencies would be awesome.
@mrwright85: You hit the nail on the head, sir. When this technology is perfected, I can totally see authorities using it for criminal identification and crime scene recreation. It's a slippery slope into a "Minority Report" type scenario where you will be held accountable not only for your actions, but for your thoughts as well.
I get the impression that what we see in the mind is a combination of what we see, what we are thinking about, and what those things may be associated with, all at the same time, with random spontaneous images getting in the way too.
Now, if we get a clear picture of someone's brain, does that mean this person's brain is more in sync with our reading device, or that they are a slow thinker, or a more focused individual?
I think a long-term database of people looking at the same thing, cross-referenced across various people, should bring interesting results. Plus, the images may become clearer as the technique develops in sync and sensitivity with our minds.
We may one day actually see the mind of a mentally ill patient versus the average healthy working mind.
We may actually one day recognize certain types of mental illness this way, and see it used in better treatments.
This could develop into a wonderful diagnostic tool!
I don't think you could read either short-term or long-term storage; the brain's memory acts as a distributed memory device, spreading information over a large area. You would be hard pressed to collect that information and sort it out into a reasonable image. Doing the data tap/packet capture approach would be best, since the information is still in a uniform pattern there.
Looks really interesting and my guess is that this science has plenty of potential even in the short term. Hopefully other groups will jump into the research. I like the idea of streaming one's vision to storage for future review, perhaps with sound and other sensory data included. I think quite a few films have dealt with this. "Strange Days" comes to mind.
When it came to people (body, face, etc.), it looks in the video like we are trying to associate them with other people we may have seen.
And with the writing, it seems we are comparing it with other stuff, mostly words, but it looks like also scenes in time, which, if you ask me, makes a lot of sense.