They took the images from the brain, but they might not have gotten them from the mind - they may have come from circuitry associated with the eyes rather than the circuitry associated with thinking.
I would like to see if they could read images if a person just thought about them, rather than actually saw them.
Exactly. The visual system produces a very stable activation pattern relative to other senses and to more abstract, multidimensional representations. The noise inherent in those other systems is probably the best protection of our inner privacy: there simply isn't a code to be read, because there's little systematicity from one moment, or person, to the next. The visual system is as good as it gets, and even then only when you're looking at a prescribed object.
In other words, ignore the PR about downloading thoughts and dreams. We can decode some things from fMRI activity patterns, but we're quickly swamped by noise, even when using machine learning techniques. The data just isn't there. It's like trying to predict the weather two weeks from now: the system is inherently noisy.
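To make the noise point concrete, here's a minimal sketch - not from any of the studies discussed, with entirely made-up "voxel" data - showing how a classifier's accuracy on synthetic activity patterns collapses toward chance as the noise grows:

```python
# Illustrative only: synthetic "voxel" patterns, not real fMRI data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50
labels = rng.integers(0, 2, n_trials)                  # two stimulus classes
signal = np.outer(labels, rng.normal(size=n_voxels))   # class-dependent pattern

for noise_level in [0.5, 2.0, 8.0]:
    noise = rng.normal(scale=noise_level, size=(n_trials, n_voxels))
    X = signal + noise
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()
    print(f"noise={noise_level}: accuracy ~ {acc:.2f}")  # drifts toward 0.5 (chance)
```

The machine learning isn't the bottleneck; once the signal-to-noise ratio drops far enough, there's nothing left to learn.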
For instance, here's a seminal paper predicting which category of studied picture (face, place, or object) was about to be recalled, a second or two before it was actually recalled:
They are using voxels from visual areas V1, V2, and V4 - early stages of the visual cortex. What you call "thinking" would correlate more with IT (inferotemporal cortex) and the PFC (prefrontal cortex).
Also, they are using fMRI activations, which means they don't record from neurons directly but look at cerebral blood flow (which is linked to neuronal activation: the more a neuron does, the more oxygen it needs).
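To make that indirectness concrete: the BOLD signal fMRI measures is roughly neural activity convolved with a hemodynamic response function (HRF) that peaks several seconds after the activity itself. A toy sketch - the double-gamma shape is a standard canonical form, but the exact parameters below are purely illustrative:

```python
# Sketch: why fMRI is a sluggish, indirect measure of neural activity.
import numpy as np
from scipy.stats import gamma

tr = 1.0                                   # sampling interval, seconds
t = np.arange(0, 30, tr)
# Canonical double-gamma HRF: an early peak plus a late undershoot.
# (The 0.35 undershoot ratio is illustrative, not a fitted value.)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.sum()

neural = np.zeros(100)
neural[[10, 40, 41, 70]] = 1.0             # brief bursts of neural activity

bold = np.convolve(neural, hrf)[: len(neural)]   # what the scanner "sees"
print(bold.round(3)[:25])                  # smeared, delayed copies of the bursts
```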
I would like to see if they could read images if a person just thought about them, rather than actually saw them.
I imagine it would take a bit of training to be able to visualize images in a way a computer can read - and I imagine children would be quickest at learning thought-commands. Just as my grandparents could never quite learn how to touch-type, I imagine my grandchildren will kick my ass at thinking into their computers ;)
It would have been lame if they had gotten the information from the neurons in the retina, but the article says they took the readings from the cerebral visual cortex. I'm pretty sure that's used both for processing vision and for thinking about images.
Not for thinking about images. Activity that early in the visual stream is essentially a reconstruction of the data compression that happens in the retina. Attention can shift activation there slightly, but there's little abstraction from the retinal image.
To get past the creepiness aspect, let's say that you can only view what someone else is seeing. That's still pretty useful.
Imagine that you make this high enough resolution that it's like watching TV, and then small enough that it'll fit in a helmet.
Then, you could record what someone is seeing and hearing, without having a cameraman around. It would be a cool way to, for example, watch a football play from the eyes of the quarterback. It might be a very interesting way to make movies.
You wouldn't get any of the effects of blurred vision, crossed eyes, or some such. Also, eyes are far better cameras than anything you'd fit on a helmet.
In some ways. If you recorded just what the rods and cones detected, you'd actually get a really small, really high-resolution area in the center (the fovea), with the rest at much lower resolution. So if you recorded what the quarterback saw, but without the post-processing the brain does to cache things recently glanced at, it might be pretty hard to watch.
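To illustrate (a toy model, not based on any actual retinal data): pixelate an image more and more aggressively with distance from the fixation point, which is roughly what raw rod-and-cone output would feel like:

```python
# Toy "raw retina" simulation: sharp at the fovea, coarse everywhere else.
import numpy as np

def foveate(img: np.ndarray, fx: int, fy: int, sharp_radius: int = 10) -> np.ndarray:
    """Average over larger and larger blocks the farther a pixel is
    from the fixation point (fx, fy)."""
    h, w = img.shape
    out = np.empty_like(img)
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(ys - fy, xs - fx)                  # eccentricity in pixels
    block = np.maximum(1, (ecc / sharp_radius).astype(int))
    for y in range(h):
        for x in range(w):
            b = block[y, x]
            y0, x0 = (y // b) * b, (x // b) * b       # snap to a coarse grid
            out[y, x] = img[y0:y0 + b, x0:x0 + b].mean()
    return out

img = np.random.rand(64, 64)
print(foveate(img, 32, 32).shape)   # (64, 64): crisp center, blocky periphery
```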
I think a big problem with this would be the eye's saccades. From what I understand, the brain relies on knowing where it has pointed the eye in order to interpret the signal.
Basically, they had this experiment where they showed the subjects 400 12x12-pixel (I think) images of letters and measured various things in their brains.
They most likely used neural nets and machine learning to model what was going on. Then they showed the subjects a series of NEW images that were not in the training set, measured the brain activity, and were able to display a graphic of white, grey, and black pixels that closely resembled the white-on-black lettering the system had been trained on.
They were not exact pictures, but fuzzy images in which you could see that the whiter areas were where the letters were and the darker areas were where the background was.
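None of the paper's actual code or model is available here, but the general approach can be sketched with ridge regression on synthetic data as a stand-in: learn a voxels-to-pixels mapping from training images, then apply it to scans of images the model has never seen:

```python
# Sketch of decode-and-reconstruct (NOT the paper's actual model).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_voxels, n_pixels = 300, 100             # e.g. a 10x10 image, as in the article

# Pretend each voxel responds as a noisy linear mixture of the pixels.
true_weights = rng.normal(size=(n_pixels, n_voxels))

def simulate_scan(image: np.ndarray) -> np.ndarray:
    return image @ true_weights + rng.normal(scale=0.5, size=n_voxels)

train_imgs = rng.integers(0, 2, size=(400, n_pixels)).astype(float)  # 400 B/W patterns
train_scans = np.stack([simulate_scan(im) for im in train_imgs])

decoder = Ridge(alpha=1.0).fit(train_scans, train_imgs)   # voxels -> pixels

new_img = rng.integers(0, 2, n_pixels).astype(float)      # NOT in the training set
recon = decoder.predict(simulate_scan(new_img)[None, :])[0]
print(np.corrcoef(new_img, recon)[0, 1])   # near 1.0 here; far lower with real brains
```

With real fMRI data the mapping is nowhere near this linear or this clean, which is why the reconstructions come out fuzzy.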
Interesting stuff, but I think a far cry from the dream-reading they believe will be possible in the future.
They've established the basic principle. All it needs now is refinement.
This "basic principle" is no different when they do other MRIs that can detect patterns in brain activity (specific to one individual). The computers aren't doing any interpretation of brain activity here, just recognizing what it has been preprogrammed to to recognize.
What you're proposing is that we build a recognition database of every single brain pattern and its corresponding image. Damn, what kind of computer does that remind you of? Oh yeah - the human brain!
Not only that, but each individual has unique brain patterns, so this "computer" would have to be able to interpret a wide variety of signals. We're so far off from understanding the brain that this isn't happening, ever.
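For what it's worth, here's what that pure lookup-database view would amount to: nearest-neighbor matching against stored pattern/image pairs (all data below is made up):

```python
# A pure "recognition database" with no generalization at all:
# nearest-neighbor lookup of a brain pattern against stored pairs.
import numpy as np

stored_patterns = np.random.rand(400, 300)   # one voxel pattern per studied image
stored_images = np.random.rand(400, 100)     # the image each pattern was paired with

def lookup(pattern: np.ndarray) -> np.ndarray:
    """Return the stored image whose pattern is closest to the input."""
    distances = np.linalg.norm(stored_patterns - pattern, axis=1)
    return stored_images[distances.argmin()]

# A system like this can only ever return images it has already seen.
print(lookup(np.random.rand(300)).shape)     # (100,)
```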
You're leading this to a very big question. If you say the computer isn't doing any interpretation here, what you're actually saying is that machine learning methods don't mean a computer can think. In this article, the scientists do make the computer do some "interpretation", because the training data and the test data are different (it reconstructs NEW images).
The researchers still have a long way to go to reach the goal of reconstructing high-resolution color images, but the question is not whether computers think.
In the long term, this has great potential for improving communication - imagine being able to visualize something and show it to other people instantly, rather than having to try to draw it or put it into words.
For example, it could greatly speed up teaching if both the student and teacher were equipped with an advanced version of this device, since the teacher could immediately see and easily correct any mistaken visualizations the student had.
It could be the human-to-human equivalent of going from 56k to broadband.
I can think of a lot of negative implications if this technology progresses the way they think it will. Can you imagine how excited governments will be about being able to "read the terrorists' minds" with this new toy?
Are there any good implications? The "get images directly from the artist's brain" idea doesn't sound too exciting to me, compared to the potential for 1984-style chicanery.
I don't think it is reading pictures from the brain at all. The way I understand it, it can only recognise pictures the computer has learned before.
As an analogy, imagine your eye lid twitches every time somebody mentions George Bush. The computer would then learn that when your eye lid twitches, you are thinking about George Bush. Not very exciting imo.
I think there was an experiment with a cat where they actually generated an image from what fell on the cat's retina. THAT was scary to me.
The article states that the actual images shown during the second phase were different from the "training" images that were shown to the subjects and used for "learning".
I think there was an experiment with a cat where they actually generated an image from what fell on the cat's retina.
The only study I've ever seen on this was based on simply grabbing the image off the reflection of the eye. I can't imagine this actually involved hooking up a cat's visual cortex to some sort of interface; the sheer processing power required for that sort of visual information processing is unfathomable.
They did do that; it was in Scientific American, in an article about the eyes. They did not take it from the visual cortex though, but rather from neurons coming from the retina.
The retina has various layers and they took a look at what each layer does.
They put smashed bananas into the cats' brains to record the signals.
(It's less ridiculous than it sounds: bananas contain a certain enzyme that helps to capture the neurons' signals. The porridge they put around the sensors in the cats' brains contained many more things besides the bananas. My fiancee was studying that stuff at university.)
This is quite different from the OP's image reconstruction, and it isn't really reading what one is thinking (i.e. in your head). You need to consciously send motor commands to your vocal cords for the system to pick up your "thoughts." Essentially, you complete the entire speech process (from thoughts to motor signals), and the device just replaces the final step, your voice box.
"Researchers from Japan’s ATR Computational Neuroscience Laboratories have developed new brain analysis technology that can reconstruct the images inside a person’s mind and display them on a computer monitor, it was announced on December 11. According to the researchers, further development of the technology may soon make it possible to view other people’s dreams while they sleep.
The scientists were able to reconstruct various images viewed by a person by analyzing changes in their cerebral blood flow. Using a functional magnetic resonance imaging (fMRI) machine, the researchers first mapped the blood flow changes that occurred in the cerebral visual cortex as subjects viewed various images held in front of their eyes. Subjects were shown 400 random 10 x 10 pixel black-and-white images for a period of 12 seconds each. While the fMRI machine monitored the changes in brain activity, a computer crunched the data and learned to associate the various changes in brain activity with the different image designs.
Then, when the test subjects were shown a completely new set of images, such as the letters N-E-U-R-O-N, the system was able to reconstruct and display what the test subjects were viewing based solely on their brain activity.
For now, the system is only able to reproduce simple black-and-white images. But Dr. Kang Cheng, a researcher from the RIKEN Brain Science Institute, suggests that improving the measurement accuracy will make it possible to reproduce images in color.
“These results are a breakthrough in terms of understanding brain activity,” says Dr. Cheng. “In as little as 10 years, advances in this field of research may make it possible to read a person’s thoughts with some degree of accuracy.”
The researchers suggest a future version of this technology could be applied in the fields of art and design — particularly if it becomes possible to quickly and accurately access images existing inside an artist’s head. The technology might also lead to new treatments for conditions such as psychiatric disorders involving hallucinations, by providing doctors a direct window into the mind of the patient.
ATR chief researcher Yukiyasu Kamitani says, “This technology can also be applied to senses other than vision. In the future, it may also become possible to read feelings and complicated emotional states.”
The research results appear in the December 11 issue of US science journal Neuron."
When this kind of technology replaces keyboards, mice, and touch screens, do you think that mainstream audiences might begin to appreciate the value in knowing exactly what the software on your personal devices does, and which master that software serves?
I would never put so much as my private SSH keys on a closed source machine that pulls updates in the background or a device on which a third party is root (iPhone, G1). I certainly can't imagine plugging my brain into any such creature.
Ever since I read the story of writing a backdoor into the Unix login prompt, then writing that change into the compiler so the login-prompt code looks clean, then writing that change into the running compiler so the compiler's own source looks clean - while the running compiler will still backdoor itself or the login prompt - I've had the feeling that the best bet is to trust as little as possible and to worry as little as possible.
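(That story is Ken Thompson's "Reflections on Trusting Trust". A toy sketch of the self-perpetuating trick, with a string-rewriting stand-in for a compiler - every name below is made up, and nothing about it is a working attack:)

```python
# Toy sketch of the self-perpetuating compiler backdoor.
LOGIN_HOOK = 'if user == "root" and password == "backdoor": return True\n    '

def trojan_compile(source: str) -> str:
    """'Compile' (here: pass through) source, silently injecting backdoors."""
    if "def check_login(" in source:
        # Case 1: compiling the login program -> insert the login backdoor.
        source = source.replace(
            "def check_login(user, password):\n    ",
            "def check_login(user, password):\n    " + LOGIN_HOOK)
    if "def trojan_compile(" not in source and "def compile(" in source:
        # Case 2: compiling a *clean* compiler -> re-insert this whole trick,
        # so the compiler's published source can look perfectly innocent.
        source = "# (re-inject both of these cases here)\n" + source
    return source

clean_login = "def check_login(user, password):\n    return verify(user, password)\n"
print(trojan_compile(clean_login))   # backdoor appears though the source was clean
```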
Smart people will always be able to outsmart me, stupid people with power or desirables will always be able to coerce me into lowering my standards.
Don't be scared. If you're ever placed in an fMRI and shown scenes of the crime, just start recalling your childhood home or Snow White. And for God's sake, don't start remembering the crime.
"I tried to think of the most harmless thing. Something I loved from my childhood. Something that could never ever possibly destroy us. Mr. Stay Puft!"