Hacker News

They took the images from the brain, but they might not have gotten them from the mind: they may have come from the circuitry associated with the eyes, rather than the circuitry associated with thinking.

I would like to see if they could read images if a person just thought about them, rather than actually saw them.




Exactly. The visual system produces a very stable activation pattern, relative to other senses and multidimensional objects. The noise inherent in those other systems is probably the best protection of our inner privacy. There simply isn't a code to be read because there's little systematicity from one moment, or person, to the next. The visual system is as good as it gets and even then only when you're looking at a prescribed object.

In other words, ignore the PR about downloading thoughts and dreams. We can decode some stuff from fMRI activity patterns but we're soon swamped by noise, even when using machine learning techniques. The data just isn't there. It's like trying to predict the weather two weeks from now. The system is inherently noisy.
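The kind of pattern decoding being discussed can be sketched with a toy example. Everything here is made up for illustration: synthetic "voxel" templates stand in for category-specific activation patterns, and a simple correlation-based classifier (in the spirit of MVPA studies) guesses the category. The point is how quickly accuracy collapses as noise grows relative to the signal:

```python
import random

random.seed(0)

N_VOXELS = 50

def pattern(template, noise):
    """One trial: the category's template pattern plus Gaussian noise."""
    return [t + random.gauss(0, noise) for t in template]

def correlate(a, b):
    """Pearson correlation between two voxel patterns."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def decode_accuracy(noise, n_trials=100):
    # Hypothetical 'face' and 'place' templates: stable voxel activations.
    templates = {c: [random.gauss(0, 1) for _ in range(N_VOXELS)]
                 for c in ("face", "place")}
    correct = 0
    for _ in range(n_trials):
        truth = random.choice(list(templates))
        trial = pattern(templates[truth], noise)
        # Classify by which template the trial pattern correlates with best.
        guess = max(templates, key=lambda c: correlate(trial, templates[c]))
        correct += guess == truth
    return correct / n_trials
```

With low noise, `decode_accuracy(0.5)` is near perfect; with noise ten times the signal, `decode_accuracy(10.0)` falls toward chance. That is the regime fMRI decoding lives in once you move past coarse categories.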


For instance, here's a seminal paper predicting which category of studied pictures (faces, places, or objects) was about to be recalled, a second or two before it was actually recalled:

http://www.sciencemag.org/cgi/reprint/310/5756/1963.pdf?maxt...

Still, that's a big difference from predicting (or mindreading) the particular item.

And here's a good review of that and similar work:

http://polyn.com/struct/NormEtal06_TICS.pdf


Do you have any publications of your research online?


Been meaning to put a home page together.

Here are two empirical reports: http://www.jneurosci.org/cgi/content/full/26/18/4917 http://www.jneurosci.org/cgi/content/full/27/14/3790

And here's a review: http://www.psych.upenn.edu/stslab/assets/pdf/TS_CONB05.pdf

Happy to answer questions at grob AT mass inst tech


I knew you'd say that.


They are using voxels from visual areas V1, V2, and V4, which are early levels in the visual regions of the cortex. What you call "thinking" would correlate with activity in IT (inferotemporal cortex) and PFC (prefrontal cortex).

This, while not directly relevant, might be helpful: http://www.scholarpedia.org/article/What_and_where_pathways

Also, they are using fMRI activations, which means they don't record from neurons directly but instead measure cerebral blood flow (which is coupled to neuronal activation: the more a neuron does, the more oxygen it needs).
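As a rough sketch of what that coupling implies for timing: the measured BOLD signal is, to first approximation, neural activity convolved with a slow hemodynamic response that peaks several seconds after the event. The response function below is a toy gamma-shaped curve, not the actual HRF from any analysis package:

```python
import math

def hrf(t):
    """Toy gamma-shaped hemodynamic response; peaks at t = 5 s."""
    return (t ** 5) * math.exp(-t) / 120.0

TR = 1.0  # one sample per second
neural = [0.0] * 30
neural[3] = 1.0  # a brief burst of neuronal activity at t = 3 s

# BOLD signal = neural activity convolved with the hemodynamic response
bold = [sum(neural[j] * hrf((i - j) * TR) for j in range(i + 1))
        for i in range(len(neural))]

peak = max(range(len(bold)), key=lambda i: bold[i])
# The measured signal peaks at t = 8 s, about 5 s after the neural burst.
```

That multi-second lag and smearing is one reason fMRI is good at localizing activity but poor at resolving fast neural events.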


I would like to see if they could read images if a person just thought about them, rather than actually saw them.

I imagine it would take a bit of training to visualize images in ways a computer can read - children would probably be quickest to pick up thought-commands. Just as my grandparents could never quite learn to touch-type, I imagine my grandchildren will kick my ass at thinking into their computers ;)


It would have been lame if they had gotten the information from the neurons in the retina, but the article says they took the readings from the cerebral visual cortex. I'm pretty sure that it's used both for processing vision and thinking about images.


Not thinking about. Activity that early in the visual stream is a reconstruction of the data compression that happens in the retina. Attention can shift activation there slightly, but there's little abstraction from the retinal image.


It's still pretty impressive, and puts us one step closer to my dream of having a camera with me wherever I go, ready at a moment's notice.


Some studies show that the visual cortex is activated by mental imagery too (see, e.g.: http://www.nature.com/nature/journal/v378/n6556/abs/378496a0... ), so it stands to reason that decoding mental images could be possible.



