I'll bite. Tell us, concretely, what is to be gained from a biological approach.
Honestly I imagine we'd find more out from philosophers helping to spec out what a sentient mind actually is than we would from having biologists trying to explain imperfect implementations of the mechanisms of thought.
I'm short on time, so please forgive my rushed answer.
It will deliver on all of the failed promises of past AI techniques: creative machines that actually understand language and the world around them. The "hard" AI problems of vision and commonsense reasoning will become "easy". You won't need to program into a computer the logic that all people have hands or that eyes and noses are on faces. These machines will gain that experience and learn about our world for themselves, just like their biological equivalents, children.
Here's some more food for thought from Jeff Hawkins:
"John Searle, an influential philosophy professor at the University of California at Berkeley, was at that time saying that computers were not, and could not be,
intelligent. To prove it, in 1980 he came up with a thought experiment called the Chinese Room. It goes like this:
Suppose you have a room with a slot in one wall, and inside is an English-speaking person sitting at a desk. He has a big book of instructions and all the pencils and scratch paper he could ever need. Flipping through the book, he sees that the instructions, written in English, dictate ways to manipulate, sort, and compare Chinese characters. Mind you, the directions say nothing about the meanings of the Chinese characters; they only deal with how the characters are to be copied, erased, reordered, transcribed, and so forth.
Someone outside the room slips a piece of paper through the slot. On it is written a story and questions about the story, all in Chinese. The man inside doesn't speak or read a word of Chinese, but he picks up the paper and goes to work with the rulebook. He toils and toils, rotely following the instructions in the book. At times the instructions tell him to write characters on scrap paper, and at other times to move and erase characters. Applying rule after rule, writing and erasing characters, the man works until the book's instructions tell him he is done. When he is finished at last he has written a new page of characters, which unbeknownst to him are the answers to the questions. The book tells him to pass his paper back through the slot. He does it, and wonders what this whole tedious exercise has been about.
Outside, a Chinese speaker reads the page. The answers are all correct, she notes, even insightful. If she is asked whether those answers came from an intelligent mind that had understood the story, she will definitely say yes. But can she be right? Who understood the story? It wasn't the fellow inside, certainly; he is ignorant of Chinese and has no idea what the story was about. It wasn't the book, which is just, well, a book, sitting inertly on the writing desk amid piles of paper.
So where did the understanding occur? Searle's answer is that no understanding did occur; it was just a bunch of mindless page flipping and pencil scratching. And now the bait-and-switch: the Chinese Room is exactly analogous to a digital computer. The person is the CPU, mindlessly executing instructions, the book is the software program feeding instructions to the CPU, and the scratch paper is the memory. Thus, no matter how cleverly a computer is designed to simulate intelligence by producing the same behavior as a human, it has no understanding and it is not intelligent. (Searle made it clear he didn't know what intelligence is; he was only saying that whatever it is, computers don't have it.)
This argument created a huge row among philosophers and AI pundits. It spawned hundreds of articles, plus more than a little vitriol and bad blood. AI defenders came up with dozens of counterarguments to Searle, such as claiming that although none of the room's component parts understood Chinese, the entire room as a whole did, or that the person in the room really did understand Chinese, but just didn't know it.

As for me, I think Searle had it right. When I thought through the Chinese Room argument and when I thought about how computers worked, I didn't see understanding happening anywhere. I was convinced we needed to understand what "understanding" is, a way to define it that would make it clear when a system was intelligent and when it wasn't, when it understands Chinese and when it doesn't. Its behavior doesn't tell us this.
A human doesn't need to "do" anything to understand a story. I can read a story quietly, and although I have no overt behavior my understanding and comprehension are clear, at least to me. You, on the other hand, cannot tell from my quiet behavior whether I understand the story or not, or even if I know the language the story is written in. You might later ask me questions to see if I did, but my understanding occurred when I read the story, not just when I answer your questions. A thesis of this book is that understanding cannot be measured by external behavior; as we'll see in the coming chapters, it is instead an internal metric of how the brain remembers things and uses its memories to make predictions. The Chinese Room, Deep Blue, and most computer programs don't have anything akin to this. They don't understand what they are doing. The only way we can judge whether a computer is intelligent is by its output, or behavior."
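To make the memory-and-prediction idea a little more concrete, here is a deliberately crude sketch. It is not Numenta's HTM or anything from the book, just a hand-rolled first-order sequence memory I'm inventing for illustration, to show the smallest possible version of "remember what you've seen, use it to predict what comes next":

    from collections import Counter, defaultdict

    class SequenceMemory:
        """Toy memory-prediction loop: remember which token tends to follow
        which, then predict the most frequent successor of the current token.
        (A stand-in for the general idea, not Numenta's actual algorithm.)"""

        def __init__(self):
            # Memory is just transition counts: "what followed what, how often".
            self.transitions = defaultdict(Counter)

        def observe(self, sequence):
            for current, nxt in zip(sequence, sequence[1:]):
                self.transitions[current][nxt] += 1

        def predict(self, current):
            # Prediction is recall: the most common successor seen so far.
            followers = self.transitions.get(current)
            if not followers:
                return None  # nothing remembered, nothing predicted
            return followers.most_common(1)[0][0]

    memory = SequenceMemory()
    memory.observe(["eyes", "nose", "mouth", "chin"])
    memory.observe(["eyes", "nose", "mouth", "jaw"])
    print(memory.predict("nose"))   # -> mouth: the expectation comes from memory
    print(memory.predict("elbow"))  # -> None: no stored memory, no prediction

Obviously the real proposal involves far richer, hierarchical memory; the sketch is only meant to show the shape of "understanding as internal prediction" rather than as external behavior.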
First, I don't feel this answers angersock's question concerning concrete applications of cognitive neuroscience to artificial intelligence.
Second, despite running into it time and again over the years, Searle's Chinese room argument still does not much impress me. It seems to me clear that the setup just hides the difficulty and complexity of understanding in the magical lookup table of the book. Since you've probably encountered this sort of response, as well as the analogy from the Chinese room back to the human brain itself, I'm curious what you find useful and compelling in Searle's argument.
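To spell out what I mean by that, here's a toy version of the room in plain Python (the strings and the tiny rule table are invented for illustration). The operator only compares and copies symbols, exactly like a CPU shuffling bit patterns; every question the room can answer "insightfully" is one that whoever authored the rule book already had to understand:

    # Toy Chinese Room: the operator just matches symbols against a rule book.
    # All of the apparent understanding lives with whoever compiled RULE_BOOK.
    RULE_BOOK = {
        # (story fragment, question) -> answer, treated as opaque symbol strings
        ("小明去了商店", "小明去了哪里?"): "他去了商店。",
        ("小明买了苹果", "小明买了什么?"): "他买了苹果。",
    }

    def operator(story, question):
        """Follow the book: compare characters for equality, copy out the
        prescribed answer. No interpretation of the symbols ever happens."""
        for (known_story, known_question), answer in RULE_BOOK.items():
            if known_story in story and known_question == question:
                return answer
        return "无可奉告"  # the book's fallback reply, also just copied symbols

    # Correct-looking answers come out of the slot, yet nothing here
    # "understands" Chinese; the hard part was writing RULE_BOOK.
    print(operator("小明去了商店,然后回家了", "小明去了哪里?"))

Scaling that book up until it handles arbitrary stories is where all the interesting difficulty lives, and the thought experiment simply assumes that part away.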
I remain interested in biological approaches to cognition and the potential for insights from brain modelling, but I don't see how it's useful to disparage mathematical and statistical approaches, especially without concrete feats to back up the criticism.
Traditional AI has had half a century of failed promises. Jeff's Numenta had a major shakeup over this very topic and has only been working on biologically inspired AI for the past three years. Kurzweil has also only recently come around. Comparing Grok to Watson is like putting a yellow belt up against Bruce Lee. Give it some time to catch up.
In university I witnessed firsthand the institutional discrimination against biologically inspired neural nets. My original point was that Google could use the fresh blood and ideas.