What attributes does a system need for you to accept its comparison to a brain/neuron?
Without defining what's essential, I'm nervous to call the comparison insufficient. If a topological subset of neurons isn't good enough, what do we need in addition or instead? If we stuff NNs full of complicated (how complicated?) activation functions, does that new system do the trick? Or add... 47 new "neuron" variants? Or swap the learning scheme from gradient descent to something fancier? (For that matter, do we even know what the brain's scheme is, and why gradient descent/backprop isn't an acceptable, if extremely crude, approximation of it?)
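For concreteness, the "neuron" in a standard NN boils down to very little: a weighted sum of inputs pushed through one scalar activation function. Here's a minimal sketch in Python/NumPy; all names are hypothetical and for illustration only, and the biological analogies in the comments are loose at best.

```python
import numpy as np

def artificial_neuron(x, w, b, activation):
    """One standard artificial neuron: activation(w . x + b)."""
    return activation(np.dot(w, x) + b)

# Swapping in a "more complicated" activation is a one-line change,
# which is part of why it's unclear that activation choice alone
# could close the gap being asked about.
activations = {
    "relu":    lambda z: np.maximum(0.0, z),
    "tanh":    np.tanh,
    "sigmoid": lambda z: 1.0 / (1.0 + np.exp(-z)),
}

x = np.array([0.5, -1.2, 3.0])   # inputs (loosely: presynaptic signals)
w = np.array([0.1, 0.4, -0.2])   # learned weights (loosely: synaptic strengths)
b = 0.05                         # learned bias

for name, f in activations.items():
    print(name, artificial_neuron(x, w, b, f))
```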
The brain is so unimaginably intricate. Our models are hilariously simple in contrast, of course. But which of those mismatches are differences in kind, and which are merely differences in magnitude?
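The learning scheme is similarly compact. Below is a sketch of one plain gradient-descent step on the neuron above, assuming a squared-error loss; again this is illustrative, not a claim about how the brain learns.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # input
w = np.array([0.1, 0.4, -0.2])   # current weights
b, target, lr = 0.05, 1.0, 0.1   # bias, desired output, learning rate

y = sigmoid(np.dot(w, x) + b)    # forward pass
error = y - target               # dL/dy for L = 0.5 * (y - target)^2
delta = error * y * (1.0 - y)    # chain rule through sigmoid: dL/dz
w -= lr * delta * x              # weight update: dL/dw = dL/dz * x
b -= lr * delta                  # bias update:   dL/db = dL/dz

print("updated weights:", w, "updated bias:", b)
```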