In wet-brains:

Interlacing isn't 4-way or 6-way, it's ~10^3-way, and each interlaced connection has a weight that's nonlinearly time-dependent, based on how long it's been since the last firing.
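
To make "nonlinearly time-dependent" concrete, here's a toy sketch. The exponential-recovery form and the tau constant are illustrative assumptions, not a model of real synapses: the effective weight is suppressed right after the presynaptic cell fires and recovers with time, whereas a conventional ANN weight is just a fixed number.

    import numpy as np

    # Toy sketch: a connection whose effective weight depends nonlinearly on
    # the time since the presynaptic neuron last fired. The exponential form
    # and tau are made-up stand-ins, purely for illustration.
    def effective_weight(w_max, dt_since_last_spike, tau=5.0):
        return w_max * (1.0 - np.exp(-dt_since_last_spike / tau))

    static_ann_weight = 0.8  # a conventional ANN weight never varies with time

    for dt in (0.5, 2.0, 10.0, 50.0):  # ms since the last presynaptic spike
        print(f"dt={dt:5.1f}  wet~{effective_weight(0.8, dt):.3f}  ann={static_ann_weight}")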

Every cyclic connection is potentially a self-sustaining oscillator.

None of these features are efficiently implemented in current silicon.

"Caution when comparing neural networks to brains" is underselling it. They're profoundly different kinds of network; nobody is building (or even publicly planning) silicon that has that kind of interconnect breadth.




What do you mean? Any image classifier will use way more than 4 kernels for convolution. All those layers are interlaced. They also contain fully connected layers, with each neuron integrating way more than 10^3 signals.

The reason there aren't many more fully connected layers is that it doesn't work. In fact, many of the key developments in NNs have been architectural: max pooling, ReLUs, U-Net. All are key for modern networks, and all are architectural.
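
For reference, a minimal PyTorch sketch of the kind of classifier being described (arbitrary example sizes, not any particular model): dozens of convolution kernels per layer, ReLUs, max pooling, and a fully connected layer whose neurons each integrate far more than 10^3 inputs.

    import torch
    import torch.nn as nn

    # Illustrative only: a small conventional image classifier.
    model = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=3, padding=1),    # 64 kernels, not 4
        nn.ReLU(),
        nn.MaxPool2d(2),                               # 32x32 -> 16x16
        nn.Conv2d(64, 128, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),                               # 16x16 -> 8x8
        nn.Flatten(),
        nn.Linear(128 * 8 * 8, 256),   # each of these 256 neurons has fan-in 8192
        nn.ReLU(),
        nn.Linear(256, 10),
    )

    x = torch.randn(1, 3, 32, 32)      # one 32x32 RGB image
    print(model(x).shape)              # torch.Size([1, 10])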


Sorry, that was hastily written & could have been clearer.

It is, of course, very common to perform (eg) convolution with a larger kernel size, or to use a dense layer.

However, unlike wet-neurons:

* Convolution has the same local shape for each cell.

* Convolution has no self-suppression after recent activation, vs the time-dependent, nonlinear response of wet cells.

* Current silicon offers no performance advantage for interconnect to adjacent cells (though it could be done).

With ~80 billion neurons in a brain, a fan-in of ~1,000 is nothing like a dense layer at all.
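
A quick sketch of the first two points, using PyTorch's stock Conv2d purely for illustration:

    import torch
    import torch.nn as nn

    conv = nn.Conv2d(1, 16, kernel_size=3, padding=1)

    # Same local shape for each cell: one shared (16, 1, 3, 3) kernel tensor,
    # no matter how many spatial positions it gets slid over.
    print(conv.weight.shape)               # torch.Size([16, 1, 3, 3])

    # No self-suppression from recent activation: the layer is stateless,
    # so feeding the same input twice gives identical outputs both times.
    x = torch.randn(1, 1, 28, 28)
    print(torch.equal(conv(x), conv(x)))   # True; a wet cell would adapt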



