1. Building special-purpose hardware for neural nets is a good idea and potentially very useful.
2. The architecture implemented by this IBM chip, spike-and-fire, is not the architecture used by the state-of-the-art convolutional networks, engineered by Alex Krizhevsky and others, that have recently been smashing computer vision benchmarks. Those networks allow neuron outputs to take continuous values, not just binary on-or-off (a toy contrast is sketched just after this list).
3. It would be possible, though more expensive, to implement a state-of-the-art convnet in hardware similar to what IBM has done here.
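To make point 2 concrete, here's a toy contrast between a thresholded, all-or-nothing spike output and a continuous ReLU output. This is purely an illustration of the distinction, not a model of the TrueNorth design; the inputs, weights, and threshold are made up.

```python
import numpy as np

# Hypothetical inputs and weights for a single unit (illustration only).
x = np.array([0.2, -0.4, 0.9, 0.1])
w = np.array([0.5, 0.3, 0.2, 0.8])

pre_activation = np.dot(w, x)

# Spike-and-fire style unit: output is all-or-nothing once a threshold is crossed.
threshold = 0.1
spike_output = 1.0 if pre_activation >= threshold else 0.0

# Continuous unit as used in Krizhevsky-style convnets (ReLU): the output can
# take any non-negative real value, carrying more information per neuron.
relu_output = max(0.0, pre_activation)

print(f"spike output: {spike_output}, ReLU output: {relu_output:.2f}")
```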
Of course, just because no one has shown state-of-the-art results with spike-and-fire neurons doesn't mean it's impossible! Real biological neurons are spike-and-fire, though that doesn't mean the behavior of a computational spike-and-fire 'neuron' is a reasonable approximation to that of a biological one. And even if spike-and-fire networks really are worse, there may be applications in which the power/budget/required-accuracy tradeoffs favor a hardware spike-and-fire network over a continuous convnet. But it would be nice for IBM to provide benchmarks of their system on standard vision tasks, e.g., ImageNet, to clarify what those tradeoffs are.
I find it interesting that no group (to my knowledge) has tried something similar to [Do Deep Networks Need to Be Deep?](http://arxiv.org/abs/1312.6184) for ImageNet-scale networks. There have been several results showing that the knowledge learned in larger networks can be compressed and approximated using small or even single-layer nets. Extreme learning machines (ELMs) can be seen as another aspect of this. There have also been interesting results on the "kernelization" of convnets [from Julian Mairal and co.](http://arxiv.org/abs/1406.3332) that, together with the strong crossover between Gaussian processes and neural networks from back in the late '90s, point to the possibility of needing different "representation power" for learning vs. predicting, which may lead to the ability to kernelize the knowledge of a trained net, ideally in closed form.
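For the curious, the mimic-learning idea boils down to fitting a small model to the real-valued outputs of a big one rather than to the original labels. Here's a toy sketch: the "teacher" is just a random frozen two-layer net standing in for a trained model, and the "student" is a linear least-squares fit to its logits, whereas the paper actually trains a wide shallow net with SGD on soft targets.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_hidden, d_out, n = 20, 256, 5, 10_000
X = rng.normal(size=(n, d_in))

# "Teacher": a frozen two-layer net with a tanh nonlinearity (stand-in for a
# large trained network).
W1 = rng.normal(size=(d_in, d_hidden)) / np.sqrt(d_in)
W2 = rng.normal(size=(d_hidden, d_out)) / np.sqrt(d_hidden)
teacher_logits = np.tanh(X @ W1) @ W2

# "Student": a single linear layer fit by least squares to the teacher's
# real-valued logits (soft targets), not to any hard labels.
W_student, *_ = np.linalg.lstsq(X, teacher_logits, rcond=None)
student_logits = X @ W_student

# How often the compressed student agrees with the teacher's top prediction.
agreement = np.mean(
    np.argmax(student_logits, axis=1) == np.argmax(teacher_logits, axis=1)
)
print(f"student/teacher top-1 agreement: {agreement:.2f}")
```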
I am doing some experiments in this area, and would encourage anyone thinking of building hardware to look at this aspect before investing in the R&D! If this knowledge can really be compressed, it could mean a massive reduction in the complexity of a hardware implementation...
I am a bit biased here (I'm finishing a EuroScipy talk on exactly this topic now), but I find the connections interesting at least.
But over a specific time period, doesn't spike-and-fire integrate signals, so that effectively you're operating with real-valued quantities? Isn't this the brain's way of using digital signals (more robust, lower power) rather than analogue values over the neural wires?
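As a toy illustration of that point: if a unit fires with probability proportional to a real-valued input, then the spike count over a time window recovers an approximation of that value (rate coding). This is only a sketch; real neural and neuromorphic coding schemes are richer, and the numbers below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 0.73   # hypothetical analogue quantity in [0, 1]
window = 1000       # number of time steps we integrate over

# The unit emits a binary spike at each step with probability true_value.
spikes = rng.random(window) < true_value

# Averaging spikes over the window gives a real-valued estimate of the input.
estimate = spikes.mean()

print(f"true value: {true_value}, rate-coded estimate: {estimate:.3f}")
```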