I've been quite disappointed to see how many papers from "cutting edge" research groups chase small improvements on well-known benchmarks by finding new techniques that happen to work, with far less effort put into understanding why they work. I suppose Geoff Hinton's explanation of what gets published today accounts for it.
Fully agree, and it's a necessary disease in young fields like ML (it's essentially grid search over techniques, in fact).
But at some point, some sort of theoretical foundation will need to be brought to bear, or progress will grind to a halt.
And the academic reward mechanism needs to start reflecting that fact.