I think less than a decade, on the basis that the hardware requirements for experimenting with different algorithms have recently become quite reasonable. You get things like Karpathy's nanoGPT, which he wrote in a month and which can be trained for a few hundred dollars. People are going to be trying all sorts of ideas to get more human-like intelligence.
That is not a basis. LLMs do not even exist within the same plane of intelligence that AGI would. NanoGPT gets us no closer to AGI than Tylenol gets us to curing cancer. And predictions about when we'll "cure cancer" have been about as accurate as I suspect AGI predictions will be. When you don't understand the problem or the solution, you can't estimate the timeline for the solution. LLMs regurgitate and summarize text with unreliable accuracy, and they feign intelligence by reproducing the work of intelligent humans.
To me LLMs seem quite similar to the part of the brain that handles speaking without conscious thought. That would suggest people could build on it to do the other things the brain does, like thinking and learning. We shall see, I guess.
> LLMs do not even exist within the same plane of intelligence that AGI would. NanoGPT gets us no closer to AGI than Tylenol gets us to curing cancer.
You're saying that with a lot of confidence, but the truth is we don't know whether paracetamol might be the cure for cancer, much less know for sure that LLMs "do not even exist within the same plane of intelligence that AGI would".
Almost all discoveries are surprising, and after being made they look kind of obvious in hindsight. It just takes someone stumbling over the right way of putting the right pieces together, and boom, something new is discovered and invented.
With that said, I personally also don't believe LLMs to be the solution to AGI, but I won't claim with such confidence that they won't be involved in it; it would be kind of outlandish to claim something like that this early on.