In your previous comment you stated that LLMs can only solve problems that are in their training set (e.g. "all we are gonna get is better and better googles"). But that's not true, as I pointed out.
Now your argument seems to be that they can't solve all problems or, more charitably, that they can't solve highly complex problems. That's true, but by that standard the vast majority of humans can't reason either.
Yes, the reasoning capacity of current LLMs is limited, but it's incorrect to pretend they can't reason at all.
If an LLM is trained on Python coding, and separately trained on plain-English descriptions of how to decode ciphers, it can statistically interpolate between the two. That is a form of problem solving, but it's not reasoning.
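To make the interpolation point concrete, here's a toy sketch of the kind of output I mean, assuming a simple Caesar cipher (the function name and example string are purely illustrative, not taken from any actual model transcript):

    # Hypothetical example: the sort of decoder an LLM can stitch together
    # from "Python" training data plus plain-English cipher descriptions.
    def caesar_decode(ciphertext: str, shift: int) -> str:
        """Shift each letter back by `shift`; leave other characters alone."""
        result = []
        for ch in ciphertext:
            if ch.isalpha():
                base = ord('A') if ch.isupper() else ord('a')
                result.append(chr((ord(ch) - base - shift) % 26 + base))
            else:
                result.append(ch)
        return "".join(result)

    print(caesar_decode("Khoor, zruog!", 3))  # -> "Hello, world!"

Producing that by overlapping two training distributions is impressive, but it's pattern-matching, not working something out from scratch.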
This is why, when you ask it a fairly complex problem like how to make a bicycle using a CNC machine with a limited work space, it will give you generic answers: it's just statistically looking things up in a knowledge graph.
A human can reason because, when there is a gray area in their knowledge graph, they can effectively expand it. If I were given the same task, I would know that I'd have to learn things like CAD design, CNC code generation, parametric modeling, structural analysis, and so on, and I could do all of that without being prompted to.
You will know AI models have started to reason when they start asking questions without ever being explicitly told to ask questions through a prompt or training.