It feels like the days of Claude 2 -> 3 or GPT 2 -> 3 level changes for the leading models are over, and you either end up with really awkward version numbers or you just embrace it and increment the number. Nobody cares that a Chrome update bumps the major version from 136 -> 137 instead of going 12.4.2.33 -> 12.4.3.0, for similar "the version number doesn't always have to represent the amount of work/improvement over the previous one" reasoning.
Even if LLMs never reach AGI, they're good enough that a lot of very useful tooling can be built on top of or around them. I think of it more like the introduction of computing or the internet.
That said, whether being a provider of these services is a profitable endeavor is still unknown. There's a lot of subsidizing going on, and some of the lower-value uses might fall by the wayside as companies eventually need to make money off this stuff.
This was my take as well, though after a while I've started thinking of it as closer to the introduction of electricity. Electricity is in many ways considered the second stage of the industrial revolution; the internet and AI might be considered the second stage of the computing revolution (or so I expect history books to label it). But just like electricity, it doesn't seem to be very profitable for the providers themselves, while being highly profitable for everything that uses it.
I think it's a bit early to say. At least in my domain, the models released this year (Gemini 2.5 Pro, etc.) are crushing models from last year. I'm therefore not by any means ready to call the situation a stall.
Sure, but even with a 2.0 release in between (which they didn't even feel the need to ship a Pro for), it still isn't the kind of GPT 2 -> 3 improvement we were hoping would continue for a bit longer. Companies will keep releasing these incremental improvements, which are all always neck-and-neck with each other. That's fine and good; just don't expect the versioning to represent the same relative difference rather than just the relative release increment.
I'd say 1.5 Pro to 2.5 Pro was a 3 -> 4 level improvement, but the problem is that 1.5 Pro wasn't state of the art when it was released (except for context length), and 2.5 wasn't that kind of improvement compared to the best OpenAI or Claude models available when it came out.
1.5 Pro was worse than the original GPT-4 on several coding things I tried head to head.