There are a whole bunch of software problems where "just prompt an LLM" is now a viable solution. Need to analyse some data? You could program a solution, or you could just feed it to ChatGPT with a prompt. Need to build a rough prototype for the front-end of a web app? Again, you could write it yourself, or you could just feed a sketch of the UI and a prompt to an LLM.
That might be a dead end, but a lot of people are betting a lot of money that we're just at the beginning of a very steep growth curve. It is now plausible that the future of software might not be discrete apps with bespoke interfaces, but vast general-purpose models that we interact with using natural language and unstructured data. Rather than being written in advance, software is extracted from the latent space of a model on a just-in-time basis.
A lot of the same people also recently bet huge amounts of money that blockchains and crypto would replace the world's financial system (and logistics and a hundred other industries).
A16z and Sequoia made some big crypto bets, but I don't recall Google or Microsoft building new data centers for crypto mining. There's a fundamental difference between VCs throwing spaghetti against the wall and established tech giants steering their own resources towards something.
The software that powers LLM inference is strikingly small, and it's identical no matter what task you ask the model to perform. An LLM is really just a neural architecture plus its trained weights; everything task-specific lives in the weights, not in the code.
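To make that concrete, here's a minimal sketch of what an inference loop looks like, stripped of all the real machinery. The `next_token` function and `CANNED` table are toy stand-ins I've invented for illustration: in a real system, `next_token` would be a forward pass through billions of weights producing a probability distribution to sample from. The point is that the surrounding loop is tiny and knows nothing about the task.

```python
# Toy continuation table standing in for a trained model's knowledge.
# (Hypothetical, for illustration only -- a real model encodes this in weights.)
CANNED = {
    "data?": "Feed",
    "Feed": "it",
    "it": "to",
    "to": "a",
    "a": "model.",
    "model.": "<eos>",
}

def next_token(context):
    # Stand-in for the forward pass: a real model maps the whole context
    # through its weights to a distribution over tokens and samples one.
    return CANNED.get(context[-1], "<eos>")

def generate(prompt_tokens, max_new_tokens=16, eos="<eos>"):
    # The entire inference loop: predict one token, append it, repeat.
    # Nothing here knows whether the task is data analysis or UI mockups;
    # all of that lives inside next_token (i.e., the weights).
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        tok = next_token(tokens)
        if tok == eos:
            break
        tokens.append(tok)
    return tokens

print(generate(["Need", "to", "analyse", "data?"]))
# → ['Need', 'to', 'analyse', 'data?', 'Feed', 'it', 'to', 'a', 'model.']
```

Swap in different weights (a different `next_token`) and the same dozen lines of loop code do a completely different job, which is exactly why the interesting part of an LLM is the weights, not the software around them.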