
Where I work, in the hoary fringes of high-end tech, we can't secure enough token processing for our use cases. Every token price decrease opens up capacity, but we immediately hit the limits of what we can acquire. We can't keep up with the use cases - but more than that, we can't develop tooling fast enough to harness things, and the tooling we are creating is a quick hack. I don't fear for the revenue of base model providers. But in the end the person selling the tools makes the most, and in this case I think it will continue to be the cloud providers. In a very real way, OpenAI and Anthropic are commercialized charities driving change and rapidly commoditizing their own products, and it'll be the infrastructure providers who win the high-end model game. I don't think this is a problem; I think it's in fact in line with their original charters, just a different path than most people expect from nonprofit work. A much more capitalist and accelerated take.

Where they might build future businesses is in the tooling. My understanding from friends inside these companies is that their internal tooling is remarkably advanced compared to generally available tech. But base models aren't the future of revenue (to be clear, though, they make considerable revenue from them today; at some point their own efficiency gains will cannibalize demand, and the residual business will be tools).



I'm curious now. Can you give some color on what you're doing that keeps hitting these boundaries? I suppose it isn't limited by human attention.


Yes, it is limited by human attention. It has humans in the loop, but a lot of the LLM use cases come from complex, language-oriented information-space challenges. It's largely classification, plus summarization and agent-based dispatch / choose-your-own-adventure flows, with humans in the loop in complex decision spaces at a major finserv.
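
To make that dispatch pattern concrete, here's a minimal sketch (the function names, confidence threshold, and stubs are illustrative placeholders, not the actual stack): the model handles each item, and anything below a confidence cutoff is escalated to a person, so human attention is spent only where it's needed.

    # Minimal human-in-the-loop dispatch sketch: the model classifies each
    # item, and anything below a confidence cutoff is escalated to a person.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Classification:
        label: str
        confidence: float  # 0.0-1.0, reported or estimated for the model

    def dispatch(
        item: str,
        classify: Callable[[str], Classification],  # wraps the LLM call
        human_review: Callable[[str, Classification], str],  # queues to a person
        threshold: float = 0.85,  # hypothetical cutoff; tuned per use case
    ) -> str:
        """Accept the model's label if it is confident, else escalate."""
        result = classify(item)
        if result.confidence >= threshold:
            return result.label
        return human_review(item, result)  # a human makes the final call

    if __name__ == "__main__":
        # Stubbed model and reviewer so the sketch runs without any API.
        def fake_model(text: str) -> Classification:
            return Classification("refund_request", 0.62)

        def fake_human(text: str, c: Classification) -> str:
            return "billing_dispute"

        print(dispatch("customer says the charge was wrong", fake_model, fake_human))
        # -> billing_dispute (escalated, since 0.62 < 0.85)

The model absorbs the high-confidence bulk; the escalations are where human attention becomes the bottleneck, which is exactly the constraint described above.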
