Hacker News | wyre's comments

Yes, they will prioritize AI safety until their board of directors says that needs to change.

Is that not considered institutional? If I own a Vanguard ETF, the stock that comprises the ETF is classified as being owned by Vanguard, right?

Genuinely asking.


That's the main motivation for serving in the army, but that still requires training and exams.

>It's a race towards AGI at this point. Not sure if that can be achieved as language != consciousness IMO

However, it is arguable that thought is related to consciousness. I’m aware non-linguistic thought exists and is vital to any definition of consciousness, but LLMs technically don’t think in words, they think in tokens, so I could imagine this getting closer.


'think' is one of those words that used to mean something but is now hopelessly vague; in discussions like these it becomes a blunt instrument. IMO LLMs don't 'think' at all - they predict what their model is most likely to say based on previously observed patterns. There is no world model or novelty. They are exceptionally useful idea-adjacency lookup tools. They compress and organize data in a way that makes it shockingly easy to access, but they only 'think' in the way the Dewey decimal system thinks.

if we were having this conversation in 2023 I would agree with you, but LLMs have advanced so much since then that calling them essentially efficient lookup tables is an oversimplification so dramatic I have to assume you don't understand what you're talking about.

No one accuses the Dewey decimal system of thinking.


If I am so ignorant maybe you'd like to expand on exactly why I'm wrong. It should be easy since the oversimplification is dramatic enough that it made you this aggressive.

No, I don't want to waste my time trying to change the view of someone so close-minded they can't accept that LLMs do anything close to "thinking"

Sorry.


That's what I thought. Big talk, no substance.

I'm not the other poster but he's probably referring to how your comment seems to only be talking about "pure" LLMs and seems pretty out of date, whereas most tools people are using in 2025 use LLMs as glue to stitch together other powerful systems.

Open-source models are still a year or so behind the SotA models released in the last few months. The price-to-performance ratio is definitely in favor of open-source models, however.

DeepMind is actively using Google’s LLMs on groundbreaking research. Anthropic is focused on security for businesses.

For consumers it’s still a better deal to pay for a subscription than to invest a few grand in a personal LLM machine. There will be a time in the future when diminishing returns shorten this gap significantly, but I’m sure top LLM researchers are planning for this and will do whatever they can to keep their firms alive beyond the cost of scaling.


Definitely

I am not suggesting these companies can't pivot or monetize elsewhere, but the return on developing a marginally better model in-house does not really justify the cost at this stage.

But to your point, developing research, drugs, security audits or any kind of services are all monetization of the application of the model, not the monetization of the development of new models.

Put more simply, say you develop the best LLM in the world, that's 15% better than peers on release at the cost of $5B. What is that same model/asset worth 1 year later when it performs at 85% of the latest LLM?

Already, any 2023 and perhaps even 2024 vintage model is dead in the water and worth close to zero.

What is a best in class model built in 2025 going to be worth in 2026?

The asset is effectively 100% depreciated within a single year.

(Though I'm open to the idea that the results from past training runs can be reused for future models. This would certainly change the math)
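The back-of-the-envelope depreciation above can be made concrete with a tiny sketch. All figures are the hypothetical ones from the argument itself (the $5B cost, the 15% edge, the 85% relative performance), not real numbers from any lab:

```python
# Hypothetical sketch of how fast a frontier model's competitive edge erodes.
# Every number here is an illustrative assumption from the comment above.

training_cost = 5_000_000_000        # assumed $5B to train the model
edge_at_release = 1.15               # 15% better than peers on release day
relative_perf_after_1yr = 0.85       # performs at 85% of the newest model a year later

# If willingness to pay tracks relative capability, the premium the model
# can command flips from +15% to -15% within a single year.
premium_at_release = edge_at_release - 1.0          # +0.15
premium_after_1yr = relative_perf_after_1yr - 1.0   # -0.15

print(f"Sunk training cost:      ${training_cost / 1e9:.0f}B")
print(f"Premium at release:      {premium_at_release:+.0%}")
print(f"Premium after one year:  {premium_after_1yr:+.0%}")
```

Once the premium goes negative, the model earns nothing over cheaper alternatives, which is the sense in which the asset is effectively 100% depreciated within a year.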


For sure, all these companies are racing to have the strongest model, and as time goes on we quickly start reaching diminishing returns. DeepSeek came out at the beginning of this year, blew everyone's minds, and now look at how far the industry has progressed beyond it.

It doesn't even seem like these companies are in a battle of attrition, racing not to be the first to go bankrupt. Watching this would be a lot more exciting if that were the case! I think if there were less competition between LLM developers, they could slow down, maybe.

Looking at the prices of inference for open-source models, I would bet proprietary models are making a nice margin on API fees, but there is no way OpenAI will make their investors whole off a few dollars of revenue per million tokens. I am terrified of the world we will live in if OpenAI is able to reverse their balance sheet. I think there's nowhere else that investors want to put their money.


Larger fines, more robust methods for Meta to keep children off their platforms, more robust methods to stop the spread of propaganda and spam on their platforms, and for Meta to start prioritizing connection between people instead of attention.

If you want a company to do something, you do need to ensure that the fine is bigger than the amount of money they made or will make by doing the thing you are trying to discourage. You need there to be a real downside. I don't think any of the fines that have been discussed are anywhere close to the levels that I am talking about.

Don’t corporate fines often come with requirements that the company also discontinue certain activities, start certain other ones, and be able to prove this or that to a regulator?

Probably meant Ladybird

Don’t be a pedant. You know very well there is a big difference between a photo taken on an iPhone and a photo edited with Nano Banana.

I found the last section to be the most exciting part of the article. It describes a conspiracy around AI development that is not about the AI itself, but about the power a few individuals will gain by building data centers that rival the size, power, and water consumption of small cities, which will be used to gain political power.


Probably the way it’s always been defined: those that own capital.


Yup, exactly this! To clarify a bit more for the lurkers:

Obviously the line can be hard to draw for most (intentionally so, even!), but at the end of the day there are people who work for their living and people who invest for their living. Besides not having to work, investors are very intentionally & explicitly tasked with directing society.

Being raised in the US, I often assumed that “capitalism” meant “a system that involves markets”, or perhaps even “a system with personal freedom”. In reality, it’s much drier and more obvious: capitalism is a system where the capitalists rule, just like the monarchs of monarchism or the theocrats of theocracy. There are many possible market-based systems that don’t have the same notion of personal property and investment that we do.


Ah, that might explain some communication issues I've had.

Looking it up, it seems that Marxists use the word "capitalists" to refer to the class of owners of capital. I've always used "capitalist" to refer to a market-led country or to people who believe in capitalism. My dictionary helpfully uses "capitalist" to mean anything related to capitalism.

At the very least, I'll have learnt something from this conversation :)


Lots of followers of capitalism fancy themselves capitalists, as supporters of a system that could enable them to own capital themselves - which feels like a level playing field in terms of possibility for the future. But they are not capitalists and have nothing in common with the ones they idolize. There is an in-between sense of the word where people apply the label aspirationally.
