
This. Even if we ignore the whole ethical aspect of "AI for the benefit of humanity" and all that philosophical stuff, there are very real legal reasons why OpenAI should never have been allowed to switch to for-profit. They were only able to circumvent this with their new dual-company structure, but this should still not be legal.



The point of their charter is not to make money, it's to develop AI for the benefit of all, which I interpret to mean putting control and exploitation of AI in the hands of the public.

The reality: we don't even get public LLM models, let alone source code, while their coffers overfloweth.

Awesome for OpenAI and their employees! Everyone else goes without. Public benefit my arse.


I've been really hung up on the irony of the "Open" part of the OpenAI name. I figure "Open" must mean "open for business". What is open about OpenAI?


The most oppressive regimes have "Democratic" or "People's" in the official name of their country.

Someone took inspiration from this.


They changed their minds and didn't change the name. That's all.


If that's the case, the name should come with an asterisk and a footnote. Keeping "Open" in the name is not genuine. It would be like a superhero group calling themselves "Hero Squad", deciding that being superheroes isn't as profitable as villainy, but still calling themselves Hero Squad despite the obvious operational changes.


While I completely agree, I think we've seen enough to realize that something as powerful as what OpenAI is developing shouldn't be freely released to the public. Not as a product, nor as source code.

Dangerous and powerful things like weapons and chemicals are restricted in both physical and informational form for safety reasons. AI needs to be treated similarly.


So you believe that Microsoft can, on average, be trusted with dangerous technology more than humanity as a whole?

A bold claim given their track record


Didn't Firefox / Mozilla set that precedent already?


I can download the Firefox sources and everything else they produce.

That they make money incidentally to that is really no problem; if anything it's a positive, because it provides reasonable funding.

What if Firefox made a world-beating browser by accident? Would they be justified in closing the source, restricting access, and making people pay for it?

That's what OpenAI did.


That's the real distinction: does the for-profit subsidiary subsume the supposed public good of the parent non-profit?

If OpenAI Co. is gatekeeping access to the fruits of OpenAI's labors, what good is OpenAI providing?


Anyway, to answer your question: no, it's not okay to close up the nonprofit and go 100% for-profit in that case.

Concisely, in any human matters: do what you say you'll do, or add qualifiers / don't say it.

Take funds from a subset of users who need support services or patch guarantees of some kind, use that to pay people to continue to maintain and improve the product.


They had one of the best browsers in the world at one point.

Their sell-out path was hundreds of millions of dollars from GOOG to make their search engine the default and, unspoken: allowing FF to become an ugly, insecure, red-headed stepchild compared to Chrome.

Likely part of what took priority away from Thunderbird, at the time, too.


No. MozCo is a for-profit owned by the Mozilla Foundation, which does additional things to satisfy the IRS, and it has been that way since the beginning.


Not since the beginning. They made it that way after a beef with the IRS.

I wish they hadn't, because they think too commercially (extremely highly paid CEO, for instance), yet they have a foundation to answer to which doesn't manage them like shareholders would (e.g. shareholders wouldn't reward the CEO for dropping market share!). This model is the worst of both worlds imo.


That's the same basic structure, on paper, as OpenAI; it didn't “switch to for-profit” in the sense of taking the nonprofit entity and converting it to a for-profit.


Mozilla doesn't have outside investors; AFAIK it's 100% owned by the foundation. OpenAI has outside investors.


Imagine if, as punishment, OpenAI were forced to open-source any and all IP developed during the non-profit phase of the company.

That would be a nuke in the AI world.


Not really. The open source and proprietary models aren't that far apart.

They don't have a moat. Their main advantage has been their people, and already we've seen the entire Anthropic spinoff, Sutskever absent, Karpathy leave; who is next?


They already have a massive moat. Try competing with them, let me know what the bill looks like. Only a few companies on the planet can realistically attempt it at this point. Let me know how many GPUs you need and where you plan to get them from.

They have the same moat that Google search has. Including as it pertains to usage and data.

You also can't train a new competitor the way OpenAI was able to jumpstart GPT; the gates have already closed on some of the best data.

Very few companies will be able to afford to keep up with the hyper-scale models that are in our future, due to the extreme cost involved. You won't be able to get enough high-end GPUs, you won't be able to get enough funding, and you won't have a global brand that end users recognize and/or trust.

The moat expands as the requirements get ever larger to compete with them. Eventually the VC money dries up because nobody dares to risk vaporizing $5+ billion just to get in the ring with them. That happened in search (only Microsoft could afford to fund the red ink competition with Google), the exact same thing will happen here.

Google search produces $100+ billion in operating income per year. Venture capital to go after them all but dried up 15+ years ago. There have been very few serious attempts at it despite the profit, because of the cost vs risk (of failure) factor. A lot of people know how Google search works, there's a huge amount of VC money in the tech ecosystem, Google mints a huge amount of profit - and yet nobody will dare. The winner(s) in GPT's field will enjoy the same benefit.

And no, the open source at home consumer models will not come even remotely close to keeping up. That'll be the latest Linux consumer desktop fantasy.


Their main advantages are their products and their communication. ChatGPT is nice, and they managed to make their API the de facto standard.

Open source staying behind commercial products even when they're technically really close…? I think I've already seen this.


Imagine if, instead, they were forced to delete the models they built using all our data without consent. Let's make it a fusion bomb.


The copyright lawsuits against OpenAI are already calling for algorithmic disgorgement.



