No mate, not everyone is trying hard to prove some guy on the Internet wrong. I do remember those two, but to be honest they weren't top of mind in this context, probably because it's a different example - or what are you trying to say? That the people running AI companies should go to jail for deceiving their investors? This is different from Theranos. Holmes actively marketed and PRESENTED a "device" which did not exist as specified (they relied on third-party labs running the tests in the background). For all we know, OpenAI and their ilk are not really doing that, so you're on thin ice here. Amazon came close with their failed Amazon Go experiment, but they only invested their own money, so no damage was done to anyone. In either case, what is your example showing? That lying is normal in the business world and should be part of a CEO's job description? That they should or should not go to jail for it? I'm really missing your point here, no offence.
> In either case, what is your example showing? That lying is normal in the business world and should be part of a CEO's job description? That they should or should not go to jail for it? I'm really missing your point here, no offence.
If you run through the message chain, you'll see first that the comment OP claims companies market LLMs as AGI, and then the next guy quotes Altman's tweet to support it. I am saying companies don't claim LLMs are AGI and that CEOs are doing CEO things; my examples are Elon (who didn't go to jail, btw) and the other two who did.
> For all that we know, OpenAI and their ilk are not doing that really.
I think you completely missed the point. Altman is definitely engaging in 'creative' messaging, as do other GenAI CEOs. But unlike Holmes and others, they are careful to wrap it in conditionals, future tense, and vague corporate speak about how something "feels" like this or that, rather than stating that it definitely is this or that. Most of us dislike the fact that they are indeed implying this stuff is almost AGI - just around the corner, just a few more years, just a few more hundred billion dollars burned in datacenters - when we can see on a day-to-day basis that their tools are just advanced text generators. Anyone who finds them 'mindblowing' clearly does not have a complex enough use case.
I think you are missing the point. I never said it's the same nor is that what I am arguing.
> Anyone who finds them 'mindblowing' clearly does not have a complex enough use case.
What is the point of LLMs? If their only point is complex use cases, then they're useless; let's throw them away. If their point/scope/application is wider and they're doing something for a non-negligible percentage of people, then who are you to gauge whether they deserve to be mindblowing to someone or not, regardless of their use case?
What is the point of LLMs? It seems nobody really knows, including the people selling them. They are a solution in search of a problem. But if you figure it out in the meantime, make sure to let everyone know. Personally, I'd be happy just having back Google as it was between roughly 2006 and 2019 (RIP), in place of the overly verbose statistical parrots.
That's reallyyyy trying hard to minimise the capabilities of LLMs and the potential we're still discovering. But you do you, I guess.