Hacker News

Chalmers: "GPT-5? A vastly-improved model that somehow reduces the compute overhead while providing better answers with the same hardware architecture? At this time of year? In this kind of market?"

Skinner: "Yes."

Chalmers: "May I see it?"

Skinner: "No."




It has only been a little over a year since GPT-4 was announced, and at the time it was the largest and most expensive model ever trained. It might still be.

Perhaps it's worth taking a beat, looking at the incredible progress in that year, and acknowledging that whatever's next is probably "still cooking".

Even Meta is still baking their 400B parameter model.


As Altman said: GPT-4 is the _worst_ model you will ever have to deal with in your life (or something to that effect).


I found this statement by Sam quite amusing. It transmits exactly zero information (it's a given that models will improve over time), yet it sounds profound and ambitious.


I got the same vibe from him on the All In podcast. For every question, he would answer with a vaguely profound statement, talking in circles without really saying anything. On multiple occasions he would answer like 'In some ways yes, in some ways no...' and then just change the subject.


Yep. I'm not quite sure what he's up to. He takes all these interviews and basically says nothing. What's his objective?

My guess is he wants OpenAI to become a household name, and so he optimizes for exposure.


and boy did the stockholders like that one.


What stockholders? They’re investors at this point. I wish I could get in on it.


They're rollercoaster riders, being told lustrous stories by gold-panners while the shovel salesman counts his money and leaves.


There are no shovels or shovel sellers. It’s heavily accredited investors with millions of dollars buying in. It’s way above our pay grade; our pleb sayings don’t apply.


I think you could pretty easily call Nvidia a shovel-seller in this context.


You’re right.


Why should I believe anything he says?


I will believe it when I see it. People like to point at the first part of a logistic curve and go "behold! an exponential".
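A toy sketch of that point in Python (all parameters here are arbitrary illustration values, nothing fitted to real data): early on, a logistic curve tracks a pure exponential almost exactly, then falls away as it saturates.

```python
import math

# Arbitrary toy parameters: ceiling L, growth rate k, midpoint t0.
def logistic(t, L=1000.0, k=1.0, t0=10.0):
    return L / (1.0 + math.exp(-k * (t - t0)))

def exponential(t, L=1000.0, k=1.0, t0=10.0):
    # The logistic's own early-time approximation: for t << t0,
    # L / (1 + e^(-k(t - t0))) is roughly L * e^(k(t - t0)).
    return L * math.exp(k * (t - t0))

# Early samples are near-identical; later ones diverge wildly as the
# logistic flattens toward L while the exponential keeps climbing.
for t in [0, 5, 9, 10, 12, 15]:
    print(f"t={t:2d}  logistic={logistic(t):10.2f}  exponential={exponential(t):10.2f}")
```

If you only ever observe the first few samples, the two models are effectively indistinguishable, which is the trap.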


Ah yes, my favorite was the early COVID numbers: some of the "smartest" people in the SF techie scene were on Facebook daily, thought-leadering about how 40% of people were about to die in the likely case.


Let's be honest, everyone was speculating. Nobody knew what the future would bring, not even you.


The difference is some people were talking a whole lot confidently, and some weren’t.


Legit love progress


GPT-3 was released in 2020 and GPT-4 in 2023. Now we all expect 5 sooner than that, but you're acting like we've been waiting years lol.


The increased expectations are a direct result of LLM proponents continually hyping exponential capabilities increase.


So if not exponential, what would you call adding voice and image recognition, function calling, greatly increased token generation speed, reduced cost, massive context window increases, and then shortly after combining all of that in a truly multimodal model that is even faster and cheaper while adding emotional range and singing, in… checks notes …14 months?! Not to mention creating and improving an API, mobile apps, a marketplace and now a desktop app. OpenAI ships, and they are doing so in a way that makes a lot of business sense (continue to deliver while reducing cost). Even if they didn’t have another flagship model in their back pocket I’d be happy with this rate of improvement, but they are obviously about to launch another one given the teasers Mira keeps dropping.


All of that is awesome, and makes for a better product. But it’s also primarily an engineering effort. What matters here is an increase in intelligence. And we’re not seeing that aside from very minor capability increases.

We’ll see if they have another flagship model ready to launch. I seriously doubt it. I suspect that this was supposed to be called GPT-5, or at the very least GPT-4.5, but they can’t meet expectations so they can’t use those names.


Isn’t one of the reasons for the Omni model that text-based learning is limited by the available source material? If it’s just as good at audio, that opens up a whole other set of data, and an interesting UX for users.


I believe you’re right. You can easily transcribe audio but the quality of the text data is subpar to say the least. People are very messy when they speak and rely on the interlocutor to fill in the gaps. Training a model to understand all of the nuances of spoken dialogue opens that source of data up. What they demoed today is a model that to some degree understands tone, emotion and surprisingly a bit of humour. It’s hard to get much of that in text so it makes sense that audio is the key to it. Visual understanding of video is also promising especially for cause and effect and subsequently reasoning.


The time required to research, train, test, and deploy a new model at frontier scale doesn't change depending on how hyped the technology is. I just think the comment I was replying to lacks perspective.


Pay attention to the signal, ignore the noise.


People who buy into hype deserve to be disappointed. Or burned, as the case may be.


Incidentally, this dialogue works equally well, if not better, with David Chalmers and B.F. Skinner as with the Simpsons characters.


Agnes (voice): "SEYMOUR, THE HOUSE IS ON FIRE!"

Skinner (looking up): "No, mother, it's just the Nvidia GPUs."




"Seymour, the house is on fire!"

"No, mother, that's just the H100s."



