This isn't some 30-person startup. Google's revenue from other sources will easily keep them running and building on top of these products, with close to zero chance of failure no matter what happens with AI in the next decade.
Why do people keep thinking they're intellectually superior when negatively evaluating something that is OBVIOUSLY working for a very large percentage of people?
I've been asking myself this since AI started to become useful.
Most people would guess it threatens their identity. Sensitive intellectuals who found a way to feel safe by acquiring deep domain-specific expertise suddenly feel vulnerable.
In addition, a programmer's job, on the whole, has always been something like modelling the world in a predictable way so as to minimise surprise.
When things change at this rate/scale, it also goes against deep-rooted feelings about the way things should work (they shouldn't change!).
Change forces all of us to continually adapt and to not rest on our laurels. Laziness is totally understandable, as is the resulting anger, but there's no running away from entropy :}
Hopefully they are not vibe-coding that crap though. Do you want to make those apps even more unreliable than they already are, and encourage devs not to learn any lessons (as vibe coding prescribes)?
> Why do people keep thinking they're intellectually superior when negatively evaluating something that is OBVIOUSLY working for a very large percentage of people?
I'm not talking about LLMs, which I use and consider useful, I'm specifically talking about vibe coding, which involves purposefully not understanding any of it, just copying and pasting LLM responses and error codes back at it, without inspecting them. That's the description of vibe coding.
The analogy with "monkeys with knives" is apt. A sharp knife is a useful tool, but you wouldn't hand it to an inexperienced person (a monkey) incapable of understanding the implications of how knives cut.
Likewise, LLMs are useful tools, but "vibe coding" is the dumbest thing ever to be invented in tech.
> OBVIOUSLY working
"Obviously working" how? Do you mean prototypes and toy examples? How will these people put something robust and reliable in production, ever?
If you meant for fun & experimentation, I can agree. Though I'd say vibe coding is not even good for learning, because it actively encourages you not to understand any of it (or it stops being vibe coding, and turns into something else). Is that what you're advocating as "obviously working"?
> The analogy with "monkeys with knives" is apt. A sharp knife is a useful tool, but you wouldn't hand it to an unexperienced person (a monkey) incapable of understanding the implications of how knives cut.
Could an experienced person/dev vibe code?
> "Obviously working" how? Do you mean prototypes and toy examples? How will these people put something robust and reliable in production, ever?
You really don't think AI could generate a working CRUD app which is the financial backbone of the web right now?
> If you meant for fun & experimentation, I can agree. Though I'd say vibe coding is not even good for learning because it actively encourages you not to understand any of it (or it stops being vibe coding, and turns into something else). It's that what you're advocating as "obviously working"?
I think you're purposefully reducing the scope of what vibe coding means to imply it's a fire and forget system.
Sure, but why? They already paid the price in time/effort of becoming experienced, why throw it all away?
> You really don't think AI could generate a working CRUD app which is the financial backbone of the web right now?
A CRUD app? Maybe, with bugs, corner cases, and scalability problems. A robust system in other conditions? Nope.
> I think you're purposefully reducing the scope of what vibe coding means to imply it's a fire and forget system.
It's been pretty much described like that. I'm using the standard definition. I'm not arguing against LLM-assisted coding, which is a different thing. The "vibe" of vibe coding is the key criticism.
> Sure, but why? They already paid the price in time/effort of becoming experienced, why throw it all away?
You spend 1/10 of the time doing something, and you have the other 9/10 of that time to yourself.
> A CRUD? Maybe. With bugs and corner cases and scalability problems. A robust system in other conditions? Nope.
Now you're just inventing stuff: "scalability problems" for a CRUD app. You obviously haven't used it. If you know how to prompt the AI, it's very good at building basic stuff, and more advanced stuff with a few back-and-forth messages.
> It's been pretty much described like that. I'm using the standard definition. I'm not arguing against LLM-assisted coding, which is a different thing. The "vibe" of vibe coding is the key criticism.
By whom? Wikipedia says
> Vibe coding (or vibecoding) is an approach to producing software by depending on artificial intelligence (AI), where a person describes a problem in a few sentences as a prompt to a large language model (LLM) tuned for coding. The LLM generates software based on the description, shifting the programmer's role from manual coding to guiding, testing, and refining the AI-generated source code.[1][2][3] Vibe coding is claimed by its advocates to allow even amateur programmers to produce software without the extensive training and skills required for software engineering.[4] The term was introduced by Andrej Karpathy in February 2025[5][2][4][1] and listed in the Merriam-Webster Dictionary the following month as a "slang & trending" noun.[6]
Emphasis on "shifting the programmer's role from manual coding to guiding, testing, and refining the AI-generated source code" which means you don't blindly dump code into the world.
Doing something badly in 1/10 of the time isn't going to save you that much time, unless it's something you don't truly care about.
I have used AI/LLMs; in fact I use them daily and they've proven helpful. I'm talking specifically about vibe coding, which is dumb.
> By whom? [...] Emphasis on "shifting the programmer's role from manual coding to guiding, testing, and refining the AI-generated source code" which means you don't blindly dump code into the world.
By Andrej Karpathy, who popularized the term and describes it as mostly blindly dumping code into the world:
> There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
He even claims "it's not too bad for throwaway weekend projects", not for actual production-ready and robust software... which was my point!
> Writing computer code in a somewhat careless fashion, with AI assistance
and
> In vibe coding the coder does not need to understand how or why the code works, and often will have to accept that a certain number of bugs and glitches will be present.
and, M-W quoting the NYT:
> You don’t have to know how to code to vibecode — just having an idea, and a little patience, is usually enough.
and, quoting from Ars Technica
> Even so, the risk-reward calculation for vibe coding becomes far more complex in professional settings. While a solo developer might accept the trade-offs of vibe coding for personal projects, enterprise environments typically require code maintainability and reliability standards that vibe-coded solutions may struggle to meet.
I must point out this is more or less the claim I made and which you mocked with your CRUD remarks.
> Doing something badly in 1/10 of the time isn't going to save you that much time, unless it's something you don't truly care about.
You're adding "badly" like it's a fact when it is not. Again, in my experience, the experience of people around me, and many accounts from people online, AI is more than capable of doing "simpler" stuff on its own.
> By Andrej Karpathy, who popularized the term
Nowhere in your quoted definitions does it say you don't *ever* look at the code. M-W says non-programmers can vibe code, and that it's done in a "somewhat careless fashion"; neither of those implies you CANNOT look at the code for it to be vibe coding. If Andrej didn't look at it, that doesn't mean the definition says you are not to look at it.
> which you mocked with your CRUD remarks
I mocked nothing, I just disagree with you, since as a dev with over 10 years of experience I've been using AI for both my job and personal projects with great success. People who complain about AI expect it to parse "Make an ios app with stuff" successfully, and I am sure it will at some point, but for now it requires finer-grained instructions to ensure its success.
It's not obvious that it's "working" for a "very large" percentage of people, probably because this very large group of people keeps refusing to provide metrics.
I've vibe-coded completely functional mobile apps, and used a handful of LLMs to augment my development process in desktop applications.
From that experience, I understand why parsing metrics from this practice is difficult. Really, all I can say is that codegen LLMs are too slow and inefficient for my workflow.
Yup. Even a small market share is market share. Plus they are paying to acquire a team of folks who are already in this space and who will, until golden handcuffs come off, keep working in this space. Still an insane number though.
But OpenAI is a stronger brand with free publicity: whatever they say or do will instantly show up the same day on all news outlets across the world.
The "space" exists for months, there are no people with 10y expertise here, with their brand they can attract any talent they can wish for in this "space", no?
You can probably vibe code 80% of it in a week or two?
I guess it's all up to interpretation, but having a brand in one space doesn't necessarily translate to a brand in another. OpenAI doesn't/didn't have a code editor. Now it does/it will.
I'm fairly into LLMs, but it took me a while to try Cursor because the cost of changing editors is very high. I'd probably eventually try an OpenAI editor, but only if I saw it was actually getting adoption and good feedback from others.
I'd also argue that while this llm powered editor space is pretty new, the editor space in general is much older.
I keep running into these sorts of messages on HN, but my experience couldn't be more different. Even autocomplete does this automatically for me, let alone using chat/agent in Cursor/Augment Code.
> Japan unveils world’s first solar super-panel: More powerful than 20 nuclear reactors
> Under its revised energy plan, the Ministry of Industry now prioritizes PSCs on Section 0 of its plan wherein Japan aims to develop PSC sections generating 20 gigawatts of electricity equivalent to 20 nuclear reactors by fiscal 2040.
Wtf is this headline. Why are journalists doing this shit.
Press release, innit. Ministry of Industry releases press statement, gets almost automatically copy-pasted into various news feeds. Costs basically nothing to publish.
> And this is also why it requires no code changes to extract more metrics from wide events.
I think the point of OP's comment is that while you're not paying a code tax to parse/aggregate the data, as it's all in one place, you are paying a code tax for actually generating the event with everything in it.
Sure, you still need to write code, but instead of adding concrete metrics one by one, you instrument the context and the state. The OpenTelemetry trace API can save you a lot of work. But I agree there is still room to improve auto-instrumentation.
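The wide-event idea being discussed can be sketched without any particular library (names and fields here are purely illustrative, not from the thread): emit one rich event per request with all its context, and derive any metric later by filtering, so new metrics need no new instrumentation code.

```python
def handle_request(user_id, endpoint, duration_ms, status):
    # One "wide event" per request: all context in a single record,
    # instead of incrementing many separate metric counters in code.
    return {
        "user_id": user_id,
        "endpoint": endpoint,
        "duration_ms": duration_ms,
        "status": status,
    }

# Simulated traffic producing three wide events.
events = [
    handle_request("u1", "/login", 120, 200),
    handle_request("u2", "/login", 340, 500),
    handle_request("u1", "/home", 45, 200),
]

# New "metrics" are just queries over the stored events,
# no code change at the instrumentation site.
error_rate = sum(e["status"] >= 500 for e in events) / len(events)
slow_logins = [
    e for e in events
    if e["endpoint"] == "/login" and e["duration_ms"] > 200
]

print(round(error_rate, 2))  # 0.33
print(len(slow_logins))      # 1
```

With a tracing library such as OpenTelemetry, the same idea maps to attaching these fields as attributes on the request's span instead of building the dict by hand.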
> Well you know, all those fun and creative parts in software engineering has been taken over by vibe coding
What? In what way? Fun and creative parts are thinking about arch, approach, technologies. You shouldn't be letting AI do this. Typing out 40 lines of a React component or FastAPI handler does not involve creativity. Plus nobody is forcing you to use AI to write code, you can be as involved with that as you'd like to.
> Plus nobody is forcing you to use AI to write code, you can be as involved with that as you'd like to.
I had management "strongly encourage" me to use AI for coding. It will absolutely be a requirement soon for many people.
The more generative AI you use, the more dependent on it you become. Code bases need to be structured differently to be friendly to LLMs. So even if you work somewhere where you technically don't have to use AI, you will need it to even make sense of the code and stay competitive.
The job of a software engineer will, for the most part, change fundamentally, and there will be no going back. We didn't know how good we had it.
To be honest, I was up and running in less than a minute, so nothing stands out. I didn't dive deeper right now, but if I could do this and export to Figma directly, that would cover 100% of my use cases right now.