> This isn’t a generic chatbot. It’s a custom-built voice agent that answers his phone, knows his exact prices, his hours, his policies, and can collect a callback when it doesn’t know something.
I wanted to know how to build software with LLMs "without losing the benefit of knowing how the entire system works" and while staying "intimately familiar with each project’s architecture and inner workings", despite "having never even read most of their code". (Because obviously, you can't.) But OP didn't explain that.
You tell an LLM to create something, and then use another LLM to review it. That might make the result safer, but it doesn't mean that YOU understand the architecture. No one does.
Hot take: you can't have your cake and eat it too. If you aren't writing code, designing the system, creating the architecture, or even writing the prompt, then you're not understanding shit. You're playing slots with stochastic parrots.
The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
> There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.
Not all AI-assisted programming is vibe coding. If you're paying attention to the code that's being produced, you can guide it towards being just as high quality as (or even higher quality than) code you would have written by hand.
It's appropriate for the commenter I was replying to, who asked how they can understand things, "while having never even read most of their code."
I like AI-assisted programming, but if I fail to even read the code produced, then I might as well treat it like a no-code system. I can understand the high levels of how no-code works, but as soon as it breaks, it might as well be a black box. And this only gets worse as the codebase grows into the tens of thousands of lines without me having read any of it.
The (imperfect) analogy I'm working on is a baker who bakes cakes. A nearby grocery store starts making any cake they want, on demand, so the baker decides to quit baking cakes and buy them from the store. The baker calls the store anytime they want a new cake, and just tells them exactly what they want. How long can that baker call themself a "baker"? How long before they forget how to even bake a cake, and all they can do is get cakes from the grocer?
> Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away.
It's insane that this quote is coming from one of the leading figures in this field. And everyone's... OK that software development has been reduced to chance and brute force?
There are two ways to approach this. One is a priori: "If you aren't doing the same things with LLMs that humans do when writing code, the code is not going to work".
The other one is a posteriori: "I want code that works, what do I need to do with LLMs?"
Your approach is the former, which I don't think works in reality. You can write code that works (for some definition of "works") with LLMs without doing it the way a human would do it.
It means that just because a human can't read the code doesn't mean the code is not correct. Obfuscators exist, for example, and it's conceivable that the LLM writes perfectly correct code even though it's unmaintainable to us.
Thanks, that's a good insight into my value system then. I understand that code doesn't have to be human-readable to be correct. I don't want to work on a codebase filled with unreadable code which no human colleague understands though. This is also why I don't like a lot of web frameworks - the final code outputted to the page is a huge spaghetti of un-inspectable Javascript and HTML.
I want to have the ability to understand each relevant layer of the system, even if I don't necessarily have the full understanding at every given moment.
Also, to add to my point earlier: You don't like frameworks but it's frameworks all the way down to microcode, and that's a massive amount of layers. Javascript isn't an absolute source of truth, you're just picking one layer out of the entire abstraction stack and saying "this is good enough for me".
It's perfectly fine to do that, but also realize that other people might just choose a different layer, and that's fine too if the end result fulfills its purpose.
Sure, but that's more your preference than an objective way to do software "correctly". We're still figuring out what the latter means when LLMs are involved (hence my article here).
the hardware you typed this on was designed by hardware architects who write little to no code. they just type up a spec to be implemented by verilog coders.
> Since my main goal was to learn, I decided to do it "the right way". This means I didn’t want to rely on Replit or Lovable where the infra part is obfuscated. I wanted to deal with that complexity myself.
I expected OP to actually 'learn' devops, but all they did was ask LLMs to do everything.
Also...
> 180+ paid $2 for a dino
People pay $2 for an image of a dinosaur with a human face?
Is this an openclaw alternative that is installed on my mac but runs on their cloud? Or just a VDI?
It's difficult to understand what this is because its name is "Personal Computer", and it seems like their definition of Personal Computer is very different from everyone else's.
Also it's funny that it shows making a revenue report with their brand template. AI can replace HR jobs but they still have to make reports for noble executives? They are basically saying "We won't replace CEOs/executives".
It's ironic to debate whether 'clean rooming' or rewriting violates licensing laws when LLMs clearly violate all of them.
Also, no one can prove whether LLMs referenced the original code, because LLM companies don't disclose what data they used. I'm pretty sure that well-known open source projects such as chardet have been included in Claude's training data, but Anthropic won't say anything about it.
The fact that humans can use this feels like a side effect. The developer says it's "built for agents first" and "AI agents would be the primary consumers of every command, every flag, and every byte of output"[1].
This sounds like a term from an Arthur C. Clarke novel. Also reminds me of Urasawa Naoki's Pluto.
> Information Utility Burnout
These days, every time I search for something, I have to use 100% of my brain just to work out which results are slop and which are not, even on Kagi. A few days ago, I had to search for something on Google without an adblocker. It was a remarkable experience. Like, 60% of my screen was ads or AI slop.
It can't read my mind. This means that I have to say "I need to buy eggs" out loud in front of the machine (or in another room, depending on how good the microphone is). I'd rather just say "Hey Siri, remind me to buy eggs" to my phone.
This is 2026's most generic chatbot.