
I really think "parse, don't validate" gives people a false sense of security (particularly false in dynamic languages like JavaScript and Python).

"Well, I already know this is a valid uuid, so I don't really need to worry about sql injection at this point."

Sure, this is a dumb thing to do in any case, but I've seen this exact thing happen.

Type safety isn't safety.
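To make the point concrete: the safe habit is to keep the query parameterized regardless of what the value has been parsed into. A minimal sketch in Python (the table, column, and helper names are made up for illustration):

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id TEXT, name TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("0" * 32, "alice"))

def get_user(conn, user_id: str):
    # Parsing proves the *shape* of the value...
    parsed = uuid.UUID(user_id)  # raises ValueError if not a UUID
    # ...but the query should still be parameterized, never string-built.
    return conn.execute(
        "SELECT name FROM users WHERE id = ?", (parsed.hex,)
    ).fetchone()
```

The parse step and the parameterization are doing two different jobs; neither substitutes for the other.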


Type safety is absolutely some degree of safety. And I don’t know why anyone would think parsing a value into a type that has fewer inhabitants would absolve them of having to prevent SQL injection — these are orthogonal things.

The quote here — which I suspect is a straw man — is such a weird non sequitur. What would logically follow from “I already know this is a valid UUID” is “so I don’t need to worry about this not being a UUID at this point”.


In Python or TypeScript, two of the most popular languages in the world, it offers no runtime safety.

Even in languages like Haskell, "safety" is an illusion. You might create a NumberGreaterThanFive type with smart constructors but that doesn't stop another dev from exporting and abusing the plain constructor somewhere else.

For the most part it's fine to assume the names of types are accurate, but for safety critical operations it absolutely makes sense to revalidate inputs.
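The smart-constructor situation translates to dynamic languages too. A sketch in Python, using the same hypothetical NumberGreaterThanFive type:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NumberGreaterThanFive:
    value: int

def mk_number_greater_than_five(n: int) -> NumberGreaterThanFive:
    # The "smart constructor": the only *intended* way to build the type.
    if n <= 5:
        raise ValueError("must be greater than five")
    return NumberGreaterThanFive(n)

# Nothing stops other code from calling the plain constructor directly,
# so the type's name is a convention, not a guarantee.
bogus = NumberGreaterThanFive(3)
```

In Haskell the module system at least lets you hide the plain constructor; here the bypass is even easier.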


> that doesn't stop another dev from exporting and abusing the plain constructor somewhere else.

That seems like a pretty unfair constraint. Yes, you can deliberately circumvent safeguards and you can deliberately write bad code. That doesn't mean those language features are bad.


No one likes to hear it, but it comes down to prompting skill. People who are terrible at communicating and delegating complex tasks will be terrible at prompting.

It's no secret that a lot of engineers are bad at this part of the job. They prefer to work alone (i.e. without AI) because they lack the ability to clearly and concisely describe problems and solutions.


This. I work with juniors who have no idea what a spec is, and the idea of designing precisely what a component should do, especially in error cases, is foreign to them.

One key to good prompting is clear thinking.


LLMs are language models. Meshes aren't language. Yes, this can generate Python to create simple objects, but that's not how anyone actually creates beautiful 3D art, just like no one is hand-writing SVG files to create vector art.

LLMs alone will never make visual art. They can provide you an interface to other models, but that's not what this is.


This is of course true, but have you ever seen Inigo Quilez's SDF renderings? It's certainly not scalable, but it sure is interesting.

https://www.youtube.com/watch?v=8--5LwHRhjk
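For anyone unfamiliar with the idea: an SDF (signed distance function) returns the signed distance from a point to a surface, negative inside and positive outside, and whole scenes can be built by composing such functions. A toy sphere SDF sketched in Python:

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 0.0), radius=1.0):
    # Signed distance from point p to a sphere:
    # negative inside, zero on the surface, positive outside.
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius
```

A raymarcher repeatedly steps along a ray by this distance until it hits (or misses) the surface; that's the core of the technique in the linked video.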


That's fine. I'm happy to define "visual art" as things LLMs can't do, and use LLMs only for the 3d modelling tasks that are not "visual art".

Such tasks can be "not making visual art", but that doesn't mean they aren't useful.


I know that, I was making a statement about how you can.

Not exactly sure what your point is. If an LLM can take an idea and spit out words, it can spit out instructions (just like we can with code) to generate meshes, or boids, or point clouds or whatever. Secondary stages would refine that into something usable and the artist would come in to refine, texture, bake, possibly animate, and export.

In fact, this paper is exactly that. Words as input, code to use with Blender as output. We really just need a headless Blender to spit it out as a glTF and it's good to go to the second stage.
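As a toy illustration of the "words → code → mesh" idea (no Blender here, just plain Python emitting Wavefront OBJ text for the simplest possible asset; the helper name is made up):

```python
def mesh_to_obj(vertices, faces):
    # Serialize a mesh to Wavefront OBJ text (OBJ face indices are 1-based).
    lines = [f"v {x} {y} {z}" for x, y, z in vertices]
    lines += ["f " + " ".join(str(i + 1) for i in face) for face in faces]
    return "\n".join(lines) + "\n"

# A single triangle as the world's simplest "generated asset".
obj = mesh_to_obj(
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    [(0, 1, 2)],
)
```

An LLM emitting code like this (or bpy calls) is the first stage; the downstream artist work described above is everything that makes it usable.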


If you have an artist, can't you just talk to her about what you want and then she makes the model and all the rest of it? I don't really understand what you gain if you pay for an LLM, make a model with it, and then give it to the artist.


If you knew how an art pipeline worked, you would understand. An artist is usually one in an array of artists that completes a model. The pipeline starts with concept artists (easily AI now), then turnaround (AI is hit or miss here), modeling (this phase), texturing (could be another artist), baking (normally a texture artist's job, but depending on material complexity and whether it's for film, a technical artist), and rendering if needed.

Then you have sub specialties. Rigging, animation, texturing, environments, props, characters, effects.

It’s a fascinating process.


LLMs aren't suited to this, just like they aren't suited to generating images (different models do the hard work, even when you're using an LLM interface).

I agree with the parent comment. This might be neat to learn the basics of blender scripting, but it's an incredibly inefficient and clumsy way of making anything worthwhile.


That's fair, and perhaps a different kind of multi-modal model will emerge that is better at learning and interacting with UIs.

Or maybe applications will develop new interfaces to meet LLMs in the middle, sort of how MCP servers are a very primitive version of that for APIs.

Future improvements don't have to be just a better version of exactly what exists today; they can certainly mean changing or combining approaches.

Leaving AI/LLM aside, 3D modeling and animation tech has drastically evolved over the years, removing the need for lots of manual and complicated work by automating or simplifying the workflow for achieving better results.


Right.

This is like training an AI on being an Excel expert, and then asking it to make Doom for you: you're going to get some result, and it will be impressive given the constraints. It's also going to be pure dog shit that will never see the light of day other than as a meme.


Dry, cracked skin is a lot worse for your health. The body is producing oils for a reason.


The idea that aging office workers can learn to weld is even dumber than thinking aging welders can learn to code.


Welding is not hard to learn. It just takes practice.


Nope. Try TIG on aluminum or titanium. I will never be able to do this well enough to get certified because, simply put, I'm well past my prime dexterity years. But I can still write some damn fine embedded code.


...neither of those sounds the least bit dumb to me.


Which is why there's evidence that it doesn't work to help us learn from the mistakes of the past.


That was my optimistic take before I started working on a large Haskell code base.

Aside from the obvious problem that there's not enough FP in the training corpus, it seems like terser languages don't work all that well with LLMs.

My guess is that verbosity actually helps the generation self-correct... if it predicts some "bad" tokens it can pivot more easily and still produce working code.


> terser languages don't work all that well with LLMs

I’d believe that, but I haven’t tried enough yet. It seems to be doing quite well with jq. I wonder how it fares with APL.

When Claude generates Haskell code, I constantly want to reduce it. Doing that is a very mechanical process; I wonder if giving an agent a linter would give better results than offloading it all to the LLM.


I usually treat the LLM generated Haskell code as a first draft.

The power of Haskell in this case is the fearless refactoring the strong type system enables. So even if the code generated is not beautiful, it can sit there and do a job until the surrounding parts have taken shape, and then be refactored into something nice when I have a moment to spare.


APL is executed right to left, and LLMs... aren't.


Can't you just run HLint on it?


There's actually a significant difference between Haskell and OCaml here so we can't lump them together. OCaml is a significantly simpler, and moderately more verbose, language than Haskell. That helps LLMs when they do codegen.


This has been my experience as well. AI writes Go better than any other language, besides maybe HTML and JavaScript/Python.


I wonder if it has more to do with larger training data than the languages themselves.


(Async) Iterators are definitionally pull-based and not suitable for event (push) handling.

They've also been around for years as another poster mentioned.
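The pull-vs-push distinction, sketched in Python terms: an async iterator only yields when the consumer awaits the next item, while an event source calls its handler whenever it fires. A queue is the usual bridge between the two models:

```python
import asyncio

async def main():
    queue: asyncio.Queue = asyncio.Queue()

    # Push side: the event source fires whenever it likes.
    def on_event(value):
        queue.put_nowait(value)

    for v in (1, 2, 3):
        on_event(v)
    on_event(None)  # sentinel meaning "no more events"

    # Pull side: an async iterator that only advances when awaited.
    async def events():
        while (item := await queue.get()) is not None:
            yield item

    return [item async for item in events()]

result = asyncio.run(main())
```

Without the buffering queue, a pull-based consumer would simply miss events emitted while it wasn't awaiting, which is the mismatch the comment is pointing at.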


Sudden catastrophic failure is one of the reasons carbon fiber mountain bikes are a dangerous development.

Modern mountain bikes live in this weird consumer space. They are designed to stand up to incredible stresses while remaining light and agile. But they're increasingly purchased by people who don't really need high-performance gear; they just like the idea of owning "the best" stuff.

I imagine the same thing is true with pickup trucks. What once was utilitarian becomes a vanity object. Now it's harder for people who genuinely need the performance.


>What once was utilitarian becomes a vanity object. Now it's harder for people who genuinely need the performance.

The bike space is full of people screeching about how you'll catastrophically break stuff if you use your bike for all it's worth, just like the vehicle space is.


And "the best" for those people is definitely not a mountain bike. But I'm sure it's the same factor as for SUVs, it looks cool and is big and scary. Pretty much everyone would be better off with a citybike and a decent geartrain + wheels


Horses for courses. The sort of person riding a high end mountain bike is not using it to buy groceries in the city.


Terminal UIs are such a step backward. They're only attractive to people who have a preexisting emotional attachment to the terminal.

I should be one of those people, I guess. I love shell scripts and all the rest... but interactive terminal UIs have always sucked.

So much of what AI companies are putting out is designed to capture developer mindshare. Substantive improvements to their core product (models) are few and far between, so they release these fidgets about once a month to keep the hope alive.

From that standpoint, TUI makes sense because it obscures the process and the result enough to sucker more people into the vibe-coding money hole.


I think the way we currently work with agents, through a text context and prompts, is just a very natural fit for the terminal. It is a very simple design and makes it very easy to review the past actions of the agent and continue to guide it through new instructions. And then you can always jump into your IDE when you want to jump around the source code to review it in more detail.

On the other hand, agent integrations in IDEs seem to add a lot more widgets for interacting with agents, and often they put the agent in its own little tab off to the side, which I find harder to work with.

That's why, even though I love using IDEs and have never been a big terminal person, I much prefer using Claude Code in the terminal rather than using tools like Copilot in VSCode (ignoring the code quality differences). I just find it nicer to separate the two.

The portability of being able to really easily run Claude Code in whatever directory you want, and through SSH, is a nice bonus too.


I agree that the current crop of IDE integrations really leave something to be desired.

I've been using Roocode (a Cline fork) a lot recently, and while it's overall great, the UI feels janky and incomplete. Same as Cursor and all the others.

I tried Claude Code after hearing great things and it was just Roocode with a worse UX (for me). Most of the people telling me how great it was were talking up the output as being amazing quality. I didn't notice that. I presume the lack of IDE integration makes it feel more magical. This is fun while you're vibing the "first 80%" of your product, but eventually the agents need much more hand holding and collaborative edits to keep things on track.


It is composable with all the decades-old Linux CLI tools, which you simply can't do with an IDE.

It also doesn't prevent you from using an IDE at all, and it still fits people who use text editors like Vim and don't want an IDE.


I’d like to think it’s the most extensible format. If you prefer GUI, you can put a wrapper around it but this gives you the most flexibility.


The underlying process might be extensible, but the TUI likely isn't.

It makes sense I guess if a TUI is easier to build and ship than a GUI.

It does make me wonder why devs don't just use the TUI to vibe-code a GUI and compete with Cursor...


I am not 100% sold on these CLI tools, mainly because they don't optimize for coordination. I'd like to see a more polished AI behind the coordination, based on context, memory, cost, speed, etc. It doesn't make sense to deploy an LLM to do this specific task, or for me to hardcode that logic either. Right now, I'd start with o3 and delegate to other models based on strengths I perceive, but I'd rather have all of that automated for me.

