
Oh dear. Using AI for something you don't understand well is surely a recipe for disaster and should not be encouraged.


My take is that you should use AI for exactly the same things you would ask a random contractor to do for you, knowing that they won't be there to maintain it later.


On the other hand, one can see it as another layer of abstraction. Most programmers are not aware of how the assembly code generated from their programming language actually plays out, so they rely on the high-level language as an abstraction of machine code.

Now we have an additional layer of abstraction, where we can instruct an LLM in natural language to write the high-level code for us.

natural language -> high level programming language -> assembly

I'm not arguing whether this is good or bad, but I can see the bigger picture here.


Assembly is generally generated deterministically. LLM code is not.


Different compiler versions, target architectures, or optimization levels can generate substantially different assembly from the same high-level program. Determinism is thus very scoped, not absolute.

Also, almost every piece of software has known unknowns in the form of dependencies that are constantly updated; no one can read all of that code. Hence, in real life, if you compile on a different system ("works on my machine") or again after some time has passed (updates to the compiler, OS libs, packages), you will get a different checksum for your build even though the high-level code you wrote is unchanged. So in theory, given perfect conditions, you are right, but in practice it is not the case.
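
To make the point concrete, here is a minimal sketch (my own illustration in Python, assuming gcc is on PATH) that compiles the same trivial C function at two optimization levels and hashes the resulting assembly; on most setups the digests differ even though the source is identical:

    # Minimal sketch (assumes gcc is on PATH): hash the assembly produced
    # from the same C source at two optimization levels.
    import hashlib
    import os
    import subprocess
    import tempfile

    C_SOURCE = "int add(int a, int b) { return a + b; }\n"

    def assembly_digest(opt_flag):
        # Compile to assembly with the given flag and return a SHA-256 digest.
        with tempfile.TemporaryDirectory() as tmp:
            src = os.path.join(tmp, "add.c")
            asm = os.path.join(tmp, "add.s")
            with open(src, "w") as f:
                f.write(C_SOURCE)
            subprocess.run(["gcc", "-S", opt_flag, "-o", asm, src], check=True)
            with open(asm, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()

    for flag in ("-O0", "-O2"):
        print(flag, assembly_digest(flag))  # digests usually differ

Swap in a different compiler version or target and the digests shift again; the determinism only holds once the whole toolchain is pinned.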

There are established benchmarks for code generation (such as HumanEval, MBPP, and CodeXGLUE). On these, LLMs demonstrate that given the same prompt, the vast majority of completions are consistent and pass unit tests. For many tasks, the same prompt will produce a passing solution over 99% of the time.
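
For a concrete sense of what that pass rate means, here is a hypothetical sketch in Python; generate_solution stands in for whatever model API you use, and fake_model is a toy that succeeds 99% of the time:

    # Hypothetical sketch: estimate the per-prompt pass rate by sampling the
    # same prompt n times and running each completion against a unit test.
    import random

    def pass_rate(generate_solution, prompt, unit_test, n=100):
        passed = sum(1 for _ in range(n) if unit_test(generate_solution(prompt)))
        return passed / n

    # Toy stand-in for a model: emits a correct solution 99% of the time.
    def fake_model(prompt):
        if random.random() < 0.99:
            return "def add(a, b): return a + b"
        return "def add(a, b): return a - b"

    def unit_test(code):
        namespace = {}
        exec(code, namespace)  # run the candidate completion
        return namespace["add"](2, 3) == 5

    print(pass_rate(fake_model, "Write add(a, b).", unit_test))

The real benchmark harnesses do essentially this (HumanEval's pass@k, for instance), just with sandboxed execution and many prompts.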

I would say yes, there is a gap in determinism, but it's not as huge as one might think, and it's closing as time goes on.


Your comment lacks so much context and nuance that it ultimately amounts to nonsense.

You absolutely can, and probably _should_, leverage AI to learn many things you don't understand at all.

Simple example: try picking up a programming language like C with and without LLMs; with them it is going to be much more efficient. C is one of the languages LLMs have seen the most, and they are very, very good at it for learning purposes (and at bug hunting).

I have never learned as much about computing as in the last 7-8 months of using LLMs to assist me with summarizing, getting information, finding bugs, explaining concepts iteratively (99% of software books are crap: poorly written, quickly outdated, often wrong), scanning git repositories for implementation details, etc.

You people keep making the same mistake over and over: there are a million uses for LLMs, and instead of defining the context of what you're discussing, you conflate everything with vibe coding, which ultimately makes your comments nonsense.


I've posted this before, but I think it will be a perennial comment and concern:

Excerpted from Tony Hoare's 1980 Turing Award speech, 'The Emperor's Old Clothes'... "At last, there breezed into my office the most senior manager of all, a general manager of our parent company, Andrew St. Johnston. I was surprised that he had even heard of me. "You know what went wrong?" he shouted--he always shouted-- "You let your programmers do things which you yourself do not understand." I stared in astonishment. He was obviously out of touch with present day realities. How could one person ever understand the whole of a modern software product like the Elliott 503 Mark II software system? I realized later that he was absolutely right; he had diagnosed the true cause of the problem and he had planted the seed of its later solution."

My interpretation is that whether we delegate to programmers, to compilers, or to LLMs, the invariant is that we will always have to understand the consequences of our choices, or suffer them.

Applied to your specific example, yes, LLMs can be good assistants for learning. I would add that triangulation against other sources and against empirical evidence is always necessary before one can trust that learning.


My context is that I have seen some colleagues try to make up for not having expertise with a particular technology by using LLMs and ultimately they have managed to waste their time and other people's time.

If you want to use LLMs for learning, that's altogether a different proposition.


I kinda knew what you meant, but I also feel it is important to provide the nuance and context.


seems like a significant skill/intelligence issue. someone i know made a web security/pentesting company without ANY prior knowledge in programming or security in general.

and his shit actually works by the way, topping leaderboards on hackerone and having a decent amount of clients.

your colleagues might be retarded or just don’t know how to use llms


Would you recognize a memory corruption bug when the LLM cheerfully reports that everything is perfect?

Would you understand why some code is less performant than it could be if you've never written and learned any C yourself? How would you know if the LLM output is gibberish/wrong?

They're not wrong; it's just not black-and-white. LLMs sometimes happen to generate what you want. Often, for experienced programmers who can recognize good C code, LLMs generate too much garbage for the tokens they cost.

I think some people are also arguing that some programmers ought to still be trained in and experienced with the fundamentals of computing. We shouldn't abandon that skill set completely. Someone will still need to know how the technology works.


Not sure how your comment relates to mine.

The parent I answered said you shouldn't use LLMs for things you don't understand while I advocate you should use them to help you learn.

You seem to describe very different use cases.

In any case, just to answer your (unrelated to mine) comment, here[1] you can see a video of one of the most skilled C developers on the planet finding very hard-to-spot bugs in the Redis codebase.

If all your arguments boil down to "lazy people are lazy and misuse LLMs" that's not a criticism of LLMs but of their lack of professionalism.

Humans are responsible for AI slop, not AI. Skilled developers are enhanced by a great tool that they know how and when to use.

[1] https://www.youtube.com/watch?v=rCIZflYEpEk


I was commenting on relying completely on the LLM when learning a language like C when you don’t have any prior understanding of C.

How do people using LLMs this way know that the generated code/text doesn’t contain errors or misrepresentations? How do they find out?


>The parent I answered said you shouldn't use LLMs for things you don't understand while I advocate you should use them to help you learn.

Someone else's interpretation is not what the author said. :)

Since the tone is so aggressive, it doesn't feel like it would be easy to build any constructive discussion on this ground.

Acting prudently is not blind rejection; the latter is no wiser than blind acceptance.


Would you mind sharing some of the ways that you leverage LLMs in your learning?

Some of mine:

* Converse with the LLM on deeper concepts

* Use the `/explain` command in VSCode for code snippets I'm struggling with

* Have it write blog-style series on a topic, replete with hyperlinks

I have gotten into some doom loops, though, when having it try to directly fix my code, often because I'm asking it to do something that is not feasible, and its sycophantic tendencies tend to amplify this. I basically stopped using agentic tools to implement solutions that use tech I'm not already comfortable with.

I've used it for summarization as well, but I often find that a summary of a man page or RFC is insufficient for deeper learning. It's great for getting my feet wet and showing me gaps in my understanding, but I always end up having to read the spec in the end.


Good at bug hunting?

Have you heard how much AI slop has been submitted as "bugs" to the curl project - reports that always turn out not to be bugs at all?



