
> The actual state of death becomes visible when demands for modifications of the program cannot be intelligently answered.

Yeah, but if that happens we can just ask an LLM to read the code and write the documentation.
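
Something like this minimal sketch, assuming the official openai Python client (the model name, prompt, and src/ directory are placeholders, not a tested pipeline):

    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Concatenate the source files we want documented (placeholder layout).
    source = "\n\n".join(p.read_text() for p in sorted(Path("src").rglob("*.py")))

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "You write concise developer documentation."},
            {"role": "user", "content": f"Read this code and write documentation for it:\n\n{source}"},
        ],
    )
    print(response.choices[0].message.content)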


Good documentation also contains the "why" of the code, i.e. why it is the way it is and not one of the other possible ways to write the same code. That information is inherently not present in the code, and there would be no way for an LLM to figure it out after the fact.
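
To make that concrete, here's an invented snippet (all names and the vendor ticket are hypothetical) where the "why" lives only in the comment; delete the comment and nothing in the code can tell you, after the fact, that the retry loop is a deliberate workaround rather than a quirk:

    import time

    def fetch_record(client, record_id, retries=3):
        # Why the retry: the upstream service returns HTTP 200 with an empty
        # body while a record is still provisioning (vendor ticket VND-4312,
        # hypothetical), so an empty response means "try again", not "missing".
        for _ in range(retries):
            body = client.get(f"/records/{record_id}")
            if body:
                return body
            time.sleep(1)
        raise TimeoutError(f"record {record_id} never appeared")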

Also, no "small" program is ever at risk of dying in the sense Naur describes. Worst case, you can just re-read the code. The problem lies with the giant enterprise code bases of the 60s and 70s that thousands of people have worked on over the years. Even if you did have good documentation, it would run to hundreds of pages, and reading it might be more work than just reading the code.


I'm currently involved in a project where we are getting an LLM to do exactly that. As someone who _does_ have a working theory of the software (I was involved in designing and writing it), my current assessment is that the LLM-generated docs are pure line noise at the moment and have basically no value in imparting knowledge.

Hopefully we can iterate and get the system producing useful documents automagically, but my worry is that it will not generalise across different systems, and as a result we will have invested a huge amount of effort into creating "AI"-generated docs for our system that could have been better spent just having humans write the docs.


My experience with tools like deepwiki has been mixed, but that's precisely the problem: I tried it on libraries I was familiar with, and it was subtly wrong about some things.

We are not at the subtly wrong stage yet; currently we are at the "totally empty words devoid of real meaning" stage.

It's insane to me how confident you people are in LLMs' abilities. Have you not tried them? They fuck things up all the time. Basic things. You can't trust them to do anything right.

But sure let's just have it generate docs, that's gonna work great.


There's a skill to phrasing the prompt so the code comes out more reliable.

There was some thread on here the other day where someone said they routinely give Claude many paragraphs specifying what the code should and shouldn't do. It takes them 20 minutes just to type it all up.


Yeah, sure, but that's not what the dude above is suggesting. The dude above is suggesting "hello AI, please document this entire project for me".

I mean, even if that did work, you'd still gotta read the docs to roughly the same degree as you would have had to read the code, and you have to read the code to work with it anyway.


The problem will always remain that it cannot answer 'why', only 'what'. And oftentimes you need things like intent and purpose and not just a lossy translation from programming instructions to prose.

I'd see it like transcribing a piece of music, where an LLM, or an uninformed human, would write down "this is a sequence of notes that follows a repetitive pattern across multiple distinct blocks; the first block has the lyrics X, Y ...", whereas an informed human would say "this is a pop song about Z; you might listen to it when you're feeling upset."


That's a bad example, because an LLM is perfectly capable of saying whether something is a song or not.

And how does it do that? By looking at the words and seeing that they rhyme?

An LLM is not capable of subtext, of reading between the lines, or of understanding intention, capability, sarcasm, or the other linguistic traits that apply a layer of unspoken context to what is actually spoken. Unless it matches a pattern.

It has one set of words, provided by you, and another set of words, provided by its model. You will get the bang average response every single time and mentally fill in the gaps yourself to make it work.


It would be nice if LLMs could do that without being wrong about what the code does and doesn't do.

Magical thinking.

That's what people would have said 3 years ago about today's state of AI.


