Hacker News

I've found that heavily commented code can be better for the LLM to read later, so it pulls in explanatory comments into context at the same time as reading code, similar to pulling in @docs, so maybe it's doing that on purpose?



No, it's just bad. I've been writing a lot of Python code over the past two days with Gemini 2.5 Pro Preview, and all of its code was like:

```python

def whatever():

    # --- SECTION ONE OF THE CODE ---

    ...

    # --- SECTION TWO OF THE CODE ---

    try:
        [some "dangerous" code]
    except Exception as e:
        logging.error(f"Failed to save files to {output_path}: {e}")
        # Decide whether to raise the error or just warn
        # raise IOError(f"Failed to save files to {output_path}: {e}")
```

(it adds commented out code like that all the time, "just in case")

It's terrible.

I'm back to Claude Code.


I'm seeing it try to catch blind exceptions in Python all the time, and I see the same in my colleagues' code; it's driving me nuts.
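For anyone unsure what that looks like: a blind (catch-all) handler swallows genuine bugs along with the expected failure. A hypothetical contrast, not from the actual model output:

```python
import logging

def parse_port_blind(value):
    try:
        return int(value)
    except Exception:  # blind: swallows *any* error, including real bugs
        logging.error("Bad port: %r", value)
        return None

def parse_port_narrow(value):
    try:
        return int(value)
    except ValueError:  # narrow: only the failure we actually expect
        logging.error("Bad port: %r", value)
        return None
```

Pass a list by mistake and the blind version quietly logs "Bad port" and returns None, while the narrow version lets the TypeError surface where you can see it.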


The training loop asked the model to one-shot working code for the given problems without being able to iterate. If you had to write code that had to work on the first try, and where a partially correct answer was better than complete failure, I bet your code would look like that too.

In any case, it knows what good code looks like. You can say "take this code and remove spurious comments and prefer narrow exception handling over catch-all", and it'll do just fine (in a way it wouldn't do just fine if your prompt told it to write it that way the first time, writing new code and editing existing code are different tasks).


It's only an example; there's plenty of irrelevant stuff that LLMs default to which is pretty bad Python. I'm not saying it's always bad, but there's a ton of not-so-nice or subtly wrong code generated (for example, file and path manipulation).
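The path-manipulation case is easy to get subtly wrong even for humans: os.path.join silently discards everything before an absolute component. A sketch of the pitfall and one possible guard (safe_join is a hypothetical helper, requires Python 3.9+ for is_relative_to):

```python
import os.path
from pathlib import Path

# The footgun: joining untrusted input can escape the base directory entirely.
os.path.join("/srv/uploads", "/etc/passwd")  # -> "/etc/passwd"

def safe_join(base: str, user_part: str) -> Path:
    base_p = Path(base).resolve()
    candidate = (base_p / user_part).resolve()
    # Reject anything that resolves outside the base directory
    if not candidate.is_relative_to(base_p):
        raise ValueError(f"path escapes base dir: {user_part}")
    return candidate
```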


There are a bunch of stupid behaviors of LLM coding that will be fixed by more awareness pretty soon. Imagine putting the docs and code for all of your libraries into the context window so it can understand what exceptions might be thrown!
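For instance, the stdlib docs already spell out that json.loads raises json.JSONDecodeError (a ValueError subclass) on bad input; with that in context there's no excuse for a blanket except. A small sketch (load_config is a made-up example function):

```python
import json

def load_config(text: str) -> dict:
    try:
        return json.loads(text)
    except json.JSONDecodeError as e:
        # The docs name the exact exception invalid input raises,
        # so anything else (a bug, a wrong argument type) still propagates.
        raise ValueError(f"config is not valid JSON: {e}") from e
```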


Copilot and the like have been around for 4 years, and we’ve been hearing this all along. I’m bullish on LLM assistants (not vibe coding) but I’d love to see some of these things actually start to happen.


I feel like it has gotten better over time, but I don't have any metrics to confirm this. And it may also depend on what language/libraries you use.


I feel like there was a huge jump when cursor et al appeared, and things have been “changing” since then rather than improving.


It just feels to me like trying to derive correct behavior without a proper spec, so I don't see how it'll get that much better. Maybe we'll collectively remove the pathological code, but otherwise I'm not seeing it.


Well, at least now we know who to blame for the training data :)


What’s a blind exception?


It's certainly annoying, but you can try following up with "can you please remove superfluous comments? In particular, if a comment doesn't add anything to the understanding of the code, it doesn't deserve to be there".


I'm having the same issue, and no matter what I prompt (even stuff like "Don't add any comments at all to anything, at any time") it still adds these typical junior-dev comments that just reiterate what the code on the next line does.


you can have a script that drops them all
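e.g. a minimal sketch using the stdlib tokenize module, so strings and docstrings that happen to contain "#" are left alone (it keeps the blank lines that full-line comments leave behind):

```python
import io
import tokenize

def strip_comments(source: str) -> str:
    """Remove # comments; strings and docstrings are untouched."""
    lines = source.splitlines(keepends=True)
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.COMMENT:
            row, col = tok.start  # rows are 1-indexed
            line = lines[row - 1]
            # Cut the comment off, trim trailing spaces, keep the newline
            lines[row - 1] = line[:col].rstrip() + ("\n" if line.endswith("\n") else "")
    return "".join(lines)
```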


You don't need a follow up

Just end your prompt with "no code comments"


I prefer not to do that as comments are helpful to guide the LLM, and esp. show past decisions so it doesn't redo things, at least in the scope of a feature. For me this tends to be more of a final refactoring step to tidy them up.


I always thought these were there to ground the LLM on the task and produce better code, an artifact of the fact that it will autocomplete better based on past tokens. Similarly, I always thought this is why ChatGPT starts every reply by repeating exactly what you asked.


Comments describing the organization and intent, perhaps. Comments just saying what a "require ..." line requires, not so much. (I find it will frequently put notes on the change it is making in comments, contrasting it with the previous state of the code; these aren't helpful at all to anyone doing further work on the result, and I wound up trimming a lot of them off by hand.)



