Hacker News | nazgul17's comments

That's not an interesting difference, from my point of view. The black box we all use is non-deterministic, period. It doesn't matter where inside the system it stops being deterministic: if I hit the black box twice, I get two different replies. And even that doesn't matter, which you also said.

The more important property is that, unlike the output of compilers, type checkers, linters, verifiers, and tests, the output is unreliable. It comes with no guarantees.

One could be pedantic and argue that bugs affect all of the above. Or that cosmic rays make everything unreliable. Or that people are non-deterministic. All true, but the failure rates differ by orders of magnitude.


My man, did you even watch my video? Did you even try the app? This is not bug-related; nowhere did I say it was a bug. Batch processing is a FEATURE that is intentionally turned on in the inference engine by large-scale providers. That does not mean it has to be on. If they turn off batch processing, all LLM API calls will be 100% deterministic, but it will cost them more money to provide the service, as now you are stuck with one API call per GPU. "if I hit the black box twice, I get two different replies" - what you are saying here is 100% verifiably wrong. Just because someone chose to turn on a feature in the inference engine to save money does not mean LLMs are non-deterministic. LLMs are stateless: their weights are frozen. You never "run" an LLM, you can only sample it, just like a hologram, and the inference sampling settings you use determine the outcome.

Correct me if I'm wrong, but even with batch processing turned off, aren't they still only deterministic as long as you set the temperature to zero? Which also has the side effect of decreasing creativity. But maybe there's a way to pass in a seed for the pseudo-random generator and restore determinism in this case as well. Determinism, in the sense of reproducibility. But even so, "determinism" means more than mechanical reproducibility for most people - including the parent, if you read their comment carefully. What they mean is: predictable, in some important way, for us humans. I.e. no complete WTF surprises, which LLMs are prone to produce once in a while, regardless of batch processing and temperature settings.

You can change ANY sampling parameter once batch processing is off and you will keep the deterministic behavior: temperature, repetition penalty, etc. I have to say I'm a bit disappointed to see this on Hacker News; I expect it from Reddit. You're handed the whole matter on a silver platter: the video describes in detail how any sampling parameter can be used, and I provide the whole code open source so anyone can try it themselves without taking my claims as hearsay. Well, you can lead a horse to water, as they say...
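A minimal sketch of the reproducibility claim, using a toy sampler rather than a real inference engine (all names here are made up for illustration): with a fixed RNG seed and no batching-style nondeterminism, sampling is bit-for-bit reproducible even at a nonzero temperature.

```python
import numpy as np

def softmax(logits, temperature):
    # Scale logits by temperature, then normalize to a probability distribution.
    z = logits / temperature
    z = z - z.max()  # for numerical stability
    p = np.exp(z)
    return p / p.sum()

def sample_sequence(seed, steps=10, temperature=0.8):
    # Toy stand-in for an LLM: fixed ("frozen") weights mapping state -> logits.
    rng = np.random.default_rng(seed)
    weights = np.arange(5, dtype=float)  # frozen, like model weights
    token = 0
    out = []
    for _ in range(steps):
        logits = weights * (token + 1) % 7  # deterministic function of state
        p = softmax(logits, temperature)
        token = int(rng.choice(len(p), p=p))
        out.append(token)
    return out

# Same seed -> identical output, even at nonzero temperature.
a = sample_sequence(seed=42)
b = sample_sequence(seed=42)
assert a == b
```

The nondeterminism argued about above would enter elsewhere, e.g. batching changing floating-point reduction order, not from the sampling step itself once the seed is pinned.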

I feel like you're putting words in someone else's mouth. Maybe you are not responding to OP but, in your mind, to an ex-colleague who did so in a different venue than this forum?

In a forum like this, stating your preference is just that: stating your preference.

If you were talking with your manager and stated your preference, you'd be stating your preference and, between the lines, asking to make it happen for yourself.

If you were talking with your manager and stated your preference and specified the reason is because you prefer working around people, only then, between the lines, you'd be asking to make it happen for your whole team.


I agree, the preferences don't do anything unless used as a collective. But from the point of view of comparing the viewpoints, they're not apples to apples, because one requires the cooperation of other people. WFH isn't actually 'from home', it's from not-office-with-everyone-else. So if you just want to work in an office, then WFH is perfect for you. Arguably even better than working in the one and only office, because you get to choose the office.

But the buried lede, so to speak, is that RTO has literally nothing to do with the office. The office is just an empty box that happens to exist somewhere.

So the level of control for each preference is wildly different, and they can't just be compared like that. One is naturally 'closed', and the other naturally 'open'. That, to me, does speak to the intrinsic value of each preference.


Probably, because not everyone is made happy: some are annoyed. I'm not going to get into the merits of either side.


To add to this: the system offering text generation, i.e. the loop that builds the response one token at a time from an LLM (and feeds the text generated so far back into the LLM), is a Markov model, where the LLM replaces the transition matrix and the state space is the space of all texts.
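That loop can be sketched in a few lines. The `next_token_dist` function below is a made-up stand-in for an LLM forward pass (nothing here is a real model); the point is that the next step depends only on the current state, which is what makes the chain Markov.

```python
import random

VOCAB = ["the", "cat", "sat", "<eos>"]

def next_token_dist(state):
    # Hypothetical stand-in for an LLM: returns P(next token | state).
    # The state is the entire text generated so far, so the transition
    # depends only on the current state (the Markov property).
    if state.endswith("sat"):
        return {"<eos>": 1.0}
    return {"the": 0.4, "cat": 0.3, "sat": 0.3}

def generate(seed=0, max_steps=20):
    rng = random.Random(seed)
    state = ""  # initial state: the empty text
    for _ in range(max_steps):
        dist = next_token_dist(state)
        tokens, probs = zip(*dist.items())
        tok = rng.choices(tokens, weights=probs)[0]
        if tok == "<eos>":
            break
        state = (state + " " + tok).strip()  # transition to the new state
    return state
```

In a real system the dictionary lookup is replaced by a forward pass over the whole vocabulary, but the structure of the loop is the same.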


They also tried to heal the damage, with partial success. Besides, it's science: you need to test your hypotheses empirically. Also, performing a study and sharing the results is possibly the best way to draw researchers' attention to the issue.


Yeah, I mean, I get that, but surely we have research like this already. "Garbage in, garbage out" is basically the catchphrase of the entire ML field. I guess the contribution here is that "brainrot"-like text is garbage, which, even though it seems obvious, does warrant scientific investigation. But then that's what the paper should focus on. Not that "LLMs can get 'brain rot'".

I guess I don't actually have an issue with this research paper existing, but I do have an issue with its clickbait-y title that gets it a bunch of attention, even though the actual research is really not that interesting.


I don’t understand, so this is just about training an LLM with bad data and just having a bad LLM?

just use a different model?

don't train it with bad data, and just start a new session if your RAG muffins went off the rails?

what am I missing here


The idea of brain rot is that if you take a good brain and give it bad data it becomes bad. Obviously if you give a baby (blank brain) bad data it will become bad. This is about the rot, though.


Do you know the concept of brain rot? The gist here is that if you train on bad data (if you fuel your brain with bad information), it becomes bad.


I don’t understand why this is news or relevant information in October 2025 as opposed to October 2022


Brexit was fairly recent, compared to the bad governance of Argentina.


Yes. Let the Brits elect a true populist like Nigel Farage and then we'll see. I mean, I hope not, but it seems we are on that path.


Argentina had a persistent history of coups, dictatorships, and unstable governance before and after Peron.

It does a disservice to both Argentine and American history to compare either when their institutions historically and currently are not comparable.

This is why it is best to look at Brexit. You might be a boomer, but Brexit happened almost a decade ago, and the shakeup that happened in 2016-22 is enough data to understand how similar shakeups would impact a state with similar institutions like the US.


The scalability of spying has exploded. Back before electronic comms, the government had no way to spy on communications and sieve out opposers - now they do, with encryption the only thing standing in the way.


We should rename it as BrowserScript. The .bs extension is a funny bonus.


In my humble (backender) opinion, if it's hard to use a tool right, that counts as a con, and it must be accounted for when choosing which tool to use.


It's hard to build non-trivial web UI with any technology—React is just what's familiar. If Angular had won (god forbid) we'd be seeing all the same articles written about how bad Angular is.


What a condescending and arrogant answer.

I think you should also think more about this.

One can believe that Apple (or any company) should let you do whatever you want with your hardware - in general - and point out any instance when they don't; even if that specific instance is not something that touches you!

This is true of everything. Another example: if you believe in freedom of speech, you should vocally defend anyone who is deprived of it, even when that is not you. Otherwise, you lose by divide and conquer.

Apes together strong.

