
> On the other hand, maybe abacuses and written language won't be the downfall of humanity, destroying our ability to hold numbers and memorize long passages of narrative, after all

The abacus, the calculator and the book don't randomly get stuff wrong in 15% of cases though. We rely on calculators because they eclipse us in _any_ calculation, we rely on books because they store stories permanently, but if I use ChatGPT to write all my easy SQL I will still have to write the hard SQL by hand because it cannot do that properly (and if I rely on ChatGPT too much I won't be able to do that either, because of attrition in my brain).

We'll definitely need people who can do the hard stuff still!

If we're lucky, the tendency toward random hallucinations will force an upswing in functional skepticism and lots of mental effort spent verifying outputs! If not, then we're probably cooked.

Maybe a ray of light, even coming from a serious skeptic of generative AI: I've been impressed by what someone with little ability to write code, or inclination to learn, can accomplish with something like Cursor to crank out little tools and widgets that improve their daily life, similar to how we still need skilled machinists even while 3D printing has democratized object production. LLMs: a 3D printer for software. It may not be great, but if it works, whatever.


> The abacus, the calculator and the book don't randomly get stuff wrong in 15% of cases though.

Yeah, you'd think that a profession that talks about stuff like "NP-Hard" and "unit tests" would be more sensitive to the distinction between (A) the work of providing a result versus (B) the amount of work necessary to verify it.


Yeah, they realize (B) is almost always much, much lower than (A), which is why ChatGPT is stupidly useful even if it gets 15% of the stuff wrong.

I distrust that rationale, because even if generation >= verification, what matters is the error rate and the impact of a miss. Wiring up a condemned building with demolition charges might take longer than a casual independent review, but that doesn't make a casual review good enough...

Truly perfect code verification can easily cost more than writing it, especially when it's not just the new lines themselves, but the change's effect on a big existing system.


> The abacus, the calculator and the book don't randomly get stuff wrong in 15% of cases though

Not sure about books. Between self-help, religion, and New Age, I'd guess quite a lot of books not marked as fiction are making false claims.


That's not what I meant, though; the point about books is that they store information reliably. If I write something down, within most reasonable settings it will still be the same text when I read it back. That means if I write something down instead of remembering it, the writing will outperform me in storing that information. Same with the calculator: the calculator will always perform at least as well as I do at arithmetic. There is no calculation on which the calculator can randomly fail, leading me to do it by hand, so I don't need to retain the skill of doing it by hand. The same cannot be said about LLMs, and that is the issue.

Sure, but that's also not what (generative) AI is for.

If you want a reliable list of facts, use (or tell the AI to use) a search engine and a file system… but then you need whatever system you use to be able to tell whether your search for "Jesus" was meant in the Christian missionary sense, or the "ICE arrested Jesús Cruz" sense, or you wanted the poem from the Whitehouse v Lemon case, or you were just swearing.

If you can't tell which you wanted, the books being constant doesn't help.

> There is no calculation on which the calculator can randomly fail, leading me to do it by hand, so I don't need to retain the skill of doing it by hand.

I've seen it happen, e.g. on my phone the other week, because Apple's note-based calculator strips unrecognised symbols. That means when you copy-paste from a place where "." is the decimal separator while your system settings say you use "," as the decimal separator, it gives an answer off by some power of ten… but I've also just today discovered that doing it the other way around on macOS (system setting "." as separator), it strips the stuff before the unrecognised decimal mark.

Just in case my writing is unclear, here's a specific example, *with the exact same note* (as in, it's auto-shared via iCloud and the answer is recomputed locally), on macOS (where "." is my separator):

  123,45 / 2 = 22.5
  123.45 / 2 = 61.725

and iOS (where "," is my separator):

  123,45 / 2 = 61,725
  123.45 / 2 = 6.172,5
And that's without data entry failure. I've had to explain to a cashier that if I have three items that are each less than £1, the total cannot possibly be more than £3.
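
(If anyone wants to poke at the underlying ambiguity, here's a minimal Python sketch. It's obviously not what Notes does internally, and it assumes the de_DE and en_US locales are installed; it just shows the same string parsing to two different numbers depending on the active locale.)

  import locale

  # Hypothetical illustration, not Apple's code: "123,45" means different
  # numbers depending on which locale's parsing rules are applied.
  locale.setlocale(locale.LC_NUMERIC, "de_DE.UTF-8")  # "," is the decimal separator
  print(locale.atof("123,45"))  # 123.45

  locale.setlocale(locale.LC_NUMERIC, "en_US.UTF-8")  # "." is the decimal separator
  print(locale.atof("123,45"))  # 12345.0 -- "," is read as a thousands separator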


