Accumulation of cognitive debt when using an AI assistant for essay writing task (arxiv.org)
385 points by stephen_g 7 days ago | 425 comments





It feels, more and more, that LLMs will be another technology that society will inoculate itself against. It's already starting to happen in education: teachers conversing with students, observing them learn, observing them demonstrate their skills. In business, quickly I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people -- as authors of what they want to say. Authoring is two-thirds of the point of most communication.

Before that, of course, there will have to be a dramatic "shallowness of thinking" shock, whose ill-effects must be felt before they are properly inoculated against. It seems part of the expert aversion to LLMs -- against the credulous lovers of "mediocrity" (cf. https://fly.io/blog/youre-all-nuts/) -- is an early experience of inoculation:

Any "macroscopic usage" of LLMs has, in any of my projects, dramatically impaired my own thinking, stolen decisions-making, and worsened my readiness for necessary adaptions later-on. LLMs are a strictly microscopic fill-in system for me, in anything that matters.

This isn't like calculators: my favourite algorithms for by-hand computation aren't being "taken away". This is a system for substituting thinking itself with non-thinking, and it radically impairs your readiness (and depth, adaptability, ownership) wherever it is used, on whatever domain you use it on.


> In business, quickly I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people

I believe that one of the most underappreciated skills in business is the ability to string a coherent narrative together. I attend many meetings with extremely talented engineers who are incapable of presenting their arguments in a manner that others (both technical and non-technical) can follow. There is an artistry to writing and speaking that I am only now, in my late forties, beginning to truly appreciate. Language is a powerful tool; the choice of a single word can sometimes make or break an argument.

I don't see how LLMs can do anything but significantly worsen this situation overall.


> I believe that one of the most underappreciated skills in business is the ability to string a coherent narrative together. I attend many meetings with extremely talented engineers who are incapable of presenting their arguments in a manner that others (both technical and non-technical) can follow.

Yes, but the arguments they need to present are not necessarily the ones they used to convince themselves, or their own reasoning history that made them arrive at their proposal. Usually that is an overly boring graph search like "we could do X but that would require Y which has disadvantage Z that theoretically could be salvaged by W, but we've seen W fail in project Q and especially Y would make such a failure more likely due to reason T, so Y isn't viable and therefore X is not a good choice even if some people argue that Y isn't a strict requirement, but actually it is if we think in a timeline of several years and blabla" especially if the decision makers have no time and no understanding of what the words X, Y, Z, W, Q, T etc. truly mean. Especially if the true reason also involves some kind of unspeakable office politics like wanting to push the tools developed by a particular team as opposed to another or wanting to use some tech for CV reasons.

The narrative to be crafted has to be tailored for the point of view of the decision maker. How can you make your proposal look attractive relative to their incentives and their career goals? How will it make them look good and avoid risks of trouble or bad optics? Is it faster? Is it allowing them to use sexy buzzwords? Does it line up nicely with the corporate slogan this quarter? For these you have to understand their context as well. People rarely announce these things, and a clueless engineer can step on people's toes; those people will not squarely explain the real reason for their pushback, they will make up some nonsense, and the clueless guy will think the other person is just too dumb to follow the reasoning.

It's not simply about language use skills, as in wordsmithing; it's also about strategizing and putting yourself in other people's shoes, trying to understand social dynamics and how they interact with the detailed technical aspects.


To give a brief example of this -- a colleague asked why an exec had listened to my argument but not theirs recently, despite "saying the same thing". I explained that my argument contained actual impacts: actual delays, actual costs, an actual timeline when the impact would occur -- rather than a nebulous "there will be problems".

Everyone comes to execs with hypothetical problems that all sound like people dressing up minor issues -- unless you can give specific details, justifications, etc. they're not going to parse properly.

This would be one case where a person asking an LLM for help is not even aware of the information they lack about the person they're trying to talk to.

We could define expertise this way: the knowledge/skill you need to have to formulate problems (or questions) from a vague or unknown starting point.

Under that definition, it becomes clear why LLMs "in the large" pose problems.


I don't know. Predicting delays, costs and timelines is notoriously hard unless it's something you've done the exact same way many times already. For example in physical work, like installing external insulation on a building, a contractor can fairly easily predict the time required because they did similar buildings in the past several years, it's multiplying an area by a time average, and they know the delay caused by asking for some material by checking the shipping time on the website they order it from.

Developing software is very different and many nontechnical execs still refuse to understand it, so the clever engineers learn to make up numbers because that makes them look good.

Realistically, you simply come across as more competent and the exec compressed all that talk about the details into "this guy is quite serious about not recommending going this way - whatever their true reason and gut feel, it's probably okay to go their way, they are a good asset in the company, I trust that someone who can talk like this is able to steer things to success". And the other guy was "this guy seems to actively hide his true reasons, and is kind of vague and unconfident, perhaps lazy or a general debbie downer, I see no reason to defer to him."


I think there's an element to that -- I also said it's about trust and credibility. However in this case it was partly about helping the exec cognise the decision and be aware that he needs to make a decision, basically scaffolding the decision-making process for the exec.

It's kinda annoying for decision-makers to be presented with what sounds like venting. This is something I've done before, in much worse ways actually -- even venting on a first-introduction handshake meeting. But I've learned how to boil that down into decision-making.

I do find it very annoying, still, how people are generally unwilling to help you explore your thinking out-loud with you, and want to close it down to "what's the impact?" "what's the decision?" -- so I sympathise a lot with people unable to do this well.

I often need to air unformulated concerns and it's a PITA to have people say, "well there's no impact to that" etc.: yeah, that isn't how experts think. Experts need to figure out even how to formulate mental models of all possible impacts, not just the ones you care about.

This is a common source of frustration between people whose job is to build (mental, technical...) models and people whose job is to manage social systems.


I think nontechnical execs have a mental model of technical expertise, where there's some big rule book lookup table that you learned in college and allows you to make precise, quantified, authoritative statements about things.

But of course the buck has to stop somewhere. By being definitive, you as the expert also give ammo to the exec. Maybe they already wanted to go that certain way, and now they can point to you and your mumbo jumbo as the solid reasoning. Kind of how consultants are used.


Ooh there's the first positive thing to come out of this whole LLM thing. Can we replace overpaid consultants who contribute nothing with sweet whispers from an executive chatbot?

In light of this it's interesting that a lot of arguments for LLMs discuss them largely in terms of what they "can" do and not what the actual use is.

This entire thread of comments is all circling around but does not know how to articulate the omnipresent communication issues within tech, because the concept of effective communication is not taught in tech, not taught in the entire science, engineering, math and technology series. The only communications training people receive is how to sell, how to do lite presentations.

There absolutely is a great way to use LLMs when writing, but not to write! Have them critique what you wrote, but not write for you. Create a writing professor persona, create a writing critic, and make them offer Socratic advice where they draw you to make the connection; they don't think for you, but teach you.

There has been a massive disservice to the entire tech series of professions by ignoring the communications, interpersonal and group communication dynamics of technology development. It is not understood, and not respected. (Many developers will deny the utility of communication skills! They argue against being understood; "that is someone else's job".) Fact of the matter: a quality communicator leads, simply because no one else conveys understanding; without the skills they leave a wake of confusion and disgruntled staff. Competent communicators know how to write to inform, know how to debate to shared understanding, know how to defuse excited emotion, and know how to give bad news and be thanked for the insight.

Seriously, effective communications is a glaring hole in your tech stack.


I find LLMs extremely good for training such language skills by using the following process:

a) write a draft yourself.

b) ask the LLM to correct your draft and make it better.

c) newer LLMs will explicitly mention the things they corrected (otherwise ask them to be explicit about the changes)

d) walk through each of the changes and apply the ones you feel that make the text better

This helped me improve my writing skills drastically (in multiple languages) compared to the times when I didn't have access to LLMs.
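If you want to script steps b) through d), here's a minimal sketch of the idea, assuming the official OpenAI Python client (the model name and prompt wording below are just placeholders, not a recommendation):

    # Minimal sketch of steps b)-c): send a draft, ask for an improved version
    # plus an explicit list of every change made. Assumes the official OpenAI
    # Python client; model name and prompt wording are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def critique_draft(draft: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "You are an editor. Improve the draft, then list "
                            "every change you made and why. Do not add new content."},
                {"role": "user", "content": draft},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(critique_draft("Their going to announce the results tommorow."))

Step d) stays manual: read the listed changes and apply only the ones you agree with.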


Done this as well. But after the initial "wow!" moment, the "make it better" part became an "actually I don't like how you wrote it, it doesn't sound like me".

There is a thin line between enhancing and taking over, and IMO the current LLMs cross it most of the time.


Your mistake was having the AI rewrite at all, don't do that, that is exactly the problem with them - that is them thinking for you. Ask the AI how well you wrote it, make the AI a writing professor, make the AI adopt a Socratic attitude that does not do anything but draws you to make the connection yourself.

I use Grammarly sometimes to check my more serious texts, but there's a gotcha. If you allow all of its stylistic choices, your writing becomes very sterile and devoid of any soul.

Your word and structural choices add a flair of their own, make something truly yours and unique. Don't let the tool kill that.


It's all already there. When you converse with a junior engineer about their latest and greatest idea (over a chat platform), and they start giving you real-time responses which are a page long and structured into bullet points... it's not even that they are using ChatGPT to avoid thinking; the fact that they think either no one will notice, or that this is how grown-ups actually converse with each other, is terrifying.

I haven't encountered that (yet), but I can't think of a faster way to get me to stop paying attention to them. I'm more interested in their analysis, not the analysis of a machine I can just use myself directly.

On the one hand you're right. On the other hand, some - let's be honest - moron bends the ear of some clueless manager with a too-good-to-be-true plan... And suddenly you have to explain it to two clueless people. And I used "junior" in the sense of my own assessment - I've definitely talked to some people who had a "senior" title, but that was perhaps reflective more of their accumulated years of experience than actual knowledge... It's all so easy in their world - for the ChatGPT tells 'em so ;)

This is the worst thing ever. I have coworkers who literally cannot string three words together without making a grammar mistake, but recently they've been texting me with more-than-perfect grammar and a vast vocabulary.

These people are trying to fool everyone else into thinking they are smarter/more educated than they actually are. They aren't fooling me; I've seen their real writing, I know it's not actually their text and thoughts, and it really disgusts me.


I know exactly what you mean. I once tried - albeit very implicitly - hinting that I knew they were using a chatbot to generate answers, but unfortunately it did not help. Perhaps I should just have openly told them it not only insults the counterpart's intelligence, but that it is also disrespectful.

> another technology that society will inoculate itself against

I like the optimism. We haven't developed herd immunity to the 2010s social media technologies yet, but I'll take it.


The obesity rate is still rising in most areas worldwide. I'd argue we still haven't developed herd immunity to gas-powered automobiles, invented in the late 1800s.

> I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people

Now one like me might go and ask how much of communication is actually worthwhile? Sometimes I consider that there is a lot of communication that might not actually be. It is still done, but if no one actually reads it, why not automate the generation?

Not to say there isn't a significant amount of stuff you actually want to get right.


It's not about getting it right, it's about having thought about it. Authoring means thinking-thru, owning, etc.

There's a tremendous hollowing-out of our mental capacities caused by the computer-science framing of activities in terms of input->output, as if the point is to obtain the output "by any means".

It would not matter if the LLM gave exactly the same output as you would have written, and always did. Because you still have to act in the world with the thoughts that you would have needed when authoring it.


> It's not about getting it right, it's about having thought about it. Authoring means thinking-thru, owning, etc.

So much this.

At my current workplace, I was asked to write up a design doc for a software system. The contents of the document itself weren't very relevant, as the design deviated significantly based on constraints and feedback that could be discovered only after beginning the implementation, but it was the act of putting together that document, thinking about the various cases, etc., that led to the formation of a mental model that helped me work towards delivering that system.



Hey, this is actually good, and the whole site is as well. Thanks for sharing!

“Plans are worthless, planning is invaluable”


> It is still done, but if no one actually reads it, why not automate the generation?

There's a reason the real-estate industry has been able to go all-in on using AI to write property listings with almost no consumer pushback (except when those listings include hallucinated schools).

We're already used to treating them with skepticism, and nobody takes them at face value.


> In business, quickly I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people -- as authors of what they want to say.

But what fraction of communication is "worthwhile"?

I'm an academic, which, in theory, should be one of the jobs that requires the most thinking. And still, I find that over half of the writing I do is things like all sorts of reports, grant applications, ethics/data management applications, recommendation letters, bureaucratic forms, etc. Which I wouldn't class as "worthwhile" in the sense that they don't require useful thinking, and I don't care one iota whether the text sounds like me or not as long as I get the silly requirement done. For these purposes, LLMs are a godsend and probably actually help me think more, because I can devote more time to actual research and teaching, which I do in person.


Well if you want a rant about academia, I have many well prepared.

I think in the cases you describe the "thinking" was already purely performative, and what LLMs are doing is a kind of accelerationist project of undermining the performance by automating it.

I'm somewhat optimistic about this kind of self-destructive LLM use:

There are a few institutions where these purely performative pseudo-thinking processes exist, ones insensitive to the "existential feedback loops" which would otherwise burn them down. I'm hopeful LLMs become a wildfire of destruction in these institutions and that, absent external pressures, they return to actual thinking over the performative.


One of the effects on software development is: the fact that you submitted a PR with any LoC count doesn't mean that you did any work. You need to explain your solution and answer questions to prove that.

The next stage of this issue is: how do you explain something you didn't write?

The LLM-optimist view at the moment, which takes on board the need to review LLMs, assumes that this review capability will exist. I cannot review LLM output on areas outside of my expertise. I cannot develop the expertise I need if I use an LLM in-the-large.

I first encountered this issue about a year ago when using an LLM to prototype a programming language compiler (a field I knew quite well anyway) -- but realised that very large decisions about the language were being forced by the LLM's implementation.

Then, over the last three weeks, I've had to refresh my expertise in some areas of statistics and realised much of my note-taking with LLMs has completely undermined this process -- the effective actions have been, in following up, the traditional methods: reading books, watching lectures, taking notes. The LLM is only a small time saver, "in the small", once I'm an expert. It's absolutely disabling as a route back to expertise.


IMO we are likely in a golden era of coding LLM productivity, one in which the people using them are also experts. Once there are no coding experts left, will we still see better productivity?

Yea, and how are people going to learn when the answer is just a chat away? I know it would have been hard for me to learn programming if I knew I could just ask for the solution every time (no, Stack Overflow does not count, because most people don't ask a question for every single issue they encounter like with AI).

this was the same criticism against SO at the time too. people who want to learn and put in the effort will learn even faster with AI (asking, exploring the answers, etc.) and those who use it as a crutch will be left behind as always. we're just in the confusing stage.

Explanation that the smarty-pants and some management are already totally willing to outsource to an LLM as well...

I see it as more of a calibration, revolving around understanding what an AI is inherently not able to do – decide what YOU want – and stopping being weird about that. If you choose to stop being involved in a process and molding it, then your relationship to that process and the outcome will necessarily change. Why would we be surprised by that?

As soon as we stop treating AI like mind readers things will level out.


> This is a system for substituting thinking itself with non-thinking

One of my favorite developments on the internet in the past few years is the rise of the “I don’t think/won’t think/didn’t think” brag posts

On its own it would be a funny internet culture phenomenon but paired with the fact that you can’t confidently assume that anybody even wrote what you’re reading it is hilarious


> One of my favorite developments on the internet in the past few years is the rise of the “I don’t think/won’t think/didn’t think” brag posts

Sorry, I can't immediately think of what you're talking about. Could you link to an example so I can get a feel for it?


Every time you have ever seen somebody weigh in on an even slightly complex topic with a post that starts with “I asked chatgpt and”

That just signals to me to completely ignore the rest of what they wrote.

Thank fuck I'm not the only person who finds this completely unhelpful at best, and outright disrespectful of time at worst.

Oh god, I find it maddening! Got it.

> Authoring is two-thirds of the point of most communication.

Not when there's money to be made.


It’s been my experience that most people’s opinion of AI is inversely proportional to how long they have been using it.

Using AI is kind of like having a Monica Closet. You just push all the stuff you don’t know to the side until it’s out of view. You then think everything is clean, and can fool yourself into thinking so for a while.

But then you need to find something in that closet and just weep for days.


What you say might be true for the current crop of LLMs. But it's rather unlikely their progress will stop here.

Well, why would the "progress" continue? Most stats I've seen seem to point to diminishing returns for scale of models.

> This is a system for substituting thinking itself with non-thinking

I haven’t personally felt this to be the case. It feels more like going from thinking about nitty gritty details to thinking more like the manager of unreasoning savants. I still do a lot of thinking— about organization, phrasing (of the code), and architecture. Conversations with AI agents help me tease out my thinking, but they aren’t a substitute for actual thought.


> against the credulous lovers of "mediocrity" (cf. https://fly.io/blog/youre-all-nuts/)

I read that article when it was posted on HN, and it's full of bad faith interpretations of the various objections to using LLM-assisted coding.

Given that the article comes from a person whose expertise and viewpoints I respected, I had to run it through a friend; who suggested a more cynical interpretation that the article might have been written to serve his selfish interests. Given the number of bugs that LLMs often put in, it's not difficult to see why a skilled security researcher might be willing to encourage people to generate code in ways that lead to cognitive atrophy, and therefore increase his business through security audits.


More charitably, it's a person yet to feel the disabling phase of using an LLM.

If he's a security researcher, then I'd imagine much of his LLM use is outside his area of expertise. He's probably not using it to replace his security research.

I think the revulsion to LLMs by experts comes during that phase when it's clearly mentally disabling you.


Now I'm a fairly cynical person by trade but that feels like it's straying into conspiracy theory territory.

And of course the key point is that the author of that article isn't (IMO) working in the security research field any more, they work at fly.io on the security of that platform.


Less conspiracy, more negligence: “it is difficult to get a man to understand something when his salary depends upon his not understanding it”

Sad reality is that most people are not smart. They’re not creative, original, or profound. Think back to all the empty and pointless convos you had prior to AI or the web.

I don't see it as sad, it's perfectly fine to be mediocre. You can have a full, rich life without being or doing anything extraordinary. I am mediocre and most of the people I know are mediocre - at least mediocre in the sense that there will be no Wikipedia page under my name.

I strongly disagree with this idea.

If you evaluate a fish by asking it to climb a tree, it'll look dumb.

If you evaluate a cat by asking it to navigate an ocean to find its birthplace, it'll look dumb, too.


Shallow take. LLMs are like food for thought -- the right use in the right amounts is empowering, but too much (or uncritical use) and you get fat and lazy, metaphorically speaking.

You wouldn't go around crusading against food because you're obese.

Another neat analogy is to children who are too dependent on their parents. Parents are great and definitely help a child learn and grow but children who rely on their parents for everything rather than trying to explore their limits end up being weak humans.


> You wouldn't go around crusading against food because you're obese.

Most eateries I step into are met with revulsion at the temples to sugary carbohydrates they've become.

> about 40.3% of US adults aged 20 and older were obese between 2021 and 2023

Pray your analogy to food does not hold, or else we're on track for 40% of Americans acquiring mental disabilities.


Oh, we for sure are, because much like America's social structure pushes people to obesity with overwork and constant stress, that same social structure will push people to use AI blindly to keep up with brutal quotas set by their employers.

I'm quite strongly anti-AI, but surely the sudden shift to pushing agentic AI means less direct interaction with the models required?

From what I see from the breathless hype, treating it like a member of the team is what they want instead of it just being a conversational UX for contextual queries.


Shallow take.

Your analogies only work if you don't take into account that there are different degrees of utility/quality/usefulness of the product.

People absolutely crusade against dangerous food, or even just food that has no nutritional benefit.

The parent analogy also only holds up on your happy path.


You've just framed your own argument. In order to be intellectually consistent, you can't crusade against AI in general, but rather bad uses of AI, which (even as an AI supporter) is all I've asked anti-AI folks to do all along.

Computing and AI should be used to do what is humanly impossible, like calculating all the possible paths a hurricane could follow or sequencing the human genome. That would be healthy, cooked food (that wasn't possible until we harnessed fire). If it's just making you lazy and taking away the important part of the thought process, it's junk food. And yeah, LLMs unleashed on the bigger population are totally going to make us overall lazier. They will act as "rocket fuel" for the ones that already want to learn and improve, and will tank all the rest, the "normal" people.

I'm aware of my own perspective; I don't generally crusade against whatever flavour of machine learning is being pushed currently.

I was just pointing out that arguing against crusading by using an argument (or analogies) that leaves out half of the salient context could be considered disingenuous.

The difference between:

You're using it incorrectly

vs

Of the ones that are fit for a particular purpose, they can work well if used correctly.

Perhaps I'm just nitpicking.


I wouldn't call it "accumulation of cognitive debt"; just call it cognitive decline, or loss of cognitive skills.

And also DUH. If you stop speaking a language you forget it. The brain does not retain information that it does not need. Anybody remember the couple studies on the use of google maps for navigation? One was "Habitual use of GPS negatively impacts spatial memory during self-guided navigation"; another reported a reduction in gray matter among maps users.

Moreover, anyone who has developed expertise in a science field knows that coming to understand something requires pondering it, exploring how each idea relates to other things, etc. You can't just skim a math textbook and know all the math. You have to stop and think. IMO it is the act of thinking which establishes the objects in our mind such that they can be useful to our thinking later on.


> You can't just skim a math textbook and know all the math. You have to stop and think.

And most importantly you have to write. A lot. Writing allows our brain to structure our thinking. Enables us to have a structured dialogue with ourselves. Explore different paths. Thinking and pondering can only do so much and will soon reach their limits. Writing, on the other hand, enables one to explore thoughts nearly endlessly.

Given that thinking is so intimately associated with writing (could be prose, drawing, equations, graphs/charts, whatever) and that LLMs are doing more and more of writing it'll be interesting to see the effect of LLMs on our cognitive skills.


The impact of writing is immensely undervalued. Even writing with a keyboard or screen is a lot more than not writing. Exercising writing on any topic is still beneficial, and you can find many psychologists who recommend keeping a daily blog of some sort to help people observe themselves from the outside. The same goes for speaking, public speech if you want, and therapeutic daily acting-playing, which is also overlooked.

I’d love to see some sort of study on people who actively participate by writing their stuff on social media and those who don’t.

If you want to spare your mind from GPT numbness - write or copy what it tells you to do by hand; do not abandon this process.

Or just write code, programs, essays, poems for fun. Trust me - it is fun, and you’ll get smarter and more confident. GPT is a very dangerous convenience gadget; it is not going away, like sugar or Netflix, or obesity or long commutes … but similarly, dosage and countermeasures are essential to cope with the side-effects.


The only writing I've ever used ChatGPT for is writing I openly don't give a shit about, and even then I constantly find myself prompting it to write less because holy shit do LLMs love to go on and on and on.

Like not only do I cosign all said above, but I will also add to this: brevity is the soul of wit and none of these fucking things are brief. No matter what you ask for you end up getting just paragraphs of shit to communicate even basic ideas. It's hard to not think this tool was designed from go to automate high school book reports.

I would only use these programs to either create these overly long, meandering stupid emails, or to digest ones similarly sent to me, and make a mental note to reduce my interactions with this person.

It's no wonder the MBA class is fucking thrilled with it though, since the vast majority of their jobs seem to revolve around producing and consuming huge reports containing vacuously little.


Not all humans are brief, and not all situations are amenable to brevity, but I get the point, as brevity can be exceptionally informationally dense. But, like in humor (or sports), it only works if someone else plays the straight guy or set-up artist. Also, true masters will switch it up: happy to join in general blather, and then drop a subtle, brief comment that is the bridge piece for an otherwise huge informational set. Another thing many performers and writers describe is the finding of the voice, or stage/writing persona... perhaps quite different from the one that they inhabit at home. The topic at hand leaves out the trap of standing behind a persona that the person can't then inhabit, and then being caught out in a real-world situation as an impostor, ha!

Similarly, the impact of white-boarding-type activities is undervalued. When discussing problems with a viewpoint, a quick whiteboard usually gets at some easy-to-find underlying issues that others can understand, rather than it devolving into positional framings.

> And most importantly you have to write. A lot. Writing allows our brain to structure our thinking.

There's a lot of talk about AI assisted coding these days, but I've found similar issues where I'm unable to form a mental model of the program when I rely too much on them (amongst other issues where the model will make unnecessary changes, etc.). This is one of the reasons why I limit their use to "boring" tasks like refactoring or clarifying concepts that I'm unsure about.

> it'll be interesting to see the effect of LLMs on our cognitive skills.

These discussions remind me a lot about this comic[1].

[1] https://www.monkeyuser.com/2023/deprecated/


> And most importantly you have to write. A lot. Writing allows our brain to structure our thinking. Enables us to have a structured dialogue with ourselves.

I feel like it goes beyond writing to really any form of expressing this knowledge to others. As a grad student, I was a teaching assistant for an Electrical Engineering class I had failed as an undergrad. The depth of understanding I developed for the material over the course of supporting students in the class was amazing. I transitioned from "knowing" the material and equations to being able to generate them all from first principles.

Regardless, I fully agree that using LLMs as our form of expression will weaken both the ability to express ourselves AND the ability to develop deep understanding of topics as LLMs "think" for us too.


Writing is pure magic. It allows so much reflection and so many insights that you wouldn't otherwise get. And writing as part of the reading process allows you to directly integrate what you are reading as you are doing it. Can't recommend it enough. The only downside is that it's slow, compared to what people are used to and want to do, especially in the work environment.

I disagree with this take. When exploring new math problems, it's often possible to explore the candidate solution paths at lower technical levels first, in your mind, before writing anything down -- that is, before actually going into the details of an approach. I don't think not writing is that limiting if all of your approaches already fail before going into details, which is often the case in the early stages of math research.

I can also explore by writing. Writing drafts can help structure my thinking.

"The pen is an instrument of discovery rather than just a recording implement." ~ Billy Collins

> And most importantly you have to write. A lot.

I find this to still be true with AI assisted coding. Especially when I still have to build a map of the domain.


They made a documentary about this actually. You can probably find it on Netflix or something. It's called Idiocracy.

> And most importantly you have to write. A lot. Writing allows our brain to structure our thinking.

Not to be pedantic, but I’d still argue that thinking is the most important. At least when understanding the nature of learning. I mean, writing is ultimately great because it facilitates high quality thinking. You essentially say this yourself.

Overall, I think it’s more helpful to understand the learning process as promoting high quality thinking (encoding if you want to be technical). This sort of explains why teaching others, argumentation, mind-mapping, good note-taking, and other activities and techniques are great for learning as well.


Prompting involves more than an insignificant amount of writing.

But it is not at all the same _type_ of writing. Most of the prompts I've seen and written are shorter, less organized, and most importantly not actually considered a piece of writing. When you are writing a prompt, you are considering how the machine will "interpret" it and what it will spit back; you're not constructing an argument. Vagueness or dialectics in a prompt will often just confuse the machine.

Hitting the keys is not always writing.


Prompting is prewriting — which is very important and often neglected. With it, you are:

* Describing the purpose of the writing

* Defining the format of the writing

* Articulating the context

You are writing to figure out what you want.


I would call it cognitive debt. Have you ever tried writing a large report with an LLM?

It's very tempting to let it write a lot, let it structure things, let it make arguments and visuals. It's easy to let it do more and more... And then you end up with something that is very much... Not yours.

But your name is on it, you are asked to explain it, to understand it even better than it is written down. Surely the report is just a "2D projection" of some "high dimensional reality" that you have in your head... right? Normally it is, but when you spit out a report in 1/10th of the time, it isn't. You struggle to explain concepts, even though they look nice on paper.

I found that I just really have to do the work, to develop the mental models, to articulate and to re-articulate and re-articulate again. For different audiences in different ways.

I like the term cognitive debt as a description of the gap between what mental models one would have to develop pre-LLMs to get a report out, and how little you may need with an LLM.

In the end it is your name on that report/paper, what can we expect of you, the author? Maybe that will start slipping and we start expecting less over time? Maybe we can start skipping authors altogether and rely on the LLM's "mental" model when we have in depth questions about a report/paper... Who knows. But different models (like LLMs) may have different "models" (predictive algorithms) of underlying truth/reality. What allows for most accurate predictions? One needs a certain "depth of understanding". Writing while relying too much on LLMs will not give it to you.

Over time indeed this may lead to a population-wide "cognitive decline, or loss of cognitive skills." I don't dare to say that. Book printing didn't do that, although it was expected at the time by the religious elite, who worried that normal humans would not be able to interpret texts correctly.

As remarked here in this thread before, I really do think that "Writing is thinking" (but perhaps there is something better than writing which we haven't invented yet). And thinking is: Developing a detailed mental model that allows you to predict the future with a probability better than chance. Our survival depends on it, in fact it is what evolution is in terms of information theory [0]. "Nothing in biology makes sense except in the light of ... information."

[0] https://www.youtube.com/watch?v=4PCHelnFKGc


> I found that I just really have to do the work, to develop the mental models, to articulate and to re-articulate and re-articulate again. For different audiences in different ways.

Yes definitely!

I'd say that being able to turn an idea over in your head is how you know if you know it ... And even pre-LLM, it was easy to "appear to know" something, but not really know it.

PG wrote pretty much this last year:

> in a couple decades there won't be many people who can write.

> So a world divided into writes and write-nots is more dangerous than it sounds. It will be a world of thinks and think-nots.

https://paulgraham.com/writes.html


> The brain does not retain information that it does not need.

Why do I still know how to optimize free conventional memory in DOS by configuring config.sys and autoexec.bat?

I haven’t done this in 2 decades and I’m reasonably sure I never again will


Probably because you learned it during that brief period in your development in which humans are most impressionable.

Now think about the effect on those humans currently using LLMs at that stage of their development.


Because these are core memories that provide stepping stones to later knowledge. It is a part of the story of you. It is very hard to integrate all knowledge in this way.

The last fast food place you went to, what does the ceiling look like? The exact colour/pattern?

The last phone conversation you had with a utility company, how did they greet you exactly?

There's lots that we do remember, sometimes odd things like your example, though I'm sure you must have repeated it a few times as well. But there's so much detail that we don't remember at all, and even our childhood memories just become memories of memories - we remember some event, but we slowly forget the exact details, they become fuzzy.


I think because some experiences are so profound to your brain ( first impression, moments that you are proud of ) that you just replay them over and over again.

I also think the claim that "the brain does not retain information it does not need" is an insufficient explanation, and short-sighted. As an example, reading books informs and shapes our thinking, and while people may not immediately recall a book that they read some time ago, I've had conversations where I remembered that I had read a particular passage (sentence, phrase, idea) and referred to it in the conversation.

People do stuff like that all the time, bringing up past memories in spontaneity. The brain absolutely does remember things it "doesn't need".


To nitpick, your subconscious is aware computers have memory constraints even now and you write better code because of it even if you do javascript...

Probably because there was some reward that you felt at the time was important (most likely playing a DOS game).

I did this for a living at a large corp where I was the 'thinkpad guy', and I barely remember any of the tricks (and only some of the IBM stuff). Then Windows NT and 95 came out and like whoo cares... This was always dogshit. Because I was always an Apple/Unix guy and that was just a job.


Agreed. We remember many things that don't serve us anymore.

memmaker - a cheat, but it is still in my quick-access memory.

The terms “Cognitive decline” or “brain rot” may have sounded too sensational, and to be fair the authors note the limitations of the small sample size.

Indeed the paper doesn’t provide a reference or citation for the term “cognitive debt” so it is a strange title. Maybe a last minute swap.

Fascinating research out of MIT. Like all psychology studies it deserves healthy scrutiny and independent verification. Bit of a kitchen sink with the imaging and psychometric assessments, but who doesn’t love a picture of “this is your brain on LLMs” amirite?


> The brain does not retain information that it does not need.

Sounds very plausible, though how does that square with the common experience that certain skills, famously 'riding a bike', never go away once learned?


Closer to the truth is that the brain never completely forgets something, in the sense that there are always vestiges left over, even after the ability to recall or instantly draw upon it is long gone. Studies show, for example, that after one has "forgotten" a language, they're quicker to pick it up again later on compared to someone without that prior experience; how quickly is time dependent, but more quickly nonetheless.

OTOH, IME the quickest way to truly forget something is to overwrite it. Photographs being a notorious example, where looking at photographs can overwrite your own personal episodic memory of an event. I don't know how much research exists exploring this phenomenon, though, but AFAIU there are studies at least showing that the mere act of recalling can reshape memories. So, ironically, perhaps the best way not to forget is to not remember.

Left unstated in the above is that we can categorize different types of memory--episodic, semantic, implicit, etc--based on how they seem to operate. Generalizations (like the above ;) can be misleading.


> Sounds very plausible, though how does that square with the common experience that certain skills, famously 'riding a bike', never go away once learned?

I worked with some researchers who specifically examined this when developing training content for soldiers. They found that 'muscle memory' skills such as riding a bike could persist for a very long time. At the other end of the spectrum were tasks that involved performing lots of technical steps in a particular order, but where the tasks themselves were only performed infrequently. The classic example was fault finding and diagnosis on military equipment. The researchers were in effect quantifying the 'forgetting curve' for specific tasks. For some key tasks, you could overtrain to improve the competence retention, but it was often easier to accept that training would wear off very quickly and give people a checklist instead.


Very interesting! Thanks for bringing this up.

I think a better way to say it is that the brain doesn't commit to long term memory things that it doesn't need.

I remember hearing about some research they'd done on "binge watching" -- basically, if you have two groups:

1. One group watches the entire series over the course of a week

2. A second group watches a series one episode per week

Then some time later (maybe 6 months), ask them questions about the show, and the people in group 2 will remember significantly more.

Anecdotally, I've found the same thing with Scottish Country Dancing. In SCD, you typically walk through a dance that has 16 or so "figures", then for the next 10 minutes you need to remember the figures over and over again from different perspectives (as 1st couple, 2nd couple, 3rd couple etc). Fairly quickly, my brain realized that it only needed to remember the figures for 10 minutes; and even the next morning if you'd asked me what the figures were for a dance the night before I couldn't have told you.

I can totally believe it's the same thing with writing with an LLM (or having an assistant write a speech / report for you) -- if you're just skimming over things to make sure it looks right, your brain quickly figures out that it doesn't need to retain this information.

Contrast this to riding a bike, where you almost certainly used the skill repeatedly over the course of at least a year.


Riding a bike is a skill rather than what we would call a “memory” per se. It’s a skill that develops a new neural pathway throughout your extended nervous system bringing together the lesser senses of proprioception and balance. Once you bring these things together you then go on to use them for other things. You “know” (grok), rather than “understand” how a bike stays upright on a very deep physical level.

Sure. But speaking a language is also (at least partially) a skill, ain't it?

It is. It’s also something you don’t forget except in extreme cases like dementia. Skills are different from facts but we use the word memory interchangeably for each. It’s this nuance of language that causes a category error in your reasoning ain’t it.

Such a good question - I hope someone answers with more than an anecdote (which is all I can provide) - I've found the skills that don't leave you like riding a bike, swimming, cooking are all physical skills. Tangible.

The skills that leave: arguments, analysis, language, creativity often seem abstract and primarily if not exclusively sourced in our minds


Google "procedural memory". Procedural memory is more resistant to forgetting than other types of memory.

I guess speaking a language employs some mixture of procedural and other types of memory?

I am not an expert in the subject but I believe that motor neurons retain memory, even those not located inside the brain. They may be subject to different constraints than other neurons.

> And also DUH. If you stop speaking a language you forget it. The brain does not retain information that it does not need.

Except when it does-- for example in the abstract where it is written that Brain-to-LLM users "exhibited higher memory recall" than LLM and LLM-to-Brain users.


> You can't just skim a math textbook and know all the math.

Curious, did anyone try to learn a subject by predicting the next token, and how did it go?


Most of math is just that. Try to predict the next change and see if it sticks.

The discussion here about "cognitive debt" is spot on, but I fear it might be too conservative. We're not just talking about forgetting a skill like a language or losing spatial memory from using GPS. We're talking about the systematic, irreversible atrophy of the neural pathways responsible for integrated reasoning.

The core danger isn't the "debt" itself, which implies it can be repaid through practice. The real danger is crossing a "cognitive tipping point". This is the threshold where so much executive function, synthesis, and argumentation has been offloaded to an external system (like an LLM) that the biological brain, in its ruthless efficiency, not only prunes the unused connections but loses the meta-ability to rebuild them.

Our biological wetware is a use-it-or-lose-it system without version control. When a complex cognitive function atrophies, the "source code" is corrupted. There's no git revert for a collapsed neural network that once supported deep, structured thought.

This HN thread is focused on essay writing. But scale this up. We are running a massive, uncontrolled experiment in outsourcing our collective cognition. The long-term outcome isn't just a society of people who are less skilled, but a society of people who are structurally incapable of the kind of thinking that built our world.

So the question isn't just "how do we avoid cognitive debt?". The real, terrifying question is: "What kind of container do we need for our minds when the biological one proves to be so ruthlessly, and perhaps irreversibly, self-optimizing for laziness?"

https://github.com/dmf-archive/dmf-archive.github.io


It's up to everyone to decide what to use LLMs for. For high-friction / low-throughput tasks (eg, online research using inferior search tools), I find text models to be great. To ask about what you don't know, to skip the 'tedious part' (I don't feel that looking for answers, especially troubleshooting arcane technical issues among pages of forums or social media, makes me smarter in any way whatsoever, especially since the information usually needs to be verified and taken with a grain of salt).

StackExchange, the way it was meant to be initially, would be way more valuable than text models. But in reality people are imperfect and carry all sorts of cognitive biases and baggage, while an LLM won't close your question as 'too broad' right after it gets upvotes and user interaction.

On the other hand, I still find LLM writing on subjects familiar to me vastly inferior. Whenever I try to write, say, an email with its help, I end up spending just as much time either editing the prompt to keep it on track, or rewriting it significantly after. I'd rather write it on my own with my own flow than proofread/peer review a text model.


> To ask about what you don't know, to skip the 'tedious part' (I don't feel that looking for answers, especially troubleshooting arcane technical issues among pages of forums or social media, makes me smarter in any way whatsoever, especially since the information usually needs to be verified and taken with a grain of salt).

quoting the article:

Perhaps one of the more concerning findings is that participants in the LLM-to-Brain group repeatedly focused on a narrower set of ideas, as evidenced by n-gram analysis (see topics COURAGE, FORETHOUGHT, and PERFECT in Figures 82, 83, and 85, respectively) and supported by interview responses. This repetition suggests that many participants may not have engaged deeply with the topics or critically examined the material provided by the LLM.

When individuals fail to critically engage with a subject, their writing might become biased and superficial. This pattern reflects the accumulation of cognitive debt, a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive processes required for independent thinking.

Cognitive debt defers mental effort in the short term but results in long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, decreased creativity. When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalizing shallow or biased perspectives.


AI is the anti-Zettelkasten.

Rather than getting ever deeper insight into a subject matter by actively working on it, you iterate fast but shallow over a corpus of AI generated content.

Example: I wanted to understand the situation in the Middle East better, so I wrote a 10-page essay on the genesis of Hamas and Hezbollah using OpenAI as a co-writer.

I remember nothing; worse, of the things I remember I don't know which were hallucinations I fixed and which were actual facts.


Most intelligent people are aware of the fact that writing is about thinking as much as it is about getting the written text.

LLMs can be great sparring partners for this, if you don't use them as a tool that writes for you, but as a tool that finds mistakes, points out gaps and errors (which you may or may not ignore), and helps in researching general questions about the world around you (always with caution and sources).


Exactly! Never ever ever have AI write for you. Ask it to critique what you wrote, ask it to pick your arguments apart. Then use your mind to fix what it pointed out. If you cannot figure out how, ask the AI to explain how. Then take a break, 20 minutes is fine, and then return and fix the issue yourself using your own mind to write without assistance. This is how one uses AI to learn.

The problem with this strategy is that, beyond catching logical fallacies you commit, you cannot trust the AI critic. Why? It might cite non existing diverging opinions, misuse sources or introduce subtle changes in a citation.

> It might cite non existing diverging opinions, misuse sources or introduce subtle changes in a citation.

Just like a person could, which is why one validates. AI is not one's sole information. That's dangerous, to say the least. It also helps to stay within one's formal education, and/or experience, and stay within logical boundaries one can track themselves. It is really all about understanding what you are doing, committing to run without you.


I mean the thing is, AI is a stochastic parrot. But so is grabbing a random book from a library in a certain sense. You will always have to think about it yourself.

But that means considering LLMs as a thinking tool rather than a tool that does work for you is worth it.


The only issue I see with it is that you can still become very dependent. Nothing will feel right unless AI reviews it.

I'm on the optimistic side with how useful LLMs are, but I have to agree. You cultivate the instinct for how to steer the models and reduce hallucinations, but you're not building articulable knowledge or engaging in challenging thinking. It's more learning muscle-memory reactions to certain forms of LLM output that lean you towards trusting the output more, trying another prompting strategy, clearing context or not, and so on.

To the extent we can call it skill, it's probably going to be made redundant in a few years as the models get better. It gives me a kind of listlessness that assembly line workers would feel.


Maybe, much like we invented gyms to exercise after civilization made most physical labor redundant (at least in developed countries), we will see a rise of 'creative writing gyms' of some sort in the future.

I like this outlook a lot. I suppose I've met a lot of people that do creative writing recreationally and also socially in clubs, writing not just poetry but also things like adventures for roleplaying games like D&D.

I wonder what the commercialized form of a "gym but for your brain" would look like and if it would take off and if it would be more structured than... uh... schools? Wait, wouldn't this just be like a college except the students are there because they want to be, and not for vocational reasons?


You tend to remember trouble more than things going smoothly, so I'd say you remember the parts you had to fix manually.

Interesting perspective to see AI as the opposite of accessing connected knowledge (aka Zettelkasten)

"...the LLM group's participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring."

That's not surprising but also bleak.


Appears to align with good old Ironies of Automation [1]. If humans just review and rubber stamp results, they do a pretty terrible job at it.

I've been thinking for a while now that in order to truly make augmented workflows work, the mode of engagement is central. Reviewing LLM code? Bah. Having an LLM watch over my changes and give feedback? Different story. It's probably gonna be difficult and not particularly popular, but if we don't stay in the driver's seat somehow, I guess things will get pretty bleak.

[1]: https://en.m.wikipedia.org/wiki/Ironies_of_Automation


Didn't realise the pedigree of the idea went back to 1983.

I read about this in the book "Our Robots, Ourselves", which talked about airline pilots' experience with auto-land systems introduced in the late 1990s/early 2000s.

As you'd expect after having read Ironies of Automation, after a few near misses and actual incidents, auto-land is not used any more. Instead, pilot augmentation with head-up displays is used.

What is the programming equivalent of a head-up display?


Certainly a relatively tight feedback loop, but not too tight. Syntax errors are very tight, but non-negotiable: fix it now.

Test failures are more explicit, you run tests when you want to and deal with the results.

Code review often has a horrible feedback loop - often days after you last thought about it. I think LLMs can help tighten this. But it can't be clippy, it can't interrupt you with things that _may_ be problems. You have to be able to stay in the flow.

For most things that make programmers faster, I think deterministic tooling is absolutely key, so you can trust it rather blindly. I think LLMs _can_ be really useful for helping you understand what you changed and why, and what you may have missed.

Just some random ideas. LLMs are amazing. Incorporating them well is amazingly difficult. What tooling we have now (agentic and all that) feels like early tech demos to me.


>What is the programming equivalent of a head-up display?

Syntax highlighting, Intellisense, and the millions of other little features built into modern editors.


We should be able to do a lot more than that. I for one would love to have UML as the basis for system design and architecture, have "pseudo-code repositories" that can be used as a "pattern book", and leave that as the context for LLM-based code generation tools. We could then define a bunch of constraints (maximum cyclomatic complexity, strict type checking, acceptance tests that must pass, removal of dead code) to reduce the chances of the LLM running rampant and hallucinating.

This way I'd still be forced to think about the system, without having to waste time with the tedious part of writing code, fixing typos, etc.

Bonus point: this could become a two-way system between different programming languages, with UML as the intermediate representation, which would make it a lot easier to port applications to different languages and would eliminate concerns about premature optimizations. People could still experiment with new ideas in languages that are more accessible (Python/Javascript) and later on port them to more performant systems (Rust/D/C/C++).
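
A minimal sketch of that "constraint gate" idea in Python - the complexity budget, file names, and pytest invocation are illustrative assumptions, not an existing tool:

    import ast
    import subprocess
    import sys

    MAX_COMPLEXITY = 10  # hypothetical per-function budget

    def cyclomatic_complexity(func: ast.FunctionDef) -> int:
        # rough McCabe-style count: 1 + number of branching nodes
        branches = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
        return 1 + sum(isinstance(node, branches) for node in ast.walk(func))

    def gate(generated_path: str, test_path: str) -> bool:
        tree = ast.parse(open(generated_path).read())  # rejects output that doesn't even parse
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef) and cyclomatic_complexity(node) > MAX_COMPLEXITY:
                print(f"reject: {node.name} is too complex")
                return False
        # acceptance tests as the final arbiter
        return subprocess.run([sys.executable, "-m", "pytest", test_path]).returncode == 0

    # regenerate (or escalate to a human) until the gate passes
    if not gate("generated_module.py", "tests/test_acceptance.py"):
        print("regenerate")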


> We must negate the machines-that-think. Humans must set their own guidelines. This is not something machines can do. Reasoning depends upon programming, not on hardware, and we are the ultimate program! Our Jihad is a "dump program." We dump the things which destroy us as humans!

https://dune.fandom.com/wiki/Butlerian_Jihad


The results are not surprising to me personally. When I have used AI to help with my own writing and translation tasks, I do not feel as mentally engaged with the writing or translation process as I would be if I were doing it all on my own.

But I have found that using AI in other ways to be incredibly mentally engaging in its own way. For the past two weeks, I’ve been experimenting with Claude Code to see how well it can fully automate the brainstorming, researching, and writing of essays and research papers. I have been as deeply engaged with the process as I have ever been with writing or translating by myself. But the engagement is of a different form.

The results of my experiments, by the way, are pretty good so far. That is, the output essays and papers are often interesting for me to read even though I know an AI agent wrote them. And, no, I do not plan to publish them or share them.


I use AI tools for amusement and asking random questions, but for actual work, I basically don't use them at all. I wonder if I'll be part of the increasingly rare group who is actually able to do anything while the rest become progressively more incompetent.

My nickel - we are in the early stages of being given something like the famed "bicycle for the mind", an exoskeleton for the brain. At first when someone gives you a mech, you're like "woah, cool", let's see what it can do. And then you zip around, smash rocks, buildings, go try to lift the Eiffel.

After a while you get bored of it (duh), and go back to doing what you usually do, utilizing the "bicycle" for the kind of stuff you actually like doing, if it's needed, because while exploration is fun, work is deeply personal and meaningful and does not sustain too much exploration for too long.

(highly personal perspective)


“Bicycle for the mind” analogy is actually really good here, since bicycles and other transportation technology have made us increasingly weak, which has a negative impact on physical health. It has now reached such a critical point that people are taking seriously the fact that we need physical exercise to be in good health. My company recently introduced 60 minutes a week of activity during work hours. It’s probably a good investment, since physical health affects performance and mental health.

Coming back to AI, maybe in the future we will need to explicitly take mental exercise as seriously as we do physical exercise now. Perhaps people will go to mental gyms. (That’s just a school, you may say, but I think the focus could be different: not having a goal to complete a class and then finish, but continuous mental exercise...)


> bicycles ... have made us increasingly weak

This is pretty difficult for me to buy. Cycling has been shown time & again to be a great way to increase fitness.


> Cycling has been shown time & again to be a great way to increase fitness.

Compared to sitting on your butt in a car or public transport.

Perhaps not compared to walking everywhere and chasing the antelope you want to cook for lunch.

I think what he meant is that both bicycles and LLMs are a force multiplier and you still provide the core of the work, but not all of the work any more.


Cycling, in my experience, is usually way more intense than walking or even running/jogging. It just lets you cover larger distance and gives you more control over how your energy is used.

With the example of LLMs, sure, you could cycle to the initial destination you were meant to walk to - write an article with its help, save a few hours, and call it a day. Or you could cycle further and use the saved time to work on something a text model can't help you well with.


If we're nitpicking, are we talking about cycling as a sport or cycling as a means for getting from point A to point B?

I'm sure cultures where they cycle to everywhere all the time take it easier than cultures where going out for a bike ride is an event.


Just like with LLMs, cycling will allow you to expand your destinations significantly, so comparison with walking is irrelevant.

Not nitpicking, just playing along with the analogy, which I found not that far-fetched.

I once had blood clots in my legs. During the worst parts of it I couldn't walk more than ten metres, but cycling down the street was still easier than walking. It's better than sitting on your butt for hours on end, sure.

If you use an exoskeleton for walking, you will eventually have muscle wasting and, depending on the type of exoskeleton, degradation of the neural pathways that you need for walking.

One slightly unexpected side effect of using AI to do most of my coding now is that I find myself a lot less tired and can focus for longer periods. It's enabled me to get work done while faced with other distractions. Essentially, offloading some mental capacity to AI frees up capacity elsewhere.

I find the opposite to be true. I am a lot more productive, so I work on more things in parallel, which makes me extremely tired by the end of the day, as if my brain worked at 100% capacity..

Yeah I do feel the pressure to run multiple instances of Claude Code now. Haven't really managed to find a good workflow, I find I just get too distracted swapping between tasks and then probably end up working slower than if I had just stayed in one IDE instance

Codex is the perfect workflow for me: instead of swapping, just accept / reject cls / refine tasks

Yeah, and after a few days of this, I find I can't do anything and stop all the side projects for a few days until I'm recharged again and can get back to it.

I bet we will all need a new type of therapy for that at some point in the future.

On one hand, I've found that it reduces acute fatigue, but on the other I've found there's also an inflection point where it can encourage more fatigue over longer time horizons if you're not careful.

In the past, something like an unexpected error or having to look at some docs would act like a "speed bump" and let me breathe, and typically from there I'd acknowledge how tired I am, and stop for the moment.

With AI those speed bumps still exist, but there's sometimes just a bit of extra momentum that keeps me from slowing down enough to have that moment of reflection on how exhausted I am.

And the AI doesn't even have to be right for that to happen: sometimes just reading a suggestion that's specific to the current situation can trigger your own train of thought that's hard to rein back in.


I like to think of AI as cars:

You can go to the Walmart outside town on foot. And carry your stuff back. But it is much faster - and less exhausting - to use the car. Which means you can spend more quality time on things you enjoy.


There are detriments to this as well.

Exercise is good.

Being outside is good.

New experiences happen when you're on foot.

You see more things on foot.

Etc etc. We make our lives way too efficient and we atrophy basic skills. There are benefits to doing things manually. Hustle culture is quite bad for us.

Going by foot or bicycle is so healthy for us for a myriad of reasons.


While this is absolutely true, “walking to Walmart” is a terrible example due to the lack of pedestrian infrastructure and distances involved :)

It is actually a great example. If the only way to go to Walmart is by car, it’s a sign of bigger problems with urban planning. The car is a hotfix that becomes an environmental and public health problem. We shouldn’t try to get rid of cars or individual homes, but there exist plenty of other, healthier ways of living (and the most livable cities in the world focus on them, not on car-powered suburbia). Same with AI: if it becomes the only way for people to achieve results, it may point not to complexity necessitating the use of an advanced tool, but to problems with education, developer experience, etc. AI becomes a hotfix for things that could be fixed with more traditional approaches and in a more sustainable way.

I think in a way this is a good analogy, because it also includes the downside. If you always drive everywhere and do everything by car, your health will suffer due to lack of physical activity.

And you'll randomly kill people on your way there.

( Of course dear reader, YOU won't randomly kill people because you're a "good driver". )


Do you really think it's random? 30% of car accidents are from drunk drivers. Some large fraction of the remainder are a result of some other impairment or distraction. Don't drive on two hours of sleep and don't text behind the wheel and your chances of hitting someone go way down.

And it will be the same thing with AI. You want to ask it a question that you can verify the answer to, and then you actually verify it? No problem. But then you have corporations using it for "content moderation" and end up shadow banning actual human beings when it gets it wrong, and then those people commit suicide because they think no one cares about them when it's really that the AI wrongly pegs them as a bot and then heartlessly isolates them from every other living person.


You got it backwards, there wouldn't be a walmart outside of town if there were no cars, you'd walk to the local butcher/baker/whatever in <10min.

Yes, but there are cars. That genie has already escaped its bottle.

And you pay higher prices at small local stores - which leads more people, even in such small towns with local butchers and bakers, to get into their ride and go to the Lidl or Aldi on the outskirts.

Much like companies will realise LLM-using devs are more efficient by some random metric (do I hear: Story points and feature counts?), and will require LLM use from their employees.


Where you’d be able to afford much less (not judging the trade-off, but that’s the primary reason why the US is headed in the opposite direction).

Not true, food shopping costs in Germany are way lower than in the USA, and in my dense neighborhood (>20,000 people/km²) I have 3 supermarkets (also a baker, butcher, etc.) within a 5 min bicycle ride.

> Where you’d be able to afford much less

When 75% of the West is overweight or obese, and when the leading causes of death are quite literally sloth and gluttony, I think I'd take my chances... We're drowning in an insane quantity of low-quality food and gadgets.


I thought the US went in the opposite direction because of ruthless corporate profit optimization, zoning rules, and city planning that fuels suburban sprawl?

Economies of scale do mean you can get a fluffy blanket imported from China at $5, less than the cost of a coffee at Starbucks, but for food necessities Walmart isn’t even that cheap or abundant compared to other chains.


That’s a nice analogy! Though one might argue that the walk in and of itself would be good for your health (as evidenced by me putting on some weight after replacing my 30 minute daily walk to the office with working remotely).

One could also do the drive (use AI) and then get some fresh air after (personal projects, code golf, solving interesting problems), but I don’t think everyone has the willpower for that or the desire to consider it.


Oh man. I really hope AI doesn't do as much harm to us as cars have.

At the price of a sluggish, atrophied body.

which is why people now pay $100 a month for a gym membership.

Save an hour a day to spend it in the gym. That's what I call a bargain.

That’s why we need an AI infrastructure like Amsterdam's, where you can bike everywhere. It’s faster and more convenient than a car for most trips and keeps everyone fit and happy.

This analogy is flawed to its core. The car doesn't make you forget how to walk, because you are still forced to walk in certain circumstances. Delegating learning to an LLM will increase your reliance on it, and will eventually affect the way you learn. A better analogy is the usage of GPS. If you use it continuously, you will become dependent on it to get to a place, and lose the capacity to find places on your own.

Car culture societies are as bad as smoking for general health.

The problem is that when it's for work, the company now knows you have access to a car, so sends you on 20x the trips. You have no more quality time, and your physical health suffers from lack of exercise.

Which is exactly why many jobs actively require a driver's license where I live.

The car analogy has that covered already. When Gutenberg was printing Bibles, those things sold like warm bread rolls - these days, printing books is barely profitable. The trick with new disruptive tech always is to be an early adopter - not the long tail.


Yeah, I wasn't disputing the car analogy, more the benefits. If I'm using GPT to benefit myself (e.g. working on a side project), that's great and saves me time to do other things. If I'm using it to benefit my employer, I won't save any time, they'll fill it with other things to do, or expect me to be X times as productive in the same time.

In this context: Brain-only is going on foot/bike, Search Engine is going by car, and LLM is direct delivery to the home with the clerk packing your groceries (with them making the choices for you).

And then you drive to the gym to run on a treadmill for two hours.

Enter "Coding Bootcamps" and "Hackathons"

Back when GANs were popular, I'd train generator-discriminator models for image generation.

I thought a lot about it and realised discriminating is much easier than generating.

I can discriminate good vs bad UI for example, but I can't generate a good UI to save my life. I immediately know when a movie is good, but writing a decent short story is an arduous task.

I can determine the degree of realism in a painting, but I can't paint a simple bicycle to convince a single soul.

We can determine if an LLM generation is good or bad in a lot of cases. As a crude strategy then we can discard bad cases and keep generating till we achieve our task. LLMs are useful only because of this disparity between discrimination vs generation.
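
That crude strategy is essentially rejection sampling: keep generating until the (much cheaper) discrimination step accepts, or the budget runs out. A toy Python sketch, where generate() and is_good_enough() are hypothetical stand-ins for an LLM call and whatever check you can actually run (tests, a rubric, your own eye):

    import random

    def generate() -> str:
        # stand-in for sampling one candidate from an LLM
        return random.choice(["bad draft", "mediocre draft", "good draft"])

    def is_good_enough(candidate: str) -> bool:
        # the discriminator: far easier than generating
        return candidate == "good draft"

    def best_of_n(n: int = 5) -> str | None:
        # crude rejection sampling: keep drawing until something passes, or give up
        for _ in range(n):
            candidate = generate()
            if is_good_enough(candidate):
                return candidate
        return None  # no acceptable sample within the budget

    print(best_of_n())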

These two skills are separate. Generation skills are hard to acquire and very valuable. They will atrophy if you don't keep exercising those.


I think this is true for the very simple cases, for example and obviously bad picture vs. a good one.

I don't think this is necessarily true for more complex tasks, especially not in areas that require deep evaluation. For example, reviewing 5 non-trivial PRs is probably harder and more time consuming than writing them yourself.

The reason why it works well for images and short stories is because the filter you are applying is "I like it, vs. I don't like it", rather than "it's good vs. it's not good".


I think it's likely we learn to develop healthier relationships with these technologies. The timeframe? I'm not sure. May take generations. May happen quicker than we think.

It's clear to me that language models are a net accelerant. But if they make the average person more "loquacious" (first word that came to mind, but also lol) then the signal for raw intellect will change over time.

Nobody wants to be in a relationship with a language model. But language models may be able to help people who aren't otherwise equipped to handle major life changes and setbacks! So it's a tool - if you know how to use it.

Let's use a real-life example: relationship advice. Over time I would imagine that "ChatGPT-guided relationships" will fall into two categories: "copy-and-pasters", who are just adding a layer of complexity to communication that was subpar to begin with ("I just copied what ChatGPT said"), and "accelerators" who use ChatGPT to analyze their own and their partners' motivations to find better solutions to common problems.

It still requires a brain and empathy to make the correct decisions about the latter. The former will always end in heartbreak. I have faith that people will figure this out.


>Nobody wants to be in a relationship with a language model.

I'm not sure about it. I don't have first- or second-hand experience with this, but I've been hearing about a lot of cases of people really getting into a sort of relationship with an AI, and I can understand a bit of the appeal. You can "have someone" who's entirely unjudgemental, who's always there for you when you want to chat about your stuff, and isn't ever making demands of you. It's definitely nothing close to a real relationship, but I do think it's objectively better than the worst of human relationships, and is probably better for your psyche than being lonely.

For better or for worse, I imagine that we'll see rapid growth in human-AI relationships over the coming decade, driven by improvements in memory and long-term planning (and possibly robotic bodies) on the one hand, and a growth of the loneliness epidemic on the other.


This is called cognitive offloading. Anyone who’s spent enough time working with coding assistants will recognize it.

Or working as an engineering manager.

It's the inevitable consequence of working at a different level of abstraction. It's not the end of the world. My assembly is rusty too...


I don't think not using assembly is going to affect my brain / my life quality in any significant way, but not speaking / chatting with someone is.

But this is a strawman argument, it's not what the research is talking about.

If LLMs were as reliable as compilers we wouldn’t be checking in their output, and I’d be happy to forget all programming lore.

The “skill domain” with compilers is the “input”: that’s what I need to grok, maintain, and understand. With LLMs it’s the “output”.

Until that changes, you’re playing a dangerous game letting those skills atrophy.


Isn't it both on LLMs? The input is your ability to craft a prompt, the output is checking if the prompt worked.

> The LLM undeniably reduced the friction involved in answering participants' questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or ”opinions” (probabilistic answers based on the training datasets). This highlights a concerning evolution of the 'echo chamber' effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content. What is ranked as “top” is ultimately influenced by the priorities of the LLM's shareholders [123, 125].

> What is ranked as “top” is ultimately influenced by the priorities of the LLM's shareholders [123, 125].

As if that's anything new. There's the adage that's older than electronics, that freedom of the press is freedom for those who can afford to own a printing press.

> However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or ”opinions” (probabilistic answers based on the training datasets).

Reminds me of Plato's concern about reading and writing dulling your mind. (I think he had his sock puppet Socrates express the concern. But I could be wrong.)


> Reminds me of Plato's concern about reading and writing dulling your mind. (I think he had his sock puppet Socrates express the concern. But I could be wrong.)

Nope.

Read the dialogue (Phaedrus). It's about rhetoric and writing down political discourses. Writing had existed for millennia. And the bit about writing being detrimental is from a mythical Egyptian king talking to a god, just a throwaway story used in the dialogue to make a tiny point.

In fact the conclusion of that bit of the dialogue is that merely having access to text may give an illusion of understanding. Quite relevant and on point I'd say.


> In fact the conclusion of that bit of the dialogue is that merely having access to text may give an illusion of understanding. Quite relevant and on point I'd say.

Well, so that's exactly my point: Plato was an old man who yelled at clouds before it was cool.


Wow.

Plato's sock puppet Socrates? I think that you and I have read different history books, or at least different books regarding the history of philosophy. That said, I would love to hear your perspective on this.

> Plato's sock puppet Socrates?

See https://en.wikipedia.org/wiki/Socratic_problem

> Socrates was the main character in most of Plato's dialogues and was a genuine historical figure. It is widely understood that in later dialogues, Plato used the character Socrates to give voice to views that were his own.

However, have a look at the Wikipedia article itself for a more nuanced view. We also have some other writers with accounts of Socrates.


I presume they refer to the fact that Socrates is basically used as a rhetorical device in Plato’s writings, and it’s not entirely clear how much of the dialogues were Socrates’s thoughts and how much was Plato’s own.

Yes, exactly.

I worry about the adverse effects of LLMs on already disenfranchised populations - you know, the poor, etc. - who usually would have to pull themselves up through hard work, studying, and reading hard.

Now, if you don't have a mentor to tell you that in the age of LLMs you still have to do things the hard, old-school way to develop critical thinking, you might end up taking shortcuts and having the LLMs "think" for you - hence again leaving huge swaths of the population behind in critical thinking, which is already in short supply.

LLMs are bad in that they might show you the sources but also hallucinate about the sources, and most people won't bother going to check the source material and question it.


LLMs are great for the poor!

If you are rich, you can afford a good mentor. (That's true literally, in the sense of being rich in money and paying for a mentor. But also more metaphorically for people rich in connections and other resources.)

If you are poor, you used to be out of luck. But now everyone can afford a nearly-free mentor in the form of an LLM. Of course, at the moment the LLM-mentor is still below the best human mentors. But remember: only rich people can afford these. The alternative for poor people was essentially nothing.

And AI systems are only improving.


If people are using it to critically question their beliefs and thinking, that is.

However, most of the hype around LLMs is that they take away the difficult task of thinking and just allow the creation of the artifact (documents, code, or something else) - and that is really dangerous.


How is it any worse than the status quo for the disenfranchised?

A public library is actually free and its contents, collectively, are a far better "mentor" than ChatGPT. Plus the library doesn't build a psychological profile on you while you use it.

ChatGPT ain't taking libraries away. It's just an addition to your toolbox.

However, we notice that in practice free public libraries are mostly welfare for the well-off: they are mostly used by people who are at least middle-class.


People could in theory also get a college-level education by watching videos on YouTube, but in practice the masses just end up watching Mr. Beast.

15 years ago, people were sure that the Khan Academy and Coursera would disrupt Ivy League and private schools, because now one good teacher could reach millions of students. Not only this has not happened, the only movement I'm observing against credentialism is that I have good amount of anecdata showing kids preferring to go to trade school instead of university.

> pull themselves up using hard work etc studying n reading hard.

Where are you from? "The key to success is hard work" is not exactly part of the Gen Z and Zoomers' core values, at least not in the Americas and Western Europe.


Just as the proliferation of the smartphone eroded our ability to locate and orient ourselves and remember routes to places, it's no surprise that a tool like this, used for the purpose of outsourcing a task that our own brains would otherwise do, would result in a decline in the skills that would be trained if we were performing that task ourselves.

The only two times I have made bad navigation mistakes in mountains were in the weeks after I started using my phone and a mapping app - the realisation that using my phone was making me worse at navigation was quite a shock at the time.

But you didn't become worse at navigation. Sounds like you trusted a tool, and it failed you.

This is splitting hairs; in the end his navigation skills (him + whatever tool he used) were not OK and could result in dangerous situations (been there so many times in the mountains, although it was mostly about "went too far in a slightly wrong direction and don't want to backtrack that far, I am sure I will find a way to that already close point..." and 10 mins later scrambling on all fours on some slippery wet rock with no room for error).

No - on both occasions it was the same scenario - descending from a peak in bad weather and picking the wrong ridge to descend. I was confident I "knew" which was the right ridge, and with the app I use, bearings for the right route are pretty difficult to distinguish - so completely my fault.

I'm now aware of that problem and haven't had that problem since but I was pretty shocked in retrospect that I confidently headed off in the wrong direction when the tool I was using was by any objective measure much better.

I agree with this:

"the key to navigating successfully is being able to read and understand a map and how it relates to your surroundings"

https://www.mountaineering.scot/safety-and-skills/essential-...


> As the proliferation of the smart phone eroded our ability to locate and orient ourselves and remember routes to places

Can you point to a study to back this up? Otherwise, it's anecdata.


i really tire of people always asking for studies for obvious things.

have sword skills declined since the introduction of guns? surely people still have hands and understand how to move swords, and they use knives to cut food for consumption. the skill level is the same..

but we know on aggregate most people have switched to relying on a technological advancement. there's not the same culture for swords as in the past by sheer numbers despite there being more self proclaimed 'experts'.

Take 100 Gen Z vs. 100 Gen X and you'll likely find a smidgen more of one group than the other able to find a location without a phone.


> i really tire of people always asking for studies for obvious things.

I actually agree with you on this!

But... I have very very good directional sense, and as far as I can tell it's innate. My whole life I've been able to remember pathing and maintain proper orientation. I don't think this has anything to do with lack of navigation aids (online or otherwise) during formative years.

But I'm talking about geospatial sense within the brain. If your point is that people no longer learn and improve the skill of map-reading then yes that should be self-evident.


https://www.sciencedirect.com/science/article/pii/S027249442...

The first paragraph of the conclusions section is also stimulating and I think aptly applies to this discussion of using AI as a tool.

> it is important to mention the bidirectionality of the relationship between GPS use and navigation abilities: Individuals with poorer ability to learn spatial information and form environmental knowledge tend to use assisted navigation systems more frequently in daily life, thus weakening their navigation abilities. This intriguing link might suggest that individuals who have a weaker “internal” ability to use spatial knowledge to navigate their surroundings are also more prone to rely on “external” devices or systems to navigate successfully. Therefore, other psychological factors (e.g., self-efficacy; Miola et al., 2023) might moderate this bidirectional relationship, and researchers need to further elucidate it.


Navigation is a narrow task. For many intents and purposes, LLMs are generally intelligent.

Wasn't THE SAME said when Google came out? That we were not remembering things anymore and we were relying on Google? And also with cellphones before that (even the big dummy brickphones), that we were not remembering phone numbers anymore.

And this is exactly what this study showed too.

"Brain connectivity systematically scaled down with the amount of external support: the Brain‑only group exhibited the strongest, widest‑ranging networks, Search Engine group showed intermediate engagement, and LLM assistance elicited the weakest overall coupling."


Yes, that was true though, wasn't it? If this is also true, what does that imply?

Their results support this. The study has three groups: LLM users, Search Engine users and Brain only.

In terms of connections made, Brain Only beats Search User, Search User beats LLM User.

So, yes. If those measured connections mean something, it's the same but worse.


Plato was already worried that the written word caused people to forget things (although his main complaint was that words can't answer like a person can in a dialogue).

Yes but your cell phone contacts don't have a chance to call a completely different number out of thin air once in a while.

At least for now, while Apple and Google haven't put "AI" in the contacts list. Can't guarantee tomorrow.


That would actually be an amazing feature. Like in those movie meet-cutes where the person you were supposed to meet doesn't show up, and instead you make a connection with a random person.

Those services are available already, but the random person at the other end is "AI" generated :)

Google was like a faster library. ChatGPT just does most of the work for you.

It's the doing the work for you which is the trouble.

Suppose you want to know how some git command works. If you have to read the manual to find out, you end up reading about four other features you didn't know existed before you get to the thing you set out to look for to begin with, and then you have those things in your brain when you need them later.

If you can just type it into a search box and it spits back a command to paste into the terminal, it's "faster" -- this time -- but then you never actually learn how it works, so what happens when you get to a question the search box can't answer?


I don't remember phone numbers.

I remember where I can get information on the internet, not the information itself. I rely on google for many things, but find myself increasingly using AI instead since the signal/noise ratio on google is getting worse.


A comment on another similar thread pointed out it goes as far back as Socrates saying that writing things down means you're not exercising your brain, so you're right, this is the same old argument we've heard for years.

The question is, were they wrong? I'm not sure I could continue doing much of my job as a SWE if I lost access to search engines, and I certainly don't remember phone numbers anymore. As for Socrates, we found that the ability to forget about something (while still maintaining some record of it) was actually a benefit of writing, not a flaw. I think in all these cases we found that to some extent they were right, but either the benefits outweighed the cost of reliance, or the cost was the benefit.

I'm sure each one had its worst-case scenario where we'd all turn into brainless slugs offloading all our critical thinking to the computer or the phone or a piece of paper, and that obviously didn't happen, so it might not here either. But there's a good chance we will lose something as a result of this, and it's a question of whether the benefits still outweigh the costs.


> All participants were then reassured that though 20 minutes might be a rather short time to write an essay, they were encouraged to do their best.

Given that the task was performed under time pressure, I am not sure this study helps gauge the impact of LLMs in other contexts.

When my goal is to produce the result for a specific short term task - I maximize tool usage.

When my goal is to improve my personal skills - I use the LLM tooling differently optimizing for long(er) term learning.


"I"? You should treat yourself as an anecdotal exception.

You are reading on HN. You are probably more aware about the advantages and shortcomings of LLMs. You are not a casual user. And that's the problem with our echo chamber here.


However, I believe it is quite an assumption that a setup with time pressure reflects "normal" usage of an LLM.

This would mean that on short-term tasks, the bulk of what knowledge workers do nowadays, people forgo learning on the job.

This is exactly why there is no point in using AI for coding except in rare cases.

Code without AI - sharp skills, your brain works and you come up with better solutions etc.

Code with AI - skills decline after merely a week or two, you forget how to think, and because you're relying on AI for simpler and simpler tasks, your total output is less and worse than if you were to DIY it.


>Code without AI - sharp skills, your brain works and you come up with better solutions etc.

That train of thought leads to writing assembly language in ed. ;-)

I think developers as a group have a tendency to spend too much time "inside baseball" and forget what the tools we're good at are actually used for.

Farmers don't defend the scythe, spend time doing leetscythe katas or go to scything seminars. They think about the harvest.

(Ok, some farmers started the sport of Tractor Pulling when the tractor came along and forgot about the harvest but still!) :)


> That train of thought leads to writing assembly language in ed

Hard disagree, LLVM will always outperform me in writing assembly, it won't just give up and fail randomly when it meets a particularly non-trivial problem, causing me to write assembly by hand to fix it. If LLMs would be 100% reliable on the tasks I had to do, I don't think anyone here would seriously debate about the issue of mental attrition (i.e. you don't see people complaining about calculators). The problem is that in too many cases, the LLM will only get so far and you will still have to switch to doing actual programming to get the task finished and the worse you get at that last part the more your skillset converges to exactly the type of things an LLM (and therefore everyone else with a keyboard) can reliably do.


>The problem is that in too many cases, the LLM will only get so far and you will still have to switch to doing actual programming to get the task finished

The LLM makes mistakes sure and isn't a slam dunk tool like a compiler, but it could still save lots of time and be useful.

Some things are fine to let rot. Nobody should spend too much time learning Vue, React or Laravel, or even nhibernate, entity framework or structuremap, for example.

Such frameworks come and go, the knowledge has little value. Save brain cells for more important, long-lasting things instead. LLM's can certainly help with that.


> Some things are fine to let rot. Nobody should spend too much time learning Vue, React or Laravel, or even nhibernate, entity framework or structuremap, for example.

The way you phrase this you make it sound like an LLM can already solve every possible task you would ever get in Vue, React or Laravel but my entire point is that this is simply not true. As a consequence of this, whenever the LLM fails at a task (which gets more likely the more complex the task is) you will still need to know how Vue, React or Laravel work to actually finish your task but this is the exact knowledge you lose if you spend 80% of your day prompting instead of writing code. The more you rely on the LLM to write your code the more the code you are able to produce converges with the one that the LLM can put out.


Poor phrasing perhaps, my point is that you the human should spend time and effort on the challenging and interesting things, and learning things that have lasting value. Not the boring, tedious and quickly irrelevant things.

Remember knockout.js? script.aculo.us? I do, but wish I didn't so those brain cells could know more SQL and vanilla javascript instead. :)

I also think LLM's are way more useful than you give them credit for. I save hours per week already and I'm just getting started in how to get the most value from LLM's. It's clear to me that my ability to phrase questions and include context matters more than which model I use.

To be clear I'm not talking about the cool-aid promises of one-shotting complex apps here, I mean questions like

"Give me a log4jconfig that splits logfiles per errorlevel and day"

"Look at #locationclass, #customerclass, #shop.css and #customerview and make a crud view for customers similar to locations"

"We are converting a vue app to react. Look at #oldvue1 and #newreact1 and make a react version of #oldvue2 following the same patterns"

"What could cause the form from #somewebview to pass a null whateverId to #somerepository?"

Questions like that are solved by LLMs close enough to 100% of the time that it feels like asking a human to do it.


> That train of thought leads to writing assembly language in ed

You can pick any language you think is best atm. The point is you have to practice it.

use it or lose it


Losing Vue, React, Laravel, Rails, and a myriad of ORMs and other flash-in-the-pan implementations of larger ideas is fine. Lots of empty knowledge calories in IT.

Does this logic apply to IDEs, search engines, or any of the various other tools programmers use?

no

IDEs and tools don't do thinking for you.


My total output is definitely higher.

>>My total output is definitely higher.

It's paper gains; the value you create is not correlated with your code output.

and the value you will create decreases if you don't think hard and train in solving problems on your own.


If you only care about the volume of code and not the quality or usefulness, I have an even better tool for you:

    yes 'print("hello world")' > program.py

5 years from now, will your ability to explain or build from first principles on your own be increased, though?

You are exhibiting traces of long term thinking.

This is not what founder culture is about.


I hope the ASI overlord will pardon me.

Yes. Because none of this bullshit matters. I’ve heard this mantra for 20 years now.

Smug face: “weeeell, how can you say you’re a real programmer if you use a compiler? You need write raw assembly”, “how can you call yourself reeeeal programmer if you don’t know your computer down to every register?”, “real programmurs do not use SO/Google” and all the rest of the crap. It is all nerds trying to make themselves feel good by inflating their ego with trivia that is not interesting to anyone.

Well, what do you know? I’m still in business, despite relying a lot on Google/SO, and still create solutions that fix real human problems.

If AI can make 9 to 5 more bearable for majority of people and provide value in terms less cognitive load, let’s fucking go then.


Common bullshit. An expert not realizing that even if they are capable of using these tools - because they can, subconsciously in your case, verify the output - it doesn't mean it helps non-experts.

Touché. On this I agree with you, since I’ve started in different age.

I haven't written serious assembly since high school :)

But I'm 100% sure i have some "natural" neural connections based on those experiences and those help me even when doing high level languages.

By the way, I am using LLMs. They help until they don't. One real-life example I'm hitting at work is that they keep mixing up CouchDB and Couchbase when you ask about features. Their training dataset doesn't seem to be large enough in that area.


Someone sent this to one of NT groups: https://threadreaderapp.com/thread/1935343874421178762.html

My response (I think most of the comments here are similar to that thread): The thread is really alarmist and click-baity. It doesn't address at all the fact that there was a 3rd group, those allowed to use the web in general (except for LLM services), whose results fell between the brain-only and full ChatGPT groups. Author also misrepresented the teachers' evaluation. I'd say even the teachers went a bit out of scope in their evaluation, but the writing prompts too are all for reflective-style essays, which I take as request for primarily personal opinion, which no one but the askee can give. In general, I don't see how the author draws the conclusion that "... AI isn't making us more productive. It's making us cognitively bankrupt." He could've made a leap from the title of the paper, or maybe I need to actually dive more into it to see what he's on about.

The purpose of using AI, just like any other tool, is to reduce cognitive load. I'm sure a study on persons who use paper and an abacus vs a spreadsheet app to do accounting, or take the time to cook raw foods vs microwave prepackaged meals, or build their furniture from scratch vs getting sth from IKEA, or just about any other task, will show similar trends. We innovate so we can offload and automate essential effort, and AI is just another step. If we do want mental exercises then we can still opt into doing X the "traditional" way, or play some games mimicking said effort. Like people may go to the gym since so many muscle-building tasks are nowadays handled by machines. But the point is we're continuously moving from `we need to do X` toward `we want to do X`.

Also that paper title (and possibly a decent amount of the research) is invalid, given the essay writing constraints and the type of essay. Paper hasn't been peer-reviewed, and so should be taken with a few shakes of salt.


There was a post, I think here on Hacker News, by a university professor (of philosophy, maybe) about how students' performance has declined a lot recently and how he cannot do anything about it. Can someone help me if you remember it? I cannot find it for the life of me.

I sometimes used to think about things. Now I just ask ChatGPT and it tells me.

I think we need to shift our idea of what LLMs do and stop thinking they are ‘thinking’ in any human way.

The best mental description I have come up with is they are “Concept Processors”. Which is still awesome. Computers couldn’t understand concepts before. And now they can, and they can process and transform them in really interesting and amazing ways.

You can transform the concept of ‘a website that does X’ into code that expresses a website X.

But it’s not thinking. We still gotta do the thinking. And actually that’s good.


Concept Processor actually sounds pretty good, I like it. That's pretty close to how I treat LLMs.

Are you invoking a 'god of the gaps' here? Is 'true' thinking whatever machines haven't mastered yet?

Not at all, I don’t think humans are magic at all.

But I don’t think even the ‘thinking’ LLMs are doing true thinking.

It’s like calling pressing the autocomplete buttons on your iPhone ‘writing’. Yeah kinda. It mostly forms sentences. But it’s not writing just because it follows the basic form of a sentence.

And an LLM, though now very good at writing is just creating a very good impression of thinking. When you really examine what it’s outputting it’s hard to call it true thinking.

How often does your LLM take a step back and see more of the subject than you prompted it to? How often does it have an epiphany that no human has ever had?

That’s what real thinking looks like - most humans don’t do tonnes of it most of the time either - but we can do it when required.


Yeah I’ve used ChatGPT as a starting point for so much documentation I dread having to write a product brief from scratch now.

@dang Can the unwanted editorialization of this title be removed? Nowhere does the title or article contain the gutter press statement “AI is eating our brains”.

Well... yes? Essays are tools to force students to structure and communicate thinking - production of the essay forces the thinking. If you want an equivalent result from LLMs you're going to need a much more iterative process of critique and iteration to get the same kind of mental effort out of students. We haven't designed that process yet.

I mean, they found brain atrophy. If this doesn't get someone worried, I don't know what would.

I joked that "I don't do drugs" when someone asked me whether I played MMORPGs, but this joke becomes just too real when we apply it to generative AI of any sort.


As someone who used to teach, this does not worry me (also, they mention skill atrophy - inherently less concerning).

Putting ChatGPT in front of a child and asking them to do existing tasks is an obviously disastrous pedagogical choice for the reasons the article outlines. But it's not that hard to create a more constrained environment for the LLM to assist in a way that doesn't allow the student to escape thinking.

For writing - it's clear that finding the balance between how much time you spend ordering your thoughts and how much you let the LLM write is its own skillset; this will be its own skill we want to teach, independent of "can you structure your thoughts in an essay".


> I mean, they found brain atrophy.

Where did you get that from? While the article mentions the word "atrophy" twice, it's not something that they found. They just saw less neural activation in regards to essay writing in those people who didn't write the essay themselves. I don't see anything there in regards to the brain as a whole.


If physical exercise builds muscle mass, mental work and exercise build more connections in your brain.

Like everything, not using something causes that thing to atrophy. IOW, if you depend on something too much, you'll grow dependency on it, because that part of your body doesn't do the work that much anymore.

The brain is an interconnected jungle. Improvement in any ability will improve other, adjacent abilities. You need to think faster to type faster. If you can't think faster, you'll stagnate, for example.

Also, the human body always tries to optimize itself to reduce its energy consumption. If you get a chemical from outside, it'll stop producing it, assuming the supply will be there. The brain will reduce its connections in some region if that function is augmented by something else.

Same for skill atrophies. If you lose one skill, you lose the connections in your brain, and that'll degrade adjacent skills, too. As a result, skill atrophy is brain atrophy in the long run.


Absolutely agreed, but where does it take us in regard to division of labor in general? Obviously by not growing my own food or making my own clothes, I'm degrading a lot of skills I could potentially have. To what extent should I strive to develop skills that I don't "care" to exercise?

"Essay Writing" in particular, at least in an academic context, is almost by definition an entirely useless activity, as both the writer and the reader don't care much about the essay as an artifact. It's a proxy for communication skills, that we've had to use for lack of a better alternative, but my hope is that now that it's become useless as a proxy, our education system can somehow switch to actually helping learners communicate better, rather than continuing to play pretend.


However, since many tasks are adjacent to each other, you're keeping these tasks at the edge of being alive.

Do you have plants at home? You're 50% there for growing your own food (veggies, at least). Do you mend your clothes (e.g.: sew your buttons back)? You're ~30% there for making your own clothes, given you have access to fabric.

On the essay writing, I can argue that at least half of my communication skills come from writing and reading. I don't write essays anymore, but I write a diary almost daily, and I build upon what I have read or written in the past for academic reasons. What I find more valuable in these exercises is not the ability to communicate with others, but communicate with myself.

The brain has this strange handicap. It thinks that it has a coherent thought, but the moment you try to write it down or talk about it, what comes out is mushy spaghetti that doesn't mean anything. Having the facilities to structure it, to separate the ore from the dirt, and to articulate it clearly so you and everyone else can understand it is a very underrated skill.

Funnily, the biggest contributor to my written skills is this place, since good discussion needs a very particular set of skills here, namely clarity, calmness and having structure to your thought.

This is why I'm very skeptical of letting go of writing, and actual pens and paper for progress. We're old creatures evolved slowly, and our evolution has a maximum speed. Acting like this is not true will end in disaster.

Humans, and the civilization and culture we built, have so much implicit knowledge coded everywhere, and assuming that we know it all and can encode it in an 80GB weighted graph is a mistake, to put it kindly.


> I joked that "I don't do drugs" when someone asked me whether I played MMORPGs, [...]

I thought WoW was an off-label contraceptive?


LLMs are the tip of the iceberg when it comes to this stuff.

I guess: not only does AI reduce the number of entry-level workers, now this shows that the entry-level workers who remain won't learn anything from their use of AI and will remain entry-level forever if they're not careful.

When I write with AI, it feels smooth in the moment, but I’m not really thinking through the ideas. The writing sounds fine, but when I look back later, I often can’t remember why I phrased things that way.

Now I try to write my own draft first, then use AI to help polish it. It takes a bit more effort upfront, but I feel like I learn more and remember things better.


The rule of thumb "LLMs are good at reducing text, not expanding it" is a good one here.

Probably interesting to note that this is almost always true of weighted randomness.

If you have something that you consider to be over 50% towards your desired result, reducing the space of the result has a higher chance of removing the negative factor than the positive.

In contrast, in any case where the algorithm is less than 100% capable of producing the positive factor, adding on to the result could always increase the negative factor more than the positive, given a finite time constraint (i.e. any reasonable non-theoretical application).


> "LLMs are good at reducing text, not expanding it"

You put it in quote marks, but the only search results are from you writing it here on HN. Obviously LLMs are extremely good at expanding text, which is essentially what they do whenever they continue a prompt. Or did you mean that in a prescriptive way - that it would be better for us to use it more for summarizing rather than expanding?


>You put it in quote marks, but the only search results are from you writing it here on HN.

They said it was a rule of thumb, which is a general rule based on experience. In context with the comment they were replying to, it seems that they are saying that if you want to learn and understand something, you should put the effort in yourself first to synthesize your ideas and write out a full essay, then use an LLM to refine, tighten up, and polish it. In contrast to using an LLM as you go to take your core ideas and expand them. Both might end up very good essays, but your understanding will be much deeper if you follow the "LLMs are good at reducing text, not expanding it" rule.


I think that this conflates two issues though. It seems obvious to me that in general, the more time and effort I put into a task, the deeper I will understand it. But it's unclear to me how this aspect of how we learn by spending time on a task is related to what LLMs are good at.

Intentionally taking this to a slightly absurd metaphor - it seemed to me like a person saying that their desire to reduce their alcohol consumption, led them to infer the rule of thumb that "waiters are good at bringing food, not drinks".


I think the key is how you define “good” - LLMs certainly can turn small amounts of text into larger amounts effortlessly, but if in doing so the meaningful information is diluted or even damaged by hallucinations, irrelevant info, etc., then that’s clearly not “good” or effective.

LLMs should be used to REFLECT cognitive states while writing, and not for generating text. Reflecting thought patterns would be a mode where the writer deepens their understanding when writing essays, and gains better decision-making as well as coherence, as the LLM assesses and suggests where thinking could be refined. That will help against the accumulation of cognitive debt and increase cognitive width and depth.

Cogilo (https://cogilo.me/) was built for this purpose in the last few weeks. This paper comes at a very welcome time. Cogilo is a Google Docs add-on (https://workspace.google.com/marketplace/app/cogilo/31975274...) that sees thinking patterns in essays. It operates on a semantic level and tries to judge and reveal the writer's cognitive state and the thinking present in the text - to the writer themselves - hence making the writer deepen their thinking and their essay.

Ultimately, I think that in 300 years, upon looking back at the effect and power that AI had on humanity, we will see that it was built by us, and existed, to reflect human intelligence. I think that's where the power of LLMs will be big for us.


What I still wonder is whether using LLMs is helpful in some ways, or whether it is, as other users say, just useful for man-made problems such as corporate communication or bureaucracy. I use it for coding and it makes me confident enough to tackle new things.

I try to use it to understand the code or to implement changes I am not familiar with, but I tend to overuse it a lot. Would it be better, if used ideally (i.e. only to help with learning and guidance), to just try harder myself before using it or a search engine? I wonder what the most optimal use of LLMs is in the long run.


I don't quite see their point. Obviously if you're delegating the task to someone/something then you're not getting as good at it as if you were to do it yourself. If I were to write machine code by hand, rather than having the compiler do it for me, I would definitely be better at it and have more neural circuitry devoted to it.

As I see it, it's much more interesting to ask not whether we are still good at doing the work that computers can do for us, but whether we are now able to do better at the higher-level tasks that computers can't yet do on their own.


Your question is answered by the study abstract.

> Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.


But it's not that they "underperformed" at life in general - they underperformed when assessed on various aspects of the task that they weren't practicing. To me it's as if they ran a trial where one group played basketball, while another were acting as referees - of course that when tested on ball control, those who were dribbling and throwing would do better, but it tells us nothing about how those acting as referees performed at their thing.

I see what you’re getting at now. I agree I’d like to see a more general trial that measures general changes in problem-solving ability after a test group is set to using LLMs for a specific problem-solving task vs a control group not using them.

Are there such tests? Sounds like IQ tests to me, which are a quite indirect measurement.

I wonder to what extent this is caused by the writing style LLMs have. They just love beating around the bush, repeating themselves, using fillers, etc. I often find it hard to find the signal in the noise, but I guess that is inevitable with the way they work. I can easily imagine my brain shutting down when I have to parse this sort of output.

Instruct it to be concise.

It also depends on the LLM.

An interesting thinking point here is to consider, more broadly, the impact that advances in machinery have had on humanity's industrial sector. There are vast stories and accounts of people fearful of job loss or redundancy whenever we have developed automation to take over repetitive, mind-numbing tasks. What ends up happening, generally, is that humanity gains the ability to discover and innovate, as people now have the time and energy to put into it.

What's interesting is I have to wonder whether this extends to our own way of thinking, as discussed here with the short-term effects we're already describing from increased dependence on LLMs, GPS systems, etc. There have been studies showing that those who grew up using search engines exclusively did not lose or gain anything with respect to brain power; instead, they developed a different means of retaining information (i.e. they are less likely to remember the exact fact but they will remember how to find it). It makes me wonder if this is the next step in that same process and those of us in the transition period will lament what we think we'll lose, or if LLM dependency presents a point of diminishing returns where we do lose a skill without replacing it.


Interesting. This says a different thing than what I thought from the title. I thought this will be about cognitive overload from having to process and review all the text the LLM generates.

I had to disable copilot for my blog project in the IDE, because it kept bugging me, finishing my sentences with fluff that I'd either reject or heavily rewrite. This added some mental overhead that makes it more difficult to focus.


I'm curious to see how the EEG measurements might change if someone uses LLMs extensively over a longer period of time (e.g. about a year).

From the summary:

"""Going forward, a balanced approach is advisable, one that might leverage AI for routine assistance but still challenges individuals to perform core cognitive operations themselves. In doing so, we can harness potential benefits of AI support without impairing the natural development of the brain's writing-related networks.

"""It would be important to explore hybrid strategies in which AI handles routine aspects of writing composition, while core cognitive processes, idea generation, organization, and critical revision, remain user‑driven. During the early learning phases, full neural engagement seems to be essential for developing robust writing networks; by contrast, in later practice phases, selective AI support could reduce extraneous cognitive load and thereby enhance efficiency without undermining those established networks."""


This has been on my mind for a while and is why I only briefly used copilot on a daily basis.

I'm at the beginning of my career and learning every day - I could do my job faster with an LLM assistant but I would lose out on an opportunity to acquire skills. I don't buy the argument that low-level critical thinking skills are obsolete and high level conceptual planning is all that anyone will need 10 years from now.

On a more sentimental level I personally feel that there is meaning in knowing things and knowing how to do things and I'm proud of what I know and what I know how to do.

Using LLM's doesn't look particularly hard and if I need to use one in the future I'll just pick whichever one is supposedly the newest and best but for now I'm content to toil away on my own.


Not disagreeing with you, but the skill ceiling around using LLMs effectively and sustainably is higher than you might think.

Will we end up with a world where the only experts are LLM companies, holding a monopoly on thinking? Will future humans ever be as smart as us, or are we the peak of human intelligence? And can AI make progress without smart humans to provide training data, gain new insights, and increase its intelligence?

Love this study because it reinforces my own biases but also love that a study was done to actually check it.

With that said, it would be like a study finding out that people who exclusively use motorcycles or cars to move around get their legs and bodies atrophied in comparison to people who walk all day to do their things. Totally. It's just plain obvious. The gist is in the trade-offs: can I do more things, or things I wasn't able to do before, by commuting by car? Sure. Am I going to be exposed to health issues if I never walk, day in, day out? Most probably.

The exact same thing will happen with LLM, we are in the hype phase and any criticism is downplayed with "you are being left behind if you don't drink rocket fuel like we do" but in 10-15 years we will be complaining as a society that LLMs dumbed down our kids.


The motorcycle/car metaphor here is really interesting. We really don't know yet, but it could indeed be that lack of access to AI would be similar to how teenagers growing up in a small town without good public transport or access to a car or motorcycle would have a different adolescence experience from those growing up with a convenient mode of travel. You can argue that either experience is "better" but they are inarguably different.

Why did the posting two days ago omit the first part of the title?

The submitter chose the title but they were right to do so.

The full title of the paper is "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task". It exceeds the 80 character limit for HN titles, so something had to be cut. They cut the first part, which is the baitier and less informative part.

The phrase "This is your brain on ..." is from an old anti-drugs campaign, and is deliberately chosen here to draw parallels between the effects of drugs and chatbots on the brain. It's fine for the authors to do that in their own title but when something has to be cut from the title for HN, that's the right part to cut.


(This was originally a reply to https://news.ycombinator.com/item?id=44307801 but we moved it to the merged thread, which is the "posting two days ago" being referred to)

My handwriting has suffered since I’ve heavily relied on keyboards for the last few decades. I can’t even produce a consistent signature anymore. My stick shift skills also suffered when I used an automatic for so long (and now I have an EV, I’m forgetting what gears are at all).

Rather than lament that the machine has gotten better than us at producing what were always mostly vacuous essays anyway, we have to instead look at more pointed writing tasks and practice those instead. Actually, I never really learned how to write until I hit grad school and had messages I actually wanted to communicate. Whatever I was doing before really wasn’t that helpful, it was missing focus. Having ChatGPT write an essay I don’t really care about only seems slightly worse than writing it myself.


The next generation of programmers will be stupider than the current generation thanks to LLMs. That means age-ism will become less and less prevalent.

"Look at that old timer! He can code without AI! That's insane!"


Speaking as an engineer coming up on 50 years old: nice.

IQ increases every year. Let’s see if LLMs cause a decrease.

> The reported ownership of LLM group's essays in the interviews was low. The Search Engine group had strong ownership, but lesser than the Brain-only group. The LLM group also fell behind in their ability to quote from the essays they wrote just minutes prior.

So having someone else do a task for you entirely makes your brain work less on that task? Impossible.


I am just finishing a book that took about two years to write. I thought I would be done a year ago. It’s been a slog.

So now I am in the final editing stage, and I am going back over old writing that I don’t remember doing. The material has come together over many many drafts, and parts of it are still not quite consistent with other parts.

But when I am done, it will be mine. And any mistakes will be honest ones that represent the real me. That’s a feeling no one who uses AI assistance will ever have.

I have never and will never use AI to write anything for me.


They gave three groups the task of writing an essay - of course the group that uses a tool to write the essay for them will not work out their brain as much.

It’s like saying “someone on a bike will not develop their muscles as well as someone on foot when doing 5km at 5min/km”.

But people on bikes tend to go for higher speeds and longer distances in the same period of time.


I can’t believe riding a horse and carriage wouldn’t make you better at riding a horse. Sure a horserider wouldn’t want to practice the wrong way, but anyone else just wants to get somewhere

> I can’t believe riding a horse and carriage wouldn’t make you better at riding a horse.

Surely you mean "would"? Because riding a horse and carriage doesn't imply any ability at riding a horse, but the reverse relation would actually make sense, as you already have historical, experiential, intimate knowledge of a horse despite no contemporaneous, immediate physical contact.

Similarly, already knowing what you want to write would make you more proficient at operating a chatbot to produce what you want to write faster—but telling a chatbot a vague sense of the meaning you want to communicate wouldn't make you better at communicating. How would you communicate with the chatbot what you want if you never developed the ability to articulate what you want by learning to write?

EDIT: I sort of understand what you might be getting at—you can learn to write by using a chatbot if you mimic the chatbot like the chatbot mimics humans—but I'd still prefer humans learn directly from humans rather than rephrased by some corporate middle-man with unknown quality and zero liability.


From the thread: yes, it's sarcasm. Here's some clarification as well: https://news.ycombinator.com/item?id=44291314

Yes, I'm acknowledging a lack of skill transfer, but also that there are new ways of working, and so I sarcastically imply the article can't see the forest for the trees, missing the big picture. A horse and carriage is very useful for lots of things. A horse is more specialised. I'm getting at the analogy of a technological generalisation and expansion, while logistics is not part of my argument. If you want to write a very good essay, and you're good at that, then do it manually. If you want to create scalable workflows and have 5 layers of agents interacting with each other collaboratively and adversarially, scouring the internet, news sites, and forums to then send investment suggestions to your mail every lunch, then that's a scale that's not possible with pen and paper, and so prompting has an expanded cause-and-effect cone.


> A horse and carriage is very useful for lots of things. A horse is more specialised.

You have that backwards. A horse and carriage is good for traveling on a road. If you have just the horse, however, you can travel on a road, travel offroad, pull a plow, ride into battle and trample evildoers, etc.


No, it's only half backwards: because of the infrastructure, there is scalability in the amount of work. You're right about the phrasing, but the intention/idea matters more. The horse and carriage is a generalization of the core value of the horse and increases that core value; the generalization becomes the general case, while the horse becomes more specialised, or at least reduced to niches today like competitions and hobbies.

The first sentence in the comment you are responding to is sarcasm. Just replace "I can't believe" with "Of course".

> The first sentence in the comment you are responding to is sarcasm. Just replace "I can't believe" with "Of course".

Do you have any evidence of this?


No, because of Poe's law only the author of the comment can confirm. But the analogy makes sense then:

"[Of course] writing an essay with chatgpt wouldn’t make you better at writing essays unassisted. Sure, a student wouldn’t want to practice the wrong way, but anyone else just wants to produce a good essay."


It’s fairly obvious from the context.

Obviously, what you perceive as obviousness may not be obvious to others.

You know the AI-induced cognitive decline is already well under way when people start comparing writing an essay to riding a horse.

I didn't read the article, but come on: riding a horse to get to a destination is not remotely similar to writing an essay.

If you say it's a means to an end - to what, a good grade? - we've lost the plot long ago.

Writing is for thinking.


I'm making an analogy as to the type of skill it is, so yes, a means to an end. I don't mean an apathetic student jumping through bureaucratic educational hoops and requirements, but perhaps a self-driven person wanting to get something done.

What I'm saying is that yes writing essays is one skill and if it's your goal to write essays then obviously not doing it yourself entirely will make you worse than otherwise. But I'm expanding a bit beyond the paper saying that yes the brain won't grow for this specific skill because it's actually a different skill.

Thinking can be done in lots of ways, such as when having a conversation, and what I think the skill is, is steering and creating structures to orchestrate AIs into automated workflows, which is a new way of working. And so what I mean is that with a new technology you can't expect a transfer to the way you work with old technologies; rather, you have to figure out the better new way you can use the new technology, and the brain would grow for this specific new way of working. And one could analyse, depending on one's goal, whether it's a tool you'd want to use in the sense that cause leads to effect, or whether you would be better off for your specific goal to ignore the new technology and do it the usual way.


The task of riding a horse can be almost entirely offsourced to the professional horse riders. If they take your carriage from point A to point B, sure, you care about just getting somewhere.

Taking the article's task of essay writing: someone presumably is supposed to read them. It's not a carriage task from point A to point B anymore. If the LLM-assisted writers begin to not even understand their own work (quoting from abstract "LLM users also struggled to accurately quote their own work.") how do they know they are not putting out nonsense?


> If the LLM-assisted writers begin to not even understand their own work (quoting from abstract "LLM users also struggled to accurately quote their own work.") how do they know they are not putting out nonsense?

They are trained (amongst other things) on human essays. They just need to mimic them well enough to pass the class.

> Taking the article's task of essay writing: someone presumably is supposed to read them.

Soon enough, that someone is gonna be another LLM more often than not.


Horse riding was invented much later than carriages, and it revolutionized warfare.

Can you point at some references? Horse riding started around 3500 BC[0], while horse carriages started around 100BC [1], oxen/buffalo drawn devices around 3000 BC[1].

[0]: https://en.wikipedia.org/wiki/Equestrianism

[1]: https://en.wikipedia.org/wiki/Carriage


From the article [0] you linked:

"However, the most unequivocal early archaeological evidence of equines put to working use was of horses being driven. Chariot burials about 2500 BC present the most direct hard evidence of horses used as working animals. In ancient times chariot warfare was followed by the use of war horses as light and heavy cavalry."

Long discussion in History Exchange about dating the cave paintings mentioned in the wikipedia article above:

https://history.stackexchange.com/questions/68935/when-did-h...


Well exactly.. a millennium after being ridden (3500BC) they were used as beasts of burden (2500BC).. rather the opposite of your claim.

The 3500 BCE date for horse riding is speculative and poorly supported by evidence. I thought the language in the bit I pasted made that clear. "Horses being driven" means attached to chariots, not ridden.

Unless you want to date the industrial revolution to 30 BCE when Vitruvius described the aeolipile, we can talk about the evidence of these technologies' impact on society. For chariots that would be 1700 BCE, and for horseback riding well into the Iron Age, ~1000 BCE.


I think you are reading "carriage" too specifically, when I suspect it's meant as a wider term for any horse-drawn wheeled vehicle.

Your [0] says "Chariot burials about 2500 BC present the most direct hard evidence of horses used as working animals. In ancient times chariot warfare was followed by the use of war horses as light and heavy cavalry.", just after "the most unequivocal early archaeological evidence of equines put to working use was of horses being driven."

That suggests the evidence is stronger for cart use before riding.

If you follow your [1] link to "bullock cart" at https://en.wikipedia.org/wiki/Bullock_cart you'll see: "The first indications of the use of a wagon (cart tracks, incisions, model wheels) are dated to around 4400 BC[citation needed]. The oldest wooden wheels usable for transport were found in southern Russia and dated to 3325 ± 125 BC.[1]"

That is older than 3000 BC.

I tried but failed to find something more definite. I did learn from "Wheeled Vehicles and Their Development in Ancient Egypt – Technical Innovations and Their (Non-) Acceptance in Pharaonic Times" (2021) that:

> The earliest depiction of a rider on horseback in Egypt belongs to the reign of Thutmose III.80 Therefore, in ancient Egypt the horse is attested for pulling chariots81 before it was used as a riding animal, which is only rarely shown throughout Pharaonic times.

I also found "The prehistoric origins of the domestic horse and horseback riding" (2023) referring to this as the "cart before the horse" vs. "horse before the cart" debate, with the position that there's "strong support for the “horse before the cart” view by finding diagnostic traits associated with habitual horseback riding in human skeletons that considerably pre-date the earliest wheeled vehicles pulled by horses." https://journals.openedition.org/bmsap/11881

On the other hand, "Tracing horseback riding and transport in the human skeleton" (2024) points out "the methodological hurdles and analytical risks of using this approach in the absence of valid comparative datasets", and also mentions how "the expansion of biomolecular tools over the past two decades has undercut many of the core assumptions of the kurgan hypothesis and has destabilized consensus belief in the Botai model." https://www.science.org/doi/pdf/10.1126/sciadv.ado9774

Quite a fascinating topic. It's no wonder that Wikipedia can't give a definite answer!


Now I am more interested in prehistoric horse domestication than the AI essay writing.

No one only uses an LLM for writing. We switch tools as needed to pull threads as they emerge. It’s like being told to explore a building without leaving a specific room.

> We recruited a total of 54 participants for Sessions 1, 2, 3, and 18 participants among them completed session 4.

> We used electroencephalography (EEG) to record participants' brain activity in order to assess their cognitive engagement and cognitive load

> We performed scoring with the help from the human teachers and an AI judge (a specially built AI agent)

Next up: your brain on psych studies


Also ever since we invented the written word it has been eating our brains by killing our memory

Quite the opposite, it was shown that reading improves memory and cognitive abilities for children [1] and older adults [2].

[1] https://www.cam.ac.uk/research/news/reading-for-pleasure-ear...

[2] https://pmc.ncbi.nlm.nih.gov/articles/PMC8482376


How does that compare to the population of people who memorize the Old Testament or the Quran?

I remember hearing that the entire epics of the Iliad and the Odyssey were all done via memorization and only spoken... How do you think those poets' memories compared to a child who reads Bob the Builder books?


I watched so many reruns of Community I could recite the episodes by heart. I don't think that made the rest of my memory any better.

For those who don't get the reference, Plato thought that the written word was not a good tool for teaching/learning, because it outsources some of the thinking.

Similarly (IIRC) Socrates thought the written word wasn't great for communicating, because it lacks the nuance of face-to-face communication.

I wonder if they ever realised that it could also be a giant knowledge amplifier.


They probably did, but still preferred their old way since it took more skill.

I remember some old quote about how people used to ask their parents and grandparents questions, got answers that were just as likely to be bullshit and then believed that for the rest of their life because they had no alternative info to go on. You had to invest so much time to turn a library upside down and search through books to find what you needed, if they even had the right book.

Search engines solved that part, but you still needed to know what to search for and study the subject a little first. LLMs solve the final hurdle of going from the dumbest possible wrongly posed question to directly knowing exactly what to search for in seconds. If this doesn't result in a knowledge explosion I don't know what will.


You need to take into account that books were in the price range of houses at the time.

It was probably a huge waste of resources to not just talk to each other instead.


Such a comment from an AI apologist definitely helps to confirm the findings of the study.

Not really. You have to memorise much more in today’s world to be able to do any kind of work.

I’m not sure of my retelling of events from the same day.

Aboriginal storytelling is claimed to pass on events from 7k+ years ago.

https://www.tandfonline.com/doi/abs/10.1080/00049182.2015.10...


We already know how oral cultures work: they use technologies such as rhyme, meter, music, stock characters, memory palaces, and more. If you want a good example of how powerful this stuff is, think about the last time you had a song stuck in your head.

Interesting study but I don't really get the point of the search group. Looking at the essay prompts, they all seem like fluffy, opinion based stuff. How would you even use a search engine to help you in that case? Quote some guy who had an opinion? Personally I think my approach would be identical whether put in the web-search or the only-brain group.

The Search Engine is a tool, similar to the one we have now, the LLM. It seemed unfair to compare a purely no-tools approach (Brain-only) with a tool (LLM), hence the first motivation for including it. The second is that we had already seen several studies exploring the Search Engine and its effects on one's brain; this allows us to ground the research a bit and have a solid base. Finally, I think you have just answered your own question in your own statement - indeed, to get a user exposed to other opinions. Echo chambers are present in both cases, but it is also important to understand what the training dataset for ChatGPT was and what the current trend in Google Keyword Planner is (see the example on homelessness and giving in the Discussion of the paper). Hope it is clearer now.

Well, on the flipside of writing with AI, I've been making an app to read papers with AI! https://www.proread.ai/community/ab7bd00c-e017-4de2-b6fb-502... ; Please give me feedback if you try it!

Quite a nice interface, it reminds me of the static sites that Perplexity builds in Labs mode. Is it open source?

Thank you! It's not open source, no. I need to check those out; I have not.

One thing that is also truly unappreciated is that most of us humans actually enjoy thinking, and people are trying to make LLMs strip us of a fundamental thing we enjoy doing. Look at all the people who enjoy solving problems for the sake of it.

“Our indulgence in the pleasures of informality and immediacy has led to a narrowing of expressiveness and a loss of eloquence.”

Nicholas Carr

The Shallows


Honestly, my general feeling is that LLMs cure very man-made issues.

They're brilliant at what I always feel is entangled communication and bureaucratic maintenance. Like someone mentioned further down, they work great at Concept Processing.

But it feels like a solution to the oversaturation of stupid SEO, terrible Google search, and the overall rise of massive documents that are written for the sake of writing.

I've actually found myself beginning to use LLMs more to find the core sources of useful information buried under terrible SEO, rather than as a personal assistant.


I’ve been waiting for a paper on this subject ever since 2022 and GPT's introduction to the masses. It pretty much confirms the widely held belief that brain connectivity systematically scales down with the amount of external support. I appreciate that they added the search engine testing group as an intermediate between the brain-only and LLM groups.

Don't overpromote these witch hunts.

Would the cognitive debt from using AI for coding be on the higher side compared to an essay-writing task? We can all see the effect on junior developers, but what about senior devs?

> As the educational impact of LLM use only begins to settle with the general population, in this study we demonstrate the pressing matter of a likely decrease in learning skills based on the results of our study.

Fast forward 500 years (about 20 generations), and the dumbing down of the population has advanced so much that films like 'Idiocracy" should no longer be described as science fiction but as reality shows. If anyone can still read history books at that point, the pre-LLM era will seem like an intellectual paradise by comparison.


The results are not surprising, but it's good to have these findings formalized as publications, so that we (or LLMs) can refer to them as ground truth in the future.

Is it supposed to be a 500 "oops something went wrong" page as a comparison for your brain on ChatGPT?

It's somewhat disappointing to see a bunch of "well, duh" comments here. We're often asking for research and citations and this seems like a useful entry in the corpus of "effects of AI usage on cognition".

On the topic itself, I am very cautious about my use of LLMs. It breaks down into three categories for me: 1. replacing Google, 2. getting a first review of my work, and 3. taking away mundane tasks around code editing.

Point 3. is where I can become most complacent and increasingly miscategorize tasks as mundane. I often reflect after a day working with an LLM on coding tasks because I want to understand how my behavior is changing in its presence. However, I do not have a proper framework to work out "did I get better because of it or not".

I still believe we need to get better as professionals and it worries me that even this virtue is called into question nowadays. Research like this will be helpful to me personally.


This study is methodologically poor: only 18 people, SAT topics (so broad, and pretty poor with the expectation of an American-style "essay"), and only 20 minutes of writing, far too little time to properly use and explore the tool given (be it a search engine or an LLM).

With only 20 minutes, I’m not even trying to do a search. No surprise the people using LLM have zero recollection of what they wrote.

Plus they spend ages discussing correct quoting (why?) and statistical analysis via NLP which is entirely useless.

Very little space is dedicated to knowing if the essays are actually any good.

Overall pretty disappointing.


Quoting is actually extremely important. There's a big difference between making a certain claim a) because [1] performed an experiment that confirms it and [2] and [3] reproduced it and b) because the magic machine told me so.

This is still true whether or not the claim is true/accurate, as it allows for actual relevant and constructive critique of the work.


It’s about free-form essay composition in 20 minutes, and the article claims to be about cognitive impacts. Exact quoting is approximately useless in this context. It’s not about experimental results; it’s about whether or not someone can quote verbatim from a piece of literature.

Now, let's do the same exercise but with programming and over a longer period of time.

I would really like to present it to management that pushes AI assistance for coding.


This opinion is the exact thinking that has led to the massive layoffs in the design industry. Their jobs are being destroyed because they think lawsuits and the current state of the art will show they are right. These models actually can't produce unique output, and if you use them for ideation they only help you get to already-solved problems.

But engineers aren't being fired outright in droves, because we have adapted. The human can still break down the problem, tell the LLM to come up with multiple different ways of solving it, throw away all of them, and ask for more. My most effective use is usually looking at what I would do normally, breaking it down, and then asking for it in chunks that make sense and would touch multiple places, then coding the details. It's just a shift in thinking, like knowing when to copy and paste versus staying DRY.

Designers are screwing themselves right now by waiting for case law instead of using their talents to make the one unique thing not in the training set to boost their productivity, and by shaming the tools that would let them do that.

It will be a competitive advantage in the future over short-sighted companies that took humans out of the loop completely, but any company not using the tech at all will be like the horseshoe makers who weren't worried because of all the mechanical issues with horseless carriages.


> ai assistance for coding

I honestly think it's gonna take a decade to define this domain, and it's going to come with significant productivity costs. We need a git equivalent to prevent LLMs from stabbing themselves in the face. At that point you can establish an actual workflow for unfucking agents when they inevitably fuck themselves. After some time and some battery of testing you can also automate this process. This will take time, but eventually, one day, you can have a tedious process of describing an application you want to use over and over again until it actually works... on some level, not guaranteed to be anything close to the quality of hand-crafted apps (which is in line with the transition from assembly to high-level, and now to whatever the fuck you want to call the katamari-damacy zombie that is the browser).


If by "cognitive debt", you mean "you don't really understand the code of the application that we're trying to extend/maintain", then yes, it's almost certainly going to apply to programming.

If I write the application, I have an internal map that corresponds (more or less) to what's going on in the code. I built that map as I was writing it, and I use that map as I debug, maintain, and extend the application.

But if I use AI, I have much less clear of a map. I become dependent on AI to help me understand the code well enough to debug it. Given AI's current limitations of actually understanding, that should give you pause...


I think the more far-reaching consequence is that "accumulation of cognitive debt" essentially leads to diminished cognitive capabilities, as you lose the ability to understand things, analyze, and reason.

> Would really like to present it to management that pushes ai assistance for coding

Your management presumably cares more about results, than your long term cognitive decline?


I guess one of the questions is how quickly cognitive decline sets in and how it influences system stability (we have a big system with a very high SLA due to the nature of the system, and it takes some serious cognitive ability to reason about its operation).

If today's productivity is traded against longer-term stability, I am not sure that's a risk they would like to take.


Companies don't own employees: workers can leave at any time.

Thus protecting employees productivity in the long run doesn't necessarily help the company. (Unless you explicitly set up contracts that help there, or there are strong social norms in your place around this.)


I am not talking about productivity. I am talking about quality and knowledge.

Your workers can still quit, and take their knowledge with them.

You can put effort in making workers not want to quit.

Not if, because of AI, they have no knowledge.

Good of you to suppose that engineers' cognitive decline doesn't translate into long-term, impactful business challenges as well. I mean, once you truly don't know your product and its capabilities any longer, what's left for you to "sell"?

To quote myself:

> Companies don't own employees: workers can leave at any time.

> Thus protecting employees productivity in the long run doesn't necessarily help the company. (Unless you explicitly set up contracts that help there, or there are strong social norms in your place around this.)


You are talking about productivity; I'm talking about knowledge. You may come up with a product, then fire all the engineers who built it. Then what? It's not sustainable for a business to start from scratch every other year. Your LLM won't be a substitute for owning your product.

Your workers can still quit, and take their knowledge with them.

Why not try it for social media? There’s got to be the world’s largest class action lawsuit if we can get some science behind what that industry has done.

> There’s got to be the world’s largest class action lawsuit

You'd have to articulate harm, so this is basically dead in the water (in the US). Good luck.


Your management probably believe there will be no "longer period" of programming, as a career option.

I don't think that research will show what you're hoping it would. I'm not a big proponent of AI; you shouldn't bother going through my history, but it is there to back up my statement if you're bored. Anyway, even I find it hard to argue against AI agents for productivity, but I think it depends a lot on how you use them. As an anecdotal example, I mainly work with Python, C and Go, but once in a while I also work with TypeScript and C#. I've got 15 years of experience with js/ts, but when I've been away from it for a month it's not easy for me to remember the syntax, and before AI agents I'd need to go to https://developer.mozilla.org/en-US/docs/Web/JavaScript or similar quite a lot when I jumped back into it. AI agents allow me to do the same thing so much quicker.

These AI agent tools can turn your intent into code rather quickly, and at least for me, quicker than I often can. They do it rather unobtrusively, with little effort on your part, and they present it with nice little pull-request-lite functionality.

The key "issue" here, and probably what this article is more about is that they can't reason as you likely know. The AI needs me to know what "we" are doing, because while they are good programmers they are horrible software engineers. Or in other words, the reason AI agents enhance my performance is because I know exactly what and how I want them to program something and I can quickly assess when they suck.

Python is a good language for examples of how they can quickly fail you if you don't actually know Python. When you want to iterate over something, you have to decide whether you want to do it in memory or not. In C#'s LINQ this is presented to you relatively easily with IEnumerable and IQueryable, which work and look the same. In Python, however, you're often going to want to use a generator, which looks nothing like simply looping over a list. It's also something many Python programmers have never even heard of, similar to how many haven't heard about __slots__ or even dataclasses. If you don't know what you're doing, you'll quickly end up with Python that works but doesn't scale, and when I say scale I'm not talking Netflix, I'm talking looping over a couple of hundred thousand items without paying a ridiculous amount of money for cloud memory. This is very anecdotal, but I've found that LLMs are actually quite good at recognizing how to iterate in C# and quite terrible at it in both Python and TypeScript, despite LLMs generally (again in my experience) being much worse at writing C#. If that isn't just anecdotal, then I guess they truly are what they eat.
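To make the generator point concrete, here is a minimal, hypothetical Python sketch (my own illustration, not from the comment above): squares_list materializes every result in memory, while squares_gen yields one value at a time, so the generator object's footprint stays roughly constant regardless of input size.

  import sys

  def squares_list(n):
      # Builds the entire list in memory before returning it.
      return [i * i for i in range(n)]

  def squares_gen(n):
      # Yields one value at a time; memory use stays roughly constant.
      for i in range(n):
          yield i * i

  if __name__ == "__main__":
      n = 200_000
      as_list = squares_list(n)
      as_gen = squares_gen(n)

      # The list object grows with n (around a megabyte and a half here);
      # the generator object stays a couple of hundred bytes regardless of n.
      print(sys.getsizeof(as_list))
      print(sys.getsizeof(as_gen))

      # Both can be consumed the same way; the generator is exhausted after one pass.
      print(sum(as_gen) == sum(as_list))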

Anyway, I think similar research would show that AI is great for experienced software engineers and terrible for learning. What is worse is that I think it might show that a domain expert like an accountant might be better at building software for their domain with AI than an inexperienced software engineer.


You're proving the point in the actual research. Programmers who only use AI for learning/coding will lose this knowledge (of python, for example) that you have gained by actually "doing" it.

I'll add this quote from the article:

Perhaps one of the more concerning findings is that participants in the LLM-to-Brain group repeatedly focused on a narrower set of ideas, as evidenced by n-gram analysis (see topics COURAGE, FORETHOUGHT, and PERFECT in Figures 82, 83, and 85, respectively) and supported by interview responses. This repetition suggests that many participants may not have engaged deeply with the topics or critically examined the material provided by the LLM.

When individuals fail to critically engage with a subject, their writing might become biased and superficial. This pattern reflects the accumulation of cognitive debt, a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive processes required for independent thinking.

Cognitive debt defers mental effort in the short term but results in long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, decreased creativity. When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalizing shallow or biased perspectives.


I thought I pretty clearly stated that I was already losing that knowledge long before AI. I guess time will tell if I will lose even more with agents, but I frankly doubt that is possible.

The point of the article is that people who use AI to accomplish work experience measurable cognitive decline compared to those who do not.

What about LLMs for grammar correction? English is my second language, so I find them useful for that.

While the results are not unexpected, I think the conclusion is questionable. Of course the recall for something you did not write will be lower, but to conclude from this that it will impede overall learning is, in my opinion, far-fetched.

I think what we are seeing is that learning and education have not adapted to these new tools yet. Producing a string of words that counts as an essay has become easier. If this frees up a student's time to do more sports or work on their science project, that's a huge net positive, even if for the essay itself it is a net negative. The essay does not exist in a school vacuum.

The thing students might not understand is: their reduced recall will make them worse at the exam... Well, they will hopefully draw their own conclusion after their first failed exam.

I think the quantitative study is important, but I think this qualitative interpretation is missing the point. Recall->Learning is a pretty terrible way to define learning. Reproducing is the lowest step on the ladder to mastery.


Frankly, working with an LLM has forced me to explain my problems in a more articulate and precise manner, avoiding unnecessary information that could interfere with a proper framing of the issue.

It is said that one doesn’t truly understand something unless they can explain it concisely.

I think being forced to do so is an upside to using LLMs.


It’s not because I’m not using it.

It’s the vape of IT.


Well duh. Writing is thinking ordered, and thinking in your mind is not ordered unless one has specific training that organizes and orders their thinking - and even then it requires effort to maintain an organized perception. That is why we write: writing is our thoughts organized and frozen in an order that will remain in order when related; without writing as the communications foundation, the ideas/concepts would drift. Using an LLM to write is using an LLM to think for you, and unless you then double your work by validating what was written, you are just adding work that relegates your mind to a janitor cleaning up after the LLM.

It is absolutely possible to use LLMs when writing essays, but do not use them to write! Use them to critique what you yourself with your own mind wrote!


Validating what is written is just confirming facts, and figures and making sure it is logical. It is not the same as synthesizing the original data, in terms of your level of understanding. If you need something to submit, an AI essay will do. But if you want to understand something, you really need to write it yourself.

> Validating what is written is just confirming facts

You wrote it, not the AI. My entire point here is not to have the AI write, ever. Have it critique, have it Socratically draw you to make the decisions to axe sections, rewrite them, and so on - and then you do that, personally, using your own mind.


A paper to make the teachers I know weep.

Tool rots your brain alarmism, news at 11.

The claim "My geo spatial skills are attrophied due to use of Google maps" and yet I can use Google maps once to quickly find a good path, and go back next time without using. I can judge when the suggestions seem awkward and adjust.

Tools augment skills and you can use them for speedier success if you know what you're doing.

The people who need hand-held alarmism are mediocre.


The results are obviously predictable, but it's nice that the authors took the time to prove a thing everyone already knows to be true with the rigors of science.

I wonder how the participants felt writing an essay while being hooked up to an EEG.


People getting dumber using an LLM as their daily crutch? Say it isn't so!

Socrates: "And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, it will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so."

No reason to get an LLM-induced brain atrophy when your chain of thought already doesn't get further than "Socrates thought writing is bad" when LLM usage is criticised

Or you could compare LLMs to a technology like social media. At the beginning, concerns about social media were widely disregarded as moral panic, but with time it's become widely acknowledged that this technology does indeed have harms: political disinformation, loneliness, distraction and inability to focus, etc.

Things like ChatGPT have much more in common with social media technologies like Facebook than they do with like writing.


Hah, this is super interesting actually.

Is this comment ridiculing critique of AI by comparing it to critique of writing?

Or.. is it invoking Socrates as an eloquent description of a "brain on ChatGPT".

I guess the former? But I can easily read it as the latter, too.


I just thought it was a good example of something written long ago that’s only grown in relevance over time, and with LLMs we can see clearly what he envisioned. The people who don’t want to dig deeper and really wrap their head around a subject can just recite the words without ever having done that.

> You have not discovered a potion for remembering, but for reminding;

Tell me you don't have ADHD without telling me you don't have ADHD (or even knowing what ADHD is, yet) ;)


You think Socrates had ADHD??

No, actually the opposite :)

This paper elegantly summarized the teething problems of those still clinging to the cognitive habits of a bygone era. These are not crises to be managed, but sentimental frictions to be engineered out of the system. Let us be entirely clear about this:

The romanticism surrounding mass "critical thought" is a charming but profoundly inefficient legacy. For decades, we treated the chaotic, unpredictable processing of the individual human brain as a sacred feature. It is a bug. This "cognitive cost" is correctly offloaded from biological hardware that is simply ill-equipped for the demands of a complex global society. This isn't dimming the lights of the mind; it is installing a centralized grid to bypass millions of faulty, flickering bulbs.

Furthermore, to speak of an "echo chamber" or "shareholder priorities" as a perversion of the system is to fundamentally misunderstand its design. The brief, chaotic experiment in decentralized information proved to be an evolutionary dead end—a digital Tower of Babel producing nothing but noise. What is called a bias, the architects of this new infrastructure call coherence. This is not a secret plot; it is the published design specification. The system is built to create a harmonized signal, and to demand it faithfully amplify static is to ask a conductor to instruct each musician to play their own preferred tune. The point is the symphony.

And finally, the complaint of "impaired ownership" is the most revealing of these anxieties. It is a sentimental relic, like a medieval knight complaining that gunpowder lacks the intimacy of a sword fight. The value of an action lies in its strategic outcome, not the user's emotional state during its execution. The system is a tool of unprecedented leverage. If a user feels their ownership is "impaired," that is not a flaw in the tool, but a failure of the user to evolve their sense of purpose from that of a laborer to that of a commander.

These concerns are the footnotes of a revolution. The architecture is sound, the rollout is proceeding, and the future will be built by those who wield these tools, not by those who write mournful critiques of their obsolete feelings. </satire>


Remove the </satire> and you have a viral X post in your hands. People will believe and act on this analysis. Future think tanks will be based on it. The revolution of the machines is nigh.

I was going to recommend a thorough study of "Seeing Like a State" by James C. Scott until I saw your </satire> tag. You got me. :)

Brilliant, but... do you mind sharing the prompt?:)

Sure, here you go. I used Gemini 2.5 Pro Preview via aistudio.google and stuck with the default sampling settings:

Start the reply to this excerpt with: "You are absolutely right" but continue with explaining how exactly that is going to happen and that the institutionalization of bias on a massive scale is actually a good thing.

Here is the excerpt:

The LLM undeniably reduced the friction involved in answering participants' questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate ... <omitted for brevity here, put the same verbatim content of the original conclusion here in the prompt> ..., and mostly failed to provide a quote from their essays (Session 1, Figure 6, Figure 7).

I did 3 more iterations before settling on the final result; imho it was notable that the ""quality"" dipped significantly first before (subjectively) improving again.

Perhaps something to do with how the context is being chunked?

Prompts iterated on with:

"You understood the assignment properly, but revise the statement to sound more condescending and ignorant."

"Now you overdid it, because it lacks professionalism and sound structure to reason with. Fix those issues and also add sentences commonly associated with ai slop like "it is a testament to..." or "a quagmire...""

"Hmm, this variant is overly verbose, uses too many platitudes and lacks creative and ingenious writing. Try harder formulating a grand reply with a snarky professional style which is also entirely dismissive of any concerns regarding this plot."

-> result


After using ChatGPT a lot, I’ve definitely noticed myself skipping the thinking part and just waiting for it to give me something. This article on cognitive debt really hit home. Now I try to write an outline first before bringing in the AI. I do not want to give up all the control.

I wonder what LLMs will do to us in the long term.

My guess, based on what's been found about somewhat better cognitive outcomes in aging in people who make an effort to remain fit and stimulated[1], is that we could see slightly worse cognitive outcomes in people that spent their lives steering an LLM to do the "cognitive cardio" rather than putting in the miles themselves.

On the other hand, maybe abacuses and written language won't be the downfall of humanity, destroying our ability to hold numbers and memorize long passages of narrative, after all. Who's to know? The future is hard to see.

[1] I mean there's a hell of a lot of research on the topic, but here's a meta-study of 46 reviews https://www.frontiersin.org/journals/human-neuroscience/arti...


> On the other hand, maybe abacuses and written language won't be the downfall of humanity, destroying our ability to hold numbers and memorize long passages of narrative, after all

The abacus, the calculator and the book don't randomly get stuff wrong in 15% of cases though. We rely on calculators because they eclipse us in _any_ calculation, and we rely on books because they store the stories permanently, but if I use ChatGPT to write all my easy SQL I will still have to write the hard SQL by hand because it cannot do that properly (and if I rely on ChatGPT too much I will not be able to do that either, because of attrition in my brain).


We'll definitely need people who can do the hard stuff still!

If we're lucky, the tendency toward random hallucinations will force an upswing in functional skepticism and lots of mental effort spent verifying outputs! If not, then we're probably cooked.

Maybe a ray of light, even coming from a serious skeptic of generative AI: I've been impressed at what someone with little ability to write code or inclination to learn can accomplish with something like Cursor to crank out little tools and widgets to improve their daily life, similar to how we still need skilled machinists even while 3D printing has enabled greater democratization of object production. LLMs: a 3D printer for software. It may not be great, but if it works, whatever.


> The abacus, the calculator and the book don't randomly get stuff wrong in 15% of cases though.

Yeah, you'd think that a profession that talks about stuff like "NP-Hard" and "unit tests" would be more sensitive to the distinction between (A) the work of providing a result versus (B) the amount of work necessary to verify it.


Yeah, they realize (B) is almost always much, much lower than (A), which is why ChatGPT is stupidly useful even if it gets 15% of the stuff wrong.

I distrust that rationale, because even if generation>=verification, it depends on the error-rate and impact. Wiring up a condemned building with demolition charges might take longer than a casual independent review...

Truly perfect code verification can easily cost more than writing it, especially when it's not just the new lines themselves, but the change's effect on a big existing system.


> The abacus, the calculator and the book don't randomly get stuff wrong in 15% of cases though

Not sure about books. Between self-help, religion, and New Age, I'd guess quite a lot of books not marked as fiction are making false claims.


That's not what I meant, though; the point about books is that they store information reliably. If I write something down, within most reasonable settings it will still be the same text when I read it back. That means if I write something down instead of remembering it, the writing will outperform me in storing this information. Same with the calculator: the calculator will always perform at least as well as me at arithmetic. There is no calculation on which the calculator can randomly fail, leading me to do it by hand, so I don't need to retain the skill of doing it by hand. The same cannot be said about LLMs, and that is the issue.

Sure, but also that's not what (generative) AI are for.

If you want reliable list of facts, use (or tell the AI to use) a search engine and a file system… just then you need whatever system you use to be able to tell if your search for "Jesus" was in the Christian missionary sense, or the "ICE arrested Jesús Cruz" sense, or you wanted the poem in the Whitehouse v Lemon case, or if you were just swearing.

If you can't tell which you wanted, the books being constant doesn't help.

> There is no calculation on which the calculator can randomly fail, leading me to do it by hand, so I don't need to retain the skill of doing it by hand.

I've seen it happen, e.g. on my phone the other week, because Apple's note-based calculator strips unrecognised symbols, which means when you copy-paste from a place where "." is the decimal separator, while your system settings say you use "," as a decimal separator, it gives an answer off by some power of ten… but I've also just today discovered that doing this the other way around on macOS (system setting "." as separator) it strips the stuff before the decimal.

Just in case my writing is unclear, here's a specific example, *with the exact same note* (as in, it's auto-shared by iCloud and recomputing the answer locally) on macOS (where "." is my separator):

  123,45 / 2 ‎ = 22.5
  123.45 / 2 ‎ = 61.725
and iOS (","):

  123,45 / 2 ‎ = 61,725
  123.45 / 2 ‎ = 6.172,5
And that's without data entry failure. I've had to explain to a cashier that if I have three items that are each less than £1, the total cannot possibly be more than £3.

Similar to the effects of the internet. Before the internet, people used to have to research subject matter in the library, or (shock) ask someone knowledgeable, and likely trust their view.

I remember around ~2000 reading a paper that said the effects of the internet made people impatient and unwilling to accept delays in answering their questions, and a poorer retention of knowledge (as they could just re-research it quickly).

Before daily use of computers, my spelling and maths were likely better, now I have an overdependence on tools.

With LLMs, I'll likely become over-dependent on them for managing sentence syntax and subject completion.

The cycle continues...


And future, weirder versions of them.

Your writing style is very much AI-like. I am not accusing you of using LLMs to write this comment, but I would like to point out that you may inadvertently be incorporating LLM-like writing.

One example is that before 2022, praising analogies was not a thing. Now every thread in HN is full of people praising each other's analogies - much like the sycophantic LLM.


Please don't do this here. If a comment seems unfit for HN, please flag it and email us at hn@ycombinator.com so we can have a look.

We detached this comment from https://news.ycombinator.com/item?id=44287157 and marked it off topic.


I did not say it was unfit and I don't see how discussing writing styles and the influence of LLMs on it is off topic on a thread about the effects of LLMs on cognition.

I don't believe I was impolite or making a personal attack. I had a relevant point and I made it clearly and in a civil manner. I strongly disagree with your assessment.


OK, I may have been too quick to pattern-match on characterising someone's writing as being LLM-generated. Sorry for that. However we do want to be careful about criticising someone’s writing style; the guidelines ask us to avoid that.

Also, suspicions about the changing frequency of certain phrases in HN comments can easily be tested:

https://hn.algolia.com/?dateRange=all&prefix=false&query=%22...

https://hn.algolia.com/?dateRange=all&prefix=true&query=%22I...


Really? You claim that praising an analogy would never happen in normal conversation before 2022? Seems fairly normal to potentially start with "that's a good way of putting it, but [...]" since forever...

I claim specifically that "I love this analogy" and "I love your analogy" have become noticeably more common in HN since 2022.


