Don't forget exerting control through automated surveillance. What a wonderful tool we have created for detecting whether citizens step out of line without needing giant offices full of analysts.
Well said. It's wild when you think of how many "AI" products are out there that essentially trust an LLM to make the decisions the user would otherwise make: recruitment screening, trading, content creation, investment advice, medical diagnosis, legal review, dating matches, financial planning, and even final hiring decisions.
At some point you have to wonder: is an LLM making your hiring decision really better than rolling a die? At least the die doesn't give you the illusion of rationality; it doesn't generate a neat sounding paragraph "explaining" why candidate A is the obvious choice. The LLM produces content that looks like reasoning but has no actual causal connection to the decision - it mimics explanation without any real substance of causation.
You can argue that humans do the same thing. But for humans, post-hoc reasoning often acts as a feedback loop into the eventual answer. That's not the case for LLMs.
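To make that concrete, here's a toy sketch (plain Python; the canned rationale is a hypothetical stand-in for an LLM's fluent output): the "decision" is a die roll, and the "explanation" is generated afterwards with no causal link to it, yet it reads like reasoning.

    import random

    CANDIDATES = ["candidate A", "candidate B", "candidate C"]

    def pick_candidate():
        # the actual "decision": a die roll, nothing more
        return random.choice(CANDIDATES)

    def explain_choice(candidate):
        # post-hoc rationale, produced independently of how the
        # decision was made (a stand-in for an LLM's fluent paragraph)
        return (candidate + " is the obvious choice given their strong "
                "track record, cultural fit, and growth potential.")

    chosen = pick_candidate()
    print(chosen)
    print(explain_choice(chosen))  # reads like reasoning, but caused nothing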
> it doesn't generate a neat sounding paragraph "explaining" why candidate A is the obvious choice.
Here I will argue that humans do the same thing. For any business of any size, recruitment has been pretty awful in recent history. The end user, that is, the manager the employee will be hired under, is typically a later step after a lot of other filters, some automated, some not.
At the end of the day, the only way to know is to measure the results. Do LLMs produce better hiring results than some outside group?
Also, LLMs seem very good at medical pre-diagnosis. If you accurately portray your symptoms to them they come back with a decent list of possible candidates. In barbaric nations like the US where medical care can easily lead to bankruptcy people are going to use it as a filter to determine if they should go in for a visit.
What initially drew me to David Hume was a quote from his discussions of miracles in "An Enquiry Concerning Human Understanding" (name of chapter is "Of Miracles").
That said, I began with "A Treatise of Human Nature" around the age of 17, translated into my native language (his works are not an easy read in English, IMO), due to my interest in both philosophy and psychology.
If you haven't read them yet, I would certainly recommend them. I would recommend the Treatise in particular, even if you are not interested in psychology (but may be interested in epistemology, philosophy of mind, and/or ethics), as he gets into detail about his "impressions" vs "ideas" distinction.
Additionally, he is famous for his "problem of induction", which you may already be familiar with.
You know how many old sci-fi settings pictured aliens as bipedal furry animals or lizards? Even going from that to realistically intelligent swarms of insects is already difficult.
(Of course, there’s plenty of sci-fi where conscious entities manifest themselves as abstract balls of pure energy or the like; except for some reason those balls still think in the same way we do, get assigned the same motivations, sometimes even speak our language, etc., which makes it, in a way, even less realistic than the walking and talking human-cat hybrid you’d see in Elder Scrolls.)
Whenever we ponder questions of intelligence and consciousness, the same pitfall awaits.
Since we don’t have an objective definition of consciousness or intelligence (and in all likelihood we can’t have one, because any formal attempt wouldn’t get very far, being made by the very thing it tries to define), the only definition that makes sense is, in crude language, “something like what we are”. There’s a vague feeling that it has to do with free will, self-awareness, etc.; but all of that is also shaped by the fact that we are parts of one big figurative anthill. Assuming your sense of self only arises as you model yourself against the other (starting with your parents/caretakers and on), a standalone human that evolved in an emptiness without others could not be self-aware in the way we are, i.e., it would not possess human intelligence; and that is supported by our natural-scientific observations, which reject the possibility of a being of this shape and form ever evolving on its own in the first place.
In other words, the more different some kind of intelligence is from ours, the less it would look like intelligence to us—which makes the search for alien intelligence in space somewhat tragically futile (if it exists, we wouldn’t recognize it unless it just happens to be like us), but opens up exciting opportunities for finding alien but not-too-alien intelligence right on this planet (almost Douglas Adams style, minus dolphins speaking English).
There’s an extra trick when it comes to LLMs. In the case of alien life, it is all but impossible for a radically different kind of consciousness to produce output that closely mimics our own (if our prior assumption is correct, then for all intents and purposes a truly alien, non-meatbag-scale kind of intelligence might not be able to recognize ours in the first place, just as we wouldn’t recognize alien intelligence). LLMs, however, are designed to mimic the most social aspect of our behavior: our communication aimed at fellow humans. So when an LLM produces sufficiently human-like output, even if it has a very different kind of consciousness[0] or no consciousness at all (more likely, though as we concluded above we can’t distinguish between the two cases anyway), our minds are primed to see it as a manifestation of what would have to be human-like intelligence. That’s despite nothing suggesting such a thing judging by the way it’s created (which is radically different from the way we’ve been creating intelligent life so far, wink-wink), by the substrate it runs on, if not by the way it actually works (which, per our conclusion above, we might never be able to conclusively determine about our own minds without resorting to unfalsifiable philosophical assumptions for at least some aspects of it).
So yes, I’d say humans are special, if nothing else then because by the only usable (if somewhat circular) definition of what we are there’s absolutely nothing like us around, and in all likelihood can never be. (That’s not to say that something not like us isn’t special in its own way—I mean, think of the dolphins!—but given we, due to not being it, would not be able to properly understand it, it just never hits the same.)
[0] Which if true would be completely asocial (given it neither exists in groups nor depends on others for survival) and therefore drastically different from ours.
Well, most sci-fi still fits the bill. Vinge is a bit interesting in that he plays around with the idea: with the Tines, where an “individual” (in the human sense) is a pack of five of them[0]; with civilizations that “transcend”, after which no one has any idea what they are about anymore; and with a bunch of civilizations that evolved from humans, which explains why they all just happen to operate on the same human meatbag scale.
[0] Genuinely not unlike how a congregation of gelled-together humans is an entity that can achieve much more than an individual human.
To makeitdouble's point, how is this any different with an LLM-provided solution? What confidence do you have that it isn't also the beginning of an unlimited game of bug-testing whack-a-mole?
My confidence in LLMs is not that high and I use Claude a lot. The limitations are very apparent very quickly. They're great for simple refactors and doing some busy work, but if you're refactoring something you're too afraid to do by hand then I fear you've simply deferred responsibility to the LLM - assuming it will understand the code better than you do, which seems foolhardy.
Anyone truly considering this should weigh this post against the timeless wisdom in Joel Spolsky's seminal piece, 'Things You Should Never Do'[1]. Rewriting from scratch can often be a very costly mistake. Granted, it's not as simple as "never do this", but it's not a decision one should make lightly.
The last rewrite I've seen completed (which was justified up to a point, as the previous system had some massive issues) took 3 years and burned down practically an entire org that had been healthy-ish and productive before the rewrite: multiple people left, some were managed out (including two leads), and the director was ejected after 18-ish months. It's still causing operational pain and does not fully cover all edge cases.
I'm seeing another one now at $current_job with similar symptoms (though the system being rewritten is far less important): customers of the old system have essentially been abandoned to themselves, and marketing and sales are scrambling to retain them.
So my anecdotal experience is not great. Rewriting a tiny component? OK. A full-on rewrite of a big system? I feel it's a bad idea, and the wisdom holds true.
Spot on. It seems that OP is considering (1) a rewrite that can fit entirely into the mind of a single engineer XYZ, and (2) one that will be led by that same engineer XYZ, through executive empowerment.
I'd guess that in your case (1) did not hold, or maybe (2) did not hold, or both.
OP's experiment doesn't at all prove that an entire org can rewrite a complex app where (1) and (2) do not hold. Every indication we have is that an org's executive functions perform abysmally at writing (and rewriting) code. So, exactly the point you are making. It would obviously mean that there is value in the code, alongside the value in the org, once we get above the level of what conceptually fits into one head.
As a teenager I used to revel in explaining to religious people that I believe humans are actually just the evolutionary step between biological life and machine life.
It’s a belief about a great future change, but there’s nothing supernatural or totally implausible about it. And it doesn’t sound like they were preaching it as the absolute truth, but were open that it was just their belief. Also, no social rites or rituals mean that despite them telling it to people who didn’t care to hear it, I am not convinced that their belief was very religious.
Also, “As a teenager” implies more self-awareness than you seem to give them credit for.
More broadly—and at least in online spaces—I often notice that many vocal proponents of atheism exhibit traits typically associated with religious behaviour:
- a tendency to proselytise
- a stubborn unwillingness to genuinely engage with opposing views
- the use of memes and in-jokes as if they were profound arguments
- an almost reverential attitude toward certain past figures
There’s more, but I really ought to get on with work.
That's assuming I actually believed it, rather than just reveling in the reactions from religious people. It's a fun scenario that would get immediately rejected: most wouldn't even entertain the idea, and often found it completely abhorrent. Provoking discomfort was entertaining for teenage me.
Had a similar experience where a Sydney AirBnB listed their property as air-conditioned. Had a misting fan. That doesn't quite cut it as "air conditioning" in 45C weather. Had the same outcome as you. Ended up doing a chargeback on my card and got banned from the platform.
I find myself both:
- writing a comment so that Copilot knows what to do
- letting Copilot write my comment when it knows what I did
I'm now a lot more reliable with my comment writing.
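For what it's worth, the first pattern looks something like this (a made-up Python example, nothing Copilot-specific): the comment spells out the intent, and the body below it is what you'd hope the completion produces.

    from datetime import date

    # Parse an ISO-8601 date string and return the number of whole days
    # until that date, clamped to zero if the date is already past.
    # (The comment is written first; the body is what Copilot fills in.)
    def days_until(iso_date: str) -> int:
        target = date.fromisoformat(iso_date)
        return max((target - date.today()).days, 0)

    print(days_until("2030-01-01"))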