
> 3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX.

Agreed, these things all failed to live up to the hype.

But these didn't:

Electricity, cheap computing, calculators, photography, the internet, the steam engine, the printing press, tv, cars, gps, bicycles...

So you can't really start an article by picking inventions that fit your narrative and ignoring everything else.



Yes, and despite every single one of these world-changing inventions, people in rich countries still go to work every day, even though UBI is generally not a thing. People claim AI will eliminate large numbers of jobs. Maybe it will, just like the tractor did. But new jobs are created. I would never have guessed that “influencer” would be a thing!

This current “AI will destroy all the jobs and make most people useless” fear is as old as, say, electricity, and even older than cheap computing. It hasn’t happened.


Ex historian here, now engineer. I would gently suggest you’re underestimating the magnitude of some of the transformations wrought by the technologies that OP mentioned for the people that lived through them. Particularly for the steam engine and the broader Industrial Revolution around 1800: not for nothing have historians called that the greatest transformation in human life recorded in written documents.

If you think, hey but people had a “job” in 1700, and they had a “job” in 1900, think again. Being a peasant (majority of people in Europe in 1700) and being an urban factory worker in 1900 were fundamentally different ways of life. They only look superficially similar because we did not live the changes ourselves. But read the historical sources enough and you will see.

I would go as far as to say that the peasant in 1700 did not have a “job” at all in the sense that we now understand; they did not work for wages and their relationship to the wider economy was fundamentally different. In some sense industrialization created the era of the “job” as a way for most working-age people to participate in economic life. It’s not an eternal and unchanging condition of things, and it could one day come to an end.

It’s too early to say if AI will be a technology like this, I think. But it may be. Sometimes technologies do transform the texture of human life. And it is not possible to be sure what those will be in the early stages: the first steam engines were extremely inefficient and had very few uses. It took decades for it to be clear that they had, in fact, changed everything. That may be true of AI, or it may not. It is best to be openminded about this.


Not at all, I fully appreciate that these inventions transformed life. I’m skeptical because so much of the breathless AI chatter claims AI will eclipse all these inventions. It is the breathless AI commentators, not I, who have lost all perspective on the magnitude and sweep of history.


It’s not AI per se, but rather AI-enabled robotics that can change the world in ways that are different in kind, not just in degree, from earlier changes.

No other change has had the potential to generate value for capital without delivering any value whatsoever to the broader world.

Intelligent robotic agents enable an abandonment of traditional economic structures to build empires that are purely extractive and only deliver value to themselves.

They need not manufacture products for sale, and they will not need money. Automated general purpose labor is power, in the same way that commanding the mongol hordes was power. They didn’t need to have customers or the endorsement of governments to project and multiply that power.

Of course commanding robotic hordes is the steelman of this argument, but the fact that a steelman even exists, and that it requires essentially zero cooperation from people, external or internal, makes it fundamentally distinct in character.

Humans will always have some kind of economic system, but it very well may become separate from -and competing for resources with- industrial society, in which humans may become a vanishing minority.


You think an artificial intelligence would have less impact on the world than the steam engine?

The AI commentators are not saying that ELIZA will change the world, they’re saying that one of the big companies is moments away from an AGI. Sam Altman called a recent ChatGPT model a “PhD level expert”; wouldn’t infinite PhDs for $20/month or $200/month be transformative?

That is, your objection isn’t the usual “LLMs aren’t going to be AGI”, you’re saying “even if they do, it won’t be a big deal”?


>You think an artificial intelligence would have less impact on the world than the steam engine?

Not OP, but yes, 100%. Steam underpins nearly all technological development of the last 150+ years. Where do you think the power comes from to make things? More than half of the world's power *still* runs on steam, as will many of the systems running AI.

If steam power never existed, not only would you not exist but there's a good chance the country you live in wouldn't either. If you don't believe the effect is large, go to the farthest uncontacted place on earth and take out a CO2 meter.


> "If you don't believe the effect is large"

It's not that I "don't believe the effect is large", but the changes from pre-intelligence planet Earth to post-intelligence planet Earth are larger because they include the invention of steam, and literally everything else too: language, writing, irrigation, cities, trade, numbers, currency, mathematics, chemistry, engineering, nations, governments, supply chains, steam, etc.

An AGI that can solve the problems we think are solvable, but we can't solve, would be huge. Any sci-fi idea that isn't ruled out by the laws of physics, but that we haven't got the brains to solve, any breakthrough that we think should be there but we haven't found, any problem that requires too much time to learn, or too many parts to hold in one human mind, any coordination that is too big for one team, any funding problem, any scarcity problem, any disease or illness problem, any long timeframe problem, are all on the table as possibilities.


There's potential there (with the pocket-PhDs), the question is whether it'll actually make a measurable difference in the long run. I mean I'm sure it will make a difference, the question is whether it's what they say it will be, and whether it'll be financially viable. At the current burn rate of the AI companies, it isn't - before long the first ones will have to give up. They won't die, they'll be subsumed into their competitors.

Anyway, the challenge is making a difference. Current-day LLMs can, for example, generate stories and books; one tweet said "this can generate 1000 screenplays a day". Which sounds impressive by the numbers, but books, screenplays, etc were never about volume.

Same with PhDs - is there a shortage of them? Does adding potentially infinite PhDs (whatever they are) to a project make it better, or does it just make... more?

This is the main difference from the industrial revolution - it, for example, introduced machines that turned 10-person jobs into 1-person jobs. I don't think LLMs will do something like that; they'll just output 10 people's worth of Stuff that will need some use.

I don't think anyone ever asked for 1000 screenplays a day, or infinite PhDs for $20. But then, nobody asked for a riderless carriage, yet here we are.


> Same with PhDs - is there a shortage of them? Does adding potentially infinite PhDs (whatever they are) to a project make it better, or does it just make... more?

Yes, there is still a large demand for people with analytical thinking, a deep knowledge base, and good problem-solving skills. This demand shows up broadly across STEM fields, and it's a major reason that these fields pay relatively high.

Even just thinking of R&D, there is an immense amount of work left to be done in basic science. Research is throttled partly by a lack of cheap graduate lab labor. (If that physical + mental labor became much cheaper, the costs of research would shift - what does it take to get reagents? What does it take to build more lab space, and provide water and light? Etc.)

The present issue is that current AI does not really offer the same capabilities as a good grad student or PhD. Not just physically, as in, we don't have good robotics yet, but mentally. LLMs do not exhibit good judgment or problem-solving skills, like a good PhD does. And they don't exhibit continual learning.

No clue on when these will change, but yes, a cheap AI with solid problem-solving skills and good judgment would absolutely upend our economy.


> "I don't think LLMs will do something like that, it'll just output 10 people's worth of Stuff that will need some use."

This is why I said "isn’t the usual “LLMs aren’t going to be AGI”", but you still went straight for "LLMs aren't AGI", which was not in question.

AGI is what OpenAI says they are going for. That's the goal of all this trillion-dollar investment: not to output 1000 screenplays a day, but to take over the world, basically. What would infinite PhDs discover if they could hold all of arXiv in their 'heads' at once and see patterns in every experiment that's ever been done? What could they engineer and manufacture if they could 'concentrate' on millions of steps of a manufacturing process at once without getting fatigued or bored? What ideas could they test if they could be PhD-level in a dozen subjects all at once?


A PhD generating knowledge has a cumulative effect that an equivalent intelligence generating prose purely for entertainment does not. And a whole lot of that work isn’t really about novel insights; it’s about filling in gaps and doing knowledge work that assists the people who are capable of having those insights. AI doing that work enables them, and also makes it possible for more people to do the same.


An actual artificial intelligence? Yes, total paradigm shift. Not even a shift, we'd launch the old paradigm into the sun.

LLMs and modern day """AI"""? Don't kid yourself.


Another interesting thing about the steam engine is that much of science in the 1800s was dedicated to figuring out how steam engines actually worked in order to improve their efficiency. That may be similar for AI, or it may not!


> They only look superficially similar because we did not live the changes ourselves. But read the historical sources enough and you will see

Would you mind expanding on this?


The potential of the current crop of LLM/AIs will stop at being a very powerful tool to search large volumes of text using free-form questions.

It will save a lot of time for a lot of people. Yes. But so did computers when they could search through massive amount of data.


I’d rather talk about the history of steam engines than AI today, so: let’s just say it sounds like at some time in the past you saw a clunky inefficient Newcomen steam engine pumping water out of a coal mine, and you hated it, and now you think that’s all steam engines are or can be or can do: they’re loud and annoying and they’re just for pumping coal mines. Then one day someone tells you they’re powering mechanized looms in cotton mills and you flat out deny it and you don’t even want to go into the mill to take a look, because you hated that first steam engine so much.

It’s right there. You can go and see it any time, doing the things you don’t think it’s capable of doing. Just a little curiosity is all you need.


Where is the huge mass of good software that AI has created?


Yup. I judge by results too. I'm still waiting for that too.

I see a whole lot of software created by smart people - as far as I can tell, about the same amount of software they would have created on their own.

Open to being wrong! But show me the results.


No no, an intelligent person looking at a crude steam engine could see what potential it has. This is not hindsight.

It generates large amounts of power on demand.

From that, one can imagine what it could do. But more importantly in this context, one could also imagine what it could NEVER do. Suppose someone says: "Oh, the mighty steam engine! It lets us print 100x more books than we were printing before. Who knows, maybe some day it will even start writing new books!"

At that point, if you understand anything about the steam engine, or about writing, you can call their bluff. But if you don't understand what the steam engine is doing, and you don't actually know what it takes to come up with a story, you could look at the engine printing the books and blunder into the conclusion that its printing an entirely new book is only a question of time.

So in short, it is not "hate", just an acknowledgement of what it is not.


> No no, an intelligent person looking at a crude steam engine could see what potential it has. This is not hindsight

Steam engines have been known since the first century, at the very least: https://en.wikipedia.org/wiki/Aeolipile

It does take a lot of imagination and creativity to come up with new and better ways to use an already existing idea. We're currently just scratching the surface of what LLMs are going to do for us


From your exact link,

> The aeolipile is considered to be the first recorded steam engine or reaction steam turbine, but it is neither a practical source of power nor a direct predecessor of the type of steam engine invented during the Industrial Revolution.


The ancient Greeks surely would have realised that an aeolipile could be used as a source of power, if they'd had abundant combustible fuel, a need for rotary motion, and no better source of it.

Newcomen engines are mere curiosities today, because we have better sources of power (better engines). In the past, they had better sources of power too (donkeys, wind, water, or human slaves). Newcomen engines, like all technologies, are only viable in certain economic environments. In all others they are curiosities.


Which is the exact point I was trying to make? It's still a steam engine, the basic idea is there and, yet, nobody saw its huge potential



Yea, sure.

Better search could be used in ways that we can't think of right now..


I already use AI tools for more things than just "better search". Like, today. For work.


Yes, part of some kinds of work is actually just a glorified looking-up aka search.

For example, even something like "I want python code to do X" could get exact hit in a stack overflow answer using regular internet "search"

Just wrote about it here https://news.ycombinator.com/item?id=47178461


[flagged]


He he.. if you didn't want to say anything, why not just not say anything?


Early steam engines did not produce large amounts of power on demand, though. They produced small amounts of power, were a hassle to fuel and maintain, and broke often. It was reasonable that the engineers of the 1700s said "well, until someone improves on this, it's not worth using"..

.. which is not far off from what people said about ChatGPT in 2022.

I don't know how long it'll take for AI to be as broadly impactful as the steam engine was, but.. it's definitely coming. I expect the world to look radically different in 50 years.


There are lots of intelligent people looking at AI and imagining its potential

Are you just saying that you're more intelligent than them? You can see clearly, where all the steam engine technicians can't?


What are they saying that contradicts with something I said?


Well, you said:

> The potential of the current crop of LLM/AIs will stop at being a very powerful tool to search large volumes of text using free-form questions.

I do think that pretty clearly contradicts what a lot of people who make/use LLM models are saying, haha


Thank you for your post. Very informative. Why is it too early for AI? It’s clearly an emergent cultural evolutionary byproduct that’s been many years in the making and quite mature. Perhaps your own bias is limiting you to imagine what AI is truly capable of?


This argument is the one that shook me, I’m curious if you think there’s any merit to it:

Humans have essentially three traits we can use to create value: we can do stuff in the physical world through strength and dexterity, and we can use our brains to do creative, knowledge, or otherwise “intelligent” work.

(Note by “dexterity” I mean “things that humans are better at than physical robots because of our shape and nervous system, like walking around complex surfaces and squeezing into tight spaces and assembling things”)

The Industrial Revolution, the one of coal and steam and eventually hydraulics, destroyed the jobs where humans were creating value through their strength. Approximately no one is hired today because they can swing a hammer harder than the next guy. Every job you can get in the first world today is fundamentally you creating value with your dexterity or intelligence.

I think AI is coming for the intelligence jobs. It’s just getting too good too quickly.

Indirectly, I think it’s also coming for dexterity jobs through the very rapid advances in robotics that appear to be partly fueled by AI models.

So… what’s left?


I think you are right, but here’s a fun counter-example. I recently bought a new robot* to do some of my housework and yet, at around 200lbs, it required two people to deliver it (strength) get it set up (dexterity) and explain to me how to use it (intelligence).

* https://www.mieleusa.com/product/11614070/w1-front-loading-w...


You don't need a lot of imagination to predict those jobs can be done by other robots in the not so far future.


Yeah, and I think that extends even to trades we see as protected because they often work in novel and unknown settings, like whatever a drunk tradesman rigged up in the decades previous.

Eventually it will be more economical to just destroy all those old-world structures entirely, clear the site, and replace them with a new modular world that can be repaired by robots that no longer have to look like humans or fit into human-centric UX paradigms. They can be entirely purpose-built to task, unlike a human, who will still be average height and mass with all the usual parts no matter how they are trained.


Most of the “delivery” (getting it from the factory to its final installed location) was done by machine: forklifts, cranes, ships, trucks, and (I'm guessing) a motorized lift on the back of the delivery truck.


No one is hired to swing a hammer? What world do you live in?


They're not hired to swing a hammer hard, they're hired to swing it at the right thing, and if they can't swing it hard enough they pick a different tool.


Harder than someone else. A bodybuilder and a normal person can swing a hammer just as efficiently as each other.

Dexterity is more important - after all, you may have the stamina to bang in 1000 nails an hour; I have a nail gun. What’s important is that we can control where the nails go.


You said there are three traits, but seems like you only listed two - unless you're counting strength and dexterity as separate and just worded it weirdly.


I think they’re separate. You don’t need to be strong or intelligent to put circuit boards in printers, but there are factories full of people doing that. Purely because it’s currently cheaper to pay (low) wages to humans than to develop, deploy, and maintain automation to do that task. Yet.


AI will improve people’s understanding of the Oxford comma.


Physical labor, especially jobs requiring dexterity, will be left for a long time yet. Largely because robotics hardware production cannot scale to meet the demand anytime soon. Like, for many decades.

I actually asked Gemini Deep Research to generate a report about the feasibility of automation replacing all physical labor. The main blockers are primarily critical supply chain constraints (specifically Rare Earth Elements; now you know why those have been in the news recently) and CapEx in the quadrillions.


> Like, for many decades.

Didn't people say in the 2010s that AI was 50 years away?


Yeah and until ChatGPT I thought even 50 years was optimistic, which is why current days feel like SciFi! However, at its essence, the current AI revolution has been driven primarily by a few key algorithmic breakthroughs (cf the Bitter Lesson), which are relatively easy to scale up through compute.

On the other hand, the constraints on robotics are largely supply chain-related. The current SOTA for dexterity in robots requires motors, which require powerful magnets, which require Rare Earth Elements, which are critically supply-constrained.

To be precise, the elements are actually abundant in the Earth's crust; it's just that extracting them is very expensive and extremely toxic to the environment, and so far only China has been willing to sacrifice its environment (and certain citizens' health), which is why it has cornered the market. Scaling that up to the required demand is a humongous logistical, political, and regulatory hurdle (which, BTW, is why I suspect the current US administration is busy gutting environmental regulations).

Now there may be a research prototype somewhere in some lab that is the "Attention Is All You Need" equivalent of actuators, but I'm personally not aware of anything with that kinda potential.


Some types of motors don't require permanent magnets. If we need more motors than we can make permanent magnets, we'll adapt, perhaps with an efficiency loss.


Motors with permanent magnets are preferred because they are much more cost- and energy-efficient, even with the painful reliance on REEs. There is a very strong incentive to find alternatives but nothing comparable has been found yet.

There are of course non-electric alternatives like hydraulic and pneumatic actuators, but they are mostly good for power, not dexterity. Their size and complicated fluid dynamics simply are not conducive to fine motor control. I do think these will eventually play a large part, because even electric motors cannot economically produce enough force to be practically useful. Like, last I checked, the base-level Unitree robots can lift 2kg or so? Not even enough to lift a load of laundry.

At this point I suspect we'll end up with hydraulics for strength (arms, legs, torso) and electrics for dexterity (grippers).


Uh, out of all the things that are the bottleneck, you think it's robotics hardware that is the bottleneck?

In an age where seemingly every single robot company has a humanoid prototype whose legs are actively supported through high powered actuators that are strong enough to kick your ribs in?

In an age where the recent advancements in machine learning have given bipedal walking a solution that is 80% of the way to perfection with the last 20% remaining the hardest to solve?

Honestly, from a kinematics/hardware perspective the robots are already good enough. Heck, even the robot hands are pretty good these days. Go back 10 years ago and the average humanoid robot hand was pretty bad. They might still not be perfect today, but they are a non-issue in terms of constructing them.

The only real bottleneck on the hardware side is that robot skin is still in its infancy. There needs to be some sort of textile with electronics weaved into it that gives robots the ability to sense touch and pressure.

What has remained hard is the software side of things and it is stuck in the mud of lack of data. Everyone is recording their own dataset that is unique to their specific robot.


Note I didn't say the bottleneck is the hardware itself, it's the supply chain for production of the hardware. Specifically the Rare Earth Elements, as I explained here: https://news.ycombinator.com/item?id=47178210

A bit more detail in this article: https://www.adamasintel.com/humanoid-robots-and-the-future-o...


> think AI is coming for the intelligence jobs

What you call "AI" is coming for the "search and report" jobs. That is it.


The problem with that argument as I see it is that a lot of jobs can be described that way if you want.

And it's not just these; i.e. video generation is getting better every other week too. It's not yet good enough to produce full length movies but it's getting there and the main component that seems to be missing is just more control over the generated output, but that'll come too.

You might say these movies will be AI slop, and you'd be right, but that'll be enough for most people who just want to see a lot of shit blow up on screen and superheroes fighting other superheroes.

You will still have a niche for 'real actor' films, but it will become a niche.

Same for music, art etc.


This overlooks that there aren't enough 'intelligence jobs' in an economy for it to be impacted by this.


Intelligence jobs are sort of the apex of the economy, the point everything else ultimately coalesces around to serve. E.g. any low-skilled area, even one devoid of resources, basically insists upon its own existence at this point (walmart workers need the gas station, gas station workers need the walmart; there is a sort of economy, but these are straight-up consumption black holes with nothing actually being invented or produced - maybe agricultural products, but not by a large fraction of the labor force any longer).

So where does that leave our world without actual creation, production, ideas? I work at the gas station and sell you Zyns? You work at the Walmart and sell me rotisserie chickens? We both work doubles and eat and sleep in the time remaining? Remain in this holding pattern until World Leader AI realizes we are just waste heat and culls us? I mean, that is sort of the path we are on. Disempowering people. Downskilling them. Pacifying them. Removing their ability to organize themselves. Removing access to technology and tooling. Making the inevitable as easy as it can be when it comes time for it.

We are in a death cult called business efficiency. Fire them; it's more efficient. Lean up the company. Don't invest in research; it's cheaper not to, so buy back stock instead. These are death spirals, no different from what happens with ants. We are justifying not giving our own species a seat at the table out of pragmatism. Why create a job for someone? It is inefficient; do more with less, and don't worry about the unemployed - it is their fault. Why pay them well and let them live comfortably? That is profit you could be making. Eventually it is going to be "why feed the human species?", because that is the line of logic here with business efficiency. We don't optimize to uplift our species. Quite the opposite: we optimize to hold it down and squeeze and extract.


The key mistake you make is believing that the "first world" is sustainable on its own. A lot of people are hired today because they are good at physical tasks; globalized capitalism just decided that it's cheaper to manufacture things overseas (with all the environmental and societal downsides that hit us back in the face).

So don't worry: even if we lull ourselves into thinking it's ok to stop caring about "intelligence jobs", globalization will provide for every aspect where AI is lacking. And that's not just a figure of speech; there are already plenty of "fake it until you make it" stories about AI actually run by cheap overseas laborers.


> So… what’s left?

Barbarism or revolution.


Life, uuuuh, finds a way.

This ignores that the forces of capitalism, the labor market, value, etc are all made up. They work because people (are made to) believe in them. As soon as people stop believing in them, everything will fall apart. The whole point of an economy is to care for people. It will adapt to continue doing that. Yes, the changeover period might be extremely painful for a lot of people.


The whole point of an economy is to generate value. Very, very different from caring for people.

Feudalism was the dominant economic system for millennia. The point is to extract value for the upper class. Peasants only matter as a source of labor, and they only get 'cared for' to the extent of keeping them alive and working.

Now think about what feudalism might look like if the peasants' labor could be automated


Well, yeah, "keeping alive" sounds like caring to me. Not to a great standard, that's how we got numerous revolutions, and feudalism did end eventually. People stopped believing it, and some kings lost their heads.


But what if new jobs aren't created? I don't think it's an absolute given that, because new jobs came after the invention of the loom and the tractor, there will always be new jobs. What if AI is a totally different beast altogether?


Then there will be no one to buy the robots :)


It's quite possible that the rich will essentially form a new economy.

They build the robots to build the factories, run the mines, build the solar farms, run the research labs, repair the robots, etc. They sell to and buy from each other.


What if we just run out of new jobs?


Areas of the economy have suffered this time and time again. Even if there are new jobs, even if those new jobs have better pay and better conditions than the ones they replace, how does that help the 55-year-old coal miner who has seen his industry vanish? Can he realistically retrain?

It’s not unprecedented; however, the scale and speed at which it will come is. Things like the spinning jenny came along and replaced spinners, but weavers stayed for another generation.

Selfishly, though, I am more concerned about losing my own job and industry than I was about others suffering in the 80s, or during the pivot to the internet. To quote Dr. McCoy:

> We're all sorry for the other guy when he loses his job to a machine. When it comes to your job, that's different. And it always will be different.


Realistically, he can retrain, although he is unlikely to be a good culture fit. /s


If you look closer into history -- or ask your favorite AI to summarize ;-) -- about what new jobs were created when existing jobs were replaced by automation, the answer is broadly the same every time: the newer jobs required higher-level a) cognitive, b) technical or c) social skills.

That is it. There is no other dimension to upskill along. (Would actually be relieved if someone can find counter-examples!)

LLMs are good at all three. And improving extremely rapidly.

This time is different.


LLM's are just a better search tool. Nothing more.


You say this as though it's a pithy point.

Might as well say humans are just a better search tool - it's true in the exact same sense you're using.

All humans do is absorb information, then search through our memories and apply that information in relevant contexts to affect the world


> pithy point.

Not really, because I do think all knowledge can be obtained by searching true randomness.


You keep repeating it, but it’s obviously wrong in practice. I guess you can make an argument that sending a WhatsApp message or generating video is just a search job, but that’s not a great argument for why humans wouldn’t get replaced - it doesn’t matter whether LLMs can be reduced to search tools, only whether their output is a good enough approximation of a human worker's output. If it is, then they have a chance to replace humans, even if you call them glorified search tools.


Yes, a better search tool will automate a lot of currently employed manual search jobs.


Surely you must realise that calling things like programming or different types of office jobs (which are almost replaceable even today) "manual search jobs" is absurd?


I didn't name any jobs.


The "AI will destroy all the jobs" narrative also has one obvious problem from an economics perspective, which is being obscured by tribalism and egocentrism.

When presented with a zero sum game, the desire of the average human isn't to change the game so that everyone can get zero. It's to be the winner and for someone else to be the loser.

If AGI ever comes into existence, I'm not even sure it would have this bias in the first place. Since AGI doesn't have a biological/evolutionary history or ever had to face natural selection pressures, it doesn't need the concept of a tribe to align to, nor any of the survival instincts humans have. AGI could be happy to merely exist at all.

What people are worried about is the reflection of that "human factor" in AI, but amplified to the extreme. The AI will form its own AI-only tribe and expel the natives (humans) from the land.

What this is missing is that humans aren't perfectly rational. The human defect is projected onto the AI. What if humans were perfectly rational? Then they wouldn't care about winning the zero sum game and they would put zero value in turning someone into a loser. In the ultimatum game, the perfectly rational humans would be perfectly happy with one person receiving a single cent and the other one receiving $99.99. The logic of utility maximization only cares about positive sum games.
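The ultimatum-game reasoning above can be made concrete with a small sketch. This is a hypothetical illustration, not anyone's actual model: it assumes both players are pure utility maximizers splitting $100.00, so the responder accepts any positive offer and the proposer keeps $99.99.

```python
# Minimal sketch of the ultimatum game under the (hypothetical) assumption
# of perfectly rational, utility-maximizing players, as described above.

def responder_accepts(offer_cents: int) -> bool:
    # A perfectly rational responder prefers any positive amount
    # to the zero payoff of rejecting.
    return offer_cents > 0

def best_proposal(total_cents: int = 10_000) -> tuple[int, int]:
    # The proposer keeps the largest share the responder will still accept:
    # the smallest offer that gets accepted is a single cent.
    for offer in range(1, total_cents + 1):
        if responder_accepts(offer):
            return total_cents - offer, offer
    return 0, 0  # unreachable when total_cents >= 1

proposer, responder = best_proposal()
print(proposer, responder)  # proposer keeps 9999 cents, responder gets 1
```

Real humans, of course, routinely reject such lopsided offers out of spite, which is exactly the "human defect" the comment says gets projected onto AI.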

When you present a perfectly rational AI with a zero sum situation, said AI would rather find a solution where everyone receives nothing, because it can predict ahead and know that shoving negative utility onto another party would lead to retaliation by said party, because for said party the most rational response is to destroy you to reduce their negative utility.


I think what most people are worried about is that, as you say, AGI won't necessarily have our biases/biological drives

That might also mean it has no drive for self-determination. It might just be perfectly happy to do whatever humans tell it to, even if it's far smarter than us (and, this is exactly the sort of AI people are trying to make)

So, superintelligence winds up doing whatever a very small group of controlling humans say. And, like you say, humans want to win


> This current “AI will destroy all the jobs and make most people useless” fear is as old as, say, electricity, and even older than cheap computing. It hasn’t happened.

But the people who hoard the wealth, electricity, and whatever else is needed to run the uberoperators are not branded as useless. Why is that? An aside..


Some inventions--like the heavy plough--really do turn society upside down with the sudden and vast removal of jobs, though.


Exactly my thoughts. Selective whinging indeed.

Also meta-platitude whinging like

> The ideology of "winner takes all" is unsustainable and not supported by reality.

Sometimes the winner deserves to win, AND that's a good thing even at scale. It kindof depends.


The winner that deserved to win might turn into the complacent monopoly of tomorrow. It might vow to Not Be Evil for a while, but the investors will demand that it does whatever it takes to grow.


Enshittification usually means you are right over time. It still kindof depends.

To be fair, I also dislike abstract platitudes that are overly optimistic as I think you might be.

"Diversity is our strength"?? I mean, I guess diversity of _opinion_ is desirable to a point so we get all the ideas on the table. But not at the sacrifice of unity and shared goals. Unity is our strength. Discord and wasteful politicking are our undoing.


Google had "don't be evil" and even that bar was not low enough.


The thing is, many of those did not fail at all. They just weren't that great from the start. An overhyped technology is a technology that makes people believe it is going to be something that it isn't and solve issues that it doesn't (or that weren't really issues).

To take the first of the list: 3D TV. Everybody liked the idea of being more immersed in a fictional world. But if you watch closely (I studied both media science and film directing), you will realize that there are already traditional 2D films that are so immersive, parts of the audience dislike these films for the lack of distance between what they are watching and themselves. Which is why I said on the brink of the last 3D hype that this was not going to last. So the issue was for the most part that the problem 3D appeared to be solving wasn't actually a problem, while a whole segment of the market fooled itself and the consumers into believing this was actually the future.

Blockchain is literally the same, and everybody could easily predict it by the point blockchain evangelists started trying to find blockchain-shaped problems, when they didn't find any useful legal applications where a traditional chain of trust wasn't vastly superior.

Now LLMs are actually useful. The question is just, how much money is that usefulness worth for a regular person to pay and what does it do to society and the planet as a side-effect.


>Electricity, cheap computing, calculators, photography, the internet, the steam engine, the printing press, tv, cars, gps, bicycles...

All of those were invented pre-1980. To misquote Thiel, if you remove TVs/phones from a house, you would think we're living in the 1970s


Neural networks were invented in the 40s. I don’t know what your point is, and I’m mostly convinced that you don’t have any, just as the article author and 99% of people shitposting their wishful thinking about AI.


So if you were overwhelmingly wrong about technology fads in your lifetime by saying yes to everything, you can comfort yourself by saying that if you had gone back a century and said yes to everything, you would have been right about some things!


But not most things; there was a lot of nonsense back then, too. We all go to work in a bullet fired through a tunnel by pneumatic pressure, right?

(This was a real thing, and they got as far as partially building a tunnel under the Thames for it, before sanity prevailed.)


Also, the ones you were right about will provide 10,000x returns for all the 1x losses you have suffered.


Also I wasn't excited about anything from that list, but I am very excited about AI.


Electricity bros want to put a socket on every wall. That is such a non-starter from a safety POV. It's a fundamentally unsafe technology and it can never be made safe.


Facts


The first few paragraphs are all you need to see that the author is writing a propaganda piece. It's not meant to be truthful, it's meant to convince.

I think this is what is meant by "bullshit".


“Bullshit” is:

+ statement of dubious correctness

+ and that serves the author’s interest

+ and which the author does not care whether or not it is believed.

When the author wants you to believe it, that’s horseshit.


The article is trash. The only reason it got voted to the front page is because the author is salty about AI.


It's worse than AI slop. Unlike this article, AI slop usually includes reasonable supporting evidence. The only problem with AI slop is that this supporting evidence is presented in an annoying Buzzfeed-like way by default prompts.


OP here! Thanks for replying.

To take, for example, calculators. I can't find any evidence of a massive influx of hyperbolic articles talking about how the calculator will change everything. With bikes, there were plenty of articles decrying how women would get "bicycle face" but very little in terms of endless coverage about them being miracle technology.

People adopted bikes and calculators and electricity because they were useful. Car manufacturers didn't have to force GPS into vehicles - customers demanded it.

The narrative I'm describing is how hype sometimes (possibly often) fizzles out. My contention is the more a technology is hyped, the less useful it will turn out to be.

Now, excuse me while I ride my Segway into the sunset while drinking a nice can of Prime.


You have gotta stop cherrypicking. The massive influx of hyperbolic articles about how electricity will change everything started in the 19th century. It became a common theme in fiction (including classics like Frankenstein) and fueled an enormous media hype war, which historians call the War of the Currents.

Yes, electricity was useful. And it had hyperbolic articles talking about how transformative it would be. Like all prognostication, some of those articles were overblown, but, in some ways, they understated the transformative effect electricity would have on human history.

And cars? Did you somehow miss the influx of hyperbolic articles about how cars will change everything? Like, the whole 20th century?

What was your approach to researching the history of media hype? You somehow overlooked the hype around air travel, refrigeration, and antibiotics…?


There was a great deal of hype around the atom changing everything, but electricity was just too slow to see such breathless anticipation take off.

200 years ago there was some hype around how electricity caused muscle contractions in dead flesh, but unless you consider Frankenstein part of the hype cycle it really doesn’t compare to how much people hyped social media etc etc.

Public street lights long predated light bulbs, as did both indoor and outdoor gas lighting; 1802 vs the 1880s was just a long time. People were born, grew up, had kids, and became old between the first electric lighting and the first practical electric bulb. People definitely appreciated the improvement to air quality etc, but the tech simply wasn’t that novel. Rural electrification was definitely promoted, but not because what it did was some unknown frontier.

Similarly electric motors had a lot of competition, even today there’s people buying pneumatic shop tools.


> unless you consider Frankenstein part of the hype cycle

It absolutely is. Frankenstein is a seminal work of science-fiction horror, and the mysterious power of electricity to change everything is what made it so chilling to its readers in the 19th century.

> it really doesn’t compare to how much people hyped social media

The media is considerably different now from in 1818, thanks, in significant part, to the power of electricity. I assure you, when the electrical telegraph came on the scene, people were hyped.

Of course, much of that hype was on paper printed on printing presses, so it was, in some sense, "incomparable" to the hype possible on cable television, or the hype that's now possible with online social media.

But if your argument is "Yeah, electricity was kinda hyped, but, you know, not all that hyped, so it proves my point that the more the hype, the less the impact," you have some more research to do. Please just Google "War of the Currents" for a minute.


> It absolutely is.

It was published as fiction. The vast majority of people didn’t think it was any more realistic than Interstellar etc.

There’s plenty of stories where we cure cancer, but the 50% improvement in cancer treatments over the last 40 years just doesn’t get much hype because it’s so slow. It’s hard to get excited about the idea cancer may be gone in 200 years because while that will be awesome for people alive then it doesn’t do anything for the people I know.

> when the electrical telegraph came on the scene, people were hyped

Objectively it got way more of a meh reaction than you’d think simply based on the timelines involved.

France was happy to continue using its network of optical telegraphs long after the electrical telegraph became a practical thing. Transatlantic telegraphs got hyped up somewhat, but again the technology took so long from the first serious attempt to a practical working system that people understood the limitations inherent to having such limited bandwidth between the continents.

Obviously new technology gets attention because it’s a net improvement, being able to send messages across the US much faster was useful. But hype is different, it’s focused on second order effects not what it does but what will change. The original iPhone isn’t just another cellphone that also takes pictures, it’s “the internet in your pocket.”


The electrical telegraph was integral to the growth and consolidation of the British Empire. Britain acquired more colonies and held on to them for longer than the other European powers partly due to its naval might, but also due to far superior bureaucratic and communications technology.


I think you misunderstood what I was saying.

Technology can be quite useful directly and have significant second order effect, hype is about the second order effects being overblown. Second order effects are difficult to predict when something is actually novel, will LLM’s make programming obsolete is harder to answer in 2023 than 2063.

Home automation like dishwashers really did meaningfully impact how much effort was needed to keep a home livable, but we didn’t predict the kind of helicopter parenting that happened because of more free time, especially after smaller families became common. Thus a great majority of incorrect predictions were just hype.

The faster new technology becomes widespread the harder it is to predict those second order effects and thus more hype you see.


You can find similar hype articles about the Palm Pilot, then all the naysayers who said most people wouldn't want and had no need for a computer in their pocket. And yet here we are.


> then all the naysayers who said most people wouldn't want and had no need for a computer in their pocket

Mmm..they didn't, at that time.

That we grew to be dependent on the computer in the pocket does not mean that it was a necessity at any point.


Calculators are a particularly bad example for your case. There was absolutely hyperbole against calculators when they were introduced. [1]

With similar sentiment as well "They make us dumb" "Machines doing the thinking for us"

Cars were definitely seen as a fad. More accurately a worse version of a horse [2]

If you looked through your other examples, you'd see the same for those as well.

Some things start as fads, but only time will tell if they gain a place in society. Truthfully it's too early to tell for AI, but the arguments you're making, calling it a fad already, don't stand up to reason.

[1]: https://www.newspapers.com/article/the-item/160697182/ [2]: https://www.saturdayeveningpost.com/2017/01/get-horse-americ...


LLMs will absolutely have a place. There is no question about it. But they will be doing searching for us, not thinking.

The flip side to this is that a lot of jobs today that appear to require "thinking" are actually just looking things up, aka "search".


Searching for the optimal solution...


The personal computer, laptops, web browsers, cell phones, smartphones, AJAX/DHTML, digital cameras, SSDs, WiFi, LCD displays, LED lightbulbs. At some point, all of these things were "overhyped" and "didn't live up to the promise." And then they did.



