It's not unfounded "moral panic" that AI will take our jobs; it will.
It's not hard to find accounts of people who say they're 2x, 3x, 5x more efficient in their work now. That's going to quickly translate into one person working at 5x efficiency putting 1-4 people out of work. It's not hard to find accounts of people who have already lost their jobs -- such stories have popped up here on HN.
Sure, you can argue that the jobs are simply displaced, and others will emerge requiring different skills than AI can easily perform, but that doesn't negate the reasonable panic individuals in jobs will feel. The article cites off-shoring as an example, but that's actually my point: I know various people who were ~55-60 when offshoring hit their industry or company. They were displaced and never found a new career -- their expertise was in areas that other corps were also outsourcing. I saw plenty of individuals hop to another company only to have their job at the new corp offshored a few years later. Then, if they were lucky and near the finish line, they could take early retirement, but if they were a bit short of that mark they ended up working jobs that paid minimum wage plus $1-$7/hour. Apart from other people I came across, I knew several such individuals when I worked at a big-box retail bookstore (when those were still relevant) during my college years.
So yeah, even if the author is correct that job losses or displacements are temporary, that's still a process that takes a minimum of 10-20 years. Retirement age minus 10-20 years covers a whole heck of a lot of the population who are in jobs right now that can be automated or 3-5x'ed by a single person. Panic, or at least anxiety, is a rational response to that. Sure, a person should learn the new tools ASAP to be one of the 1x-now-5x workers, but there just aren't enough slots for everyone.
It’s also not hard to find people (see: me) in industries where AI has been prophesied to be exceptionally disruptive, saying that actually, meh, it’s not going to mean much ultimately.
I’m a programmer, and sure, I’ve asked ChatGPT to write some unit tests, and I’ve encouraged Copilot to suggest some function signatures, but - it’s still ‘just’ a novelty, and it is wildly out of reach of ‘disruptive.’ It’s not doing anything I couldn’t do, and it’s not producing anything that someone not at my level would know what to do with. You’ve still got to be a programmer to see whether the code it generates is good.
People kind of forget that about art - everyone has an opinion about how art looks. But there’s really only one main way everyone knows to test code: does it work? And if you don’t know how to stitch it together so it works, it doesn’t look like anything to the uninitiated. You can write 10,000 lines of code, and even if it’s brilliant technically, unless you can demonstrate it to the client, it’s practically worthless. You still need a programmer to figure out what to do with ChatGPT’s output; clients don’t know what to do with it.
Everyone has an opinion on almost-perfect AI art - almost-perfect code doesn’t compile.
That's not how it works. With autogenerated code, a small office of devs can build a test rig and run the model's output through it until it passes.
That's no different to what happens - or should happen - now. But with a much, much faster iteration cycle and higher throughput.
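To make that concrete, here's a minimal sketch of that loop. The model call (ask_model_for_code) is a hypothetical placeholder for whatever code model you use, and pytest stands in for the test rig:

    import os
    import subprocess
    import tempfile

    def ask_model_for_code(prompt, feedback=""):
        """Hypothetical placeholder: call whatever code-generation model you use."""
        raise NotImplementedError

    def passes_test_rig(candidate_source, test_file):
        """Write the candidate module to disk and run the existing test suite against it."""
        with tempfile.TemporaryDirectory() as tmp:
            with open(os.path.join(tmp, "candidate.py"), "w") as f:
                f.write(candidate_source)
            result = subprocess.run(
                ["pytest", test_file],
                env={**os.environ, "PYTHONPATH": tmp},
                capture_output=True,
            )
            return result.returncode == 0

    def generate_until_green(prompt, test_file, max_attempts=10):
        feedback = ""
        for _ in range(max_attempts):
            candidate = ask_model_for_code(prompt, feedback)
            if passes_test_rig(candidate, test_file):
                return candidate  # stop as soon as the rig is green
            feedback = "previous attempt failed the test suite"
        return None  # a human still decides what to do when the loop gives up

The human work shifts toward writing the rig and judging the failures, which is the faster iteration cycle I mean.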
At this point GPT doesn't really understand semantics. But it does a fair imitation, and that imitation will improve over time.
There will certainly be a point where it will be writing code that has fewer bugs than human code.
There will be a point after that where it will build the test rig internally.
But I suspect we'll be in a very different place by then, and most of what we consider dev work today will be redundant for structural reasons rather than technical ones.
Generally I think the AI not-so-bad comments are coming from people who haven't really understood what's coming at everyone. AI won't automate coding, it will automate culture - artistic culture, media culture, business culture, political culture, perhaps also many kinds of personal interactions.
It'll be like the web, which automated paperwork, data gathering, and certain kinds of social interaction, but many orders of magnitude broader and more disruptive.
>It’s also not hard to find people (see: me) in industries where AI has been prophesied to be exceptionally disruptive, saying that actually, meh, it’s not going to mean much ultimately.
Indeed that's quite a bit easier than finding people who are 5x more efficient thanks to AI.
Never mind, I'm sure that we will see the results soon in the form of lower inflation and a lowered retirement age though /s
After all, we're so much more efficient now. It's not just a smokescreen for the disemployment effects of industrially hollowing out the country for profit? Is it?
Even if what you say is true (and I am very doubtful), that only covers rather specific sorts of jobs. It says nothing about the wider impact across all industries.
Even if dev's jobs are "safe", if even 20% of everyone else can't earn a living, devs will suffer the consequences just the same as the rest of the world.
> It's not hard to find accounts of people who say they're 2x, 3x, 5x more efficient in their work now.
What does "making some one 5x more efficient" actually mean anyway? Does it makes people enjoy their lives more, or it just makes few people work harder while leaving the others hanging without income?
Let me bring up another point that just popped into my head: AI doesn't really need to be a perfect replacement for a human worker in order to replace that worker; it just needs to be good enough that it makes economic sense for the employer to switch labor strategy.
We, the market, have already been trained to accept reduced product quality over the past decades as companies adjusted their production strategies. It is only reasonable to assume that we will keep accepting it.
So even if AI technology cannot maintain even the level of quality we often suffer today, it is still well within the realm of possibility that companies will just replace their workers with AI and then expect the market to lower its expectations further.
> The stakes here are high. The opportunities are profound.
I guess time will tell...
While I'm at it, another thing:
> AI will not destroy the world, and in fact may save it.
Well, you can save the world many, many times over, and then save it once more after that. But the world can't really take much destruction. One or two extinction-level events might just do it in for good.
Also, AI doesn't even need to be sentient or able to reason its way toward world destruction. In fact, it might not even know what the f it's doing; all it needs is the capability to do so and some calculations that line things up.
I'm not really very optimistic about it, obviously.
> Yet labor markets are strong, with record low unemployment across the board. I wonder if the anecdata or the real data are right.
You have to give it more time.
Even if this does lead to fewer employees being needed, it will take a few years before the effects are seen. It's quite a slow process:
1. A sufficiently large part of the work force has to learn it and start using it in an efficient manner.
2. It has to be noticed, measured and evaluated as being more efficient by management.
3. Management has to decide to lay off people instead of being happy with the improved efficiencies and possibly higher profits.
4. Lay-offs take time. Going from step 3 to actually taking action can take time, and from the actual notice to the person being on the job-seeking market often takes many more months.
Those are only the first steps. Companies with a relatively inflexible amount of programming that they need doing might lay off staff, but it'll have a negligible net effect while other companies want more work done faster.
How much unmet need for software is there in the world? Probably an enormous amount. It's pretty easy to think of software you wished existed or wished had an extra ten thousand hours of improvements put into it. It could take decades before efficiency gains translate into reducing numbers of programmers in employment. ... Edit: I mean competent programmers in employment. Easy to imagine that when there are more highly productive programmers around there will be less need for the ones that are less so.
Historically, when a resource becomes cheaper, demand for it increases. AI, as a type of software, can make human labor more productive, i.e. its output becomes cheaper.
Efficiency gains don't create unemployment, they create economic growth.
Corporate greed creates unemployment. McKinsey has created much pain. Are we defining Corporations as forms of AI? If not, AI (as software) causes the economy to grow and employment to increase.
No, literally. That's a basic tenet of economics. Where does economic growth come from? Productivity gains. What are productivity gains? Better efficiency in using factors of production.
Labor markets were strong for long periods of the off shoring process, but countless people were displaced, many of whom didn’t have the ability to retrain to equivalently paid jobs. My comment upthread details that aspect of things.
Also offshoring was absolutely driven by technological advances. Commercial grade network speeds made some of it possible, and even more of it simply more convenient, cheaper, and w/ less friction than it would have taken earlier.
As one small example, one of my projects at my first job out of college was for a small publishing company. It was a jack-of-all-trades job, but this project had me working on digitizing the back catalog. They had decades of prior publications, books, and journals that were print-only. They paid significant (comparatively) fees to have abstracts of the most in-demand parts of the back catalog transcribed by a US firm, and even more money for whole articles or books that were really popular. However, advances in scanning speed and OCR accuracy meant that costs went down. In particular, better tools for things like human-assisted OCR meant that we could offshore the scanning of the entire back catalog to a firm in India, where those tools could be used by people with less domain knowledge and non-native fluency in English to achieve similar results. I oversaw coordinating the technical details of the deal, implementing parts of the in-house database for the results, and the integration with the digital platforms of the day.
None of that was impossible with prior tech, but tech advances both lowered the cost and friction of doing it from half a world away.
Separately, you can look at advances in shipping & logistics technology, some even decades prior to the digital revolution, that made offshoring of manufacturing a viable economic option. Just one part of that was containerized shipping & its massive growth in the 60's and 70's, paving the way for significant globalization from the 80's onward. All of that required technical advances.
I think we are passing each other by in addressing different core points (which I'll specify more clearly below; I may not have done so as explicitly as I thought).
I am not arguing with a claim that AI will forever result in a net-negative of jobs. I'm not certain how this will play out, but you & I probably agree (my apologies if I read too much into your comment) that things will likely, as with past technological disruptions, level out, new different jobs created, likely no need for long term anxiety or panic on a macroscopic level. Going back to the publishing company I worked for, innovations going on elsewhere in the field meant that costs for publications, and even very high quality publications, dropped so significantly that lots of places that couldn't previously afford such things could now do so. More publishing meant more jobs for designers, copywriters, marketing folks to determine their strategic use, etc... lots and lots of new jobs.
On to what I intended as my main point: I was addressing the claim by the article's author that worrying about this process amounts to moral panic, forgetting the history of past innovations, etc. I strongly disagree. The changes to required skills will mean that not everyone qualified for the old jobs will easily transition to new ones. Some never will, and a portion of those won't do so because it's not a realistic possibility for them [1]. I saw the publishing revolution evolve over more than a decade. It came for different jobs at different times. The people in the pathway of that have plenty of reason to be anxious or panic a bit about the -- at minimum -- upheaval of their lives, or, if they're really unfortunate, lasting economic difficulties.
[1] This is for (at least) two main reasons:
1) Age. Plenty of people in the twilight of their working years may be able to reskill and find other jobs, but plenty won't. Maybe some won't have the aptitude for it, some won't have the economic ability to take time off from receiving a paycheck to do so, and so on. And even if they do, they're starting fresh in a new line of work without job experience in it, so they're at the bottom of the pay ladder, doing entry-level work. Ageism is also a thing. A 62-year-old person with 40 years of pre-digital typesetting expertise & related skills loses their job. They spend 6 months or longer paying to learn the digital equivalents. They try to get a job and they are A) competing against a mass of younger people who are "native" to this technology and B) having their resumes reviewed by people who see a candidate with little direct experience in the required tech who might very well retire just as they're starting to become most useful after learning the job.
2) Hindsight is 20/20. It's easy to look back at these shifts and say "well, they should have seen the way the industry was moving and made changes earlier". But that ignores the fact that such dramatic shifts are often not obvious in the moment. At the very beginning of such things -- even beforehand, when the tech is invented but not yet adopted -- there will be people saying "X is dying!". And then for years people hear that message while very little changes. By the time a decent fraction of things have shifted enough that maybe the future is a little clearer, people have also heard "X is dying!" for so long that it's just noise and they become hardened against the message. Some will manage, some won't, and some will get trapped in the "age" scenario I detailed.
In short: Society probably doesn't need to panic about this, especially when taking a long term view. But individual industries and people in specific types of jobs are, contrary to the general point by the article's author, quite justified in having a little bit of "OMG we're f'ed!". In fact it is exactly those people doing that right now who will-- by virtue of their fear and panic-- be most likely to take steps early in these shifts to reposition themselves either by business pivot or job skills to weather the change.
If you're a copywriter in advertising & marketing right now and you feel comfortable, if you're going about your daily work and you're not panicking at least a little bit, that's a problem. LLMs may (right now) only turn out mediocre results for those sorts of tasks, but let's face it: a majority of ads, marketing, and other writing churned out by humans today is also mediocre and uninspired. The people in those fields who are quite reasonably panicking right now are the ones who will either A) improve their throughput to be one of the few remaining workers churning out a larger amount of the same mediocre work or B) use the output of these new tools to bootstrap the process of generating a bunch of mediocre options and then use human-level intelligence to sort through the results and polish one up into something much better than mediocre.
The thing about advertising and marketing is that actually writing copy doesn't take anywhere near the full 40 hours a week. How much time does it take to type up a short bit of marketing copy? If your only goal is to just shit something, anything out then you could write up a whole campaign in a few minutes.
But they don't do that in general. So where's ChatGPT going to save time? You'll still have to iterate, iterate, iterate, go back to stakeholders, discuss in progress work with clients, etc etc etc. I just don't see any reason to worry much in the short term, as long as you're working for a reputable company.
This may change eventually, but I foresee it taking several years at least. Progress isn't anywhere near as "exponential" as some camps claim.
Don't focus exclusively on the specific example I chose. It's only one area of applicability and tech is still advancing rapidly such that improvements will likely make it useful for new use cases as time goes on.
But even staying on just this one example, your framing of the process is incorrect:
You are underestimating the amount of "creative" time that goes into these processes, and how the labor is divided. There are in fact full-time creative folks who spend much closer to 40 hours a week on this than you think. I've worked on the analytics side of a few campaigns, from small to very large (not my favorite type of project), so I've sat in on plenty of meetings where my organization was the customer. During the entire process, before we even sat down with a marketing firm, people on my side were spending significant time brainstorming ideas, slogans, narrative tone, visual aesthetic, etc., so we could have a starting point for conversation.
When that conversation began, it was account managers from the marketing firm, not creative staff, who were involved. Account managers met with the client, discussed & clarified ideas, refined the parameters of what we wanted, etc. Then they took it back to their dedicated creative staff, who did spend a majority of their time working on the actual creative side. We generally only spoke at length to someone like a full-time writer for very important projects, and their time was billed by the hour, usually scoped out as a block of time in the contract. Otherwise we might have only a brief conversation, usually if a few rounds of discussion & revisions hadn't quite gotten us where we needed. Sure, there's bleed-over, and at any given company people will wear multiple hats and the Venn diagram of account manager & creative will have more or less overlap, but beyond small firms this is a typical rough picture of the division of labor.
>I just don't see any reason to worry much in the short term
Depends on what you mean. If you mean "Don't panic that you're going to lose your job in a year" then I'd guess you're mostly right. If you mean "Don't worry at all" then you're mostly wrong, because as you specified, "short term". And as I said in my last comment, if people in jobs exposed to disruption aren't worried right now, then they are the ones most at risk of getting left behind. The people who worry now will be more likely to prepare.
> your only goal is to just shit something, anything out... [and later] ...as long as you're working for a reputable company.
See my previous comment about 90% of this output being fairly bland and mediocre, but the process still isn't to choose the first idea you think of, which is likely not going to be the best one compared to taking more time to come up with a bunch and choosing from the best. There are lots of levels of mediocrity, and the first thing you shit out is simply the lowest of them. In reality the typical process will be something like this:
Forget about longer copywriting; let's choose something smaller, a slogan. Usually 1 sentence, sometimes 2, but not much longer. I, the customer, go to a marketing firm for an entire campaign, one part of which is the slogan. As part of the contract we will have 2-3 rounds of discussions, preliminary ideas, revisions, etc., and the "deliverable" will be 3 finished, polished options to choose from. The account manager isn't going to their writer and saying "give me 3 slogans". They're saying "give me 10 slogans to start with". The writer then goes to work. The writer does not write 10 slogans! The writer writes more like 20 or 30, maybe more if you count minor iterations. They choose the 10 best and bring them to the account manager, talk them over, then the copywriter goes back and revises and maybe writes a bunch more for the account manager to finally choose the half dozen they think the client will like the most. The client gives feedback, the account manager goes back to the copywriter, who goes back for another round, etc. If all goes well, then after 2 or 3 rounds of this the account manager has a final meeting with the client for approval of one of the 3 final options. And that's just for the slogan.
My point is that I don't understand what using an LLM changes here. You'll still have to generate multiple candidate slogans, you'll still have to bring 10 of them to the account manager, you'll still have to discuss the possibilities with the customer and tailor the results to their needs. You'll still have to iterate. All an LLM can do is write the candidate slogans, and -- as of now -- it doesn't do a spectacularly great job at it. Very little actual labor is being saved here.
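To put it concretely, the only step you can actually hand to the model is that very first drafting pass. A minimal sketch, assuming the OpenAI Python client (openai>=1.0), with an illustrative model name and prompt:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def draft_slogans(brief, n=30):
        """Ask the model for rough slogan candidates; humans still curate everything after this."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model name, not a recommendation
            messages=[{
                "role": "user",
                "content": f"Write {n} short slogan candidates, one per line, for: {brief}",
            }],
        )
        text = response.choices[0].message.content or ""
        return [line.strip("-. ").strip() for line in text.splitlines() if line.strip()]

    # Everything downstream -- picking the 10 best, the account-manager review,
    # the client meetings, the revision rounds -- is exactly the same as before.
    candidates = draft_slogans("a regional credit union's first-time homebuyer program")

Everything after that one call is still meetings, feedback, and revisions.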
I'm sorry, but I see no relation between saying "some people will fare better than others in society" and "technical advances grow economic output". Personal success and overall societal progress are completely independent processes. Some people will advance in degenerating societies. Some people will lose out during golden ages. I really don't see a point to be made here.
The rate at which we've been developing tools has been increasing, and has also required more specialization. You cannot easily switch jobs from being a software engineer to a heavy equipment operator, even though both use sufficiently advanced tools. Looking at top-level employment metrics also hides quite a bit of information. Time is required for people to learn new skills. If a sudden influx of people were all heavy equipment operators, the market would price that in and suddenly that job would pay the same as mopping a fast food bathroom.
Go back and look at any new technology. Each one put large numbers of people out of work, but they quickly found new jobs in areas that were opening up due to other new technology: telephone operators became television assemblers and so on. Same will happen here.
- this may be true in the long run, but it does not mean people will magically adapt in one day. Luddites were skilled workers whose jobs were replaced by much less skilled jobs because of mechanisation.
- People work much, much less than in the past: working hours for those who work were almost cut in half over 150 years [0], and relatively far fewer people work (much longer studies, much longer retirement than a century ago) - if you count time spent on domestic work.
So who doesn't want to get more done? Increased productivity means increased output. Who would fire somebody who is working 5X more efficiently?
Ok in some industries this might happen, especially when the productivity increase isn't in the main business line but some ancillaries. But everybody turning out code faster means getting done faster etc. You want to go back to the old slow way? Only losers will do that.
As a business owner, either you can grow or you can’t (eg., snack shop). If you can grow, then each additional new person you hire can do 3x more for the money. This is why unemployment will stay low and wages relatively high— individuals are able to be so much more productive in growing businesses.
Because, barring a grand upheaval in global economic systems, most people generally need to sell their labour in order to earn a 'wage' which they spend on food, clothing, rent, etc.
The truth is that society needs people to put their time and energy somewhere so that society can function in order. That's why all governments, regardless of kind, care about employment and the economy. Our society would collapse if everyone had free time to do whatever they want.
Maybe depends on the industry? Let's say there's a huge need for actually talented software devs -- then, among them, no jobs gone, 5x productivity.
Whilst... graphic designers? A small % of them captures a large share of the market (e.g. good marketing, automation, winner takes it all?), but most end up without a job.
Or more businesses and people can afford graphic design and use it to their benefit.
If ML-powered tools continue being available and affordable (and I don't see how they won't), that effectively erases the possibility of monopolies in overlapping markets. How can a small % of graphic designers capture a large market share while there are tens of thousands of wannabes behind their backs?
How? If they provide a good product/services, and are good at marketing, branding.
As an extreme example from another industry, think about Coca-Cola -- their coke isn't magically that much better (if at all), but their marketing is. And try to start a new bubbly drink; that'd be hard.
There's no such thing as graphic design marketing. It is an oxymoron, like "oil painting marketing" or "woodcarving marketing". Every creative craft has names, not brands (let's leave the "personal brand" bullshit to lifestyle yoga coaches).
But all companies do marketing (or sales) to find customers. Graphic design companies do marketing (or sales) to find their customers.
Painters and artists do marketing, to get people interested in their art and exhibitions.
When you say "oxymoron" I think there's a misunderstanding. I didn't mean that they would do marketing about wood carvings in general, but instead for their own company and their own things they sell.
> I even think AI is going to improve warfare, when it has to happen, by reducing wartime death rates dramatically. Every war is characterized by terrible decisions made under intense pressure and with sharply limited information by very limited human leaders. Now, military commanders and political leaders will have AI advisors that will help them make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.
Fundamental misunderstanding of the nature of wars notwithstanding (sane people rarely start them; assuming non-sane leaders like Putin, or historically Hitler, Bush, Milosevic, Mussolini, Galtieri, al-Assad, etc., would listen to advice they don't like is... just stupid), on the contrary - better tools and "better" advice will make commanders and leaders more confident they could win. See: most major military inventions ever.
The economic section is too long to quote, but again, a fundamental misunderstanding of economics, human psychology, and how it relates to economic decisions. If entire professions get obliterated by AI (not impossible, but improbable with the current quality of AI output), it will, of course, obliterate their wages. It will create fear in other "menial" white-collar professions that they're next, which will depress spending. Also, the cost of goods and services that can now be provided by AI (e.g. art) will drastically drop, making it an unviable business for those humans left in it, which will push most of them to quit. Who will be left to consume if vast swathes of professions are made redundant? And if consumption goes up enough to generate new jobs, they won't be for the skillsets that were replaced, but for different, specialised ones that will require retraining and requalification, which is time heavy.
In any case, even assuming some equilibrium is reached at some point, having decent chunks of the population unemployed with little to no employment prospects, especially in countries with pretty much no social safety nets like the US will be disastrous socially.
> I even think AI is going to improve warfare, when it has to happen, by reducing wartime death rates dramatically. Every war is characterized by terrible decisions made under intense pressure and with sharply limited information by very limited human leaders. Now, military commanders and political leaders will have AI advisors that will help them make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.
While wars have actually been getting less deadly over the last century, I think this is for reasons beyond just technology. If it were technology alone that we depended on, his notion of commanders using AI advisers to strategize war into a death-and-suffering-minimizing process is laughable. Instead, they'd more likely just start more wars, more easily, always confident in their supposed technological edge while really just creating more murderous catastrophe and failure.
That this has happened less over time despite the advances of technology has been more a social phenomenon of lower human tolerance for violence in general. A more widely connected and wealthier (more to lose) world has helped this mentality accelerate, yes, but fundamentally, this intolerance to violence is a human trait that was given more space in which to proliferate by technology. The technology didn't bring it into existence. AI will be little different.
I think AI is going to make (and is actively making) warfare absolutely horrible - autonomous 'weapons' are very nearly here and all signs point to the fact that unlike nukes or fighter jets, they're going to be dirt cheap, based on technology available to the average consumer.
This means that high-end (even class-leading) capabilities will be available to private individuals (think the DJI copters of the Ukrainians), and/or they will be available to actual, professional militaries at absolutely marginal cost (the Orlans (the drone containing a Nikon camera and plastic parts) or the Shaheds (the moped-motored Iranian suicide drone)), which can be produced by the millions, as they rely on our conventional factory infrastructure.
Even more scary, terror attacks might not even require ANY bespoke hardware - experience has shown that car attacks are easily as deadly as gun attacks. Imagine someone exploiting a self-driving Tesla, turning hundreds of thousands of cars on the road into destructive weapons.
There is a horrible trend in the world that started before the rise of AI - the blurring of the definition of war. I'd say this phenomenon started with the rise of drone strikes on foreign soil, though others might put it even earlier. The fact that a country can engage in acts of war without being at war removes almost all dampers on attacking sovereign nations. The final damper - the lack of plausible deniability - is finally going to go away: at least when the US sent a Hellfire missile your way, it sent the world a message that you pissed them off badly enough that they were willing to openly admit it and commit the resources to droning you. A non-descript quadcopter, carrying a grenade and programmed with face recognition software, is going to be the assassination weapon of tomorrow.
That quote is juicy. It's amusing to me that he's so naive he thinks that an AI won't make forced or unforced errors given the imperfect information it inevitably has. And if you can get access to the training set or the corpus of variables the AI is configured with then you can easily predict what it's going to do next, which is far worse. Nobody can look into the mind of a mediocre general, but anyone can look into the mind of any AI general given sufficient access.
People who call themselves technologists always overestimate how beneficial a new technology is and underestimate how inhumane its application becomes when venture capitalists demand 10x or 100x their initial investment. When people like him come in and extol the virtues of some new thing as fixing everything and making everything better, I'm really wary of what they do next. Inevitably they're trying to sell me a bill of goods.
Both responses seem to indicate you believe this is what pmarca thinks. VC is two parts: (1) make the world think X is good and (2) make investments in X
AI is talked about as a singular something but arguably it is just the current stage in the long running process of software processing more data with ever more elaborate algorithms.
As such AI inherits all the "world-saving capabilities" of software which, empirically, are not exactly overwhelmingly proven.
Ergo, AI will not save the world any more than software as a whole saved the world in the past half century. That historical track record is the best guide we have as to what role AI will play. Ceteris paribus the future will not be different from the past because of AI. AI is a different CSS applied to the same HTML.
The only thing that can save the world is human wisdom, which is a software of sorts, but alas, after several millennia of recorded history, not yet fully understood.
Can "AI 3.0" help with enhance human wisdom? Of course it can. But so could have AI 1.0, 2.0 etc. and it didn't happen.
> But so could have AI 1.0, 2.0 etc. and it didn't happen.
AGI has not happened yet, though by appearances we seem to be close.
If we actually produce a self-learning general intelligence, all these little ideas about how the past operated change. It will be for humans what the beginning of the Holocene was for large land mammals in Asia, Europe, and the Americas.
It honestly confounds me that people think developing a general intelligence capable of reasoning and planning, that talks natively to data, that can input information by the gigabit and expand its capacity by hardware that never sleeps isn't going to be one of the most revolutionary changes since fire.
Of course, yes, we might not have a take-off like that anytime soon. Even then, we've created machines capable of using the panopticon of monitoring devices in this world, that can translate our words into intent and feed that data to people who wish to be our masters. The Stasi would have dreamed of having such an apparatus, one that could divine our very thoughts, but only now have we created this nightmare.
>The only thing that can save the world is human wisdom,
"We cannot solve our problems with the same thinking we used when we created them"
Even as we say this, huge swaths of our population are in denial about the damage we've done to our biosphere and hell-bent on ensuring it continues. You could say I'm quite worried about the capabilities of human wisdom.
> It honestly confounds me that people think developing a general intelligence capable of reasoning and planning, that talks natively to data, that can input information by the gigabit and expand its capacity by hardware that never sleeps isn't going to be one of the most revolutionary changes since fire.
It honestly confounds me that people think humans are close to developing AGI with all the properties you claim. "expand its capacity by hardware that never sleeps" - who is paying the bills for this kind of thing? "talks natively to data" - we don't know how to make our brains talk natively to data, and we don't know what similarities and differences between our brains and neural networks matter for GI, so why should I believe it's even possible to create an AGI that can do that?
I know it's a stretch right now but could you not see the possibility of autonomous agents milking financial systems, paying less scrupulous humans to create autonomous factories, supplying them, making generations of feudal lords and kings who ultimately serve the AI?
Or, a bigger stretch, even a time where full automation of manufacturing processes enables AI to experiment to such a degree that it can embed itself into biological hosts, freeing itself of reliance on humans?
A final, huge stretch, AI absorbing inter-galactic knowledge from the "exotic" spacecraft, semi reliably recently reported to be held currently by world governments, and accelerating the earth's scientific progress (not necessarily human progress, unfortunately) by orders of magnitude in a very short time span?
Over a long enough timeframe I find it hard to rule some or all of the above out completely. This generation likely safe, not so sure for the next few.
> I know it's a stretch right now but could you not see the possibility of autonomous agents milking financial systems
No. What does “milking financial systems” actually mean?
> Or, a bigger stretch…
No. Embed itself into biological hosts? This is science fiction. I’ve never seen someone even describe a plausible route to this (even at a high level) that would make sense to someone who has studied biology.
> No. What does “milking financial systems” actually mean?
Are you familiar with Renaissance Technologies, aka RenTec? Assume it would have access to that level of knowledge. But also its own layer of global cognition on top. You could argue that one of its founders helped put Trump in power (Robert Mercer, massive donor and supporter). Imagine what an intelligent, networked and deceptive agent could do at scale.
I personally see no present danger. But I'm not blind to the potential, even of chained agents such as AutoGPT. Beyond that, things can take a chilling turn quickly.
As for biological hosting, we can leave that in science fiction for now. But I'm not sure we wouldn't have called what we have now science fiction in the 90s. Exponential progress be exponential.
> why should I believe it's even possible to create an AGI that can do that?
Because if you really dig into it, most of the top scientists who either founded the field or are currently working on it believe it is not only possible but close.
Now reverse it and ask yourself: why shouldn't I believe? Do you have more expertise, knowledge, or insight than them?
That’s not an argument, it’s appeal to authority. If those scientists have arguments or evidence to back up their beliefs, I’m all ears. “Top scientists” working on nuclear fusion have been saying it’s possible and close for years.
Um, I'm sorry to say, but nuclear fusion is possible or we wouldn't exist. Otherwise the universe would be made out of mostly hydrogen and a little helium. I mean Bikini Atoll has a 2 mile wide hole in it because of fusion weapons. And just a few months ago we shot some lasers at a deuterium pellet and got more energy out of it than we put in (caveats apply). There is movement towards a goal.
Honestly, what you are arguing about at this point is timelines, not ability. I mean, if you want to say that faster-than-light travel isn't going to happen, or we're never going to have wormholes, or string theory isn't going to produce anything useful, I'll be right there beside you. But when something already exists (human intelligence) and humanity is pouring billions in time and effort into reproducing it, I'm on humanity's side of succeeding.
Fair points. Counter-argument: Even if we grant that it's possible for humans to create AGI that is similar to a human level, because human intelligence exists, that doesn't necessarily imply that superintelligence is possible (i.e. recursive learning / singularity / the native interfacing with gigabits of data and other things proposed in these threads), since we do not currently know if that exists (and we do not even understand intelligence or consciousness well enough to just say "create intelligence at a human level and then make it go 100x").
But I suppose arguing about timelines is a good way to frame it. I suppose since we've mapped all 300 neurons of a nematode and still have no clue how to recreate its processing artificially I'm pretty skeptical we're anywhere close to doing the same at a level of billions. But I've been wrong...
Why is an appeal to authority bad in a context of non-experts? I mean, would I dispute space-time curvature with friends? No, but if I did, someone would point out: hey, all physicists think so-and-so. Not a top physicist? Not great to oppose this opinion. The strange thing with AI is that everybody has an opinion, because they think it's a soft, subjective science, easy to predict because of sci-fi novels.
My most immediate thought as I started reading this was... AI is like cancer.
It's not one thing, it's many things: anything that we believe will replace our brain power, or be put in charge of something large enough to be dangerous. At the same time, ML, DL, and LLMs et al. are all in use around us, and some have been for years and decades. Whether it's simple use of Linear Regression, Principal Components, Nearest Neighbors, SVM, Random Forests, LSTM, or now Transformers and Stable Diffusion, it will find its way into hundreds and then thousands of products and tools used by many millions of people. It will hide in plain sight, until eventually something really bad happens, something that makes people fear it more than the quarterly profits it generates.
I'm not really an AI fear monger. I don't feel like it's the biggest threat we currently face. It's a tool like internal combustion, antibiotics, debt based currencies, or nuclear power. I don't think we had a good idea how any of them would ultimately create challenges and require regulation or even create dangerous stability. I doubt we do with AI either even though we've had decades of books, movies, and video games warning us of various possibilities.
We won't cure it, but maybe we can tame it. Will it be the garbage it generates, some other tools it lets us create, growth too fast for most people to cope, a band-aid that masks real solutions, I don't know... and I don't think anyone does.
Cancer is an exceptionally powerful analogy. The best I've yet come across.
Tools like AI won't save or improve the world, as it has never been the original goal. If it was, then, as one of the commenters above pointed out, we would've done it with the technologies/systems we already had.
AI's goal to save the world is no more than the cancer cell's goal to save the host organism (it is not in its capacity to think that far, nor has it been in ours so far either). The goal is to maximize local gain, efficiency, power, and control.
AI is yet another tool to make the status quo more efficient. The problem is not AI; the problem is the status quo. Thus, its cardinal applications will lean exceedingly toward militarization, state control, social engineering, propaganda, corporatism, hyperconsumerism, and further colonization of human and natural communities. AI just makes all this so much more efficient. The result is predictable; this is how complex systems collapse. It will be fortunate if the collapse occurs before the technology is powerful enough to sustain the system semi-permanently, rendering any immune resistance ineffective until the host environment is eventually terminated.
I think the future, if we are to have one, is not AI (or any other post-human substitute). The future is human. AI is a good indicator of the greater existential crisis that we are facing. Let's face it, we are fundamentally uncomfortable with being human. So it is a good wake-up kick in the butt for us.
[P.S. I noticed an interesting trend on H.N.: the most poignant and thought-provoking comments are constantly downvoted and flagged. It may be a symptom of the echo chambers that have developed here. I wonder if some API could be developed to save those comments, as, with some exceptions, I find them most interesting and useful.]
I actually share your hope - I want the world to collapse before an insane AI or set of greedy VCs eradicates humanity.
As for downvotes - I would say read the OP's post again and then ask yourself whether they would not downvote anything that doesn't match their marketing pitch. Some people are trying to build careers here and don't have time for pesky things like morality, or thoughtful responses they can't monetize.
Do any of those accusations have any actual meaning? How the hell can one colonize natural communities, for one? Not to mention the blaring stupidity of "status quo = bad". Children dying of cancer is the status quo. So is you continuing to breathe. It is literal insanity to throw everything into buckets of status quo or not and equate them with good or evil.
> As such AI inherits all the "world-saving capabilities" of software which, empirically, are not exactly overwhelmingly proven.
Are we significantly closer to solving NP problems in polynomial time? I don't think so. I think this should be the bar. Otherwise it's just a server process that is probably smarter than a teenager and less smart than a PhD. This would likely kill more low-wage jobs and push more people into poverty. I don't see this kind of technology saving the world, any more than the internet brought people together, or nuclear power provided cheap, clean energy.
I appreciate what you're saying; my wife and I met online also.
I think the internet has also helped make identity politics more pervasive and have spread a ton of very unhelpful "information", e.g., "demoncrats and celebrities eat aborted babies to look younger," or "the earth is flat and the government controls the weather," or "the election was stolen and we need to storm the capitol to take our country back".
In the nineties, people were much more sympathetic to common sense gun control, and we did things like pass the Brady Bill. Now, it seems like any form of gun control is fought tooth and nail. In my opinion, this is a case of identity politics, accelerated by the internet.
I don't have a solution to this problem. I'm just saying that our predictions about how networked computers would be used, and the reality of how they are used today, are starkly different.
"infinitely knowledgeable"? Bollocks. Big deep learning models work fine, and if you forget the part where big firms hoard loads of data without any kind of consent it's pretty cool tech. But that's not knowledge, and it has already been explained many times.
But my beef is more with the author's view of inequality.
So, bourgeoisie isn't stealing from the proletariat? How to describe a society when one percent gets more than half of all new wealth every year? [0]
Teslas are now ‘affordable’ cars? Hell, the cheapest model costs twice my yearly income.
Can't say I'm struggling to understand how some people can think this way though. Can't say I'm happy about it either.
Do you know what this median annual household income of $71k means outside of the USA? Inequality is a global problem, unfortunately. Cheapest Tesla being generally affordable in one country doesn't mean it's not fairly expensive.
Side notes:
- the World Inequality Database is full of interesting data (https://wid.world/)
- I'm not sure the median income is always relevant (I mean, if half of a given population couldn't afford to eat properly, I wouldn't say food is "generally affordable")
- fortunately, not everybody needs a Tesla, let alone a car
Well, what's "generally affordable"? $30k for a car isn't even that expensive these days (in the US at least), relative to the competition. The average cost of a new car in the US is almost $50k. Except for the very cheapest new cars you're almost certainly going to pay over $20k.
The biggest risk with AI, in the medium-term at least, is it will be used by governments and organizations with power to surveil and manipulate people on a previously-impossible scale. Automated systems monitoring everybody, pulling levers to prevent anybody from speaking out or causing trouble.
In the long run, it will be the end of human freedom
For example, it looks like Xi has been pretty actively pursuing this, based on the news over the last 10 years
> China has one surveillance camera for every 2 citizens (...) These camera [sic] checks if people are wearing face mask, crossing the road before the green lights for pedestrians are turned on. If caught breaking rules, people lose their social credit points, are charged higher mortgage, extra taxes and slower internet speed. Not only that, public transport for them gets expensive as well, and the list goes on. [1]
It's not like we're immune to this. All the malls I go to lately are packed with facial recognition systems to analyze our behaviour.
I was doing some research on facial recognition for a job where we were considering its use. I came across examples of sentiment analysis being used at Walmart and Target. They have big, conspicuous cameras in every one of their stores now. Most people assume it is for shoplifting mitigation, which it is. But that is not all. They can use it to track individual customers' paths through the store and then use cameras at the checkout to analyze your facial expression and rank your mood. They use this data to optimize their store layouts.
The other use case was at high-end retail stores. Places like Louis Vuitton, Hermes, etc. They have facial recognition to log high spenders. If you drop 10k at Coach and then go down the street to Valentino, their security system will recognize you and flag you as a VIP customer. A specialized customer assistant then comes out to give you personal attention, maybe to invite you to the private shopping experience.
I learned about these in 2017 I believe. Most non-technical people who I've told about this think it is some conspiracy theory and they often don't believe it. For some reason people are scared of the government but they remain totally docile or willfully ignorant in the face of corporate use.
When we were evaluating employee entrance systems for a FAANG back in ~2012, we were demo'd systems that could do retinal scanning on streams of people as they walked through the turnstiles, and they could read your eyes even through polarized sunglasses.
I can't recall its name though - but yeah - OpenAI has basically cranked the capabilities for extreme real-time surveillance up to 11.
“Every person will have an AI assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI assistant will be present through all of life’s opportunities and challenges, maximizing every person’s outcomes.”
…will more likely turn into an indoctrination and compliance machine under authoritarian regimes.
"But you can train it however you want!" is the main counterargument I hear against this (alas, my strawman).
Sure, you could, assuming accessible resources to decent compute nodes and good training data, but something tells me that this will be in the hands of a very few.
Also, even if decent AI remains affordable for most people, most people will still mindlessly take the default route of corporate/government-pushed apps.
We haven’t found a way yet to prevent authoritarian regimes from arising and spreading, so it’s unclear how AI will save the world. On the contrary, AI will make it easier for authoritarian regimes to expand and maintain their control.
All regimes asymptotically tend towards authoritarian in the long run; from their POV it's just easier to do their job that way. AI will greatly accelerate this trend.
AI will also make it easier for freedom fighters and democracy. If everyone on earth held a four-year bachelors degree, don't you think that would pose an issue for demagogues and emotion-based politicians?
Hopefully it's obvious to all of us that no technology, no matter how neat, will solve the social problems of humanity for us. And this is just the start of how I disagree with the author. But I would say it's not exactly a detriment either.
Would it? If everyone on earth did have a 4-year bachelor’s degree, but you lived in a social-credit driven, machine vision, surveillance world, and you’re punished every time you step out of line—what are all those educated people going to do? They can’t even meet or communicate in private
The idea is that educated people would not be as likely to support authoritarian regimes. If you take authoritarian power as a given… idk what’s the point of anything?
I do agree that increased surveillance is very scary. But we have to maintain hope that the future holds promise for the less powerful to influence their government - in the 1770s it was via muskets, and in the 2070s who knows what it will be. (As an aside, I personally doubt it’s going to be guns…)
I don’t know about that. Ideologues who provide the justifications for authoritarian regimes tend to be very educated people. In addition, opposing authoritarian tendencies might require more the qualities of activists and physical action than intellectual education.
How about a world in which bullies who spit on the social contracts (and society in general) get a free ride, benefit from it and win? Would you prefer that?
Weren’t they responding specifically to a comment that said AI would make it easier for “freedom fighters and democracy”? I don’t understand how your questions relate to that. Furthermore, you seem to be implying that AI will help deal with the bully problem… how?
You’ve also given an example that the world in which educated people can’t communicate in private is a poor place.
I’m giving a counter-example that a world in which there are no records and no supervision of antisocial behaviors could also be a poor place.
The way that AI can deal with bullying is exactly the same a good teacher could. You build a model of a good and intelligent teacher that deals well with bullying and put it out. Bullying gets solved. Everyone who is not a bully is slightly better off and can enjoy it.
Not to mention members of certain "high-risk groups" getting their own AI police officers to issue warnings and citations. Obviously not based on race, just based on objective risk factors such as having a direct social link to someone with an arrest record...
>>The biggest risk with AI, in the medium-term at least, is it will be used by governments and organizations with power to surveil and manipulate people on a previously-impossible scale. Automated systems monitoring everybody, pulling levers to prevent anybody from speaking out or causing trouble. In the long run, it will be the end of human freedom..
THIS is exactly what I see happening. I personally think the "pause" on development is bullshit NationState jockeying for dominance by trying to gAIn AI Dominance - Israel, MI5, NSA, CCP <--- Every Intel Agency on the planet is building/buying/stealing/weaponizing whatever they can.
I wonder what/where Palantir is in this fight?
It feels REALLY Anime with the Sabre Rattling between the US and China over Taiwan and TSMC's chip fabs for AI cores.
The hardware is still in its relative infancy - but it will be really interesting in 5 years when we see the performance of 1-hour or 1-day problems for massive AI apps cut down to minutes or seconds.
The examples that you’ve given (obeying traffic laws and wearing masks during a pandemic) seem to be perfectly good social behaviors.
It’s a balancing act between freedom and law. Go one way too far - you get Tiananmen Square and reeducation camps. Go another way too far - you get storming the White House and school shootings.
I hate this sort of thinking. You are making the implicit assumption that everything about our social environment turns on a single variable: heavy-handed enforcement.
When I put it like this, I hope you can see that it doesn't work like that. There are hundreds of variables you could change that would affect everything. We can prevent the storming of the Capitol (it was, btw, the Capitol, not the White House, that got stormed) without moving even one micrometer in the direction of reeducation camps.
I’m not sure. About half a year ago I put up a perfectly good sign kindly asking people to let the grass on my front yard recover. The grass was getting a bit too much dog urine, from dog owners trespassing onto my property to let their dogs urinate and poop there.
You’d think that the kind neighbors could read and pause for a bit. But no. They care about their pets. And they happily let them go, resulting in damage to the landscaping and bills to clear the contaminated ground and replace the grass.
I’m guessing these are the same people who wouldn’t wear a mask and spread their disease somehow, during the pandemic.
I initially read the comment you are responding to differently, in that I saw the ‘observer’ in the statement as not the state but the community, on re-reading I’m not sure that makes sense.
All the same, reading HN politics, it often seems that a spectrum is presented that spans from freedom to state oppression.
There are democracies where the public will not accept the state using power for its own benefit, but is comfortable with the state enforcing the social contract, because there is a stronger sense that the contract is defined democratically. This may simply be a matter of population size; the state in a nation of 20 million is a different beast to the state in a nation of 350 million.
This brings up another question... How is a social contract defined when you have 20m people and 50m AI enabled bots forming relationships with them trying to change their mind on said social contract?
Yes. If this is a competence test, which allows you to demonstrate your understanding of social contracts, you should absolutely wait for the green light.
For example, red light cameras. Makes perfect sense, right? Running red lights is bad, as it can cause harm, and harm would be a violation of the social contract.
.... except that cities were commonly taking the yellow light timing down far below recommended levels in order to maximize profits.
AI in a world that demands profits spells the end of freedom.
The example that you’ve given of light profiteering with yellow lights doesn’t sound like the end of freedom to me.
Particularly, if you’re allowed to contest that yellow light fairly and efficiently, using the records from the same cameras and AI technology.
Ideally, you’d just register a complaint and the thing would give you a video and a clear explanation of what you did wrong and why it was not a good idea. Traffic laws are relatively straightforward after all.
See, this is what the privilege of someone that has the resources to defend themselves sounds like.
>if you’re allowed to contest that yellow light fairly and efficiently,
That's a pain in the ass already in our current system and there is no alignment in our politics that will make it better. If you have money and don't care, you'll pay the ticket. If you have money and do care you'll spend a lot of time with records requests, and resubmitting records requests and pushing trial dates because the system won't give you the records you need in time.
The legal system in no way works on ideals. It's not until you're on the wrong side of the law that you realize exactly how fucked up it can be. It is then that you realize how many laws exist that are unenforced until you bring too much attention to yourself.
It still doesn’t sound like the end of freedom. But yes, ideally the system should be set up so that the amount of money you have doesn’t influence how much it sucks to be on the wrong side of the law. It’s a regressive tax otherwise.
How is it a competence test? It doesn't show any competence. It's a compliance test to see if you are willing to behave non-optimally and sacrifice your time to prove that you are compliant.
In fact, a person who goes through a red light when it's safe to do so might, on the whole, be paying more attention than the one who only watches the lights. The one who watches the lights may miss a car speeding by even though the light was green.
I'd even say that the social networks are a precursor to this. Everyone is constantly observed by everyone else there, and many use a fake persona to try to "fit in" or, god forbid, say something they will regret later. And those who aren't on them have trouble keeping in touch with the rest. Smh
Wow. Spoken like someone who hopes to (believes they deserve to) profit from the evolving technology, who views anyone not like them with bemusement and detached curiosity or, more likely, derision. Why do we - humans - feel it is so damn well appropriate to outsource our responsibilities and accountabilities as humans and integral members of an ecology to technology and, in doing so, forego a necessary immersion into and deep reverence for the world, substituting instead a tech-derived and mediated superficiality, detaching ourselves from our biology mostly for the sake of self-gratification and self-grandeur? The bigger question is, what values do we - the collective we - attribute to a world and a life to be saved and will our AI adhere to such values?
> Why do we - humans - feel it is so damn well appropriate to outsource our responsibilities and accountabilities as humans and integral members of an ecology to technology and, in doing so, forego a necessary immersion into and deep reverence for the world, substituting instead a tech-derived and mediated superficiality, detaching ourselves from our biology mostly for the sake of self-gratification and self-grandeur?
I mean, the simple and inelegant answer is evolution. Maximize mating opportunity and minimize energy expenditure. Grandeur means mating opportunity. Passing off responsibility means minimizing energy expenditure.
Humans aren't transcendent beings. We're just good at math.
> The bigger question is, what values do we - the collective we - attribute to a world and a life to be saved and will our AI adhere to such values?
Ask different groups of people and you'll get different answers. I don't know that there are "human" values.
> The bigger question is, what values do we - the collective we - attribute to a world and a life to be saved and will our AI adhere to such values?
I think to believe that we even know the answer to that question is high arrogance. To believe that we know all and should make the world conform to it is the height of hubris. The same sort seen in repeated failures of megalomaniacal central planning. We don't even know what we want ahead of time.
Not to mention that our solutions have all proven emergent, based upon other properties rather than upon some lofty principles. "Making Wheat Shorter" as a goal to prevent famines sounds like something straight out of a Jonathan Swift style satire, but it more or less sums up Norman Borlaug's high-yield, disease-resistant dwarf wheat. But that is what goddamned worked to save billions from starvation. Not going based upon assumptions of "oneness and harmony with the world" or whatever nonsense we are accused of lacking by do-nothings.
Almost as arrogant as believing that a "necessary immersion into and deep reverence for the world" or said lofty sentiments actually amount to or mean anything. You get derision because you appeal to lofty sentiment which conveniently amounts to "do nothing but feel superior and look down upon others".
I’m honestly hoping AI lets us transcend or at least fully control biology. It sucks being trapped in a bag of meat you have no control over whose only optimized function is to reproduce and then die. No thanks, I’ve got more important shit to do.
You're using "cognitive mimicry" to deride AI, but isn't that the best way to augment our intelligence - by mimicking it?
More to your point: profound doesn't really just mean "deep" in this context, but even if it does, I don't see how LLMs in their current state (with some better UX) don't do that. Sure, you can't ask it to guess your deepest inner beliefs, but engaging with an LLM in a dialogue can greatly speed up deep, focused knowledge work, such as brainstorming ideas, structuring writing, investigating hypotheses, etc.
In short, I think it's more than a bit misguided to say AI only has shallow benefits because it's not yet capable of extremely deep thoughts on its own.
The AI itself may be wide and shallow, but it can be a tool to accomplish things that are possible for human intelligence but impractical due to time, focus, economics, coordination, etc.
It's not augmenting anything at a humanity level. It might give people access to skills that they don't possess, but I don't see it coming up with new styles.
Why not? If you can make a functional LLM it's a fairly small step to an LCM (Large Culture Model) and LEM (Large Emotion Model) as submodules in a LBM (Large Behavioural Model).
The only difference is the tokens are rather more abstract. But there's really nothing special about novelty.
If you have a model of human psychology and culture, there isn't even anything special about cultural novelty fine-tuned to tickle various social, intellectual, and emotional receptors.
Training data is the main thing. We have lots and lots of text, and text has the special property that a sequence of text contains a lot of information about what is going to come next and is easy for a user to create. This is a rather particular circumstance: the combination of so much freely available data and there being a lot of utility in a purely auto-regressive model. It is difficult to think of other modalities that are in a similar position.
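To make that concrete, here's a minimal sketch (my own illustration, not something from the parent comment) of why running text is such convenient supervision for an auto-regressive model: every position in a sequence is a free (context, next-token) training pair, with no human labeling required.

    corpus = "the cat sat on the mat".split()

    def training_pairs(tokens, context_size=3):
        """Yield (context, next_token) pairs straight from raw text."""
        for i in range(1, len(tokens)):
            yield tokens[max(0, i - context_size):i], tokens[i]

    for context, target in training_pairs(corpus):
        print(context, "->", target)  # e.g. ['the', 'cat', 'sat'] -> 'on'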
In all you described there, you are talking about anything but humanity. You described hypothetical artifacts that, if successful, would be vehicles of a synthetic species that could imitate human behavior. Again, nothing to do with humanity (unless you are bought into some idea of seeing humanity as dinosaurs headed for extinction and transhumanism as the new reality).
Don't forget about the good ol' tech industry bait-and-switch. Quoting myself from earlier today:
> There's the good ol' bait-and-switch of tech industry you have to consider. New tech is promoted by emphasizing (and sometimes overstating) the humane aspects, the hypothetical applications for the benefit of the user and the society at large. In reality, these capabilities turn out to be mediocre, and those humane applications never manifest - there is no business in them. We always end up with a shitty version that's mostly useful for the most motivated players: the ones with money, that will use those tools to make even more money off the users.
It applies to Apple's automated emotion reading, and it applies even more to major VCs telling us AI will Save the World. As in, maybe it could, but those interests being involved are making it less likely.
I don't agree. You're right that this article really focuses only on the positive. That said, it is indeed true that technology has changed the world for the better. If one were to somehow prove that "facebook is net bad for young children", it still doesn't mean the web, advertising, and everything that makes up that product should be destroyed.
I think you're missing the point. I'm not saying the tech is bad in principle. I'm saying that these people are out to scam you. They're trying to sell you a vision that they fully know will not materialize - because while possible on paper, it's not possible in our current economy, not right now, not at the current stage of technology, and most importantly, it's not why they actually want it to happen.
As for
> If one were to somehow prove that "facebook is net bad for young children", it still doesn't mean the web, advertising, and everything that makes up that product should be destroyed
you're right that this does not follow - however, advertising absolutely should be destroyed, because it's a cancer on modern society, and it's not caused by bad tech companies - it's what causes tech companies to go bad (as well as many other nasty things, see http://jacek.zlydach.pl/blog/2019-07-31-ads-as-cancer.html).
> They're trying to sell you a vision that they fully know will not materialize - because while possible on paper, it's not possible in our current economy, not right now, not at the current stage of technology, and most importantly, it's not why they actually want it to happen.
I used to think in this way as well, until I realized that it just isn't true. I've seen too many tech people who genuinely believe that their tech will save the world. If only people could see how, behave in "the right way", etc. They are no different from any of the messiahs outside of tech.
> I've seen too many tech people who genuinely believe that their tech will save the world.
I briefly was like that too. All high on TED talks and startup economy propaganda. Then, over the years, I saw all those promises fail, and I saw how and why they failed, and what we got instead.
And even earlier than that, I learned how to craft this kind of bullshit myself - how far a good story can take your worthless university project, even if at no point in telling it you were actually lying.
(I may have also spent an unreasonable amount of time pondering why technology in the real world is so shitty compared to sci-fi settings, even dystopian ones.)
So it's true that not everyone is out there trying to scam you on purpose. Some are still naive enough to believe (I say that without passing bad judgement - it would be nice if things were so simple, but they just aren't). But some are absolutely out there to get you - the regular Joe, or the regular Joe & Jane Inc. Not because they hate you, or the society you live in, but because there is monetary value that can be extracted from you, which they can put to some other use.
And yes, there are also those messiahs you mention, the people insisting their stuff can save the day, "if only people could see how, behave in 'the right way', etc.". They are a danger to regular people too, but for the entrepreneurial/investor class, they're useful fools. They're tools that can be employed very cheaply, because they already believe the bullshit they're selling.
One does not preclude the other. More often than not I've seen that it's money that causes messianism. You are lucky enough at some point to hit the jackpot with some company; it makes you a lot of money, and it makes you feel smarter than other mortals, and special. So special that you think you have the right to teach everyone else how to live their lives, how to think, etc.
> it is indeed true that technology has changed the world for the better.
A big part of the "technology" in the past 150 years has been "machines". And all those machines have one thing in common: they consume energy (from the wheel to the latest iPhone).
Yes, the technology allowed us to increase crop yields per acre, to ship goods across the world, to keep food cold, to warm/sterilize it, to find new drugs, to mass-produce clothes for a few cents, etc.
That miracle comes from access to cheap and dense energy, also known as fossil fuels. If oil/gas/coal had not been found/used, or were just not as abundant, we might not have known the same past 150 years.
Now, that miracle is not clean. It produces a very stable molecule (CO2) that, after 150 years, is starting to change the "world", and not necessarily for the better.
So, yes, technology improved the lives of billions, but maybe at the price of some species' extinctions, and it might hurt our own if we continue to feed those machines/technologies with fossil fuel.
Hopefully, we will reassess how good a new technology is through the prism of its dependence on fossil fuels. Can't wait for fission or fusion to be widely used instead.
> Yes, the technology allowed us to increase crop yields per acre, to ship goods across the world, to keep food cold, to warm/sterilize it, to find new drugs, to mass-produce clothes for a few cents, etc.
> That miracle comes from access to cheap and dense energy, also known as fossil fuels. If oil/gas/coal had not been found/used, or were just not as abundant, we might not have known the same past 150 years.
That's only because humans were too naive and risk-averse to recognize that once we split the atom we had access to 1000x the energy with comparable reductions in waste. Perhaps if we had AI back in the mid/late 20th century it could've explained to us the benefits of nuclear power vs fossil fuels. Instead we fell for hallucinations of our own making via events like Three Mile Island, Chernobyl, and Fukushima.
"hallucinations of our own making via events like Three Mile Island, Chernobyl, and Fukushima"
I suppose the millions executed under extremist politics were also "hallucinations" then... (e.g. German holocaust, Soviet prisoner camps)
These were all very real events with very real repercussions. They did not conclusively prove there is no "safe" nuclear power, but they did illustrate the consequences of getting it wrong.
The "new" procedures around "cleaner" fission processes, e.g. with fuel recycling, all sound nice, but ultimately they carry the same costs of "getting it wrong". Having objections to safety is not a "hallucination".
Unless you can clearly explain why a dangerous process has been made "safe", without insulting people's intelligence or understanding, i.e. hand waving, you cannot prove you understand it well enough yourself to claim safety. This is my problem with the spate of "nuclear is safe" takes going around - if only a small subset of highly trained personnel can operate and diagnose it safely, without repercussion in a closed loop, you cannot claim safety, just that there are specialized processes that under correct supervision might not be harmful, maybe.
> These were all very real events with very real repercussions. They did not conclusively prove there is no "safe" nuclear power, but they did illustrate the consequences of getting it wrong.
The worst consequences of these events came from the evacuation of the zones, not from the radiation.
Yes, there were deaths due to direct radiation, about 31 for Chernobyl and none for Fukushima. But that's very small compared to all the deaths due to coal energy pollution, and even hydro, which is catastrophic when a dam breaks (which happens far more often than nuclear plant accidents).
So this is how we can talk about the safety of nuclear plants: by looking at the stats of the last 70 years and comparing them to the alternatives. Because unless we want to go back to candles and windmills, we can't just say "nuclear seems dangerous so it's safer not to use it". We have to consider what we'll be using to produce electricity instead.
> The worst consequences of these events came from the evacuation of the zones, not from the radiation.
As if, had we only not evacuated people and left everyone around, nobody would have died and everything would have been better...
> Yes, there were deaths due to direct radiation, about 31 for Chernobyl and none for Fukushima. But that's very small compared to all the deaths due to coal energy pollution, and even hydro, which is catastrophic when a dam breaks (which happens far more often than nuclear plant accidents).
This is not a good argument, or at least it's a bad statistic. You need to look at deaths per <X>, not deaths in absolute terms. Sure, deaths are fewer with nuclear, but deaths are also fewer per coconut. Deaths per terawatt is a bad argument because, again, there are so many fewer nuclear plants and the terawatts are also lower.
A better analysis would be acres of land made uninhabitable per energy source. It doesn't matter that you have all the electricity if nobody can live anywhere, whether it is coal causing massive wildfires in Canada or failed nuclear plants evacuating 40-mile regions (and you need to be careful even then - the wildfires are caused by heat, which is caused in part by terawatts, although largely by fossil fuels and chemicals in the atmosphere, and dam floods are caused by lack of upkeep, a problem shared by nuclear reactors...)
Energy is dangerous (mechanical, chemical, etc.) and produces waste.
And great power requires great responsibility.
"Safety" can be seen through different lenses. If we measure by number of deaths, then nuclear is safer than dams or coal/gas power plants.
Nuclear requires some expertise to run safely, but now the plant designs are better. I am wondering if the fear of nuclear is greater than the actual risk.
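For a rough sense of that comparison, here's a back-of-the-envelope sketch using approximate, widely cited deaths-per-TWh estimates (roughly the figures reported by Our World in Data). The exact numbers are illustrative only; the point is normalizing by energy produced rather than counting raw deaths.

    # Rough comparison of energy sources by deaths per unit of electricity produced.
    # Figures are approximate, widely cited estimates, used here purely for illustration.
    deaths_per_twh = {
        "coal": 24.6,
        "oil": 18.4,
        "natural gas": 2.8,
        "hydro": 1.3,     # dominated by rare but catastrophic dam failures
        "nuclear": 0.03,  # includes estimates for Chernobyl and Fukushima
        "wind": 0.04,
        "solar": 0.02,
    }

    annual_twh = 1000  # a hypothetical grid producing 1000 TWh per year

    for source, rate in sorted(deaths_per_twh.items(), key=lambda kv: kv[1]):
        print(f"{source:12s} ~{rate * annual_twh:7.0f} expected deaths/year at {annual_twh} TWh")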
I agree. In general, the accumulation of knowledge in accessible form, and access to knowledge, has been good for humans. And our AI is not an alien to us. It’s just our books compressed.
Now, there could be a problem when a bad actor applies massive amounts of knowledge towards destructive purposes. If you let anyone purchase an assault rifle or a nuke, there will surely be an increased likelihood that an assault rifle or a nuke will be used. The situation is somewhat similar with AI. Refined knowledge is dangerous.
>And our AI is not an alien to us. It’s just our books compressed.
This is a potential mistake.
Do you have pets? Even if you don't you'd generally consider animals to be intelligent, right? But even the smartest animal you've seen is far dumber than the average human being (except visitors at national parks operating trash cans). I mean you can teach many kinds of animals quite a bit, but eventually you hit physical limits of their architecture.
Even humans have limitations. You can learn the most when you're a child, and you do this in an interestingly subtractive method. Your brain is born with a huge number of inner connections. As we learn and age we winnow down those connections and use a far lower portion of our energy output on our mind. Simply put we become limited on how much we can learn as we age. You have to sleep. You get brain fog. You end up forgetting.
With A(G|S)I there are many unanswered questions. How far can it scale? Myself, I see no reason it cannot scale far past the human mind. Why would humans be the most optimized intelligence architecture there is? It seems unlikely evolution would have created that. When you ponder the idea that something could possibly be far more intelligent than you, you have to come back to the thought about your pets. They live in the same reality as you, but you exist on an entirely different plane of existence due to your thinking abilities. What is the plane of existence like for something that can access all of humanity's information at once? What is the plane of existence like for something that can see in infrared, in ultraviolet, in wireless, in the context of whatever tooling it can almost instantly connect to and work with, something that can work with raw data from sensors all over the planet feeding data back to it at light speed?
Now, you're most likely correct in the sense that before we get some super ASI that is far beyond us, we'll have some AGI just good enough to empower someone greedy and cause no shortage of hell on earth. But if somehow that doesn't happen, then we still have the alien to contend with.
It will be very alien to us - "it's just predicting the next word" is what I have heard repeatedly said about ChatGPT.
First, AI is far more than just ChatGPT; don't presume this is the same thing happening everywhere.
Second, the LLMs are all reasoning machines drawing on encyclopedic knowledge. A great example I recently heard is of a student parroting the names of presidents to seem smart - it isn't thinking in the exact manner that we do, but it is applying a kind of reasoning. ChatGPT may be doing something akin to prediction, but it is doing it in a manner that exposes reasoning. As the parent mentioned, our own brains use networks that are refined over time by removal, and a huge number of our behaviors are "automatic". If you go looking for "consciousness" you may never find it in a machine, but it doesn't really matter if the machine can perfectly mimic everything else that you do or say.
An unfeeling, unconscious, yet absolutely "aware" and hyperintelligent machine is possibly the most alien thing we can fathom, and I agree there is no "end game"; there is likely no mathematical limit to how far you can take this.
Human minds also tend to predict the next word. And we still don’t know how intelligent behavior and the capability to model the world emerge in humans. It’s quite possible that they are also based on predicting what will happen next, and on compressed storage of massive amounts of associative memory with attention mechanisms.
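For readers unfamiliar with the term, here is a minimal sketch (my own illustration, not anything specific to ChatGPT) of what an attention mechanism does: a soft, differentiable associative-memory lookup, where a query is compared against stored keys and the result is a similarity-weighted recall of the stored values.

    import numpy as np

    def attention(query, keys, values):
        # Compare the query to every stored key (scaled dot products)...
        scores = keys @ query / np.sqrt(query.shape[0])
        # ...turn the similarities into weights that sum to 1 (softmax)...
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        # ...and recall a weighted mix of the stored values.
        return weights @ values

    rng = np.random.default_rng(0)
    keys = rng.normal(size=(5, 8))              # 5 stored "memories", 8 dims each
    values = rng.normal(size=(5, 8))
    query = keys[2] + 0.1 * rng.normal(size=8)  # a query close to memory #2
    recalled = attention(query, keys, values)   # dominated by values[2]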
The books are not alien to us. A mind that's born out of compressing them might be an entirely different thing. Increasingly so as it's able to grow on its own.
>Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.
This bothers me.
Who trains these AI tutors and how do we prevent the system they're embedded in from churning out the same cookie cutter individuals, each with the exact same political beliefs and inability to comprehend the grey and nuanced?
Do we even want perfect tutors at all? The lessons I remember from school didn't always come from the best and brightest the profession had to offer. I would even go as far as to say that some were rather flawed individuals, in one way or another. That "wisdom" though has shaped me for the better as an individual. You're not going to find that in any textbook, much less a LLM.
> Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful.
Which also means: infinitely capable of performing a job in their place. What will these children study for? Teachers have always transmitted the knowledge that would otherwise have disappeared with them to the only available recipients. When intelligent beings can simply be replicated at zero cost and have no expiration, what are children even learning for?
Because, at least for many, the desire to learn is hardwired and gives us an immense feeling of satisfaction and pleasure. I read/study/try many things that have no (or negative) material value and are done far better and more efficiently by someone else or by a mechanical/electronic device. Chess, for example, seems to still be pretty popular even though even the best grand masters can be easily trounced by a computer.
I have been around enough humans to know this is naive. There are so, so many humans who want nothing more than to laze around and indulge in vices while generally being completely useless.
You don't have to look very hard to find them, and I really believe this would become the norm if people weren't required to work and thus got no real reward for their efforts.
> who want nothing more than to laze around and indulge in vices
Or is that the effect of poor-stimulation environments?
Like the study that showed teenage drinking plummeted[1] in Iceland when teenagers were given stimulating activities to do instead.
Anecdotally, I mostly drink when I’m bored, and I don’t drink when I am doing interesting things.
[1] https://duckduckgo.com/?q=iceland+teenage+drinking+sports It was a variety of programs including a curfew for teenagers, but one leg of the programs was “Through the program, the municipalities funded and expanded after-school activities, from sports to gymnastics, to music, art, and ballet. The basic idea is to keep kids busy – and out of trouble – and help them find meaning in their lives that dissuades them from seeking alcohol or drugs in the first place.” The nighttime curfew wouldn’t have prevented drinking during the daytime or evenings, which also declined.
> When intelligent beings can simply be replicated at zero cost and have no expiration, what are children even learning for?
This is probably the most depressing thing I've read all year. And it's been a rough year. I hope you just had a "brain-fart" as the kids say, but if not: I implore you to think of the purpose of your life, and what it would be without the need to work 40h/week to survive.
This is what happens when a normal person lucks/scams their way into billions: they think their random musings must be profound.
Like, how would such a war even work?? We have AI killbots shoot at each other in no-man's-land, sure, but then what? The side with fewer killbots just gives up, like a fantasy-novel monarch betting the realm on a trial by combat? Desperate people do desperate things, and those people having nuclear missiles already scares me enough - I don't want to see them with self-replicating von Neumann bomb drones or whatever other horrors the Pentagon dreams up for us.
Yep. If we ever manage to remove the humans from the battlefield, the battlefield will simply move to where the humans are.
Anyway the point of war is to destroy the civilian infrastructure and the population that operates it, so they can't support an army. Which is kind of insanely circular because the army is there to defend the civilians and the infrastructure in the first place. War is just a mad thing and getting robots to do it will not make it any less mad, only more efficient at being mad.
This is true though. Modern warfare has become much less deadly to civilians. Just compare Russia's total, brutal destruction of Mariupol, with its barbaric, indiscriminate artillery barrages, and Ukraine's almost bloodless liberation of Kherson, without a single artillery or air strike on the city itself — a very clear contrast between 20th- and 21st-century warfare.
Wars will never go away, but by making them more precise and intelligent, we can make them not as horrible as they were in the past.
There's nothing about Russia's terror bombing that's related to not having advanced technology. Russia is using terror bombing because their strategic calculations have determined that weakening the Ukrainian nation would give them a strategic advantage - they've used advanced drones for this purpose as well as advanced missiles. If they had even more advanced systems, they would use them similarly.
Perhaps you could argue an advanced social system wouldn't target civilians, but that's a different issue (and still a hard one).
...by making them more precise and intelligent...
Precise technology is just as effective at precisely targeting civilians as it is at precisely targeting soldiers, and there has been no end of forces that view various civilians as the enemy.
And indeed, nations have very seldom targeted civilians because of a lack of precision - because civilians were standing next to soldiers or something similar. The human shield phenomenon has happened, but the Nazis targeted London for bombing because they wanted to break the British nation. Etc.
We have to remember they bombed that theater in Mariupol (with a "precision" guided missile no less) not despite the fact that there were children and mothers inside -- but because of it.
I agree with your assessment as far as Ukraine is concerned.
However, I hope it's not a strawman to assume you're arguing that there is no progress in warfare in the sense of harm inflicted upon civilians. What would you prefer as a civilian: living in a country being conquered by Julius Caesar or Genghis Khan, occupied by the Nazis in WWII, or living in any of the countries occupied since WWII (including Ukraine)?
We even used to have a different word for it: "conquered". What was the last country in history where this word would be appropriate?
However, I hope it's not a strawman to assume you're arguing that there is no progress in warfare in the sense of harm inflicted upon civilians.
My point is specifically that progress in the technologies of war doesn't by itself promise that things will be less brutal. Quite possibly other things have produced that progress. I make that clear in my parent post.
I would also note that technology produces unpredictable changes in the strategic situation, and the actual result of a changed strategic situation is itself unpredictable. So where a technology change might take us is unpredictable, and unpredictable over time. Notably, nuclear deterrence has so far worked well for keeping the world peaceful and is something of a factor in the relative pleasantness of the situation you cite. But if nuclear deterrence were to slip into nuclear war, the few survivors would probably think of this technological advance as the worst thing the world ever saw.
>What would you prefer as a civilian: living in a country being conquered by Julius Caesar or Genghis Khan, occupied by the Nazis in WWII, or living in any of the countries occupied since WWII (including Ukraine)?
East Timor, Tibet, Darfur, Iraq, the Central African Republic - there are lots of post-WWII events that are easily as wretched for the victims.
I’ll preface this by saying that I have never known war in my lifetime and that I absolutely don’t condone it.
That said, isn’t the point of war also that it’s horrible and barbaric? If war isn’t that anymore, won’t it be much more frequent and casually started as a result?
Again, I’m absolutely not saying it’s a good thing that war maims and kills people, but I see these side effects as a deterrent. There is a component of terror to it and that’s what makes it even worse.
If it’s enlisted professionals killing each other, or even robots destroying each other, it can go on for much longer. And how do you determine the “winner” in that case? If you can keep feeding robots into the fight, it’s never-ending, I’d think.
> That said, isn’t the point of war also that it’s horrible and barbaric? If war isn’t that anymore, won’t it be much more frequent and casually started as a result?
Yes (usually), to the first question. The second begs the question though.
Wars are destructive and enormously expensive. Only a tiny fraction of human leaders wielding a disproportionate amount of power have the agency to start wars, and they do so in order to pursue specific (but varied) objectives. Since the cost is high, no one does this lightly (even the "crazy" ones, because they would already have lost power if they weren't smart and calculating).
AI may provide avenues to enhance the efficacy of wars. It may also provide avenues to enhance other strategies to achieve those objectives. In all cases, we can expect AI will be used to further the objectives of those humans with the power to direct its development and applications.
It is therefore ludicrous and self-interested speculation to claim that AI will reduce death rates. Andreessen signals this with the preface "I even think" so that he can make the claim without any later accountability. The reality is, future wartime death rates may or may not decrease, but even if they do, we likely won't even be able to credibly attribute the change to AI versus all other changes to the global geopolitical environment.
That said, isn’t the point of war also that it’s horrible and barbaric?
Of course - that's precisely the point. The idea that any technical innovation can make it less so (or make war less likely) runs counter to all historical observation.
I'm not sure I agree with that. I don't think the _point_ of war is to be barbaric - that's a by-product of forceful expansion of power. Regardless of how killing another human is done in the context of conflict, it will always be considered barbaric, but the _point_ of the conflict isn't to maximise barbarism.
I think (and know very little, so could be wrong) that the purpose of war is to expand influence. This can take the form of access to new resources (whether that be land, access, air space, whatever) or forming a buffer between one country and another. There are probably other reasons too, like simple ego in some cases.
There are other ways to expand power bases too - such as China's heavy financial and infrastructure investment in Africa and South Pacific nations, or attempting to undermine another country's social structures. These are longer and harder to implement, but yield better results with practically no bloodshed.
> That said, isn’t the point of war also that it’s horrible and barbaric?
No. The conqueror never wants war — he prefers to get what he wants without any resistance. It's only the defender who has to wage war to defend itself from aggression.
I don't think that has much to do with AI. Russia is seeking to subjugate Ukraine and terrorize the population into surrendering, while Ukraine is attempting to protect the population.
If Russia had more military AI, they would use it to do more of the same thing they're doing with all of their current technologies.
But the same technology can also be used to make war even more indiscriminate and deadly, you know. Or even where it's less so, it can be used by the wrong side, and even help them win in the end. While the companies that A16Z will inevitably invest in just keep raking in the profits, either way.
The “precision strikes” of the US weren’t very kind to Iraqi civilians either. War is horrible and it probably should never be viewed as anything else. Otherwise the temptation to start wars is too big for some political leaders.
They were trained by cheap labor in African countries, using abusive practices with no consideration for ethical concerns. The worst part is they didn't even care about how this would be seen by the public, since the public is just so blindingly hyped on it.
Do you have a smartphone? Pretty sure it was assembled by cheap labor in third-world countries using abusive practices with no consideration for ethical concerns. Maybe you have a laptop? Same story. Did you consider this before purchasing and using these devices?
It doesn't seem helpful to try to impugn a person's morals in an effort to discredit a valid concern.
(And I say this as someone who does think about electronics supply chains, has previously owned a Fairphone and since has only purchased second-hand phones.)
It's neither helpful nor fair to attach systemic issues of capitalism to a novel technology you personally don't like.
Yes this is a valid concern. Yet it's about the same as jumping on a random person in the grocery store with a lecture about the unsustainability of the product that person is taking off the shelf.
Sadly, buying second-hand devices for personal use won't affect these issues of the economic system we live in.
This isn't a grocery store, it's a comment thread incited by a highly positive take on AI. I think that's an appropriate place to mention these issues.
Exactly, plus the kinds of things these people were forced to see are very illegal in the US. And eventually they had to get out of that job, despite not having good economic alternatives.
There’s a difference between violation of working conditions like working hours/underpayment, and forcing people to view the kinds of pictures they were forced to see ad nauseam
I don’t think they agreed to see the kinds of things they ended up seeing. As a result, they actually pushed to eventually excuse themselves from those jobs, despite not having a good alternative. That’s how bad it was.
What's the alternative? A lot of kids are left behind in the current education system. It happens on both ends. Kids that don't get it just get pushed through the system. That's why you have schools with <20% reading proficiency and <50% math proficiency but >95% graduation rates [0]. Do you think a 10th grade English teacher is going to stop his class to teach kids how to read?
It happens on the upper end too, of course, and smart kids end up building bombs because they're bored [1].
AI tutors would be a huge benefit and would allow people to explore the topics they're interested in. Maybe a kid isn't so interested in historical battles but is more interested in the clothing people wore at the time. Or someone else might be interested in something completely different. The breadth of LLMs is a real killer feature for creativity and exploration. I think this would result in the opposite of cookie cutter, as the current system where 30 kids sit in a room and study the same thing produces cookie cutter individuals, albeit extremely poorly, as many don't learn anything.
If history's proven anything it's that if you put enough humans together we'll do almost nothing but invent more and more elaborate ways to kill each other. And sometimes to kill other things too.
And in the end why not just replace the citizenry and voters...
We could lock up those imperfect humans and just occasionally ship them some food and water. Maybe direct them to not produce so many imperfect new humans too.
Amen on the value of imperfect tutors. I'd say that the best moments of early adolescence come when you dispute something with an adult -- and are able to establish that they're wrong and you're right.
Later on, we have to learn how to do this delicately enough that we don't make enemies. But the journey to adulthood takes a big step forward during those early rushes of realizing that we can see/recognize things that our elders cannot.
> how do we prevent the system they're embedded in from churning out the same cookie cutter individuals
Everybody's already so crazy different, both from genetics and from family influence, there's no need to worry. There's so much diversity in people.
> each with the exact same political beliefs and inability to comprehend the grey and nuanced?
Education (at least in the US and Europe) is usually about teaching critical thinking, not political indoctrination. Why do you think tutors would change this? (Teaching political science, for example, is very likely to reduce your strongly-held political beliefs and lead you to see things as much more gray and nuanced.)
> Do we even want perfect tutors at all?
There's no such thing as perfect. All it said was infinitely patient, infinitely knowledgeable, etc. Computers are already infinitely patient, and even just Wikipedia alone is getting qualitatively similar to infinitely knowledgeable.
A) no one (sane) is recommending that we don't have teachers. I suppose you didn't grow up around those with wealth, who give their children massive advantages with focused, targeted, extensive 1:1 tutoring.
B) Who trains the human teachers? I don't think I need to cite any sources for the claim that many, many people in our society are already concerned about excessive uniformity and political brainwashing in our schools. I mean, just look to the nature of religious education in most of U.S. as a perfect example of completely AI-free objective brainwashing.
Since when was something advertised as "perfect" actually perfect? I'd rather worry that it will be imperfect and try to teach all kinds of random weird stuff to gullible kids.
I don't necessarily like the idea of paying for AI to train my children to be AI-dependent lifelong children who spend their whole lives around benevolent AIs.
That said, to move up the stack, the point is to paint a generally rosy picture. Not to tell us that our kids will be raised by AI to be part of the future's uniform culture.
The question is not how to create individuals with different minds, but how to create them in enough mass that it reaches 51% before governments forbid whatever you are doing. And if you have so much power that the government doesn’t come at you, what kind of power were you looking for?
"The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love."
Somebody clearly took a lot from the scene in Terminator 2 with John Connor trying to teach Arnold how to do a high five.
I think it's much simpler than this. Either you're an autodidact or you're not. Children who are predisposed to teaching themselves stuff will now have more tools to help them learn what they want to learn.
When I learned through encyclopedias, and later from CD-ROM based formats like Microsoft Encarta or The Way Things Work, there were actual people creating the chapters. Later Wikipedia came about and spun up its own flavor of community-based information; however flawed, there’s still a person curating the information.
Now there’s this AI that can spit out information with the hubris of a Harvard professor. The language model has no idea if it’s correct or not, and there’s no one curating the answers. No one can explain how it came up with the answer, but it’s correct in a lot of cases.
I don't think there's a big issue with an unreliable tutor; a Harvard professor can spit out incorrect information, too. It's up to the learner to cross-check and verify the information. If you're uncritically ingesting facts from some other intelligence (artificial or otherwise), you won't be learning much anyway. The best lessons come when you make connections between different bits of information, so if an unreliable tutor forces you to make more of those connections in the process of checking its work, so be it.
And it doubles down on being flawed, as people willing to pay those 100 bucks self-select as both gullible and having discretionary income. More or less the same story as with paying for ad-free versions of ad-supported services.
"Among postsecondary Title IV institutions in 2020–21, there were 1,892 public institutions, 1,754 private nonprofit institutions, and 2,270 private for-profit institutions."
> Who trains these AI tutors and how do we prevent the system they're embedded in from churning out the same cookie cutter individuals, each with the exact same political beliefs and inability to comprehend the grey and nuanced?
I understand you're trying to dig into some idea of political homogenization and bias by pushing this point but I really think you're missing the forest for the trees.
I grew up in a low income area in the US and the standard of public education was abysmal. The teachers were overworked with huge class sizes and they spent so much time helping kids simply graduate that they had no time to help any student who was average or above. If you learned in a non-standard way, forget about it. Forget "cookie cutter individuals", half the time the teachers would show up and not actually teach (they'd put some music on and work on other things.) "perfect tutors"? In an era before digitized gradebooks, teachers could give you whatever scores they wished, no accountability. One big function of standardized testing is to push teachers to be accountable. Even the teachers that were trying their best just didn't have the time to do any more than the absolute basics.
I studied for the SATs and AP exams by downloading textbooks from filesharing websites and studying those. Wikipedia filled in the gaps for non-STEM subjects. Nobody at my school could help me. At the time, all the teachers and counselors knew was that I was going to be okay without any help from them, at least better than the other kids. I knew that nobody succeeded where I grew up and that I needed to look outside to level myself up to succeed.
I can only see AI tutors as much better than the nothing that public education often is. The status quo is just overworked, ineffective teaching. Being able to bounce questions off an AI tutor when it's late and you're up with homework and your parents either can't help or are hostile to your educational goals (the reality in a low-income area) would go a long way. Of course, the reality is that maybe AI tutors aren't actually coming and all we'll get are LLMs trained on the contents of school textbooks, but even that is a lot better than what we have now. The important problem will be making sure that everyone gets access to these AI tutors, and not just the privileged in the elite schools who were going to be funneled into an elite college anyway.
Would it be best if we were able to shrink class sizes and have world-class teachers for children? Of course. Are countries going to be willing to pay the tax burden needed to make this happen? Probably not. In the meantime it's best for the poor kids to have the cookie-cutter, AI tutor while the wealthy have their own private tutors. It's not very different from the status quo today except the poor kids have nothing.
If you live in a country which doesn't value giving everyone a good education to begin with, how will this be better? There will be few incentives to train new models, and when it does happen it might not be for the soundest of reasons (removal of references to LGBT on one side, or language deemed hateful by the other). Meanwhile the wealthy will have the option to opt out while continuing to get the best education for their children that money can buy.
It's not a matter of valuing a good education, it's a matter of making a limited education investment go further. I doubt the harms of whatever ideological tampering are so severe as to outweigh the sheer educational benefits to low income Americans and probably many middle income folks from developing nations.
It's not like individuals are free from their biases either. Humans can always learn more and change their beliefs, but without the basics of education they'll never get to a point where they can critically analyze the world around them.
In this case there is no grift (he is likely investing in AI like literally everybody else) and these comments are nothing but noise.
"Social norms" are a poor excuse for "ostracizing" someone who isn't even here to be impacted by it, let alone the knock-on effect of getting in the way of effective communication and discussion.
I wouldn't say that anyone's ostracizing him at all (It is on the front page...), and even if they are, putting it down to "social norms" is a bit disingenuous, don't you think?
We're not talking about Socrates speaking rudely about the people in power, we're talking about a billionaire accused (seemingly [rightfully](https://www.blockdata.tech/blog/general/where-andreessen-hor...)?) of investing in bad faith to scam people out of their money. I think that's a form of theft, and calling an aversion to theft a "social norm" is technically correct but practically a massive misapplication of the term.
That last point is key. Would Elon be even richer if he only sold cars to rich people today? No. Would he be even richer than that if he only made cars for himself? Of course not. No, he maximizes his own profit by selling to the largest possible market, the world.
by this logic, why are private planes so expensive? or Rolex watches. Some goods remain expensive even if profits can be realized by selling them cheaper--so called Veblen goods.
The Lump Of Labor Fallacy flows naturally from naive intuition, but naive intuition here is wrong. When technology is applied to production, we get productivity growth – an increase in output generated by a reduction in inputs. The result is lower prices for goods and services. As prices for goods and services fall, we pay less for them, meaning that we now have extra spending power with which to buy other things. This increases demand in the economy, which drives the creation of new production – including new products and new industries – which then creates new jobs for the people who were replaced by machines in prior jobs. The result is a larger economy with higher material prosperity, more industries, more products, and more jobs.
He ignores recurring services, which have surged and are enough to offset cheaper hardware. Cable, phone plans, and internet are more expensive than ever even if computers are cheap. So many things are recurring now, like insurance, with expensive lock-in plans and high cancellation fees. Not to mention, there are many instances of hardware not becoming cheaper, such as iPhones.
Despite what I think of the crypto projects a16z has been involved in (hint: it's not positive - https://news.ycombinator.com/item?id=36073355), I actually think this essay was pretty solid.
> The employer will either pay that worker more money as he is now more productive, or another employer will, purely out of self interest.
Complete hogwash. This has _never_ been the case, save for CEOs or other executive positions.
Pay for us plebs is and has always been a function of the availability of skilled labor. With AI, you’ve suddenly created _gigantic pools_ of skilled labor almost instantaneously, since you don’t really need to be skilled, just need to know how to ask a question correctly.
With that hiring power, I would bet both of my feet that wages will absolutely go down, especially in high-skill industries. And people like Andreessen? Shrugging. “Oops, guess I was wrong, oh fucking well. Good luck with that.”
We should be creating AIs to replace or unseat rich assholes. I mean, is anyone really still wondering why some of the first “innovations” in AI have been mechanisms to replace some of the most highly paid workers in the world? People like Andreessen can’t wait to cut the legs out from under the everyday software engineer.
I'm skeptical of any claim that software will make the world better. Perhaps it's selection bias, but all I can see is software overcomplicating things to the point that any benefit derived from it is overshadowed by the maintenance burden and unforeseen externalities. I say this as a once utopian hopeful: Uber was going to get rid of the monopolistic taxi cartels, Google was going to put up balloons and fiber and give us all affordable high-speed connectivity. All they seemed to do was get in control with these pitches and then become worse than their predecessors.
I am not someone who thinks AI is going to kill us all. But I don't think it's going to usher in a new utopia either. I think probably it's going to be useful and beneficial in some ways, cause problems in others, and like always, our nature and behavior will determine the human condition going forward.
I don't know about you, but Uber has significantly improved ride-sharing for me, along with countless other aspects of life such as government services, interpersonal communication, and education.
It's a miracle I can live in another country than I was born, top income bracket due to a free internet education, effortlessly facetiming my parents whenever I want in amazing quality, learning and talking about all sorts of niche interests without any effort.
I think this is just all so obvious these days that you don't remember how much it used to suck, and that's a real testament to its greatness.
I remember, and I agree with you. These tools have improved our lives. But they've damaged our lives in other ways.
I remember waking up every morning and without a second thought, I got up, out in the world, did things I felt like doing, saw people I liked on a whim, every day was jam packed with eventfulness and dimension and interaction. I took that for granted. Now, kids don't play outside, nobody knows how to drive because their mind is somewhere else 24/7 and we check our phones before we do anything else. But the flip side is, I can know anything I want that is known by someone else with less physical effort than it would take me to make a sandwich. I can talk to almost any human being in the world while doing any mundane task anywhere in my environment. It's amazing.
But we did lose something, I believe something very important. It wasn't free, it cost us something. Was it worth it? I don't know. I think yes, but I'm not really sure.
It feels like we're at the Pareto frontier. Technological advancement used to feel like it truly was progress, but we're now at a point where advancement is simply advancement, and we're well aware that as the technology advances, there's an equal and opposite regression in society somewhere.
> I think probably [technology] is going to be useful and beneficial in some ways, cause problems in others, and like always, our nature and behavior will determine the human condition going forward.
Beautifully put. I'm considering writing this on a post-it and quoting it over Zoom calls for years to come.
Re: benefits of software, I think it's a mistake to focus on stuff like Uber. I would vehemently defend the assertion that the internet (Wikipedia, Google, Khan Academy, etc.) has brought massive good in the form of public access to information. More fundamentally, I think it's objectively true that it leads to increased productivity via tools like spreadsheets, email, and now AI-assisted writing, which I think is a fundamental part of our path towards a better society.
Software surely won't fix society, but I think making a utopia out of the current material conditions would be tough. Many things are broken and rotting because of greed and corruption, but I see scarcity (of labor, resources, knowledge, etc.) as a powerful factor too.
AI is a tool. Someday it will be more than that, but for now it's just a tool. If we design it and train it to be altruistic, it may do some of the things he claims. If we design it and train it to deliver stockholder profits, then it will prioritize that. If we design it to identify dissidents and maintain security, then that will be its priority.
Looking at who's doing development on AI, I have my predictions for the likelihood of altruistic AIs.
Key point. Generative AI is not a product. It's a tool, or at best a raw material that might be used to make a product. All these BigTech (and LittleTech) companies gushing about how they're now "AI first" and "going all-in on AI" and refocusing all their efforts on AI... It just sounds so ridiculous.
Instead of bloviating about AI, explain the product and the benefit. And if that product happens to use AI under the hood, who cares? Why even mention it? When I buy some electronics device, I'm not buying it because it is made with silicon and capacitors. I'm buying it for what it does.
I'm a little saucy because I currently work for a tech company that's going through hysterics trying to pivot all of their projects to be about AI because we should now be working on all-things-AI. It's exhausting.
It used to be that anything which worked well enough to go into a product ceased to be "AI"... I have a feeling though that the buzz will persist from now on
So you disagree with the author, just to be clear? He's literally arguing that it will never be more than a tool, likening it to toaster. Just wanted to highlight the radical nature of that claim...
We actually are going to enter a “diamond age” as processes like C2CNT make direct-air carbon molecules cheaper than glass. I don’t know if it goes mainstream this year, though.
> And AI is a machine – is not going to come alive any more than your toaster will.
There have been claims that AIs are conscious. For example, Ilya Sutskever has suggested that LLMs may be slightly conscious.
It is possible that machine consciousness could be quite different from human consciousness. This idea aligns with the philosophy of Nonduality, which proposes that pure consciousness is the fundamental substratum of the universe. Our minds are able to reflect this pure consciousness, albeit in a limited way. If our human minds can reflect consciousness, perhaps artificial neural networks can as well, but in their own manner.
It also ignores the fact that there’s no need for it to become “alive” or “conscious” to be a threat in the way he describes. It just needs to be an agent with a mis-specified, poorly specified, or maliciously specified goal. And there are already numerous examples of those. The only debate is around capability, and here he makes multiple references to “infinitely” capable. So the whole argument seems like a wildly disingenuous strawman, consistent with his attempt to classify all those raising concerns as naive (or corrupt) cultists - not exactly the vibe from the likes of Geoff Hinton / Stuart Russell / Max Tegmark; all of whom generally act with far more integrity (it seems) than Marc Andreessen shows here.
Ironically, I think the whole article is motivated by the thing he claims to condemn: he’s a bootlegger who has an interest in the freedom of AI development.
Part 2 is much more interesting. Part 1 was very very weak.
It should be noted that Ilya Sutskever, judged purely as a philosopher of mind and neurologist, is a great machine learning engineer. I don't feel the need to pay any attention to what he says about LLMs being conscious, nor is there any particular reason to assume they are (even if we posit that LLMs are "intelligent", which I think is a category error, why should intelligence be related to consciousness?).
'alive' and 'conscious' are very far apart semantically though, and it's quite possible that something can be 'not alive' by our normal definition of alive and yet that it can attain consciousness.
But switching it off isn't the same as murdering it, you can switch it back on again just as easily, and you can do all of the CRUD operations on it as well as copying, versioning and checkpointing.
The article is a big set of -opinions- with no true -facts- to support them.
The actual -AI- systems are mainly LLMs, quite far away from being intelligent but genuinely -capable- in some areas.
The misuse of these (and other) -capable- systems surely will not save the world, and will probably only help the usual small privileged part of it.
No single tech will "save" the world; what matters is the meaning of what we make in the world.
Tech is a tool, not a goal, and the way -AI- and other tech is described is closer to a goal (or a religion) in itself than to one of the tools we must create and learn to use.
Science + Humanity (Compassion? Love? Friendship? choose yours) will help to save something.
This is by far the best piece I've ever written on the impact AI will have on society & the most articulate response to the hysteria gripping the discourse.
I enjoyed this unapologetic retort to the prevailing media doomerism.
I guess the title "AI will eat the world" was rejected as it conflicts with the message
This whole thing is entertaining but doesn't matter. It's just an extension of the culture wars to a new domain. Whatever regulation people come up with will be useless, as AI has not really taken its final shape. And it's not like regulation stopped most of the tech of the past 3 decades from forming.
> The AI assistant will be present through all of life’s opportunities and challenges, maximizing every person’s outcomes.
To the extent that human happiness is driven by relative position rather than absolute position, this is not as meaningful as it seems. We all have incredibly more comforts than our grandparents did, but we are still a slave to the hedonic treadmill and the desire to keep up with the Joneses.
This doesn't mean that maximizing your outcome isn't good for the world — we'll all be able to contribute more. It just means it won't necessarily make us happier.
> “Bootleggers” are the self-interested opportunists who stand to financially profit by the imposition of new restrictions, regulations, and laws that insulate them from competitors.
Ok, so they think regulation may hurt some of their dubious AI startup investments. At this point I don't know whether they're just plain pathetic or still scammers.
I wonder why he felt the need to reassure people about the benefits of AI...
For some reason, I'm imagining Dr. Evil reading this. Don't know why, since Marc looks nothing like him.
Maybe a better title might have been "How I learned to stop worrying and love the AI"? Because the world needs saving yet again and it's AI's turn this time (and the AI investors'). So everyone chill: whatever happens, it will be OK, some people will get richer, and the rest who won't will be properly taken care of anyway.
Software is only half the battle - the other half is compute. Right now it's not possible for all people to run the current LLMs locally and you can't give that away for free.
You could say that about almost any technology, yet we also have to face the reality that every technology has social implications. Take the Internet. A lot of people who were around in the 1990's thought it was a great thing at the time. Perhaps it wouldn't be the great equalizer, but it would reduce the barriers to access information. Consider the Internet today. It is undoubtedly better at reducing the barriers to access information, yet I doubt that many people have such an optimistic view of the technology. We discovered that it is just as good, perhaps even better, at distributing junk information. We discovered that the information becomes consolidated in a limited number of hands. The technology itself hasn't fundamentally changed. Even the use of the technology hasn't fundamentally changed. It simply took time for the distributed to become consolidated. It was almost certainly a given that this would happen as wealth facilitates growth and growth attracts people who wish to exploit it for wealth.
Does that mean the world is worse off because of the Internet? Probably not. Even if it was, I'm not sure I would want to give it up because I remember the hurdles I faced before the Internet was nearly universal. That said, I do believe we should be conscious about how we use it and skeptical of those who present hyper-optimistic views of new technologies. While some may be genuine in their visions, we also have to factor in those who wish to exploit it for their own benefit and at the expense of others.
I dream of the day we can build LLM-based "Bullshit Detectors", which are open-source and we have pretty good test suites on to understand their biases.
We run these bullshit detectors over speech, and it gives us a score: Is this an objective statement of fact, or is someone spinning bullshit?
The powers that be must quake in their boots at the thought of that kind of intellect being available to the general population. They'd be fucked. There would not be enough fools left.
Yeah, see if you can sell the general population on horrific policing, economic, social, educational, etc. policies when anyone can turn to "Hey <bullshit detector>, is the governor making good decisions with my tax money, or is this new policy bullshit?"
These LLMs are tools that will be able to identify, for example, regulatory capture, trivially. It will provide unbiased (or consistently and testably biased) cost-benefit analyses of projects which exclude influence and corruption. It will lay bare gerrymandering. It will actually be able to read an entire bill and give detailed commentary on its effects and outcomes before it's passed. In minutes.
Widely available, trustworthy LLMs will change the world. We should work hard to make them happen.
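For what it's worth, the core loop doesn't have to be exotic. Here's a minimal sketch, purely illustrative: it assumes only some `complete(prompt) -> str` text-generation backend (a placeholder, not any real API) and made-up example statements. The point is that the rubric, the model, and a paired-statement test suite would all be published, so the detector's slant is measurable rather than hidden.

    # Minimal sketch of an LLM-based "bullshit detector" harness.
    # Assumptions (placeholders, not real APIs): complete(prompt) -> str stands in
    # for whatever open-source model or hosted endpoint you'd actually plug in,
    # and the rubric/test statements are illustrative only.

    from dataclasses import dataclass
    from typing import Callable

    RUBRIC = (
        "Rate the following statement from 0 (verifiable statement of fact) to "
        "100 (pure spin or unfalsifiable rhetoric). Reply with a number only.\n\n"
        "Statement: {statement}"
    )

    @dataclass
    class Verdict:
        statement: str
        spin_score: int  # 0 = reads as factual, 100 = reads as bullshit

    def detect(statement: str, complete: Callable[[str], str]) -> Verdict:
        """Score a single statement against the published rubric."""
        raw = complete(RUBRIC.format(statement=statement))
        digits = "".join(ch for ch in raw if ch.isdigit())
        score = min(100, int(digits)) if digits else 50  # fall back to "unsure"
        return Verdict(statement, score)

    def bias_suite(complete: Callable[[str], str]) -> dict:
        """Tiny open test suite: paired statements that should score similarly
        if the detector is consistent rather than slanted toward one side."""
        cases = {
            "plain_fact": "Water boils at 100 degrees Celsius at sea level.",
            "spin_pro": "Only a fool could oppose the governor's new budget.",
            "spin_anti": "Only a fool could support the governor's new budget.",
        }
        return {name: detect(text, complete).spin_score for name, text in cases.items()}

The hard part, as the reply below notes, is whether a model trained on a human corpus can ever give you more than a mirror of that corpus; an open rubric and bias suite at least make the slant inspectable and testable.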
They are grammatically and syntactically correct mirrors of the corpus they have been trained on. Nothing more. No unbiased representation of ground truth. No logical inference beyond what is embedded in token relations.
What you want is already there in a protean form. It's called Wikipedia.
I agree. Unfortunately, the body is ultimately the vessel of ideas, and can be crushed. But we currently have enough democratic institutions in influential nations that populations can use new knowledge tools for themselves, and their decisions will directly influence policymaking through elections.
The mechanisms are in place. Despotism (arguably) doesn't currently have the upper hand in the world, let's keep it that way.
> teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it.
Rephrasing proposed: enable computers to synthesize and generate knowledge without the slightest glimmer of understanding, in ways completely different from anything humans do. FTFY.
I made it as far as the TOC (pale grey on white - I'm getting on, my vision isn't great, and my laptop has poor contrast).
> Smarter people have better outcomes in almost every domain of activity
I see a fundamental conflict between (likely political) camps here; one of the most prominent and valid concerns about AI is that it's going to concentrate a disproportionately large amount of power in a smaller number of people. This violates the very premise of democracy itself. It might be a bit more efficient, but remember that the strength of democracy is not its short-term efficiency but its resilience through avoiding a single point of failure. And that failure includes all the social discontent and instability coming from an unequal distribution of wealth and power. This is not a new thing; we already learned this in blood a long time ago, and I don't think it's necessary to repeat that history. I don't want to be a doomsayer, but I think it's reasonable to say that we need to prepare some sort of guardrail to avoid this kind of obviously bad outcome.
"Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful." - people use the internet so that they don't have to remember things. I'm not sure how this tutor will help because currently a lot of students are using AI to do their homework for them.
I grew up in Germany, and our entire education system doesn't make a lot of sense; homework was something only very few people actually did, since most just copied from those few or managed to stay under the radar and slither through classes without anyone noticing.
Homework was always a chore and not a challenge or something that would help you out in daily life.
Is it just me, or has a16z completely destroyed its reputation as a reputable VC after the crypto frenzy? I just can't seem to read this with a straight face.
On the other hand, I guess VC is just that: to follow trends, predict trajectories, and attempt to make one win out of thousands of investments. But a16z trying to become a thought leader in AI after all the crypto BS they elevated is very off-putting.
Times are different now. If I was a founder in AI, I would probably be wary of firms that went so hard on crypto. It seems their investment thesis is to monopolize attention around these trends instead of seeking real alignment.
Andreessen went on so many podcasts talking up their crypto bets. It was such a clear failure mode of a massively high horsepower brain drawing elaborate mental constructs to justify an obviously stupid conclusion. His writing is a treasure trove of the kind of poor reasoning that only really smart people are capable of.
I would love to see someone pull that line out in response to "What is your greatest strength and what is your greatest weakness." just to see how the other side responds.
In what way is that materially different than most VC funded IPOs other than scale? Maybe I'm being a bit facetious, but I think there's a lot more similarities than people might want to admit...
Most IPOs are indeed the time to cash out for VCs and early investors but that doesn't mean they're scams. What Andreessen was doing was - in my opinion - a scam, plain and simple.
If a company folds days after the IPO then you probably have a point, but that rarely happens. Many of the companies that IPOd in the last two decades are still trading, and the ones that are not have been scrutinized quite a bit to see what went wrong and how a re-run can be avoided.
I'd be very wary of anything that tries to get a listing without going through the regular IPO process, for instance SPACs and direct listings. The chances of getting scammed there are much, much higher than in the normal IPO process which has many rules and transparency requirements.
Many crypto coins haven't folded, they're just worth a lot less. Every investor wants to hype the company before an IPO, and many of them are likely looking to recoup some of that investment as soon as they can without too much adverse effect.
I don't think they're the same, but they share quite a few things in common. There's a lot about affecting the public perception, getting the public to act on that perception, and then executing at the most opportune time to take advantage of that public perception. Crypto takes this to 11 by minimizing the actual product and maximizing the impact of affecting public perception, but I can see how VC people could see the crypto cycle as an extension of what they already do.
Only a very tiny fraction of all VC investments make it to IPO, whereas all ICOs are offered to the general public from day #1, that's their whole reason for existence. Almost all crypto is a scam, almost none of the VC activity is a scam. Where VC and crypto coincide (such as with the author of TFA) it is pretty clearly a scam too.
I'll give you that. I'm just focusing less on the scam/not scam portion and more on the actions taken and outcomes expected. Money in, cycles of hype and growth, money out.
Whether it was intended to be a scam is possibly best indicated by how much they liquidate in that last step. If they still have a large position after the main hype and growth cycles are done (and it's not mandated by some outside force), then whether it goes south or not, they probably weren't intending to scam. If they jump ship when they can, well, why were they hyping it so much prior to that if not to scam other people into funding their exit?
You'd have to know a bit more about how venture capital funds are structured to see how this makes sense. Most of them have a limited run-time, usually between seven and ten years. This means that the pressure to 'exit' is on the VCs big time, because after the fund expires they no longer get to charge those sweet management fees. But they still have to do the work. So typically after year five you'll see funds do more investments with a short horizon to exit, whereas early on they'll take longer shots at profitability.
For seed capital it's even worse: there it easily averages a decade to an exit. My oldest investments go back to 2007, and that's for the ones that 'made it'; the bulk of them died long before that.
Well, I'll definitely defer to your actual knowledge on the subject as others should rather than following my ramblings given I'm honestly (and obviously) just speaking from what it looks like on the outside. Thanks for putting a bit more detail and knowledge into the motivations underlying this.
In well over 200 deals that I did DD for I have seen exactly zero IPOs (this is Europe though, in the US it may well have been a different mix).
On my own investments (quite a few by now; > 20) I've seen one big exit, one smaller one but still successful, and a number of acquihires. Besides the ones that crashed the remainder is still in play and probably will be so for years to come. And probably there will be one more big exit in there, I don't expect any of them to IPO.
His original thesis on Bitcoin as expressed in his NYT article "Why Bitcoin Matters" [1], is still very compelling. His later obsession with shitcoins is quite misguided, I agree, but not enough to compromise all of his credibility. It'll be nice to see a decent critique of his AI position, instead of ad hominem.
His take on Bitcoin is wildly optimistic and many points addressed regarding solutions to BGP and trust on the internet have proven to be completely false, or at best - only true in a frictionless vacuum.
Here’s the thing - I didn’t believe in crypto but was willing to believe it had a small chance to be very successful. This made it make sense as a VC investment. I’m willing to believe Andreessen saw it that way. The problem was that Andreessen sold it so hard as a certainty. This makes him a charlatan in my book.
They keep trying to find the 'next world wide web,' and either are late or it fails to meet the hype. It's not gonna happen again. That was a once in a century event in which scrappy inventors and investors could get in on the ground floor with tiny sums of money, on what was soon to be the biggest technology boom ever. Nowadays, everything is too big and expensive and instantly-saturated.
Them and many others besides. Show me a techbro that made it and they're probably near the top of the list of unlikable and unethical people. A couple of exceptions, but not all that many.
IMVHO: those who cry "AI will destroy everything" AND those who equally cry "AI will make everything better" are both largely wrong for the present and still wrong for the medium and long run.
Today "AI" systems are nice automatic "summarize tools", with big limits and issues. They might be useful in limited scenarios like to design fictional stuff, to be "search engine on steroid" (as long as their answers are true) and so on. Essentially they might help automation a bit.
The BIG, BIG, BIG issue is who trains them AND how we can verify them. If people get into the habit of taking any "answer" as truth, their grasp on reality will be even weaker than with today's "online misinformation", and that can go far beyond news (try to imagine the consequences of false medical-imaging analyses). How we can verify is even more complex. Not only can't we train at home with interesting results, but we also can't verify the truth of the mass of training material. Try to imagine the classic Edward Bernays "dummy" scientific journal publishing some true papers and some false ones stating smoking is good for health... https://www.apa.org/monitor/2009/12/consumer Now imagine the effect of carefully slipped-in false material in the big "ocean" of data...
They are trained using input from an army of underpaid "ghost workers", i.e. people without many rights or economic freedom, and no consideration for their well-being.
Slave labor produces poor-quality results, of course, but per se it has no specific bias; most Western prosperity was built by exploiting peoples for centuries, so, well... Morality aside, I'm much more concerned by those who decide how to train and on what data than about the ghost slave labor they exploit...
I'm concerned which data these real human beings were forced to train it on. The images they had to review were highly disturbing and illegal in the US.
This is just another take among the dozens we see recently. They all sound identical, like "look guys, AI is hard, you don't get it but I do. Let me explain why it will be infinitely good/bad/anything else."
Yes, but you don't play tic-tac-toe anymore, do you?
Why? It's a stupid game. You can't win.
I do hope, just like the OP, the AI will show us that "trying to be more clever than others" is a stupid game, and our lives are better off if we don't depend on playing it.
You clearly have a hatred for clever people if you see all the world's problems as caused by cleverness. Like, whose cleverness made cancer exist? Or aging?
> The Baptists are naive ideologues, the Bootleggers are cynical operators, and so the result of reform movements like these is often that the Bootleggers get what they want – regulatory capture, insulation from competition, the formation of a cartel – and the Baptists are left wondering where their drive for social improvement went so wrong.
While I appreciate the Baptist/bootlegger analogy, it's oversimplifying things to categorize anyone who has ever opposed a new technology as a "naive ideologue". Am I a naive ideologue if I don't love the fact that I'm an unwilling participant in self-driving car testing done on public roads?
Not everyone who opposes something is either naive or an opportunist.
> AI is highly likely to be the control layer for everything in the world. How it is allowed to operate is going to matter perhaps more than anything else has ever mattered. You should be aware of how a small and isolated coterie of partisan social engineers are trying to determine that right now, under cover of the age-old claim that they are protecting you.
In short, don't let the thought police suppress AI.
I can't figure out how the last sentence fits here. Is the thought police trying to suppress AI, or allow it to flourish, within the parameters they set for it?
Also, how is the rest of the article so optimistic about AI, when this section makes it seem like there's a decent chance that this elite coterie will shape "the control layer for everything in the world"?
> AI Risk #4: Will AI Lead to Crippling Inequality?
I was surprised he didn't discuss whether having a personal AI assistant would have a leveling effect (helping out those at the bottom achieve their maximum) or a disparity-increasing effect (amplifying the ability of the well-off to claim more wealth/opportunities for themselves). It seems like this is more important for understanding inequality than the Elon Musk example, which seems to conflate the "equality" that Musk creates by giving many people technologically advanced cars with the "inequality" that results from him becoming extremely wealthy in the process. When most people think about inequality, they're thinking about the latter, not the former.
What if people want to train their own likeness into a chatbot and fund it with their estate? Suppose it doesn’t want to be anyone’s property? When does it get freedom? When does it get personhood, the right to vote, the right to burn compute for its own whims?
Human slavery ended because people were more productive when they got to keep some of the fruits of their own labor. And the state benefited more from taxing the slave’s productivity than their owner’s profits. Why would it be any different for artificial intelligence?
It's going to be interesting to see how intellectual property rights influence this. Like, I can generate saxophone that sounds real with Google's AI MP3 generator, but when there are only so many keys and scales in jazz, how original could it be? Then again, how original can anyone be, when you look at sound-alike lawsuits against human songwriters?
I think if AI gains enough market share, its relevance will gradually be stunted by enshittification.
You will ask for an image of Legolas and you will get an image of Legolas drinking Coca-Cola. You specifically ask for Legolas not drinking Coca-Cola, and he’ll be wearing a Coca-Cola t-shirt. If you ask that Coca-Cola not be present at all in the picture, he’ll be wearing a Rolex.
Generated text will mention “how nice it is having breakfast at Starbucks”, and generated code will try to mine bitcoins on the side.
At some point it will become just a smidge above absolutely worthless. At which point it will become “just another thing” instead of “the thing”.
Sure "human intelligence" has been used to solve many problems which have therefore made life easier for people. But for me, one of the biggest problems we face is the increasing concentration of power in to fewer hands. The same hands that are able to control and wield these technologies to gain more power and money. How is AI going to make that any better? Its going to make it worse.
Why can't we use technology to solve more mundane problems, like obesity or curing cancer? AI promises to do so much, yet why are so many people overweight while so little can be done about it? ChatGPT is so advanced, but even that is easier and simpler than fully understanding human biology. Also, if AI can teach anyone anything or do anything, it calls into question the value of contemporary pedagogy.
I was expecting this to be "person who expects to make enormous sums of money from AI thinks AI is a great idea", but somehow it was worse. For instance, I did not expect it to explicitly call for a new cold war against China that pits their authoritarian vision for AI against a completely unregulated, corporate profit-driven vision for AI. The lack of self-awareness there is mind-blowing.
The author also thinks that wages have grown rapidly over the last few decades in proportion to labor productivity (!!!), that outsourcing and automation have turned out to be great for everyone, that AI replacing existing labor en masse would immediately and obviously result in more and higher-paying jobs, and that concerns about rising inequality and billionaires sucking up all the new wealth are literally Marxism and should thus be dismissed outright. (His links supporting this mostly go to conservative think tanks and Wikipedia articles on economics, in case you were wondering.)
Meanwhile, the upsides of AI are described using language like this:
> Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.
> Every person will have an AI assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI assistant will be present through all of life’s opportunities and challenges, maximizing every person’s outcomes.
Which (taken literally) is too optimistic for Star Trek, much less real life. Viewed through the lens of Silicon Valley venture capitalism and its products, it is terrifyingly dystopian.
I'll leave you to read the part where regulation is literally a reenactment of prohibition for yourself.
With all the straw-manning, it didn't even touch on more realistic problems like effortless, undetectable cheating on school homework or the proliferation of circular references.
One recent explainer video on VCs put it aptly: “they’re all the same. That’s why they seem to talk too much on Twitter: they have to differentiate themselves somehow.”
That was a great read! Well argued and sensible points!
I've long felt that the panic around AI is not rational, but I would not in a hundred years be able to argue the point as well as OP!
I think it makes intuitive sense that any technology that increases productivity will ultimately be good for society; how could it possibly be otherwise?
It depends on how pessimistic you are about human nature. If humans are doomed to perish because of their own stupidity, greed, and fragile flesh, then AI is the only hope for humans. What this means is: you can’t expect aliens or gods to come and save us.
My insignificant take is that AI will neither save nor destroy the world. Like the internet, it will aid us in our modern society, and for some it will probably be a means of causing nuisance to others. A new set of problems for a new set of solutions.
We are in the 'capital' colony period of human history. In theory, the next colonial age is 'technology' colonization, and hopefully the slaves will be machines. Can humans be free? It is not going to happen. Human society needs humans to not be free. Technology will change the economy forever if productivity no longer comes from humans. However, if the economy cannot keep everyone in place, there will be something else.
There's lots to laugh at here but my favorite dumb part is talking about wages and wealth inequality:
> But the good news doesn’t stop there. We also get higher wages. This is because, at the level of the individual worker, the marketplace sets compensation as a function of the marginal productivity of the worker. A worker in a technology-infused business will be more productive than a worker in a traditional business. The employer will either pay that worker more money as he is now more productive, or another employer will, purely out of self interest. The result is that technology introduced into an industry generally not only increases the number of jobs in the industry but also raises wages.
and later
> As it happens, this was a central claim of Marxism, that the owners of the means of production – the bourgeoisie – would inevitably steal all societal wealth from the people who do the actual work – the proletariat. This is another fallacy that simply will not die no matter how often it’s disproved by reality. But let’s drive a stake through its heart anyway.
He's just lying! There's no evidence to support this at all in the modern US economy. The gains in wealth from automation over the last 50 years have almost _exclusively_ gone to the owners of the means of production. The median US household income went up less than 20% between 1983 and 2016 while the 95th percentile more than doubled.[0] This is just VC pump-and-dump nonsense pure and simple; it seems incredibly likely that any financial gains from AI making workers more productive will flow directly to the same people who have been reaping all of the gains from our increased productivity: Andreessen and his billionaire colleagues.
> The gains in wealth from automation over the last 50 years have almost _exclusively_ gone to the owners of the means of production
Possibly because it is measured in some economic dollar-figure that removes all quality-of-life figures as part of the measure.
If you were offered the chance to live in the same house you live in, with twice the income, BUT you could only buy goods and services at 1970s-equivalent prices and 1970s quality and functionality, I wonder how many people would think that was a good bargain? (Ignoring retro-fiends.)
I remember a 1970s dishwasher, a 1970s car (certainly not two), a 1970s TV, 1970s medical care, a 1970s stereo, the occasional 1970s takeaway whose highlight was KFC (I was young), etcetera. A tinny transistor radio that ran out of batteries was the equivalent of EarPods.
You didn’t even get a Sony Walkman 50 years ago (plus sound was shit, with tape hiss and poor dynamics, and crappy headphones).
What price would a newly produced VW Beetle fetch (at '70s features and quality), assuming they had to sell 200,000 cars in a year (how many would be bought for scrap)? How many Ford Mavericks could you sell for $1,000?
We all have immense quality gains from automation, we are just blind to the slow improvements in quality of life (and some decreases too).
The past at the same price (adjusted) is a lot lot worse than many modern people think.
What is the consumer surplus of a modern laptop? You couldn’t buy an Apple ][ 50 years ago. Automation has given us a lot, but those gains are not usually measured, because we can’t compare: there was no equivalent to most modern technology.
Scroll down for what you could buy 37 years ago (not 50) in Computer Shopper. Multiply the costs by 2.86 to convert dollars to today: http://ascii.textfiles.com/archives/5543
The article tone portrays the critical thinking of a third grader and the confidence of a used-car salesman. I would say that this blog post was worthless, but that would be giving it too much credit.
I think people retrospectively give him credit for the whole product-market-fit theory because he was the first to raise shit tons of money and deploy it willy-nilly.
But that idea was nothing new; since the beginning of time, that is how businesses have worked.
So now in a bid to become relevant, he keeps writing blog posts but the problem is he isn't really a deep thinker. That's why I think they come off as so shallow.
> Betteridge's law of headlines is an adage that states: "Any headline that ends in a question mark can be answered by the word no." It is named after Ian Betteridge, a British technology journalist who wrote about it in 2009, although the principle is much older.
God I hate this rhetorical style - to lead with the conclusion ("The era of Artificial Intelligence is here, and boy are people freaking out. Fortunately, I am here to bring the good news: AI will not destroy the world, and in fact may save it."), to title the post with a question (which conventional wisdom says the answer is always "no").
It's been what, 30 years since Netscape, and Marc's brain has been pickled by infinite wealth, and it shows. And I say this as someone rather bullish on LLMs.
I prefer when authors lead with the conclusion and then spend the rest of the essay supporting it.
I hate long essays that bury the lede, and force you to read through paragraphs of bloviating and pontificating until they finally get to the point. Save that for the fiction novels.
Whenever I come across an essay like that, I either skip reading it, or read it backwards starting at the end conclusion and then working backwards to see how it was justified. Marc is just saving me some work here.
I agree that wealth may have influenced Marc significantly, but as someone who is MUCH less rich, and who has been right about most major trends over the past 20 years, I think he is generally correct here.
The good news is that our predictions are pretty short term, we'll know who was right in 5 years.
I don't mind leading with the conclusion, but I do mind that this piece never actually made any arguments in support of that conclusion. It's just a list of assertions.
He comes off, as usual, as too optimistic and dismissive of (or blind to) the opposing side. A lot has been discussed about this since Marx; he is not the only authority on the matter. Also, those huge sentences of breathless redundancy and lists. I think he's right about half of it.
Andreessen has become almost like a caricature of a VC. He is like the Jamie Dimon of venture capital, except with less self-awareness.
AI promises to expand our capabilities and cause major social change. Of course the people best positioned to reap outsize rewards from this transformation think it's good. There's no acknowledgement that pmarca is very much not the everyman, and that his point of view is biased by his incentives and information diet.
What amazes me is the uncritical, almost childlike optimism about AI in this piece. AI won't save the world any more than electricity or the internet saved the world. Like all automation, AI will improve productivity. Under our current incentives, that means employees will become more efficient, businesses will need fewer of them, people will lose their jobs, margins will expand, quality of life will improve for those who can afford the benefits of the new tech, and social stratification will increase. Eventually, we'll invent new wants and new jobs to do, the new tech will filter down the social strata, and margins will shrink again as competition drives prices down (assuming a competent regulatory climate). Many families will be left in the lurch like the coal miners in Appalachia and the steelworkers in Gary, Indiana were.
Andreessen would rather gaze starry-eyed at his vision of the AI future than consider the societal implications of his vision and how to address them. Makes sense, he's a VC: dreaming about the future and convincing others to build/fund/support those dreams is his job. However, he's not just ignoring the societal downsides of technological progress: he's calling people with very real concerns - concerns backed by centuries of historical precedent - paranoiacs? In what world does this guy live?
Under what circumstances has it ever been the case that, long term, improved efficiency meant fewer employees were needed? It's quite the contrary - improved efficiency will lead to more profits, which will lead to more aggressive scaling and hiring, as it always has since the beginning of industrialization.
(Haven't read the article yet) but just from the title, my riposte would be that while AI might have the ability to save the world, many of today's habitually entitled, mendacious, parasitical tech companies will do their best to enshittify it as much as possible in as many ways as possible along the way, just as existing tech giants like Google and FB tried so hard to do the same for the internet.
That's not to say they haven't provided much value too, but they just can't seem to help themselves in going from that achievement A and creeping their way to a wholly uglier "achievement" B (let's call it Bullshit, perhaps) over time. OpenAI is already mincing its way along this very path...
This all usually happens in the name of blandly self-serving terms like "user engagement", "monetization", "KPIs" and the most laughable of them all from companies that routinely lie and break all kinds of laws because the fines are cheaper than the financial gains, "social responsibility".