
Calling bullshit on multiple points.

> I even think AI is going to improve warfare, when it has to happen, by reducing wartime death rates dramatically. Every war is characterized by terrible decisions made under intense pressure and with sharply limited information by very limited human leaders. Now, military commanders and political leaders will have AI advisors that will help them make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.

Fundamental misunderstanding of the nature of wars notwithstanding (sane people rarely start them; assuming non-sane leaders like Putin, or historically Hitler, Bush, Milosevic, Mussolini, Galtieri, al-Assad, etc. etc. would listen to advice they don't like is... just stupid), the opposite will happen: better tools and "better" advice will make commanders and leaders more confident that they can win. See: most major military inventions ever.

The economic section is too long to quote, but again, it shows a fundamental misunderstanding of economics and of human psychology as it relates to economic decisions. If entire professions get obliterated by AI (not impossible, but improbable given the current quality of AI output), it will, of course, obliterate their wages. It will create fear in other "menial" white-collar professions that they're next, which will depress spending. Also, the cost of goods and services that can now be provided by AI (e.g. art) will drop drastically, making it an unviable business for the humans left in it, which will push most of them to quit. Who will be left to consume if vast swathes of professions are made redundant? And if consumption goes up enough to generate new jobs, they won't be for the skill sets that were replaced, but for different, specialised ones that will require retraining and requalification, which is time-heavy.

In any case, even assuming some equilibrium is reached at some point, having decent chunks of the population unemployed with little to no employment prospects, especially in countries with pretty much no social safety net like the US, will be socially disastrous.




> I even think AI is going to improve warfare, when it has to happen, by reducing wartime death rates dramatically. Every war is characterized by terrible decisions made under intense pressure and with sharply limited information by very limited human leaders. Now, military commanders and political leaders will have AI advisors that will help them make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.

While wars have actually been getting less deadly over the last century, I think that is for reasons beyond just technology. If technology alone were what we depended on, his notion of commanders using AI advisers to turn war into a process that minimizes death and suffering would be laughable. Instead, they'd more likely just start more wars, more easily, always confident in their supposed technological edge while really just creating more murderous catastrophe and failure.

That this has happened less over time despite the advances of technology is more a social phenomenon of lower human tolerance for violence in general. A more widely connected and wealthier (more to lose) world has helped this mentality accelerate, yes, but fundamentally, this intolerance of violence is a human trait that technology merely gave more space in which to proliferate. The technology didn't bring it into existence. AI will be little different.


I think AI is going to make (and is actively making) warfare absolutely horrible - autonomous 'weapons' are very nearly here and all signs point to the fact that unlike nukes or fighter jets, they're going to be dirt cheap, based on technology available to the average consumer.

This means that high-end (even class-leading) capabilities will be available to private individuals (think the DJI copters of the Ukrainians), and/or to actual, professional militaries at absolutely marginal cost (the Orlans, the drones built from a Nikon camera and plastic, or the Shaheds, the moped-motored Iranian suicide drones), which can be produced by the millions, as they rely on conventional factory infrastructure.

Even scarier, terror attacks might not even require ANY bespoke hardware: experience has shown that car attacks are easily as deadly as gun attacks. Imagine someone exploiting a self-driving Tesla, turning hundreds of thousands of cars on the road into destructive weapons.

There is a horrible trend in the world that started before the rise of AI: the blurring of the definition of war. I'd say this phenomenon started with the rise of drone strikes on foreign soil, though others might put it even earlier. The fact that a country can engage in acts of war without being at war removes almost all dampers on attacking sovereign nations. The final one, that of plausible deniability, is finally going to go away too: at least when the US sent a Hellfire missile your way, it sent the world a message that you pissed them off badly enough that they were willing to openly admit it and commit the resources to droning you. A nondescript quadcopter, carrying a grenade and programmed with face-recognition software, is going to be the assassination weapon of tomorrow.


That quote is juicy. It's amusing to me that he's so naive he thinks that an AI won't make forced or unforced errors given the imperfect information it inevitably has. And if you can get access to the training set or the corpus of variables the AI is configured with then you can easily predict what it's going to do next, which is far worse. Nobody can look into the mind of a mediocre general, but anyone can look into the mind of any AI general given sufficient access.

People who call themselves technologists always overestimate how beneficial a new technology is and underestimate how inhumane its application becomes when venture capitalists demand 10x or 100x their initial investment. When people like him come in and extol the virtues of some new thing as fixing everything and making everything better, I'm really wary of what they do next. Inevitably they're trying to sell me a bill of goods.


Both responses seem to indicate you believe this is what pmarca actually thinks. VC has two parts: (1) make the world think X is good and (2) make investments in X.




