Hacker News

Why do the CEOs think they are safe? If AI can replace the knowledge workers it can also run the company.


Hubris. In general, I don't think you make it to CEO without a blindingly massive ego as your dark passenger for that journey.

https://www.sakkyndig.com/psykologi/artvit/babiak2010.pdf


I was the CEO of a tech company I founded and operated for over five years, building it to a value of tens of millions of dollars and then successfully selling it to a valley giant. There was rarely a meeting where I felt like I was in the top half of smartness in the room. And that's not just insecurity or false modesty.

I was a generalist who was technical and creative enough to identify technical and creative people smarter and more talented than myself, and then to foster an environment where they could excel.


Thank you for your reply.

To explore this, I'd like to hear more of your perspective - did you feel that most CEOs that you met along your journey were similar to you (passionate, technical founder) or something else (MBA fast-track to an executive role)? Do you feel that there is a propensity for the more "human" types to appear in technical fields versus a randomly-selected private sector business?

FWIW I doubt that a souped-up LLM could replace someone like Dr. Lisa Su, but certainly someone like Brian Thompson.


> did you feel that most CEOs that you met along your journey were similar to you (passionate, technical founder) or something else (MBA fast-track to an executive role)?

I doubt my (or anyone else's) personal experience of CEOs we've met is very useful since it's a small sample from an incredibly diverse population. The CEO of the F500 valley tech giant I sold my startup to had an engineering degree and an MBA. He had advanced up the engineering management ladder at various valley startups as an early employee and also been hired into valley giants in product management. He was whip smart, deeply experienced, ethical and doing his best at a job where there are few easy or perfect answers. I didn't always agree with his decisions but I never felt his positions were unreasonable. Where we reached different conclusions it was usually due to weighing trade-offs differently, assigning different probabilities and valuing likely outcomes differently. Sometimes it came down to different past experiences or assessing the abilities of individuals differently but these are subjective judgements where none of us is perfect.

The framing of your question tends to reduce a complex and varied range of disparate individuals and contexts into a more black and white narrative. In my experience the archetypical passionate tech founder vs the clueless coin-operated MBA suit is a false dichotomy. Reality is rarely that tidy or clear under the surface. I've seen people who fit the "passionate tech founder" narrative fuck up a company and screw over customers and employees through incompetence, ego and self-centered greed. I've seen others who fit the broad strokes of the "B-School MBA who never wrote a line of code" archetype sagely guide a tech company by choosing great technologists and deferring to them when appropriate while guiding the company with wisdom and compassion.

You can certainly find examples to confirm these archetypes but interpreting the world through that lens is unlikely to serve you well. Each company context is unique and even people who look like they're from central casting can defy expectations. If we look at the current crop of valley CEOs like Nadella, Zuckerberg, Pichai, Musk and Altman, they don't reduce easily into simplistic framing. These are all complex, imperfect people who are undeniably brilliant on certain dimensions and inevitably flawed on others - just like you and me.

Once we layer in the context of a large, public corporation with diverse stakeholders each with conflicting interests: customers, employees, management, shareholders, media, regulators and random people with strongly-held drive-by opinions - everything gets distorted. A public corporation CEO's job definition starts with a legally binding fiduciary duty to shareholders, which will eventually put them into a no-win ethical conflict with one or more of the other stakeholder groups.

After sitting in dozens of board meetings and executive staff meetings, I believe it's almost a certainty that at least one of some public corp CEO's actions which you found unethical from your bleacher seat was what you would have chosen yourself as the best of bad choices if you had the full context, trade-offs and available choices the CEO actually faced. These experiences have cured me of the tendency to pass judgement on the moral character of public corp CEOs who I don't personally know based only on mainstream and social media reports.

> FWIW I doubt that a souped-up LLM could replace someone like Dr. Lisa Su, but certainly someone like Brian Thompson.

I have trouble even engaging with this proposition because I find it nonsensical. CEOs aren't just Magic 8-Balls making decisions. Much of their value is in their interpersonal interactions and relationships with the top twenty or so execs they manage. Over time, orgs tend to model the thinking processes and values of their CEOs organically. Middle managers at Microsoft who I worked with as a partner were remarkably similar to Bill Gates (who I met with many times) despite the fact they'd never met BillG themselves. For better or worse, a key job of a CEO is role modeling behavior and decision making based on their character and values. By definition, an LLM has no innate character or values outside of its prompt and training data - and everyone knows it.

An LLM as a large public corp CEO would be a complete failure, and it has nothing to do with the LLM's abilities. Even if the LLM were secretly replaced with a brilliant human CEO actually typing all the responses, it would fail. The mere fact that everyone believed the CEO was an LLM would doom the whole experiment from the start, due to the innate psychology of the human employees.


So you don't want to kill off knowledge workers?

How unfitting to the storyline that got created here.


A core part of their skill set is taking credit and responsibility for the work others do. So they probably assume they can do the same for an AI workforce. And they might be right. They already do the same for what the machines in the factory etc. produce.

But more importantly, most already have enough money to not have to worry about employment.


That's still hubris on their part. They're assuming that an AGI workforce will come to work for their company and not replace them so they can take the credit. We could just as easily see a fully-automated startup (complete with AGI CEO who answers to the founders) disrupt that human CEO's company into irrelevance or even bankruptcy.


Probably a fair bit of hubris, sure. But right now it is not possible or legal to operate a company without a CEO in Norway, and I suspect that is the case in basically all jurisdictions. I do not see any reason why this would change in an increasingly automated world. The rule of law is ultimately based on personal responsibility (limited in the case of corporations, but nevertheless). And there are so many bad actors looking to defraud people and avoid responsibility; those still need protecting against in an AI world. Perhaps even more so...

You can claim that the AI is the CEO, and in a hypothetical future, it may handle most of the operations. But the government will consider a person to be the CEO. And the same is likely to apply to basic B2B like contracts - only a person can sign legal documents (perhaps by delegating to an AI, but ultimately it is a person under current legal frameworks).


That's basically the knee of the curve towards the Singularity. At that point in time, we'll learn if Roko's Basilisk is real, and we'll see if thanking the AI was worth the carbon footprint or not.


I wouldn’t worry about job safety when we have such a utopian vision as the elimination of all human labor in sight.

Not only will AI run the company, it will run the world. Remember: a product or service only costs money because somewhere down the assembly line, or in some office, there are human workers who need to feed their families. If AI can gradually reduce human involvement to zero, with good market competition (AI can help with this too: if AI can be a capable CEO, starting your own business will be insanely easy), we’ll get near-absolute abundance. Then humanity will basically be printing any product and service on demand at zero cost, like how we print money today.

I wouldn’t even worry about unequal distribution of wealth, because with absolute abundance, any piece of the pie is itself an infinitely large pie. Still think the world isn’t perfect in that future? Just one prompt, and the robot army will do whatever it takes to fix it for you.


Pump Six and The Machine Stops are the two stories you should read. They are short, to the point and more importantly, far more plausible.


I'd order ∞ paperclips, first thing.


Sure thing, here's your neural VR interface and extremely high fidelity artificial world with as many paperclips as you want. It even has a hyperbolic space mode if you think there are too few paperclips in your field of view.


> elimination of all human labor.

Manual labor would still be there. Hardware is way harder than software; AGI seems easier to realize than mass worldwide automation of the minute tasks that currently require human hands.

AGI would force back knowledge workers to factories.


My view is that AGI will dramatically reduce the cost of R&D in general, and then developing humanoid robots will be an easy task, since it's AI systems that will be doing the development.


A very cynical take: why spend time and capital on robot R&D when you already have a world filled with self-replicating humanoids, and you can feed them whatever information you want through the social networks you control to make them do what you want with a smile?

Fortunately no government or CEO is that cynical.


As long as we have a free market, nobody gets to say, “No, you shouldn’t have robots freeing you from work.”

Individual people will decide what they want to build, with whatever tools they have. If AI tools become powerful enough that one-person companies can build serious products, I bet there will be thousands of those companies taking a swing at the “next big thing” like humanoid robots. It’s a matter of time those problems all get solved.


Individual people have to have access to those AGIs to put them to use (which will likely be controlled first by large companies) and need food to feed themselves (so they'll have to do whatever work they can at whatever price possible in a market where knowledge and intellect is not in demand).

I'd like to believe personal freedoms are preserved in a world with AGI and that a good part of the population will benefit from it, but recent history has been about concentrating power in the hands of the few, and the few getting AGI will free them from having to play nice with knowledge workers.

Though I guess at some point robots might be cheaper than humans without worker rights, which would warrant the investment even when thinking cynically.


If AGI/ASI can figure out self-replicating nano-machines, they only need to build one.


Past industrial and other productivity jumps have had their fruits distributed unevenly. Why will this be different?

Most technology is a magnifier.


Yes, number-wise the wealth gap between the top and the median is bigger than ever, but the actual quality-of-life difference has never been smaller: Elon and I probably both use an iPhone, wear similar T-shirts, mostly eat the same kind of food, and get our information and entertainment from Google/ChatGPT/YouTube/X.

I fully expect the distribution to be even more extreme in an ultra-productive AI future, yet nonetheless the bottom 50% would have their every need met in the same manner that Elon has his. If you ever want anything, or have something more ambitious in mind (say, starting a company to build something no one’s thought of), you’d just call a robot to do it. And because the robots are themselves developed and maintained by an all-robot company, it costs nobody anything to provide this AGI robot service to everyone.

A Google-like information query would have been unimaginably costly to execute a hundred years ago, and here we are, it’s totally free because running Google is so automated. Rich people don't even get a better Google just because they are willing to pay - everybody gets the best stuff when the best stuff costs 0 anyway.


With an AI workforce, you can eliminate the need for a human workforce and share the wealth, or you can eliminate the human workforce and not share.


AI services are widely available, and humans have agency. If my boss can outsource everything to AI and run a one-person company, soon everyone will be running their own one-person companies to compete. If OpenAI refuses to sell me AI, I’ll turn to Anthropic, DeepSeek, etc.

AI is raising individual capability to a level that once required a full team. I believe it’s fundamentally a democratizing force rather than monopolizing. Everybody will try and get the most value out of AI, nobody holds the power to decide whether to share or not.


The danger point is when there is abundance for a limited number of people, but not yet enough for everyone.


... and eventually humankind goes extinct due to mass obesity


There's at least as much reason to believe the opposite. Much of today's obesity has been created by desk jobs and food deserts. Both of those things could be reversed.


We could expand on this, but it boils down to bringing back aristocracy/feudalism. There was no inherent reason why aristocrats and feudal lords existed; they weren't smarter and didn't deserve anything over the average person, they just happened to be in the right place at the right time. These CEOs and the people pushing for this believe they are in the right place at the right time, and once everyone else's chance to climb the ladder is taken away, things will just remain in limbo. I will say this: especially if you aren't already living in a rich country, be careful what you are supporting by enabling AI models, because the first ladder to be taken away will be yours.


The inherent reason feudal lords existed is that, if you're the leader of a warband, you can use your soldiers to extract taxes from the population of a certain area, and then use that revenue to train more soldiers and increase the area.

Today, instead of soldiers, it's capital, and instead of direct taxes, it's indirect economic rent, but the principle is the same - accumulation of power.


I don’t think they believe they are safe due to having unreplaceable skills. I think they believe they are safe due to their access to capital.


> Why do the CEOs think they are safe?

Because the first company to achieve AGI might make their CEO the first personality to achieve immortality.

People would be crazy to assume Zuckerberg or Musk haven't mused personally (or to their close friends) about how nice it would be to have an AGI crafted in their image take over their companies, forever. (After they die or retire)


Because unless the board explicitly removes them, they’re the ones that will be deciding who gets replaced?


Maybe because they must remain the final scapegoat. If the AI CEO screws up, it would call too much into question the decision making behind implementing it. If a regular CEO screws up, it's just the usual story.


I’ve long maintained that our actual definition of a “person” is an entity that can accept liability.


Are they? https://ceo-bench.dave.engineer/

In practice though, they're the ones closest to the money, and it's their name on all the contracts.


No problem. The AI runs the company, and the CEO still gets all of the money!


Those jobs are based on networking and reputation, not hard skills or metrics. It won't matter how good an AI is if the right people want to hire a given human CEO.


Market forces mean they can't think collectively or long term. If they don't do it, someone else will, and that someone else will end up with more money than them.


Someone's head has to roll when things go south.

If this theory holds true, we'll actually be quite resilient to AI—the rich will always need people to scapegoat.


Best case scenario is that AI makes it so everyone can be a 1-man CEO. Competition goes up across the board, which then brings prices down.


> If AI can replace the knowledge workers it can also run the company.

"Knowledge worker" is a rather broad category.


Has this story not been told many times before in sci-fi, including Gibson’s “Neuromancer” and “Agency”? AGI is when the computers form their own goals and are able to use the API of the world to aggregate their own capital and pursue their objectives, wrapped inside webs of corporations and fronts that enable them to execute within today’s social operating system.


AI can’t play golf or take customers to the corporate box seats for various events.


This is correct. But it can talk in their ear and be a good sycophant while they attend.

For a Star Wars analogy, remember that the most important thing that happened to Anakin at the opera in Episode III was what was being said to him while he was there.


The AI it'd be selling to wouldn't be interested in those things either.



