Hacker News

I wonder whether the investors and inventors behind the early printing press, the steam engine, or the Excel spreadsheet foresaw the ways their tech would be used: soul-crushing homework (books), rapid and cruel colonization (steam engines and trains), innovative project management (Excel).

The demand for these products probably did not end up where their creators intended. Perhaps the answer to AI's biggest effect likewise lies in how it will free up human potential and time.

If AI can do that (and that is a big if), then what would you do with that time? More activity, of course, and different ways to spend it, implying new kinds of jobs.




The trouble with looking at past examples of new tech and automation is that those were all verticals: the displaced worker could move to a different, maybe newly created, line of work left intact by the change.

Where AI will be different (when we get there; LLMs are not AGI) is that it is a general human-replacement technology, meaning there will be no place to run. AI may change the job landscape, but the new jobs (e.g. supervising AIs) will ALSO be done by AI.

I don't buy this "AGI by 2027" timeline, though. LLMs and LLM-based agents are just missing so many basic capabilities compared to a human (e.g. the ability to learn continually and incrementally). It seems that RL, test-time compute (cf. tree search), and agentic applications have given a temporary second wind to LLMs, which were otherwise topping out in terms of capability, but IMO we are already seeing the limits of this too. Superhuman math and coding ability (on smaller-scope tasks) does not translate into GENERAL intelligence, because it is not based on a general mechanism; it comes from vertical pre-training in these areas (atypical in terms of general use cases) where there is a clean reward signal for RL to work well.

It seems that this crazy "we're responsibly warning you that we're going to destroy the job market!" spiel exists perhaps because these CEOs realize there is a limited window of opportunity to get widespread AI adoption (and/or more investment) before the limitations become more obvious. Maybe they are just looking for an exit, or perhaps they are hoping that AI adoption will be sticky even if it proves to be a lot less capable than what they are promising it will be.





