Right, so obviously tasks oriented around biological survival and reproduction are different from tasks oriented around competitiveness in a modern economy. But the question does make me wonder:

What if we got AGI, failed to solve the alignment problem, and were just... fine? Because the AIs' goals turned out to be compatible enough with ours? We can't build a system like a bird, but if we did, there's no real reason to think it would spell our doom, considering that we coexist just fine with all the existing bird-like systems. (Not the first time I've been skeptical of AI alarmism, but this is a different perspective on it.)

That is one potential outcome, but it’s really scary to bet our future on it being correct.

I think economic incentives will lead us to run that experiment, but there is a risk that it fails. There's obviously vehement disagreement about the probabilities involved.

It'll be like "well, maybe nuclear weapons will start a self-sustaining fusion reaction in our atmosphere" again, but with less reassurance this time.
