Humans are evolutionarily adapted to running. Yes, going from a modern sedentary lifestyle to running will feel rough for a few months as you acclimate (and most people don't start off at the slow 11 min./mile [7 min./km] pace they should). But there's a reason marathons are so popular: running and exercise start to feel really good.
> Yes, going from a modern sedentary lifestyle to running will feel rough for a few months as you acclimate
Right. Which is why people don't exercise. That's a lot to ask of people with other things on their plate.
To be clear, I know I should exercise. I just find it very difficult to do so, and it's very easy to convince yourself you should do something else with that energy.
I think team sports are probably the best way to get into exercise, because they tie the benefit directly into a reward system. I ran long-distance in high school and, without teammates to let down, it was very difficult to push myself beyond the bare minimum effort. Most of us don't have the ability to summon a team sport into our work schedule though.
The problem with exercise is that we have evolved to be energy-efficient, and this includes being lazy when we have food and shelter, so the resistance to it is high.
However, once we start exercising for any period of time and observe the positive outcomes, the difficulty drops and it becomes enjoyable.
The problem is people's expectations and approach.
> Yes, going from a modern sedentary lifestyle to running will feel rough for a few months as you acclimate
This is a terrible idea: someone who has been sedentary will likely just injure themselves and/or feel miserable. People don't have realistic expectations. It's better to do something like "couch to 5K" running on grass or a dirt track. In a couple of weeks they will feel good after a run (if they are not too distracted and outward-looking), and from there the runner's high reinforces the behavior and they will look forward to exercising.
Many people find the data and record-breaking in running a good reward system. There are all kinds of goals to chase, from your best time at ten different distances, to weekly miles, to ever-longer distances like a marathon.
Team sports are harder to work into a schedule, which is why running is easier for many people to start: it just requires a pair of shoes and leaving the house on your own. For others there's a parallel social angle, where you can make friends you see regularly at clubs and enjoy the same activity together.
Haha, my thoughts exactly. This HN thread is simultaneously criticizing them for being too assured and not considering other possibilities, and for hedging that they may not be right and other possibilities exist.
They would love for this to be true. Wishful thinking is a powerful thing. Harder to face the culpability and upcoming tragedy for the other outcome. Denial is the first stage of grief after all.
Weren't Facebook and social media supposed to be tools for social superpowers? Those optimistic days are long gone and obviously flawed, and people probably forget they were acting just as hopeful then as people are about AI now. Instead people in general are lonelier and more depressed than ever; Facebook turned out to be really good at making $$$, not so much at improving people's social lives. Best not to look at how many university students are depressed...
I'm old enough to remember when Twitter was new, and for a moment it felt like the old utopian promise of the Internet finally fulfilled: ordinary people would be able to talk, one-on-one and unmediated, with other ordinary people across the world, and in the process we'd find out that we're all more similar than different and mainly want the same things out of life, leading to a new era of peace and empathy.
I believe the opposite happened. People found out that there are huge groups of people whose views on morality differ wildly from their own, and that just encouraged more hate. I genuinely think old-school Facebook, where people only interacted with their own private friend circles, was better.
Broadcast networks like Twitter only make sense for influencers, celebrities and people building a brand. They're a net negative for literally anyone else.
> old school facebook where people only interacted with their own private friend circles is better.
100% agree, but it's crazy that option doesn't exist anymore.
Was Twitter ever really meant for that? As far as I can tell the primary purpose of Twitter is moderated access to celebrities, with the utopian ideas about communication just used to sell it.
It amazes me that people still trot out the same "where are the immediate profits?" line after being wrong, wrong, and wronger about Uber, Facebook, Amazon, and Google all becoming absurdly profitable in the long term. Every one of those had a tremendous number of naysayers poo-pooing the company's strategy of not immediately pursuing profitability. They were wrong in every case then, and they're wrong now.
It's designed better for 4 players. In that case, 3 other players directing every robber roll and knight card at the clear leader acts as a real negative feedback loop to counter any snowball. That's not nearly as possible with just 1 trailing player.
We're fast approaching the point where any value someone can provide through a digital interface could be provided better by a model. What do we use digital interfaces for? Practically everything.
Oh well, not being a plumber, electrician, or farmer... but our society's productivity, technology, and automation reduced the share of the US population that needs to farm from 80% to 1.3%. Can you imagine what the equivalent of 1 billion digital engineers unlocks in understanding and implementing robotics?
Yes, when the knowledge jobs are all done best by AI, the rest will follow shortly. We will need to adapt to being "useless" as far as work goes and find other sources of worth. There are still a lot of people around here who want to compare it to Bitcoin hype; IMO, over the next few years everything is going to change way faster than it ever has.
For the record, I always thought Kurzweil and that crowd were clowns; now I think I was the one who was wrong.
> IMO the next few years everything is going to change way faster
Honestly, after hearing this for the past 20 years (ever since ML, and later LLMs, became a thing), it is actually more like the level-5 autonomous car hype and less like Bitcoin. Except that the driverless-car hype never required such a humongous investment bubble as the statistical-text-generator-as-AI one does.
Meanwhile I haven't seen any real progress that I'd care about in a while.
Is GPT-4xyz better than the last one? I'm sure some benchmark numbers say that. But the number of applications where occasional hallucinations don't matter is very small, and where it matters nothing really changed. Companies are trying to use it for customer support but that predictably turned out to be a legal risk. Klarna went all-in on AI and regrets it now.
Some media are talking about Microsoft writing 30% of their new code with AI, but what Nadella actually said is less impressive: "maybe 20-30% of the code that is inside of our repos today in some of our projects are probably all written by software".
Which, coincidentally, is the ratio of code that can be autocompleted by an IDE without LLM, according to Jetbrains.
I have yet to see any evidence that anything will change way faster than it ever has, aside from the readiness of many younger people to use it in everyday life for things it really shouldn't be used for.
Yes, they have gotten better. If you give Gemini 2.5 the right context, it seems to solve whatever you throw at it. Drop in the folder plus docs and it tends to be right about how to proceed now. I think people who don't find LLMs useful aren't trying with the right context.
I'm with you. A weak version of a singularity seems likely. Recursive self-improvement isn't just possible, it's inevitable. Models are capable of extrapolation, but they don't even need it: good interpolation by itself is enough to get us recursive self-improvement.
I tend to think that it’ll have an optimistic ending. The key to solving most political problems is eliminating scarcity.
Will people finally wake up that the AGI X-Risk people have been right and we’re rapidly approaching a really fucking big deal?
This forum has been so behind for too long.
Sama has been saying this for a decade now: "Development of Superhuman machine intelligence is probably the greatest threat to the continued existence of humanity" (2015). https://blog.samaltman.com/machine-intelligence-part-1
Hinton, Ilya, Dario Amodei, the RLHF inventor, the DeepMind founders: they all get it, which is why they're the smart cookies in those positions.
First stage is denial, I get it, not easy to swallow the gravity of what’s coming.
People have been predicting the singularity to occur sometime around 2030 or 2045 from waaaay further back than 2015. And not just enthusiasts; I dimly remember an interview with Richard Dawkins from back in the day...
Though that doesn't mean the current generation of language models will ever achieve AGI, and I sincerely doubt they will.
They'll likely be a component of the AI, but probably not the thing that "drives" it.
Vernor Vinge, as much as anyone, can be credited with the concept of the singularity. In his 1993 essay on it, he said he'd be surprised if it happened before 2005 or after 2030.
There is a strong financial incentive for a lot of people on this site to deny they are at risk from it, or to deny that what they are building carries risk and that they should bear culpability for it.
> "Development of Superhuman machine intelligence is probably the greatest threat to the continued existence of humanity”
If that's really true, why is there such a big push to rapidly improve AI? I'm guessing OpenAI, Google, Anthropic, Apple, Meta, and Boston Dynamics don't really believe this. They believe AI will make them billions. What is OpenAI's definition of AGI? A model that makes $100 billion?
Because they also believe the development of superhuman machine intelligence will probably be the greatest invention for humanity. The possible upsides and downsides are both staggeringly huge and uncertain.
And why are Altman's words worth anything? Is he some sort of great thinker? Or a leading AI researcher, perhaps?
No. Altman is in his current position because he's highly effective at consolidating power and has friends in high places. That's it. Everything he says can be seen as marketing for the next power grab.
Altman did play some part in bringing ChatGPT about. I think the point is that the people making AI, or running the companies making current AI, are the ones saying to be wary.
In general it's worth weighting the opinions of people who are leaders in a field, about that field, over people who know little about it.
> Will people finally wake up that the AGI X-Risk people have been right and we’re rapidly approaching a really fucking big deal?
OK, say I totally believe this. What, pray tell, are we supposed to do about it?
Don't you see the irony of quoting Sama's dire warnings about the development of AI without at least mentioning that he is at the absolute forefront of the push to build this technology that can destroy all of humanity? It's like he's saying "this potion can destroy all of humanity if we make it" as he works faster and faster to figure out how to make it.
I mean, I get it, "if we don't build it, someone else will", but all of the discussion around "alignment" seems blatantly laughable to me. If your goal is to build "super intelligence", i.e. something way smarter than any human or group of humans, how do you expect to control that super intelligence when you're acting at the middling level of human intelligence?
While I'm skeptical on the timeline, if we do ever end up building super intelligence, the idea that we can control it is a pipe dream. We may not be toast (I mean, we're smarter than dogs, and we keep them around), but we won't be in control.
So if you truly believe super intelligent AI is coming, you may as well enjoy the view now, because there ain't nothing you or anyone else will be able to do to "save humanity" if or when it arrives.
> If on one hand your goal is to build "super intelligence", i.e. way smarter than any human or group of humans, how do you expect to control that super intelligence when you're just acting at the middling level of human intelligence?
That's exactly what the true AGI X-Riskers think! Sama acknowledges the intense risk but thinks the path forward is inevitable anyway, so he hopes that building intelligence will give them the intelligence to solve alignment. The other camp, a la Yudkowsky, believes it's futile to just hope alignment gets solved before AGI becomes more intelligent and powerful and starts disregarding any of our wishes. At that point we've ceded any control of our future to an uncaring system that treats us as a means to achieve its original goals, the way an ant is simply in the way of a Google datacenter. I don't see how anyone who already grasps that "make the stock number go up" as your only goal is not the best way to make people happy can miss this.
Slightly more detail: until about 2001 Yudkowsky was what we would now call an AI accelerationist, then it dawned on him that creating an AI that is much "better at reality" than people are would probably kill all the people unless the AI has been carefully designed to stay aligned with human values (i.e., to want what we want) and that ensuring that it stays aligned is a very thorny technical problem, but was still hopeful that humankind would solve the thorny problem. He worked full time on the alignment problem himself. In 2015 he came to believe that the alignment problem is so hard that it is very very unlikely to be solved by the time it is needed (namely, when the first AI is deployed that is much "better at reality" than people are). He went public with his pessimism in Apr 2022, and his nonprofit (the Machine Intelligence Research Institute) fired most of its technical alignment researchers and changed its focus to lobbying governments to ban the dangerous kind of AI research.
Political organization to force a stop to ongoing research? Protest outside OAI HQ? There are lots of thing we could, and many of us would, do if more people were actually convinced their life were in danger.
> Political organization to force a stop to ongoing research? Protest outside OAI HQ?
Come on, be real. Do you honestly think that would make a lick of difference? Maybe, at best, delay things by a couple months. But this is a worldwide phenomenon, and humans have shown time and time again that they are not able to self organize globally. How successful do you think that political organization is going to be in slowing China's progress?
Humans have shown time and time again that they are able to self-organize globally.
Nuclear deterrence, human cloning, bioweapon proliferation, Antarctic neutrality... the list goes on.
> How successful do you think that political organization is going to be in slowing China's progress?
I wish people would stop with this tired war-mongering. China was not the one who opened up this can of worms. China has never been the one pushing the edge of capabilities. Before Sam Altman decided to give ChatGPT to the world, they were actively cracking down on software companies (in favor of hardware & "concrete" production).
We, the US, are the ones who chose to do this. We started the race. We put the world, all of humanity, on this path.
> Do you honestly think that would make a lick of difference?
I don't know, it depends. Perhaps we're lucky and the timelines are slow enough that 20-30% of the population loses their jobs before things become unrecoverable. Tech companies used to warn people not to wear their badges in public in San Francisco -- and that was what, 2020? Would you really want to work at "Human Replacer, Inc." when that means walking out and about among a population who you know hates you, viscerally? Or if we make it to 2028 in the same condition. The Bonus Army was bad enough -- how confident are you that the government would stand their ground, keep letting these labs advance capabilities, when their electoral necks were on the line?
This defeatism is a self-fulfilling prophecy. The people have the power to make things happen, and rhetoric like this is the most powerful thing holding them back.
> China was not the one who opened up this can of worms
Thank you. As someone who lives in Southeast Asia (and who has also lived in East Asia -- pardon the deliberate vagueness, for I do not wish to reveal too much potentially personally identifying information), this is how many of us in these regions view the current tensions between China and Taiwan as well.
Don't get me wrong; we acknowledge that many Taiwanese people want independence, that they are a people with their own aspirations and agency. But we can also see that the US -- and its European friends, which often blindly adopt its rhetoric and foreign policy -- is deliberately using Taiwan as a disposable pawn to attempt to provoke China into a conflict. The US will do what it has always done ever since the post-WW2 period -- destabilise entire regions of countries to further its own imperialistic goals, causing the deaths and suffering of millions, and then leaving the local populations to deal with the fallout for many decades after.
Without the US intentionally stoking the flames of mutual antagonism between China and Taiwan, the two countries could have slowly (perhaps over the coming decades) come to terms with each other, be it voluntary reunification or peaceful separation. If you know a bit of Chinese history, it is not far-fetched at all to think that the Chinese might eventually have agreed to recognising Taiwan as an independent nation, but this option has now been denied because the US has decided to use Taiwan as a pawn in a proxy conflict.
To anticipate questions about China's military invasion of Taiwan by 2027: No, I do not believe it will happen. Don't believe everything the US authorities claim.
3 out of 4 companies' Bay Area senior software engineer interviews require a System Design round where they will tell you "what if you had 10M users?" and expect a distributed, write-heavy sharding answer.
You’re not wrong in the literal sense. But the “inside baseball” of that question is just that it’s a prompt to talk about how you would horizontally scale a system should the need arise. It’s not a prompt to start questioning whether 10mm or 200mm is the specific limit.
Lots of people were mad that my employer developed a new distributed NoSQL database engine, but it was literally just an API to encapsulate what an application doing "sharded MySQL" would do in its own data tier. A lot of this is a question of framing and storytelling.
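For anyone who hasn't seen that pattern, here's a minimal sketch of what a "sharded MySQL" data tier boils down to, assuming hash-based routing on a user ID. Every name below (shard DSNs, `shard_for`, `put`) is invented for illustration, not any real product's API:

```python
import hashlib

# Hypothetical sketch of hash-based shard routing, the pattern a
# "sharded MySQL" data tier typically encapsulates. All names are
# invented for illustration.
NUM_SHARDS = 8

# In a real deployment each entry would be a connection pool to a
# separate MySQL instance; plain DSN strings stand in for them here.
SHARD_DSNS = [f"mysql://db-shard-{i}.internal/app" for i in range(NUM_SHARDS)]

def shard_for(user_id: int) -> str:
    """Stably map a user ID to one shard by hashing the key.

    Hashing (rather than user_id % NUM_SHARDS directly) avoids
    hotspots when IDs are assigned sequentially.
    """
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARD_DSNS[int(digest, 16) % NUM_SHARDS]

def put(user_id: int, row: dict) -> None:
    dsn = shard_for(user_id)
    # A real data tier would INSERT through the pool for `dsn`;
    # printing keeps the sketch self-contained and runnable.
    print(f"write {row!r} for user {user_id} -> {dsn}")

put(42, {"name": "alice"})
put(43, {"name": "bob"})
```

Wrap that routing behind an API and you have most of what gets marketed as a "distributed NoSQL engine"; the genuinely hard parts (resharding, replication, failover) are where the real work hides.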