They are literally called "Large Language Models". Everybody prefers the term "AI" because it's easier to pretend they actually know things, but that's not what they are designed to do.
Have you actually played these games? I put in some hours on Hitchhiker's Guide, and it was anything but natural. Maybe once you get far enough into the game and learn which phrasings the parser actually accepts it gets easier, but I never got there. You wake up in the dark and have to figure out how to even turn on the light. Then you have to do a series of actions in a very specific order before you can get out of your bedroom.
Figuring it all out is part of the fun, but outside the context of a game it would be maddening.
As for Eliza, she mostly just repeats back the last thing you said as a question. “My dog has fleas.” “How does your dog having fleas make you feel?”
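The trick really is about that shallow. Here's a minimal sketch of the pronoun-reflection idea in Python (an illustration only, not the actual ELIZA script, which used a ranked keyword table and also re-conjugated verbs, e.g. "has" -> "having"):

    # Minimal Eliza-style reflection: swap pronouns, then wrap the
    # statement in a canned question. The real ELIZA did more, but
    # this is the core of the effect.
    REFLECTIONS = {
        "i": "you", "me": "you", "my": "your",
        "you": "i", "your": "my", "am": "are",
    }

    def respond(statement: str) -> str:
        words = statement.lower().rstrip(".!?").split()
        reflected = " ".join(REFLECTIONS.get(w, w) for w in words)
        return f"How does {reflected} make you feel?"

    print(respond("My dog has fleas."))
    # -> How does your dog has fleas make you feel?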
Which is why it's done that way. Other text-based games where the focus is not on puzzling out what to do next (like roleplaying MUDs) have a stricter, more easily discoverable vocabulary.
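For comparison, that MUD-style approach is basically a whitelist. A toy sketch (all verbs and nouns here are made up, not taken from any specific game):

    # Toy strict verb-noun parser in the MUD style. The whole
    # vocabulary is a small whitelist, so a HELP command can simply
    # list every word the game understands.
    VERBS = {"look", "take", "drop", "go"}
    NOUNS = {"lamp", "door", "north"}

    def parse(command: str):
        words = command.lower().split()
        if len(words) == 1 and words[0] in VERBS:
            return (words[0], None)
        if len(words) == 2 and words[0] in VERBS and words[1] in NOUNS:
            return (words[0], words[1])
        return None  # game replies "I don't understand that."

    print(parse("take lamp"))       # ('take', 'lamp')
    print(parse("grab the light"))  # None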
This would be like saying using programming languages is terrible because Brainfuck is a terrible programming language.
Well, you could always focus on the ridiculous environmental impact of LLMs. I read once that asking ChatGPT used 250x as much energy as just googling. But now Google has incorporated LLMs into search, so…
I grew up on the banks of the Hudson River, polluted by corporations dumping their refuse into it while reaping profits. Anthropic/OpenAI/etc. are doing the same thing.
Yes. It's horrible. Probably 250x as much as watering your lawn per 1M ChatGPT queries. Except your sprinklers' vendor probably incorporates ChatGPT in their marketing, so they're literally using water to sell you tools to use water!
Oh the humanity!
I can't take those eco-impact threads seriously. Yes, ChatGPT uses compute, and compute uses water and electricity. So does keeping your lawn trimmed and your dog healthy, and of the three, I bet ChatGPT is actually generating the most value for everyone on the net.
Everything we do uses electricity and water. Everything that lives uses energy and water. The question isn't whether, or even how much, but what for. Yes, LLMs use a lot of energy in absolute terms - but that's because they're that useful. Yes, despite what people who deny basic reality would tell you, LLMs actually are tremendously useful. In relative terms, they don't use much more energy or water than the things they displace, and they make up for it in the improvements they bring.
Want to talk environmental impact of ChatGPT et al.? Sure, but let's frame it with comparative figures for sportsball, concerts, holiday decorations, Christmas lights, political campaigns, or pets. Suddenly, it turns out the whole thing is merely a storm in a teacup.
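To make the "comparative figures" point concrete, a back-of-envelope sketch; every constant below is an assumption made up for illustration (published estimates vary by an order of magnitude or more):

    # All constants are illustrative assumptions, not measurements.
    WATER_PER_QUERY_L = 0.02   # assumed litres of cooling water per query
    QUERIES = 1_000_000        # the "per 1M queries" framing from above
    LAWN_SESSION_L = 1_000     # assumed litres for one lawn watering

    total = WATER_PER_QUERY_L * QUERIES
    print(f"{total:,.0f} L ~ {total / LAWN_SESSION_L:.0f} lawn waterings")
    # -> 20,000 L ~ 20 lawn waterings, under these assumptions.
    # Shift either constant by 10x and the comparison flips, which is
    # the point: the framing does more work than the meme number.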
Have you read about the impact of data centers in non-US countries? Building a data center that requires potable water in a drought-stricken country that lacks the resources to defend itself is incredibly destructive.
And I don’t have a dog, but that water usage certainly provides the most benefit. Man’s best friend > online sex bot.
> Have you read about the impact of data centers in non-US countries? Building a data center that requires potable water in a drought-stricken country that lacks the resources to defend itself is incredibly destructive.
And? Have you read about the impact of ${production facilities} in non-US countries? That's literally what industrialization and globalization are about. Data centers aren't unique here - same is true about power plants, factories, industrial zones, etc. It all boils down to the fact that money, electricity and compute are fungible.
Note: this alone is not a defense of LLMs, merely me arguing that they're nothing special and don't deserve to be singled out - it's just another convoluted scenario of multinational industries vs. local populations.
(Also last time I checked, the whole controversy was being stoked up by a bunch of large interest groups that aren't happy about competition disturbing their subsidized water costs - it's not actually a grassroots thing, but an industry-level PR war.)
> Man’s best friend > online sex bot.
That's disingenuous. I could just as well say: {education, empowering individuals to solve more of their own problems, improving patient outcomes} > pets and trimmed lawns. LLMs do all of these and sex bots too; I'm pretty sure they do more of the former than the latter, but you can't prove it either way, because compute is fungible and accurate metrics are hard to come by :P.
look at the 1:10 plate. there are colonies there that aren't yeast. luckily, it got diluted out, or maybe that plating was a bit more careless than the others.
i didn't mean to imply that all of the colonies aren't yeast.
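For anyone following along at home, the arithmetic behind reading a dilution series is just this (colony counts and volumes below are made-up example numbers, not from the plates above):

    # Standard plate-count arithmetic:
    #   CFU/mL = colonies * dilution factor / volume plated (mL)
    def cfu_per_ml(colonies: int, dilution_factor: int, plated_ml: float) -> float:
        return colonies * dilution_factor / plated_ml

    print(cfu_per_ml(42, 10, 0.1))  # 4200.0 CFU/mL from the 1:10 plate
    # A contaminant visible at 1:10 but gone at 1:100 was simply
    # diluted below ~1 expected colony per plate.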
> The best time to start terraforming a planet is 500 years ago.
For context, it took an estimated three-quarters of a billion years to oxygenate Earth's atmosphere. Even a speed-run of that is ... considerably longer than a few centuries.
The best time to start terraforming a planet is never. The idea is as absurd as a Dyson sphere/swarm. People should really grow beyond sci-fi ideas that were last fresh in the 1930s.
To your point, one of the most remarkable things I've read about both Mars and Venus is that there was a time billions of years ago when they had more moderate temperatures and liquid water.
In a way, it's a tragedy that human civilization only emerged after both Mars and Venus had become far more inhospitable than they once were.
Probably because the window when all three of them (or even just Earth and one other) were habitable enough to sustain a technological civilization was very short, if it ever existed at all.
I’m only 1% serious, but how do we know for sure which direction evolution went in within the ape family?
It seems not entirely implausible that at some point in the scientific chain of custody we simply assumed the “lesser” apes “evolved into” the “more advanced” human.
But a species could easily branch and have the branch lose its geographic-portability features (e.g. the ability to manipulate its environment, most outward behavior being learned rather than innate) if those traits are no longer selected for in a particular environment, and I’m not aware of anything in the fossil record that firmly establishes directionality. Am I wrong?