"We conclude that the LLM has developed an analog form of humanlike cognitive selfhood."
Slack.
I was just using one (the mini at DDG) that declared one very small value for a mathematical probability of an event, then (in the next reply) declared a 1-in-1 probability for the same event.
Precisely. It's why I find this pursuit of making a computer think like a human a fucking fool's errand. Great. It can make mistakes a billion times a second, but do so confidently and convincingly enough that people just believe them because of their "humanlike" qualities.
Don't get me wrong, AI has incredible potential and current use cases, but it is far, far from flawless. And yes, I'm thoroughly unconvinced we're anywhere close to AGI/sentience.
Five humans dead in 125 years is a record to envy for most transit systems. (Light rail likes to run into pedestrians almost as much as buses do.) And it was a maintenance failure, not an operator or structural one.
Not to count the baby elephant that fell out of one car back in 1950.
Wuppertal is a wonderful ride for many thousands daily, millions over decades, and is a wonderful model of visual, sane, safely engineered public transport.
> Not to count the baby elephant that fell out of one car back in 1950.
Heavens to Betsy! And the elephant lived... till 1989! :)
> Wuppertal is a wonderful ride for many thousands daily, millions over decades, and is a wonderful model of visual, sane, safely engineered public transport.
Negative negs spit out low-effort snark; they said the same thing about solar, electric cars, even multicore, JIT, open source. Thanks for refuting them. The forum software itself should either quarantine the response or auto-respond before the comment is submitted. These people don't build the future.
Yes, it's net positive only if you ignore the ~100x extra power it took to actually run the reactor, so no, it's not actually net energy positive. Not even close.
Nothing to do with cost: we cannot build a fusion reactor in 2025, with any amount of money, that will produce more energy than goes into it.
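To make the arithmetic behind that distinction concrete, here's a minimal sketch in Python. The driver energy, fusion yield, and ~100x wall-plug overhead below are illustrative assumptions, not figures from the article; the point is only how "scientific gain" and whole-facility gain can diverge.

```python
# Rough sketch of "scientific gain" vs. whole-facility gain for a fusion shot.
# All numbers are illustrative assumptions, not figures from the article.

energy_to_target_mj = 2.0     # assumed driver energy actually delivered to the fuel
fusion_yield_mj = 3.0         # assumed fusion energy released
facility_wallplug_mj = 300.0  # assumed total grid draw to run the shot (~100x overhead)

q_scientific = fusion_yield_mj / energy_to_target_mj   # the headline "net gain" number
q_facility = fusion_yield_mj / facility_wallplug_mj    # what the grid actually sees

print(f"scientific gain: {q_scientific:.2f}")  # > 1
print(f"facility gain:   {q_facility:.3f}")    # << 1, i.e. not net energy positive
```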
Fits my limited experience with LLMs (as a researcher). Very impressive apparent written-language comprehension and written expression. But when it comes to getting to the -best possible answer- (particularly on unresolved questions), the nearly instant responses (e.g., to questions one might spend a half-day on without resolution) are seldom satisfactory. Complicated questions take time to explore, and IME an LLM's lack of resolution (because of its inability to reach one) is, so far, set aside in favor of confident-sounding (even if completely wrong) responses.
They should then be posted in all government agencies as well. Including that easiest-to-understand, hardest-to-live-by one: 'Thou shalt not kill.'
To avoid the appearance of mere authority, the payoffs for obeying them should be listed as well. For one example: when you get to the gate, will you get due process?
It's true. And we're all living the best years we've got left ... starting -now-. It's sad that so many people wait -so long- (when they may not even make it).
So whatever it costs, when we can afford it, we must -take the time- to live them -now-. The wheels will keep turning without us for a while. That time will never be lost.
Yep, it could work for some. And I think that's his point. Depending on how much meatspace socializing / culture one wants/needs. Library internet, meh ... but working 3 more hours at Stewart's would take care of that ... and access to a hyuge amount of entertainment, news, online spaces. Readers, writers, painters, DIYers. At $0.04 per kWh, keeping a small room warm in the winter is trivial ... could be worse!
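For the $0.04/kWh heating claim, a quick back-of-the-envelope sketch; the heater wattage and run time below are assumptions for illustration, not anything from the original comment.

```python
# Back-of-the-envelope heating cost at $0.04/kWh.
# Heater size and duty cycle are assumptions for illustration.

price_per_kwh = 0.04   # $/kWh, as stated above
heater_watts = 750     # assumed small space heater
hours_per_day = 12     # assumed winter run time

kwh_per_day = heater_watts / 1000 * hours_per_day   # 9 kWh/day
cost_per_day = kwh_per_day * price_per_kwh          # $0.36/day
print(f"{kwh_per_day:.1f} kWh/day -> ${cost_per_day:.2f}/day, ~${cost_per_day * 30:.0f}/month")
```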
And working 5 more hours would get him some better garden tools, and 20 more and he could support a family of 3. And if he just got a higher-paying job, he could even get a car!
Article: "Astronomers have previously detected less than 30 such clusters with relic pairs. But the upcoming Square Kilometre Array being built in South Africa and Australia could be a 'game changer,'"
Radio observing got off to a slow start (after Jansky), but considering the size of those arrays ... yeah!