Hacker News | new | past | comments | ask | show | jobs | submit | reverius42's comments

> Choosing to park my car correctly because I used to get tickets is a reactive action.

How do you explain someone who chooses to park correctly and has never received a parking ticket?


Why do I need to? I didn’t propose some framework to analyze people who are perfect.

This thread chain is about people who do something and it doesn’t work out.


More of a labhome than a homelab at that point.

> It's emergent complexity, not compression.

They might be the same thing.

See also: https://news.ycombinator.com/item?id=31003493


That comment does not have anything to do with what we are discussing (i.e. R/DNA).

I would try it now with GPT-5.1.

In what ways do ad tech firms or non-profits use algorithms to assign you any kind of score that matters for your life?

A lot of that funding in the US goes to pay teachers money they then use to pay for health insurance -- which in other countries is often provided by the tax base at large and not counted as an education expense.

> Their unions were also destroyed.

By policy changes giving unions less power, enacted by politicians that were mostly voted for by a majority, which is mostly composed of the working class. Was this people voting against their interests? (Almost literally yes, but you could argue that their ideological preference for weaker unions trumps their economic interest in stronger unions.)


If your choices in an election are pre-selected, was it democratic?

"Thus, a caste system makes a captive of everyone within it."

Thinking that a text completion algorithm is your friend, or can be your friend, indicates some detachment from reality (or some truly extraordinary capability of the algorithm?). People don't have that reaction with other algorithms.

Maybe what we're really debating here isn't whether it's psychosis on the part of the human, it's whether there is something "there" on the part of the computer.


Yes, chatbot psychosis has been studied, and there's even a Wikipedia article on it: https://en.wikipedia.org/wiki/Chatbot_psychosis

From that article, it doesn’t sound like it’s been studied at all. It sounds like at the current stage it’s hypothesis + anecdotes.

Even "simply following directions" is something the chatbot will do that a real human would not -- and interacting with that real human is important for human development.
