Get a hard case. Rip out the foam and replace it with something more insulation-y. Now you have a cooler, a cooler that’ll keep the cold out instead of in.
What if it could? Or should (i.e., be able to produce FTE income, or close to it)?
In that world, the amount of pointless shite - questing to “go viral” - would be reduced to near zero. That is, if the incentive were more quality, and less quantity, we’d be better off, yes?
That's tempting, but I still don't think it should. There would still be the quest to go viral. "Quality" would still be determined in the aggregate, which means that your income depends on appealing to the widest audience possible, which means high quality niche bloggers still don't get paid much.
Metrics are hard. Just making sure they reward one particular desired outcome doesn't mean you'll escape the unintended consequences.
Also, note that we are past the point of being able to reasonably manage any of this. Today, you'd need to come up with a reward function that cannot be maximized by AI. (And lest you think you can fix that by using site visitors to evaluate, most of them will be bots too.)
Anything that can provide income inevitably leads to a flood of garbage from people trying to game the system. The current ad-driven web resulted in SEO garbage and near-uselessness of search engines.
So there's an element of truth to that. And there are those who can contribute enough value, have enough audience, etc., that they can "coast" on those 2 blog posts a month and make significant income...
... but that's also not, nor should it be, the median. I'm not sure how the economy functions if, say, 8h/mo of effort generates a median living wage.
“This has been the best thing the government has ever done for Brazil's poorest by far,” according to Márcio Garcia, an economics professor at Pontifical Catholic University of Rio de Janeiro.
A single centralized system? Controlled by the Brazilian Central Bank? That is effectively not optional? And is sold as a benefit for the already marginalized and repressed?
The issue with addiction is, it’s very often a symptom of other underlying issues. Relapses are common because too often the underlying problem isn’t treated. Overcoming the addiction is hard because it means facing the thing the addiction allows you to avoid.
Addiction is also common amongst those who suffer from NPD. In this case, is it truly addiction, or simply another tool in their NPD cache of weapons?
I don’t disagree with you. But it’s also important to be aware of some of the nuances and finer points. I also recommend reading “The Courage to Be Disliked”. Not that it / Adler speaks to addiction, but it’s a thought-provoking alternative to the Freudian paradigm.
2) If it’s not earned, it’s not Trust (or trust). Full stop.
3) Once lost, Trust takes 5x to 10x the effort - sometimes perhaps even more - to earn back.
No matter how you cut it, this ^^^ is how Trust works.
Based on what I’ve seen, The Media is not aware of these rules. It’s certainly not interested in #3. Instead, it does the worst thing possible: it blames its customers (i.e., the consumers of its content). Using gaslighting as a proxy for Trust is a rookie mistake.
Editorial: As if things weren’t stupid enough, Ars Technica drops another headline / title unworthy of a high school newspaper.
Sadly, these stupid “journalists” believe they’re doing journalism. Worse, a high percentage of the public - yes, even on HN - believes them.
Ask HN: Does anyone know of a browser extension that’ll let me display: none any AT articles that are submitted? I’m tired of being insulted. Surely there are better sources for the same information.
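(In the meantime, a userscript would probably do it. A rough sketch in TypeScript - the selectors are my assumption about HN's current markup, i.e., each submission is a tr.athing row with its link inside span.titleline and the points/comments row right after it, so verify against the live page:)

    // ==UserScript==
    // @name   Hide Ars Technica submissions on HN
    // @match  https://news.ycombinator.com/*
    // ==/UserScript==

    // Assumed markup: <tr class="athing"> holds the title link inside span.titleline,
    // and the row immediately after it holds the points/comments line.
    const BLOCKED_HOSTS = ["arstechnica.com"];

    document.querySelectorAll<HTMLTableRowElement>("tr.athing").forEach((row) => {
      const link = row.querySelector<HTMLAnchorElement>("span.titleline > a");
      if (!link) return;
      const host = new URL(link.href).hostname;
      if (BLOCKED_HOSTS.some((h) => host === h || host.endsWith("." + h))) {
        row.style.display = "none";                                   // hide the title row
        (row.nextElementSibling as HTMLElement | null)?.style.setProperty("display", "none"); // hide the subtext row
      }
    });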
Slow down, people. Let's stop jumping to our biases and see what we have here.
Note upfront: I'm not suggesting AI is not having an impact. That would be foolish. But I will say there's *a lot* less to the conclusion of this study, simply because the data is questionable. It's not that they did anything wrong per se - I won't make that claim here, because it'll end up an HN cluster fuck. Cluster fuck aside, the caveats and associated doubt are enough to say, "Don't bet the farm on this study." Great banter for the bar? Sure.
It's an interesting study, but I've seen it called "absolute proof" and other things of that sort. Don't be fooled; it's not that.
> "This study uses data from ADP, the largest payroll processing firm in America. The company provides payroll services for firms employing over 25 million workers in the US. We use this information to track employment changes for workers in occupations measured as more or less exposed to artificial intelligence"
a) I'm calling this out because I've seen posts on LinkedIn saying it was a sample of 25M. Nope! ADP simply does payroll for that many.
b) The size of the US workforce is ~165M, making ADP's coverage ~15% of the workforce.
c) Do the businesses ADP serves come from particular industries? Are they a particular size? In particular geographic locations? Etc. It's not only about the size of the sample - which we'll get to shortly - but the nature of the companies - which we'll also get to shortly.
> "We make several sample restrictions for our main analysis sample."
d) It's great that they say this, but it should raise an eyebrow.
> "We include only workers employed by firms that use ADP’s payroll product to maintain worker earnings records. We also exclude employees classified by firms as part-time from the analysis and subset to people between the age of 18 and 70."
e) Translation: we did a slight bit of pruning (read: cherry-picking).
> "The set of firms using payroll services changes over time as companies join or leave ADP’s platform. We maintain a consistent set of firms across our main sample period by keeping only companies that have employee earnings records for each month from January 2021 through July 2025."
f) Translation: More cherry-picking.
> "In addition, ADP observes job titles for about 70% of workers in its system. We exclude workers who do not have a recorded job title."
g) Translation: More cherry-picking.
> "After these restrictions we have records on between 3.5 and 5 million workers each month for our main analysis sample, though we consider robustness to alternative analyses such as allowing for firms to enter and leave the sample."
h) 3.5M to 5.0M feels like a large enough sample... if it weren't so "restricted." Furthermore, there's no explanation of the 1.5M delta, or of how adding or removing that many workers impacts the analysis.
i) And they considered that why? And did what they did why? It's a significant assumption that gets nothing more than a hand wave.
> "While the ADP data include millions of workers in each month, the distribution of firms using ADP services does not exactly match the distribution of firms across the broader US economy."
j) Translation: as mentioned above, ADP !== a representation of the broader economy.
> "Further details on differences in firm composition can be found in Cajner et al. (2018) and ADP Reserch (2025)."
k) Great, there's a citation, but given the acknowledged delta, isn't at least a line or two in order? Something about the nature of the delta, and THEN the citation?
l) Editorial: You might think this hand-wave is OK, but to me it's usually a tell - and a smell.
m) Finally, do understand the nature of academia and null research (which has been mentioned on HN). In short, there is a (career / financial) incentive to find something novel (read: worth publishing). You advance your career by doing non-null research.
Again, I'm not suggesting anything nefarious per se. But this study is getting A LOT of attention. All things considered, more than it objectively deserves.
__Again: I'm not suggesting AI is not having an impact. That would be foolish.__
It’s not hard to add a function that asks “is this conversation about suicide?” and, if so, escalates to an intervention pathway. You could even do it as an out-of-band batch process so it doesn’t increase latency.
OpenAI didn’t put in the simplest, smallest, easiest protection. (You could do it with a tiny LLM: batch up the conversations on a five-minute interval with a cron job.) I could implement it for less than the operations team spends on lunch today. And certainly for less than OpenAI will spend bringing their in-house counsel up to speed on the lawsuit.
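Concretely, it's on the order of this much code (a rough sketch; fetchRecentMessages, the /risk-screen endpoint, and escalateToIntervention are all hypothetical stand-ins, not anything OpenAI actually exposes):

    // Out-of-band batch screen, kicked off every few minutes by a cron job.
    // Every name below is a hypothetical stand-in; swap in whatever conversation
    // store, small classifier, and escalation path actually exist.

    type Message = { conversationId: string; text: string };

    // Hypothetical: pull messages written in the last `minutes` from the conversation store.
    async function fetchRecentMessages(minutes: number): Promise<Message[]> {
      console.log(`fetching messages from the last ${minutes} minutes`);
      return []; // e.g. a query against the conversation store
    }

    // Hypothetical: one yes/no question to a tiny, cheap classifier behind an internal endpoint.
    async function isAboutSuicide(text: string): Promise<boolean> {
      const res = await fetch("https://internal.example/risk-screen", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ text }),
      });
      const { flagged } = await res.json();
      return Boolean(flagged);
    }

    // Hypothetical: route the conversation to crisis resources / human review.
    async function escalateToIntervention(conversationId: string): Promise<void> {
      console.log(`escalating ${conversationId} to the intervention pathway`);
    }

    export async function runBatch(): Promise<void> {
      for (const msg of await fetchRecentMessages(5)) {
        if (await isAboutSuicide(msg.text)) {
          await escalateToIntervention(msg.conversationId);
        }
      }
    }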
Suicide is a crisis and it’s possible to intervene, but only if the confidant tries. In this case it was a machine with insufficient safety controls.
Fixing ChatGPT will 100% lower the suicide rate by exactly the number of people who confide in ChatGPT about suicidal thoughts and who receive a successful intervention. I can’t tell you what that number is ahead of time, but I assure you it’s nonzero.
> But for the parents to say, "We suspected NOTHING" just doesn't hold water.
If what you are saying here isn't malicious, it is at least ignorant. Parents often get very little clue that their child is going to kill themselves. Children can be hesitant to confide in their parents. Especially when someone is grooming them to kill themselves.
If they’re not providing essential links then it’s not journalism. They shouldn’t be given credit for a title they are not earning.
If your pet barks, do you still call it a cat? Of course not.
Links make it journalism. Not linking makes it reporting. They should not be considered synonymous.
The point is, people who should know better keep calling the likes of CNN journalism and those who don’t know any better keep believing they’re consuming content and forming understanding based on journalism.