
In some ways I'm starting to enjoy this. Do you remember the 419 scams? 'Hi, I'm a Nigerian prince named Michael Jordan. Give me $50 so I can buy some chemicals to clean a bunch of money I secretly stowed away and I'll send you $5000.' People actually used to fall for that. Of course some people probably still would (and a lot more certainly gets blocked by spam blockers), but I think overall society grew substantially less gullible over time.

But in general I think most people still remain excessively gullible and naive. Social media image crafting is one of the best examples of this. People create completely fake and idealized lives that naive individuals think are real. Now with AI enabling one to create compelling 'proof' of whatever lie you want, I think more people are becoming more suspicious of things that were, in fact, fake all along.

---

Going back to ancient times, many don't know that Socrates wrote literally nothing down. Basically everything we know of him is thanks to other people, his student Plato in particular, writing down what he said. The reason for this was not a lack of literacy - rather, he felt that writing was harmful because written words cannot defend themselves and can be spun into misrepresentations or falsehoods - basically, the argumentative fallacies that indeed make up most 'internet debates', for instance. Yet now few people are unaware of this issue, and quotes themselves are rarely taken at face value, unless they confirm one's biases. People became less naive as writing became ubiquitous, and I think this is probably a recurring theme in technologies that transform our ability to transfer information in some format or another.



> but I think overall society grew substantially less gullible over time.

Absolutely not. All that happened is most people became aware that "Nigerians offering you money are scammers." But they still fall for other get-rich-quick schemes so long as they diverge a little bit from that known pattern, and they'll confidently walk into the scam despite being warned, saying, "It's not a scam, dumbass. It's not like that Nigerian prince stuff." If anything, people seem to be becoming more confident that they're in on some secret knowledge and everyone else is being scammed.


I'm inclined to disagree, just because Photoshop has had a measurable effect on the population's skepticism of photos, which at one point were practically treated as the gold standard of evidence. It's still easy to find people who have fallen for photoshopped images, but it's also easy to find people expressing doubts and insisting they can "tell by the pixels". Sometimes even legitimate photos get accused of being photoshopped, which seems healthy.


The other side of this is that AI tools are being treated like magic, to the point that people are denying that well-documented events happened at all, such as the shooting of Charlie Kirk - conspiracies abound!

Also, bizarrely, a subsection of the population seems to be really into blatantly AI-generated images - just hop onto Facebook and see for yourself. I wonder if it has something to do with whatever monkey-brain thing makes people download apps with a thumbnail of a guy shouting or watch videos that have a thumbnail of a face with its mouth open, since AI-generated photos seem very centered around a single face making a strange expression.


People with a profound initial bias will, in general, believe anything that supports that bias and reject anything that challenges it, in both cases without any real consideration or thought whatsoever. So I don't think examples of individuals being "misled" to extremes by e.g. AI-generated images or video are entirely realistic. Rather, they were already at those extremes and will just eat up anything that appeals to those extremes.

To take a less politically charged example, imagine there is fake content 'proving' that the Moon landing is faked. Is that going to meaningfully sway people who don't have a major opinion one way or the other? Probably not, certainly not in meaningful numbers. And in general I think the truth does come out on most things. And when people find they have been misled, particularly if it was somebody they thought they could trust, it tends to result in a major rubber-banding in the opposite direction.


I call it the antibody effect. My favorite example is clickbait headlines like, "Five things you MUST do if you're doing this thing. You'd never guess #3!" It used to be everywhere and now it's nowhere.

AI is starting to show this effect - people stay away from em-dashes. There's that yellowish tinge and that composition that people now avoid in art. Some of this is bad, but we can probably live without it.


> "Five things you MUST do if you're doing this thing. You'd never guess #3!" It used to be everywhere and now it's nowhere.

Try opening YouTube in an incognito window sometime. Scrolling through a few, I see:

* Banned Amazon Products you NEED to See to Believe!

* This has NEVER Happened Before... (Severe Weather Channel)

* Our Dog got Married and had PUPPIES! THE MOVIE Emotional

* I WENT TO GHOST TOWN AND SOMETHING HEARTBREAKING...


Absolutely. I block at least three channels a week because YouTube keeps recommending clickbait titles and 'tuber face thumbnails.

Bonus points if said 'tuber is pointing at something with their hand, and also a red arrow and/or circle around something which is also blurred out.

Intolerable.


Instead of blocking channels, try going through your watch history and deleting anything you've watched in the past that's similar to those. YouTube heavily leans on your history for recommendations, so if it's recommending those, it's because you've watched related stuff.

My YouTube feed never recommends any of that garbage.


Okay, maybe YT is the exception, but some of those channels claim it's because the algorithm punishes them otherwise.


> I think more people are becoming more suspicious of things that were, in fact, fake all along.

Sadly, they also become suspicious of things that were, in fact, facts all along.

Video or photo evidence of a crime becomes useless the better AI gets.


> Video or photo evidence of a crime becomes useless the better AI gets.

This is probably a good thing, because Photoshop and CGI have existed for a very long time, and people shouldn't have the ability to frame an innocent person for a crime, or even get away with one, just because they pirate some software and put in a few hours watching tutorials on YouTube.

The sooner jurors understand that unverified video/photo evidence is worthless the better.


You needed special abilities to fake those things convincingly before. So it was rare. Now everybody can do it. It will become normal.

Additionally, trust in experts has also gone downhill, so 'verified' will mean nothing.


> People actually used to fall for that. Of course some people probably still would (and a lot more certainly gets blocked by spam blockers), but I think overall society grew substantially less gullible over time.

It is still a serious problem - I just want that to be abundantly clear. Several thousand people (in the US alone) fall for it every year. I used to browse 419 Eater regularly, and up until a few years ago (when I last really followed this issue) these scams were raking in billions a year. Could be more or less now, but I doubt it's shifted a ton.


You seem to assume people who are the victims of scams are people who are more naive. But that’s not how it works. Scams try to catch people at their weakest. It’s not if but when.


> Scams try to catch people at their weakest. It’s not if but when.

The "weakest" probably also involves selection bias. What HN comments are really good at is triggering associations for me with things I once read. Today I finally found what recently lived in my memory as a vague "scam" that used probabilities: the "stock market newsletter scam" from John Allen Paulos's book [1]. The scam works like this: at every step, two variants with different predictions are sent out for some market characteristic. Only those who receive the correct prediction get the next newsletter, which is again split into two prediction variants. This continues, filtering down to a final, much smaller subset of receivers who have seen a series of "correct" predictions. The goal is to create an illusion of super predictive power for that final group and then charge them a premium subscription price.

Maybe this kind of scam is too sophisticated or not as effective today (due to modern anti-spam measures), but I wonder what other kinds of "selection bias" scams exist now.

[1] https://en.wikipedia.org/wiki/Innumeracy_(book)


Let's think about this algorithmically.

If something "floods the zone with shit," it needs S amount of shit to cause a flood. But too much will eventually make the scam ineffectual. Widespread public distrust for the scam is (S+X)/time where X is the extra amount of shit beyond the minimum needed. Time is a global variable constrained by the rate at which people get burned or otherwise catch on to all other scams of the same variety. If we imagine that time-to-distrust shrinks with each new iteration of shit, then X the amount of excessive shit needed to trigger distrust should decrease over time.

The longer term problem is the externality where nothing is trusted, and the whole zone is destroyed. When that zone was "what someone wrote down that Socrates might have said," or "Protocols of the Elders of Zion," or "emails from unknown senders," that was one thing. A new baseline could be set for 'S'. When it's all writing, all art, all music and all commentary on those things, it seems catastrophic. The whole cave is flooded with shit.


> When it's all writing, all art, all music and all commentary on those things, it seems catastrophic. The whole cave is flooded with shit.

But writing in itself has been obviously untrustworthy since it started existing - something being written down doesn't in any way make it trustworthy. The fact that audio recording, photography, and video enjoyed this undeserved reputation of being inherently trustworthy was an accident of technology, and has come to an end.

Just like with writing, though, this doesn't signal a real problem of any kind. You should still only trust writing, audio, or video based on the source - as you always should have. All that's ending is the era of putting undue trust in audio/video from untrusted sources.

Of course, the big problems will be in the transition period, when most people still think they can trust these sources, or will think they can't trust actually trustable sources instead. But this will be temporary as things readjust.

And again, audio and video have been untrustworthy for a long time, for sensitive things. You should not have trusted video in itself even in the '40s and '50s, and audio and photos were already somewhat easily manipulated probably even in the 1910s. And this is even true in a legal context - audio or video evidence is not evidence in itself; it is only part of the testimony of a witness who can attest to its provenance and veracity.


Quantity and velocity of misinformation are both critical variables here.

Any one person's writing was always untrustworthy, but the majority of that bad writing didn't make it to a printing press, nor was it mass-distributed.

Let's accept the proposition that all forms of media have always been full of lies. We can say that debunking always follows lies, and truth spreads more slowly than fiction. The quantity and velocity of additional misinformation - especially when machines are involved in writing infinite amounts of it in the blink of an eye - lays waste to the normal series of events where a lie can be followed by a debunking at linear speed. With LLMs and social media manipulation, falsehoods gain traction exponentially while truths remain linear.

There is likely not a "transition period" where people will adjust to this, precisely because there is no mechanism to inform them they're being swindled and screwed faster than the takeoff of the algorithms that are now screwing them.


The total amount of generated untrustworthy content is irrelevant. People must learn to only read content from trusted sources, and then it won't matter how much misinformation is being published in other places.

It was never difficult to publish large amounts of misinformation, AI is only making it cheaper.


If the signal/noise ratio dramatically decreases then finding trustworthy sources in the first place becomes more difficult.


>>The total amount of generated untrustworthy content is irrelevant

Of course it is relevant. Discerning which sources to trust takes valuable time. Sources which were once trusted may need to be reevaluated.

>>It was never difficult to publish large amounts of misinformation, AI is only making it cheaper.

What is the difference between difficulty and expense?


The old quote: 'You can fool some of the people all of the time, and all of the people some of the time.'

There is the hope that dumping so much slop so rapidly will break the bottom of the bucket. But there is the alternative that the bottom of the bucket will never break; it will just get bigger.


It's "you can fool one thousand people once, but you can't fool once one thousand people". No, I think I got it wrong.


I personally know people who look down upon people who use LLMs to write code. There is a lot of hate in some of the senior developers that I talk to. I don't know if this growing tendency to be suspicious of AI usage is good or bad. For example, towards the final semester of my bachelor's degree, my algorithms class started reporting students for academic misconduct because the TAs started assuming that all the optimal solutions to assignment problems were written by LLMs. In fact, several classmates started purposely writing sub-optimal solutions so that the TAs would at least grade them without any prejudice.

I worry that because LLM slop also tends to be so well presented, it might compel software developers to start writing shabby code and documentation on purpose to make it appear human.


At the moment it is the other way around. LLMs rarely write good code if not instructed by someone who knows what they are doing. And even then the code is rarely good.



