The user who posted this is perfectly legit, and not the person who created the site. As their follow-up comment makes clear -- they weren't doing this to promote the site, but to warn others of its existence:
Pretty sure this has been happening for a long time now. Before AI/automation, they would just hire people in developing countries to manually promote products on the platform. You can take some steps to mitigate this: for example, clicking on the username of people recommending products and seeing if they promote the products more than once, or have a history of being active on other unrelated subs. They sometimes manipulate that too (by purchasing old, genuine user accounts), but there's not much else you can do at this point. Even the old word-of-mouth method is more unreliable these days because people just echo what they saw on social media.
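The manual vetting step described above (scanning a recommender's history for repeated plugs) can be roughly automated. A minimal sketch, assuming Reddit's public `/user/<name>/comments.json` listing endpoint; the function names and threshold are mine, not from any real tool:

```python
import json
import urllib.request

def promo_ratio(comment_bodies, product):
    """Fraction of a user's comments that mention the product by name."""
    if not comment_bodies:
        return 0.0
    hits = sum(1 for body in comment_bodies if product.lower() in body.lower())
    return hits / len(comment_bodies)

def fetch_recent_comments(username, limit=100):
    """Pull a user's recent comment bodies from Reddit's public JSON feed."""
    url = f"https://www.reddit.com/user/{username}/comments.json?limit={limit}"
    req = urllib.request.Request(url, headers={"User-Agent": "shill-check/0.1"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [child["data"]["body"] for child in data["data"]["children"]]

# Example (requires network; anything well above a few percent for a
# single product across an account's history is suspicious):
#   bodies = fetch_recent_comments("some_recommender")
#   print(f"{promo_ratio(bodies, 'ReplyGuy'):.0%}")
```

Nothing here defeats purchased aged accounts, of course; it just automates the eyeball check.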
> that will directly make my own, personal life worse.
Have you considered doing things that will make your life better? I use ReplyGuy to save myself like 30-60 hours monthly on each of my projects and it has made my life better.
At the expense of making everyone else's life slightly worse through bogus product recommendations and slowly degrading the quality of the medium you advertise in? Guys, have you considered taking from the commons? There's all this free stuff there, and it's benefited me personally.
I actually had to ask myself whether this was written by a real person or by a bot like ReplyGuy. I've never felt the need to ask that before. I hate this.
I thought I was so obviously over-the-top mimicking the bot from the subject link that it would be obvious satire (and thus point out how easily something could be done on HN)… But I’m not so sure…
Heh, I was so paranoid about the bot thing that I didn't even consider it could have been sarcasm. I even checked your last couple posts on HN after reading your comment just to see if your account had a history and if it was all shilling for products.
I'm convinced you're a real person. Sorry for not catching on to the sarcasm.
You sound like a mechanic who dumps his used motor oil down a storm drain to avoid the expense of proper disposal. Yeah, it's good for you, but it pollutes the environment for everyone.
You are polluting the digital commons. You should be ashamed of yourself.
I think it is becoming quite clear that Reddit is the most astroturfed place on the internet -- gaming Reddit for positive feedback and reviews is a lot easier and less painful than trying to game Google's results directly -- and there's an entire industry building up around it. Something to think about if you add "site:reddit" to product searches.
Reddit has always been heavily astroturfed (cough, Eglin Air Force Base), but since the mod blowup last year it almost feels like the bots and astroturfers are outnumbering the real users. The site has become unusable.
I think it's also pretty much impossible to avoid this.
As soon as something has enough trustworthiness and user base for product recommendations, it has value to advertisers, and they will "mine" out that value until it's exhausted.
As hard as it would be to enforce, I wonder whether this runs afoul of the FTC rules regarding advertising/disclosure. It does seem to me like this qualifies as a "robo-influencer" and thus should adhere to the guidelines the FTC put out [0].
The link to the "Stealth Marketing" site in the site footer looks like they don't care about adhering to any guidelines...which I guess should make the FTC's case clear-cut!
- Painkiller Ideas and Replyguy both use Azure DNS and Microsoft hosting which means the footer probably contains the creators' other projects
- while stealth.marketing is hosted very differently, it uses partially the same and partially very similar CSS
- stealth.marketing says it's "powered by reddit" with the Reddit logo, which I'm sure Reddit's PR department will absolutely hate when they find out, given that it ruins the platform (and probably doesn't even use the API)
Looks like there's an easy way to make this all collapse
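For anyone who wants to reproduce the first bullet point: the nameserver comparison comes straight out of `dig NS <domain> +short`. A small sketch of the overlap check; the domain names and record values below are illustrative placeholders I filled in, not actual lookups:

```python
def shared_nameservers(ns_a, ns_b):
    """Nameservers two domains have in common (case-insensitive)."""
    return {ns.lower() for ns in ns_a} & {ns.lower() for ns in ns_b}

# Placeholder records; in practice, populate these with the output of e.g.
#   dig NS painkillerideas.com +short
#   dig NS replyguy.com +short
painkiller_ns = ["ns1-01.azure-dns.com.", "ns2-01.azure-dns.net."]
replyguy_ns = ["ns1-01.azure-dns.com.", "ns2-01.azure-dns.net."]

overlap = shared_nameservers(painkiller_ns, replyguy_ns)
print(f"shared nameservers: {sorted(overlap)}")
```

A non-empty overlap on a managed DNS provider like Azure DNS isn't proof of common ownership on its own, but combined with the shared CSS it's a strong hint.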
Yea, this is gross. Even the examples they provide are gross. Someone is struggling with debt, and some robot gives them the impression that there's a person who cares and that, by the way, they should spend their money on this service. Ewww.
They are gonna get sued if they try this on LinkedIn or Tiktok for sure. Circumventing those captchas (everyone who's tried scraping Tiktok before knows what I'm talking about, even in headed environments) is almost certainly illegal.
Oh, absolutely. Never heard of the LinkedIn scraping lawsuit before? They are totally willing to throw their lawyers at you for using the platform in a way it's not intended to be used.
More importantly, I see this as a band-aid for underperforming products. The startup space struggles with this already, but instead of focusing on the product's deficiencies and building something people want, we now just enable some SaaS that pretends to be a human, spewing marketing masquerading as a legitimate person's perspective. Talk about a bottom-of-the-barrel approach. The Internet is going to need to go insular and return to niche forums that are kept under lock and key, away from this toxicity.
All this is going to do is destroy one of the last remaining ways of getting an unbiased review: word of mouth. It won't make people buy more {PRODUCT}, it will just make people suspicious of every recommendation they read.
Maybe companies should focus on actually producing products that are good enough that they don't need to be covertly advertised to via CIA level psyops.
Slow though the FTC is, I imagine a bare-minimum requirement for products like this will be disclosure that the replies were written by an AI bot.
Which will (a) give consumers proper disclosure and (b) give platforms the opportunity to filter or address them as they like.
Prior enforcement efforts like the CAN-SPAM Act didn't eliminate bad actors, but they massively curtailed legitimate business participation in these areas.
Many countries regulate fraudulent advertising, and the person _paying_ for it would generally be liable (along with the person providing it, usually).
If you've been on Reddit for a long time, IMHO, it was obvious that the front page was effectively taken over at least a decade ago by shilling and rigging (not counting recreational karma gaming that was going on forever before that).
And surely in recent years people have been doing things like this ReplyGuy, especially with LLMs and with lesser hacks before that, and more secretly.
What ReplyGuy gives us, with their apt roofie predator logo, is a face we can put to one of them.
You're a horrible person. Your mother should be crying. You're the vanguard of a trend that will damage the mental health of millions or billions, especially if taken at all farther.
Their "People love our replies" section shows a bunch of RandomWord-1234 accounts giving positive feedback. I wonder if those are included with the service.
Back in the day you would read something on the internet and wonder whether it was a cat or a human typing it. Now people feel offended that it might be an AI typing it, directed by a cat or a human.
Profoundly obnoxious of course. It'll take some time, but hopefully a mix of manual flagging and automated countermeasures will be sufficient to quash this onslaught.
Meanwhile, as to the visionary genius behind this thing:
Experimental urbanist Scott Fitsimones shares how these mission-driven, blockchain-governed, collectively owned organizations could increase the speed and efficiency of building cities (among many other applications) -- all while pooling decision-making power in a radically collaborative way. Hear about how he started a "crypto co-op" that bought 40 acres of land in Wyoming and learn more about the potential for DAOs to get things done in the future.
Pretty much. Trust has always been a problem, especially on the internet, but people like this guy are going to simply ruin the public, free internet and destroy the very idea of crowdsourcing.
We are going to have to resurrect the elite class of editors just so we can find any information that is not totally corrupted.
This idea is interesting. It’s nothing new; there are already people hired to plug their tools or respond to complaints, even here on HN. This could make a “community manager” a lot more affordable. For business that is nice.
As a user I fear the future, but I don’t see it as much different from a human doing the same thing. It will just get a lot more common.
On one hand, I agree that this should be flagged and taken off the homepage (as it is right now). On the other hand, more people need to be aware of such scams and simply stop trusting Reddit, HN, etc. as good sources of information.
Pricing is interesting: there's a free tier that provides 2 replies/month, and the next tier is $10/month for 20 replies. This is probably still more cost-effective than replies written by humans, though I understand LLM API access can get pretty expensive at the moment. What models are they using?
Other than that it seems pretty cool, and ReplyGuy is a perfect name lol.
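The per-reply arithmetic behind the cost comparison above works out as follows. The tier numbers come from the comment; the human baseline is purely an illustrative assumption of mine:

```python
# Tiers as quoted above: free = 2 replies/month, paid = $10 for 20 replies/month.
paid_price_usd = 10
paid_replies = 20
cost_per_reply = paid_price_usd / paid_replies
print(f"${cost_per_reply:.2f} per reply")  # $0.50 per reply

# Hypothetical human baseline (an assumption, not from the thread):
# a contractor at $15/hr writing 6 replies/hr.
human_cost_per_reply = 15 / 6
print(f"${human_cost_per_reply:.2f} per human reply")  # $2.50 per human reply
```

Even under generous assumptions for the human, the bot undercuts manual shilling by roughly 5x, which is presumably the whole pitch.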
It's weird to try to imagine why he thought showing it here would get anything other than hate. It's like saying he found a way to smear poop on everyone's doorknobs.
If you believe the purpose of an economy is to meet peoples' needs, then it helps connect users that are openly looking for a tool to help them with projects that will help them with their need/problem/request, which is a win. Alternatively, if it really does make the world a worse place, then it incentivises governments to prevent their citizens wasting time looking at marketing online so they can get their act together and ban online advertising already, which is... a win. Hey, I'm just accepting your challenge.
You've created a disgusting thing that makes everyone's lives worse. And you'll justify it to yourself with the old "if I didn't do it someone else would have". Despicable.
There's already a few 'this is a bad thing to exist on the Internet' comments, so I'll say something else.
The four example comments feel a bit off, because while they are tailored to the informational context, they don't fit into the conversational context. They all start off with an introductory sentence that restates contextual information that's already present in the discussion thread.
> To launch the chrome browser when you click the text, ...
If you're reading the comments, you already know what question this is an answer to - wasting people's time with this high school essay nonsense is going to trip bullshit-o-meters.
> Ireland, like any other country, has its fair share of crime ...
No one starts a Reddit comment with this sort of equivocation unless they're trying to head off an anticipated downvote storm. Generic travel safety advice certainly doesn't need this.
> It's understandable that you're feeling anxious and overwhelmed with the debt and job loss.
This one does a lot of waffling, which people don't really do on Reddit unless they're talking about something they're passionate about. Just give the useful advice and move on.
Overall, they have a lot of the padding-the-word-count style that characterises LLMs trained on blogspam. Redditors tend to be both more direct and more personal in their writing styles. The LLM isn't so bad that it'll be immediately identified as AI, but it will come off as subtly inauthentic.
I was upset about it at the time, but last year's round of hostility toward users, which killed third-party clients and caused me to delete my Reddit/Twitter accounts, was a blessing.
Reddit is full of bots: thread reposted comment by comment, 10 months later - https://news.ycombinator.com/item?id=40211010 - April 2024 (272 comments)