It always seemed like Leta was on thin ice, since it queried Google's Search API and then cached the results for 30 days, which I believe is against Google's TOS. I wonder if they finally noticed and got mad.
Just a warning that some of the search header images are NSFW. After seeing that they were randomized, I started going through them and got a dildo for one.
EDIT: They've got them listed under an open index on https://4get.ca/banner/ if you want to see them all.
4chan continues to be the best actually "free"/borderline unmoderated forum to date.
It just comes with the problems of allowing its users to say whatever they want - anonymously... Hence you usually get everything there, from the most insightful to the most offensive.
You might prefer a highly moderated platform such as HN, but 4chan has its own strengths if looked at from a neutral position.
It's not entirely unmoderated. Some rules are enforced fairly strictly - for example, NSFW images on SFW boards get reported and erased within minutes. Blatant spammers, shills, and schizos get dealt with too. Only residential IPs can post, which reduces the volume of shit quite a bit. A dedicated schizo can shit up a thread, and a coordinated raid can shit up a whole board, but given the ephemeral nature of 4chan, it's like pissing in an ocean of piss.
Rather, it is politically unmoderated, which is, of course, anathema to the pearl-clutchers.
Nor does one have to use a browser in order to do "metasearch". For example, I have a script that queries over 60 search engines directly from the command line and returns SQL. I can combine results from many sources into custom SERPs made of simple HTML for a text-only browser; no JavaScript, CSS, etc.
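Roughly, the shape is something like this (a toy Python sketch, not the actual script; the engine endpoints below are placeholders, and every real engine needs its own query URL and result parser):

    # Toy metasearch sketch: query several engines, merge into a plain-HTML SERP.
    import html, urllib.parse, urllib.request

    ENGINES = {
        "engine_a": "https://engine-a.example/search?q={}",  # placeholder endpoints
        "engine_b": "https://engine-b.example/search?q={}",
    }

    def fetch(url, timeout=10):
        req = urllib.request.Request(url, headers={"User-Agent": "metasearch/0.1"})
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.read().decode("utf-8", "replace")

    def metasearch(query):
        q = urllib.parse.quote_plus(query)
        hits = []
        for name, tmpl in ENGINES.items():
            try:
                page = fetch(tmpl.format(q))
            except OSError:
                continue  # engine down or blocking us; skip it
            hits.append((name, page[:120]))  # a real script parses titles/URLs here
        return hits

    def to_serp(hits):
        # Plain HTML for a text-only browser: no JavaScript, no CSS.
        rows = "\n".join("<li>[{}] {}</li>".format(n, html.escape(t)) for n, t in hits)
        return "<html><body><ul>\n" + rows + "\n</ul></body></html>"

    print(to_serp(metasearch("example query")))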
Unfortunately, pooled search proxies have a finite lifetime, like scroogle.org.
A bit tangential, but has anyone noticed a serious degradation in quality with DuckDuckGo? It's become completely unusable, and I've had to switch to Bing :(
My guess is that search's days are numbered and companies are "pivoting" to other projects.
I stopped using DuckDuckGo and switched to Kagi shortly after the tank man fiasco[1]. Never been happier. If you want to support the continued existence of search, then pay for it.
We never blocked this image and we would have no incentive to either since we’ve been banned in China since 2014. Here’s my statement from back then: https://news.ycombinator.com/item?id=27528324
Kagi uses the Russian search engine Yandex (EDIT: among several other sources) to produce search results, which means they pay them, which means indirectly funding Russia's invasion of Ukraine.
There are more or less valid arguments for not excluding Yandex[1], but as a European, I want to avoid any of my money going to Russia if possible. And there is no setting to exclude Yandex from your Kagi search results.
If you stopped using duckduckgo because of the tankman fiasco, maybe you should reconsider if Kagi is right for you.
I'm honestly surprised they're legally allowed to do that. Isn't Yandex under sanctions and wouldn't paying them money as a US company fall under funding a sanctioned company?
By definition it also means your search queries are sent to Yandex, which may be a problem if you are pasting sensitive data there and belong to an at-risk group.
Not by definition; from the Vlad response linked above: "we do not call all sources for all queries, as we balance cost efficiency with result quality - a delicate optimization". But I could understand how one may want to eliminate any possibility!
Second this. I finally gave up and subscribed to Kagi half a year ago, and man, I should have done it much sooner. The search results are just genuinely good.
This is not like trying to use Bing, where half the time you have to redo the same search on Google because of how poor Bing's results are. It feels the way Google felt fifteen years ago: useful results without all the "sponsored links" garbage around them.
I recently bought a DDG subscription because of their duck.ai service.
For $10/month it’s great to have someone whose incentives are aligned with my own managing my relationships with AI companies I’d otherwise have to monitor constantly for privacy abuses.
I haven’t noticed a recent degradation in DDG’s search results, but I’m also turning to duck.ai more frequently and on the whole my search/investigation experience is better.
The one significant downside is that duck.ai limits the length of your chats, but considering the price that’s not surprising.
The direction I’d like to see the industry go is better integration of search results into AI chat, blurring the distinction between the two. That would make both products more compelling: search results are made more friendly with AI summaries, and original sources help to counter AI hallucinations and obsequious blather.
AI is killing websites[0]. Why visit a website if the AI summary is good? But soon, if everyone is only using AI results, then there will be no reason to create new websites, unless you don't care about anyone visiting your site except for AI crawlers.
[0]: I won't bother linking any articles since there are too many articles on the subject and whatever I link is probably not the site you want (or is maybe paywalled).
There are many serious ethical and practical problems posed by the rise of LLMs, and I agree that this is one.
My hope is that AI helps to fine-tune inquiries and helps users discover websites that would otherwise not be uncovered by traditional index-based search.
Unfortunately it’s in the interests of search and AI companies to keep you inside their portals, so they may be less than willing to link to the outside even when it would improve the experience.
Hard agree. I was recently at a talk by Jaron Lanier[0], who proposed that AI should, after every query, present on the right side of the page a list of all clickable sources where the AI gathered its data from, so that we could verify accuracy, as well as allowing us to continue giving traffic to websites.
> AI should, after every query, present on the right side of the page a list of all clickable sources
The default internet device these days is the phone; many people don't even use a desktop anymore. Space limitations on small screens mean that this is unlikely to be shown by default. Moreover, phone interfaces discourage most users from opening multiple new tabs off any webpage. You might show desktop users this and get some uptake, but that's not enough to save the open web.
> Unfortunately it’s in the interests of search and AI companies to keep you inside their portals, so they may be less than willing to link to the outside even when it would improve the experience.
This is true, but aren't "AI" summaries directly opposed to this interest? The user will usually get the answer they need much more quickly than if they had to scroll down the page, hunt for the right result, and get exposed to ads. So "AI" summaries are actually the better user experience.
In time I'm sure that we'll see ads embedded in these as well, but in the current stage of the "AI" hype cycle, users actually benefit from this feature.
Yes, users can rely on "AI" summaries if they want a quick answer, but they've been able to do that for years via page snippets underneath each result, which usually highlight the relevant part of the page. The same argument was made when search engines began showing page snippets, yet we found a balance, and websites are still alive.
On the contrary, there's an argument to be made that search engines providing answers is the better user experience. I don't want to be forced to visit a website, which will likely have filler, popups, and be SEO'd to hell, when I can get the information I want in a fraction of the time and effort, within a consistent interface. If I do need additional information, then I can go to the source.
I do agree with the idea you mention below of search engines providing source links, but even without it, "AI" summaries can hardly be blamed for hurting website traffic. Websites are doing that on their own with user hostile design, SEO spam, scams, etc.
There is a long list of issues we can criticize search engines for, and the use of "AI" even more so, but machine-generated summaries on SERPs are not one of them, IMO.
I guess you didn't take up my offer to search for how AI is killing traffic. There are numerous studies that repeatedly prove this to be true; this relatively recent article links to a big pile of them[0]. Why would anyone visit a website if the AI summary is seemingly good enough?
My issue with AI summaries is that they are not even remotely accurate, trustworthy or deterministic. Someone else posted this wonderful evidence[1] in the comments. LLMs are sycophantic and agree with you all the time, even if it means making shit up. Maybe things will improve, but for the last two years, I have not seen much progress regarding hallucinations or deterministic, i.e. reliable/trustworthy, responses. They are still stochastic token guessers with some magic tricks sprinkled on top to make results slightly better than last month's LLMs.
And what happens when people stop creating new websites because they aren't getting any visitors (and by extension ad-revenue)? New info will stop being disseminated. What will AI summarize, if there is no new data to summarize? I guess they can just keep rehashing the new AI-generated websites, and it will be one big pile of endlessly recycled AI shit :)
P.S. I don't disagree with you regarding SEO spam, hostile design, cookie popups, etc. There is even a hilariously sad website[2] which points out how annoying websites have become. But using non-deterministic, sycophantic AI to "summarize" websites is not the answer, at least not in its current form.
> My issue with AI summaries is that they are not even remotely accurate, trustworthy or deterministic.
Who cares if it's deterministic? Google changes their algorithms all the time; you don't know what its devs will come up with next, when they release it, when they deploy it, when the previous cache gets cleared. It doesn't matter.
Haha, I suppose the problem is that LLM outputs are unreliable yet presented as authoritative (disclaimers do little to counteract the boffo confidence with which LLMs bullshit) — not that they are unreliable in unpredictable ways.
I'm well aware of the studies that "prove" that "AI" summaries are "killing" traffic to websites. I suppose you didn't consider my point that the same was said about snippets on SERPs before "AI"[1].
> My issue with AI summaries is that they are not even remotely accurate, trustworthy or deterministic.
I am firmly on the "AI" skeptic side of this discussion. And yet, if there's anything this technology is actually useful for, it's summarizing content and extracting key points from it. Search engines contain massive amounts of data. Training a statistical model on it that can provide instant results to arbitrary queries is a far more efficient way of making that data useful to users than showing them a sorted list of results which may or may not be useful.
Yes, it might not be 100% accurate, but based on my own experience, it is reliable for the vast majority of use cases. Certainly beats hunting for what I need in an arbitrarily ordered list and visiting hostile web sites.
> LLMs are sycophantic and agree with you all the time, even if it means making shit up.
Those are issues that plague conversational UIs and long context windows. "AI" summaries answer a single query, and the context is volatile.
> And what happens when people stop creating new websites because they aren't getting any visitors (and by extension ad-revenue)? New info will stop being disseminated.
That's baseless fearmongering and speculation. Websites might be impacted by this feature, but they will cope, and we'll find ways to avoid the doomsday scenario you're envisioning.
Some search engines like Kagi already provide references under their "AI" summaries. If Google is pressured to do so, they will likely do the same as well.
So the web will survive this specific feature. Website authors should be more preoccupied with providing better content than with search engines stealing their traffic. I do think that "AI" is a net negative for the world in general, but that's a separate discussion.
Sorry, I didn't mean to discount your argument. I just don't think SERPs are a valid comparison; SERPs vs. AI is for me apples vs. oranges, or rather rocks vs. turtles :)
BTW, your linked article/study doesn't support your argument - SERPs are definitely stealing clicks (just not nearly as many as AI):
> In other words, it looks like the featured snippet is stealing clicks from the #1 ranking result.
I should maybe clarify: I have been using LLMs since the day they arrived on the scene, and I have a love/hate relationship with them. I do use summaries sometimes, but I generally still prefer to at least skim TFA unless it's something where I don't care about perfect accuracy. BTW, did you click on that imgur link? It's pretty damning - the AI summary you get depends entirely on how you phrase your query!
> Yes, it might not be 100% accurate, but based on my own experience, it is reliable for the vast majority of use cases. Certainly beats hunting for what I need in an arbitrarily ordered list and visiting hostile web sites.
What does "vast majority" mean? 9 out of 10? Did/do you double-check the accuracy regularly? Or did you stop verifying after reaching the consensus that X/Y were accurate enough? I can imagine as a tech-savvy individual, that you still verify from time to time and remain skeptical but think of 99% of the users who don't care/won't bother - who just assume AI summaries are fact. That's where the crux of my issue lies: they are selling AI output as fact, when in fact, it's query-dependent, which is just insane. This will (or surely has) cost plenty of people dearly. Sure, reading a summary of the daily news is probably not gonna hurt anyone, but I can imagine people have/will get into trouble believing a summary for some queries e.g. renter rights - which I did recently (combination summaries + paid LLMs), and almost believed it until I double-checked with a friend who works in this area who then pointed out a few minor but critical mistakes, which then saved my ass from signing some bad paperwork. I'm pretty sure AI summaries are still just inaccurate, non-deterministic LLMs with some special sauce to make them slightly less sketchy.
> Those are issues that plague conversational UIs and long context windows. "AI" summaries answer a single query, and the context is volatile.
Just open that imgur link. Or try it for yourself. Or maybe you are just good at prompting/querying and get better results.
> So the web will survive this specific feature. Website authors should be more preoccupied with providing better content than with search engines stealing their traffic.
I agree the web will survive in some form or other, but as my Register link shows (with MANY linked studies), it already IS killing web traffic to a great degree because 99% of users believe the summaries. I really hope you are right, and the web is able to weather this onslaught.
Just to add fuel to the fire... AI output is non-deterministic even with the same prompt, so users searching for the same thing may get different results. The output is not just query-dependent.
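A toy illustration of the mechanism (assuming plain temperature sampling; real products layer more on top, and the numbers here are made up):

    # With temperature > 0, the same prompt can sample a different next token
    # on every run. Logits below are invented for illustration.
    import numpy as np

    rng = np.random.default_rng()
    logits = np.array([2.0, 1.5, 0.5])      # hypothetical next-token scores
    temperature = 0.8
    probs = np.exp(logits / temperature)
    probs /= probs.sum()

    tokens = ["up", "down", "flat"]
    for _ in range(3):
        print(rng.choice(tokens, p=probs))  # likely differs across runs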
> What does "vast majority" mean? 9 out of 10? Did/do you double-check the accuracy regularly? Or did you stop verifying after reaching the consensus that X/Y were accurate enough?
I don't verify the accuracy regularly, no. And I do concede that I may be misled by the results.
But then again, this was also possible before "AI". You can find arguments on the web supporting literally any viewpoint you can imagine. The responsibility of discerning fact from fiction remains with the user, as it always has.
> Just open that imgur link. Or try it for yourself. Or maybe you are just good at prompting/querying and get better results.
I'm not any better at it than any proficient search engine user.
The issue I see with that Imgur link is that those are not search queries. They are presented as claims, and the "AI" will pull from sources that back up those claims. You would see the same claims made by websites listed in the results. In fact, I see that there's a link next to each paragraph which likely leads to the source website. (The source website might also be "AI" slop, but that's a separate matter...) So Google is already doing what you mentioned as a good idea above.
All the "AI" is doing there is summarizing content you would find without it as well. That's not proof of hallucinations, sycophancy, or anything else you mentioned. What it does is simplify the user experience, like I said. These tools still suffer from these and other issues, but this particular use case is not proof of it.
So instead of phrasing a query as a claim ("NFL viewership is up"), I would phrase it using keywords ("NFL viewership statistics 2025"). Then I would see the summarized statistics presented by "AI", drill down and go to the source, and make up my mind on which source to trust. What I wouldn't do is blindly trust results from my biased claim, whether they're presented by "AI" or any website.
> it already IS killing web traffic to a great degree because 99% of users believe the summaries. I really hope you are right, and the web is able to weather this onslaught.
I don't disagree that this feature can impact website traffic. But I'm saying that "killing" is hyperbole. The web is already a cesspool of disinformation, spam, and scams. "AI" will make this even worse by enabling website authors to generate even more of it. But I'm not concerned at all about a feature that right now makes extracting data from the web a little bit more usable and safer. I'm sure that this feature will eventually also be enshittified by ads, but right now, I'd say users gain more from it than what they lose.
E.g. if my grandma can get the information she needs from Google instead of visiting a site that will infect her computer with spyware and expose her to scams, then that's a good thing, even if that information is generated by a tool that can be wrong. I can explain this to her, but can't easily protect her from disinformation, nor from any other active threat on the modern web.
A summary is supposed to give you a taste of what the link destination talks about. If most of a page's information can fit into one paragraph of summary, the problem is with the webpage, and visiting it would have been a waste of the user's time.
Let me remind you of recipe websites as an example of how summaries can be better, by ignoring all of the useless crap that has nothing to do with making the dish.
I find that if I describe an esoteric bug to a high powered LLM, I often get to my answer more quickly than if I trawl through endless search results. The synthesis itself is a valuable addition.
Frequently I cannot even find source documents which match my exact circumstances; I’m uncertain whether they actually exist.
Agreed. The other half is that most websites now are just AI-generated slop that makes you wonder why you even bothered to look at the actual website instead of the LLM.
I would recommend trying Ecosia; their search has become really good. Better than DuckDuckGo, Bing, and Google, to be honest. They use a mix of Bing, Google, and a few other things, and most recently their own index, which they collaborate on with Qwant (only for German and French at this point).
Originally I switched due to their environmental focus and the way they run the company, but the quality of the results keeps me there. They have their own ! queries, like DuckDuckGo: !maps for Google Maps and !w for Wikipedia.
While I like the general idea of Ecosia (in that it's a less harmful ad-funded service) they do share user IP addresses with their search partners (Google and Microsoft).
> We, and our search partners, collect your IP address in order to protect our service against spammers trying to conduct fraud or to up-rank specific search results.
This shouldn't necessarily stop anyone; I think it should just be mentioned when it is suggested as an alternative to DuckDuckGo. You probably wouldn't switch from a search engine that proxies all favicons to avoid tracking to one that sells your identity to Google and Microsoft for tree-money.
That's weird to hear. I've been using DDG daily for years, and it's gotten progressively better, though lately every search engine's top results are often AI-generated trash. To combat this, DDG recently added an option on every link, in the upper-right corner, to "block this site from all results", which is something I've been waiting for since SEO-optimized trash became a thing.
DuckDuckGo has always been bad or just adequate for some specific purposes. Though it’s been my default search engine for a long time, I do use the “!g” bang command on the search query to switch to Google when I find that DDG’s results aren’t relevant or adequate.
In the last year or so, I look at the summaries from "Search Assist" and then dive into a chat with the (limited?) LLM models that it provides. It's my go-to for LLM usage. It's rarely, and only for more complex needs, that I go to ChatGPT.
The truth is that no one has as much data as Google and no one can build as good a search engine as they can. Google results "suck" on purpose: they want you to google something multiple times so they can serve you more ads. But they are totally capable of building a good search engine.
Kagi is proof of this. Kagi's results come almost entirely from the Google Search API. It shows that Google is completely capable of building a better search engine if they wanted to.
I'm expecting a future where we don't have "pages" on the internet anymore; it's just the backbone for generative AI content, and if you want to promote your brand, you need to pay the AI providers to put your content in responses.
Eventually, the entire notion of "searching the web" will seem as archaic as the rotary phone.
I've found Brave Search to be quite good. You can disable the AI summaries if you prefer, though I generally find them very helpful. It's, of course, private, similar to DDG. They use their own crawler as well as pulling results from Google and other sources.
Hey, thanks for the kind words. I just wanted to clarify that Brave Search is 100% independent and doesn't source results from any third party (see here for more details: https://brave.com/blog/search-independence/).
The degradation is on all search engines. Nothing is as good as it once was. Even the pay-for-it search engines are catching junk and floating it to the first page.
All our different search metrics have been up over time, so would love to know more about specifics if you'd care to reach out to me (email in profile) and we can look into it more deeply.
How do metrics tell you that a search query produces the result the user really wanted rather than fooling the user or forcing them to settle or even give up? A lot of my queries involve clicking a link and then realizing the site is garbage and just closing the tab.
So I submit a query, click a link, then I give up and close the tab. How do you tell that apart from me finding exactly what I wanted and staying there? Whether it's my first query or the 10th (and final) query in a chain of re-queries, you still have no way of knowing when I've found what I wanted or when I've given up.
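To make that concrete, here are two made-up sessions that produce identical logs:

    # Hypothetical event logs: "found it immediately" vs. "gave up immediately".
    satisfied = ["query: renter rights", "click: result #1"]  # answer found, done
    gave_up   = ["query: renter rights", "click: result #1"]  # garbage site, tab closed
    print(satisfied == gave_up)  # True: from the logs alone, indistinguishable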
I have DDG set as the primary in some places and Google as the primary on other devices, so I’ve used both in parallel for years.
To be honest, DDG has always been far behind Google. It's fine when I know my search result is going to be in the top 10 of any engine I use, but the moment I need to search for anything non-obvious, I don't even bother with DDG anymore.
DDG does seem marginally worse today than it was maybe five years ago. It falls off rapidly past the first few results. Now it even seems like it starts mixing in generic results from some popular adjacent keyword and hopes we don't notice that it stopped trying by page 2.
I've started to wonder/worry that maybe it's not the search engines (excluding Google; I won't apologize for them). What if there's just nothing to search for? What if there is little on the internet besides trash and a few big portals? Much of what you might be searching for, whether you know it or not, will be a Reddit post, or Facebook, or Stack Overflow. And some of those places don't even allow proper indexing by crawlers. Worse than that nightmare fuel is the idea that 2025 just isn't the same internet we grew up with, where everyone was racing to shovel as much real content onto it as they could... Today it's a bunch of grifters hoping to become influencers or YouTube personalities, or skeevy scammers AI-generating slop, but not much else.
And so, even if Google were the same thing it was back in 2010, there's no longer anything for "search" to find. And I hope you all downvote me to -50 and scream at me with some snarky-assed abuse detailing how and why I am wrong. Because I don't want to be correct about this.
Unfortunately, I am also worried that is the case.
There was an era with a lot of completely free sites, because they were mostly academic or passion projects, both of which were subsidized by other means.
Then there were ads. Banner ads, Google's less obtrusive text ads, etc. There were a number of sites completely supported by ads, including a lot of blogs.
And forums. Google+ managed to kill a lot of niche communities by offering them a much easier way to create a community, and then being killed off itself.
Now forums have been replaced by Discord and Reddit. Deep project sites still exist but are rarer. Social media has consolidated. Most people don't have personal home pages. There's a bunch of stuff that's paywalled behind Patreon.
And all of that has been happening before anyone threw AI into the mix.
I don't know their internals, but it's very clearly not. You can try them side by side: extremely basic searches fail. It seems intermittent and inconsistent. Maybe their backend call to Bing fails and the fallback is terrible. Just guessing.
Right this moment it seems to work. Two days ago I'd search for something basic like "CSS colors" and not get back a single usable result.
Are you perhaps getting AI-generated trash that is just SEO-optimized? I've noticed a TON more of these results in DDG and Google lately. You can now block those websites completely from DDG as of very recently (or at least I only noticed it very recently), and it's a true godsend for filtering out all this AI-generated trash.
Weird; DDG supposedly uses Bing, which should be indexing everything. Then again, this is Microsoft, who can't even get local search working. Win11 lately can't even find Add/Remove Programs on my PC; I have to go through Settings and click 18 times before I find it.
Re blocking: after every search, in the upper-right corner of each link I see three dots, which open a menu offering "block this site from all results".
> My guess is that search's days are numbered and companies are "pivoting" to other projects
Pretty much. Most (all?) search engines have basically stopped indexing the web. If you create content that doesn't make it through social media and doesn't attract significant links, Google just won't index your website.
No, it's not under-ranking your site. It's plainly not indexing it. So if you have weird, specific content out there, it simply won't show up for a particular search.
Search is pretty much over and no one is interested in getting that fixed.
This has the same problem that most public searxng instances seem to have nowadays, which is that they don't work. Either you just get an error about rate limiting or you get results totally unrelated to your search. I just tried a couple random searches about geographical locations (in English) and got back a bunch of results in Chinese.
I had been using baresearch.org (a searxng instance) but it's recently become unusable, apparently due to the engines it aggregates cracking down on such things. I tried some other instances but they also don't work. It's a bummer because I thought searxng was pretty great for the last year or two.
I've been self-hosting my own to avoid this issue. Once in a while a search provider will be unavailable, but it's pretty consistent in pulling in the major ones.
It doesn't require many resources, and it would be easy enough to run with Docker Compose alongside a Valkey/Redis instance. I have mine on k8s, but I don't think there is an easily found Helm chart.
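Something along these lines works as a starting point (a minimal sketch; searxng/searxng and valkey/valkey are the upstream image names, but the settings.yml wiring for Valkey/Redis varies by SearXNG version, so treat the details as assumptions):

    # docker-compose.yml sketch for a private SearXNG instance
    services:
      searxng:
        image: searxng/searxng
        ports:
          - "8080:8080"            # SearXNG listens on 8080 in the container
        volumes:
          - ./searxng:/etc/searxng # settings.yml lives here
        depends_on:
          - valkey
      valkey:
        image: valkey/valkey
        volumes:
          - valkey-data:/data
    volumes:
      valkey-data: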
DeepSeek seems to go the way of trying to please everybody. They offer two alternatives, which you can use separately or both at the same time (named in an obvious way): DeepThink and... well... Search :)
Your circles might have a little more technical literacy than most. I'm working part-time in retail at a hardware store currently, and the number of people who come in looking for parts specified exclusively by a single AI overview is mind-boggling. People repairing car engines come in looking for bolts with specific lengths, materials, and thread pitches that AI told them they needed. I haven't had anyone come back and explicitly tell me that AI led them wrong, but I'm sure they've had to make multiple trips back out here.
The shift towards LLM-based search products is significant as they offer more conversational and personalized responses compared to traditional search engines. This change is driven by users seeking quicker, more relevant answers and a better overall experience
So it must be true, right? Coz that's the only thing I searched for. I got my answer. Why would I search for the opposite? My bias was confirmed. I'm happy and will repeat the results to all my friends, who will search for the same thing to confirm and get confirmation!
Yes it's meant sarcastically. Personally I agree with you that you want to look at pros and cons, both sides.
But there are many, oh very very many people out there who would literally do exactly what I wrote. They do it all the time. That's why we have these echo chambers everywhere. And AI as we can see here is not making it better.
And it's not just the "general population". I see it on a technical level too at work. Developers just trusting the AI output. It sounded confident when it said it found the root cause, fixed the bug and added tests. So it's good to commit, right?
While I still often just search with Kagi, I've found it's often easier to write a full-blown natural-language question into Kagi Assistant to query an LLM, which then replies and gives me the references where it supposedly found that info. If the reply is weird, I can click through to the references and check them out.
Isn't traditional search going to have the same issue? If you search about how chocolate is good for you, you'll turn up plenty of sites willing to confirm your beliefs, AI summary or not.
Sad to see it go; at the same time, I never used it, and the rationale seems highly pragmatic, so you certainly won't find me protesting the decision.
Privacy is an uphill battle, we should use our efforts where they make the most impact.
Fair enough. I used it a few times, but Brave was just more convenient. Also, for everyone here: Brave does its own indexing, and you can downrank and uprank sites, and it will remember that without an account.
More important than a proxy is having an alternative to Google, because it is crazy how much they censor search results. Just add Brave Search as a secondary.
A VPN and a privacy-focused browser have similar practical usefulness to a private search engine. They cannot be used to create a private search engine.
You don't need a whole browser for that, just a VPN. And that would likely get their servers blocked for their users, given that Google is already cracking down on them.
https://developers.google.com/terms
> you will not [...] keep cached copies longer than permitted by the cache header
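In other words, the permitted cache lifetime comes from the response's standard Cache-Control header rather than a number you pick; a rough sketch of what honoring it looks like (assuming the common max-age directive):

    # Sketch: derive the allowed cache deadline from Cache-Control max-age.
    import re, time, urllib.request

    resp = urllib.request.urlopen("https://www.example.com/")
    cc = resp.headers.get("Cache-Control", "")
    m = re.search(r"max-age=(\d+)", cc)
    max_age = int(m.group(1)) if m else 0   # no max-age -> don't assume cacheability
    print("may cache until", time.ctime(time.time() + max_age))
    # Caching results for a fixed 30 days regardless of this header is what
    # the clause quoted above appears to forbid.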