Chrome phasing out support for User-Agent (infoq.com)
672 points by oftenwrong on March 25, 2020 | 318 comments



The weird thing about this is that the only company I've seen doing problematic user-agent handling in recent years is Google themselves. They have released several products as Chrome-only, which then turned out to work fine in every other browser if they just pretended to be Chrome through the user agent. Same with their search pages, which on mobile were very bad in every non-Chrome browser purely based on user agent sniffing.


If you have the new Chromium-based Edge ("Edgium") installed: the compatibility list at edge://compat/useragent is really interesting.

Edgium pretends to be Chrome towards Gmail, Google Play, YouTube, and lots of non-Google services; on the other hand, it pretends to be Classic Edge towards many streaming services (HBO Now, DAZN, etc.) because it supports PlayReady DRM, which Chrome doesn't.

[Edit] Here is the full list: https://pastebin.com/YURq1BR1


This is off topic but do you know why Edge is the only browser to support DRM for streaming? Or is that incorrect?

I see lots of people who have to use Edge in order to get 4K content from Netflix, presumably because of the DRM issues.


Other browsers support DRM too, but with different tradeoffs.

Chrome uses Widevine, but one of Chrome's philosophies is that you should be able to wipe a Chrome install, reinstall Chrome, and have no trace that before/after are the same person. That means no leveraging machine-specific hardware details that would persist across installs. "Software-only DRM", essentially.

Edge on Windows (and Safari on OSX) are able to leverage more hardware-specific functionality --- which from a DRM perspective are considered "more secure", but the tradeoff is a reduction of end-user anonymity (i.e. if private keys baked into a hardware TPM are involved).

Last I checked, Chrome/Firefox were capped at 720p content, Safari/Edge at 1080p, though it looks like Edge is now able to stream 4k.


It's absurd that paying customers get a worse experience than just using The Pirate Bay.


Last time I used piratebay, I saw a lot of porn and malware/scam ads. I had to find and install a torrent client. Then I had to make sure I was downloading a movie that had enough seeders. And then I couldn't watch the movie until (and if) the download finished.

When I use netflix, I have a much better experience.


I know this is all anecdotal, but last time I used a torrent site, I found the movie immediately and it pulled the whole thing down in under 3 minutes. Could be that it was a newer movie and pretty popular. I do see a lot of older stuff that's not being seeded much anymore.


> When I use netflix, I have a much better experience.

If you're on Linux, you won't be able to stream at 1080p, let alone 4k. Netflix even went out of their way to disable workarounds that users developed.


1) I'm not on Linux. 2) I'd rather have the convenience of streaming at all than the best possible resolution.

I don't know what the resolution of my TV is, but I highly doubt it's over 1080p, if that.


Streaming above 720p also doesn't work via Firefox on any operating system.


According to the test pattern videos I'm getting 1080p in Firefox on Windows


Try "actual" content videos. You can press ctrl+alt+shift+D to open a debug overlay that shows the playback resolution.

The actual DRM limitations also vary by content (and region): with some titles I get 720p on Linux while others are limited to SD, yet I get 1080p on those same titles in Edge on Windows.


They may not have the same DRM restrictions as actual content? It's not that Firefox is technically incapable of rendering the videos; it's that it doesn't give the DRM the level of control that Netflix and rights holders require in exchange.


Source?


https://help.netflix.com/en/node/23931

> Mozilla Firefox up to 720p


Some torrent clients support sequential downloading, which will be equivalent to streaming with most video formats. And obviously there's uBlock for the ads.

DRM on streaming and BluRays made it so that any usage outside basic consumption on prescribed devices is better served by illegal means.


You can have a similar experience with a private tracker and a seedbox. The content is curated (no malware), there's a larger selection, the quality is sometimes higher, and peers generally have better connections.

After you pick your torrent, it takes the seedbox a few seconds to download the content. Then you can stream your download using emby, vlc over http, or whatever you prefer.


You obviously haven't tried popcorntime.


Used it once, cost me €840.

With torrents you can get a film in minutes. With Popcorn Time you are exposed the entire time you watch the film.

In Germany they monitor peer connections and send a payment demand for an out-of-court settlement. After two years they escalate to a court appearance in a remote town. If you don't show, you lose and they turn it over to debt collection.

But I have to say Birdman was a great film.


This is exactly why I use a VPN. The last thing anyone needs is to be inconvenienced by BS laws bought by special interests.


One of the best solutions out there.


Which is illegal.


And torrents (of copyrighted content) aren't? You're completely missing the point of this thread.


That's kind of the point, the illegal solution is better than the legal one.


Netflix doesn’t have most stuff.


You can use webtorrent - https://webtorrent.io/


I don't know this technology, but I'd really recommend using a VPN that provides a SOCKS proxy for your Bittorrent connections. Otherwise you're just announcing your IP address related to your torrent activities to the whole world.


"Last time I used piratebay, I saw a lot of porn"

Why did you use piratebay unless that was your goal?


I don't live in Europe but I canceled my Netflix the minute I learned that Reed Hastings is taking advantage of the coronavirus situation to increase Netflix's bottom line by limiting streaming quality to everyone in Europe after one phone call with a French politician.

There are plenty of torrent streaming and download clients that work just as well and are just as convenient as Netflix, without needing to rely on a central authority.


Wait till you see how amazing the experience can be with Usenet...


How do I get into using Usenet?


You need two things: (1) a Usenet provider (this allows you to do the actual downloading) and (2) a Usenet indexer (this allows you to search for the content you want, because files like movies usually have obfuscated filenames). An example of the former is Frugal Usenet; examples of the latter are Drunken Slug (free-ish) or DogNZB (paid). One of the many things you need to watch out for with providers is retention: the length of time they keep the files so you can download them. In general, the older the content you find on the indexer, the less likely it's still going to be available. The downloading itself can be done in an automated way (for example with Sonarr and the like) or kind-of manually with tools like SABnzbd.


Thanks a lot!


Not absurd so much as a major reason why the piratebay exists.


> one of Chrome's philosophies is that you should be able to wipe a Chrome install, reinstall Chrome, and have no trace that before/after are the same person.

Why would that be their philosophy? It sounds like some kind of privacy-motivated idea which seems contrary to Google’s typical philosophy. Or is it more about portability?


So any user on Edge can be hardware fingerprinted easily? I can see why other browsers stay far away.


That's a good question. Is there anything to stop a disreputable advertiser/tracker hijacking the EME DRM scheme for tracking?

The evercookie project doesn't appear to leverage EME, for what that's worth. https://samy.pl/evercookie/


I am not sure about Edge specifically, but as someone who tries to use mostly open source software: Digital Rights Management (DRM) requirements often directly conflict with licensing related to open source software.


Not the only browser to support DRM. But the only browser to support PlayReady on Windows, which brings added security compared to what Widevine offers on Windows.

Another popular choice for high quality is Safari on macOS because it implements Apple's FairPlay.


Wait, Netflix et al use FairPlay in Safari on macOS?

I'm surprised, because Fairplay is publicly crackable.


So is WideVine.


It is?! The only public way I'm aware of to decrypt it stopped working 15 years ago.


If you still see pirated copies of shows marked with WEB-DL (rather than Webrip), there's a way of decrypting the content directly. I really doubt the methods that are used are public, though.


*Added security for the remote server, but massively reduced security for the end user's computer.


Any source?


It allows websites to run arbitrary code blobs which interact with a hardware backdoor in your CPU with a higher permission level than the OS. With the multiple exploits that have been found in intel cpus and the Management Engine itself there is no way you should be letting any website do that.


Other browsers support Widevine which is by far the more popular DRM scheme.


Simply because Netflix uses PlayReady DRM for 4K streaming, which is even harder to bypass and requires the WinRT API (?) to even be able to use the recent version.

Currently only Microsoft itself even tries to implement it, in their own Chromium-based browser.


There are different kinds of DRM. Streaming websites allow different quality levels for different kinds of DRM, e.g. they allow the best quality only for the best-protected DRM (which should use encryption all the way from the Netflix web server to your display). There's software DRM (decrypting the stream inside a proprietary blob), which is considered weaker, so you'll receive acceptable quality in Chrome. I don't know why Chrome did not implement the most secure DRM. Hopefully Microsoft will contribute their patches back.


I'm guessing Edge specifically also has to do that not only because of chrome==good queries, but also because of many, many edge==bad queries.


This is why Edgium calls itself “Edg”, not “Edge”, in the UA string.


For me that list is empty. Edge-dev 82. Maybe I'm doing something wrong?


IIRC, Edge-dev has an empty list. Stable and beta get the list.


Which makes sense, given that the list could change anytime (ideally, it wouldn’t even exist) and no developer should rely on Edgium identifying as Chrome or Classic Edge.


I had been thinking recently, as I've been using Firefox more, that Google Maps had gotten clunky. With a little fiddling prompted by your comment, it turns out Maps sniffs the user agent specifically to reduce fluid animations on Firefox (and probably some other browsers).


Thank you, this had been bugging me for a while. Looks like I'll need to permanently install a UA-switcher extension.

Yesterday I saw a HN comment saying you can add the (?|&)disable_polymer=1 parameter to the end of YouTube URLs to make the site much faster - iirc Polymer is extremely slow on Firefox only. This extension was also linked: https://addons.mozilla.org/en-US/firefox/addon/disable-polym...
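
If you'd rather not install anything, a rough equivalent (assuming the parameter still works) is a one-liner you can paste into the console or save as a bookmarklet:

    // Reload the current YouTube page with the classic (non-Polymer) frontend.
    const u = new URL(location.href);
    u.searchParams.set("disable_polymer", "1");
    location.href = u.toString();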

Unfortunately there doesn't seem to be any workaround for ReCaptcha on FF. I generally end up opening the website in the GNOME or KDE (Falkon) browsers, which use something like WebKit/Blink; there it works on the first try every time.


Some advice about ReCaptcha: the audio test is way easier and usually only makes you do it once or twice (as opposed to the 5-10 times you usually have to do it when you have disabled tracking). Sometimes it will say you aren't eligible or something; just refresh the page and it will let you try again.


The audio test can also be defeated by ML extensions that do it for you automatically


Hey wait a minute!


I wonder what the point of these captchas will be if most people start avoiding them by installing extensions. If the end goal is to put a significant resource constraint on scrapers or bots, you can do that quite easily by throttling them for a few seconds if you think someone is suspicious.


I just use the site's contact form to send them a note asking them to cut it out. There are better ways to prevent spam, like the password I just entered in the latest case. I'll go elsewhere next time; they have competitors that won't be that different in price and are easier to use.


I had been having an issue with Google Sheets and Firefox where the app decides to change row heights randomly.

On Firefox only. Obvious solution to which being...


It actually inserts a line break on Enter, makes it invisible, and it can't be deleted - only on Firefox.

Spoofing Firefox as Chrome makes it work perfectly.

They locked the community thread and fixed it several days after I posted my findings there.

Shame on you, Google.


Oh my god. I knew it was inserting a line break when I hit enter but I didn't realize it was a FF only issue. gdi Google.


Not the first time. For some reason Google really doesn't like people talking about the games they play with browser detection.

(That's not snark - I really don't get it. They don't appear to mind people talking negatively about a lot of other stuff they get up to. Maybe lingering antitrust fears from the 90's MS suit?)


I mean, we are in 2020; none of this should be news, but a lot of people are only discovering it now. They have been doing this since the early 2010s.

They have been hypocrites from the start, but people got so consumed with free Gmail and the RSS Reader, along with the "Don't be evil" motto, that they decided to trust them blindly.

The current browser and web-tech scenario is pretty much Google's way or the highway. So I am glad Apple kept Safari as the only option on its platform, not allowing them to dictate everything.


I reported this as a Firefox bug [1] in 2015. Google fixed it on their side in 2017, but it’s back. :(

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1205573


Companies are afraid of legal liability much more than they’re afraid of bad PR.


I was hitting that too! I wondered what I was doing wrong. I kept getting these weird fields. That fucking sucks. I'd like someone to explain Google's point of view on why this is happening. I do override the user agent on some systems so random websites work.


Indeed, I was wondering why Sheets was so broken that it always randomly inserted line returns.


Same... I've been played.


really working hard to shove that poison apple down your throat.


I wouldn’t really shame them on this one. Google Docs and Sheets use contenteditable under the hood. Contenteditable isn’t specified or standardized anywhere and varies wildly between browsers, and sometimes between browser versions.


So even that workaround is being phased out with this UA support change?


Oh wait...THAT might be the problem?? I've been having that issue too! I have to cut the cell content, delete the cell, and then paste the cell content back in in order for the row height to appear the same. It never even occurred to me to switch browsers, I thought it was an issue with Google Sheets.


Same, except it stopped not long ago. Maybe they fixed it?


https://support.google.com/docs/thread/18235069?hl=en

They did, though I'm not sure that should be called a "fix".

Because it works perfectly on Firefox as long as you spoof your user agent to Chrome.


Maybe. Still not switching away from Firefox though. I have Chrome installed because its WebRTC is a lot better and teleconferencing needs have shot up recently, and because I lock my phone away in a Kitchen Safe and use 2FA with Authy's Chrome app, but Firefox is my daily driver :)


> Obvious solution to which being...

Installing an extension to spoof your user agent? Since we wouldn't want to reward Google being anti-competitive.


Or... quit using buggy app?


Except, as has been pointed out in the parent comments, it's not the application; it's a deliberate bug targeted at a different browser (hence changing the user agent fixes it).


There's a strong argument to be made here that the "buggy app" is Google Sheets.


Except, there’s always (Libre)Office!


The browser is not the app I had in mind.


In what versions of Firefox and Chrome are you seeing different behavior in Google Maps? I get the same experience in Chrome 80 and Firefox 74 on macOS Catalina.


Wow, that's proper evil.


Every single Google product is slower on Firefox and it’s hard to not call this malice and artificial. Many people check out Gmail and GMaps on Firefox and go back to Chrome because of their clunkiness on Firefox.


"Google ain't done, till Firefox won't run"?


> malice and artificial

are you actually asserting that Google is purposefully adding code/"tweaking" their web apps to run slowly on browsers other than Chrome?

do you have any evidence at all for this other than anecdotes about people experiencing Google web app clunkiness on Firefox?


It could also be a passive, malicious de-prioritization of bugfixes for Firefox that would cause the same effect. It seems like this would be a more likely scenario.


I would believe that if changing the user agent or toggling some flags didn't fix it.


Probably a lot of the other way around as well. Google employees likely use Chrome and Docs/Sheets heavily, so they get a rapid high-quality feedback loop with any bugs or performance issues affecting the Chrome devs and people around them.


If there were direct evidence it would warrant its own post, so I think your comment was made in bad faith, since people are talking about their own experiences.

That said, if it's possible to measure Firefox/Chrome performance (with altered user agents), it would make for a good blog post.


How is "hard to not call this malice and artificial" people talking about their own experiences?


The "hard not to call" portion takes it from the realm of objective fact and into subjective measure.


Huh, I guess "hard not to call" could be interpreted as a disclaimer or qualifier meaning "I can't prove this, and it's only my opinion".

However, I interpreted it differently, to basically mean that because of objective fact, of all the explanations you can think of, only one of them is defensible. In other words, it's similar to saying "admit it, this is the only reasonable conclusion".


The original post had the phrase "hard to not call" rather than "hard not to call".

The meaning is inverted by swapping the "to" and "not".



As a semi-counter:

Slack, Skype, and Zoom video calls don't work in Firefox, even though WebRTC is an open standard. But Google Hangouts works perfectly.

I'm loath to give Google credit for something that ought to be standard practice, but of the major (key word) free video conferencing options, they seem to be the only one that's Firefox-compatible.


I try out google products on occasion and find them rubbish compared with alternatives, so go back to non-google products.


Regarding Google properties, what do you prefer and why?


Google Maps does tend to be better than OSM when it comes to route finding, so I use !gm when I want routing. On my phone I use Apple Maps, though.

Gmail is painful compared with OWA (work) and Zoho (home); I stopped using my Gmail account for new stuff about a year ago.


I prefer mu4e (mu for email), although ultimately I'm still using my Gmail account, just with another interface. I like the idea of eventually just setting up a mail server, but realistically it's really, really hard to switch everything at once when you have a lot of existing accounts, so I'll probably be maintaining a Gmail account forever.

I'm trying maps.me because unlike google it does offline walking directions but I haven't used it enough.

I really want to like duck duck go but it feels like google still provides better results.


> I really want to like duck duck go but it feels like google still provides better results.

I find I get better results with ddg than google. YMMV I guess.


I use BRouter for my use case of foot/bike navigation in hilly terrain, with OsmAnd for the UI.


In fact, just this week I was wondering why Google doesn't make its mobile Maps website fast again. It is such a pain to use on older phones, and I totally don't get why it has to be that slow (doesn't matter if it's Chrome or Firefox).


Here in Safari, Gmail is not only 10x buggier than it used to be before the redesign, it also uses at least 10x more client-side resources (CPU, network, ...). A handful of open Gmail tabs single-handedly use more CPU over here than hundreds of other web pages open simultaneously, including plenty of heavyweight app-style pages.

It’s hard to escape the conclusion that Google’s front-end development process is completely incompetent and has no respect for customers’ battery or bandwidth.


I recently learned about Superhuman. Via the Acquired podcast. Not a recommendation; I haven't used Superhuman.

But I remain astonished that there's an apparently very successful startup whose entire effort is a "pro" webmail client for Gmail.

It would have never occurred to me to create a better webmail.

As I reach my greybeard years, I'm increasingly aware that I've been doing everything wrong.


Superhuman looks nice.

>I'm increasingly aware that I've been doing everything wrong.

Any theme to this?


> The weird thing about this is that the only company I've seen doing problematic user-agent handling in recent years is Google themselves.

I frequently consume web articles with a combination of newsboat + Lynx, and it's astounding how many websites throw up HTTP 403 messages when I try to open a link. They're obviously sniffing my user agent, because if I blank out the string (more accurately, just the 'libwww-FM' part), the site will show me the correct page.

I'm pretty sure that the webmasters responsible for this are using user-agent string blocking as a naive attempt to block bots from scraping their site, but that assumes that the bots they want to block actually send an accurate user-agent string in the first place.


> I'm pretty sure that the webmasters responsible for this are using user-agent string blocking as a naive attempt to block bots from scraping their site, but that assumes that the bots they want to block actually send an accurate user-agent string in the first place.

That is exactly what they are doing, and it works really well.

We blocked user agents with lib in them at reddit for a long time.

Any legit person building a legit bot would know to fake the agent string.

The script kiddies would just go away. It drastically reduced bot traffic when we did that. Obviously some of the malicious bot writers know to fake their agent string too, and we had other mitigations for that.

But sometimes the simplest solutions solve the majority of issues.
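
For illustration, that kind of filter is just a substring check on the header; a minimal sketch, assuming a hypothetical Express-style middleware (not reddit's actual code):

    import express from "express";

    const app = express();

    // Turn away anything announcing itself as a generic HTTP library
    // (libwww, libcurl, ...). The response and port are purely illustrative.
    app.use((req, res, next) => {
      const ua = (req.get("User-Agent") ?? "").toLowerCase();
      if (ua.includes("lib")) {
        res.status(403).send("Forbidden");
        return;
      }
      next();
    });

    app.listen(8080);

Anyone who bothers to set a descriptive agent string of their own sails straight through, which matches the behavior described above.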


> Any legit person building a legit bot would know to fake the agent string.

What? That's totally backwards. Anyone using a bot to do things that might get blocked by publishers fakes the string; legit bots should really show who/what they are.


It actually is encouraging people to have useful user agents. By default most people end up with a user agent that's something like "libcurl version foo.bar.baz", which isn't actually a description of who or what they are; given the prevalence of curl, it really just tells you that it's a program that uses http.


We only blocked agent strings with "lib" in them. You could change the agent to "WebScraperSupreme.com" and it would have been fine (and in fact some people did do that).


Yes, perhaps. But it caused problems for regular users like this fellow. I have also tried various "download via script" tools to save web pages for offline use. I thought I had a problem on my end; I never realized I could have been getting blocked.


Hard to argue with the economics of that mitigation, though. The abuse-to-legitimate-use ratio is probably pretty high. Getting rid of user-agent strings will bring back the scaling problems, which will then have to be addressed directly.


> They have released several products as Chrome-only, which then turned out to work fine in every other browser if they just pretended to be Chrome through the user agent.

This seems like a pretty good reason in itself why they might be interested in phasing out User-Agents.


It's the exact opposite. Without User-Agents, sites need to depend on feature detection, and closing the feature discrepancy between Chrome and other browsers is more complicated than just spoofing your UA to get Google to serve you functioning versions of their products.


I don't quite follow your comment.

I'm saying, the hypothetical flow from Google is:

1. Our Chrome detection relies on the User-Agent header.

2. But people can just lie in the User-Agent header.

3. Let's get rid of it and use something that's harder to lie about.

Closing any feature discrepancy isn't a goal here, as far as I can see. The whole point is to lie to the user that a feature discrepancy exists when it doesn't.

You can make the argument that Google is free to do their browser detection however they want (and therefore doesn't need to solve this problem by eliminating User-Agents), but this is still an obvious example of the User-Agent header causing problems for Google.


I interpreted your parent's comment differently; namely, if Google's developers can't do User-Agent detection, then internally even they will have to improve how they develop (eg. via feature detection), making their products more compatible with other browsers.

Many people assume Google, as an upper-level business decision, purposely makes products work better on Chrome in order to vendor-lock users to the browser. Maybe that's true; or maybe it's developers being lazy and using User-Agent detection. Removing their ability to do so might actually improve cross-browser compatibility of Google products.


Well, the GP is saying that the sites work perfectly fine when they just change the User-Agent.

So Google developers don't need to improve feature detection - that part is working fine already.


This is going to end up being Google's IE; funny that it is also a browser (Chrome).


That's the opposite of what is happening though


We are talking about the same object in two entirely different contexts.


Google is probably so big that we might as well consider Chrome and the rest of Google as separate entities.


You can see Chrome devrels on Twitter expressing disappointment with Chrome-only web sites, saying that they raise the issue internally. Of course we have no visibility into what happens after that, but it's an indicator that you're right.


I’m guessing all of Google’s internal apps are only tested on Chrome, with plenty of Chrome extensions, which means that all of the developers have to use Chrome to make the tools work, and at that point, switching back and forth between different browsers is a pain so none of the other browsers get the love they deserve.

The attitude of “it works on Chrome, I don’t care about anything else” is fairly widespread anyway. Just to stem the tide a little bit I’ve been developing on Firefox and Safari first, and then checking Chrome last.

I got bitten before when I made a browser game, and then noticed that it was all sorts of broken on Edge, even though Edge supposedly had all the features I needed. It turns out that Edge did have all the features I needed, but I had accidentally used a bunch of Chrome features I didn’t need. The easy way out is to turn things off when I detect Edge. The hard way is to find all the broken parts and fix them. So nowadays, I don’t do any web development in Chrome.


At least in my part of the googleverse, we have automated tests running in all the browsers (even ie11).

But I'll admit I will also poke around outside of the tests, and I'll usually only be doing that in chrome, unless I've had a bug report about firefox in particular. And I'll only really open up Safari when I'm testing VoiceOver. ChromeVox just isn't good enough.


Oh, I’m sure there are automated tests. But if you have 50,000 developers using Google Docs in Chrome, they’re gonna submit some high-quality bug reports internally, whenever it breaks.


Please, yes, develop on Firefox first. We all need to promise to do this.


Considering that there's been an internal and external bug filed about the US states not being in alphabetical order on contacts.google.com for years, making it impossible to type 'new y' to get New York, I don't think raising it as an issue will help much.


I hear what you're saying, but they pay people enough to follow a potential company-wide policy: Don't f-ck with user agents!


It's easier to change the thing you're in charge of than it is to make a new company-wide policy.


this is exactly the opposite of what we should do when they degrade experiences of competing browsers!


I’ve started seeing alphabet employees use this as an excuse: “oh that happened on team x there’s nothing I could’ve done”. On small technical issues the excuse is fine - on large moral issues it does not work.


In large corporations, politics abound. If the Chrome division cannot get other divisions to behave through other means, this is fine.

You can argue that they should have fought harder and escalated, but issues like this are probably not the ones most upper-middle management want to potentially damage their careers over.


It's not fine, but it's certainly to be expected.


A fair number of websites will still block perfectly working features based on what OS you use.

Some examples I've seen using the latest Firefox on *BSD:

Facebook won't let you publish or edit a Note (not a normal post, the built-in Notes app). I think earlier they wouldn't play videos, but they might have fixed that.

Chase Bank won't let you log in. Gives you a mobile-looking UI which tells you to upgrade to the latest Chrome or Firefox.

In these cases if you lie and say you're using Linux or Windows it works flawlessly.


In addition to these, I've also noticed outlook365 (use it for accessing work email) gives a very minimal interface to the point of being nearly unusable with a FreeBSD user agent. Switch to a more popular one and I get full functionality.


I am guessing banks only test their sites against popular OSes and browsers, for security reasons.


It's usually just that the company doesn't want to have to deal with potential support calls that come in about their website not working on x os in x browser, so they do the bare minimum to disable it - then if a user complains about it not working, they can plausibly deny supporting the os/browser configuration.


This takes more effort than letting it just work. People running an OS more obscure than Linux are not the type that are going to call asking for support. We are used to supporting ourselves.

Did I mention it's the same code as a working configuration?

I think it's more likely somebody did not know how to properly parse user-agent and they blocked more than they intended to.


OK, don't tell all the other financial institutions I use FreeBSD, then; they are all letting me through without issue.

It sounds a lot like you are making excuses for them and bad/lazy/poorly thought out code.


I think you mean their broken idea of what a security reason is. Banks are generally really bad at actual web and mobile security.


I would guess they have built something into Chrome that gets even more data and isn't user-agent based.

The UA has a lot of limitations, and for power users it's fairly easy to work around handing over data through it. I would imagine Google didn't want to keep playing around with that.


> I would guess they have built something into chrome that gets even more data that isn't user-agent based.

Chrome includes a unique installation id in requests to Google owned domains. They don't need any cookies or user agents to guess who you are and best of all they don't have to share that information with their competition.

https://news.ycombinator.com/item?id=22236106


I know Netflix used to block the Firefox on Linux user agent for no reason


Not for a technical reason, but they had a reason: they provided no support or guarantee that Netflix would ever work on Linux + FF (Ubuntu + Chrome was guaranteed) and they didn't want any support calls for something that they wouldn't help people with anyway.

A lot of stuff gets blocked for this reason. The company doesn't want you calling them because HD video doesn't work on Firefox even though you pay for HD quality, they do not test or guarantee Firefox compatibility in the slightest and yet they have to talk to an angry customer now. It makes business sense to redirect people to supported use cases when you know your product probably won't work as intended otherwise.

You don't have to agree with the decision (and you can always cancel your membership if you do) but they had their reasons.


Why not a banner saying that it's not supported and may have issues? You might lose customers if you simply block them from using it at all.


> and they didn't want any support calls for something that they wouldn't help people with anyway.

Even knowing what they were doing, I fielded at least two support requests asking what was going on. I can only hope I wasn’t the only one.

Now that everything plays nicely I just happen to have no interest in Netflix for other reasons...


Exactly. This is going to turn into a game of whack-a-mole whereby we need to load the latest firefox extension that tricks websites into thinking we're using Chrome.

Or we could build for Firefox. There's always that.


The thing that replaces the user agent will still be enough to differentiate Chrome from Firefox and Safari.


Chrome team members are face-palming with the best of us whenever a Google product does backwards things like filtering based on user agent string.

Google isn't a singularity.


Facebook also uses the user-agent string to determine which version of a site to send to someone. I installed a user-agent spoofer a while back and messenger.com would fail due to it every few refreshes (as evidenced by JS console).


To get Microsoft Teams to run in Chromium, there used to be a user-agent hack to make it pretend to be Chrome. This was superseded by someone packaging it up using Electron. And finally that has been superseded by Microsoft themselves supporting Linux using something that looks and feels like Electron again.

So, basically, Microsoft using the user agent to detect Chrome...


> ...which then turned out to work fine in every other browser if they just pretended to be Chrome through the user agent.

Which is probably why Google wants to phase out the user agent.

For sure whatever Google invents to replace it will not be so easily circumvented.


Some Google properties are broken on Chromium, even.


Ikea does it too with some of their tools (just sucks).


I'm sure Google won't build in some proprietary way for them to identify Chrome.

/s


I mean they already did. The goal is to replace user agent parsing with a simple field that says exactly what browser and version this is.


You mean like a user-agent string?

Gee, I wonder how this is going to end: https://webaim.org/blog/user-agent-string-history/


"Oopsie" said Google to Firefox.


> https://github.com/WICG/ua-client-hints

I don't really understand how this will result in any real difference in privacy or homogeneity of the web. Realistically every browser that implements this is gonna offer up all the info the server asks for because asking the user each time is terrible UX.

Additionally, this will allow Google to further segment out any browser that doesn't implement this: they'll ask for it, get `null` back, and respond with "sorry, we don't support your browser". Only now you can't just change your UA string and keep going; now you actually need to change your browser.

And if other browsers do decide to implement it, they'll just lie and claim to be chrome to make sure sites give the best exp... so we're back to where we started.


> I don't really understand how this will result in any real difference in privacy or homogeneity of the web.

It does a little: sites don't passively receive this information all the time, instead they have to actively ask for it. And browsers can say no, much like they can with blocking third party cookies.

In any case I'm not sure privacy is the ultimate goal here: it's intended to replace the awful user agent sniffing people currently have to do with a sensible system where you query for what you actually want, rather than infer it from what's available.
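
As I understand the draft, the "asking" happens via an Accept-CH response header; a rough Express-style sketch (the endpoint and wording are made up, the header names come from the proposal):

    import express from "express";

    const app = express();

    app.get("/", (req, res) => {
      // Advertise which higher-entropy hints this server would like;
      // the browser may include them on subsequent requests, or refuse.
      res.set("Accept-CH", "Sec-CH-UA-Full-Version, Sec-CH-UA-Arch");

      const brand = req.get("Sec-CH-UA"); // low-entropy, sent by default per the draft
      const fullVersion = req.get("Sec-CH-UA-Full-Version"); // only present once the browser honors Accept-CH
      res.send(`UA hint: ${brand ?? "none"}, full version: ${fullVersion ?? "not provided"}`);
    });

    app.listen(3000);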


Switching it from passive to active means you can count it towards https://github.com/bslassey/privacy-budget . Yes, sites can ask for all sorts of things, but if they ask for enough that they could plausibly be fingerprinting you then they start seeing their requests denied.

(Disclosure: I work at Google, speaking only for myself)


Is the "privacy budget" an actual feature of chrome or just an idea? I've never heard of it until now.


It's a proposal for how to prevent fingerprinting: https://blog.chromium.org/2019/08/potential-uses-for-privacy...


It prevents others from fingerprinting, though not Google. Isn't there that X-Client-Data header that Chrome only sends to Google domains?


The X-Client-Data header is documented in https://www.google.com/chrome/privacy/whitepaper.html#variat... and Chrome uses it to run experiments to make the browser better. It's not used for fingerprinting.

(Still speaking only for myself)


So why not still send limited information by default in the User-Agent header, and if they ask for it, send more information in the User-Agent header? (Keep everything in one spot?)

Why are we creating redundant headers?


The problem is that, without user-agent sniffing, in some circumstances there is no other way of working around a browser bug. There are cases where a browser will report that it supports a feature via one of the feature checks, but the implementation is garbage. The only way around that is a workaround based on user-agent sniffing.

Sure, a lot of developers abuse the feature, but I fear this might create another set of problems.
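
A sketch of the pattern being described, with a deliberately made-up engine and bug just to show the shape of it:

    // The feature check passes, but suppose the implementation is known-broken
    // in one engine; the only remaining lever is the UA string.
    const supportsSmoothScroll =
      "scrollBehavior" in document.documentElement.style;

    // Placeholder token; in real life this would be a specific engine/version range.
    const isKnownBuggyEngine = /HypotheticalEngine\/42/.test(navigator.userAgent);

    const useSmoothScroll = supportsSmoothScroll && !isKnownBuggyEngine;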


The other way is not using that feature until all browsers you care about implement it correctly.


It is rarely an option. Additionally, defects are introduced into features that have been supported for quite a while.


That’s not a pragmatic solution.


> It does a little: sites don't passively receive this information all the time, instead they have to actively ask for it. And browsers can say no, much like they can with blocking third party cookies.

Lets run through that scenario:

Sites that don't need this info still aren't gonna ask for it or use it. Sites that want it will get it this way, and even if you respond with "no", that's useful to them as well for fingerprinting and as a way to fragment features to Chrome only. So, what's changed?


> sites that want it will get it this way and even if you respond with "no" that's useful to them as well for fingerprinting

To an extent, sure. But to follow the model of third party cookies, let's say client hints are used extensively instead of user agent and all cross-domain iframes are blocked from client hint sniffing. All the third party iframe is going to be able to detect is whether user has a client hint capable browser or not. That's a big difference from the whole user agent they get today.

The idea is that this won't be a Chrome-specific API. It's been submitted to standards bodies, but Chrome is the first to implement. For example, Firefox have said they "look forward to learning from other vendors who implement the "GREASE-like UA Strings" proposal and its effects on site compatibility"[1] so they're not dismissing the idea, they're just saying "you first".

https://mozilla.github.io/standards-positions/#ua-client-hin...


You could also not give the useragent to the iframes.


Who knows how many sites that would mess up, though. Backwards compatibility is both the web’s greatest strength and biggest pain point.


If it's well designed, then the system will only be able to query for feature support rather than ask what browser is in use.

I have a feeling Google won't do it that way, because they intentionally gimp most of their apps on non-Google browsers for no reason other than to be dicks.


That requires running Javascript instead of having a server-side call.


Good. User-agent strings are a mess. Here is an example of a user-agent string. Can you tell what browser this is?

Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.27 Safari/525.13

How did they get so confusing? See: History of the browser user-agent string https://webaim.org/blog/user-agent-string-history/

Also, last year, Vivaldi switched to using a user-agent string identical to Chrome’s because websites refused to work for Vivaldi, but worked fine with a spoofed user-agent string. https://vivaldi.com/blog/user-agent-changes/


If companies like Google wouldn't abuse the user-agent string to block functionality, serve ads, and force their users onto a specific browser, then companies like Google wouldn't have to use fake UA strings, and then maybe companies like Google wouldn't have to drop their support.


Anything to do with HTTP is a mess!


Chrome 0.2 on Windows XP?


As a web developer, I have very little trouble reading the User-Agent header.

    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.1.2222.33 Safari/537.36

    Sec-CH-UA: "Chrome"; v="74"
    Sec-CH-UA-Full-Version: "74.0.3424.124"
    Sec-CH-UA-Platform: "macOS"
    Sec-CH-UA-Arch: "ARM64"
    Sec-CH-Mobile: ?0

Isn't moving this information to separate Sec-CH-UA headers going to make things _more_ messy? Especially if it's in _addition_ to the frozen User-Agent header?

Aren't we still going to have the issue with needing to fake even the new Sec-CH-UA header?

If we're going to freeze the User-Agent header, that's fine, but don't just move the unfrozen info to a separate header. Now you have 2 problems.

Aren't we just making the problem worse?


Very little trouble? That user agent says it’s Mozilla, then says it’s Windows 10, then says it’s Apple, then says it’s Gecko, then says it’s Chrome, then even though it said it was Windows it goes on to say it’s Safari.

The individual headers, on the other hand, tell you EXACTLY what the system and browser is.


[flagged]


Well, that's quite rude; you don't know me at all, so you have no idea how much experience I have. And it's also quite contradictory... if the string is only rarely parsed by developers, why would "any web developer worth their salt" bother memorizing every browser's user agent? Experienced web developers don't have anything better to do than memorize a useless string they rarely have to work with?


This is a good idea, and is something I've thought of for a while; the user agent header was a mistake from both a privacy and a UX perspective.

Ideally, web browsers should attempt to treat the content the same no matter what device you are on. There shouldn't be an iOS-web, and a Chrome-web, and a Firefox-web, and an Edge-web; there should just be the web. In which case, a user-agent string that contains the browser and even the OS only encourages differences between browsers. Adding differences to your browser engine shouldn't be considered safe.

Beyond that, the user agent is often a lie to trick servers into not discriminating against certain browsers or OSes. Enough variability is added to the user-agent string that a server can't reliably discriminate, but it still remains useful for some purposes in JavaScript and as a fingerprint for tracking.

Which brings me to privacy. It's not as if there aren't other ways to try and fingerprint a browser, but the user agent is a big mistake for privacy. It'd be one thing if the user-agent just said "Safari" or "Firefox", but there's a lot more information in it beyond that.

If the web should be the same web everywhere, then the privacy trade-off doesn't make much sense.


I agree, but this also is incredibly dependent on the major players (e.g. Google) not going off on their own making changes without agreement from other browsers...

There are still issues today where chrome, edge, and Firefox render slightly differently. I certainly agree user agent isn't terribly necessary, but it's literally the only hook to identify when css or JavaScript needs to change... Or to support people on older browsers (e.g. Firefox ESR). How can I know when I can update my website to newer language versions without metrics confirming my users support the new ES version?

I would argue for simplifying the UA: product + major revision, maybe, or only information relevant to rendering and JavaScript.


Thinking cynically, it could be a power-move by Google to strengthen their hold on the ecosystem.

Right now when they go out and make their own API changes without consensus (which already happens), it's possible to distinguish the "for Chrome" case and still support the standard. But if there were no User-Agent, and Google wanted to strongarm the whole group into something, and 90% of browsers are Chromium-based, devs will likely just support the Chromium version and everyone else will have no choice but to fall in line.


I think it's perfectly fair to lean towards cynicism whenever Google goes out on their own making changes to Chrome.


You mean, like SPDY/HTTP2 ?


As far as I can tell, HTTP/2 is such a major improvement that no strong-arming is necessary. Speaking as a consumer of the web, as an individual who runs their own website, and as a developer working at a company with a major web presence.

The web suffers a ton from the “red queen” rule in so many different ways anyway—you have to do a lot of work just to stay in the same place.


But is it really such an improvement? Or is it just an improvement for cloud providers that keep pushing the Kool-Aid?

I still see a lot of contradictory benchmarks and, apart from some Google apps, personally I have not seen a lot of sites actually leveraging HTTP/2 (including push).

But maybe you did put HTTP/2 on your own website and leverage it? At your company? Did you use push? Do you use it with a CDN?


> But, is it really such an improvement?

Yes, unequivocally. It’s amazing, even without push. The websites that use it are faster, and the development process for making apps or sites that load quickly is much more sane. You don’t have to resort to the kind of weird trickery that pervades HTTP/1 apps.

> Or is it just an improvement for cloud providers that keep pushing the Kool-Aid?

I don’t see how that makes any sense at all. Could you explain that?

> But maybe you did put HTTP/2 on your own website and leverage it? At your company? Did you use push? Do you use it with a CDN?

From my parent comment,

> Speaking as a consumer of the web, as an individual who runs their own website, and as a developer working at a company with a major web presence.

My personal web site uses HTTP/2. It serves a combination of static pages and web apps. No push. HTTP/2 was almost zero effort to set up, and instantly improved performance. With HTTP/2, I’ve changed the way I develop web apps, for the better.

My employer’s website uses every technique under the sun, including push and CDNs.


>> Or is it just an improvement for cloud providers that keep pushing the Kool-Aid?

> I don’t see how that makes any sense at all. Could you explain that?

I've seen a few CDNs with demo pages that load a grid of images over HTTP/1 on page load, and then load the same stuff over HTTP/2 on a button click. It indeed shows you a nice speedup.

Except when you block the first HTTP/1 load, start with the HTTP/2 load instead, and flush the cache between loads, the speedup vanishes. The test is disingenuous; it is not testing HTTP/2 but DNS caching.

So those kinds of demos make me rather cautious. And the tests, for the small-scale workloads I work with, have not been very conclusive.

Do you have serious articles on the matter to recommend? Preferably not a CDN provider trying to sell me their stuff.


> Except when you block the first HTTP/1 load, start with the HTTP/2 load instead, and flush the cache between loads, the speedup vanishes. The test is disingenuous; it is not testing HTTP/2 but DNS caching.

The demos I’ve seen use different domain names for the HTTP/1 and HTTP/2 tests. This makes sense, because how else would you make one set of resources load with HTTP/1 and the other with HTTP/2? This deflates your DNS caching theory.

I didn’t rely on tests by CDNs, though. I measured my own website! Accept no substitute! The differences are most dramatic over poor network connections and increase with the number of assets. I had the “privilege” of using a high-RTT, high-congestion (high packet loss) satellite connection earlier this year, and the difference was even bigger.

What I like about it is that I feel like I have more freedom from CDNs and complicated tooling. Instead of using a complicated JS/CSS bundling pipeline, I can just use a bunch of <script>/<link>/"@import/import". Instead of relying on a CDN for large assets like JS libraries or fonts, I can just host them on the same server, because it’s less hassle with HTTP/2. If anything, I feel like HTTP/2 makes it easier to make a self-sufficient site.

Finally, HTTP/2 is so dead-simple to set up on your own server, most of the time. It’s a simple config setting.
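
For what it's worth, even outside of nginx/Apache config, a bare-bones Node server is only a handful of lines; a sketch with placeholder certificate paths (browsers require TLS for HTTP/2):

    import { createSecureServer } from "node:http2";
    import { readFileSync } from "node:fs";

    // Key/cert paths are placeholders; use your own certificate.
    const server = createSecureServer({
      key: readFileSync("privkey.pem"),
      cert: readFileSync("fullchain.pem"),
    });

    server.on("stream", (stream) => {
      stream.respond({ ":status": 200, "content-type": "text/plain" });
      stream.end("served over HTTP/2\n");
    });

    server.listen(8443);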


> My employer’s website uses every technique under the sun, including push and CDNs.

Are you actually seeing good results from push? I have seen many projects try to use it, but am not aware of any that have ended up keeping it.

(Disclosure: I work at Google)


> Are you actually seeing good results from push?

Push isn’t worth it, from what I understand. I think that’s the conclusion at work.


For comcenter.com I push CSS, except if the referrer is same origin.

I _think_ it's working pretty well as far as I can tell.


If you were up for running an A/B test (diverted per-user, since cache state is sticky) and writing up the results publicly I'd love to see it!


Well, that is a shame; it was, to me, the main selling point that could eventually have won me over.


It is a power move, because they use a Chrome ID to identify you. The user agent isn't important to them, but it's very important to others.


Maybe web publishers need to let go of this idea of pixel-perfect rendering and identical JavaScript behavior across browsers, and instead just worry about publishing good content. The web is not Adobe InDesign or Photoshop. At its essence it is a system for publishing text and hyperlinks that point to other content. Get the content right and don't worry so much about whether the scrollbar is 2 pixels thick or 3.


I, too, remember how this opinion was repeated as nauseam about a decade ago. It didn’t make much sense then, and even less now.

Nobody today expects identical rendering: people are used to responsive websites, native widgets etc. The problem people are actually experiencing (far less now than in the past) were more serious, such as z-axis ordering differences resulting in backgrounds obscuring content.

For JavaScript, I struggle with how non-“identical behavior” would express itself, except as a blank page and a small red icon in devtools.


I don't know.

If I'm connecting to a site with Lynx, I sure as heck don't want them to try to serve me some skeleton HTML that will be filled in with JS. Because my browser doesn't support JS, or only supports a subset of it.

User Agent being a completely free form field is the real mistake IMO. Having something more structured, like Perl's "use" directive, might have been better.


The problem with services using the user-agent to determine whether or not to allow a client access to a resource outweighs any benefit. I'm in the "it was a mistake to include this in the spec" camp.


One problem with this is that browsers don't behave the same. For example, iOS Safari prevents multiple pop-up windows from being opened by a single user interaction. Each one requires clicking back to the original page and allowing the popup. Now you might say, "Why would you ever want to do that?" But there are always going to be edge cases—in this case it's an integral part of one of the features of autotempest.com. But that's just one example. And the only way we can detect whether that behaviour is going to be blocked is by checking the UA.

I can understand why this is a good thing for privacy. Like many things to do with security on the web though, it's just a shame that bad actors have to ruin so many things for legitimate uses. (The recent story on Safari local storage being another example of that...)
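
For what it's worth, the check we're forced into looks roughly like this (a sketch; the regexes are approximate, and iPadOS 13+ deliberately hides behind a macOS-style UA):

    // Heuristic only: third-party iOS browsers (CriOS/FxiOS/etc.) also use WebKit,
    // so exclude them, and treat the whole thing as best-effort.
    function isIosSafari(ua: string): boolean {
      const isIosDevice = /iP(hone|ad|od)/.test(ua);
      const isWebKit = /WebKit/i.test(ua);
      const isOtherBrowser = /CriOS|FxiOS|EdgiOS|OPiOS/.test(ua);
      return isIosDevice && isWebKit && !isOtherBrowser;
    }

    // e.g. only attempt to chain pop-ups when the browser is known to allow it
    const limitPopups = isIosSafari(navigator.userAgent);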


That just makes things harder for those who want this information. You can still fingerprint the browser by features and API support, but now it requires JavaScript and an up-to-date library that checks for recent feature support. I mean that it doesn't prevent obtaining this information; it's still available to the big players who have big data.


At my employer we are using the User-Agent to detect the browser so that we can drive SameSite cookie policy for our various sites (e.g. IE11 and Edge, which we still support, don't support SameSite=None).

There are a variety of scenarios where this comes up. For example, we ship a site that is rendered, by another vendor, within an iframe, so we have to set SameSite=None on our application's session cookie so that it's valid within the iframe, thus allowing AJAX calls originating from within the iframe to work with our current auth scheme - but only in Chrome 70+ and Firefox, NOT IE, Safari, etc.

Just providing this as an example of backend applications needing to deal with browser-specific behavior, since most of the examples cited in other comments are about rendering/CSS/JavaScript features on the client and how the User-Agent drives that.
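
A simplified sketch of that pattern; the client checks below are illustrative approximations of the "incompatible clients" lists floating around, not our actual production rules:

    // Decide whether it is safe to emit "SameSite=None" for this client.
    function supportsSameSiteNone(ua: string): boolean {
      if (/Trident|MSIE/.test(ua)) return false;   // IE11 and older
      if (/Edge\/\d+/.test(ua)) return false;      // legacy (EdgeHTML) Edge
      if (/iPhone OS 12_/.test(ua)) return false;  // iOS 12 WebKit treats None as Strict
      return true;
    }

    function sessionCookieAttributes(ua: string): string {
      return supportsSameSiteNone(ua)
        ? "Secure; SameSite=None"
        : "Secure"; // omit SameSite entirely for clients that mishandle None
    }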


The proposed User Agent Client Hints API would replace this: https://wicg.github.io/ua-client-hints/


The User Agent Client Hints API looks like a very early draft. I could not see any proposed timeline for implementation or estimate of when this might become a supported standard.

I would not personally rely on this as a substitute or replacement for User Agent by September (Google Chrome 85).


Going way back to the original iPhone Web Apps session at WWDC in 2007, they specifically cautioned about the problem of sniffing UA strings.

Of course, the reality of the web meant they had to do a bunch of compatibility hacks to get pages to display well.

(Gecko appeared in the original Safari on iPhone UA, IIRC)


The good news on that front is that IE11 and non-Chromium versions of Edge will likely never stop supporting UserAgent


We are in the same boat. Certain browser/OS combinations don't handle Same-Site correctly, so we are using UA sniffing to work around their limitations by altering Same-Site cookie directives for those browsers. We will likely have to look at some other mechanism for dealing with nonconforming Same-Site behavior.


These days, it feels like the sole use of User-Agent is as a weak defence against web scraping. I've written a couple of scrapers (legitimate ones, for site owners that requested machine-readable versions of their own data!) where the site would reject me if I did a plain `curl`, but as soon as I hit it with -H "User-Agent: [my chrome browser's UA string]", it'd work fine. Kind of silly, when it's such a small deterrent to actually-malicious actors.

(Also kind of silly in that even real browser-fingerprinting setups can be defeated by a sufficiently-motivated attacker using e.g. https://www.npmjs.com/package/puppeteer-extra-plugin-stealth, but I guess sometimes a corporate mandate to block scraping comes down, and you just can't convince them that it's untenable.)


Preventing scraping is an entirely futile effort. I've lost count of the number of times I've had to tell a project manager that if a user can see it in their browser, there is a way to scrape it.

Best I've ever been able to do is implement server-side throttling to force the scrapers to slow down. But I manage some public web applications with data that is very valuable to certain other players in the industry, so they will invest the time and effort to bypass any measures I throw at them.


As a person who scrapes sites (ethically), I think it's impossible, or pretty damn near impossible, to prevent a motivated actor from scraping your website. However, I've avoided scraping websites because their anti-scraping measures made it not worth the effort of figuring out their site. I think it's still worth doing minimal things like minifying/obfuscating your client-side JS and using some type of one-time-use request token to restrict replayability. The difference between knowing that I can figure it out in 30 minutes vs 4 hours vs a few days is going to filter out a lot of people.

Of course, sometimes obfuscating how your website works can make it needlessly more complicated, so it's a trade-off.


Checking the user-agent string for scrapers doesn't work anyway. In addition to using dozens of proxies in different IP address blocks, archive.is spoofs its user agents to be the latest Chrome release and updates it often.


Meanwhile, you can still use youtube.com/tv to control playback on your PC from your phone—but only if you spoof your User-Agent to that of the Nintendo Switch [1]. Sounds like they are more interested in phasing out user control than ignoring the header entirely.

[1] https://support.google.com/youtube/thread/16442768?hl=en&msg...


Oh wow. I used that in the past and it worked great. I didn’t realize Google broke it only to force us to use their app.

What a bunch of turds.

Thank you for the Nintendo Switch pro-tip.


Yes. I firmly believe this is an attack on user control.

For example, I believe they REALLY want us to use the youtube app:

- Viewing youtube.com on a new iPad pro, Goolag lies and says "your browser doesn't support 1080p."

- Ok, change to desktop version in app. Goolag once again lies and says "your browser doesn't support full screen." They also lie and say they've redirected you to the "desktop version", and nag you with a persistent banner that you should return to the safety of the mobile website.

- Ok, change to "request desktop version" via user agent. Full functionality. Full screen is DEFINITELY possible with a javascript bookmark. 1080p+ is DEFINITELY possible. Ads blocked in browser.

If I were to use the app, they would have FULL CONTROL.


Are you deliberately misspelling Google as Goolag to make it sound like gulag?


This feels very ivory tower. It reminds me of the "You should never need to check user agent in JavaScript because you should just feature detect!!". Well in the real world that doesn't work every time.

The same is true for server side applications of user-agent. There are plenty of non-privacy-invading reasons to need an accurate picture of what user agent is visiting.

And a lot of those applications that need it are legacy. Updating them to support these 6 new headers will be a pain.


Chrome will support the legacy apps by maintaining a static user agent. It just won't be updated when Chrome updates. If you want to build NEW functionality where you need to test support in new browsers, you do that via feature detection.
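
i.e. instead of parsing a version out of the (now frozen) UA, you probe for the thing you actually need. A tiny sketch, with the clipboard API standing in for whatever feature you care about:

    // Feature detection: ask the browser whether the API exists,
    // rather than inferring it from a version string.
    function pickClipboardStrategy(): "async-api" | "fallback" {
      if (typeof navigator !== "undefined" && navigator.clipboard?.writeText) {
        return "async-api";
      }
      return "fallback"; // e.g. a hidden <textarea> plus execCommand("copy")
    }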


Most of the time when people use user agent for a purpose they think is appropriate, it doesn't even work correctly. YMMV


Interesting. We don't use UA to track customers, but it has been invaluable information for tracking down specific bugs. E.g., twice in the past 2 months I've had to fix weird bugs that didn't make sense. The only way I was able to solve them was to look for patterns in which browsers and versions those who reported the bugs were using. Both turned out to be due to different iOS Safari cookie-related bugs that only occurred in specific versions. Without logging the UA, there would have been no way I could have discovered those bugs and created workarounds for those iPhone users.

I'm all for preventing tracking, but I can't imagine a time where all browsers behave so similarly that we won't have to write workarounds for browser bugs and differences. As a developer I can't imagine caring about Edgium vs Chrome, but it's important to know what the underlying engines are.


New proposed syntax adds even more noise:

    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)
        AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.1.2222.33 Safari/537.36
    Sec-CH-UA: "Chrome"; v="74"
    Sec-CH-UA-Full-Version: "74.0.3424.124"
    Sec-CH-UA-Platform: "macOS"
    Sec-CH-UA-Arch: "ARM64"
Why not get rid of the `User-Agent` completely?

It's already bad infrastructure design to have the server do different renderings depending on the `User-Agent` value.


Why not get rid of the `User-Agent` completely?

Try browsing the web without any UA header for a week or two, and you'll understand. You get blank pages, strange server errors, and other weird behaviour --- almost always on very old sites, but then again, those also tend to be the sites with the content you want. Using a UA header, even if it's a dummy one, will at least not have that problem.

(I did the above experiment a long time ago - around 2008-2009. I'm not sure whether sites which expect a UA have increased or decreased since then.)

I agree with getting rid of all that new noise, however.


I mean, at first this change will cause all of these errors too, until servers migrate from the user agent string to client hints. Getting rid of UAs would actually force a meaningful update rather than a migration.


Getting rid of UAs would actually force a meaningful update rather than a migration

...and effectively block access to a bunch of existing content on the Internet, still very valuable, whose owners may not have the effort to spare to make any changes.


Why the hell does a regular website need to know what OS and CPU architecture I got?


I can already see the permission pop-ups for those:

> For best performance, this website would like to know what type of device you are using.

While requesting every single "hint", there's an "OK" button and a tiny greyed-out "read more or decline this request" line.


The browser isn't for "regular" websites, it's for all websites.

And believe it or not, there are crazy JavaScript bugs that are OS-dependent.

I remember when I was writing a library around the audio API, and the ways it behaved on Chrome were different across Macs, Windows and Android. Detecting the OS with the user-agent string was literally the only way to build code that would work.

Now I've never personally come across that for the CPU architecture, but I certainly wouldn't be surprised if there were behavioral differences between 32-bit and 64-bit processors somewhere that affect some JavaScript function or HTML5 call somewhere.
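
For what it's worth, the OS sniff being described is usually just a coarse regex over the UA string, something like this sketch (the patterns and the workaround flag are placeholders):

    type Os = "mac" | "windows" | "android" | "other";

    // Coarse OS detection from the UA, used only to pick a code path that
    // works around OS-specific Web Audio quirks.
    function detectOs(ua: string = navigator.userAgent): Os {
      if (/Android/.test(ua)) return "android"; // check before generic patterns
      if (/Macintosh|Mac OS X/.test(ua)) return "mac";
      if (/Windows NT/.test(ua)) return "windows";
      return "other";
    }

    const needsAndroidWorkaround = detectOs() === "android"; // placeholder policy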


Visitor metrics -> audience segmentation / fingerprinting -> advertisers


Yeah, aren't we just making things a lot more messy? Especially if we're not planning on removing the User-Agent header?

This pattern keeps repeating itself: freeze "Mozilla/5.0", start changing "Chrome/71.1.2222.33", freeze that, start changing "Sec-CH-UA", etc. Browsers will start needing to fake "Sec-CH-UA" to get websites to work properly, etc.


I can understand including the browser and version (to work around bugs that are not detectable with feature detection), and the OS too; OK, I guess there are also a few OS-specific bugs?

What the heck is the CPU architecture good for?


Some download sites offer the correct binaries for your current system as the default option.


It's so sites can auto-serve 64-bit vs 32-bit (etc.) binaries when you download software.
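
e.g. roughly like this sketch, if a site were to use the proposed hint instead (the header name comes from the draft above, the file names are made up, and the server has to request the hint via Accept-CH on an earlier response):

    import express from "express";

    const app = express();

    app.get("/download", (req, res) => {
      // Sec-CH-UA-Arch arrives quoted, e.g. "x86" or "arm"
      const arch = String(req.headers["sec-ch-ua-arch"] ?? "").replace(/"/g, "");
      // Hypothetical artifact names; default to x64 when the hint is absent.
      const file = arch.toLowerCase().includes("arm")
        ? "myapp-arm64.zip"
        : "myapp-x64.zip";
      res.redirect(`/files/${file}`);
    });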


It's great design if you're trying to push Google products.


This was recently discussed on HN:

3 months ago: https://news.ycombinator.com/item?id=21781019

1 year ago: https://news.ycombinator.com/item?id=18564540


From the git repo:

> Blocking known bots and crawlers: Currently, the User-Agent string is often used as a brute-force way to block known bots and crawlers. There's a concern that moving "normal" traffic to expose less entropy by default will also make it easier for bots to hide in the crowd. While there's some truth to that, that's not enough reason for making the crowd be more personally identifiable.

This means that consumers of the Google Ad stream have one less tool to identify bots, and will pay Google for more synthetic traffic, impressions and clicks; this could be a huge revenue boost for Google. A considerable amount of their traffic is synthetic. I doubt this was overlooked.


does this mean there will no longer be a way of determining if the device is primarily touch (basically all of "android", "iphone" and "ipad") or guesstimating screen size ("mobile" is typical for phones in the UA) on the server?

https://developer.chrome.com/multidevice/user-agent

i wonder what Amazon will do. they serve completely different sites from the same domain after UA-sniffing for mobile.

is the web just going to turn into blank landing pages that require JS to detect the screen size and/or touch support and then redirect accordingly?

or is every initial/landing page going to be bloated with both the mobile and desktop variants?

that sounds god-awful.


Presumably you'll grab the dimensions (could cache after first load) and then render dynamically based on that. If you're doing some sort of if statement on the server to deliver content based on screen size you're probably doing it wrong. Obviously I can't speak for every mobile user, but for myself, it's infuriating to have a completely different set of functionality on mobile.
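
Something like: a tiny client-side snippet remembers the viewport width in a cookie on first load, and later server-side renders read it instead of sniffing the UA. A sketch (cookie name is arbitrary):

    // Store the viewport width so subsequent server renders can pick a layout.
    function rememberViewport(): void {
      document.cookie =
        `viewport-width=${window.innerWidth}; path=/; max-age=31536000`;
    }
    rememberViewport();
    window.addEventListener("resize", rememberViewport);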


> If you're doing some sort of if statement on the server to deliver content based on screen size you're probably doing it wrong. Obviously I can't speak for every mobile user, but for myself, it's infuriating to have a completely different set of functionality on mobile.

there's not a "right" and a "wrong" here; it's about trade-offs.

you're either stripping things down to the lowest common denominator (and leaving nothing but empty space on desktop) or you're wasting a ton of mobile bandwidth by serving both versions on initial load (the most critical first impression).

you frequently cannot simply squeeze all desktop functionality from a 1920px+ screen onto a 320px screen - unless you have very little functionality to begin with. Amazon (or any e-commerce/marketplace site) is a great example where client-side responsiveness alone is far from sufficient.

https://www.walmart.com/ does it okay, but you can see how much their desktop site strips down to use the same codebase for desktop and mobile.


browser feature detection is the way grown up developers have been doing this for several years now. user agent sniffing is dumb because it bundles a ton of assumptions with a high upkeep requirement, all wrapped up in an unreadable regex. It's been bad practice for ages; I'd be surprised if that's how Amazon is doing it still.


> browser feature detection is the way grown up developers have been doing this for several years now.

and how do these grown up developers feature-detect when js is disabled? or are they too "grown up" to deal with anything but the ideal scenario?

> I'd be surprised if that's how Amazon is doing it still.

why don't you go there and open up your "grown up developer" devtools.


If your site doesn't use JS, you don't need features. Just use responsive HTML.


How do you handle browsers that render HTML differently?


the realistic answer to this line of questioning is: "we don't care about the edges because they constitute such a small percentage of the user base."


Use different HTML that renders the same everywhere relevant


Can't happen soon enough. As a frequent user of various non-mainstream browsers, I'm sick and tired of seeing "your browser isn't supported" messages with download links to Chrome/etc. At least in the case of Falkon it has a built-in user agent manager, and I can't remember the last time flipping the UA to Firefox/whatever actually caused any problems. Although I've also gotten annoyed at the sanctimonious web sites that tell me my browser is too old because the FF version I've got the UA set to isn't the latest.


if your browser isn't supported, it's not the browser's fault; it's the fault of the website you're visiting for not supporting your browser.


I log this for coarse statistics about what our user base is running, but that is about it.

The good news: IE use is down over the last year to only about 40%.

The bad news: the growth elsewhere is all Chrome, with less than 1% Firefox or Safari. There’s a tiny sprinkling of Edge, as well, but I forget the numbers on that.

Our users are state and county offices and medical facilities, rather than private individuals, so the users are somewhat captive to whatever their organization mandates.

The only browser detection we do is in client side scripting to detect if the browser can directly display a PDF inline (or not, in the case of IE11)


I would much prefer a new version of the user-agent string. Normalize basic information (like OS and browser versions) without revealing too much (build numbers).

That would let servers still get the necessary info without having to run even more JavaScript. It could just be in querystring format to simplify parsing on both client and server.
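
e.g. something like this, purely illustrative (no browser actually emits a UA in this form):

    // A hypothetical normalized UA in querystring form, and trivial parsing of it.
    const normalizedUa = "browser=chrome&version=74&os=macos&mobile=0";
    const fields = new URLSearchParams(normalizedUa);
    console.log(fields.get("browser"), fields.get("version")); // "chrome" "74"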


Any user agent string will eventually be forced down the same path. Web sites use them to deny content. And the browsers will continue to try to match more patterns so their users see the content.

As long as they exist, I can see no escaping this arms race.


Larry Page no longer wants to be a “good net citizen”?

https://groups.google.com/forum/m/#!msg/comp.lang.java/aSPAJ...


I view this as an attack on the web as it stands.

Google wants to create a walled-garden net. Goog-net. All ads, shopping, videos, documents, email, locations, articles flowing through THEIR protocols and THEIR servers and THEIR fiber. No possibility of blocking ads they don't want blocked. No URLs. No Agent strings. No user control. Only user consumption.

If they have to allow some small chump players to have a piece of this cake (a la: "Oh trust us, AMP is an 'open' protocol and anyone can host it. it's not just for our own benefit in the end. Trust us.") in order for the entire population to accept their changes bit by bit, so be it in their eyes. They know we would reject an outright takeover.


This is OK...I guess? I mean it's great to get rid of that overloaded carbuncle of user-agent, but that will just lead to a new round of interpreting "hints". shrug

Google is a serial abuser of user-agent already so this is somewhat ironic.


The first time one of my articles appears on HN, I'm kinda excited.


Fantastic. Thank you, Chrome team! Especially for those who don't execute arbitrary JS, this is a huge plus.

Personally, I would like to drop the line completely and not send the key at all, but it's a start.


totally agree!!!


So Google found a good way to fingerprint users without the user agent, and found that a lot of user agents are forged so this stopped working anyway. Time to switch to forging the new API, then.


User-agent is super useful to human people. But corporate people don't have a use for it. They will get that information via running arbitrary code on your insecure browser anyway. So, because mega-corps now define the web (instead of the w3c) this is life.

But it doesn't have to be. We don't have to follow Google/Apple web standards. Anyone that makes and runs websites has a choice. And every person can simply choose not to run unethical browsers.


> User-agent is super useful to human people.

For what? Honest question. You have to be like a 5th-level user agent wizard to make any sense of user agent strings, since every browser now names every other browser. How do you do anything useful with this in a way that's forward-compatible?


I look at the logs of my websites with my eyeballs, manually, after running a Perl script to winnow them down (i.e., remove hits from me, hits from Tor, etc).


Not sure why you are being downvoted since your statements are correct.

Few advertisers rely on user agent for ad targeting since it can be easily mocked with each HTTP request. It is used for fingerprinting, sure, but from my experience, mostly as a way to identify bot traffic.

It is also true that the advertisers that fingerprint people rely on JS that executes WebGL code in order to get data from the machine.

Finally, you are right that it doesn't make sense that a company like Google dictates these standards since they have a conflict of interests worth almost a trillion dollars.


Unfortunately they are either unethical or have other problems (or, most commonly, both); I have made suggestions for how to make a better one. See my other comment elsewhere in the thread where I explain.


This is insane. You know, no HN post to a Google Blogspot site works for me, because these jerks are the only ones that discriminate on UA?

Google engineering is Sooooooo disconnected from the rest of the world that I think we need legal regulation to stop them from doing stupid things like this. Do they have any idea how many things need it?

HNers with a position of power at work, I plead with you: please advocate banning Chrome at work and replacing it with any one of the WebKit-based alternatives or Firefox. These people are insane. Every month I hear of some ridiculous thing. They took out navbar URL parameters, added links to in-page words, and now this!

For those who think this is good for privacy... it is not! This is the same old sneaky-ass evil thing they do. UA can be used to fingerprint you, but it's very easy to set a generic user agent. Actually, if you look at user agents, most of them have the latest string for Chrome, IE or Firefox, so it isn't useful without a whole lot of other details correlated with it. You know what the exception is? Android and iPhone browsers that include your device make and model in the UA, and apps that include a whole lot more, like Facebook's apps.

Do you know what a "flexible" API like the one they're talking about allows? More fingerprintable data points! The fact that you even use that API is a privacy issue. Let's say client hints allows for 10 different variations of responses from clients; your specific client's details might have just 3 things different from the mean, and bam, now they can track your specific device. With UA, all versions of a client have the same exact details, and most people need an extension to change it, so it's much harder to fingerprint.

This is the same ol' sneaky bait and switch Google pulls. The content of your UA is not the privacy concern (although it contains too much at times); it is the fact that it can be correlated with timing info and IP (especially v6), and if they already know your UA, they will also use the client's default HTTP header options to identify and track you without consent.


Any idea on how to identify devices then? We currently check the user agent to send a new code when a user logs in from a new device. How would you do this without the user agent?


Use a cookie?
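
e.g. drop a long-lived random identifier the first time a device completes the extra verification, and treat its absence as "new device". A sketch assuming an Express-style server (names and flow are made up, and a real version would tie the ID to the account server-side):

    import { randomUUID } from "node:crypto";
    import express from "express";

    const app = express();

    app.post("/login", (req, res) => {
      const known = req.headers.cookie?.includes("device-id=") ?? false;
      if (!known) {
        // No device cookie yet: trigger the verification-code flow,
        // then remember this device with a long-lived random ID.
        res.cookie("device-id", randomUUID(), {
          httpOnly: true,
          secure: true,
          maxAge: 365 * 24 * 60 * 60 * 1000,
        });
      }
      res.send(known ? "known device" : "verification code sent");
    });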


This change does not remove the user agent. In practice it just hides the OS and the version, but the user may opt in to send those to a particular site.


Ah, the end of the countless references to KHTML :)

As a long time KDE user I'm a little sad, but also fully aware this day would come.


How can we use a browser that doesn't pretend to be Netscape Navigator? This will never work :)


if you want to go to the source of that story:

https://groups.google.com/a/chromium.org/forum/#!msg/blink-d...


I was wondering: isn't the page rendered differently on mobile and desktop based on user agents? How would that work now?


The typical way this is done these days is by media queries in CSS, so you'd write a rule for styling based on screen width, like

    @media (max-width: 550px) {
      body {
        background-color: white;
      }
    }

which turns the background white on small screens.


gp is likely asking about how servers decide to redirect to the m.* version instead of the desktop version (or in some cases serve a different mobile version under the same domain), in which case, yes, it’s usually user agent sniffing.
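
for reference, that sniff is often about as blunt as this sketch (Express-style; m.example.com is a placeholder host and the regex is deliberately crude):

    import express from "express";

    const app = express();

    // crude "is this a phone?" check of the kind described above
    app.use((req, res, next) => {
      const ua = req.headers["user-agent"] ?? "";
      if (/Mobi|Android|iPhone/.test(ua) && req.hostname !== "m.example.com") {
        return res.redirect(302, `https://m.example.com${req.originalUrl}`);
      }
      next();
    });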


They're not phasing out User-Agent strings entirely; they're actually upgrading them: https://github.com/WICG/ua-client-hints

It looks like there's more fine grained control in the new version.


Javascript-only is not an upgrade.


UA strings have never been an accurate indication. If you're not using JS, then you probably have no reason to be sniffing the UA string to detect browser features, since most of those features are JS-related anyway.

It's an upgrade for the people who actually need to get an indication of the supported features and APIs of the user's browser. Otherwise, you should be using media queries.


One exception: you might want to UA-sniff IE and serve a completely different version due to all the CSS problems. (I know you can use IE-only conditional comments too, but I've been in the situation where making a modern version simultaneously IE9-compatible was just too frigging maddening.)


A bigger, site-breaking one from further up in this thread: https://news.ycombinator.com/item?id=22685632


Detecting dumb search crawlers that don't support major features required for my webapp, and displaying a fallback splash, has been the only reasonable way I've found.


If you want to just change the styling and layout of the page depending on the user's device, then you can use css's media queries[0]. But if you want to serve two totally different pages (one for mobile and another for desktop), then I don't see how it can be done without JS or reading the user agent.

[0] : https://developer.mozilla.org/en-US/docs/Web/CSS/Media_Queri...


If you want to serve two totally different pages, you use two totally different URLs, and don't try to second-guess what the user asked for.


Not usually, no. CSS media queries are used to format according to display size. But as a sibling here has indicated, client hints will replace the user agent here.


I thought it used Javascript to detect screen size. At least it should react to resize events and if the dimensions are something that align with mobile, it should switch to mobile mode.


In a lot of cases you shouldn't even use Javascript for this, responsive layouts can be built using CSS media queries based on viewport size.

More advanced webapps might occasionally need to do something fancier than that if the mobile vs desktop functionality is (for some reason) substantially different instead of just rearranged.

https://developer.mozilla.org/en-US/docs/Web/CSS/Media_Queri...


AFAIK it's also done using CSS


Finally, someone steps up to stop the UA madness!!!

Now, all we'll need is a way to not send anything at all!!!


> While removing the User-Agent completely was deemed problematic, as many sites still rely on them, Chrome will no longer update the browser version and will only include a unified version of the OS data.

So, nearly all of the information that makes User-Agent strings problematic will remain. They're just phasing out precise version information.


isn't it concerning that Google decides to go ahead and implement this even though the conversation on GitHub concluded with "it should be rejected by W3C"?


So basically, Google shat their own bed and is just now beginning to realize that it stinks. Attempts to invent the universal internet user toll booth are back on track.


Stupidity. User-agent spoofing is a fact of life for many projects. Whatever feature they come out with to replace the UA will be spoofable soon enough, too.


sorry, does anyone know the link to the original source of this?


No! I loved User-Agent because I could fake other user agents, e.g.

- being the google crawler to get past paywalls

- being a Mac user agent to get free internet access at some hotels


ahhh


More specifically, Google thinks they're the central authority as to what Chrome will do.


If they're the dominant web browser people will assume you are using Chrome anyway.


Annoying, as I just added a user agent based workaround for another Chrome compatibility problem (the increased security on same-site cookies, which can't be handled in a compatible way with all browsers).


Because, there can be only one user agent...!


As usual, this will fuck up the users, and not the techy nerds making such decisions, but the average joe because things on the internet will be broken for them.


How will things be broken? Google is not removing the user agent, they're just freezing it. So all sites that currently depend on the user agent will continue to do just fine. New sites can use client hints instead, which are a much more effective replacement for user agent sniffing.

This solution very specifically places the burden on "techy nerds" and not users, so I'm not sure where you're coming from.


Right, using user agent on the client side has been unsalvageably broken for a long time. Other things, like checking the existence of window.safari or window.chrome are more reliable.
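
e.g. a sketch of those checks (these vendor globals are non-standard and can disappear, which is why they're still a last resort compared to plain feature detection):

    // Rough engine checks via non-standard vendor globals.
    const w = window as any;
    const looksLikeChromium = typeof w.chrome === "object" && w.chrome !== null;
    const looksLikeSafari = typeof w.safari !== "undefined";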

For the server side, I’m not too aware of too many cases it’s useful other than analytics, and there is too much info leakage and fingerprinting happening anyway.

So killing user agent doesn’t really seem user-hostile, save for the fact that the company doing it has near monopoly market share and doesn’t need to provide a user agent, as it’s assumed that everyone is writing code to run on Google’s browser. In that sense it’s a flex.



This last year I've been noticing things breaking on the Internet for me here and there. I'm a Firefox user. This really wasn't the case in most of the past decade.

This kinda reminded me of the late 00's. It was quite common that the odd government or enterprise website was IE6 only.

All hail the new IE6.


I use Safari with no plugins. Even Disney World has a broken website for buying tickets for me. The web is breaking because it's gotten way too complex and the fight against trackers is leading to random failures of things that used to work.


The web is breaking because we are reaching the point where developers are able to assume WebKit/Blink and get away with it. It is imperative that technical folk adopt Firefox to hold back the tide.


Safari is WebKit. The trouble probably isn't the engine, it's ITP messing with some analytics thing.


Which is a shame, but I would lay the blame squarely at the feet of the team who built a checkout that throws errors when their analytics events don't fire. QA should really include a manual run w/ an adblocker...


To be fair, you are using a browser that makes it impossible to test in unless you happen to have a current mac.


That's true, but it's quite likely that simply testing on a couple of available browsers and avoiding browser-specific checks means that Safari (and other non-mainstream browser) users will be fine.

There aren't really that many actual standards-compliance differences between most browsers; the real problem is all the undefined garbage they are forced to run. Back when I ran an HTML/CSS/JavaScript validator in my browser, it frankly shocked me how many mainstream sites weren't even delivering valid HTML/CSS/JavaScript. In my experience developing a pretty dynamic web site (actually a management front-end for a rather complex application), most of our browser differences were caused by bugs that went away simply by providing correct code.

(BTW: my wife has similar problems on her mac)


I had Build-A-Bear not work for me on Firefox at the checkout process. Had to switch to Chrome to make the purchase. But aside from that, I typically don't see any issues.


Give an example.


Seems they considered this issue and created a work-around:

>While removing the User-Agent completely was deemed problematic, as many sites still rely on them, Chrome will no longer update the browser version and will only include a unified version of the OS data.


This is unquestionably good though.

Instead of relying on a user agent, which doesn't tell the entire story, web site developers will need to check whether or not a feature exists in a browser before using it.


this will fuck up the users

That's a downside if it happens, but the upsides (privacy, forcing devs to use feature detection instead, etc.) still mean it's worthwhile.



