
> We need more of them, not less.

While not sacrificing security, of course.




What security? The very narrow instance where your browser allows a MITM attack, and for some bizarre reason the attacker can't change the user agent to match Chrome's?

My feeling is that if Google's security practices depend on trusting an attacker not to change a user agent, then frankly I'm skeptical these changes are actually about security -- because the Google security team is smart, and I assume they wouldn't do something that pointless.

But I am cautious about jumping to "this is malicious." There is certainly enough anecdotal evidence for a reasonable person to claim that Google targets competitors and tries to use changes like this to hurt them. However, it's not (currently) enough evidence for me. I am still naive enough to believe there is some semblance of good will at the company.

But I feel very comfortable saying that this security change won't do much good, and that the Google security team probably just thought, "why not turn it on?" without caring about potential consequences for competitors, because they don't think about anything outside of Google's ecosystem. I generally don't get the feeling that Google engineers are malicious, just that they're thoughtless and/or careless. I don't get the feeling that they're trying to mess up the web ecosystem, just that they act impulsively and feel very strongly that people shouldn't be questioning them; and that when people do question them, they tend to dig in their heels and become very condescending very quickly.

But again, I know there are Edge devs and Vivaldi devs that would call me naive.


If the browsers in question don't correctly implement all necessary standards to guard against XSS, frame-busting, and MITM attacks, Google will do what it can to protect its users against foot-shooting.

Changing the UA is equivalent to "voiding the warranty," so I'm not surprised Google isn't taking extraordinary measures. At some point, if your users really want to shoot their feet, there's only so much you can do to stop them.
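
(For reference, "frame-busting" is the old-school script below; the modern, standards-based guard is a server-side header. The header values here are just illustrative.)

    // Classic frame-busting guard: if the page has been loaded inside
    // someone else's iframe, break out to the top-level window.
    if (window.top !== window.self) {
      window.top.location = window.self.location;
    }

    // The modern equivalents are response headers, e.g.:
    //   Content-Security-Policy: frame-ancestors 'self'
    //   X-Frame-Options: DENY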


If a browser doesn't implement the standards to guard against MITM attacks, what makes you think it implements the standards to guard against user-agent manipulation during the MITM attack?

You've misunderstood what I'm getting at here. It's not the user that's going to purposefully change their agent -- it's that a browser that is insecure to the point that you can't trust it to log in is also insecure to the point that you can't trust its user agent to be reported correctly.

The entire security exercise is pointless because compromised browsers lie. They don't respect user preferences. An attacker who intercepts and modifies a request isn't going to suddenly start being honest with you when you ask what browser that request came from.
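
To make that concrete, here's a minimal sketch of the kind of intercepting proxy we're talking about (Node, plain HTTP only, UA string purely illustrative) -- rewriting the User-Agent is one line next to everything else it can rewrite:

    // Minimal intercepting proxy: relays requests upstream but rewrites
    // the User-Agent to look like Chrome. It could just as easily rewrite
    // the body or the response. Plain HTTP only -- for HTTPS the attacker
    // has to terminate TLS, which is exactly the compromised scenario
    // being discussed above.
    const http = require('http');

    http.createServer((clientReq, clientRes) => {
      const headers = {
        ...clientReq.headers,
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) ' +
          'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
      };
      const upstream = http.request(
        {
          host: clientReq.headers.host, // forward to wherever the client asked for
          path: clientReq.url,
          method: clientReq.method,
          headers,
        },
        upstreamRes => {
          clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
          upstreamRes.pipe(clientRes);
        }
      );
      clientReq.pipe(upstream);
    }).listen(8080);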


The code path for changing the UA and the code path for implementing XSS protection are different.


No, User-Agent is no longer a forbidden header for Javascript fetch requests[0].

To be fair, both Chrome and Firefox have outstanding bugs where they haven't yet implemented the correct specs. But there is no reason to assume that a spec-compliant browser will block Javascript from setting the User Agent for a request. It's likely to allow it, because allowing it is the correct behavior.
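
Concretely, a spec-compliant browser should let a page script do something like this (the endpoint is a placeholder; whether the header actually goes out today depends on the Chrome/Firefox bugs just mentioned):

    // Per the Fetch spec, User-Agent is not a forbidden header name,
    // so setting it from script should be honored by a compliant browser.
    // (Chrome and Firefox currently still strip or ignore it -- see [0].)
    fetch('https://example.com/whoami', {  // placeholder endpoint
      headers: {
        'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 ' +
          '(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
      },
    })
      .then(res => res.text())
      .then(body => console.log(body));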

Even if it wasn't the correct behavior, it's silly to assume that a browser that doesn't implement XSS protection is suddenly going to get good security when it comes to implementing UA freezing in request headers. I don't think there's a world where a browser maintainer says, "it's too much work for me to respect CORS, but I really want to make sure I'm following this obscure forbidden headers list".

[0]: https://developer.mozilla.org/en-US/docs/Glossary/Forbidden_...


Software security is not a super-goal. Acceptable risk levels vary with circumstances. If you browse Wikipedia on a kiosk-style machine, it doesn't really matter whether it's mining Bitcoin or not.

Or if you're visiting North Korea, then perhaps using your host's prescribed software stack is preferable to bringing your own.

Even logging into an account from an insecure machine can be acceptable if you're using a low-value account. Some people do use throwaways regularly.


Google's assumption for its users' accounts, though, is that they're used as "keys to the kingdom." They're not optimizing for the throwaway-account experience when it comes to security.


I felt compelled to once again post the classic quote that's taken on a whole new meaning in this coming digital dystopia:

"Those who give up freedom for security deserve neither."

The vision of "security" that these large companies have is to secure complete control over every aspect of your life, and the sooner the population realises that and stops believing their propaganda, the better. Google's continued destruction of the Internet through things like obfuscating URLs, purposefully degrading search results, and invasive reCAPTCHA-based surveillance is absolutely disgusting.


Whose security? If Google can track everything I do online, that's not 'secure' in my book, so Chrome is out.


Security is distinct from privacy. The four mainstream browsers - Chrome, Firefox, Edge and Safari - have the most secure software, regardless of their producers' business models and data hygiene.


> The four mainstream browsers - Chrome, Firefox, Edge and Safari - have the most secure software

I disagree that they are "the most secure" browsers, let alone software. They fail to isolate remote scripts properly; the fact that people were able to execute timing attacks against the CPU (Spectre et al.) shows that they are not really very secure.

Browsers which don't execute Javascript and advanced CSS (Lynx being one extreme example) are going to be much more secure by default.


There are four major dimensions to security: attack surface; depth of defense, or how much an attacker can do once they're in; proactive measures to find security bugs (e.g., fuzzing); and code quality.

You're focusing on attack surface. But from a security standpoint, attack surface is probably the least important factor. Every sufficiently large application has a hole in it, and all attack surface does is crudely control how likely an attacker is to stumble across that hole. Defense in depth, by contrast, lets you keep the attacker from doing bad things such as installing ransomware on your computer just because your HTML parser had a buffer overflow.

The major browsers spend a lot of time sandboxing their scripts in separate processes, and then disabling capabilities of those processes using techniques such as pledge(2), giving them much better defense in depth. They also put a lot more effort into finding and closing security bugs through techniques such as fuzzing. No one disputes that they have a much larger attack surface, but they put much more effort into mitigating the vulnerabilities that surface exposes.

I should also bring up Spectre because you did. At its core, Spectre allows you to read arbitrary memory in your current address space, nothing more. As a result, it basically means that you can't build an effective in-process sandbox... which everyone already knew to begin with. What Spectre did was show how easy it is to do such arbitrary memory reads, since you can proxy them through code as innocent as an array bounds check. There are mitigations for this, which require rebuilding your entire application and all libraries with special mitigation flags... guess which browser is more likely to do that?
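
For anyone who hasn't seen it, the "innocent array bounds check" from Spectre variant 1 looks roughly like this (just the gadget shape rendered in JS, not a working exploit -- the real attack also needs branch-predictor training and a cache-timing readout):

    // Spectre v1 gadget shape: train the branch predictor with in-bounds
    // indexes, then call with an out-of-bounds index. The CPU speculatively
    // reads past the array and touches probe[] at a secret-dependent offset,
    // leaving a cache footprint a timing side channel can later recover.
    function gadget(index, array1, probe) {
      if (index < array1.length) {          // the "innocent" bounds check
        const secret = array1[index];       // speculative out-of-bounds read
        return probe[secret * 4096];        // secret-dependent cache access
      }
      return 0;
    }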


This is kind of a strange analysis. Somewhat infamously, Dan Bernstein, who is something of a pioneer in these privilege-separated defensive designs, forswore them in a retrospective paper about qmail. Really, though, I'm not sure I'm clear on the distinction you're drawing between attack surface reduction and privilege separation, since both techniques are essentially about reducing the impact of bugs without eliminating the bugs themselves.

You might more coherently reduce security to "mitigation" and "prevention", but then that doesn't make much of an argument about the topic at hand.


What I meant by "attack surface" here is probably a lot narrower than what you're used to. I'm using it to focus on code size. I was trying to visualize it as "how many opportunities do you have to try to break the system" (surface area) versus "what can you actually do once you've made the first breach" (volume), and I didn't fully rewrite the explanation after dropping that surface-area/volume framing.


Google actually has additional security checks that require JavaScript, and they won't let you log into a secured account with JavaScript disabled.

https://m.slashdot.org/story/347855


> Security is distinct from privacy.

No, it's not. Security is not a goal in itself; it cannot be. Security is only about guaranteeing other goals; there is no security absent all other goals. What it means for software to be insecure is that it doesn't ensure your goals are met. For many, privacy is an important goal. If the software that you are using compromises the privacy that you value, then that software is not secure.


I am much more concerned about someone being able to impersonate me (security) than to know what I'm doing (privacy). This doesn't mean I'm unconcerned about the latter.

If secure software compromises privacy in ways that concern you, it may not be the right software for you to use, but it is still secure (and potentially more secure than other software that you feel better protects your privacy).


> I am much more concerned about someone being able to impersonate me

Well, great?!

> (security)

Erm ... no?

> than to know what I'm doing (privacy)

Privacy is not about what your software knows, it's about who else gets access to that information. Software allowing access to your information to parties other than the ones that you intended is a vulnerability class commonly called "information leak".

> This doesn't mean im unconcerned about the latter.

And thus it is, as per the common understanding of the word, a security concern.

> If secure software compromises privacy in ways that concern you

That's just logical nonsense. You might as well be saying "If secure software kills you in ways that concern you, [...]".

> it may not be the right software for you to use, but it is still secure

So, let's assume your browser had a bug where for some reason, every website could read all the data in the browser. Like, could access the storage, cookies, cache, history, page contents, everything. But no write access. This is obviously purely a privacy violation ... but, according to your definition, not a security problem, right?


> And thus it is, as per the common understanding of the word, a security concern

Yes, but not when talking about cyber-things. Generally, we only enter the realm of security if the information leak is secret or unintentional, neither of which is the case here.


> Yes, but not when talking about cyber-things.

Yes, precisely there.

> Generally, we only enter the realm of security if the information leak is secret or unintentional, neither of which is the case here.

So, you are telling me the user is intending the information leak? I'm not sure I understand: You say it's not a security matter if the "leak" is intentional. But then, if a user is transmitting information intentionally ... why would you call that a leak?

Or do you mean the leak is intended by Google or whoever and that is why it's not a security problem?! But then, what if a hacker intentionally installs a back door on your system and uses that to leak your information ... then that wouldn't be a security problem either, would it? Or is that where the "secret" part comes in, and it would only be a security problem if the hacker didn't tell you that they stole all your data?


Yes, it's a security problem when they can do something without your permission. (So I'd argue it's less a leak and more a disclosure when they do have your permission, as is the case here). If it was done secretly then it would be a security problem, but without secrecy or lying, it's simply Google not living up to your privacy preferences.


> Yes, it's a security problem when they can do something without your permission. (So I'd argue it's less a leak and more a disclosure when they do have your permission, as is the case here).

Well, but do they actually have your permission?

> If it was done secretly then it would be a security problem, but without secrecy or lying, it's simply Google not living up to your privacy preferences.

Well, for one, are they not doing their things secretly? Is the mere fact that you can find out about it enough to call it "not secret"? Is the mere fact that you didn't refuse, when you never really had an option to refuse, enough to call it permission?

Let's suppose a food manufacturer puts a new pudding on the market. Included with the package is a 500-page explanation of everything that you need to know about it. Somewhere in those 500 pages, all ingredients are listed, most of them under their most unusual names. Among the ingredients is a strong carcinogen. A carcinogen that doesn't contribute anything to the taste, the look, or anything else you would value. All it does is make the pudding cheaper to produce.

Now, a biochemist could obviously know what is going on if they were to read the 500 pages, so it's not secret that the carcinogen is in the pudding. Also, the packaging says that you agree to the conditions of use in the 500 pages if you open the package, so you gave them permission to feed you that carcinogen.

Would you agree, then, that this pudding is not a health safety risk, it's simply the manufacturer not living up to your health preferences?

Also, I don't really understand how permission can make something not a security problem. It seems like that's all backwards?! I generally would first check a product for security problems, and then give permission based on the presence or absence of security problems. And one of the security risks to check for would be software leaking information to wherever I don't want information to leak. Why should the fact that the manufacturer of some piece of software announces or doesn't announce that they leak certain information have any relevance to whether I consider the leak a security problem? If I don't want my information in the hands of Google, then how am I any more secure against that leak just because Google told me about it?


Remember when I mentioned "cyber"? That's because I'm using the terms in the context that professionals do in cybersecurity contexts. What that means is that the pudding analogy is irrelevant.

It's not a good analogy for a whole host of other reasons, but that's one of them.

You're using a nonstandard definition of computer security. That's your prerogative, but don't be surprised if it continues to cause confusion for those you interact with.


> Remember when I mentioned "cyber"? That's because I'm using the terms in the context that professionals do in cybersecurity contexts. What that means is that the pudding analogy is irrelevant.

1. Well, one thing that IT security professionals surely don't use is "cyber", that's a term from the snake oil corner of the industry.

2. People in IT security most definitely do not distinguish between security problems that the manufacturer intended as a feature and security problems that were caused any other way. You create a model of things you want to protect, and if a property of a product violates that, then that is the definition of a security problem in your overall system, obviously. The only difference would be whether you report it as a vulnerability or not, as that would obviously be pointless for intentional, publicly announced features.

> It's not a good analogy for a whole host of other reasons, but that's one of them.

Really, even that would not be a good reason, as it smells of essentialism.

> You're using a nonstandard definition of computer security. That's your prerogative, but don't be surprised if it continues to cause confusion for those you interact with.

No, I am using the exact standard definition, and the only sensible one at that. It obviously makes no sense to have a definition of "security" that says nothing about whether your system is secure. If you consider Google having access to your data a threat in your threat model, then whatever properties of your system that give Google access to your data is a security problem in your system, it's as simple as that.

The only thing that matters is whether your overall system reaches its protection goals or not, not whether some component by itself would be considered vulnerable in some abstract sense. And that obviously applies in the opposite direction as well: If you run some old software with known vulnerabilities that you cannot patch, but you somehow isolate it sufficiently that those vulnerabilities cannot be used by an attacker to violate your protection goals, then that system is considered secure despite the presence of vulnerabilities.



