Security is distinct from privacy. The four mainstream browsers - Chrome, Firefox, Edge and Safari - have the most secure software, regardless of their producers' business models and data hygiene.
> The four mainstream browsers - Chrome, Firefox, Edge and Safari - have the most secure software
I disagree that they are "the most secure" browsers, let alone software. They fail to isolate remote scripts properly; the fact that people were able to execute timing attacks against the CPU (Spectre et al.) shows that they are not really very secure.
Browsers which don't execute JavaScript and advanced CSS (Lynx being one extreme example) are going to be much more secure by default.
There are four major dimensions to security: attack surface; depth of defense, or how much an attacker can do once they're in; proactive measures to find security bugs (e.g., fuzzing); and code quality.
You're focusing on attack surface. But from a security standpoint, attack surface is probably the least important factor. Every sufficiently large application has a hole in it, and all attack surface does is crudely control how likely it is to stumble across that hole. Defense in depth, by contrast, lets you keep the attacker from doing bad things such as installing ransomware on your computer just because your HTML parser had a buffer overflow.
The major browsers spend a lot of time sandboxing their scripts in separate processes, and then disabling capabilities of those processes using techniques such as pledge(2), giving them much better defense in depth. They also put a lot more effort into finding and closing security bugs through techniques such as fuzzing. No one questions their much larger attack surface, but they put much more effort into ameliorating the vulnerabilities that surface exposes.
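To make the capability-dropping idea concrete, here is a minimal sketch of how a sandboxed worker process might use OpenBSD's pledge(2); it assumes an OpenBSD target, and `render_untrusted_page()` is a made-up placeholder rather than any real browser function.

```c
/* Minimal sketch of capability dropping with OpenBSD's pledge(2).
 * Assumes an OpenBSD target; render_untrusted_page() is a made-up
 * placeholder, not an actual browser API. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Promise only "stdio": no filesystem access, no sockets, no exec.
     * Any later violation terminates the process. */
    if (pledge("stdio", NULL) == -1) {
        perror("pledge");
        return 1;
    }

    /* From here on, even a fully compromised parser or JS engine can
     * only read/write the descriptors it already holds, e.g. the IPC
     * pipe back to the privileged browser process. */
    /* render_untrusted_page(); */

    return 0;
}
```

Chromium's Linux sandbox does the analogous thing with seccomp-bpf and namespaces instead of pledge, but the idea is the same: assume the renderer will be compromised, and limit what that compromise is worth.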
I should also bring up Spectre, since you did. At its core, Spectre allows you to read arbitrary memory in your current address space, nothing more. As a result, it basically means that you can't build an effective in-process sandbox... which everyone already knew to begin with. What Spectre did was show how easy it is to do such arbitrary memory reads, since you can proxy them through code as innocent as an array bounds check. There are mitigations for this, which require rebuilding your entire application and all libraries with special mitigation flags... guess which browser is more likely to do that?
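To show what "code as innocent as an array bounds check" means, here is a minimal sketch of the classic Spectre v1 (bounds-check bypass) gadget, using the illustrative array names from the original Spectre paper; it shows only the vulnerable pattern, not the cache-timing probe an attacker would pair it with.

```c
/* Sketch of the Spectre v1 gadget pattern. array1, array1_size, and
 * array2 are illustrative names from the Spectre paper, not anything
 * browser-specific. */
#include <stdint.h>
#include <stddef.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 512];

uint8_t victim_function(size_t x) {
    /* The bounds check looks airtight, but the CPU may speculatively
     * execute the body with an out-of-range x before the comparison
     * resolves, touching a cache line of array2 that depends on the
     * byte at array1[x]. A later cache-timing probe recovers that byte. */
    if (x < array1_size) {
        return array2[array1[x] * 512];
    }
    return 0;
}
```

Compilers offer mitigations for this pattern (e.g. Clang's -mspeculative-load-hardening or MSVC's /Qspectre, which mask or fence the speculative load), but they only help if the whole binary and its libraries are rebuilt with them, which is the point above.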
This is kind of a strange analysis. Somewhat infamously, Dan Bernstein, a pioneer of these privilege-separated defensive designs, forswore them in a retrospective paper about qmail. Really, though, I'm not sure I'm clear on the distinction you're drawing between attack surface reduction and privilege separation, since both techniques are essentially about reducing the impact of bugs without eliminating the bugs themselves.
You might more coherently reduce security to "mitigation" and "prevention", but then that doesn't make much of an argument about the topic at hand.
What I meant by "attack surface" here is probably a lot narrower than what you're used to. I'm using it to focus on the code-size concern. I was trying to visualize it as "how many opportunities do you have to try to break the system" (surface area) versus "what can you actually do once you've made the first breach" (volume), and I didn't fully rewrite the explanation when I dropped that surface-area/volume distinction.
No, it's not. Security is not a goal in itself; it cannot be. Security is only about guaranteeing other goals, and there is no security absent all other goals. What it means for software to be insecure is that it doesn't ensure your goals are met. For many, privacy is an important goal. If the software that you are using compromises privacy that you value, then that software is not secure.
I am much more concerned about someone being able to impersonate me (security) than to know what I'm doing (privacy). This doesn't mean I'm unconcerned about the latter.
If secure software compromises privacy in ways that concern you, it may not be the right software for you to use, but it is still secure (and potentially more secure than other software that you feel better protects your privacy).
> I am much more concerned about someone being able to impersonate me
Well, great?!
> (security)
Erm ... no?
> than to know what I'm doing (privacy)
Privacy is not about what your software knows, it's about who else gets access to that information. Software allowing access to your information to parties other than the ones that you intended is a vulnerability class commonly called "information leak".
> This doesn't mean I'm unconcerned about the latter.
And thus it is, as per the common understanding of the word, a security concern.
> If secure software compromises privacy in ways that concern you
That's just logical nonsense. You might as well be saying "If secure software kills you in ways that concern you, [...]".
> it may not be the right software for you to use, but it is still secure
So, let's assume your browser had a bug where for some reason, every website could read all the data in the browser. Like, could access the storage, cookies, cache, history, page contents, everything. But no write access. This is obviously purely a privacy violation ... but, according to your definition, not a security problem, right?
> And thus it is, as per the common understanding of the word, a security concern
Yes, but not when talking about cyber-things. Generally, we only enter the realm of security if the information leak is secret or unintentional, neither of which is the case here.
> Generally, we only enter the realm of security if the information leak is secret or unintentional, neither of which is the case here.
So, you are telling me the user is intending the information leak? I'm not sure I understand: You say it's not a security matter if the "leak" is intentional. But then, if a user is transmitting information intentionally ... why would you call that a leak?
Or do you mean the leak is intended by Google or whoever and that is why it's not a security problem?! But then, what if a hacker intentionally installs a back door on your system and uses that to leak your information ... then that wouldn't be a security problem either, would it? Or is that where the "secret" part comes in, and it would only be a security problem if the hacker didn't tell you that they stole all your data?
Yes, it's a security problem when they can do something without your permission. (So I'd argue it's less a leak and more a disclosure when they do have your permission, as is the case here). If it was done secretly then it would be a security problem, but without secrecy or lying, it's simply Google not living up to your privacy preferences.
> Yes, it's a security problem when they can do something without your permission. (So I'd argue it's less a leak and more a disclosure when they do have your permission, as is the case here).
Well, but do they actually have your permission?
> If it was done secretly then it would be a security problem, but without secrecy or lying, it's simply Google not living up to your privacy preferences.
Well, for one, are they not doing their things secretly? Is the mere fact that you can find out about it enough to call it "not secret"? Is the mere fact that you didn't refuse, when you didn't even really have an option to refuse, enough to call it permission?
Let's suppose a food manufacturer put a new pudding on the market. Included with the package is a 500-page explanation of everything you need to know about it. Somewhere in those 500 pages, all ingredients are listed, most of them under the most unusual names. Among the ingredients is a strong carcinogen. A carcinogen that doesn't contribute anything to the taste, the look, or anything else you would value. All it does is make the pudding cheaper to produce.
Now, a biochemist could obviously know what is going on if they were to read the 500 pages, so it's not secret that the carcinogen is in the pudding. Also, the packaging says that you agree to the conditions of use in the 500 pages if you open the package, so you gave them permission to feed you that carcinogen.
Would you agree, then, that this pudding is not a health safety risk, it's simply the manufacturer not living up to your health preferences?
Also, I don't really understand how permission can make something not a security problem. It seems like that's all backwards?! I generally would first check a product for security problems, and then give permission based on the presence or absence of security problems. And one of the security risks to check for would be software leaking information to wherever I don't want information to leak. Why should the fact that the manufacturer of some piece of software announces or doesn't announce that they leak certain information have any relevance to whether I consider the leak a security problem? If I don't want my information in the hands of Google, then how am I any more secure against that leak just because Google told me about it?
Remember when I mentioned "cyber"? That's because I'm using the terms in the context that professionals do in cybersecurity contexts. What that means is that the pudding analogy is irrelevant.
It's not a good analogy for a whole host of other reasons, but that's one of them.
You're using a nonstandard definition of computer security. That's your prerogative, but don't be surprised if it continues to cause confusion for those you interact with.
> Remember when I mentioned "cyber"? That's because I'm using the terms in the context that professionals do in cybersecurity contexts. What that means is that the pudding analogy is irrelevant.
1. Well, one thing that IT security professionals surely don't use is "cyber"; that's a term from the snake-oil corner of the industry.
2. People in IT security most definitely do not distinguish between security problems that the manufacturer intended as a feature and security problems that were caused any other way. You create a model of things you want to protect, and if a property of a product violates that, then that is the definition of a security problem in your overall system, obviously. The only difference would be whether you report it as a vulnerability or not, as that would obviously be pointless for intentional, publicly announced features.
> It's not a good analogy for a whole host of other reasons, but that's one of them.
Really, even that would not be a good reason, as it smells of essentialism.
> You're using a nonstandard definition of computer security. That's your prerogative, but don't be surprised if it continues to cause confusion for those you interact with.
No, I am using the exact standard definition, and the only sensible one at that. It obviously makes no sense to have a definition of "security" that says nothing about whether your system is secure. If you consider Google having access to your data a threat in your threat model, then any property of your system that gives Google access to your data is a security problem in your system; it's as simple as that.
The only thing that matters is whether your overall system reaches its protection goals or not, not whether some component by itself would be considered vulnerable in some abstract sense. And that obviously applies in the opposite direction as well: if you run some old software with known vulnerabilities that you cannot patch, but you somehow isolate it sufficiently that those vulnerabilities cannot be used by an attacker to violate your protection goals, then that system is considered secure despite the presence of vulnerabilities.