
> Yes, but not when talking about cyber-things.

Yes, precisely there.

> Generally, we only enter the realm of security if the information leak is secret or unintentional, neither of which is the case here.

So, you are telling me the user intends the information leak? I'm not sure I understand: You say it's not a security matter if the "leak" is intentional. But then, if a user is transmitting information intentionally ... why would you call that a leak?

Or do you mean the leak is intended by Google or whoever and that is why it's not a security problem?! But then, what if a hacker intentionally installs a back door on your system and uses that to leak your information ... then that wouldn't be a security problem either, would it? Or is that where the "secret" part comes in, and it would only be a security problem if the hacker didn't tell you that they stole all your data?



Yes, it's a security problem when they can do something without your permission. (So I'd argue it's less a leak and more a disclosure when they do have your permission, as is the case here). If it was done secretly then it would be a security problem, but without secrecy or lying, it's simply Google not living up to your privacy preferences.


> Yes, it's a security problem when they can do something without your permission. (So I'd argue it's less a leak and more a disclosure when they do have your permission, as is the case here).

Well, but do they actually have your permission?

> If it was done secretly then it would be a security problem, but without secrecy or lying, it's simply Google not living up to your privacy preferences.

Well, for one, are they not doing their things secretly? Is the mere fact that you can find out about it enough to call it "not secret"? And is the mere fact that you didn't refuse, in a situation where you never really had an option to refuse, enough to call it "permission"?

Let's suppose a food manufacturer puts a new pudding on the market. Included with the package is a 500-page explanation of everything you need to know about it. Somewhere in those 500 pages, all ingredients are listed, most of them under their most obscure names. Among the ingredients is a strong carcinogen, one that contributes nothing to the taste, the look, or anything else you would value. All it does is make the pudding cheaper to produce.

Now, a biochemist could obviously know what is going on if they were to read the 500 pages, so it's not secret that the carcinogen is in the pudding. Also, the packaging says that you agree to the conditions of use in the 500 pages if you open the package, so you gave them permission to feed you that carcinogen.

Would you agree, then, that this pudding is not a health safety risk, it's simply the manufacturer not living up to your health preferences?

Also, I don't really understand how permission can make something not a security problem. It seems like that's all backwards?! I generally would first check a product for security problems, and then give permission based on the presence or absence of security problems. And one of the security risks to check for would be software leaking information to wherever I don't want information to go. Why should the fact that the manufacturer of some piece of software announces or doesn't announce that they leak certain information have any bearing on whether I consider the leak a security problem? If I don't want my information in the hands of Google, then how am I any more secure against that leak just because Google told me about it?
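
To make that concrete, here is a minimal Python sketch of the kind of pre-trust check I mean. It assumes psutil is installed, and the PID and allowlist are made-up placeholders; it just flags outbound connections to endpoints you never agreed to:

    # Sketch only: flag outbound connections from a process to hosts
    # we never agreed to share data with. Assumes psutil is installed;
    # the PID and the allowlist are hypothetical placeholders.
    import psutil

    ALLOWED_HOSTS = {"192.0.2.10"}  # endpoints we accept (placeholder)
    PID = 1234                      # process under inspection (placeholder)

    for conn in psutil.net_connections(kind="inet"):
        if conn.pid == PID and conn.raddr:
            if conn.raddr.ip not in ALLOWED_HOSTS:
                print(f"unexpected connection to {conn.raddr.ip}:{conn.raddr.port}")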


Remember when I mentioned "cyber"? That's because I'm using the terms the way professionals do in cybersecurity contexts. What that means is that the pudding analogy is irrelevant.

It's not a good analogy for a whole host of other reasons, but that's one of them.

You're using a nonstandard definition of computer security. That's your prerogative, but don't be surprised if it continues to cause confusion for those you interact with.


> Remember when I mentioned "cyber"? That's because I'm using the terms the way professionals do in cybersecurity contexts. What that means is that the pudding analogy is irrelevant.

1. Well, one thing that IT security professionals surely don't use is "cyber"; that's a term from the snake-oil corner of the industry.

2. People in IT security most definitely do not distinguish between security problems that the manufacturer intended as a feature and security problems that arose any other way. You create a model of the things you want to protect, and if a property of a product violates that model, then that, by definition, is a security problem in your overall system. The only difference is whether you report it as a vulnerability or not, which would obviously be pointless for an intentional, publicly announced feature. (See the sketch below.)
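
To put that in code rather than words, here is a toy Python sketch; every name in it is made up for illustration, it's not any real product's API:

    # Toy model: a "security problem" is any product property that
    # violates a protection goal, whether or not the vendor documents
    # it as a feature. All names here are illustrative.
    protection_goals = {
        # asset -> parties that must never receive it
        "browsing_history": {"google", "advertisers"},
    }

    product_data_flows = [
        # (asset, recipient, documented_feature)
        ("browsing_history", "google", True),  # announced in the ToS
        ("crash_dumps", "vendor", True),
    ]

    for asset, recipient, documented in product_data_flows:
        if recipient in protection_goals.get(asset, set()):
            # Documentation only decides whether you'd file a
            # vulnerability report, not whether the goal is violated.
            print(f"security problem: {asset} flows to {recipient} "
                  f"(documented: {documented})")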

> It's not a good analogy for a whole host of other reasons, but that's one of them.

Really, even that would not be a good reason, as it smells of essentialism.

> You're using a nonstandard definition of computer security. That's your prerogative, but don't be surprised if it continues to cause confusion for those you interact with.

No, I am using the exact standard definition, and the only sensible one at that. It obviously makes no sense to have a definition of "security" that says nothing about whether your system is secure. If you consider Google having access to your data a threat in your threat model, then any property of your system that gives Google access to your data is a security problem in your system. It's as simple as that.

The only thing that matters is whether your overall system reaches its protection goals, not whether some component by itself would be considered vulnerable in some abstract sense. And that obviously applies in the opposite direction as well: if you run old software with known vulnerabilities that you cannot patch, but you isolate it sufficiently that those vulnerabilities cannot be used by an attacker to violate your protection goals, then that system is considered secure despite the presence of vulnerabilities.
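
And here is the converse case, as a sketch under the same toy model (the component names and the CVE identifier are placeholders):

    # A component with known vulnerabilities can still be part of a
    # secure overall system if isolation keeps those vulnerabilities
    # out of an attacker's reach. Purely illustrative.
    components = {
        "legacy_app":   {"vulns": ["CVE-XXXX-YYYY"], "attacker_reachable": False},
        "web_frontend": {"vulns": [],                "attacker_reachable": True},
    }

    def meets_protection_goals(components):
        # Only vulnerabilities an attacker can actually reach count
        # against the overall system's protection goals.
        return not any(
            c["vulns"] and c["attacker_reachable"]
            for c in components.values()
        )

    print(meets_protection_goals(components))  # True: the vuln is isolated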



