> Secret questions

No, my mother's maiden name is not a secret. And some questions like "who was your best friend in elementary school?" might have different answers depending on when you ask me. Plus, unless my best friend's name was Jose Pawel Mustafa Mungabi de la Svenson-Kurosawaskiwitz (we used to call him Joe) it's pretty easy to guess with a dictionary attack. The only way to answer these questions securely is to make up an answer that's impossible to guess, which results in a second password.
> Your password must contain these particular characters
I understand that this rule is to prevent people from using passwords like "kittycat", but "k!ttyc4T" is still less secure than "horse battery staple correct".
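A quick back-of-the-envelope sketch of that comparison (Python; it naively treats every character or word as uniformly random, which actually flatters "k!ttyc4T" since it's really just a predictable mangling of a dictionary word):

```python
import math

def charset_entropy_bits(length: int, charset_size: int) -> float:
    # Naive ceiling: every position drawn uniformly from the character set.
    return length * math.log2(charset_size)

def wordlist_entropy_bits(words: int, wordlist_size: int) -> float:
    # Entropy of a passphrase of words drawn uniformly from a word list.
    return words * math.log2(wordlist_size)

# "k!ttyc4T": 8 characters over upper+lower+digits+symbols (~72 chars) at best.
print(charset_entropy_bits(8, 72))     # ~49 bits as an optimistic ceiling; far less in practice
# "correct horse battery staple": 4 words from a 2048-word list (the XKCD figure).
print(wordlist_entropy_bits(4, 2048))  # 44 bits, and that holds as long as the words are random
```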
There seems to be an easy solution: use a password manager and save the answer to the question as an additional password.
(This is actually a feature request to any password manager's product team: it's time to treat things like 2FA recovery codes and secret-question answers as first-class citizens in your product.)
That's what I do as well, but that defeats the purpose of the secret question being something only I know and will not forget. And that's because I am aware of the flaw in this system; someone naive might actually fill out that question with the honest answer and leave themselves wide open to being exploited. Password managers are not a solution; they are a band-aid fix to a problem we should not be having in the first place.
> Jose Pawel Mustafa Mungabi de la Svenson-Kurosawaskiwitz
How in the #%*^ did you figure out my secret question?
I absolutely hate security theatre. And these kinds of things are just that. In fact, I'm sure that difficult-to-remember passwords make us less secure, as we forget them or write them down.
I remember that a not-so-recent investigation recommended five words. (Also you got the order wrong, "correct" was at the front, but you'd probably get it right second try, so the concept is still good.)
Yeah, I was referencing XKCD 936. Of course everyone should use a set of dice to roll their own truly random diceware passphrase and use five or six words. My point was that adding numbers and other special characters does not actually make the password more secure than four, five, six or however many random plain English words using only lower-case characters, so this rule should never be enforced.
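For anyone who wants to try it, a minimal sketch of that diceware approach in Python (the `secrets` module's OS randomness standing in for physical dice; the local filename and its "roll<TAB>word" layout are assumptions based on the EFF large wordlist format):

```python
import secrets

# Assumes the EFF large wordlist (7776 lines like "11111\tabacus") was saved
# locally under this illustrative filename.
with open("eff_large_wordlist.txt") as f:
    words = [line.split()[1] for line in f if line.strip()]

def passphrase(n_words: int = 6) -> str:
    # secrets.choice draws from the OS CSPRNG, the software stand-in for dice rolls.
    return " ".join(secrets.choice(words) for _ in range(n_words))

print(passphrase())  # e.g. "ostrich resurface navigate tactile unwind frosting"
```

With 7776 words, each word adds about 12.9 bits, so six words is roughly 77 bits of entropy.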
Oh, that was probably the point, that it's very weak but still better than k!ttyc4T. We both got downvoted for not picking up that implication. This thread requires too much mind-reading for me.
Note that most of the signers are from companies which collect substantial consumer information for revenue purposes. Hence the emphasis on "updating". And the absence of "turn up browser security levels to max" or "get a good ad blocker".
Also, any password manager that's "cloud based" is potentially a security hole. Yeah, they say the server is secure. Right.
> Also, any password manager that's "cloud based" is potentially a security hole. Yeah, they say the server is secure. Right.
The entire point of end-to-end encryption is that you don't need to trust the server. If your password manager has access to your secrets (i.e. you don't control the secret key/password itself), then you have bigger problems than a potentially untrustworthy host.
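Roughly, the zero-knowledge idea looks like this (a minimal sketch using Python's `cryptography` package; the KDF parameters and data layout are illustrative, not any vendor's actual scheme):

```python
import base64
import json
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def vault_key(master_password: bytes, salt: bytes) -> bytes:
    # The key is derived on the client from the master password; it never leaves the device.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password))

salt = os.urandom(16)
key = vault_key(b"correct horse battery staple", salt)
ciphertext = Fernet(key).encrypt(json.dumps({"example.com": "hunter2"}).encode())

# Only salt + ciphertext get synced. A compromised (or subpoenaed) server holds
# nothing it can decrypt without the master password.
```

The weak links are then the strength of the master password, the KDF cost, and whatever client code does the decrypting, which is exactly where the follow-up comments here push back.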
We use 1Password at work, at my suggestion from 10-12 years ago, back when it was an app on your device with an encrypted on-device file you could choose to store on iCloud/Dropbox/GoogleDrive/wherever.
Then they changed to the web app and implemented teams, which is what we use today.
Work has decided the risk of 1Password going rogue is acceptable - but that's in the full knowledge that since they are serving the Javascript that's doing the client side encryption/decryption, there's no guarantee they can't serve (or be coerced into serving) malicious JavaScript that decrypts and exfiltrates all credentials and secrets any user has access to.
Pragmatically, I'm (mostly) OK with accepting that. If we have a threat model that realistically includes the sort of state-level actor who could coerce a company like 1Password to launch an exploit against us - then we've lost already. Like James Mickens said, "YOU'RE STILL GONNA BE MOSSAD'D UPON!!!"
One of my hobbies is recreational paranoia, though. So I use something else (KeePass) for my personal stuff now.
To be fair, this letter is about information security, not privacy.
Maximizing privacy is a somewhat different goal, and recommendations for how to do so would differ from person to person. Some people really don't care about privacy. And for some other people, adblocker and tracking-blocker software is sufficient for their privacy needs. Whereas for certain people in certain parts of the world, literally the only way they can browse the Web safely is with Tor running on a temporary TailsOS drive.
A significant fraction of every high-profile industry security person I know has signed this thing. There are people on that list that I'm not super impressed with, but also people everybody is impressed with. No argument that this thing is motivated by commercial interests is going to survive, and a lot of this is advice that security cool kids have been giving for upwards of 10 years.
Updating software is good advice. Do you realize how many CVEs are reported on a daily basis? Once you've got a password manager you're largely protected against phishing, so the biggest target becomes your computer, and the most likely way to compromise that would be through outdated software with public vulnerabilities.
What do you expect turning your browser security levels up to the max to do? Browsers are designed to be secure with their default settings.
Almost all CVEs are basically irrelevant to everyone that doesn't have some obligation to keep on top of patching them. Meanwhile, auto-updates are RCE by default.
Indeed. I'm far more worried about picking up a supply-chain hack via updates than I am that some low-profile denial-of-service attack will actually affect me; the updates themselves historically have caused me far more actual denials of service than they fix.
CVEs are better viewed as "a uniform numbering system that ensures we are talking about the same bug" today. But updating software is good anyway.
> Browsers are designed to be secure with their default settings.
Not quite. They are usually designed to be both fast and safe, but neither goal is considered "done" yet in modern ones. If you want max security, you'll likely have to disable all performance boosts like JS JIT.
The LastPass hack is a good example of that happening. Weak master passwords and a small number of KDF rounds made the situation worse.
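As a back-of-the-envelope illustration of why both of those factors matter (Python; the attacker hash rate is an assumed round number, not a measurement of anything):

```python
ASSUMED_GUESSES_PER_SEC_AT_1_ITERATION = 1e10  # hypothetical cracking-rig throughput

def years_to_exhaust(entropy_bits: float, kdf_iterations: int) -> float:
    # Each guess costs kdf_iterations hash computations, so cracking time scales
    # linearly with the iteration count and exponentially with password entropy.
    guesses = 2 ** entropy_bits
    seconds = guesses * kdf_iterations / ASSUMED_GUESSES_PER_SEC_AT_1_ITERATION
    return seconds / (365 * 24 * 3600)

print(years_to_exhaust(40, 5_000))    # weak master password, low legacy iteration count: days
print(years_to_exhaust(40, 600_000))  # same password, modern iteration count: a couple of years
print(years_to_exhaust(77, 600_000))  # ~6 random diceware words: effectively forever
```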
Realistically, most users benefit from using a reputable cloud-based password manager, and should focus on securing it with a strong password and MFA. You should also change your passwords if your password manager is breached.
Yeah - but where does the code doing the encryption/decryption come from? 1Password serves me the Javascript that encrypts/decrypts my vault every time I open my work 1PW webapp.
It's not reasonable to assume their server is "secure" not just from evil-hakzors and script kiddies, but also from government agencies with things like Technical Capability Notices and secret FISA warrants and NSLs with gag orders (or whatever their jurisdictional equivalents are), and also from threats like offensive cybersecurity firms with clients like disgruntled royalty in nepotistic monarchic nation-states who send bonesaw murder teams after dissident journalists.
I (mostly) trust AES (assuming it's properly implemented, and I exclude the NSA from that, and the equivalent agencies in at least a handful of other major nation states).
I have a lot less trust in owners and executives at my password vault vendor or their cloud hosting company or their software supply chain. If I were them, I'm pretty sure I wouldn't be able to stick up for my users the way Ladar Levison and Lavabit did. There's no doubt that the right federal agency could apply enough pressure on me and my family/friends to make me give up all my users' unencrypted vaults. Sorry, but true.
Max browser security levels and a good ad blocker will not do more to prevent you from getting phished or hacked than an encryption-audited, cloud-based, zero-knowledge vault, where server compromise is irrelevant. All competent #1 cloud-based password managers are like that.
> All competent #1 cloud-based password managers are like that.
If you say so...
Sadly there could potentially also be a supply chain attack that happens to make its way into the client you use to view your supposedly secure vault. Odds are they use npm, btw.
Phishing-resistant MFA is worth mentioning. You and all your staff with access to critical credentials should have something like YubiKeys, so you can't (as easily) get tricked into entering some TOTP (or email/SMS) code into a fraudulent website.
At least that ups the threshold to "someone who can not only poison your dns or MITM your network, but can also generate trusted TLS certs for the website domain they're phishing for".
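For anyone wondering what actually makes a hardware key "phishing-resistant" rather than just another code: the response is cryptographically bound to the origin it was registered for. A toy model of that idea (HMAC standing in for the real signature; this is not the actual WebAuthn protocol, just its shape):

```python
import hashlib
import hmac

def toy_assertion(credential_secret: bytes, rp_id: str, challenge: bytes) -> bytes:
    # A real authenticator signs over (relying-party ID, challenge) with a per-site
    # private key; HMAC is a stand-in so the sketch stays self-contained.
    message = hashlib.sha256(rp_id.encode()).digest() + challenge
    return hmac.new(credential_secret, message, hashlib.sha256).digest()

secret = b"per-site credential secret"
challenge = b"nonce from the genuine server"

legit = toy_assertion(secret, "accounts.example.com", challenge)
phish = toy_assertion(secret, "accounts.examp1e.com", challenge)
assert legit != phish  # the look-alike origin gets an assertion that's useless upstream
```

A TOTP or SMS code, by contrast, is just six digits the user can be talked into typing anywhere, which is exactly what phishing kits rely on.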
And SMS should be retired completely for authentication, not simply deprecated as NIST did in SP 800-63B, with companies like banks assuming full liability for losses to others if they continue with this unacceptably insecure mechanism.
"The lobby group for Australian telcos has declared that SMS technology should no longer be considered a safe means of verifying the identity of an individual during a banking transaction."
The update thing struck me as slightly out of touch; if I were to make a list of my top 10 most used consumer products that can be updated, probably 8-9 of them have abused updates to make things worse.
We spend so much time training people that if you hit update, it’s going to suck: you’re going to suddenly get ads in your favorite app, or some new feature is going to get paywalled, or the UI is going to completely change with no warning. It seems counterproductive to accept that our industry does this stuff and then publish an open letter finger-wagging people for not updating.
Password managers are one of those things I am still stunned is staying popular for advice, even though it's nearly akin to "use one password for everything". I assume a big part of it is the affiliate deals subscription password managers have with infosec influencers.
There are absolutely valid use cases, but they are much fewer and further between than people claim.
It's quite different from using one password everywhere. The threat I wish to protect against is that some random website I sign up for will mismanage passwords and end up leaking them, causing every website using that password to be compromised. Remembering hundreds of unique passwords is unreasonable; thus, password manager.
Considering the number of times my email has ended up in a leaked dataset, and that the only accounts I've ever had visibly compromised were ones I did not use a password manager for, this seems to be the correct mindset.
No. If a shitty service stores your password in plain and leaks it, this won't affect your other accounts, unless you reuse passwords.
I simply can't remember dozens of passwords, so a pw manager is the best I can do realistically. Yes, it's a single point of failure, but so is using the same pw everywhere.
It's completely the opposite of "use one password for everything". When you do that any single compromise of a website you have an account on means all your accounts are likely compromised. With a password manager you have a long random password for every single website, meaning a compromise is siloed to just that site.
Even if your password vault is stored on the cloud you're likely using a very secure passphrase for it that has 0 reuse anywhere else, so even if your password vault is stolen it's impossible to brute force.
For a hacker to compromise your password vault it would likely involve hacking your computer, which, if you're keeping your software updated, is a very difficult task these days without the target user's active help.
Depends on your threat model. I went all in on 1Password when I realized that realistically the most likely attack vector for me is phishing, which it absolutely protects against (it won't be duped by a fake site into auto-filling the password).
It would be interesting to do a study (if one hasn't already been done) on whether password manager use reduces the number of compromises an individual has or not.
I think if used correctly they can be a net benefit, but the question is how many users actually use them correctly. Isn't the security they offer based on a user only having to remember a single complex and unique password for the manager, and then letting it handle unique and complex passwords for everything else? The question is, however, how many users just set the password manager password to 'ImSecure123!' and use it to autofill the same old reused passwords they've always used?
This is why all the top/good password managers will alert you to: 1) password reuse between sites and 2) weak passwords. One can hope that users will listen to those suggestions. In an organization, you can enforce compliance.
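The reuse check itself is simple enough to run entirely locally against the decrypted vault; a toy sketch of the idea (the vault contents here are obviously made up):

```python
from collections import defaultdict

# Hypothetical decrypted vault: site -> password.
vault = {
    "example.com": "hunter2",
    "othersite.test": "hunter2",        # reused
    "bank.test": "T9#vLq0z!mXw4RbE",
}

sites_by_password = defaultdict(list)
for site, password in vault.items():
    sites_by_password[password].append(site)

for password, sites in sites_by_password.items():
    if len(sites) > 1:
        print("Reused across:", ", ".join(sites))
    if len(password) < 12:
        print("Weak (short) password on:", ", ".join(sites))
```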
> even though it's nearly akin to "use one password for everything"
It's not at all akin to that.
Firstly, every respectable password manager requires multi-factor authentication to log in to. Someone finding out the password to your manager is almost never sufficient. They would probably need to find it out as well as gain physical access to a device of yours which has the manager installed.
Secondly, the whole issue of "use one password for everything" is that if one site gets hacked and they store passwords insecurely (or, indeed, if the people who run the site are themselves malicious), then someone can use that same password to access all of your other accounts. So you have to trust the security of every single site you make an account with.
Using a password manager doesn't have that problem, since each site is being provided with a different password. So then you don't have to trust any website, you only have to trust the password manager itself. And you don't have to use a big cloud-hosted one if you distrust them - there are many password managers that you can just run locally on your computer (though without the cloud benefits of backup / disaster recovery). You can also just use a notebook with a padlock or something - frankly it doesn't really matter how you track your passwords, as long as nobody can get to it but you, and you use a different password for everything, and you have some plan for disaster recovery. That's the idea.
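Generating those per-site passwords is the easy part; a minimal sketch of what any manager (or a script of your own) does under the hood, with an illustrative character set and length:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"  # illustrative choice

def site_password(length: int = 20) -> str:
    # One independent, uniformly random password per site, so a breach of any
    # single site reveals nothing about the others.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

passwords = {site: site_password() for site in ("example.com", "othersite.test")}
print(passwords)
```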
I'm not a CISO just a random dog on the internet, but this open letter seems to assume that privacy is not a part of your security posture and that spear phishing isn't common these days. (Is 'spear phishing' still the term for targeted electronic scams to steal credentials/access?)
I realize not everyone is using a physically stripped burner, a graphene os install, etc and not everyone works at a high value financial, govt, or infra target but for those of us who need to deal with opsec or are commonly targeted by spear phishing this advice seems abysmal.
In the current political climate of the US, if you are living or traveling here and the current party isn't cheering for you personally, you really should be considering both state-sponsored attacks and no longer have the luxury of assuming good faith by the state. Telling people to enable cheap drive by attacks that are in active use by certain government agencies is irresponsible malpractice at best and actively evil at worst.
Source: I've worked at analytics companies that actively deanonymized users using cookies when available. We used wifi and Bluetooth details when available. We built "multi channel marketing" which was just taking any information we could scrape from the user to fingerprint them and cross reference and deanonymize them so we could sell interactions to businesses like geofenced price discrimination, value of users, and could offer cross website information on shopping habits/financial profile. The shit I did 15 years ago didn't go away and no matter how much I wish I didn't write that, it was the tip of the iceberg and relatively benign.
The piece is explicitly about retiring outdated security advice and doesn't claim to provide a complete, coherent defensive posture (that posture would have to depend on who you are and what your threat model is!). I don't like that they included the "recommendations for the public" section, but I don't think there's a reasonable way to read it as intending to be a complete action plan.
> Never scan QR codes: There is no evidence of widespread crime originating from QR-code scanning itself.
> The true risk is social engineering scams...
Exactly. My grandma is very susceptible to phishing and social engineering; I don't want her scanning random QR codes that lead to a near-identical copy of the service she thinks she's on, ending in identity theft or the like.
> Regularly change passwords: Frequent password changes were once common advice, but there is no evidence it reduces crime, and it often leads to weaker passwords and reuse across accounts.
Forced password changes are one of those security theater exercises that drive me absolutely nuts. It's a huge inconvenience long-term, and drives people to apply tricks (write it on a post-it note, or just keep adding dots, or +1 every time).
Plus, if your password gets stolen, there's a good chance most of the damage has already been done by the time you change the password based on a schedule, so any security benefit is only for preventing long-term access by account hijackers.
Sure, if you use unique passwords, then changing passwords isn't as useful. Yet we shouldn't judge a security policy based on whether other policies exist.
What you are judging then is a whole set of policies, which is a bit too controlling. You will most often not have absolute control over the user's policy set; all you can do is suggest policies, which may or may not be adopted, and you can't rely on their strict adoption.
A similar case is the empirical efficacy of birth control. The effectiveness of abstinence-based methods is lower than that of condoms in practice. While theoretically abstinence-based birth control would be better, who cares what the rates are in theory? The actual success rates are what matter.
So, since this seems to be relevant: I'm a CISO myself.
And I would definitely not agree with everything in this letter.
Personally, I think the worst part about it is treating a low probability as something that's never going to happen. That is, especially in IT security, one of the worst practices.
To take one point as an example - the "never scan public QR codes" one.
Apart from the fact that there have been enough exploits in the past (the USSD "Remote Wipe", iOS 11 Camera Notification Spoofing (2018), the ZBar buffer overflow (CVE-2023-40889), etc.), even without a 0-day exploit QR codes can pose a relevant risk.
As a simple example, not too long ago I was in a restaurant that only had its menu in the form of a QR code to scan. Behind the QR code was a link to a PDF showing the menu. This PDF was hosted on a free web service that let anyone upload files and get a QR code link to them. There was no account-managed control over the PDF being linked to; it could be replaced at any time, opening a whole different world of possible exploitations via whatever file is being returned.
Sure, you could argue "this is not a QR code vulnerability, just bad practice by the restaurant owner" - but that's the point. For the user there is literally no difference whether the QR code itself carries a malicious payload or the URL behind it does (etc., etc.).
While we in the tech world might understand the difference, to John and Jane Doe this is the same thing. And for them it's still a possible danger.
Apart from that, a coworker recently linked me a "hacker" video on YouTube showing a guy in an interview talking about the O.MG cable. Sure, you might say this is also an absolutely non-standard attack vector, yet it still exists. And people should be aware it does.
My point is - by telling people that all those attack vectors are basically "urban myths", you just desensitize an already not-well-enough-informed public to the dangers the "digital" poses to them. And from my personal view, we should educate more rather than tell them "don't worry, it will be fine".
It's funny that your warning about QR codes goes on to warn about PDF exploits. Yet you clicked the link to this article, by your own definition opening you up to "a whole different world of possible exploitations via whatever file is being returned". It's the nature of the internet to follow links, but our updated browsers keep us safe from exploits.
When was the last time you saw an untargeted mass 0-day exploit campaign? There haven't been any for modern browsers. If we're talking about 0-days, you likely know there have been zero-click iMessage/WhatsApp vulnerabilities in the past. There's no protecting against those, but you're not here warning users to disable iMessage and WhatsApp. What's more realistic is making sure users keep their software updated, and trusting that nobody is going to waste a million-dollar 0-day on you via a QR code or a link.
First of all, the problem here is more a matter of trust.
I'll try to explain based on your example of "any link".
If you type amazon.com, you trust that amazon.com will be returned and not some malware. With a QR code the target URL isn't as obvious, so the user should be aware that even if the text below it says "hackernews - the best news in the IT world", the code could still link to "https://news.xn--combinator-xwi.com" (edit: because ycombinator is a nice website it auto-resolves the Unicode character here; bad example, but I don't have the time to recraft it, and I guess you know the Unicode link/URL tricks, so I'll just leave it the way I pasted it). Did you spot the difference? It's not a regular "y", and it could land you on a phishing page. So yes, even "just URLs" that you review from a QR code can still be dangerous if not typed by yourself. And then, for a lot of users it probably wouldn't even take that much to trick them. It's not as if the average Jane/John Doe does very well at URL verification - otherwise a lot of scammers would go bankrupt.
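A tiny sketch of how little code it takes to flag that particular trick (and of why eyeballing alone isn't enough): any non-ASCII hostname renders look-alike but actually resolves as an xn-- punycode label. Plenty of legitimate internationalized domains exist, so this is only a "look closer" heuristic, and the Cyrillic letter below is just an illustrative example.

```python
from urllib.parse import urlsplit

def suspicious_host(url: str) -> bool:
    # Flags internationalized hostnames, which display look-alike glyphs but go
    # over the wire as xn-- punycode labels.
    host = urlsplit(url).hostname or ""
    return not host.isascii()

print(suspicious_host("https://news.ycombinator.com/"))       # False
print(suspicious_host("https://news.\u0443combinator.com/"))  # True - Cyrillic 'у', not Latin 'y'
```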
Therefore I hope you understand you don't need a 0-day. I also stated that in my answer, but you seem so keen on focusing on my listing of some 0-days (to disprove the initial article) that you kind of lost my point.
Also - sure, everyone should keep their devices updated - no one said otherwise. Apart from that, no, I wouldn't recommend people use WhatsApp, but that wasn't the point and I'm not actually sure why you're mentioning it, but there, I said it: I wouldn't recommend it, if that helps ¯\_(ツ)_/¯
Edit: not to forget - I for my part know that clicking on unknown links poses a certain risk, and I have several measures in place to reduce that risk.
>It's funny that your warning about QR codes goes on to warn about PDF exploits. Yet you clicked the link to this article, by your own definition opening you up to "a whole different world of possible exploitations via whatever file is being returned". It's the nature of the internet to follow links, but our updated browsers keep us safe from exploits.
you really don't know what they did.
In the time of containerized OSs and virtualized-everything it's silly to guess.
"Never scan public QR codes" is functionally equivalent to "never type in a URL and never click on a link". Other than the smallish scan-specific attack surface that you mention and then largely dismiss, there's nothing that makes QR codes more dangerous than any other way of delivering links.
It's somewhere between impractical and impossible to evaluate a URL and know anything about its "safety". So if you can't make your Web browser impervious enough to tolerate basically any crap a server may send back to your satisfaction, then your only answer is a total walled garden.
Well, we are, as sadly so often, stuck in a purely "black and white" discussion that ignores the gray areas.
While I pointed out that I think the claim that public QR codes are always safe and cannot pose any danger is wrong, I also didn't say you should wall yourself in and act like everything is f0rk3d.
As with everything in life, you should evaluate which risks are worth taking and which are not. Scanning a QR code in a museum linking to an audio track that describes the exhibit, scanning a QR code in a restaurant for a menu, scanning a QR code from a sticker on a traffic light.
These are three completely different scenarios that can be weighted differently, and therefore can't be answered with a single "yep, good/bad" for every situation. My initial point regarding the article was that I don't think it's right to state that scanning publicly placed QR codes is always safe. People shouldn't just NEVER scan a public QR code, but they should understand the possible risks, learn how to evaluate which risks are worth taking, and learn what things to look out for. My point is about making the public more informed.
Well, I just listed the O.MG cable to show that there are a lot of people who don't know such things exist. My point is: people should be better informed about what attack vectors exist. So mentioning the cable (in relation to a coworker coming to my desk and asking about it) was just an example of how informed the average Joe/Jane is, and I think this is the more important part - educate the public, don't just tell them not to worry.
The piece isn't about getting people not to worry, it's about not wasting the worry on things that aren't real threats. People have limited space in their brains for things to worry about.
>Personally, I think the worst part about it is treating a low probability as something that's never going to happen. That is, especially in IT security, one of the worst practices.
If you are an online service provider, sure. Low probability means it's going to happen, especially as you scale with users.
For a small business IT team? You can't keep a clean sheet; the strategy is to reduce the probability of an incident and reduce its damage, but it will never be zero, if only because you have non-technical users who need to do actual work.
p(incident) is just yet another variable you need to do tradeoff engineering on, and obsessing over reducing it to 0 will probably compromise other tradeoffs like ease of use of the system.
It's a special kind of ironic when, in an attempt to get a specific variable to 0 (which is impossible with most variables anyway), you end up compromising that very variable. So if you force users to use lots of passwords and password managers and MFA, and limit their capabilities, they end up circumventing your security systems and advice, so they introduce an issue (but of course it will be the user's fault, and not the CISO's, whose job is secure).
Well, even though I think you went a bit off track at the end of your comment, I get your point, and I agree to a certain extent.
You cannot reduce the risks to 0 - that's a matter of fact, and I would never claim you could.
I tend to say it's a question of cost vs. gain. If the cost the attacker has to pay (work/investment/...) is higher than the possible gain (data/funds/...), you are on a good track for your company's security.
I'm btw not working for an ISP, rather for something you would see as a smaller-sized IT company. Therefore I also have certain points where I could in theory go a lot harder on security, but I don't, because it's not feasible.
Another thing I find important, especially in that regard, is trying to educate your users; at least we work on that. We don't just enforce hard rules on them, we also try to make sure they understand why we have these rules and mechanisms in place - not to annoy them but to protect them.
Finally, my favorite point of your comment: "force users to use lots of passwords".
Well, our business has to undergo regular audits by partners which are, let's say, rather meticulous when it comes to the security of our systems. These enforce certain things on us that we then have to enforce on our users, even if we don't think they're good.
So yes, now you can blame me for enforcing something on our users, but keep in mind - it was also enforced on me - I even discussed certain things with these partners, trying to explain why some measures sound cool on paper but in reality are just impractical - not that anyone would care. So we implement it.
Therefore, the next time you argue that some security measure is just a CISO that doesn't really care about their users, maybe keep in mind that some things are forced upon us even though we don't like or support them.
I can see why you would take "online service provider" to mean an ISP, but I meant it to include SaaS and apps like whatsapp, google, etc.. as well
>Therefore, the next time you argue that some security measure is just a CISO that doesn't really care about their users
Oh, I didn't mean to imply that; there's no doubt that IT admins who overimplement security policies care in general. The critique is not about motives, but about effectiveness. I don't argue that they don't care or even that they are wildly inefficient, just that they are suboptimal on this specific point by going overboard.
I worked for a company that had 8-12 different employee passwords across various systems. There was no SSO, each password had different requirements, and they required changes at different intervals ranging from 30-90 days. Consequently every employee had a post-it note directly on the laptop with most or all of their passwords. The outdated IT security policy was so strict that real-world security was abysmal.
I find it interesting that the comment about VPNs offering little additional privacy or security benefits is wrapped up under 'Avoid Public WiFi' rather than being called out explicitly. It drives me nuts all the ads I see for NordVPN or whatever claiming that by using their services you are now totally safe from all the hacks. If anything, it makes the median user less safe because they have a false sense of security.
Slight tangent: My wife's place of work has recently instituted a minimum 16-character password rule with the standard complexity requirements. They also encourage the use of password management software, as well as enforcing password changes every 6 months.
Where I see a flaw in this is the initial login.
If you're not already on your computer to access the password manager, how do you retrieve the essentially non-memorisable password to unlock your computer in order to get to the password manager to retrieve the essentially non-memorisable password?
The password to unlock the computer, therefore, must be able to be remembered. This pretty much excludes 16-character auto-generated passwords for anyone but a savant.
Am I missing something obvious here? (MFA using an authenticator app on the phone? Is that something that Windows / Mac/ Linux supports?)
I've not met anyone who doesn't just increment a digit at the end every 6 months.
And any password length requirement beyond 8 always ends up being just a logical extension of the 8-character password (like putting 1234 at the end); if 16 characters are required, one just types their standard password in twice.
If any of the old passwords (potentially from unrelated applications) gets leaked, it's almost trivial to guess the current one.
Yeah, that's kinda my point, increasing the complexity requirements counter-intuitively reduces, or at least doesn't change, the actual level of security provided.
It's a wetware limitation. Not that we don't have methods that could improve it, it's just that they're not yet implemented at this specific point of contact. Interestingly.
For 1, you can still have extremely malicious networks. It's true that your web traffic is likely encrypted but... What services are exposed on your machine? Do you have mapped samba shares?
For 5 - session cookies are one of the main things stealers look for. Deleting cookies is absolutely good advice until browsers build in better mitigations against cookie theft.
For 6 - if there were a standard interface through which password managers could rotate my creds, I would sure as hell use it. Force-rotating passwords is only "bad" if people need to remember them.
Any random credentials stored in a vault absolutely should be rotated periodically, there is no reason not to.
I don't see the point of this letter, none of the "bad" advice they call out is harmful to security in any way, if people feel safer avoiding public wifi, so be it. Is it just a call out to other cisos to update their security hygiene powerpoints?
While you and I would love it if password managers would rotate creds, we're not yet at the point where people will use password managers. They're still using CompanynameFall2025!. Next month, they'll dutifully rotate their password to CompanynameWinter2025! because their work policy is still stuck on shitty standards.
> This kind of advice is well-intentioned but misleading. It consumes the limited time people have to protect themselves and diverts attention from actions that truly reduce the likelihood and impact of real compromises.
When you've got 15 seconds to _maybe_ get someone to change their behavior for the better, you need to discard everything that's not essential and stay very very far away from "yes, but" in your explanations.
I don't really like the name. When you say 'Hacklore' I think of the hackers at MIT and such. That stuff is really cool and shouldn't be stopped or suppressed!
This is good advice, and there are good people on the signature list, but why is this an open letter? This feels navel-gazey and straight out of 2017.
That open letter is filled with malice, so I can only guess that it's either trolling or a joke in bad taste (because people could come to think these really are outdated recommendations and spread that belief - let's remember the flat-earth thing).
Accurate? Let's take the WiFi one (other users have already commented on the other ones). Open a WiFi access point with the name of the restaurant, intercept the DNS requests, and serve your own filtered content.
PS: If the text is real and not trolling, the key phrase in the text is "rarely happen", which we could then apply to car seatbelts as well.
Then what? The user presumably sees TLS certificate warnings since you don't have valid certificates. HSTS would prevent downgrades to plain HTTP and is pretty common on sensitive websites.
Isn't the better advice to avoid clicking through certificate warnings? That applies both on and off open wifi networks.
There is a privacy concern, as DNS queries would leak. Enabling strict DoH helps (which is not the default browser setting).
I am afraid that it is not only about privacy (which they recommend ignoring); there are many vectors to choose from, like CA vectors - let's say TrustCor (2022), e-Tugra (2023), Entrust (2024) - packet injection vectors, or "Click here or use your login first" vectors as you commented, plus bugs and misconfigurations.
These ones are known. Therefore I just cannot believe that those who wrote the open letter did not even think about such significant events from the past year - I stress, the past year - or about zero-days.
We are talking about people connecting to an unknown, unsupervised network, where we also do not know what new vulnerabilities will be published in the mainstream, and the authors of the open letter know it, because they are hiding behind the excuse of "rarely".
This gets complicated because you're not safe on your home or corporate network either when CAs are breached. The incident everyone talks about, DigiNotar (2011), had stolen CA keys issuing certificates that intercepted traffic across several ISPs. If that's the threat you're looking to handle, "avoid public wifi" isn't the right answer. Perhaps you're doing certificate pinning, application level signing, closed networks, etc.
> Entrust (2024)
I recently wrote a blog post[1] about CA incidents, so I notice this one isn't like the others. Entrust's PKI business was not impacted by the hack and Entrust remains a trusted CA.
> Click here or use your login
Password manager autofill is the solution there, both on public wifi and on a corporate network. Perhaps an ad blocker as well.
> people connecting to an unknown unsupervised network
Aren't most people's home networks "unsupervised"?
Why do you talk about home networks being "unsupervised" when we are talking about public networks - access points created to hunt people?
Do you notice that your proposed solutions are trying to fix a problem? The open letter does not propose solutions; it merely denies the problems exist.
We need to be sincere with people: those "incidents" have happened for a long time and unfortunately will keep happening (given the history) - bad actors hunting, yesterday the CAs, and tomorrow? So if you connect to an open WiFi you may fall victim to a trap - probably not at home, but at an airport or other crowded places with long waits - and even if you do not browse, another app in the background will be trying to.
It took many years to make people just slightly aware, and now they - if the text is real - intend to undo that. But to be sincere, I really do not mind much; I just perceive that open letter as malicious.
CA compromise feels like an exotic attack, beyond what "everyday people and small businesses" should worry about. There's no solution to CA compromise offered because the intended audience is not getting hacked in that way. If your concern is that high risk individuals need different advice, I agree, but the letter also makes that clear they are not the focus.
Are there specific, modern examples of CA compromise being used to target low-risk individuals? Is that a common attack vector for low-risk individuals and small businesses?
There's the typical mix of good and bad points in this manifesto, but I wish the people willing to sign their names to it had a better record of success implementing the call to action inside their own organizations first:
> We call on software manufacturers to take responsibility for building software that is secure by design and secure by default—engineered to be safe before it ever reaches users—and to publish clear roadmaps showing how they will achieve that goal.
Knowing which rules not to follow and what isn't a risk is important for knowing where to invest energy.
Tech and non tech users have a budget to spend on IT Sec, so if you impose a lot of useless or marginally useful rituals along with the useful prophylaxis, the user will be forced to drop some of the measures, so it's better to drop some rules early on by policy rather than letting users decide what good practices to avoid.
Don't worry about cookies or bother using a VPN, because... you are being tracked anyway? What's the point of including such a defeatist stance?
> the real world across industry, academia, and government.
Gotcha, so no one here gives a shit about privacy. They only care about avoiding the inconveniences of fraud and leaked secrets.
Use a password manager and a feature-complete adblocker (uBlock Origin on Firefox). Send messages over end-to-end encrypted channels. Use a VPN along with your adblocker and some kind of cookie/browser-ID isolation if you don't want your traffic stalked.
BTW, I really would like to have a way to partially clear cookies – i.e., I don't want to be signed out of gmail, and maybe not out of the Mechanic's Bank of Alaska or Amazon or Netflix, but most other things could go. I don't think this is easy in Chrome, Safari or other mainstream browsers, is it?
Yesyes, I do know that Big Ad can mostly stitch together some proxy profile of me anyway, but it would be more blurry.
None of my opinions of this manifesto are positive. This is a defeatist position. It dangerously conditions people to be more casual about their privacy and safety.
There are still legitimate reasons to clear cookies, to turn off Bluetooth/NFC beaconing, and to occasionally rotate passwords (vis a vis password managers) as it costs nothing to accomplish, and very little in the way of tradeoffs. So...why not?
The probability of a random individual being the target of a sophisticated state sponsored attack is low, but the probability of being caught up in a larger dragnet and for data to be classified, aggregated and profiled is very high. So why not make it just a bit harder for them all?
If anything, let's chip away at this problem bit by bit. Make their life a bit harder...their datacenters a bit hotter. Add random fud to the cookie values, constantly switch VPN endpoints, randomize your mac address on every WiFi association, constantly delete old comments, accounts, create throwaway accounts, create proxies and intermediaries, rotate your password and 2FA -- use any legal means to frustrate any adversarial entities -- commercial or otherwise. They want information? They want your data? Fine, overwhelm them with it. THAT should be the proper modern privacy-focused manifesto. This is utterly bewildering...
...but then I get to the signatories and this nonsense suddenly made all the sense in the world:
> Sincerely, Heather Adkins, VP, Cybersecurity Resilience Officer, Google
> Aimee Cardwell, former CISO UnitedHealthGroup
> Curt Dukes, former NSA IA Director and Cybersecurity Executive
> Tony Sager, former NSA Executive
You can only avoid rotation on passwords that are MFA-protected.
If you implement a password manager, you must mandate auto-fill only and actively discourage (via training) copy/paste of credentials to a web site. Train the users to view “auto-fill not working” as a red flag. (This doesn’t apply to non-website credentials). Mandate all passwords to be auto-generated. Mandate that the only manually-entered password is the one for the password manager. Of course, you must have MFA on the password manager entry.
This will allow your users to comply with frequent password rotations much more easily. Auto-fill requirement/culture is critical to reducing phishing success, especially for tired eyes.
I might be alone in this, but I feel the advice regarding 2FA and password managers puts people at risk.
My mom using those would be one “I don’t know where I put that” away from permanently losing access to her pictures or any other similar access. This is as potentially harmful as any attack.
I don’t understand why people promote password managers for individuals. You don’t need to store your password in a central location that is a prime target to hackers; even if it’s encrypted, that’s more of a risk than keeping one of your own.
And some of the previous advice they’re stepping back from like avoiding QR codes you’re unfamiliar with is still good advice; you should be careful and not expose yourself too much.
1. People are terrible at creating strong passwords. People will NOT create hundreds of strong passwords.
2. People will not use complex solutions unless actively and rigidly enforced.
3. At best, we can hope that they can create one really good passphrase. That's combined with MFA.
There are people who are exceptions to those, but they're a vanishingly small percentage of the population. And unfortunately, there are way, way more people who think they have something better but are deluding themselves -- like bad card counters that casinos are happy to have at the blackjack table, or non-experts rolling their own crypto.
This has the energy of "Remove all DEI initiatives because we have solved workplace discrimination."
> This kind of advice is well-intentioned but misleading. It consumes the limited time people have to protect themselves and diverts attention from actions that truly reduce the likelihood and impact of real compromises.
I dislike any methodology that claims its intent is to talk down to people for whatever declared reasoning. People are capable, and should be helped to make decisions based on all available information.
> Regularly change passwords: Frequent password changes were once common advice, but there is no evidence it reduces crime, and it often leads to weaker passwords and reuse across accounts.
When I worked as a security professional the breaches were nearly always from someone's password getting leaked in a separate public breach. If those individuals had changed that password the in house breach would have been avoided.
> People are capable, and should be helped to make decisions based on all available information.
To relay a quote, with the source not being very important: "I'm not going to waste a dime on cybersecurity when my officers need bullets and armor." People can be intelligent and capable and have minimal (if you're lucky) bandwidth or tolerance for cybersecurity advice. It's not the crisis they see every day. The advice given to unwilling listeners has to be focused and prioritized.
And... Password leaks and therefore rotations aren't an issue if people are using a strong main password for their manager. Then a leak doesn't transfer to another account and the manager will loudly tell them when a password is found in breach data -- which lines up with NIST's modern advice of avoiding password complexity and rotation, since they've found it to lead to minimal (at best) gained security.
> > Regularly change passwords: Frequent password changes were once common advice, but there is no evidence it reduces crime, and it often leads to weaker passwords and reuse across accounts.
> When I worked as a security professional the breaches were nearly always from someone's password getting leaked in a separate public breach. If those individuals had changed that password the in house breach would have been avoided.
You completely missed the point. The good advice is to not reuse passwords. That alone would have stopped the in house breach.