I think in many cases people disable checks to troubleshoot something and then never enable them again. And we see this at all levels - development, devops, and of course sysops. Just quickly disabling something without leaving a TODO, disabling an MSDeploy certificate check because a proper PKI has never been set up, just ignoring any certificate errors in LAN management tools because installing a certificate is so hard.
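To make that concrete, this is the usual shape of the "quick fix"; a sketch in Python using the requests library (the intranet URL is made up, and the same pattern exists in most stacks):

```python
import requests
import urllib3

# The classic "just for now" workaround: verification is switched off to get
# past a certificate error, with no TODO, and it quietly ships that way.
# requests emits an InsecureRequestWarning, which then gets silenced too.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

resp = requests.get("https://intranet.example.local/api/status", verify=False)
print(resp.status_code)
```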
Kind of related, but a little off-topic: I think tying name checking to encrypting traffic was a mistake. They are two different use cases, and shouldn't have been so tightly coupled.
Sometimes I care only about my traffic being encrypted, and resent having to jump through hoops to ignore the name mismatch. Sometimes I care only about assurances that the name is correct, and don't care about having the traffic encrypted.
Encryption without authentication is not very useful; if unauthenticated, your ISP could man-in-the-middle your connection and effectively decrypt everything while making you think you're encrypted.
If that's not in your threat model, and you want encryption for another purpose, then I could understand that, but currently, protecting the endpoints against malicious attackers in the middle is the big value of TLS.
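To make the coupling concrete: in Python's ssl module, "encryption without authentication" is exactly what you get by turning off the two checks below. The handshake still negotiates keys, so a passive eavesdropper sees only ciphertext, but nothing proves who the peer is; a minimal sketch:

```python
import socket
import ssl

# An "encrypt only" client: traffic is encrypted, but to whoever answered.
# An active attacker can terminate TLS themselves and re-encrypt onward.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False        # skip name checking (must be disabled first)
ctx.verify_mode = ssl.CERT_NONE   # skip certificate validation entirely

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version(), tls.cipher())
```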
Just today I reviewed a PR with an insecure-by-default option. But here we’re working on local networks where there’s no way to get a certificate because there’s not a domain name that points to the local IP address.
At least with HTTPS over the local network, it can frustrate attempts to break into it. That said, we make sure to call it “https-insecure”.
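A sketch of that naming idea, with hypothetical flag and tool names: the point is that the insecure mode is a loud, explicit opt-in rather than a quiet default:

```python
import argparse

parser = argparse.ArgumentParser(description="hypothetical local-network tool")
# Deliberately ugly flag name: nobody should enable this without noticing,
# and it stands out in shell history and deployment scripts.
parser.add_argument(
    "--https-insecure",
    action="store_true",
    help="skip TLS certificate verification (local dev networks ONLY)",
)
args = parser.parse_args()

if args.https_insecure:
    print("WARNING: certificate verification disabled; never use in production")
```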
> But here we’re working on local networks where there’s no way to get a certificate because there’s not a domain name that points to the local IP address.
There are options for this that needn't cost anything.
In a company with controlled workstations, run your own CA and push trust for it through Group Policy or your standard OS build. Point whatever domain you like (“real” or just a local-only DNS entry) at your host and sign a cert for that with the internal CA. Generating keys & certs for each workstation can be automated. You could sign a wildcard subdomain and key+cert for each machine or person, *.pc101.devteam.internal / *.johndoe.devteam.internal, so each user can set up multiple dev/test services on their box without requiring support from the infrastructure team.
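Automating that is not much work; a minimal sketch using Python's cryptography package, reusing the hypothetical *.pc101.devteam.internal name from above (a real setup would also persist keys securely, add key-usage extensions, etc.):

```python
from datetime import datetime, timedelta

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Internal CA: a self-signed certificate (issuer == subject) marked CA:TRUE.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "DevTeam Internal CA")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# Per-machine wildcard leaf cert, signed by the internal CA.
leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
leaf_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "*.pc101.devteam.internal")]))
    .issuer_name(ca_cert.subject)
    .public_key(leaf_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=90))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("*.pc101.devteam.internal")]),
        critical=False,
    )
    .sign(ca_key, hashes.SHA256())
)
```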
Or get a wildcard from LE or similar for some [sub]domain, point *.[sub.]domain to 127.0.0.1, and let everyone have the keys for that. You need to renew every 3 months or less and redistribute the cert, but that isn't a massive hardship and can be automated. Make sure this is only for dev/test though, not internal tooling that might handle real data/controls; otherwise having multiple (potentially many) people with access to the private key is a massive security issue.
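The "can be automated" part is a few lines plus a cron job; a sketch that checks remaining validity, assuming the current cert lives at a hypothetical wildcard.pem (an ACME client such as certbot would do the actual renewal):

```python
from datetime import datetime, timedelta

from cryptography import x509

# Load the deployed certificate and see how long it has left.
with open("wildcard.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

remaining = cert.not_valid_after - datetime.utcnow()
if remaining < timedelta(days=30):
    print("renewing soon:", cert.not_valid_after)  # kick off renewal + redistribution here
```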
Using proper names & certs, and not encouraging disabling checks for local dev/test, reduces the risk of code that doesn't do proper verification getting out into the wild later when someone forgets to switch checks back on.
If you are already maintaining a CA for any other reason, then use that (or the same process to maintain another); otherwise the second option is probably the path of least extra work/cost.
Or you can do none of that razzle-dazzle, which, if you stop to actually think about it, doesn't really bring any security. Yeah, "let everyone have the keys for that", that's sure much more secure against some vaguely imagined threat connected with people already running arbitrary stuff on your internal network.
I'm not sure why the level of snark is so high there, given I'd already mentioned the potential security implications, meaning those keys should not also be used for non-dev/test resources.
A key reason for maintaining secure connections for everything in local dev environments is to practise for best practice: keeping your dev environments close to production configuration, without lowering production's level of doing-stuff-right when insecure settings (like not verifying certificates) accidentally (or intentionally) leak out of dev into other environments.
At least this thread isn't full of people complaining that UAs (and related libraries) should still just trust self-signed certs, and refusing to accept any explanation of why that is a bad idea, which used to be the norm…
TLS is not an end goal in itself. We don't use it simply because "TLS is double-plus-good, raw HTTP is crimethink", sorry, because it's a "best practice" (whatever that term actually means): we use it because it provides us with transport-layer security against some specific threats. What threats do your proposed approaches help secure against, except "developers have the mental ability to set up and use a configuration that'd be insecure in prod"? The only even remotely reasonable threat I can think of is the scenario in which your network hardware manufacturer (be it Cisco, or Huawei, or whatever) is wiretapping you. Indeed, that is a valid threat for e.g. Google (PRISM does exist), which is why they've switched to using TLS everywhere. But aside from that?
Correct. Can you point out on the dolly where I said otherwise?
> The only even remotely reasonable threat … But aside from that?
You either didn't read what I said properly, or are deliberately misreading it.
I didn't suggest using TLS properly in dev was for protection against specific threats in dev environments, but for stopping dumbed-down things in dev accidentally getting out into prod, and that it is “practise for best practice in production”.
Unless of course someone has (or thinks they have!) reason to breach commonly accepted good practice and have real data in dev, in which case dev is a de facto production environment from a security standpoint.
> "best practice" (whatever that term actually means)
It is a well understood term. I'll not spend my time explaining it as you'll easily find that information elsewhere if you care to.
If the root CA is in a place that is inaccessible, then there are no CRLs to check against, for example. The root CA may exist outside of the air-gapped env, especially if the root CA is one that produces self-signed certs. You are back to insecure TLS.
A root CA doesn’t produce “self-signed certificates”; that doesn’t make any sense. What do you think the “self” in “self-signed certificate” refers to?
Add the root to your trust store, if you trust it, and you’re done.
What’s more concerning is that someone working on (presumably) secure, sensitive, air-gapped networks knows this little about TLS.
You mean create a root CA, install it as a trusted CA on _every single_ client that will interact with the server, manage revocations (a whole thing in itself), and handle all the other management that goes along with being an authority (local or otherwise, it makes little difference).
There's nothing stopping you from creating your own root CA and using it to sign certificates for any other domain. You could create a certificate for google.com if you wanted and sign it with your own CA.
Now, obviously, you couldn't actually use that certificate publicly. If you were to try to MitM someone, their client wouldn't accept the certificate because your root CA's certificate won't be in their trust list.
But add that root CA to your own system, and it'll work fine.
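That last step is just pointing your TLS stack at the extra trust anchor; a sketch in Python, assuming the private root was exported to a hypothetical dev-ca.pem and that the hostname is made up:

```python
import socket
import ssl

# A default context trusts only the system store, so a cert chained to a
# private CA fails verification. Loading the private CA fixes that.
contexts = {
    "system store": ssl.create_default_context(),
    "with dev-ca.pem": ssl.create_default_context(cafile="dev-ca.pem"),
}

for name, ctx in contexts.items():
    try:
        with socket.create_connection(("app.pc101.devteam.internal", 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname="app.pc101.devteam.internal"):
                print(name, "-> certificate verified OK")
    except ssl.SSLCertVerificationError as err:
        print(name, "-> rejected:", err.verify_message)
```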