True, it can help Microsoft SQL Server as well. In SQL Server 2022, they finally added strict encryption (TDS 8.0). I'm glad to see more databases removing these strange STARTTLS-like features.
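
For example, opting into strict mode from a client is just a connection-string switch. A minimal sketch with pyodbc, assuming Microsoft's ODBC Driver 18 (which understands Encrypt=Strict); the server name and credentials are placeholders:

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=db.example.internal,1433;"
        "DATABASE=app;UID=app_user;PWD=...;"
        "Encrypt=Strict;"  # TLS is negotiated before any TDS traffic, no STARTTLS-style upgrade
    )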


I think people forget that much of this software can be reasonably fast on its own. The problem is that most corporate environments are loaded up with EDRs and other strange anti-malware software that slow down process startup and library calls. I've seen a misconfigured Forcepoint EDR rule freeze a window for 5 seconds on a copy-and-paste from Chrome to Word.

Another example: it takes ~2 seconds to run git status on my work machine:

    (Measure-Command { git status | Out-Null }).TotalSeconds
while the same command on my personal Windows 11 virtual machine is nearly instant: ~0.1 seconds. Still slower than Linux, but not nearly as bad as my work machine.
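
If you want a rough cross-platform equivalent of that one-liner (say, to compare the same repo on a Linux box), something like this works:

    import subprocess, time

    start = time.perf_counter()
    subprocess.run(["git", "status"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    print(f"git status took {time.perf_counter() - start:.2f}s")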


Just be mindful that any certs you issue this way will be public information[1], so make sure the domain names don't give away any interesting facts about your infrastructure or future product ideas. I did this at my last job as well, and I can still see them renewing those certs, including an unfortunate wildcard cert that wasn't issued by me.

[1] https://crt.sh/
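
crt.sh also has a JSON endpoint, so it only takes a few lines to see exactly what an outsider sees for a given domain (example.com below is a placeholder):

    import requests

    r = requests.get("https://crt.sh/",
                     params={"q": "%.example.com", "output": "json"},
                     timeout=60)
    names = {n for entry in r.json() for n in entry["name_value"].splitlines()}
    print("\n".join(sorted(names)))  # every logged hostname, wildcard entries included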


I use https://github.com/FiloSottile/mkcert for my internal stuff.


Just use wildcard certs and internal subdomains remain internal information.


A fun tale about wildcard certificates for internal subdomains:

The browser will gladly reuse an HTTP/2 connection to an already-resolved IP address. If you have many subdomains pointing to a single ingress / reverse proxy that returns the same certificate for different Host headers, you can very well end up in a situation where traffic gets mixed up between services. To add to that, debugging that stuff becomes kind of wild, as the browser will keep reusing connections across browser windows (and maybe even across different Chromium-based browsers).

I might be messing up some technical details, as it's been a long time since I debugged that gRPC Kubernetes mess. All I wanted to say is that having an exact certificate instead of a wildcard is also a good way to ensure your traffic goes to the correct place internally.


Sounds like you need to get better reverse proxies...? Making your site traffic RELY on the fact that you're using different certificates for different hosts sounds fragile as hell and it's just setting yourself up for even more pain in the future


It was the latest nginx at the time. I actually found a rather obscure issue on GitHub that touches on this problem, for those who are curious:

https://github.com/kubernetes/ingress-nginx/issues/1681#issu...

> We discovered a related issue where we have multiple ssl-passthrough upstreams that only use different hostnames. [...] nginx-ingress does not inspect the connection after the initial handshake - no matter if the HOST changes.

That was 5-ish years ago though. I hope there are better ways than the cert hack now.


That's a misunderstanding of how the ingress controller's "ssl-passthrough" feature works.

> This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.

> SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation

So if you want multiple subdomains handled by the same IP address with the same wildcard TLS cert, and Chrome reuses the connection for a different subdomain, nginx needs to terminate TLS and parse the HTTP so it can proxy each request to the right backend. In SSL-passthrough mode it can only look at the SNI host in the initial TLS handshake, and that's it; it can't look at the contents of the traffic. This is a limitation of HTTP/TLS/TCP, not of nginx.
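
A quick way to convince yourself of that: in a TLS server, the SNI hook fires exactly once per connection, during the handshake. A minimal Python sketch (the cert paths and port are made up):

    import socket, ssl

    def on_sni(sock, server_name, ctx):
        # Runs once, during the ClientHello. There is no later hook, so an
        # SNI-based passthrough router never gets a second chance to re-route
        # a reused HTTP/2 connection whose requested host has changed.
        print("handshake asked for:", server_name)

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("wildcard.crt", "wildcard.key")  # hypothetical files
    ctx.sni_callback = on_sni

    with socket.create_server(("127.0.0.1", 8443)) as srv:
        conn, _ = srv.accept()
        with ctx.wrap_socket(conn, server_side=True) as tls:
            tls.recv(4096)  # whatever Host the client sends now, SNI was fixed at the handshake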


Thank you very much for such a clear explanation of what's happening. Yeah, I sensed that it's not a limitation of nginx per se: it was asked not to do TLS termination, so of course it can't extract the header from the encrypted bytes. As I needed it for gRPC through ASP.NET, it is a Kestrel requirement to do TLS termination that forced me to use SSL passthrough, which probably comes from a whole different can of worms.


> it is a Kestrel requirement to do TLS termination

Couldn't you just pass it x-forwarded-proto like any other web server? Or use a different self-signed certificate between nginx and Kestrel instead?


There is definitely that. There's also some sort of strange bug in Chromium-based browsers where a tab can entirely fail to make a certain connection, without even realizing it isn't connecting properly. That tab will be broken for that website until you close it and open a new one to navigate to the page.

If you close that tab and bring it back with Command+Shift+T, it will still fail to make that connection.

I noticed it sometimes responds to Close Idle Sockets and Flush Socket Pools in chrome://net-internals/#sockets.

I believe this regression came with Chrome 40, which brought HTTP/2 support; I know Chrome 38 never had this issue.


There's a larger risk: if someone breaches a system holding a wildcard cert, they can end up able to impersonate _every_ part of your domain, not just the one application.


I issue a wildcard cert for *.something.example.com.

All subdomains which are meant for public consumption are at the first level, like www.example.com or blog.example.com, and the ones I use internally (or even privately accessible on the internet, like xmpp.something.example.com) are not up for discovery, as no public records exist.

Everything at *.something.example.com that is supposed to be privately accessible on the internet is resolved by a custom DNS server which does not respond to `ANY` requests and logs every request. You'd need to know which subdomains exist.

something.example.com has an `NS` record pointing at the name of that custom DNS server (ns.example.com), which in turn resolves to its IP.

The intranet also has a custom DNS server which then serves the IPs of the subdomains which are only meant for internal consumption.


This is the DNS setup I’d have in mind as well.

Regarding the certificates, if you don’t want to set up stuff on clients manually, the only drawback is the use of a wildcard certificate (which, when compromised, can be used to hijack everything under something.example.com).

An intermediate CA with name constraints (it can only sign certificates for names under something.example.com) sounds like a better solution if you deem the wildcard certificate too risky. I'm not sure which CA can issue one (Let's Encrypt is probably out) or how well supported it is.
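
For reference, this is roughly what the constraint looks like if you mint such an intermediate yourself with Python's cryptography package (a sketch only; a private CA like this still has to be trusted by clients, which is exactly the manual setup you wanted to avoid):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "internal issuing CA")])
    now = datetime.datetime.utcnow()

    ca_cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed here; in practice your offline root would sign it
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
        # The constraint: this CA may only sign names at or under something.example.com.
        .add_extension(
            x509.NameConstraints(
                permitted_subtrees=[x509.DNSName("something.example.com")],
                excluded_subtrees=None,
            ),
            critical=True,
        )
        .sign(key, hashes.SHA256())
    )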


I'm "ok" with that risk. It's less risky than other solutions, and there's also the issue that hijacked.something.example.com needs to be resolved by the internal DNS server.

All of this would most likely need to be an inside job with considerable criminal intent behind it. At that level you'd probably have other attack vectors to worry about anyway.


This is also my thinking: if someone compromises the VM that is responsible for retrieving wildcard certs from Let's Encrypt, then you're probably busted anyway. Such a machine would usually sit at the center of the infrastructure, with limited need to be connected to from other machines.


Probably most people would deem the risk negligible, but it's still worth mentioning, since you should evaluate it for yourself. Regarding the central machine: the certificate must not only be generated or fetched (which, as you said, will probably happen “at the center”) but also deployed to the individual services. If you don't use a central gateway terminating TLS early, the certificate will live on many machines, not just “at the center.”


You are absolutely right. And deployment can open up additional vulnerabilities and holes if set up carelessly. But there are also many ways to make the deployment quite robust (e.g. upload via push to a deploy server and distribute from there). ... and just by chance, I've written a small bash script that helps distribute SSL certificates from a centrally managed "deploy" server 8) [1].

[1]: https://github.com/Sieboldianus/ssl_get


It's the opposite: there is a risk, but not a larger one. Environment traversal is easier through a certificate transparency log; there is almost zero work to do. With a wildcard compromise, the environment is not immediately visible. It's much safer to use wildcard certs for internal services.


Environment visibility is easy to get. If you pwn a box which has foo.internal, you can now impersonate foo.internal. If you pwn a box which has *.internal, you can now impersonate super-secret.internal and everything else, and now you're a DNS change away from MITM across an entire estate.

Security by obscurity while making the actual security of endpoints weaker is not an argument in favour of wildcards...


Can't you have a limited wildcard?

Something like *.for-testing-only.company.com?


Yes, but then you are putting more information into the publicly logged certificate. So it is a tradeoff between the scope of the certificate and the data leak.

I guess you can use a pattern like {human name}.{random}.internal, but then you lose memorability.
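
Generating the random part is trivial, of course; remembering it is the problem (the names below are made up):

    import secrets

    service = "grafana"                   # the human-readable part
    label = secrets.token_hex(4)          # e.g. '9f3c1a7b'
    print(f"{service}.{label}.internal")  # grafana.9f3c1a7b.internal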


I've considered building tools to manage decoy certificates, e.g. registering mail.example.com even if you didn't have a mail server, but I couldn't justify polluting the cert transparency logs.


Made up problem, that approach is fine.


I wish there were a way to remove public information such as this, just like historical website ownership records. Maybe it's interesting for research purposes, but there is so much stuff in public records that I don't want everyone to have access to. Should have thought about that before creating public records - but one may not be aware of all the ramifications of, e.g., just creating an SSL cert with Let's Encrypt or registering a random domain name without privacy extensions.


Isn't this a case of shrinkwrap contracts? Use (viewing) is acceptance? https://en.wikipedia.org/wiki/Shrinkwrap_(contract_law)


Except there you were shown the agreement and had to click or whatever.

For event tickets, you are not even made aware there is an “agreement”.


Honestly? Given I've seen crashes and printk messages from AMDGPU with words like "General Protection Fault," I'd say memory safety is probably the most important thing missing in these GPU drivers.


It's probably because FIPS 140-2 doesn't list it. I know that on machines booted with fips=1 and a FIPS-certified OpenSSL, etc., OpenSSH won't accept ed25519 keys for key auth.


I'm wondering if the real issue is that the user doesn't have a cable or equipment that supports DisplayPort 1.4. The speeds, the 10-bit color, and the resolution suggest to me that could be the real problem. I doubt the monitor would intentionally ship with such an out-of-spec EDID, especially since it claims support from 48 Hz to 144 Hz, likely for variable refresh rate.
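
Back-of-the-envelope arithmetic (active pixels only, so the real figure with blanking is a bit higher):

    pixels_per_frame = 3440 * 1440
    bits_per_pixel = 30                  # 10 bits per channel
    refresh_hz = 144
    gbps = pixels_per_frame * bits_per_pixel * refresh_hz / 1e9
    print(f"~{gbps:.1f} Gbit/s")         # ~21.4 Gbit/s
    # DP 1.2 (HBR2, 4 lanes) carries roughly 17.28 Gbit/s of payload, while DP 1.4
    # (HBR3) carries about 25.92 Gbit/s, so this mode needs a DP 1.4-capable link and cable.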


I was wondering that, too. I couldn't find the actual model of the monitor anywhere on the page. But LG has several monitors in its UltraGear line that do 3440x1440 at 144Hz: https://www.lg.com/us/gaming-monitors Presumably he's got one of them?

In that case it's not so much the EDID that's wrong but something else in his setup that won't work with those capabilities, and either Windows and macOS just don't default to maxing out the refresh rate, or they do but are able to detect that it's the wrong cable. Or it's a graphics driver issue?


Thought exactly the same.

I wouldn't be surprised if macOS and Windows simply default to 60 Hz unless manually set otherwise, just to reduce customer service tickets.

Debugging this issue on Linux may be an exciting journey; debugging it over the phone with an end user who only has one cable and one monitor is just a PITA.


I'd consider it a Linux bug if the kernel drivers don't transparently hide high color depths and refresh rates that aren't supported by the display/cable/GPU's maximum supported DP data rate.


DisplayPort cables don't have identification chips inside them. The only way for the machine to know if the cable is capable of running at DP1.4 speed is to try to bring up the link at that speed. The software does correctly hide modes that the endpoints say they cannot support, though that won't help when one endpoint lies.


I saw this noted online as a potential issue, so I did try two different cables first and neither of them worked. It's entirely possible that neither of those cables was up to spec either, so I've ordered one that is VESA certified for DisplayPort 1.4, and I'll check whether it works without the hack when it comes in. I'm on my Mac now and it just lists 85 and 50 Hz in the display settings, which seems odd.


PayPal's "secure browser" is effectively broken by Firefox's first-party isolation. That took some time to figure out.

In terms of being blocked by CloudFront (not Cloudflare), I actually got a website to fix its policies just by emailing their tech support and showing that a simple user-agent change bypasses the policy anyhow.


I want to add agreement to this, but to be more precise: people should own the things they buy, and artificial (software) means of locking people out of features shouldn't be allowed.


The doctrine of first sale says that companies cannot restrict the second (or later) owner of a thing. You may notice that some books printed in the UK have wording on the copyright page about how you can't sell the book (nor give it away) without requiring the subsequent owner to follow the same "license". In the US, I can give away a book, or sell it, and no condition that the publisher makes me agree to will apply to the next person.

Physical goods should be required to follow the doctrine of first sale. There should never be any possible conditions on subsequent owners. If the first owner "unlocks" a feature, it should be unlocked for every subsequent owner.


If you opt out of the heated seat package when you purchase the car, that doesn't mean the manufacturer can't include that hardware in a disabled state. It also doesn't mean you own the heating feature after you opted out.

That's like asking Intel to fix a processor you overclocked.


Here's an analogy: the year is 1950. I bought a car with a radio built in, but I didn't pay the extra radio fee, so a wire was intentionally left out and the radio does not work. But the car is mine--I could choose to scrap it, radio hardware and all; I owe the company nothing, and I am the owner of a car with a nearly functional radio. So how could anybody object to my going in and fixing the radio, if it is my property to begin with?


Back then, the dealer would remove the radio before handing the car over to you. In its place would be a panel blocking the hole in the dashboard.

I used to work for a radio shop, and it was reasonably common for us to remove the radio when customers did not want it in their new car. Some wanted to have no radio for religious reasons, some businesses wanted the absolute cheapest vehicle possible for their employees, most wanted to install their own aftermarket radio.


The answer is: the year is 1950, and property rights are respected.

The year is 2023. The goal of Big Tech is the elimination of ownership and the rise of perpetual rental income.


If I paid them money and they gave me hardware (in this case a heated seat), then I can do what I want with it; sucks for them if they don't want me to. They can give me the non-heated seat instead. And yeah, if I brick my car trying to jailbreak it, that's on me, fine.


There are semantic games at play here, I suspect.

The manufacturer sold the hardware configured in a certain state; the same device could have been configured differently depending on price. Once the device is sold, the new owner is a petty tyrant over the state of his own property.

But if I don't own the heating "feature" (a promise of a result), I don't care. I am pretty sure the warranty disclaims any guarantee that the hardware is actually fit for said purpose and therefore will not guarantee a result anyway, so what do you "own" in the first place, if not the device itself?

[edit: grammar, readability]


The far better analogy for what Tesla is doing is "it's like Intel preventing you from overclocking your chip". Sure, you should not expect support for your hacked seat warmers.


I came here to say this.


Nah, that's like asking Intel to enable two more cores on a two-core processor built from a binned four-core die that they also sell four-core processors from. The only difference is software.


Intel briefly did have a scheme where you could pay to unlock parts of the processor that were disabled for segmentation reasons: https://en.wikipedia.org/wiki/Intel_Upgrade_Service

It was abandoned due to backlash, but that didn't stop Intel from doing artificial segmentation. So instead of buying a chip with "3MB" of cache and being able to unlock it to 4MB later, you now buy a chip with "3MB" of cache and 1MB of dark silicon that's permanently lasered off at the factory. I get the objections, but the alternative isn't really an improvement.


Then they should pay me rental fees in compensation for electricity I have to purchase to haul their hardware around.



The readme isn't super clear on how you use this. Is there some way I can, for example, use this to install Spotify with no file system, microphone, or camera access?

