Content-addressed (or input-addressed) component stores, like Nix. If you're blindly assuming which libraries exist on the target system, you've already failed.
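For anyone who hasn't seen the idea: instead of every program installing into one global /usr/lib, each component lives at a path derived from a hash of its content (or of its build inputs). A rough Python sketch of the idea; the /store root and the truncated-sha256 naming are made up for illustration and are not Nix's actual scheme:

    import hashlib
    from pathlib import Path

    STORE = Path("/store")  # hypothetical store root, not Nix's real layout

    def store_path(name: str, content: bytes) -> Path:
        """Derive an install path from a hash of the component's content."""
        digest = hashlib.sha256(content).hexdigest()[:32]
        return STORE / f"{digest}-{name}"

    # Two programs depending on byte-identical builds of libfoo resolve to the
    # same path and share one copy; a different version hashes differently and
    # lands in a different path, so nothing gets clobbered.
    print(store_path("libfoo-1.2.3", b"...library bytes..."))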
Part of the point of Debian- and Red Hat-style package management was that you could assume that, if a dependency package was installed, it would provide specific libraries in a specific place on the target system.
The Linux model of global shared libraries is an abject failure, and the prevalence of requiring Docker simply to launch a program is evidence of this.
Windows doesn’t have global library folders that are polluted by a million user scripts. And you can reliably launch software that’s 25 years old. It works great. Linux’s model was a nice effort but ultimately a failure.
> Windows doesn’t have global library folders that are polluted by a million user scripts.
C:\WINDOWS\SYSTEM32
It's called "DLL Hell" for a reason. It used to be very common for every program you installed to dump several libraries in that directory. It included program-specific libraries (so that directory would have a mix of internal libraries for every program you ever installed; pray there were no naming conflicts!), compiler runtime libraries like the C library (and it was not uncommon for a program to overwrite an already existing runtime DLL with an older version, breaking other programs which expected the newer version), and sometimes even operating system libraries. It got so bad that IIRC Microsoft made Windows automatically detect when an important operating system DLL had been overwritten, and restore it from a secret copy of the original DLL it had stashed elsewhere.
> It used to be very common for every program you installed to dump several libraries in that directory.
Maybe back on Windows 95? It hasn’t been the case, or an issue, for as long as I can remember.
> It's called "DLL Hell" for a reason.
Linux shared library hell is at least a full order of magnitude more complex, fragile, and error-prone. Launching a Linux program is so outrageously unreliable that everyone uses Docker just to run a bloody program! It’s so, so bad. At least in the year two thousand and twenty-four.
Programs potentially duplicating dependencies at a quadratic rate is an equally abject failure; it's the other extreme of the dependency-sharing spectrum. Share, but don't overshare. If two programs need a dependency and it is exactly the same (by content, or by its inputs when using a pure build system), they can share a single copy. If not, you install the dependency "twice" (not really twice: they're two different dependencies, each installed once).
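For example (versions invented for illustration): if prog-a and prog-b both need openssl 3.0.13 built with the same flags, they point at the same installed copy; if prog-c needs openssl 1.1.1, that's simply a separate component sitting alongside it rather than a conflict.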
Quadratic in the sense that, in the worst case, you have N programs sharing the same set of M dependencies, requiring N*M components' worth of space. With sharing, it would be only N+M.
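Concretely, with made-up numbers just to show the scale: 50 programs each bundling the same 40 dependencies means 50*40 = 2,000 copies on disk, versus 50+40 = 90 components when identical ones are shared.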
FS deduplication solves the problem at the storage level, but not at the transfer level. The components would still be downloaded multiple times if the downloader doesn't have a way to know what has already been downloaded.
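That's roughly what Nix-style binary caches and other content-addressed sync tools do: the client knows which hashes it needs, checks what's already present, and only fetches the rest. A hand-wavy Python sketch, where the /store layout and the blob-by-hash mirror URL are hypothetical:

    import hashlib
    from pathlib import Path
    from urllib.request import urlopen

    STORE = Path("/store")  # hypothetical local component store

    def fetch_missing(wanted_hashes: list[str], mirror: str) -> None:
        """Download only the components whose hashes aren't already in the store."""
        for h in wanted_hashes:
            dest = STORE / h
            if dest.exists():
                continue  # already on disk, maybe pulled in by some other program
            with urlopen(f"{mirror}/{h}") as resp:  # hypothetical mirror serving blobs by hash
                data = resp.read()
            # Verify the bytes actually match their address before trusting them.
            assert hashlib.sha256(data).hexdigest() == h
            dest.write_bytes(data)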
Docker's way of doing things is basically an admission of defeat by lazy engineers. We can do better than that.