There's been a continual oscillation between bring-your-dependencies (ranging from statically compiled executables, to composed things like docker containers, snaps, npm / ruby / php composer apps, etc.) and use-the-systemwide-stuff (like most linux apps).
The tradeoff is pretty obvious: on the one hand you get full control over what code you bring in, but also full responsibility for keeping it up to date, and it does take up more space. On the other hand you get system-wide updates, but also unpredictability, which can break things.
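To make the bring-your-dependencies side concrete, here's a minimal sketch of the vendoring pattern as it shows up in Python land: the app prepends its own private copy of its dependencies to the import path so they shadow whatever the system has installed. The function name and the "vendor" directory layout are my own illustration, not anything from a specific tool.

```python
import sys
from pathlib import Path

def prefer_vendored(app_dir: str) -> None:
    """Bring-your-own-dependencies: put the app's private 'vendor'
    directory first on sys.path so its copies shadow any system-wide
    installation of the same packages."""
    vendor = str(Path(app_dir) / "vendor")
    if vendor not in sys.path:          # keep the call idempotent
        sys.path.insert(0, vendor)

# Hypothetical app location for illustration:
prefer_vendored("/tmp/myapp")
```

The upside and downside fall straight out of this one line: imports are now fully predictable for the app, and the app alone is on the hook for updating everything under `vendor/`.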
In the former, you have a lot of work to do yourself to maintain your ecosystem between releases. In the latter, more of the effort and responsibility falls on those who maintain the packages at the distribution level to ensure mutually compatible package selections. This can be done (witness most of the top-tier distributions doing exactly that, with occasional problems, for decades), but we keep trying new all-in-one mechanisms every few years.
I'm not convinced that there's ever going to be a final answer to this, but it is clear that storage is cheap as chips...
I don't know, I think the answer is pretty obvious: use system-wide shared libraries for things that are very common, like widget toolkits, network libraries, cryptography, and other system components. Anything else is part of the application, not the system, and should ship with the application.
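The "use the system's copy for common components" half of that split can be sketched as well: instead of shipping its own crypto or C runtime, the app resolves the library by name from the system-wide search path at load time. This assumes a Unix-like system with a libc visible to the dynamic loader; libc here just stands in for any common component.

```python
import ctypes
import ctypes.util

# "Use the system's copy" model: resolve a common shared library (libc,
# standing in for any widely shared component) from the system-wide
# search path rather than bundling a private copy with the app.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Call into whatever version the system provides -- you get fixes and
# security updates for free, and unpredictability for the same price.
print(libc.abs(-7))
```

The tradeoff from upthread is visible here too: the app binary stays small and picks up system updates automatically, but it runs against whichever library version the distribution happens to ship.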
The reason you don't see this on Linux is that there is no separation between 'system' and 'application' in its culture. Consequently, there has never been a "base system" to target or keep compatibility with, so applications have to either target a specific version of a specific distro (a waste of time) or include everything above the stable kernel ABI in their product (a waste of space).
It is actually a very simple problem to solve; it just isn't one the Linux community is interested in solving simply, so instead they invent ridiculously complicated tooling like package managers and Flatpak, introducing a bunch of unnecessary limitations and yet more parts to break and ruin your day.