The last comprehensive study I read indicates that they improve internal and external code quality by 76% and 88% respectively, while somewhat reducing productivity [1]. If you have papers that support your claim, I'd be interested in reading them, or in ones that refute the meta-study linked below.
This. It's why Vagrant was popular before the container revolution.
The killer app of Docker isn't the container; it's the depth and uniformity of the UX surrounding the container system. When that is broken by something on the host (non-x86 CPUs were a major pain for a while before popular images were cross-built and emulation got in the way), or by something just mildly different (Windows behind corporate firewalls that assign IPs already used by the Docker engine, for example), the ease of use falls away for non-power users and it's all painful again.
Tech like Docker for Windows, Rancher Desktop, and Lima has largely matured at this point, but somebody could make a new machine and then the process of gradual improvement starts all over again.
golden-ratio is very convenient. I also split the frame into two windows at most, and most of the time I focus on one window while the other takes just 25% of the screen space.
What I haven't found a solution for yet: when the window that was expanded to 75% of the space collapses to 25%, the notes I wanted to focus on get moved up or down. After some trial and error, I've taken to putting my cursor somewhere slightly below the text I want displayed, and most of the time it stays visible after the collapse.
I have yet to figure out how to select a region and have golden-ratio keep it in view after rebalancing.
I think zoom-mode is the more commonly recommended package for this functionality nowadays, but I'll defer to someone who knows more. I just know golden-ratio was a little janky when I tried it recently, but zoom works well.
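For anyone wanting to try either, a minimal init-file sketch (assuming use-package; both golden-ratio and zoom are real MELPA packages) might look like this:

```elisp
;; Option 1: golden-ratio -- resizes the selected window toward the golden ratio
(use-package golden-ratio
  :config
  (golden-ratio-mode 1))

;; Option 2: zoom -- same idea; (0.75 . 0.75) gives the focused window
;; roughly 75% of the frame's width and height
(use-package zoom
  :custom
  (zoom-size '(0.75 . 0.75))
  :config
  (zoom-mode 1))
```

Enable one or the other, not both, since they fight over window sizes.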
Gatekeeping is part of running a project -- you have the right to refuse contributions for any reason you'd like. They, likewise, have the right to fork your project and run the new one how they see fit. Do it too much, though, and you'll drive your project's popularity way down and nobody will use it.
Even with free (in both senses of the word) products, the market will always work itself out.
Copying isn't free, though. Every front-end application built on Electron, or running in a container that duplicates half the OS stack, burns more memory on my computer, forcing earlier upgrades of either the whole machine or its RAM with each passing year.
It's free to produce, but the cost of running ever more copied software keeps going up.
I think there are a number of per-unit costs to software, because each additional user exercises your edge cases more frequently. The risk goes up of users finding bugs and incompatible working environments. They demand more unusual features. And if you're successful, your growing user base attracts the attention of competitors.
If the software is coupled with hardware, its rising complexity makes the hardware more costly to develop and sustain, and may delay the introduction of newer or more valuable hardware features and products.
When consumer software still came on physical media, nearly every box advertised the minimum RAM, CPU, and system requirements needed to run the software.
That just isn't the case nowadays, so you don't know there will be trouble until it occurs at runtime.
I can limit the memory available to ALL the tools I use every day (I only do it for Docker and JVM stuff), set priority levels, etc., but it feels very weird for a brand-new laptop to hit swap after a few days of uptime.
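For what it's worth, the per-tool limits I mean look roughly like this (a command sketch; the image, jar, and script names are made up for illustration):

```shell
# Cap a container's memory and swap at 2 GiB
docker run --memory=2g --memory-swap=2g my-image

# Cap a JVM's heap at 1 GiB
java -Xmx1g -jar app.jar

# Lower a process's scheduling priority
nice -n 10 ./long-build.sh
```

The point is that each of these has to be set per tool; nothing enforces a sane default across everything on the machine.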
Browsers have been grown, not designed. The competitive pressure exerted on browsers to render ill-specified content and hacks has resulted in something whose behavior is essentially its own specification: what it does is what it does. That's the case with a lot of software, because we made the choice as a community of programmers to not formally verify the things that we build. Think of the origin of the blink tag [1]. We decided to be hackers and doers, not thinkers.
Just as testing changes the way you structure your software, designing via formal methods changes the code that you produce as well, and it will attract a different set of people than traditional software dev, just as architecture attracts a different set of people than carpentry.
The only thing I prefer about my Mac M3 Pro (work), hardware-wise, over my MSI w/4060 and 64GB RAM is the battery life. Everything runs slower on my Mac. There are some niceties (the color filter for my color-blindness, for example, and the Keychain), but I'm not a fan of:
* lack of USB ports, necessitating dongles
* lack of HDMI/DVI outs
* no camera lens cover
* uncomfortable chiclet keyboard
* oversized touchpad
* overheating
* lack of performance cores, leading to slower parallel compilations
I do appreciate the ease of use of macOS's network utilities especially, but it's definitely not my preferred machine.
1. https://doi.org/10.1016/j.infsof.2016.02.004