That analogy may not be suitable here, because the value proposition between aesthetics and function is different for visual art projects than for software. There is also the maintainability factor: most aged software (especially closed-source software in the private sector) changes maintainers every few years. Old maintainers most often lose access to the source code and become unreachable after leaving their jobs.
True, the "old maintainer" aspect does differ.
But what exactly does this mean in relation to the fact that what are special forms in other languages are function calls in Rye?
Is the problem that somebody could make their own control-structure-like functions? All these function calls follow exactly the same evaluation rules, which is not something you can say about "special forms" in "normal" languages, because special forms are precisely rules that break the regular evaluation rules.
Rye already has a lot of control-structure-like functions in its standard library, and many other functions that accept blocks of code but aren't related to control structures. Yes, a "stupid" person can write "stupid" code in Rye, but you don't need much flexibility in any language to write stupid code.
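To make the distinction concrete, here is a rough sketch in Python (not Rye syntax) of a user-defined control-structure-like function. Because Python evaluates arguments eagerly, the "blocks" have to be passed as zero-argument callables so the function can decide which one to run; this is loosely analogous to passing blocks of code to an ordinary function in Rye, where no special evaluation rule is needed. The names `either` and `unless` are just illustrative choices here.

```python
def either(cond, then_block, else_block):
    """Run one of two code blocks depending on cond.

    A plain function: both blocks are passed as thunks
    (zero-argument callables), so nothing is evaluated
    until the chosen branch is explicitly called.
    """
    return then_block() if cond else else_block()

def unless(cond, block):
    """A user-defined control structure: run the block only when cond is false."""
    if not cond:
        return block()
    return None

# Ordinary function-call semantics, yet it behaves like control flow:
result = either(3 > 2, lambda: "big", lambda: "small")
print(result)  # big
```

The point is that once blocks are first-class values, "control structures" need no special treatment by the evaluator; they are library code like everything else.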
I fully admit there are languages that are more suited for teams, and languages that are more suited for solo developers, for this reason.
That was also my point in mentioning the "million pixel website": that "art" is directionless and crowdsourced, while a painter can use the same pixels, or even more flexible tools, to create beautiful images.
> I'm not quite sure why you'd want to run Erlang on it, but the hardware exists.
Erlang was invented before IoT was a thing, to facilitate distributed computing for telecommunications in a highly reliable manner. It makes perfect sense to adapt it to driving fleets of cheap IoT devices.
Low resource footprint, written in Go, embeddable in any Go project as a library, compiles for mobile with little to no modification, supports config changes without a restart, and has a plugin API.
These were the reasons why we used it in my previous job.
It would be "artificial" only if LLMs performed badly despite having an equal amount of data containing examples of eastern customs in their training sets. Even that's arguable, since we don't (didn't) have benchmarks for this particular case before.
It's no different from GPT answering a prompt with "That's a wonderful idea!", except in a language other than English. It's a good thing if LLMs can do this in every language and for any culture, with no compromise.
I elaborated a bit when I edited my post, but to be more specific, I think LSP is a protocol that fails at its stated goals. Every server is buggy as hell and has its own quirks and behaviors, so editors that implement LSP have to add workarounds for every server, which renders the point of LSP moot. It's the worst of both worlds: editors are still duplicating effort, but with fewer, if any, of the benefits of tools tailor-made for a specific editor-language combination. And that's not even touching on the protocol's severe performance issues.
Unsurprisingly, the vast majority of servers work much better with VSCode than other editors. Whether this was a deliberate attempt by Microsoft to EEE their own product, or simply a convenient result of their own incompetence, is ambiguous.
LSP is underspecified for sure. I don't think this situation is limited to LSP, though. It happens whenever software interfaces are underspecified (or specified post hoc), with a strong dependence on a reference implementation (VSCode in this case) and no canonical validation test suite.
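One commonly cited hole of this kind: LSP defines `Position.character` in UTF-16 code units (position-encoding negotiation only arrived in 3.17), but servers frequently count Unicode code points or UTF-8 bytes instead. The three counts silently diverge as soon as a line contains a non-BMP character. A small Python sketch of the divergence (the example line is invented):

```python
# Column of the identifier "y" on a line containing an emoji,
# counted three different ways. LSP mandates UTF-16 code units;
# out-of-spec servers often report one of the other two.
line = 'x = "😀"; y'
cursor = line.index("y")

codepoints = cursor                                   # code-point count
utf8_bytes = len(line[:cursor].encode("utf-8"))       # byte offset
utf16_units = len(line[:cursor].encode("utf-16-le")) // 2  # what LSP specifies

print(codepoints, utf8_bytes, utf16_units)  # 9 12 10
```

Each editor then needs per-server workarounds for exactly this sort of disagreement, which is the "worst of both worlds" effect described above.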
Exactly the same thing happened with VST audio plugins. Initially Cubase was the reference host; later Ableton Live became the reference, and it was impossible to convince plugin developers that they were out of spec because "it works in Ableton".
My impression, having programmed against both the LSP and VST specifications, is that defining well-specified interfaces without holes in them is not a common skill. Or perhaps such a spec (ISO C may be an example) is simply too expensive to develop and maintain.
> Linux separates things such that I was looking at C files in drivers/platform/x86 and header files in include/linux/platform_data/x86. And the ACPI code lives other places as well. It’s all very orderly, but at times it felt like navigating a grocery store that arranges products in alphabetical order. Logical, but not exactly cozy.
The ACPI code largely lives separately in Linux because it was contributed by Intel and is (as far as possible) intended to be dual-licensed GPL/BSD to ensure non-Linux OSes benefit from core improvements.