For example, many Linux distributions want to compile everything in their main repositories from source, all the way down. There were comments to that effect in response to the mailing list announcements. Your average Linux distribution probably had Go available already, but it previously wasn't on the critical path of anything.
Why? Would you compile the compiler from source as well? From what? You'd need to compile the compiler's compiler from source too, right? Where does it stop? And why is that stopping point any more valid a decision than the one that doesn't require building the build system from source?
The same can be extended to other tools that are generally used in builds, like make. I've never heard someone say that they need to build make from source so they can build X, unless of course you're using something like Linux From Scratch.
I build cmake from source but TBF that's because projects sometimes depend on specific version ranges for features that they use (either brand new or recently removed).
You compile the compiler from source, then use the result to compile the compiler from source again, and from that point on every further self-compilation should produce an essentially identical binary (unless something like Reflections on Trusting Trust is in play, but then a lot of bets are off).
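A minimal sketch of that convergence check, written in Go since that's the language at issue here; the stage paths are hypothetical stand-ins, and real bootstraps (e.g. gcc's compare step) also strip timestamps and embedded paths before comparing:

```go
// bootstrap_compare.go: sketch of the "a self-compiled compiler should
// reproduce itself" check. The paths are hypothetical stand-ins for the
// binaries produced by successive bootstrap stages.
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"log"
	"os"
)

// digest returns the SHA-256 of the file at path.
func digest(path string) ([sha256.Size]byte, error) {
	var sum [sha256.Size]byte
	f, err := os.Open(path)
	if err != nil {
		return sum, err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return sum, err
	}
	copy(sum[:], h.Sum(nil))
	return sum, nil
}

func main() {
	stage2, err := digest("build/stage2/bin/cc") // compiler built by the self-built compiler
	if err != nil {
		log.Fatal(err)
	}
	stage3, err := digest("build/stage3/bin/cc") // compiler built by stage2
	if err != nil {
		log.Fatal(err)
	}
	if stage2 == stage3 {
		fmt.Println("bootstrap converged: stage2 and stage3 are bit-identical")
	} else {
		fmt.Println("mismatch: the toolchain is not reproducing itself")
	}
}
```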
Not really relevant here, but this is actually exactly how it's done in embedded build systems like Yocto: everything, from gcc to make and so on, is built from source (I believe the host compiler is only used in a 3-stage bootstrapping process for gcc).
And in these cases you really see the impact of internal dependencies (building rust/llvm takes around 30-40% of the entire build). The upside is that you can patch and debug absolutely any part of the system.
The main thing is that it gives you the ability to adjust almost every detail, and those details have a tendency to become important in embedded applications. To give one extreme example, an embedded Intel board had a hardware erratum which basically meant a very common sequence of instructions was unreliable; the workaround involved patching the compiler to avoid emitting it, but then basically everything needed rebuilding with the patched compiler. Yocto lets you do that, and it's even fairly easy; most traditional distros would struggle (Gentoo and Nix are the other options, but I don't know how well they can do cross-compilation, which is also a big part of Yocto).
I've used it a fair amount: to build x86 on x86_64, to build 32-bit ARM on x86_64, etc. It will also let you build x86_64 on x86_64 but for a different CPU type, so I can build packages/binaries for older systems on a newer one (e.g. binaries for a machine with no AVX-whatever can be built on a current-gen machine, where compilation goes way faster, while still running on the older system).
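Since the thread started with Go: the Go toolchain exposes the same "build on a fast new box, target the old one" idea through the GOOS/GOARCH/GOAMD64 environment variables, where GOAMD64=v1 is the baseline microarchitecture level that needs no AVX. A hedged sketch, with a hypothetical ./cmd/tool package, scripted in Go for consistency:

```go
// crossbuild.go: drive "go build" for a baseline amd64 target, so the
// resulting binary runs on older CPUs without AVX even though it is
// built on a current machine.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("go", "build", "-o", "tool-baseline", "./cmd/tool")
	cmd.Env = append(os.Environ(),
		"GOOS=linux",   // target OS
		"GOARCH=amd64", // target architecture
		"GOAMD64=v1",   // baseline level: SSE2 only, no AVX/AVX2 required
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```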
Ah, cool. It's been a long time since I used it in anger, and it was not so hot then (Yocto was, I think, the first system out of the 3-4 I tried that actually gave me a functioning cross-toolchain, and Gentoo's crossdev was one of those 3-4. This was like a decade ago, though).
Maybe longer than a decade; I've been cross-compiling on Gentoo for at least 13 years without issue (that's 2012, for those playing along at home). I say "at least" because I can't remember doing it prior to the Raspberry Pi...
I think what they mean is that having all the source code that makes up your system, as well as building everything yourself, allows you to climb down to any level of the system and adjust the parts as needed, including the compilers and the build system. It gives you full control over everything running on the machine.
This kind of control is not commonly seen because most people don't want or need to build it all from source. But it makes sense in some contexts, like embedded or security-critical systems.
Bootstrapping everything is exactly how it's done correctly, and it's how it's actually done in practice in Guix.
I mean, sure, if you have a business to run you outsource this part to someone else, but you seem to think it's not done at all.
Supply chain attacks have been happening pretty much non-stop for the past few years. Do you think it's a good idea to use binary artifacts when you don't know how they were made (and thus what's in them)? Especially for build tools, compilers and interpreters.
>And why is that stopping point any more valid a decision than the one that doesn't require building the build system from source?
Because you only have to review a 250-byte binary (implementing an assembler) manually. Everything else is indeed built from source, including make, all the way up to PyPy, Go, Java and .NET (and indeed Chromium).
I didn't realize until I read this, but all software engineers would benefit from building everything from source at least once as an educational experience.
I've never gone all the way to the bottom, but now that I know it's possible I cannot resist the challenge to try it.
>Because you only have to review a 250-byte binary
It's dishonest to not mention the millions upon millions of lines of source code you also have to verify to know that dependencies are safe to use. Compiling from source doesn't prevent supply chain attacks from happening.
In my opinion there is more risk in trying to get a safe Siso binary by going through this whole complicated build-everything-from-scratch process than in Google providing a trusted binary to use, since you have to trust more parties not to have been compromised.
That "probably" is doing a lot of heavy lifting; it's entirely your personal choice and responsibility to build the build system from source. The choice of language for said build system wasn't done for your particular preference.
Anyway, installing Go is easy enough, especially for someone who apparently builds Chromium from source already.