A handful of other areas in Chromium are configured using Starlark. This particular use is in a very different capacity from Bazel's - the Bazel equivalent in Chromium is GN, and I have seen no signs that GN will be replaced any time soon.
GN at least used to generate Ninja files. So I suppose now it will be generating Siso files?
edit: asked and answered, Siso is a "drop-in" replacement for Ninja, so presumably it can read .ninja files, and so GN probably didn't need to change much to accommodate it.
Kinda impressive and terrifying that Chromium needs its own build system. Kinda strange that Bazel was right there, also from Google, and they not only chose not to use it, but also referenced it in the name of the new tool.
>Kinda impressive and terrifying that Chromium needs its own build system.
This is one area where I think it makes some sense to build your own. Most projects only use a fraction of the capability of a typical build system. The last time I did this I managed the whole thing in just 300 lines of code.
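To give a flavor of what I mean: the heart of a minimal build system is just "rebuild if the target is missing or older than its inputs". Here's a rough sketch of that core in Python (not my actual code - the file names and the rule at the bottom are made up for illustration):

    import os
    import subprocess

    def needs_rebuild(target, sources):
        # Rebuild if the target is missing or any source is newer than it.
        if not os.path.exists(target):
            return True
        target_mtime = os.path.getmtime(target)
        return any(os.path.getmtime(s) > target_mtime for s in sources)

    def build(target, sources, command):
        if needs_rebuild(target, sources):
            print("building", target)
            subprocess.run(command, check=True)

    # Example rule: link an executable from two object files.
    build("app", ["main.o", "util.o"], ["cc", "-o", "app", "main.o", "util.o"])

Most of the remaining lines in something like this end up being rule definitions and dependency discovery.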
I strongly disagree. While you only use a fraction of the features, you often mistake "it works" for "it is right" when something subtle is wrong. You end up either with something that doesn't work or dedicating a lot of effort to fixing weird corner cases you didn't think of.
>You end up either with something that doesn't work or dedicating a lot of effort to fixing weird corner cases you didn't think of
That's not my experience at all. Part of my job is to think about weird corner cases; I'm not sure why a build script should be any worse than what I normally deal with.
Now that I'm on a real computer (keyboard) maybe I can elaborate a bit.
It isn't hard to create a basic build system. However, there are a lot more rare corner cases than anyone who sets out to create one as a side project realizes, so in almost all cases their build system works for the easy/common situations but breaks in lots of weird ones. Often those corner cases happen on setups that make sense for exactly one person in the world but look stupid to everyone else. (think cross compiling on haiku-os for a vax running netbsd)
Which is to say, if you want to create a build system, that needs to be your only goal. You can maybe make the goal creating the build system that some project (Chromium in this case) needs, but you won't be doing any hacking on the project (Chromium) itself because all your time will be spent on the build system.
Since everyone I know thinks of build systems as a side project, they should just choose one that works. I use cmake, which works well despite the warts. I've heard good things about build2 and bazel, but I have no experience with either. I have also used ninja (that is, handwritten ninja files), but it isn't intended for that purpose, so in general I wouldn't recommend it.
It isn't worse than other corner cases - but for almost everyone it isn't what their boss wants them to spend all their time on. Even if working for your own fun you probably want to do something else.
If Android is anything to go by, migrating build systems is a risky endeavour. Siso claims compatibility with Ninja, so I’m guessing this route was deemed the easier way to make incremental improvements.
Even so, a build system migration of any kind to anything other than Bazel, given the design goals of Bazel and the heritage of Chrome, is an implicit indictment of Bazel itself.
Most likely Chromium needs to build on a system which doesn’t support Java. Like ChromeOS. That excludes Bazel, at least unless cross-compilation is supported (likely a monstrous headache for ChromeOS). It’s a good reason to rewrite Bazel in Rust.
Yeah, that's fair, but if I understood right - this is a custom-built tool made to be compatible with Ninja.
That work building “yet another build tool” could have gone into programmatically generating Bazel BUILD files. So there was an active choice here somewhere; we just don’t know all the information as to why effort was diverted away from Bazel and toward building a new tool.
I trust them to make good decisions, so I would like to understand more. :)
Seems like Siso supports Starlark, so maybe it's a step in Bazel's direction after all.
There are a ton of tools and custom logic used by/with/for the GN ecosystem in Chromium that I imagine would be difficult to port.
This tool is substantially less complex than Bazel, and it is not a reimplementation of Bazel. Ninja's whole goal in life is to be a very fast local executor of the command DAG described by a ninja file, and siso's only goal is to be a remote executor of that DAG.
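In sketch form, both tools boil down to walking that DAG in dependency order and running each command; the main difference is where the command runs. A toy illustration in Python (the targets and commands are made up, and remote dispatch is stubbed out as a local call):

    import subprocess
    from graphlib import TopologicalSorter  # stdlib, Python 3.9+

    # A toy command DAG in the spirit of a .ninja file:
    # target -> (inputs, command)
    dag = {
        "main.o": ([], ["cc", "-c", "main.c", "-o", "main.o"]),
        "util.o": ([], ["cc", "-c", "util.c", "-o", "util.o"]),
        "app": (["main.o", "util.o"], ["cc", "-o", "app", "main.o", "util.o"]),
    }

    def execute(command):
        # Ninja runs the command locally; siso would instead ship the
        # command, its inputs, and the toolchain to a remote backend.
        subprocess.run(command, check=True)

    sorter = TopologicalSorter({t: set(deps) for t, (deps, _) in dag.items()})
    for target in sorter.static_order():
        execute(dag[target][1])

(Real ninja also runs independent branches of the DAG in parallel and skips up-to-date targets, but the shape is the same.)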
This is overall less complex than their first stabs at remote execution, which involved standing up a proxy server locally and wrapping all ninja commands in a "run a command locally which forwards it to the proxy server which forwards it to the remote backend" script.
It reminds me of how Blaze (which became Bazel) was designed to be mostly compatible with the build files of a previous build system written in Python.
I wonder if AOSP will also move over to Siso. Since it is advertised as a drop-in replacement, it would take fewer resources than the Bazel migration, which got canceled. The readme explicitly calls out a feature used by AOSP, so it is plausible that thought was put into it.
Sure, but only that team gets to put “designed and implemented new build system” on their resume. See how many meet/hangout/allo variants came out of Google. In companies of that size the “here” in NIH is a lot more localized to smaller units.
For example, many Linux distributions want to compile everything in their main repositories from source, all the way down. There are comments to that effect in response to the mailing list announcements. Your average Linux distribution probably had Go available, but it previously wasn't on the critical path of anything.
Why? Would you compile the compiler from source as well? From what? You'd need to compile the compiler's compiler from source too, right? Where does it stop? And why is that stopping point any more valid a decision than the one that doesn't require building the build system from source?
The same can be extended to other tools that are generally used in builds, like make. I've never heard someone say they need to build make from source so they can build X, unless of course you're using something like Linux From Scratch.
I build cmake from source but TBF that's because projects sometimes depend on specific version ranges for features that they use (either brand new or recently removed).
You compile the compiler from source, then you use the compiler compiled from source to compile the compiler from source again, and so on until two successive rounds produce essentially identical compilers - at that point the seed compiler's influence has washed out, and that's where it stops (unless anything like Reflections on Trusting Trust is in play, but then a lot of bets are off). GCC's own bootstrap does exactly this, comparing stage 2 against stage 3.
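The convergence check at the end is essentially just a byte-for-byte comparison of the last two stages. A minimal sketch in Python (the paths are hypothetical):

    import filecmp
    import sys

    # Compare the stage2 and stage3 compiler binaries. If they are
    # byte-identical, the seed compiler's influence has washed out.
    stage2, stage3 = sys.argv[1], sys.argv[2]
    if filecmp.cmp(stage2, stage3, shallow=False):
        print("bootstrap converged: stage2 == stage3")
    else:
        sys.exit("stage2 and stage3 differ: bootstrap did not converge")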
Not really relevant here, but this is actually exactly how it's done in embedded systems like Yocto: everything from gcc to make is built from source (I believe the host compiler is used in a 3-stage bootstrapping process for gcc).
And in these cases you really see the impact of internal dependencies (building rust/llvm takes around 30-40% of the entire build). The upside is that you can patch and debug absolutely any part of the system.
The main thing is that it gives you the capability to adjust almost every detail, and those details have a tendency to become important in embedded applications. To give one extreme example, an embedded Intel board had a hardware erratum which basically meant a very common sequence of instructions was unreliable; the workaround involved patching the compiler to avoid emitting it, but then basically everything needed building with the patched compiler. Yocto lets you do that, and it's even fairly easy; most traditional distros would struggle (Gentoo and nix are the other options, but I don't know how well they can do cross-compilation, which is also a big part of Yocto).
I've used it a fair amount, to build x86 on x86_64, to build 32-bit arm on x86_64, etc. It will also let you build x86_64 on x86_64 but for a different CPU type, so I can build packages/binaries for older systems on a newer one (e.g. binaries for a system with no avx-whatever can be built on a current-gen machine, where the compilation goes way faster).
Ah, cool. It's been a long time since I used it in anger, and it was not so hot then (Yocto was, I think, the first system out of the 3-4 I tried that actually gave me a functioning cross-toolchain; Gentoo's crossdev was one of the others. This was like a decade ago though).
I think what they mean is that having all the source code that makes up your system, as well as building everything yourself, allows you to climb down to any level of the system and adjust the parts as needed, including the compilers and build system. It gives you full control of everything running on the machine.
This kind of control is not commonly seen because most people don't want or need to build it all from source. But it makes sense in some contexts, like for embedded or security critical systems.
Bootstrapping everything is exactly how it's done correctly - and how it's actually done in practice in Guix.
I mean, sure, if you have a business to run you outsource this part to someone else - but you seem to think it's not done at all.
Supply chain attacks have been happening pretty much non-stop for the past few years. Do you think it's a good idea to use binary artifacts when you don't know how they were made (and thus what's in them)? Especially for build tools, compilers and interpreters.
>And why is that stopping point any more valid a decision than the one that doesn't require building the build system from source?
Because you only have to review a 250-byte binary (implementing an assembler) manually. Everything else is indeed built from source, including make, all the way up to PyPy, Go, Java and .NET (and indeed Chromium).
I didn't realize until I read this, but all software engineers would benefit from building everything from source at least once as an educational experience.
I've never gone all the way to the bottom, but now that I know it's possible I cannot resist the challenge to try it.
>Because you only have to review a 250-byte binary
It's dishonest to not mention the millions upon millions of lines of source code you also have to verify to know that dependencies are safe to use. Compiling from source doesn't prevent supply chain attacks from happening.
In my opinion there is more risk in going through this whole complicated build-everything-from-scratch process to get a safe Siso binary than in using a trusted binary provided by Google, since you have to trust more parties not to have been compromised.
That "probably" is doing a lot of heavy lifting; it's entirely your personal choice and responsibility to build the build system from source. The choice of language for said build system wasn't done for your particular preference.
Anyway, installing Go is easy enough, especially for someone who apparently builds Chromium from source already.
They want to "improve their build processes", huh? Yeah, there are some problems, like ginormous build times, for which there are some very well-known and pedestrian solutions ("physical code design"). But these aren't impressive enough to get promoted.
I just wanted to point out, after reading the thread of the Chromium maintainers, that it looked to me like they rushed out this change without proper consideration of all possible implications. Other comments there show that the developer isn't intimately familiar with the Chromium project at large, yet is making changes with very wide impact.
I apologize if anyone got me wrong, especially if there are any Googlers here who took this personally.
Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
I wonder if the end goal is to use Bazel for Chromium and Siso is an incremental step to get there