I mean, this is fine for an advertisement, but it woefully oversells clang if you're trying to meaningfully compare these things. A lot of what they're saying is misleading and/or false.
In the early days, clang compiled significantly faster than GCC, but it also barely implemented any code optimization. Now that clang generates code which is, generally, about 90% as fast as GCC's, its compilation speed and memory usage have understandably bloated considerably.
Note that I say 90% as fast generally. It still hasn't caught up completely.
Clang pioneered LTO, but GCC does it better now.
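For anyone who wants to compare the two themselves, a minimal sketch of enabling LTO on each toolchain (a.c/b.c are placeholder files; clang's ThinLTO wants an LTO-aware linker such as lld):

    # GCC: classic whole-program LTO
    gcc -O2 -flto -c a.c b.c
    gcc -O2 -flto a.o b.o -o prog

    # Clang: ThinLTO, the scalable incremental variant
    clang -O2 -flto=thin -c a.c b.c
    clang -O2 -flto=thin -fuse-ld=lld a.o b.o -o prog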
Other people have mentioned gcc's previously terrible error messages and its inability to dump ASTs.
Which is why some people get a bit nervous when one tool (clang and the LLVM universe in general; likewise curl, WebKit) becomes such a massive de facto standard that it completely marginalizes competition. This is true even if something is Open Source: X11 edged out everything else in that general space (MGR, NeWS) to the point that those two things I just mentioned are pretty well forgotten, and we don't know what we lost because of that.
It's possible some spaces can't have more than one player due to network effects (like network protocols, such as the Web); the history of the Internet looks like a Pod People or Borg plot where a more diverse ecosystem is consumed and replaced by a single all-consuming entity that gradually assimilates all distinct individuals. What we lost in diversity we gained in losing bizarre email gateways, I suppose. But languages are meant to be written to actual, real-world, written down standards, right? No possibility of friction when moving from one compiler to another, right?
> GCC is licensed under the GPL license. Clang uses a BSD license ...
Looks like this hasn't been updated in a while. As of Clang 9.0 they migrated everything to the Apache 2.0 license, which is not nearly as permissive as BSD. Apache 2.0 mixes US Contract law with Copyright law, and that is considered wholly "not permissive enough" by many, most notably OpenBSD which is stuck on Clang 8.0.1. They also migrated the libc++/libc++abi C++ standard libraries from MIT to Apache 2.0 as well (which was a real dick move), but they don't care.
Interesting, I didn't know that. And that after the BSDs were quite happy that a viable non-GPL compiler had arrived on the market. Wasn't their final switchover just in 2017?
I wish more people would have seen through this, but alas... This is the real reason: corporate pressure and agenda, not the best interest of the open source community.
"1) Some contributors are actively blocked from contributing code to LLVM."
> These contributors have been holding back patches for quite some time that they’d like to upstream. Corporate contributors (in particular) often have patents on many different things, and while it is reasonable for them to grant access to patents related to LLVM, the wording in the Developer Policy can be interpreted to imply that unrelated parts of their IP could accidentally be granted to LLVM (through “scope creep”).
[..]
> This is a complicated topic that deals with legal issues and our primary goal is to unblock contributions from specific corporate contributors.
The legally dubious relicensing was not only unnecessary, it is now blocking use of 9.0 and later, and future contributions, from OpenBSD, which has a long history of opposing Apache 2.0 and which uses LLVM/Clang as the default compiler for its kernel/userland and a ports tree of 10,000 software packages.
How is a reverse patent retaliation clause in the Apache v2.0 license not in the best interest of the LLVM community? It provides more patent protection for LLVM.
At this point, OpenBSD has decided that two very popular open source licenses (GPLv3 and Apache 2) are unacceptable to them. That has walled them off from a lot of open source software. They seem to think that it's incumbent on everyone else to adopt licenses they like. They are going to continue to be disappointed.
The point of open source is that individuals from the community can read and modify their tools. Anything that makes that harder is a bad thing, it might be justified but it’s still bad.
As far as I know that page is many years old, from when clang was very new and comparing to e.g. Elsa made sense. The gcc comparison is probably pretty out of date.
I mean, Elsa and PCC are cool, but who actually uses them? Clang and GCC are pretty much all there is, but regardless it's nice to know what people have tried/are trying to do. These bullet points in particular make me want to tinker with PCC and see how much it is (and is not) capable of:
> The PCC source base is very small and builds quickly with just a C compiler.
> PCC doesn't support Objective-C or C++ and doesn't aim to support C++.
To make it easier to share code with other systems, Plan 9 has a version of the compiler, pcc, that provides the standard ANSI C preprocessor, headers, and libraries with POSIX extensions. Pcc is recommended only when broad external portability is mandated. It compiles slower, produces slower code (it takes extra work to simulate POSIX on Plan 9), eliminates those parts of the Plan 9 interface not related to POSIX, and illustrates the clumsiness of an environment designed by committee. Pcc is described in more detail in APE—The ANSI/POSIX Environment, by Howard Trickey.
I'm obviously aware that it wasn't the main compiler/compiler suite for Plan 9 (of which I've submitted links to papers about quite a few times), but it was there.
When compilation speed matters a lot more than runtime performance. I use it when working on plain C codebases that are slightly larger; it compiles about 10-20x faster than GCC and clang at -O0, meaning 0.3s to build my code vs 5s.
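If you want to reproduce that kind of comparison on your own project, something like the following works; the 0.3s/5s figures above are from my machine, so treat them as illustrative:

    # force a full rebuild with each compiler and time it
    time make -B CC=pcc   CFLAGS=-O0
    time make -B CC=gcc   CFLAGS=-O0
    time make -B CC=clang CFLAGS=-O0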
Considering how old this page is (2009, it seems), when they update it they should include MSVC. I can't imagine Microsoft will keep it closed source for another 10 years.
MSVC’s STL (which I work on) is now open source, under the Apache 2.0 with LLVM Exception license: https://github.com/microsoft/STL . At this time, there are no plans to open source the MSVC compiler.
"Clang can serialize its AST out to disk and read it back into another program, which is useful for whole program analysis.
GCC does not have this. GCC's PCH mechanism (which is just a dump of the compiler memory image) is related, but is architecturally only able to read the dump back into the exact same executable as the one that produced it (it is not a structured format)."
Clang, you had me at 'hello'.
The point I outlined above is just icing on the cake!
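For the curious, that serialization is exposed on the command line via -emit-ast (foo.c is just a placeholder here):

    # write the serialized AST for foo.c out to foo.ast
    clang -emit-ast foo.c

    # a libclang-based tool can then reload foo.ast without reparsing,
    # e.g. via clang_createTranslationUnit(index, "foo.ast")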
GCC can dump its internal representation at almost every stage of the compilation process.
However, it's intended for diagnosing issues in the compiler or in plugins; it's definitely not meant to be used as an interoperable format to be loaded back into a program.
gcc doesn't even provide a way to specify the output path for the dump files. Too bad, since reliable AST dumping could enable an AST-based ccache (instead of today's preprocessed-code-based ccache) for compilation caching of preprocessor-less languages.
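For reference, the dumps being discussed look like this (foo.c is a placeholder):

    # dump every GIMPLE pass; produces foo.c.* dump files next to foo.o
    gcc -c foo.c -fdump-tree-all

    # the RTL passes can be dumped too
    gcc -c foo.c -fdump-rtl-all

    # clang's textual AST dump, for comparison (also not reloadable)
    clang -Xclang -ast-dump -fsyntax-only foo.c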
I don't think this is up to HN standards.