> It is easy to see that C++ is fit as a general-purpose programming language–adoption by millions is a testament to that.
I really wish the std would drop this pretense and focus on C++'s strong point: Continue being the fastest systems language possible.
Everywhere in the std lib you can see compromises that require rewriting substantial portions for any real time application.
Things like: shared_ptr eagerly using atomics whenever possible; std::string allocating; the lack of built-in faster std::allocator replacements like bump allocators, memory pools, etc; no lockless and wait free concurrency primitives; no architecture-aware thread pools or even architecture descriptions; no IPC primitives; etc.
Considering how many C++ developers are working on things like games, high performance server applications, databases, and operating systems, it's just bizarre how inappropriate the standard headers are for these tasks.
Even something trivial like casting bytes off the wire to a packed struct is an exercise in frustration due to aliasing rules that should have been encoded in the type system and invisible to the user.
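Something like this, for anyone who hasn't hit it (the Header layout here is made up): the tempting reinterpret_cast of the byte buffer is undefined behaviour under the aliasing rules, while memcpy (or C++20's std::bit_cast for single objects) is the sanctioned workaround and typically compiles to the same plain loads.

```cpp
// Hypothetical wire-format header; memcpy is the well-defined way to read it
// from a raw byte buffer (endianness handling omitted for brevity).
#include <cstdint>
#include <cstring>

struct Header {
    std::uint32_t magic;
    std::uint16_t length;
    std::uint16_t flags;
};

Header parse_header(const unsigned char* wire) {
    // Header h = *reinterpret_cast<const Header*>(wire);  // undefined behaviour (aliasing)
    Header h;
    std::memcpy(&h, wire, sizeof h);  // well-defined; optimizers usually turn this into plain loads
    return h;
}
```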
This is not pedantic at all. Aliasing issues are really subtle and hinder so many optimizations. It's not clear that there is a way to even fix this without substantially breaking backwards compatibility.
Yeah the number of cases where otherwise really trivial optimizations are prevented because of soundness concerns regarding aliasing is simply enormous.
> If you want to be pedantic -- in theory C++ can never be the fastest systems language possible because of the language's rules about aliasing.
True but the only practical competitor is Rust, and they gain some alias information (exclusive &mut references) and lose other aliasing information (type punning is fully allowed in unsafe code all the time).
Not so, unsafe code still has restrictions in Rust due to pointer provenance. You can override these restrictions but it's very much an explicit operation, the default is that pointers with incompatible provenance will not alias. This is how Rust models its equivalent to TBAA, and the concept is spreading to C/C++ as well.
> Not so, unsafe code still has restrictions in Rust due to pointer provenance.
That's separate from what I'm referring to. In C++ a float* and an int* can never alias. In Rust a *mut f32 and a *mut u32 are allowed to. Meaning in a situation where aliasing can't be established from provenance (e.g. in a separately compiled function, where the compiler can't tell at compile time where the float and the int came from), C++ is able to use types to rule out aliasing, but Rust cannot.
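A tiny sketch of what that buys the C++ compiler (hypothetical function, not from any real codebase):

```cpp
// Because float* and int* may not alias under C++'s strict aliasing rules,
// the compiler may assume the store through i cannot change *f, and reuse
// the first load. With two int* parameters it would have to reload.
float sum_twice(float* f, int* i) {
    float a = *f;
    *i = 42;
    return a + *f;   // *f can be assumed unchanged by the store above
}
```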
Your specific gripes seem reasonable (I suspect design-by-committee plays a big part), but:
> I really wish the std would drop this pretense and focus on C++'s strong point: Continue being the fastest systems language possible.
You've essentially described C, not C++. C++ has a different philosophy and makes different trade-offs.
C++ is at least still pretty committed to the "you only pay for what you use" principle. As far as I know RTTI is the only real exception (corrections welcome), but even RTTI can be disabled in many compilers.
Some of your gripes with the standard library can be addressed with libraries, perhaps from the Boost project.
I'd disagree pretty strongly with that. C is more focused on being relatively simple to implement and backwards compatibility with the past 50 years. (I read a blog post by a C committee member talking about that recently, wish I could find the link)
Just look at the garbage fire which is the standard library. qsort. strtok. rand.
C++ should (in theory at least) be able to match or surpass C for _any_ performance benchmark, because it simply gives you more tools in your toolbox. For example, C is never going to be able to beat std::sort, because qsort can't monomorphize on the comparison function; it has to call through a function pointer.
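To illustrate (a sketch, not a benchmark):

```cpp
// qsort calls the comparator through a function pointer on every comparison;
// std::sort is instantiated for the comparator's type, so the call can be inlined.
#include <algorithm>
#include <cstdlib>

int cmp_int(const void* pa, const void* pb) {
    int a = *static_cast<const int*>(pa);
    int b = *static_cast<const int*>(pb);
    return (a > b) - (a < b);
}

void sort_c(int* v, std::size_t n) {
    std::qsort(v, n, sizeof(int), cmp_int);                    // indirect call per comparison
}

void sort_cpp(int* v, std::size_t n) {
    std::sort(v, v + n, [](int a, int b) { return a < b; });   // comparison inlined
}
```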
I'm not saying C++ is perfect either (looking at you, unordered_map and regex). But I am saying people look at C with rose-colored glasses.
Yeah, I think a degree of complexity is required to achieve being the fastest compiled language.
To achieve that, you need stuff like monomorphized templates, actual arrays and slices being distinguished from pointers, well-defined rules on pointers (like Rust's rules about references and mutable references, or strict aliasing in C++), and the freedom to let the compiler reorder structure fields for better alignment and space usage. Runtime polymorphism, like virtual functions in C++, is better as a core language feature rather than something implemented as a struct of function pointers, because that way the compiler can devirtualize some calls.
All of these things, I think most C programmers would be against adding to C. And pointer rules are historically controversial; for example, Dennis Ritchie was against adding noalias to C in 1988[1].
In this thread: all the people that have never looked at C++ disassembly. Maybe the C++ "language server" of your choice should have a mode where it prints all the constructors destructors copy constructors and what not on top of your code?
"Zero cost abstraction" is a big fantasy. C++ is the king of hidden control flow and the costs are everywhere.
As someone who reads C++ disassembly/decompilation for a living... Yeah kinda. It depends.
If the target code is full of virtual function calls, yeah it gets gross. But if you need dynamic dispatch, C isn't going to be any better, you'll just be reinventing vtables by hand. Similarly, resource acquisition and release happens in C too, you just have to do it by hand instead of letting the destructor do it for you.
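For what it's worth, a rough sketch of that comparison (names are made up): both end up as an indirect call, but only the built-in mechanism gives the compiler the type information it needs to devirtualize when possible.

```cpp
// Hand-rolled dispatch, C style: a struct of function pointers you maintain yourself.
struct shape_ops { double (*area)(const void* self); };
struct c_circle  { const shape_ops* ops; double r; };

double c_circle_area(const void* self) {
    const c_circle* c = static_cast<const c_circle*>(self);
    return 3.14159 * c->r * c->r;
}
const shape_ops c_circle_vtable = { c_circle_area };

// The same thing with a virtual function: the compiler builds the vtable for you.
struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};
struct Circle : Shape {
    double r;
    explicit Circle(double radius) : r(radius) {}
    double area() const override { return 3.14159 * r * r; }
};
```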
One ultra gross thing I see all the time in C++ disassembly though: constantly creating copies of a std::string, using them once for a comparison or something trivial, and then throwing them away. Multiple times in the same function. It's the developer's fault, they shouldn't be creating new objects, they should be passing a pointer or a string_view or something. Unfortunately, C++ is copy rather than move by default, so it's too easy to do this by accident.
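The fix is usually trivial once you spot it; a sketch of the pattern and the cheap alternative (made-up function names):

```cpp
#include <string>
#include <string_view>

bool is_admin_copy(std::string name)      { return name == "admin"; }  // copies (and may allocate) on every call
bool is_admin_view(std::string_view name) { return name == "admin"; }  // no copy, no allocation
```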
Another gross thing? (Not C++ specific) 20 functions in a row that are just a single return instruction. Since distinct functions have to have distinct addresses (that's my understanding at least), the compiler/linker can't easily fold identical functions together. Implementations can still do that as long as everything observably works the same (the "as-if" rule), and I know some compilers do. But in practice, it seems they suck at it. So I get to look at 20 lines of assembly in a row that are just `bl`.
> In this thread: all the people that have never looked at C++ disassembly. Maybe the C++ "language server" of your choice should have a mode where it prints all the constructors destructors copy constructors and what not on top of your code?
That tool is called Godbolt at gcc.godbolt.org. This is common enough that Godbolt is a verb with a lot of C++ programmers.
Thanks for the shitty attitude but it sounds more like you're looking at the disassembly of debug builds or don't understand how constexpr/consteval work.
> C++ is at least still pretty committed to the "you only pay for what you use" principle. As far as I know RTTI is the only real exception (corrections welcome), but even RTTI can be disabled in many compilers.
I had a program that had to parse a lot of integers like "12345". The std way to do it was an order of magnitude slower than writing my own simple parsing code. I have no idea what the std version was doing, but it was crazy to see my program's execution time (measured in hours) dominated by int parsing. The handwritten version eliminated that.
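The comment doesn't say which std facility was involved, but as one plausible illustration of the gap: stream-based parsing does locale and allocation work on every call, while a handwritten loop or C++17's std::from_chars does neither.

```cpp
#include <charconv>
#include <sstream>
#include <string>
#include <string_view>

int parse_stream(const std::string& s) {   // convenient, but comparatively heavyweight
    std::istringstream in(s);
    int v = 0;
    in >> v;
    return v;
}

int parse_fast(std::string_view s) {       // std::from_chars: no locale, no allocation
    int v = 0;
    std::from_chars(s.data(), s.data() + s.size(), v);
    return v;
}
```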
Even there, you can use a subset of C++ and get pretty good compile times. Worse than C, as you're still dragging around the full weight of a C++ compiler, but still.
The "only pay for what you use" thing is bullshit, because everything you want to use actually costs something. The point being that a better implementation would give you the same things without that cost...
"only pay for what you use" means that you do not pay for things you do not use. You are absolutely correct that everything has a cost, but that is not what this slogan is about. The "only" is load-bearing.
Boost has its uses and in many cases it's an improvement over the standard library, but performance isn't one of its key properties. Some parts of boost are about as fast as they can be, but that's the exception.
Even if somebody put such a thing, or two, in the standard library, there would again be someone showing that they managed to come up with a better solution. For their workload. And under constraints that are only valid for their use-case, constraints that only they would know. Not standard library developers. And that's the thing: the premise that a "one size fits all" solution even exists is wrong. I used to think that way as well, but generic solutions to high-performance algorithms do not really exist.
It was my understanding that one can easily add their own allocator to any STL container, though as you point out you have to write that allocator yourself.
If the allocator is unaware of your type, how do you initialize the underlying object? That is - how do you invoke the default or non-default constructor of type T over the memory region you just allocated?
That's not the allocator's job, is it? It's up to the allocator's user to call one of the several available varieties of placement new on the block of memory allocated.
It is a technical argument. Allocator's responsibility is to allocate memory. Constructing objects in that memory is not (and must not be) its responsibility.
> This would have been a point if that's the control you don't have with the allocator interface. But you do.
Well, std::allocator::construct() has been removed in C++20. You still have std::allocator_traits::construct but it just calls std::construct_at().
> you're not suggesting invoking placement-new oneself all over the place whenever one allocates the backing storage
Well, that's what all the standard containers do (vector, list, map, etc). Or you can use std::construct_at() instead but that effectively is a variant of the placement new. Of course, you'd better use std::make_unique()/make_shared() (which normally still devolves to placement new IIRC) instead of manipulating raw pointers and invoking constructors manually.
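For the curious, a minimal sketch of that split between allocation and construction, roughly what the containers do internally:

```cpp
#include <memory>
#include <string>

void demo() {
    std::allocator<std::string> alloc;
    std::string* p = alloc.allocate(1);   // raw memory only, no object yet
    std::construct_at(p, "hello");        // construct in place (C++20; placement new works too)
    // ... use *p ...
    std::destroy_at(p);                   // run the destructor
    alloc.deallocate(p, 1);               // give the memory back
}
```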
So the complexity is not really in writing the "C++ allocator" but in writing a sufficiently complex memory management logic that will actually make your application/algorithm run faster. And that only you can know given that you're familiar with the memory allocation patterns in your code. C++ allocator is only an interface that allows you to capture that logic and makes it feasible to apply through the code.
If you're in a position where you know you need a different allocator, and you have your own allocator, plugging it in should be relatively easy in comparison.
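As a rough sketch of how little the interface itself demands (the Arena/ArenaAlloc names are made up, and this skips thread safety, per-object freeing, and careful alignment; not production code):

```cpp
#include <cstddef>
#include <new>
#include <vector>

struct Arena {
    alignas(std::max_align_t) unsigned char buf[1 << 16];
    std::size_t used = 0;
};

template <class T>
struct ArenaAlloc {
    using value_type = T;
    Arena* arena;

    explicit ArenaAlloc(Arena* a) : arena(a) {}
    template <class U> ArenaAlloc(const ArenaAlloc<U>& other) : arena(other.arena) {}

    T* allocate(std::size_t n) {
        std::size_t bytes = n * sizeof(T);
        if (arena->used + bytes > sizeof(arena->buf)) throw std::bad_alloc{};
        T* p = reinterpret_cast<T*>(arena->buf + arena->used);
        arena->used += bytes;
        return p;
    }
    void deallocate(T*, std::size_t) noexcept {}   // bump allocators don't free individually
};

template <class T, class U>
bool operator==(const ArenaAlloc<T>& a, const ArenaAlloc<U>& b) { return a.arena == b.arena; }
template <class T, class U>
bool operator!=(const ArenaAlloc<T>& a, const ArenaAlloc<U>& b) { return !(a == b); }

int main() {
    Arena arena;
    ArenaAlloc<int> alloc(&arena);
    std::vector<int, ArenaAlloc<int>> v(alloc);
    v.push_back(42);   // element storage now comes out of the arena buffer
}
```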
You can but it's definitely not easy to do correctly.
There's also a lack of traits that can describe what's required of a specialized allocator. For example, std::map only needs to allocate 1 object at a time in practice while std::vector needs to allocate a contiguous count of objects.
> What is much less prevalent is a demand from average C++ users for memory safety features; they’re much more concerned about compilation speed. When most C++ developers haven’t adopted tools like Coverity and C++ core guidelines checkers, it is hard to claim that memory safety features substantially improve their lives at least from their point of view.
I don’t really agree with this. There’s also a group of developers that want more safety, but don’t feel like setting up (or, in the case of Coverity, purchasing) additional tooling. Some people just want a decent out-of-the-box experience.
I'll make a similar, but fundamentally different, claim to what you're quoting: There isn't much demand for half assed memory safety features. And C++ memory safety features are - nearly by definition - half assed. They're opt-in (meaning all the third-party and system code you link against doesn't have coverage), slow (meaning you can't use it in production), and tend to catch the trivial bugs rather than the hard-to-find ones that keep you up at night.
That sentence stood out to me as well. I don't think C++ developers are necessarily the best people to ask when discussing whether memory safety is important and urgent to work on or not. A lot of developers are relatively shielded from the consequences of bugs and security vulnerabilities, either by their programs not being exposed to the wild in any major sense or by bureaucracy being in between them and any consequences.
At some point, the deficiencies of a language becomes a concern for its end-users rather than its developers.
That actually seems unrealistic to me. Of course, people will always become better at doing the things they do, and we can always try harder, but many of these issues take time to consider and find during code reviews, and a few just slip by.
That's why tools like Coverity exist, but you have to spend time and money to set those up, meaning it's only done when absolutely necessary (or one dedicated person is really pushing for it).
Choosing a different language is basically free in the beginning, and it will impact which sort of bugs will be caught by default and which won't.
C++ is also massively hurt in this regard by not having a package manager. JS is a very risky language by default, but a few tools just with their default settings will already help massively.
I guess Nix could be considered the missing package manager for C and C++, but it's still niche and definitely not a "default" like pip, npm or cargo.
Yeah that stood out to me too. I guess the problem is that only a small percentage of developers both understand the need for memory safety and are vocal about it. That is very different from the number of users who would benefit from better memory safety.
When someone encounters a crash, their actionable item isn’t to go ask the C++ committee for better safety. That’s an unassailable wall for most developers, assuming they even understand what’s happening and that there are better options out there.
C++ Developers aren't complaining for two reasons:
1. Most of those who care about memory safety have already left the building.
2. Those who are still in the building and care don't speak up because their colleagues don't value it, and many of their colleagues will view them as having lower competence.
I think it's less about not wanting memory safety and more that compilation speeds are so time-wastingly abysmal that it's a few orders of magnitude more important to address.
The whole deal with interpreted languages was to avoid sitting there for a few minutes every time you make a god damn one letter change, with the unfortunate but usually acceptable trade-off of some execution speed.
I know I'm weird, but as long as I can compile individual units and link them to other already-compiled units then compilation speed is something I don't care about much at all.
yes, that's why I used the disclaimer "as long as I can compile individual units and link them".
The point about C++ templates is accurate, and is one of the several reasons why I avoid using templates in my own C++ code. I don't have that freedom in the code I write for my employer, though.
> In C++ there is no individual units.
Yes, there are. Those unit boundaries are often blurred by other C++ features, but they do exist.
There's a still a notion of the compilation unit, which is the cpp file on which you invoke the compiler. Due to templates being prevalent in C++, included files contain the implementation as well; so any changes you make there lead to re-compiling a lot more compilation units than, for example, in C.
That is kind of restating my point. There might be these translation units, but they're tightly coupled to all the other translation units, making them not very individual IMO.
And the additional trade-off that some bugs are only noticed at runtime, when that particular line is executed, while they could have been caught by the compiler of a statically typed language. Pytype helps but at this point you have a static analyzer that potentially runs as slow as a compiler, without the additional performance benefit.
There’s no good reason for type checking to be super slow. I’m no fan of Go, but the language compiles insanely fast while being fully statically typed.
As I understand it, C++’s slow compilation comes from the fact that it usually parses all of your header files n times instead of once. This isn’t a problem with static typing. It’s a problem with C++, and to a lesser extent C.
> As I understand it, C++’s slow compilation comes from the fact that it usually parses all of your header files n times instead of once.
That's one of the things that can slow compilation down but it's definitely not the only one. It helps that precompiled headers (and maybe modules?) can go a long way towards reducing and possibly eliminating these costs as well.
I think some (most?) of the larger remaining costs revolve around template instantiation, especially since it impacts link times as well due to the fact that the linker needs to do extra work to eliminate redundant instantiations.
> due to the fact that the linker needs to do extra work to eliminate redundant instantiations.
Yeah, I see this as another consequence of C++'s poor compilation model:
- Compilation is slow because a template class in your header file gets compiled N times (fewer with precompiled headers, maybe). The compiler produces N object files filled with redundant code.
- Then the linker is slow because it needs to parse all those object files, and filter out all the redundant code that you just wasted time generating.
I'm not sure I'd call the design "bad". At the very least it's a product of the design constraints, and I'm not sure there's an obviously better implementation without sacrificing something else. I think separate compilation and monomorphization are the biggest contributors, but I wouldn't be surprised if there was something I was forgetting.
Somewhat related, there was some work in Rust about sharing monomorphized generics across crates, but it appears it was not a universal win at the time[0]. I'm not sure if anything has changed since that point, unfortunately, or if something similar could be applied to C++ somehow.
> I'm not sure I'd call the design "bad". At the very least it's a product of the design constraints, and I'm not sure there's an obviously better implementation without sacrificing something else.
It was a product of the design constraints in the 70s when memory was expensive, and compilers couldn't store a whole program in memory during compilation.
The problem C++ has now is that the preprocessor operates on the raw text of a header file (which is a relic from C). This means the same header file can generate totally different source code each time it's included in your program. C++ can't change that behaviour without breaking backwards compatibility. So headers get parsed over and over again "just in case" - wasting time producing excess code that just gets stripped back out again by the linker.
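A contrived illustration of that (three hypothetical files shown in one listing, names made up):

```cpp
// config.h - what LOG means depends on macros defined *before* the header is
// included, so its raw text has to be reprocessed for every inclusion.
#ifdef ENABLE_LOGGING
#  define LOG(msg) log_to_file(msg)
#else
#  define LOG(msg) ((void)0)
#endif

// a.cpp
#define ENABLE_LOGGING
#include "config.h"   // here LOG(x) expands to a real function call

// b.cpp
#include "config.h"   // same header text, but here LOG(x) expands to nothing
```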
The way C++ works doesn't make any sense now that memory is so much cheaper. Go, C#, Java, Rust, Zig - basically every compiled language younger than C++ compiles faster than C++, because these languages don't contain C++'s design mistake.
Rust doesn't share monomorphized generics across crates, but at least each crate is compiled as a single compilation unit.
This was already the case with languages like Modula-2 and Object Pascal in the 1980's, C++ works that way because it was designed to be a drop-in in UNIX/C without additional requirements.
> As I understand it, C++’s slow compilation comes from the fact that it usually parses all of your header files n times instead of once.
Sort of. The primary issues are:
1) The C/C++ grammar is garbage.
Note that every single modern language has grammatical constructs so that you can figure out what is "type" and what is "name" without parsing the universe. "typedef" makes that damn near impossible in C without parsing the universe, and C++ takes that to a whole new level of special.
2) C++ monomorphization
You basically compile up your template for the universe of every type that works, and then you optimize down to the one you actually use. This means that you can wind up with M*N*O*P versions of a function of which you use only 1. That's a lot of extra work that simply gets thrown away.
The monomorphization seems to be the biggest compile time problem. It's why Rust struggles with compile times while something like Zig blazes through things--both of those have modern grammars that don't suck.
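A concrete example of point 1, the classic one: the same token sequence is a declaration or an expression depending purely on what a name was previously declared as, so the parser can't proceed without full knowledge of prior declarations.

```cpp
namespace as_type {
    typedef int x;
    void f() {
        x * y;      // declares y as a pointer to int...
        (void)y;
    }
}

namespace as_value {
    void f() {
        int x = 2, y = 3;
        x * y;      // ...while here the identical tokens multiply two ints
    }
}
```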
1. No, the grammar is not the issue per se. As you say, C has the same problem, and C code invariably compiles dozens of times faster than C++, and both Zig and Rust have modern grammars, but Zig compiles about as quickly as C and Rust is only somewhat faster than C++ (depending on features used).
2. This is incorrect. What's happening is that each template instantiation for a new set of template arguments requires reprocessing the template to check types and generate code, and also that is done per translation unit instead of per project. Each distinct template instantiation increases the compilation time a bit, much more than it takes to parse the use itself. That's why it's easy to have a small C++ source that takes several seconds to compile.
> 1. No, the grammar is not the issue per se. As you say, C has the same problem, and C code invariably compiles dozens of times faster than C++, and both Zig and Rust have modern grammars, but Zig compiles about as quickly as C and Rust is only somewhat faster than C++ (depending on features used).
Sorry, the C++ grammar is terrible. There are lots of things where C++ can't figure out whether something is a class or name or template or constructor call until looking way far down the chain.
However, you are the first person I think I have ever heard claim that Rust is faster than C++. Rust is notoriously slow to compile.
Zig generally compiles much faster than most C projects I've used. However, that is difficult to lay at the hands of C as a lot of those are the build system being obtuse.
When did I say otherwise? What I said was that it's not the main cause of C++'s long compilation times. The grammar causes other problems, such as making it more difficult to write parsers for IDEs, and creating things like the most vexing parse.
>However, you are the first person I think I have ever heard claim that Rust is faster than C++. Rust is notoriously slow to compile.
It's kind of a mixed bag. Given two projects of similar complexity, one in C++ and one in Rust, the one written in Rust will take longer to compile if organized as a single crate, because right now there's no way to parallelize compilation within a single crate. However, compiling the C++ version will definitely be the larger computational task, and would take longer if done in a single thread. Both contain Turing-complete meta-languages, so both can make compiling a fixed length source take an arbitrarily long time. Rust's type system is more complex, but I think C++'s templates win out on computational load. You're running a little dynamically typed script inside the compiler every time you instantiate a template.
(one of the reasons Go compiles fast is its compiler is really bare bones, comparatively speaking it does very little in the optimization area vs what you would see out of .NET and JVM implementations, not even mentioning GCC or LLVM)
Idk as long as you can develop at speed I don't see why a static analyser that's a few times slower than compiling couldn't run on the latest commit overnight? More as a sonarqube type thing I suppose.
Never mind the language itself, we need a way to pull compilers and project dependencies, pinned to their specific versions, with a single, ergonomic tool. vcpkg seemed really promising, but the fact that they didn't start with library versioning from the get-go was a very stupid decision: nowadays versions are pinned to specific commit hashes rather than actual dependency versions, and libraries that weren't previously versioned cannot be fetched if your project relies on an older pre-vcpkg version of a library and cannot be upgraded for one reason or another. These days I find myself struggling more and more with basic dev setup: getting an up-to-date compiler on any machine I want to build the code on, along with its libraries, without falling into a rabbit hole of compiler and library issues.
It's not always easy to use, but the focus on reproducible builds and caching is really nice.
Edit: I guess it doesn't fetch a specific compiler for you by default, but you could probably ship your toolchain somewhere and pin it in your WORKSPACE file.
In the olden days, we always checked the compiler, libraries, and essential build tools being used into version control along with the code. That way you could always be sure that you could compile the code.
This stopped working so well with Windows, where you usually can't just copy executables out of version control and run them (you have to run an installer instead), but it still works pretty well for the Unices.
Not in the sense of pinning versions or the APIs or the actual contents of the packages we rely on, so much as that every time we update a (rather small) set of dependencies there's nearly always some weirdness around vcpkg itself or the builds.
Just the last week we updated to a newer tag and building openssl failed setting up the nasm build dependency, claiming it already existed.
Which it did - there was a "nasm" folder in the tools directory it was trying to install into, presumably from the last time it was installed, and somehow vcpkg got its internal state messed up. But this caused a fatal error. I eventually worked around it by deleting the nasm directory from every build machine and letting it reinstall exactly the same package again.
But the time before, there was also a "random" build error claiming that xz wasn't installed, despite the log showing it had just been installed as a dependency on the line above. The "workaround" was to use a tag from a month or so earlier; I guess it has since been fixed upstream.
Perhaps I'm using it wrong; perhaps you should "always" completely blast away the global vcpkg folder (and any vcpkg stuff cached in build directories from its CMake integration) every time you touch it. But it's still time and effort for something that probably should be seamless.
vcpkg are forced to alter source packages because in the majority of cases upstream doesn't have working cross platform builds. They have weird broken CMakeLists.txt, incomplete compiler support, weird build incompatibilities with other libraries etc.
1. It uses formalism to try to deny that there is some competition for brain-share and number-of-users among programming languages, or rather around the communities around programming languages.
2. It ignores how C++ has, multiple times, semi-reinvented itself rather than "being C++", spurred by features, idioms or use patterns in other languages, and even managed to mostly "eat their lunch", for better or worse (e.g. the D language).
3. A call for "aiming for coherence", while ignoring how it sometimes contradicts other principles, such as: "What you don’t use, you don’t pay for" (the zero-overhead rule); and a failure to even argue for a balance between them.
4. "In committee, we frequently spend time on things that only a small number of people care about." <- but these are sometimes extremely important things, relevant to many, and in the future perhaps most programmers; and a small number of people care about them because most people aren't aware of their current or future importance. This is especially poignant where those things will only become important if the committee adopts them, and otherwise can be argued retroactively to never have been worthy of any discussion.
C++ isn't C++. It is every incarnation of the language, every language subset defined in a code standard (for a project or an organization), over decades.
Sure you can write virgin code in C++ and choose some "modern" incarnation of the language or some subset thereof. But there exists billions of lines of legacy that most of us have to deal with. Daily or occasionally. Usually without the option of rewriting.
Then there's the tooling. The horrific, antiquated build process which we try to automate with tools that just make things even more complex. The lack of a practical standard library that says "you know, those things you often have to do, we should actually offer those". It is neither helpful nor useful that C++ doesn't have room for HTTP, JSON, basic networking abstractions etc.
I would not recommend a beginner learn C++ today. I'd recommend Rust or Go. Most people are going to be more productive AND produce more robust code in fewer years. One can be annoyed by that statement, but you would have an uphill battle claiming that it is wrong.
I don't think C++ is worth the investment for a new programmer. Dealing with C++ just gets more complex the more changes we make to it and the more we evolve it. C++ isn't just one spec - it is the aggregate of all specs and practices that have existed because that's what you risk ending up working on in real life.
That's a lot of history to carry around.
From the perspective of the programmer I think it is a much better idea to put some energy behind a fresh start. And try to remember the mistakes that were made. Right now, for C++ developers, Rust looks like the closest thing to a fresh start. But it may be that Go is going to make a lot of C++ programmers a lot happier too since not all C++ code lives in constrained environments or actually has to deliver bare metal performance.
I don't think the C++ community even wants C++ to turn into something that is comparable to Rust or Go. Doing that to C++ would require throwing things out and creating something very different.
I like that this doc at least says that C++ is unergonomic. I'm always saying that a good way to learn language patterns and idioms is to look into standard library implementations.
And when you look into C++ libraries/stdlib, they often look like they're written in another language entirely. This is not normal.
> I'm always saying that a good way to learn language patterns and idioms is to look into standard library implementations.
Why would that be true? When you write a library, you are writing code to cover all possible uses; everything within the scope of your library should at least be considered, even if you personally have no need of that particular bit of functionality. But when you write a program it only has to do one thing, so of course it's going to be simpler. To me, it seems obvious that (good) library code will be very different from (good) application code.
(I've used libraries that were written like applications, but they were bad libraries; I was constantly fighting the fact that the library author wrote only for their own use-case, and didn't consider any other.)
>> I'm always saying that a good way to learn language patterns and idioms is to look into standard library implementations.
>
>Why would that be true?
This isn't always true, and can't be generally expected, but I think it is true in the case of Go's standard libraries. These are high quality, and are one of my top examples of good, real source code to study (along with the DOOM and Quake source code). A nice touch in Go's library documentation is that you can click any API element (type, function, etc.) to be brought directly to its source code.
Yes, there are differences between writing libraries and writing programs. But there is value in studying well-written source code, even when it has concerns and requirements that differ from yours. You can adapt what you learn to your needs.
I hear you. But I’ve also learned a lot about how to write idiomatic rust from scrolling through rust’s standard library. You’re right - because it’s written to support lots of programs, it sure is packed with a lot of functions I’ll probably never use. But it’s still quite beautiful and readable. Much more so than C++.
I'm not very familiar with rust, but isn't the point to mostly avoid `unsafe` and write safe code? Strange that basic std vec functions like insert and remove call `unsafe` - isn't this an example where you don't want to be like the std lib?
At the end of the day, rust is still a systems language. Lots of useful things require unsafe, including most data structure implementations and FFI - including syscalls.
I see rust's safety guarantees like having a good static type system. Static type systems don't claim to prevent all bugs, but they do catch an awful lot of bugs at compile time in practice. Rust's default safety with opt-out unsafe blocks work the same way. Despite what some zealots would have you believe, the point isn't to make every single line of code "safe". Good rust code still uses unsafe code - for example your binary includes unsafe code whenever you use Vec or Box from std. But all unsafe code blocks are explicitly called out as such, tested thoroughly (eg with miri) and usually constrained to a small part of your program. You can think about it as, C or C++ programs are 100% unsafe. Rust programs are usually only ~2% unsafe or so. That makes a huge difference in practice.
Unsafe code can also usually be encapsulated in safe wrappers. (Eg std::io::File wraps the unsafe call to open(), and std::Vec wraps some raw pointer operations). Unsafe also doesn't turn off the borrow checker. The only difference is unsafe blocks allow you to dereference pointers, call unsafe functions, and a few other things like that.
> Strange that basic std vec functions like insert and remove call `unsafe`
The standard library has more unsafe than most programs, because "data structures that need a lot of unsafe" has historically been an argument for being put in the standard library, because that way they'll be reviewed very carefully by experts.
I think you're right, and perhaps the underlying issue uncovered by OP is the lack of a single core set of features to achieve a sense of mastery with, the kind of feeling that understanding a stdlib implementation might have given.
(I will admit I've felt profoundly stupid when I go from "how hard could it be to implement std::optional/reference counted pointers/etc., anyway?" to, say, GNU's implementation of it in libstdc++.)
What new features giveth in terms of a feeling of security, having to work with large idiosyncratic codebases and a wide variety of toolchains taketh away.
> However, I don’t want to give the impression we should say no to all proposals. There are plenty of opportunities to improve our user’s lives through proposals. Here are a few concrete examples: [all 3 are just library additions that exist outside the standard already]
This may not have been the intent, but the way I read it, it's saying that language evolution is now pointless and only library additions can be considered. That would be a pretty dire situation for C++ overall.
Never mind that this means many language issues will never be fixed; it would also mean that implementations face the burden of yet more standard library additions, which quickly fall behind external libraries because of the added burden of ABI lock-in, now multiplied across several vendors versus just one third-party library.
I agree with everything written here - C++ is never going to be Rust, or any other language, but it can at least become better than it is now, in some common-sense ways.
In my view, C++'s biggest problem is its design-by-committee structure, leading to a lack of pragmatism - in other words, perfect is the enemy of good. Pragmatism is what you see when you look at the standard libraries of languages like Python and Java: are they perfect? Far from it. But they prioritize being at least good enough for a good amount of situations, over being perfect. Which ends up being far more useful for regular people. And being useful for regular people is what makes a programming language successful.
But design by committee means if your proposal isn't perfect, it will get shot down. So the language remains bad for everyone's use cases, rather than at least becoming somewhat good for some people's use cases.
C++ has major flaws that cannot be rectified without serious breaking changes. With that said, Herb has been experimenting with a new cpp frontend with sane defaults [1].
In my opinion, the world is on standby until Anders Hejlsberg feels like tackling a modern, next generation systems language.
Herb Sutter's proposal is absolutely the right direction to go to rein C++ back into being a sane, simpler language from the mess it has grown into. I doubt it'll ever happen though.
It seems that in the eyes of most of corporate America's management, C++ is a legacy language, and should be replaced with Java (which tbh makes sense in many cases given that Java's tooling and libraries are better than those of C++). Other companies are looking at Rust as a successor to C++.
Even if Herb's work was completed and implemented by major compilers, it's hard to see much uptake - who's going to approve porting legacy C++ code to a new modern C++, let alone approve C++ for new projects?
Maybe if there is a Sutter C++ successor, it should drop the "C++" name which carries legacy associations, and call itself something new. Present itself as C++ successor, even if it comes from a starting point of being 99% backwards compatible with modern (say C++11 or later) C++. Sounds superficial, but I bet it'd make a difference in perception!
I have seen that the biggest issue with C++ is that legacy C++ is quite hard to refactor to the new patterns. For example, I have been working on font substitution in LibreOffice, and I'd like to use a more functional style of programming, but I'm getting stuck because of an overuse of classes.
I've been reading Functional Programming in C++ by Ivan Cukic and I'd like to adopt this - it does require a lot of refactoring of this massive codebase.
Perhaps this is a massive, ancient codebase issue more than a C++ issue, though.
Not really. Look at the standard template library. It’s not really using classes with inheritance. It’s using non-member functions, often with iterators.
Ironically I’ve just learned, however, that lambdas are actually objects of anonymous class types! I’m currently up to the bit in the book I referred to before that explains how to do currying in C++.
The more I read, the more I think I’m fighting the code base rather than C++. I’m actually very curious how ranges work, something the book talks about later.
I think the great challenge with breaking changes is demonstrating how other systems have done breaking changes with great success. They are quite rare. And no, the python2->python3 migration is not one of them. Neither is ip4->ip6.
The important part about breaking backward compatibility is allowing forward compatibility. Java is good in that regard, for each breaking change, you could still create a library that worked for Java N and Java N+1.
It is possible to have a network supporting IPv4 and IPv6. I don't think the speed of migration is an issue. It is not a tremendous success but it is not a failure either.
In Python forward compatibility was often impossible. The migration benefit was low and the cost was high. Python is a great example of how not to break things. I am a little surprised that the whole language is still popular after such a failure of leadership.
It’s even worse. They happily introduce breaking changes for some things, but not others.
I have once wasted days of work updating a codebase to build with C++20, because the new standard suddenly changed the type of `u8` string literals (available for a decade already, introduced in C++11) from const char* into const char8_t*, and they made these types incompatible.
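A minimal sketch of the breakage, for code that stored u8 literals in char pointers:

```cpp
// Pre-C++20, this line compiled fine:
// const char* s = u8"hello";        // ill-formed since C++20 (u8"" is now const char8_t[])

// Under C++20 you either adopt the new type...
const char8_t* s20 = u8"hello";
// ...or, when a char-based API can't change, cast explicitly (a common stop-gap):
const char*    shim = reinterpret_cast<const char*>(u8"hello");
```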
I feel the C++ committee saw the major quality-of-life improvements from C++11 and convinced themselves that Python's ergonomics with templated C++'s speed was possible. I won't say it's impossible, but the backwards compatibility sure seems like too much weight.
Not the OP, but Carbon currently still doesn't have a working implementation. It is clearly a tool for Google to migrate away from C++ without rewriting the world; they are quite open that it isn't trying to be anything else, it is only an experiment, and it may even fail as an experiment.
Carbon gets talked about a lot by people who somehow miss the language's goals.
Honestly, the changes in C++ 20 and 23, plus the upcoming changes for 26 (looking at you, Concurrency TS v2) have so drastically improved the language it's almost like the ES5 -> ES6 evolution of JavaScript.
I might be in the minority here, but I genuinely enjoy writing modern C++.
My complaints are about its dependency management story and lack of integrated, standardized tools for things like dependencies, testing, logging, etc.
I don't know if you're in the minority or I am, but I find modern C++ to be borderline intolerable (and I've been programming mostly in C++ from before there were C++ compilers).
I used to love C++ (and still do if we're talking about older standards), but the new stuff is just a baroque torture.
I don't mind C++ evolving into something of its own beyond all recognition, but there is a distinct lack of a modern "C with Classes" language. Basically, C on steroids. There are attempts at that, but none is perfect and/or has enough traction to be viable.
You can still have "C with Classes" in modern C++ if you really want that. Just set a style guide that limits your code to that.
There's more viable coding styles in C++ than I can honestly even bother to count. There's absolutely no reason to limit the language to one specific coding style when doing so would alienate large groups of users and you can set those limits yourself on your project.
Yeah, I know and that's how I've been using it for the past decade, but it's not that simple. The language keeps changing and this forces adding cruft to the code if one wants to use newer compilers.
For example, there are cases where the existence of move semantics necessitates adding some boilerplate when working with STL containers. There's no functional reason for it, just something to please the compiler.
I’m excited for Zig for this reason. I’ve been writing a lot of rust lately and - well, it’s fine. Good at what it’s trying to do. But Zig seems a lot more fun. Much more in the spirit of C.
Anything C++11 forward is alright with me, though 17 introduces some very nice conveniences like <execution>, <filesystem> and most critically <optional>. Honorable mention to std::clamp too.
You're absolutely not in the minority, old C++ was awful and there have been so many improvements to the language and standard library to help write simpler, clearer code.
I was just recently appreciating the if statement with an initializer in C++17, which helps me express exactly what I want for variable scope in one line instead of multiple awkward ones.
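For anyone who hasn't used it, the pattern is roughly (made-up example):

```cpp
#include <map>
#include <string>

void demo(const std::map<std::string, int>& m) {
    // C++17 if-with-initializer: `it` exists only inside the if/else.
    if (auto it = m.find("key"); it != m.end()) {
        // use it->second
    }
    // `it` is out of scope here instead of lingering in the enclosing scope.
}
```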
Honestly, everyone working with C++ sticks to a select subset of the language that they've chosen for their project, or the project they're contributing to. Nobody knows all of C++. Personally, I don't mind C++ forking out in different directions and accepting diverse proposals; while I wouldn't bother to use them myself, I realize that it may be useful to other people.
C++ is an engineer's language, and it's ridiculous to imagine that we'd ever need a C++2.0 that cleans it up. Subjectively, you could say that some features are "ugly", but this is an evolutionary process, and there are bound to be vestigial features.
Yes, there are memory safety issues, but in practice, these are isolated in very few places. Take a compiler like LLVM for instance: most developers are working on transforms or analyses, and they're exposed to zero manual memory management. Sure, the Pass Manager needs to build passes, and the IR needs to be allocated, but that's about all the manual memory management there is.
Personally, I couldn't care less about standardized argv-parsing, as each project has its own set of complex requirements. There are JSON parsing libraries available for C++, and I don't see why it should be standardized. Faster hashing in std could be a low-priority feature, but projects like LLVM have their own optimized version of std data structures and algorithms.
I suppose a module system could be useful; most C++ projects are built with CMake which is already very good at finding and linking dependencies. Personally, my biggest pain point is compile-times, but that's really an LLVM/Clang problem.
Overall, the article doesn't seem to be written by someone who has a lot of experience with large C++ codebases.
> Yes, there are memory safety issues, but in practice, these are isolated in very few places. Take a compiler like LLVM for instance: most developers are working on transforms or analyses, and they're exposed to zero manual memory management.
It is really easy to get use-after-free in LLVM passes due to using eraseFromParent() instead of removeFromParent(), etc. As I recall, in some places the optimization code goes through really awkward patterns in order to keep track of the IR nodes that are dead to avoid UAF, none of which would be necessary if LLVM were written in a GC'd language. (Note: I'm not saying LLVM should be written in a GC'd language.)
It's a response to Rust. Rust is significantly better than C++, and most importantly it's the first language that could actually replace C++ for the things C++ is generally used for.
I guess the C++ community is coming to terms with not being the top dog of "zero-cost abstraction" languages anymore.
Really I think it's too late for C++. They had literal decades to fix very basic flaws, and they just haven't done it.
Accidental octal literals. Case fall-through. Missing `return`s. Accidental string literal addition. The whole module system mess.
They focused way too much on adding complex features and not at all on fixing footguns. As a result C++ is painful and dangerous. Too late to fix IMO.
TFA cites a much more satisfying (because technically substantial) distillation of "what C++ is", especially regarding technical goals and technical philosophy: the "Direction for ISO C++" document, 2022, by core members of the ISO C++ committee. [1]
Even without legislation, I could imagine that the US government could try to enact software procurement rules that favor or require memory-safe implementations.
I'm not sure this is best done by holding on to implementations of those ideas that have serious flaws. Yes, learning is slow, and most languages seem to start with a handful of ideas someone wants to realize and then ignoring a lot of the difficult stuff others took a long time to solve.
For whatever reason, the software industry never learns this lesson. Maybe because people would rather move on to something new entirely and new entrants are not aware of what they're losing at all.
> It is easy to see that C++ is fit as a general-purpose programming language–adoption by millions is a testament to that.
No, that is false. It is akin to arguing that Christianity must be true because 2.4 billion Christians can't be wrong. The fallacy is easy to see because the argument can be applied equally to the world's 1.9 billion Muslims and 1.2 billion Hindus and 500 million Buddhists, etc. And yet these groups hold mutually-exclusive positions and so at least N-1 of them must be wrong.
Humans are social animals. With only a few exceptions, we tend to conform to the group. That tends to make us get stuck in very deep ruts. "Everyone is doing it" is absolutely no indication that "it" is a good idea.
[UPDATE] A lot of people seem to be missing the point here, so I feel the need to clarify: I'm not saying C++ is analogous to a religion. I'm using the exclusivity of religious belief just as a short-cut to show that it is possible for large groups of people to hold false beliefs without getting into the weeds of which of those beliefs are actually false. The point is not that C++ is analogous to a religion, just that "adoption by millions" is not a valid argument for its merits.
[UPDATE2] I am also not saying that religion has no value beyond the truth of its objective claims. I am only saying that (some of) the world's major religions do make objective claims, some of those claims are mutually exclusive, and so some of them must be objectively false, and therefore it is manifestly true that large groups of people can hold objectively false beliefs, and therefore the fact that large groups of people hold a position is not evidence that that position is objectively true. Being "fit as a general purpose programming language" is an objective claim.
Yeah, probably should have left the religion analogy out. It wasn't essential to the point you were making and was based on a false model of religion.
There's actually a pretty complex matrix in religious disagreement with or without exclusivity. And not all religions are exclusive at all. Islam's view of other Abrahamic religions is complicated, but it is exclusive about non-Abrahamic religions. Christianity sees Judaism as having been correct, but incomplete. Hinduism isn't essentially exclusionary (particularly to other religions which evolved from old Vedic practices), but is often confused with modern Hindu nationalism, which is. And even within single "religions" there are complicated lines where e.g. some Christians consider themselves to be in "communion" with some Christians, but not others.
So, yeah, bad example. It's a terrible thing to have pulled out to try to show clear mutually exclusive groups.
The problem with trying to come up with examples of large numbers of people holding objectively false beliefs is that you have to look outside the realm of science. The whole point of science is that it provides a mechanism for resolving disagreements about objective truth objectively (i.e. experiment) and so you just don't get a lot of people holding objectively false beliefs, at least not for any length of time. Religion is all that's left.
Most of what humans talk about isn't science or religion, and to reduce the world to just that is pretty weird to say the least. Most things can't be resolved by appeals to objective truth. It sounds like you're working on a model where that's how one resolves conflicts, but literally most of the history of knowledge, humanity, whatever label you want to put on it -- isn't that. Even science isn't by any means that binary. Disagreements can last centuries. And we're even at a spot in science where a lot of the interesting stuff fumbles around for decades before we can even come up with experiments that could possibly test it; and some of it we won't ever be able to test. (A lot of cosmology isn't testable.)
But the point me and a few others were making is that you don't seem to know much about religion, so it's probably not a good thing to use in analogies. Religion is definitely not a set of neatly divided mutually exclusive beliefs. Almost all of the world's adherents come from two families of religions -- Abrahamic or Vedic -- and within those groups there's a whole lot of similarity and varying levels of theological exclusivity.
I feel like gambling or the stock market may be better examples. If one person takes a long position on a stock and another short, they have mutually exclusive beliefs about it, and one of them will be wrong.
It's easy to see that driving on the RIGHT side of the road is fit as a general mode of transportation. Adoption by millions is a testament to that.
It's easy to see that driving on the LEFT side of the road is fit as a general mode of transportation. Adoption by millions is a testament to that.
Both statements are true. The author didn't say only C++ is fit. Nor did he say a program written in BOTH C++ AND Python (i.e. a system of driving on both left and right sides of the road) is fit.
The counter-argument (C++ is NOT fit as a general purpose programming language) is invalidated by the millions of programmers who use it as such, in the same way that "thumbs are not fit for grasping" is invalidated by, well, grasping with your hands.
It doesn't mean pliers are not fit for grasping just because thumbs are fit for grasping.
You're conflating types of evidence and types of arguments.
No. I'm not saying that C++ is not fit as a general purpose programming language. It very well may be. All I'm saying is that "adoption by millions is a testament to [the fitness of C++]" is not a valid argument. If it were, "adoption by millions is a testament to truth of [objective religious claim X]" would be a valid argument, and it manifestly isn't because different religions make mutually exclusive objective claims.
"Fitness as a general purpose programming language" is (at least in part) an objective claim. If a million people professed to believe that, say, brainfuck was fit as a general purpose programming language that in and of itself would not make it so. This is not the case for being a cohesive religion. If a million people profess to believe some religious belief, that in and of itself is sufficient for that belief to be a cohesive religion.
> If a million people professed to believe that, say, brainfuck was fit as a general purpose programming language that in and of itself would not make it so.
No, but if millions of people actually did manage to use Brainfuck for general purpose programming, then that would be evidence that it really is fit as a general purpose programming language, even if it's not ideal.
Brainfuck isn't in that position, hence no one thinks it's fit, but C++ is. People do use it for general purpose programming, even if their program could be rewritten in a garbage-collected language.
IMO the reason C++ isn't a general purpose programming language is memory management. Many many many applications can be built without having to worry about the garbage collector, and the productivity gains of using a GC language are so worth it. And I know you can force C++ into acting like a GC language, but why go through the effort? C++ is a precision tool for building complex and performant systems, and that is nothing to be ashamed of, but it is not something you would use for web APIs, or a quick script, or UIs, or any quick and dirty project. I also feel like the Rust community is forcing the language into places where it shouldn't really be. But yeah - people can argue about what general purpose means to them, just my 2c.
GC is not required for memory safety. The proper use of GC nowadays is for dealing with problems that inherently involve spaghetti-like reference graphs for which no other memory management strategy is suitable. Using it as mere convenience might be okay for quick prototyping, but it ultimately leads to half-baked, hard-to-refactor code requiring a lot of CPU and memory overhead at runtime.
"Christianity is true" and "Islam is true" are mutually-contradictory positions. "C++ is fit as a general-purpose programming language" does not contradict "X other language is fit as a general-purpose programming language". So your logic in your first non-quote paragraph doesn't work.
[Edit: To respond to the actual point: For every claim that X is fit as a tool to do Y, the evidence that millions of people use X to do Y is in fact proof of the claim. If the claim had been "C++ is the best language for general purpose programming", then your argument would have merit. But that wasn't the claim.]
> "to show that it is possible for large groups of people to hold false beliefs"
The claim from the article is not about their beliefs, it is about their activity. If a million people say that one can live many different lifestyles from a tent but those people actually live in suburban houses, they can potentially all be wrong. If a million people actually do live in tents while living many different lifestyles, QED, it is demonstrated - they aren't holding false beliefs, full stop. It's not a belief anymore, it's a fact. The fact that they are doing it shows it can be done at all, and the large numbers show it's not an extreme claim that only one or two weirdo obsessives could contort themselves enough to use, it's general enough for millions to do over many lifestyles.
> "Everyone is doing it" is absolutely no indication that "it" is a good idea.
It is too; "When in Rome" is advice because whatever the Romans are doing, it isn't killing them, getting them into fights, getting them mugged, or annoying someone powerful. If you don't have any reason to do otherwise, eating what the Romans eat, drinking what they drink, and behaving how they behave is a far, far better starting point than almost any other. As a guest in someone's house, trying to behave how the homeowners behave is a good idea; 90+% of starting points will be worse, and most things you could try to eat will make you ill or kill you. Most of the world's thousands of programming languages are toys, niche domain systems, wildly outdated, or proprietary and out of business. The ones millions of people use? Pretty good idea to use one of those, unless you have very good reasons for doing otherwise.
> The claim from the article is not about their beliefs, it is about their activity.
No, the claim is about a property of a programming language. The activity is cited as evidence in support of the claim of the fitness of C++ as a general-purpose programming language.
The problem with C++ is that it is a legacy language, and so the fact that a zillion people use it today might be because it's a good language, but it might also be because it has so much institutional inertia behind it that this is enough to override the fact that it's a totally shit language. The Catholic Church has been around for 2000 years, but that doesn't necessarily mean that its factual claims have any merit. The Church's success might be because it is in communion with the truth, or it might be because it has so much institutional and societal inertia that it keeps chugging right along despite having no actual merit. The success of the Church might also be due to people subscribing to the logical fallacy that because a lot of people subscribe to it that it must have some merit, which after a while becomes a self-sustaining cycle. (Note that a self-sustaining cycle is different from a self-fulfilling prophecy because the latter actually becomes true if enough people subscribe to it.)
It's a programming language that has been used by lots of people in a lot of different contexts. The author of the article thinks that's a good definition of "general purpose programming language". The original author never said "N people can't be wrong"; that's the top comment's reframing. You can disagree about the "general purpose"-ness of C++, but I don't think that's an honest way to interpret the article author's argument.
I'm not saying it is. I'm using the exclusivity of religious belief just as a short-cut to show that it is possible for large groups of people to hold false beliefs without getting into the weeds of which of those beliefs are actually false. The point is not that C++ is analogous to a religion, just that "adoption by millions" is not a valid argument for its merits.
My argument applies equally well to that: the mere fact that a religion has large numbers of adherents does not in and of itself show that it is either good or useful. But that's a harder case to make because it turns on what is meant by "good" and "useful", and those are things about which reasonable people can disagree.
To be clear, "fitness as a general purpose programming language" is also something about which reasonable people can (and manifestly do) disagree. All I'm saying is that having large numbers of adherents is not a valid argument in favor of fitness any more than it is an argument in favor of goodness or usefulness. It's possible that all it shows is that a lot of people drank the kool-aid.
[UPDATE] It's also possible that most of the people using C++ think that it sucks, and they are all just using it because everyone else is using it.
A religion does not have to be true to serve a positive social function. Arguably, the persistence of the most ancient and widespread faiths suggests that they do serve such a function.
That depends on your criteria for fitness. There are millions of people using homeopathic remedies. That doesn't mean that homeopathy is actually fit for any of the tasks that people employ it for.
It's both and neither. Some countries are considering legislation to require "memory-safe" tooling for "critical" workloads, and C++ has never been placed on the "memory-safe tooling" list.
The best forecast I know of is Sean Parent's on ADSP episode 160.
[0:23:57] SP: We're also discussing internally around pending legislation around safety and security, what Adobe's response is going to be. Right now our thinking is we would like to publish a roadmap on how we're going to address that. That is not finalized yet in any form, but I expect a component of that roadmap is going to be that some of our critical components will get rewritten into Rust or another memory-safe language.
[0:24:28] CH: When you say "pending legislation", is that a nod to some pending legislation that you actually know is on the horizon? Or just anticipating that it's going to happen at some point?
[0:24:38] SP: Oh yeah, no. There are two bills (sorry I don't have...)
[0:24:44] CH: It's all right, we'll find them and link them in the show notes afterward.
[0:24:48] SP: Yeah, I can hunt down the links. The one in the U.S. that's pending basically says that within 270 days of the bill passing (and it's a funding bill, which means it will probably pass late this year or early next year) the Department of Defense will establish guidelines around safety and security, including memory safety, for software products purchased by the Department of Defense. The E.U. has similar wording in a bill that's slowly winding its way through their channels. I don't have insight into when that will pass. The U.S. one will almost certainly pass here within a month or two.
[0:25:43] CH: Oh. Wow.
[0:25:44] SP: There's a long way between having a bill pass and, almost a year later, having to establish a plan for what they're going to do, right. So it's not hard legislation in any way. But I view this-- I can send you a link. There was a podcast I listened to recently on macOS folklore. [...]

It's talking about how in the early '90s there was a somewhat similar round of legislation around POSIX compliance. Basically the Department of Defense decided that in order to have portable software, every operating system that they purchased had to have POSIX compliance, and there was a roadmap put into place. That's why Apple pursued building their own UNIX, which was A/UX, and eventually partnered with IBM to do AIX. And Microsoft in the same timeframe had a big push to get POSIX compliance into Windows. The thinking was that eventually, in order to sell your operating system to the government, it would require POSIX compliance. What actually happened is, if you wanted to buy the traditional Macintosh operating system, you would just say "well, I require Photoshop or pick-your-application, and there is no alternative that runs under UNIX, so therefore I need an exception" - and it was extra paperwork, but it got signed off on. So it really never materialized into hard restrictions on sales of non-POSIX-compliant OSes.

I expect the safety legislation to take somewhat the same route, which is: there will be pressure to write more software in memory-safe languages, and when you don't write software in memory-safe languages there is going to be more pressure for you to document what your process is to mitigate the risks. This is initially all in the realm of government sales, although there is some discussion in both the E.U. legislation and on the U.S. side of extending this to a consumer safety issue. But there will be an escape hatch, because you couldn't wave any kind of magic wand as a legislator and say "you can't sell software anymore if it's written in C++" - the world would grind to a halt. So there will be an escape hatch, and there will be pressure.

So as a company you have to look at how you are going to mitigate that risk going forward, what your plan is going to be so that you can continue to sell products to the government, and how you make sure that you're not opening up a competitive threat. If you've got a competitor that can say "well, we're written entirely in Rust so we don't have to do the paperwork", that becomes a faster path. So you want to make sure that you're aware of those issues and that you've got a plan in place to mitigate them.
Thank you. I have mostly heard people confusing the CISA stuff with "legislation," but this sounds like something that is actually legislation. I'll have to dig into it. Thank you.
Lack of modules support is the biggest thing keeping me from doing more greenfield C++. So much friction in maintaining a pointless interface<->implementation split.
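Roughly the friction I mean, and what C++20 modules promise instead (a sketch; the file names and add() function are made up, and module support still varies by compiler and build system):

    // Today: the same interface is spelled out twice (made-up names).
    //   math.h:    int add(int a, int b);
    //   math.cpp:  #include "math.h"
    //              int add(int a, int b) { return a + b; }
    //
    // With a C++20 module the split disappears -- one file, one definition:
    // math.cppm
    export module math;

    export int add(int a, int b) { return a + b; }

    // elsewhere:
    //   import math;
    //   int three = add(1, 2);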
I'm not well versed in C++ best practices, but unless it's a non-option for your situation, have you considered single file headers or perhaps even a unity build[1] approach? Of course, both are just workarounds for proper module support and have their drawbacks and limitations.
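For what it's worth, a unity build in its crudest form is just one translation unit that #includes the implementation files, trading incremental and parallel builds for fewer redundant header parses (a sketch with made-up file names):

    // unity.cpp -- the only file passed to the compiler, so every header is
    // parsed once and everything lands in a single translation unit.
    // (Sketch; file names are hypothetical.)
    #include "math.cpp"
    #include "network.cpp"
    #include "render.cpp"

    // Hypothetical build line:  g++ -O2 unity.cpp -o app
    // Trade-offs: name collisions between .cpp files, and no incremental
    // or parallel compilation.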
Yes! Proper modules support would give C++ a much-needed boost (no pun intended!). MSVC is kinda there, but it breaks here and there with IntelliSense.
This is the never-ending problem with programming languages.
If you make big breaking changes, the community suffers, even if the changes are very well intentioned. Look at the long tail of Python 2 vs 3 issues. If it happens once it's not so bad, but the older the community and the greater the size of its existing ecosystem, the worse that pain becomes. Rust, from what I've seen, understands this issue.