I wonder how they measured it... But in any case, one might argue that truly human writing will be (or already is) a competitive advantage, depending on the domain, of course; we don't always care about that.
On the VPS we use:
- 80 (standard HTTP)
- 443 (standard HTTPS)
- 22 (obviously, for standard SSH)
- 9090 (metrics, internal only, so I can get an idea of overall usage like reqs/s and active connections)
Client-Side: The -R 80:localhost:8080 Explained
The 80 in -R 80:localhost:8080 is not a real port on the server. It's a virtual bind port that tells the SSH client what port to "pretend" it's listening on.
No port conflicts - the server doesn't actually bind to port 80 per tunnel. Each tunnel gets an internal listener on 127.0.0.1:<random ephemeral port>. The 80 is just metadata passed in the SSH forwarded-tcpip channel. All public traffic comes through a single port, 443 (HTTPS), and is routed by subdomain.
So What Ports Are "Available" to Users?
Any port - because it doesn't matter! Users can specify any port in -R:
ssh -t -R 80:localhost:3000 proxy.tunnl.gg # Works
ssh -t -R 8080:localhost:3000 proxy.tunnl.gg # Also works
ssh -t -R 3000:localhost:3000 proxy.tunnl.gg # Also works
ssh -t -R 1:localhost:3000 proxy.tunnl.gg # Even this works!
The number is just passed to the SSH client so it knows which forwarded-tcpip requests to accept. The actual routing is done by subdomain, not port.
Why Use the Port 80 Convention?
It's just a convention - many SSH clients expect port 80 for HTTP forwarding. But functionally, any number works because:
- Server extracts the BindPort from the SSH request
- Stores it in the tunnel struct
- Sends it back in the forwarded-tcpip channel payload
- Client matches on it to forward to the correct local port

The "magic" is that all 1000 possible tunnels share the same public ports (22, 80, 443) and are differentiated by subdomain; a rough sketch of the payload round-trip is below.
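To make that round-trip concrete, here is a minimal sketch (not the actual tunnl.gg code; names and helpers are mine) of the two RFC 4254 payloads involved: the tcpip-forward request the client sends for -R, and the forwarded-tcpip channel-open payload the server answers with.

    // Minimal sketch, not the real server: how the "80" travels through the
    // SSH protocol (RFC 4254). All integers are big-endian on the wire.
    #include <cstdint>
    #include <string>
    #include <vector>

    static uint32_t read_u32(const uint8_t* p) {
        return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16) |
               (uint32_t(p[2]) << 8)  |  uint32_t(p[3]);
    }

    static void write_u32(std::vector<uint8_t>& out, uint32_t v) {
        out.push_back(uint8_t(v >> 24)); out.push_back(uint8_t(v >> 16));
        out.push_back(uint8_t(v >> 8));  out.push_back(uint8_t(v));
    }

    static void write_string(std::vector<uint8_t>& out, const std::string& s) {
        write_u32(out, uint32_t(s.size()));
        out.insert(out.end(), s.begin(), s.end());
    }

    // "tcpip-forward" global request payload: string bind_address, uint32 bind_port.
    // The server only records bind_port (the "80"); it never listens on it.
    struct ForwardRequest { std::string bind_address; uint32_t bind_port; };

    static ForwardRequest parse_tcpip_forward(const std::vector<uint8_t>& payload) {
        uint32_t len = read_u32(payload.data());
        ForwardRequest req;
        req.bind_address.assign(payload.begin() + 4, payload.begin() + 4 + len);
        req.bind_port = read_u32(payload.data() + 4 + len);
        return req;
    }

    // "forwarded-tcpip" channel open: the stored bind_address/bind_port are echoed
    // back, which is how the client knows which -R mapping a connection belongs to.
    static std::vector<uint8_t> build_forwarded_tcpip(const ForwardRequest& req,
                                                      const std::string& origin_ip,
                                                      uint32_t origin_port) {
        std::vector<uint8_t> out;
        write_string(out, req.bind_address);
        write_u32(out, req.bind_port);   // the "80" (or 8080, or 1) comes back here
        write_string(out, origin_ip);
        write_u32(out, origin_port);
        return out;
    }

The public-facing routing never looks at bind_port: the HTTPS listener picks the tunnel by subdomain and then opens a forwarded-tcpip channel carrying the stored values.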
"The dream of the widespread, ubiquitous internet came true, and there were very few fatalities. Some businesses died, but it was more glacial than volcanic in time scale. When ubiquitous online services became commonplace it just felt mundane. It didn’t feel forced. It was the opposite of the dot com boom just five years later: the internet is here and we’re here to build a solid business within it in contrast with we should put this solid business on the internet somehow, because it’s coming."
True; but I also think that the nature of software development is different from traditional engineering: we are not constrained by the physical world; there are more options, ambiguity, and creativity involved.
Engineering practices are written in blood. No one bleeds from software bugs (excluding things like the Therac-25, which is classified as a medical device), so programmers will never be engineers.
There are some things that rely on their software for safety (the space shuttle, perhaps cars, ...). For these cases, I really, really hope they're being coded in a way that looks like engineering, not like vibes.
Probably nobody dies if/when hearing aids crash. But then, I've not had to reboot any of them in the 20+ years I've had them.
In most cases you're right, but then there actually is life- and safety-critical software; so I guess it depends, and that's also one of the potential sources of ambiguity.
AI slop at its peak. It was both a bad and a good day for open source: good that this PR was rejected, but bad that something like this happened at all, and that the author's justifications of the AI choices and reasoning were really, really poor.
Thanks! The Unique Fingerprint ID is basically a hash of all the collected fingerprint fields. [1]
The Canvas Deep Fingerprint Hash is higher entropy and includes baseline shapes, emoji rendering, winding rules etc. [2]. It’s meant to capture subtle rendering differences between systems.
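For illustration only (this is not their implementation, just the general shape of "hash all the collected fields"; the hash choice and the field separator are my assumptions):

    // Conceptual sketch: a fingerprint ID as a stable hash over the concatenated
    // collected fields. FNV-1a is used here purely for brevity.
    #include <cstdint>
    #include <string>
    #include <vector>

    static uint64_t fnv1a(const std::string& s) {
        uint64_t h = 14695981039346656037ull;                          // FNV-1a offset basis
        for (unsigned char c : s) { h ^= c; h *= 1099511628211ull; }   // FNV-1a prime
        return h;
    }

    static uint64_t fingerprint_id(const std::vector<std::string>& fields) {
        std::string joined;
        for (const auto& f : fields) { joined += f; joined += '\x1f'; } // unit separator between fields
        return fnv1a(joined);
    }
    // The "deep" canvas hash is the same idea, just fed with more raw rendering
    // data (baseline shapes, emoji glyphs, winding-rule output), hence more entropy.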
The issue with safer C++ and modern C++ is the mirror image of the problem with migrating a code base from C++ to Rust: there is just so much unmodern and unsafe C++ out there. Mixing modern C++ into older codebases leaves uncertain assumptions everywhere, and sometimes awkward interop with the old C++. If there were a c++23{} block that let the compiler know that only modern C++ and libc++ existed inside it, it would make a huge difference by making those boundaries clear, and you could document the assumptions at that boundary and then migrate over time. The optimizer would have an advantage in that code too. But they don't want to do that. The least they could do is settle on a standard C++ ABI to make interop with newer languages easier, but they don't want to do that either. They have us trapped with sunk cost on some giant projects. Or they think they do. The big players are still migrating to Rust slowly, but steadily.
I'm wondering if the C++ -> Rust converters out there are part of the Solution: After converting C++ to Rust, then convert Rust to C++ and you now have clean code which can continue to use all the familiar tooling.
> I'm wondering if the C++ -> Rust converters out there are part of the Solution
Are there C++-to-Rust converters? There are definitely C-to-Rust converters, but I haven't heard of anyone attempting to tackle C++.
> After converting C++ to Rust, then convert Rust to C++ and you now have clean code which can continue to use all the familiar tooling.
This only works if a hypothetical C++ to Rust converter converts arbitrary C++ to safe Rust. C++ to unsafe Rust already seems like a huge amount of work, if it's even possible in the first place; further converting to safe Rust while preserving the semantics of the original C++ program seems even more pie in the sky.
I'm not questioning what you do once you get to that hypothetical Rustic++. I'm questioning whether it's possible to get there using automated tooling in the first place.
> There is just so much unmodern and unsafe c++ out there. Mixing modern c++ into older codebases leaves uncertain assumptions everywhere and sometimes awkward interop with the old c++
Your complaint doesn’t look valid to me: the feature in the article is implemented with compiler macros that work with old and new code without changes.
And not everywhere, as there are many industrial scenarios where Rust either doesn't have an answer yet, or is still taking its first baby steps in tooling and ecosystem support.
I’m not really sure how checks like this can rival Rust. Rust does an awful lot of checks at compile time, sometimes even to the point of forcing the developer to restructure their code or add special annotations just to help the compiler prove safety. You can’t trivially reproduce all those guardrails at runtime, certainly not without a large performance hit. Even a debug-mode libstdc++, with all checks enabled, still doesn’t protect against many bugs the Rust compiler can find and prevent.
I’m all for C++ making these changes. For a lot of people, adding a bit of safety to the language they’re going to use anyway is a big win. But in general, guarding against threading bugs, use-after-free, or a lot of more obscure memory issues requires either expensive, GC-like runtime checks (Fil-C has 0.5x-4x performance overhead and a large memory overhead) or compile-time checks. And C++ will never get Rust’s extensive compile-time checks.
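As a concrete illustration of the kind of bug meant here (my example, not from the article): the snippet below compiles cleanly, would pass any bounds check, and is still undefined behaviour, while the equivalent Rust (holding a reference to an element across a push) is rejected at compile time by the borrow checker.

    // A lifetime bug that runtime bounds checking does not catch: 'first' dangles
    // once push_back reallocates the vector's buffer.
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        int& first = v[0];           // reference into v's current buffer
        v.push_back(4);              // may reallocate; 'first' is then dangling
        std::printf("%d\n", first);  // undefined behaviour: read through a dangling reference
        return 0;
    }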
It could have gotten them, had the Safe C++ proposal not been shot down by the profiles folks, those profiles that are still vapourware as C++26 gets finalised.
Google just gave a talk at LLVM US 2025 on the state of the Clang lifetime analyser; the TL;DW is that we're still quite far from the profiles dream.
They rival Rust in the same way that Golang and Zig do: they handle more and more memory-safety bugs, to the point that Rust's remaining memory-safety benefits no longer justify the cost of using Rust.
Zig does not approach, and does not claim to approach, Rust's level of safety. These are completely different ballparks, and Zig would have to pivot monumentally for that to happen. Zig's selling point is about being a lean-and-mean C competitor, not a Rust replacement.
Golang is a different thing altogether (garbage collected), but it still somehow managed to have safety issues.
The Rust borrow checker rules out a lot of patterns that typical C++ code uses. So even if C++ got similar rules, they still could not be applied to most of the existing code in any case.
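For example (my sketch, not from the article): the function below is ordinary, correct C++, yet it has exactly the shape a borrow checker rejects, namely two live mutable references into the same container; the Rust equivalent needs split_at_mut, Vec::swap, or plain indices instead.

    // Everyday C++ that borrow-checker rules would forbid as written:
    // two simultaneous mutable references into the same vector.
    #include <utility>
    #include <vector>

    void swap_ends(std::vector<int>& v) {
        if (v.size() < 2) return;
        int& a = v.front();   // first mutable alias into v
        int& b = v.back();    // second mutable alias into v; fine in C++, but
        std::swap(a, b);      // Rust's "one &mut XOR many &" rule rejects this shape
    }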
I see that C++26 has some incredibly obscure changes in the behavior of certain program constructs, but this does not mean that these changes are improvements.
Just reviewing the actual hardening of the standard library: it looks like in C++26 an implementation may be considered "hardened", in which case, if certain preconditions don't hold, a contract violation triggers an assertion, which in turn invokes a contract-violation handler, which may or may not result in a predictable outcome depending on which of the four possible "evaluation semantics" is in effect.
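Concretely, a hardened precondition under the new contracts machinery looks roughly like the sketch below (P2900-style syntax, my illustration, not the standard library's actual wording; compiler support is still in flux):

    // Rough sketch of a hardened-library precondition expressed as a C++26
    // contract assertion. What happens on violation depends on the evaluation
    // semantic selected for the build: ignore, observe, enforce, or quick_enforce.
    #include <cstddef>

    template <class T>
    class span_like {
        T* data_ = nullptr;
        std::size_t size_ = 0;
    public:
        span_like(T* d, std::size_t n) : data_(d), size_(n) {}

        // Precondition: index must be in range. Under "enforce" a violation invokes
        // the contract-violation handler and terminates; under "observe" it is
        // reported and execution continues past the broken precondition.
        T& operator[](std::size_t i) pre(i < size_) { return data_[i]; }
    };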
Oh, and get this... if two different translation units have different evaluation semantics, a situation known as "mixed mode", then you're shit out of luck with respect to any safety guarantees. As per this document [1], mixed-mode applications shall choose arbitrarily among the set of evaluation semantics, and as it turns out the standard library treats one of those semantics (observe) as undefined behavior. So unless you can get all third-party dependencies to use the same evaluation semantic, you have no way to ensure that your application is actually hardened.
So is C++26 adding changes? Yes, it's adding changes. Are these changes actual improvements? It's way too early to tell, but I do know one thing... it's not at all uncommon for C++ to introduce new features that substitute one set of problems for a new set of problems. There's literally a 300-page book that goes over 20 distinct forms of initializing an object [2], and many of these forms exist to plug problems introduced by previous forms of initialization! For all we know, the same thing might be happening here, where the classical "naive" undefined behavior is being alleviated but, in the process, C++ is introducing an entirely new class of incredibly difficult-to-diagnose issues. And lest you think I'm just spreading FUD, consider this quote from a paper titled "C++26 Contracts are not a good fit for standard library hardening" [3], submitted to the C++ committee regarding this upcoming change, which argues that it risks giving nothing more than the illusion of safety:
>This can result in violations of hardened preconditions being undefined behaviour, rather than guaranteed to be diagnosed, which defeats the purpose of using a hardened implementation.
I believe there were some changes in the November C++ committee meeting that (ostensibly) alleviate some of the contracts/hardening issues. In particular:
- P3878 [0] was adopted, so the standard now forbids "observe" semantics for hardened precondition violations. To be fair, the paper doesn't explicitly say how this change interacts with mixed mode contract semantics, and I'm not familiar enough with what's going on to fill in the gaps myself.
- It appears there is interest in adopting one of the changes proposed in D3911 [1], which introduces a way to mark contracts non-ignorable (example syntax is `pre!()` for non-ignorable vs. the current `pre()` for ignorable). A more concrete proposal will be discussed in the winter meeting, so this particular bit isn't set in stone yet.
Mixed mode is about the same function being compiled with different evaluation semantics in different TUs, and it is legit. The only case they are wondering about is how to deal with inline functions, and they suggest ABI extensions to support it at link time. None of what you said is an issue.
> The possibility to have a well-formed program in which the same function was compiled with different evaluation semantics in different translation units (colloquially called “mixed mode”) raises the question of which evaluation semantic will apply when that function is inline but is not actually inlined by the compiler and is then invoked. The answer is simply that we will get one of the evaluation semantics with which we compiled.
> For use cases where users require strong guarantees about the evaluation semantics that will apply to inline functions, compiler vendors can add the appropriate information about the evaluation semantic as an ABI extension so that link-time scripts can select a preferred inline definition of the function based on the configuration of those definitions.
The entirety of the STL is inline, so it's always compiled in every single translation unit, including the translation units of third-party dependencies.
Also, it's not just me saying this; it's literally the authors of the MSVC standard library and the GCC standard library pointing out these issues [1].
Legit as in allowed, and not an issue as you're trying to convey, OK? I read the paper, if that wasn't already obvious from my comment. What you said is factually incorrect.
Not sure I understand what point you're trying to dispute. It's not obvious at all that you read either my post or the paper I posted, authored by the main contributors to MSVC and GCC, about the issues mixed-mode applications present to the implementation of the standard library, given that you haven't presented any defense of your position that addresses those issues. You seem to think that just declaring something "legit" and retorting "you are incorrect" is sufficient justification.
If this is the extent of your understanding, it's a fairly good indication that you do not have sufficient background on this topic and may be expressing a very strong opinion out of ignorance. It's not at all uncommon that those with the most superficial understanding of a subject express the strongest views about it [1].
Doing a cursory review of some of your recent posts, it looks like this is a common habit of yours.
I have literally copy-pasted the fragments from the paper you're referring to which invalidate your points. How is that not obvious? Did you read the paper yourself, or are you just holding strong opinions, as you usually do whenever there's an opportunity to lash out against C++? I'm glad you're familiar with the Dunning-Kruger effect; that means there is some hope for you.
The problem is that violation of preconditions being UB in a hardened implementation sort of defeats the purpose of using the hardened implementation in the first place!
This was acknowledged as a bug [0] and fixed in the draft C++26 standard pretty recently.
> The proposal simply included a provision to turn off hardening, nothing else.
(Guessing "the proposal" refers to the hardening proposal?)
I don't think that is correct, since the authors of the hardening proposal agreed that allowing UB for hardened precondition violations was a mistake and that P3878 is a bug fix to their proposal. Presumably the intended way to turn off hardening would be to just... not enable the hardened implementation in the first place?
Using #ifndef NDEBUG in templates is one of the leading causes of one-definition rule violations.
At least traditionally, it was common not to mix debug builds with optimized builds across dependencies, but now, with contracts introducing yet another orthogonal configuration axis, it will be that much harder to ensure that all dependencies use the same evaluation semantic.
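A minimal example of that ODR trap (mine, for illustration):

    // The template's body depends on NDEBUG, so a TU built with -DNDEBUG and one
    // built without it instantiate two different definitions of the same
    // specialization: an ODR violation (ill-formed, no diagnostic required).
    // In practice the linker silently keeps whichever copy it sees first.
    #include <cassert>

    template <class T>
    T checked_div(T a, T b) {
    #ifndef NDEBUG
        assert(b != T{});   // present only in the non-NDEBUG translation units
    #endif
        return a / b;
    }

    // tu_debug.cpp   (compiled without -DNDEBUG) instantiates checked_div<int>
    // tu_release.cpp (compiled with    -DNDEBUG) instantiates checked_div<int>
    // Which body ends up in the binary depends on link order, not on intent.

Swap NDEBUG for a per-TU contract evaluation semantic and you get the same hazard along a new axis.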
Writing is such a powerful and often underrated and underutilized tool; I don't think it's an overstatement to say that it's on par with fire and should be in the top 5 of humanity's all-time inventions/discoveries.