From the final code, this is called for every rendered post:
const post = await getPost(postId);
But...we should basically never be doing this. It's totally inefficient. Suppose this is making a network call to your Postgres database to get the post data: rendering N posts means making that network call N times. You are right back at the N+1 query problem.
Of course if you're using SQLite on a local disk then you're good. If you have some data loader middleware that batches and combines all these requests then you're good. But if you're just naively making these requests directly...then you're setting up your app for massive performance problems in the near future.
The known solution to the N+1 query problem is to bulk load all the data you need. If you need to render a list of posts, you bulk load all their data with a single query. Then you can pass the data directly into the rendering components: they don't load their own data, and the need for RSC is gone.
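A minimal sketch of the difference, with a stubbed-out `query` function standing in for a real Postgres round trip (the names and the counter are assumptions for illustration, not anyone's actual API):

```typescript
// Hypothetical sketch: per-item loading vs. bulk loading a list of posts.
// `query` simulates one network round trip to the database and counts them.

type Post = { id: number; title: string };

let queryCount = 0;
const table: Post[] = [
  { id: 1, title: "First" },
  { id: 2, title: "Second" },
  { id: 3, title: "Third" },
];

// Stub for a single database round trip.
function query(ids: number[]): Post[] {
  queryCount++;
  return table.filter((p) => ids.includes(p.id));
}

// N+1 style: each component loads its own post, one query per id.
function renderNaive(ids: number[]): string[] {
  return ids.map((id) => query([id])[0].title);
}

// Bulk style: one query up front, then data is passed down to components.
function renderBulk(ids: number[]): string[] {
  const posts = query(ids);
  const byId = new Map(posts.map((p) => [p.id, p]));
  return ids.map((id) => byId.get(id)!.title);
}
```

Rendering three posts costs three round trips in the naive version and one in the bulk version; the gap grows linearly with the list.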
I'm sure RSC is good for some narrow set of cases where the data loading efficiency problems are already taken care of, but that's definitely not most cases.
Wouldn't this be easy to fix by injecting a version number field into every JSON payload, and if the expected version doesn't match the received one, just forcing a redirect/reload?
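A rough sketch of that idea (the field names, the baked-in version string, and `checkVersion` are all made up for illustration):

```typescript
// Hypothetical version check: the server stamps every JSON payload with
// the build version it was deployed from; the client compares it against
// its own baked-in version and forces a reload on mismatch.

const CLIENT_VERSION = "abc123"; // assumed to be injected at build time

type Versioned<T> = { version: string; data: T };

// Returns the payload data if versions match, otherwise triggers the
// reload callback (e.g. () => window.location.reload() in a browser)
// and returns null.
function checkVersion<T>(payload: Versioned<T>, reload: () => void): T | null {
  if (payload.version !== CLIENT_VERSION) {
    reload();
    return null;
  }
  return payload.data;
}
```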
Forcing a reload is a regression compared to the "standard" method proposed at the start of the article. If you have a REST API that returns attributes of a model, and the client is responsible for the presentation of that model, then it is much easier to support outdated clients (perhaps outdated by weeks or months, in the case of mobile apps) without interruption, because their pre-existing logic continues to work.
It's arguable that it's a 'regression'...loading pages is the normal behaviour in a web browser. You can try to paper over that basic truth, but you can't abstract it away forever. Also, the original comment I replied to said it would be a 'big challenge', but if you accept that the web is the web and pages can sometimes load or even reload, then it's not really a 'challenge' at all.
Vercel's skew protection feature keeps old versions alive for a while and routes requests that come from an old client to that old version, with some API endpoints to forcibly kill old versions if need be, etc. I find it works reasonably well.
Your solution doesn't work perfectly. It works "perfectly" in the sense that your engineers won't see errors related to this situation, but not in the sense that your users have a good experience. For example, if a user fills out a long form and you then refresh their browser for them, wiping it all out, that is a crappy experience. Or you refresh their browser while their internet connection is bad, and then prevent them from using your app until the whole thing reloads.
Maybe that doesn’t matter for your use case or you’re willing to do a lot more legwork to prevent issues like that from occurring but there will always be tradeoffs.
If you force a reload before the rollout is complete, the user will still experience skew, because you haven't finished the rollout. The website will be completely unusable for a significant fraction of users; you might as well turn off the website during the rollout. This is the main concern of skew: how to keep the website usable at all times, for all users, across versions.
If your rollout times are very short then skew is not a big concern for you, because it will impact very few users. If it lasts hours, then you have to solve it.
After the rollout is complete, then reload is fine. It's a bit user hostile but they will reload into a usable state.
For most large scale apps (web or native) rollouts take multiple hours or even days. Ramps are slow to avoid widespread incidents and allow canary analysis to detect issues.
If you are at that scale, then surely you put up some load balancers which can look at the client request (e.g. a request header `Accept-Version: abc123` or whatever) and route it to a backend server that can handle that version.
If you're not at that scale, i.e. if your app deploys in a few seconds, then just forcing a reload seems like a perfectly feasible strategy.
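The routing idea can be sketched in a few lines (backend URLs, version strings, and the fallback rule are assumptions; a real setup would do this in the load balancer config rather than application code):

```typescript
// Hypothetical version-aware routing: pick a backend pool based on the
// client's Accept-Version header, falling back to the newest release
// when the header is missing or names an unknown/retired version.

const backends: Record<string, string> = {
  abc123: "http://backend-abc123.internal", // old release, kept alive
  def456: "http://backend-def456.internal", // current release
};
const latest = "def456"; // assumed current release

function route(headers: Record<string, string>): string {
  const v = headers["accept-version"];
  return backends[v ?? latest] ?? backends[latest];
}
```

This is also roughly the shape of what skew-protection features do for you: old versions stay deployed for a while, and requests carrying an old version identifier are routed to the matching backend.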
I skimmed over this and imho it would be better to cut like 30% of the exposition and split it up into a series of articles tackling each style separately. Just my 2c.
Hey, thanks for sharing your thoughts! I appreciate you putting this out there.
One bit of hopefully constructive feedback: your previous post ran about 60 printed pages; this one's closer to 40 (just using that as a rough proxy for reading time). I've only skimmed both for now, but I found it hard to pin down the main purpose or takeaway. An abstract-style opening and a clear conclusion would go a long way, like in academic papers. I think that makes dense material much more digestible.
I don't think I can compress it further. Generally speaking I'm counting on other people carrying useful things out of my posts and finding more concise formats for those.
From my perspective, the article seems primarily focused on promoting React Server Components, so you could mention that at the very top. If that's not the case, then a clearer outline of the article's objectives would help. In technical writing, it's generally better to make your argument explicit rather than leave it open to reader interpretation or save it for a "twist" at the end.
An outline doesn't have to be a compressed version, I think more like a map of the content, which tells me what to expect as I make progress through the article. You might consider using a structure like SCQA [1] or similar.
I appreciate the suggestions but that’s just not how I like to write. There’s plenty of people who do so you might find their writing more enjoyable. I’m hoping some of them will pick something useful in my writing too, which would help it reach a wider audience.
htmx boost functionality is an afterthought in the main use case it is marketed for (turning a traditional MPA into something that feels like a SPA), but it's actually super useful for the normal htmx use case of fetching partial updates and swapping them into a page.
If you do something like `<a href=/foo hx-get=/foo hx-target="#foo">XYZ</a>`, the intention is that it should work with or without JavaScript or htmx available. But the problem is that if you Ctrl-click or Cmd-click, htmx swallows the Ctrl/Cmd key and opens the link in the same tab instead of in a new tab!
But if you do `<a href=/foo hx-boost=true hx-target="#foo">XYZ</a>`, everything works as expected: left-click does the swap in the current tab, Ctrl/Cmd-click opens in a new tab, etc.
One more point: you're comparing htmx's boost, one feature out of many, to the entirety of Turbo? That seems like apples and oranges.