Hacker News

> A page shouldn't communicate with other instances; servers should.

That requires a server. The point of c2 is it's a wiki: you can edit the content just by browsing to the URL and clicking a button. Sure, self-hosted static HTML sites are a really good solution to lots of problems; editing wikis isn't one of them.

The alternative view is that visiting the page brings up your own personal server, allowing you to participate in the network; it just so happens that the server is written in JS and runs in a browser tab.

> The server can render the result of input from other instances once and display it to readers many times

This is exactly what decentralising a site is supposed to avoid: wiki is an experiment in collectively owned content; removing central points of contact/ownership is the next step.

> > With a JS renderer, the server side rendering can be thrown away.

> That's a bit like saying, 'with a single can of soda, you can get rid of running water!'

Maybe a better analogy is selling seeds rather than vegetables: you save yourself costs on transport, storage, refrigeration, etc. by offloading a bunch of cultivation work on to your customers. It's exactly the wrong approach if you're trying to run a supermarket. If you're an experimental botanist, and a few specialists keep asking you for produce, it's probably a good idea to save yourself time and money by empowering them with seeds.

> Replacing server-side rendering with client-side JavaScript is strictly worse.

It's worse, but not strictly so, for the reasons I've listed above.




> That requires a server. The point of c2 is it's a wiki: you can edit the content just by browsing to the URL and clicking a button.

If I want other people to see my edits, then they need a way to know that I made them, which requires some agreed-upon rendezvous point — i.e., a server.

> This is exactly what decentralising a site is supposed to avoid: wiki is an experiment in collectively owned content; removing central points of contact/ownership is the next step.

So support a federated system, in which one user's server contacts another. Without servers, how will I see your edits when your browser is offline, or simply no longer visiting the page? Your edits will have to live somewhere else — a server (whether it's your server, my server or c2.com's server is beside the point: it's a server).

> Maybe a better analogy is selling seeds rather than vegetables: you save yourself costs on transport, storage, refrigeration, etc. by offloading a bunch of cultivation work on to your customers. It's exactly the wrong approach if you're trying to run a supermarket. If you're an experimental botanist, and a few specialists keep asking you for produce, it's probably a good idea to save yourself time and money by empowering them with seeds.

I thought the whole point of Ward's Wiki was to be a neat place to discuss computer science & programming — in your analogy, to be the supermarket. Sure, it's a high-tech supermarket, conducting some really neat experiments.

If I lost my favourite supermarket because the owner decided to go into experimental botany and didn't bother to pay his supermarket rent, I'd feel similarly upset.

> > Replacing server-side rendering with client-side JavaScript is strictly worse.

> It's worse, but not strictly so, for the reasons I've listed above.

It's strictly worse if one has disabled JavaScript (as everyone who truly cares about privacy & security does): the site no longer works, and one gets no benefits at all.


> requires some agreed-upon rendezvous point — i.e., a server.

Or a P2P network with a DHT. This can be done right now with a dedicated client (BitTorrent, Bitcoin, Freenet, IPFS, etc.). There are existing browser plugins which will opportunistically use a P2P protocol instead of HTTP: e.g. with the IPFS Firefox extension installed and its "DNS lookup" option enabled, visiting chriswarbo.net should fetch the page via P2P. Projects like IPFS are currently experiments, but are aiming for browsers to eventually support (something like) them natively, alongside HTTP/HTTPS/FTP/etc.

Projects like WebTorrent are trying to implement this kind of thing in JS.

> Without servers, how will I see your edits when your browser is offline, or simply no longer visiting the page?

Again, distributed storage (DHT, etc.).

> I thought the whole point of Ward's Wiki was to be a neat place to discuss computer science & programming — in your analogy, to be the supermarket.

I've been told many times that my analogies are terrible ;) In this case, the supermarket represented some commercial Web site, with a clear separation between business and customer, where the business wants as much ownership and control as possible, and will go out of its way to keep customers happy (as long as it's profitable).

From what I can tell, Ward is doing this for the love of it. There is no profit to chase, so any project costs (like running a server) are a drain, and make it more likely to collapse. Removing those costs helps the project, even if it inconveniences visitors. Like the botanist, who wants to get on with their research rather than spending time growing produce for others.

Likewise, the visitors are contributors, not customers. They're not just after some product with as little transaction friction as possible (at least, the most valuable ones aren't; I assume most visitors just read something then leave). They're already investing their time into the project, so making things a little less convenient might be acceptable, if it means the project can stay afloat.

> It's strictly worse if

That's not how "strictly worse" works. If you claimed it's worse, I would emphatically agree (I hate single page JS "apps"!)

"Strictly worse" means that it is not better in any way; that the old version is a Pareto improvement over the new one. It's not. There are reasons one might choose to do this. They're not reasons a commercial Web site should choose (exactly the opposite, in fact; they're like the supermarket); they're not reasons a static informational site should choose (e.g. it might choose to host on IPFS, but shouldn't go down the JS route); and they shouldn't be chosen if identity/ownership is the goal (like IndieWeb, where self-hosted/managed servers make sense).

They do make sense if you want to throw a collaboration platform out into the world, with the only goal being to see what happens. That's what wiki was, so it makes sense.


Everything you write about a DHT is true, but … that's not what Ward's Wiki is doing.

It sounds like what it's doing is a very, very good fit for what the IWC guys are up to — and it could all be done without JavaScript!

> "Strictly worse" means that it is not better in any way; that the old version is a Pareto improvement over the new one.

Viewed in links, lynx or eww, the old version is a Pareto improvement over the new, because the new version is nothing but a blank page, while the old version was full of information.


> Viewed in links, lynx or eww, the old version is a Pareto improvement over the new

Again, if you're qualifying the statement, it's not Pareto. On transparency, water is a Pareto improvement over Coca-Cola. On growability, wood is a Pareto improvement over steel.

The old version was an improvement over the new one; the new version is worse. They're not "pareto" or "strict" though, and I'm interested to see what the next steps are, building on this new foundation.



