I think it’s just fantastic that the Ladybird browser is close to being usable. I was under the impression this was going to take many years before it became competitive.
While I haven't tried it myself, I've seen a few of the monthly summary videos. Passing the tests and being fast enough for daily use are two very different things, and right now Ladybird doesn't appear to be all that speedy.
Still an amazing feat of development from the entire team.
Why are the tests so disconnected from the usability? My assumption is that the tests are closer to unit tests, while browsing a page is essentially an E2E test, and if anything in the pipeline goes wrong (especially given the complex JS we use everywhere) the result is essentially useless.
There's not a linear relationship between the tests and usability. There are many tests for various character encodings, but viewing a web page means you're only "using" one of them, for example.
As such, a 90% test pass rate but low usability simply means that the remaining 10% of the tests cover a lot of very visible usability features that Ladybird hasn't addressed yet.
Even if they passed 100% of the tests, it's still possible they'd be too slow for practical, everyday use. Speed is not tested in these, only compatibility.
Web platform tests are closer to unit tests than to integration tests or smoke tests. Many of them are also very hard to write and to check for correctness, since there are tens of thousands of lines of specs and thousands of web APIs.
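To make the unit-test comparison concrete, here is a minimal, hypothetical sketch of what a web-platform-tests style case tends to look like (my illustration, not copied from the real suite); actual tests live in .html files that pull in the shared test harness:

    // Hypothetical WPT-style test, for illustration only. It would sit in an
    // .html page that loads /resources/testharness.js and
    // /resources/testharnessreport.js, then run in the browser under test.
    test(() => {
      // One narrow assertion about one corner of the DOM spec; an engine can
      // pass thousands of checks like this and still be slow on real pages.
      const el = document.createElement("div");
      assert_equals(el.tagName, "DIV");
    }, "createElement('div') reports an uppercase tagName");

Each file exercises one sliver of one spec, which is part of why pass rates and day-to-day usability can diverge so much.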
Three years ago I was very skeptical of Ladybird. But two things have changed. First, they have funding for 8 full time engineers, which I definitely wasn’t expecting. Second, it’s been three years. So given that, I am more optimistic.
There’s still a very long way before they can compete with Chrome, of course. And I’m not sure I ever understood the value proposition compared to forking an existing engine.
The value proposition is not having vendor lock-in and not having WebKit/Blink be the de facto behaviour. For example, the Ladybird team have found and raised spec issues across the various specs.
Another example is around ad blockers -- if Blink is the only option, they can make it hard for ad blockers to function whereas having other engines allows different choices to be made.
> The value proposition is not having vendor lock-in
There is, by definition, no vendor lock-in when you fork an open-source engine. The worst case is the original maintainers going evil tomorrow and leaving you on your own, which is no worse than starting from scratch, except you've saved yourself some ten million odd lines of mindless spec implementation in the case of a browser.
I’m not an expert in this field, but I don’t think I agree. The problem with a browser monopoly is that the monopolist does not have to obey specs — it can just do whatever it wants and force the specs to follow.
If you fork that monopolist’s engine, you’re not making any immediate difference to the market. You’ll adopt all their existing behavior, whether or not it conforms to spec (and I would guess you would continue to pull in many of their changes down the road).
A brand new implementation is much more difficult, but if it works it’s much more meaningful in preventing a monopoly.
The issue is the maintenance and development burden. For example, when Manifest V2 was dropped in favour of Manifest V3, it remained possible for a downstream project (Edge, etc.) to maintain V2 support. However, that gets harder the further the projects go and the more the code diverges; it may mean keeping more code around (if interfaces or utility classes are changed or removed), or rewriting the support if the logic changes (such as the network stack). A rough sketch of that divergence follows at the end of this comment.
It's like projects trying to keep Firefox XUL alive, or GTK+ 2 or 3.
The project then moves from just updating an external dependency to actively maintaining that code and possibly fighting against the tide. That is a lot harder and requires more work with every dependency update.
So in effect you have vendor lock-in. And if the vendor controls or influences downstream products like plugin developers (targeting Manifest V3) or application developers (targeting GTK+ 3 or 4), then it's even harder to maintain support for the other functionality.
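To give a concrete feel for that divergence, here is a rough sketch of my own (not code from any actual browser or extension): the Manifest V2 blocking webRequest model and the Manifest V3 declarativeNetRequest model don't even share a shape, so a fork that keeps both alive carries two separate code paths indefinitely.

    // Manifest V2 era: the extension blocks requests imperatively in its own code.
    chrome.webRequest.onBeforeRequest.addListener(
      (details) => ({ cancel: details.url.includes("ads.example") }),
      { urls: ["<all_urls>"] },
      ["blocking"]
    );

    // Manifest V3 era: the extension declares rules up front and the browser
    // applies them itself; the extension never sees the individual requests.
    chrome.declarativeNetRequest.updateDynamicRules({
      removeRuleIds: [1],
      addRules: [{
        id: 1,
        priority: 1,
        action: { type: "block" },
        condition: { urlFilter: "ads.example" }
      }]
    });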
That’s certainly an advantage, but I’m not sure that’s the value proposition.
It’s that Chrome’s and V8’s implementations have grown to match their resourcing. You probably can’t maintain a fork of their engine long-term without Google-level funding.
I'll guess that the remaining 10% will take more than another 90% of the effort, and also that it will keep growing as time goes on. Web standards are becoming more complex every day.
This is one huge blind spot in the web spec process, in my opinion. Any new spec is considered in the context of existing browsers, and very little consideration seems to be given to the scope of the web standards as a whole.
--- start quote ---
Saying 'no' is the key to good software design, but in standards you can only 'champion' proposals — you can't champion the _lack_ of a proposal. The best you can hope for is inertia.
In my experience the only feedback that is welcome is around the details of an idea, never around whether the idea has merit in the first place, and you should expect to be reminded that implementers are the only people whose opinions actually matter.
--- end quote ---
and someone else in the same conversation:
--- start quote ---
You can't practically anti-champion standards that are small improvements to features that ought to have been abandoned, like Shadow DOM. Shadow DOM sucks, but it sucked a little less when they added CSS Module Scripts, Selection.getComposedRanges(), ElementInternals.shadowRoot…
--- end quote ---
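To ground that with an illustrative sketch of my own (not from the quoted thread): even the happy path of Shadow DOM takes a fair bit of ceremony, and additions like constructable stylesheets (which CSS Module Scripts hand you when you import a .css file) only shave a little of it off.

    // Illustrative only: the usual Shadow DOM ceremony for one styled component.
    class FancyCard extends HTMLElement {
      constructor() {
        super();
        const root = this.attachShadow({ mode: "open" });
        // A shared constructable stylesheet, instead of re-parsing an inline
        // <style> block in every shadow root.
        const sheet = new CSSStyleSheet();
        sheet.replaceSync("p { color: rebeccapurple; }");
        root.adoptedStyleSheets = [sheet];
        root.innerHTML = "<p><slot></slot></p>";
      }
    }
    customElements.define("fancy-card", FancyCard);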
Perhaps there should be levels of conformance, and important businesses and government platforms should be required to work on all browsers that support at least level X, where level X is not everything and the kitchen sink but only the minimal stuff: no SPAs, just forms and similarly basic things, with accessibility requirements set very high and made mandatory, etc.
Yeah, definitely. For the web it's more like the last 0.1% takes 99.9% of the time. And it's not like you can skip it either. Nobody is going to use a browser that is missing 0.1% of the web platform - that probably means something like 1% of websites are broken in some way, and that's a terrible experience.
They are decades of work away from having a browser that would be competitive with Chrome or Firefox.
Don't hold your breath, though. Looking at the September progress report[0], there are many, many things to iron out. It's great progress, but there are still several years of development before LB is ready.
It really goes to show what a dedicated team can accomplish. Before Ladybird it was taken for granted that building an entirely new browser engine would take decades and people would laugh at you for even bringing it up.
Before Ladybird, every time someone brought up making a new web engine, pretty much every top-voted comment here on HN was about how that was impossible to do, often pointing to how even a giant like Microsoft had to abandon their engine.
At least now the cynical, pessimistic takes have changed from "impossible, not even MS with their giant teams can do it" to "it may take decades for this small team to do it".
To be fair, when they started, they intended to write a browser from the bottom up, including such things as image and video decoders, networking, etc.
Don't worry. If the web browser ever becomes fast enough to be usable, even more JavaScript crap will be dumped on every website to slow it back down again.