Good. Maybe we can fight back against the browser complexity. When you have free browser money, it becomes much easier to partake in turning the web into a morass of difficult-to-implement functionality, which then requires taking browser money.
I completely appreciate what you're saying. Then I look at the level of crazy complexity and backward compatibility in html/css/js/wasm processing. And then I wonder: what are you actually proposing here?
I'm assuming people with such a hatred for browser complexity absolutely love the way those Delphi programs worked back before Web 2.0 made browsers a viable GUI platform, because that's the direction we're going in if browsers start dying. Browsers have become the de-facto way to work for most people, and that's a major reason why Microsoft has been losing market share.
People say HTML/CSS/JS/WASM is complex, but the Ladybird team is proving that a very small team can make a working browser in a few years. Thanks to the efforts of dedicated developers behind browsers, most of the web APIs, including rendering algorithms and such, have been painstakingly written out in detail.
> the level of crazy complexity and backward compatibility in html/css/js/wasm processing
Most people don't need insane levels of backwards compatibility or intense PWA support. That's just cruft that slows everything down and increases the attack surface for, to the user, no real gain.
Perhaps what we need is a lot of lightweight general-use browsers (based on a small number of engines) and then some heavyweight power-user browsers that can WASM to their hearts' content.
No, what you mean is 'most greenfield web dev projects don't need...'.
Most people do need those things, because assuming there's no civilisation-spanning project to literally rewrite 90% of the web, without them their sites would break.
> Most people do need those things, because assuming there's no civilisation-spanning project to literally rewrite 90% of the web, without them their sites would break
Sites most people visit do not require backwards compatibility. And aside from like Google Docs, I doubt most folk are doing anything with WASM (outside plugins).
Look, in a world where Google subsidises browser development, this isn't an issue. We don't need to compromise. But if that funding stream disappears, you do have to compromise. And I'd argue a simple browser doing away with some of the more-complicated stuff would be (a) maintainable and (b) popular enough to pay itself back.
Unless you can make it so the only part that doesn't work is the bad part. A browser where ads don't work and tracking doesn't work but everything else works is a good browser.
Rewriting everything seems to have been a regular exercise in general... so I don't see a problem with doing it once more. How many sites are actually, say, older than 5 years? And how much work would they need?
Who said we have to kill off Wasm? Wasm (minus SIMD) is extremely simple, and it allows the user (the webdev) to ship the complexity rather than build the complexity into the browser itself.
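To put the browser's side of that in perspective, here's a hedged sketch of essentially the entire embedder surface needed to run a Wasm module; the module name "app.wasm", the `env.log` import, and the `add` export are all made up for illustration:

    // Sketch: load and run a Wasm module from the browser side.
    // "app.wasm", `env.log` and `add` are invented for illustration.
    const { instance } = await WebAssembly.instantiateStreaming(
      fetch("app.wasm"),
      { env: { log: (x) => console.log(x) } } // imports the module declares
    );
    // The app's complexity ships inside app.wasm; the browser only has to
    // validate and execute a small, well-specified instruction set.
    console.log(instance.exports.add(2, 3));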
> We use WASM in web apps that are used by a very large number of “regular” consumers. It would be stupid to kill that off.
Large numbers of users use lots of features. That doesn't change that most of them use none of them. WASM would continue to exist. It just doesn't need to be in every browser.
This is going to be unpopular, but... just to illustrate that we didn't have to be stuck here: using things like xpra/Xephyr to serve a whole X11 GUI over the web is surprisingly easy and awesome, and like 1/100th the complexity of a modern web stack.
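For the curious, a minimal sketch of the server side using xpra's built-in HTML5 client; the display number, port, and xterm are arbitrary example choices:

    # Sketch: serve an X11 app through xpra's built-in HTML5 client.
    # Display :100, port 14500 and xterm are arbitrary examples.
    xpra start :100 --bind-tcp=0.0.0.0:14500 --html=on --start-child=xterm
    # Then point any browser at http://<host>:14500/ and use the app.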
This might not be cheap to serve, but it’s cheap to build, and it makes you wonder about the intersection and inflection of those cost curves. And of course we haven’t spent decades optimizing for it.
Don't get me wrong... REST APIs, HTTP, HTML5: all wonderful. But as a user, the cost/benefit of ubiquitous, in-depth JavaScript simply to win interactivity and single-page apps, at the cost of, um, everything wrong with the web (and by extension much of the world economy, via surveillance capitalism), is a bit suspect.
iirc WASM bytecode closely resembles V8 IR. If you're writing a JS engine, might as well provide a more direct frontend to it... I don't think it adds much.
If we had as many browsers as OSs - somewhat interoperable but genuinely independent - then I would feel much better about the web. Compare NT/Darwin/*+Linux/(Net|Free|Open|Dragonfly)BSD/illumos (to say nothing of the long tail; you can in fact use Haiku for a lot) against Gecko/Blink/WebKit.
If you consider the BSDs and Illumos to be operating systems, you might as well consider Lynx/Ladybird/Servo/Netsurf/Trident as browsers.
For 97% of the world, there are four operating systems: Android, Windows, iOS, and macOS. There are three browsers: Chrome, Safari, and Edge. The rest is an irrelevant sidenote that only hobbyists and developers care about, existing at the grace of the megacorporations that sponsor them.
I have daily driven OpenBSD and FreeBSD, and I have tried to use anything that isn't Gecko or Blink. I feel comfortable saying that the OS situation is much better than the browser situation. I'm interested in technical merit / feature completeness, not popularity. The situation is likely to improve with Ladybird and maybe Servo, granted.
Agreed, and I would argue that it's even worse on the browser side. We have Chrome and Safari on iOS; the rest is essentially irrelevant. With regards to web standards, Edge is just another Chrome reskin. When Apple sooner or later loses the WebKit monopoly on iOS, it will all be Chrome...
and there are plenty of somewhat usable browsers out there too.
But the OP's implication is that there ought to be a fully working browser (one that satisfies the standards of the day) that's creatable by someone in their garage.
That hasn't been true for cars, appliances or any modern equipment for ages. And the same phenomenon being applicable to software isn't really that unimaginable (nor a problem).
> That hasn't been true for cars, appliances or any modern equipment for ages.
Personally, I don't see that as progress. I don't need touch screens and surveillance everywhere in every major purchase I make. I fail to see what we have gotten in exchange for all the increased complexity.
If I had to make a guess, you, as a person, can't implement an OS, personal computer, or any other appliance in your home. Maybe you can do the wiring or manage to dig out the basement itself. Not sure why browsers specifically draw your ire.
Absolutely! Looking at this objectively, most of the web and browser developments over the last two decades have been for the benefit of Big Tech and business—not typical web users.
These developments have been forced on users to allow that mob to sell us more stuff, confine what we do, and spy on us and collect our statistics, etc. Moreover, complicated web browsers provide a larger surface/more opportunities for attack.
Everything I want to do on the Web I could do with a browser from the early 2000s.
I mostly run my browsers without JavaScript. That kills most ads and makes pages load so much faster (as pages are much, much smaller). Without JavaScript I often see a single webpage drop from over 7MB down to around 100kB.
7MB-plus for a webpage is fucking outrageous, why the hell do we users put up with this shit?
It seems to me that if all that Google infrastructure were to be busted up and browsers went their own way, then the changes in the browser ecosystem would eventually force lowest-common-denominator standards (more basic APIs, back to HTML, etc.).
With simpler web tech being the only guaranteed way of communicating with all Web users, this would force the sleazeballs and purveyors of crap and bad behavior to behave more openly and responsibly. Also, users would be able to mount better defenses against the remaining crap.
In short, the market would be less accessible unless they reverted to lower tech/LCD web standards, and that'd be a damn good thing for the average web user.
> 7MB-plus for a webpage is fucking outrageous, why the hell do we users put up with this shit?
That's mostly due to insane web "frameworks" like React, and developers who (systematically) overuse and misuse them, then test their websites on WiFi/5G and iPhones with superfast chips so they don't notice (their users do). The solution is to increase the capabilities of "native" Javascript and CSS, and put massive effort into interoperability so web devs stop feeling the need for frameworks as "compatibility shims" (looking at you, IE and Safari). Those solutions are exactly what browser makers (sans Apple) have been focusing on lately.
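To give a flavour of what "native" already covers, here's a hedged sketch of a framework-free stateful component using the standard Custom Elements API; the element name and markup are invented for illustration:

    // Sketch: a stateful component with zero framework code, using the
    // built-in Custom Elements API. Name and markup are made up.
    class ClickCounter extends HTMLElement {
      connectedCallback() {
        let count = 0;
        this.innerHTML = `<button>clicked 0 times</button>`;
        this.querySelector("button").addEventListener("click", (e) => {
          e.target.textContent = `clicked ${++count} times`;
        });
      }
    }
    customElements.define("click-counter", ClickCounter);
    // usage in plain HTML: <click-counter></click-counter>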
The solution you recommend would have the exact opposite effect of what you intend.
"The solution you recommend would have the exact opposite effect of what you intend."
It depends on how one works and what one has to do and/or wants to achieve. I've pretty much worked the Web without, say, JavaScript since it was first released. In fact, I even used to turn off the 'scripts' setting in Internet Explorer, it's that long ago.
Over that time I've become addicted to the raw speed of page rendering sans JS, and similarly the lack of annoying 'jitters' and hesitations in page rendering that it causes. Same goes for the absence of ads, etc. In fact, I've rarely had need to resort to ad-blockers, as I see so few ads without JS enabled.
Is working without, or with very limited, JS use an impediment? For many it clearly is, because Big Tech and vested interests have forced the tech down the throats of users in places where its use is not essential. For me, however, the Web mostly sans JS is not a problem. I've used the Web since its inception and I do everything I want, and that's always been so.
Occasionally, I'm forced to use JS when purchasing something so I'll first determine what I want sans JS then clear caches etc. And sometimes I even switch browsers to make the purchase—I see no need to give these snoopers more data than absolutely necessary, I expect the same privacy online as I'd get from walking into a retail shop and paying cash. So should everyone.
Sites that force JS I back out of faster than entering them—there are plenty more fish in the sea so to speak—more than I can ever consume in my lifetime. For example, on news sites that force JS and block access there are others running the same stories that do not, only neophytes aren't aware of that.
I cannot think of one instance where I've been locked out of such info and not found an alternative source for not having used JavaScript.
Enabling and disabling JS is dead easy on my browsers, my JS icon is red when it's off and green when on. A single click changes the state and a page refresh reloads the page in whatever state I want, it's dead easy to work this way.
Same happens on my phones, when I buy a new phone it takes me some while before I even insert the SIM as I first have to delouse it of all the Google and vendor shit—there's not one Google service I use or have need to use, there are many alternatives, NewPipe, F-droid, etc., etc.
I've nothing against JS, only the way it's used and much abused. What's desperately needed are JS browser engines that allow users to manage its functionality, what it's allowed to do—to kill access to user data by default or as specified, and/or supply randomized crap data back to snoopers, and so on, so that users can take back control of their Web browsing.
Your point is only valid if the user wants certain enhanced features, which is not always the case. For example, information sites and government services etc. that convey essential information do not need JS; likewise, they don't even need CSS.
Look at it this way: written text contains the same information whether it's displayed in system typeface fonts, courier monospace or Garamond. Sure, Garamond looks much nicer and fancier but it's not essential to convey information. Same goes for much of the other overrated and much abused internet standards.
Of course, that statement will cause you apoplexy if you're a web developer, for two reasons: first, your income likely depends on it; and second, it's a notion so foreign to and removed from internet developers' thinking/daily experience that it's inconceivable to them. Well, enough of us outside that circle are now thinking this way not to let the notion die. Perhaps also you're not old enough to remember when everything was simple and we got our information in system or courier typefaces. Those limitations did not stop users from doing the essentials. The new generation of Web developers either have never known that or have conveniently forgotten the fact.
There are no satisfactory reasons why such websites (or all websites for that matter) cannot recognize browsers with only limited facilities available, such as those having only HTML, or HTML/CSS sans JS, and then automatically issue pages that support those modes—no reason, that is, other than commercial/vested interests. And in a nutshell, that's the key problem.
The Web has become so dysfunctional, operationally stereotyped and so commercialized because of these vested interests that many now consider it broken and are calling for it to be fixed. Moreover, unlike the early Web, it's now so important that the mess it's in cannot be allowed to continue indefinitely; thus government intervention will eventually be necessary to regulate it to ensure that all users have access (ensuring minimum connectivity standards etc. to allow all browsers—even text-only ones—to have access to essential information), in the same way governments have had to regulate other essential services and utilities in the past.
…And regulation is not new to world history; it goes back centuries. Long ago, snail mail was initially regulated by governments and then regularized by international convention; same with rail gauges, with most of the world now running on standard gauge. Then there are motoring regulations: it took many decades, but most of the rules and road signage across the world are now very uniform because of government intervention, later strengthened through international conventions/agreements. Same with many other matters: weights and measures, the ISO, etc.
Right, in the past that LCD conformity took many years to achieve. That said, the internet/Web is only a few decades old, it also now exists in a world that's highly connected which makes it much more capable of adapting to a fast-changing world.
The only reasons why most internet and Web techies are hostile to and/or horrified by the notion of tighter regulations are that they've grown up in an environment sans regulations, and taking away freedoms always hurts, especially for those unscrupulous people who rort the system, steal user information, etc.
The railway barons of the 19th Century thought the same but every country now has strict regulations governing railways from technical and safety requirements to rules governing ownership—and many of these regulations came into existence because operators/owners abused their privileges.
We're now seeing consumers (and thus lagging governments) catching up, and it's about time. Unfortunately, the internet took off quickly and dazzled everyone, and we're only now coming to our senses—back down to reality. Another tragedy was that it was built on software, which meant minimal physical resources/materials were needed. This allowed companies like MS, Meta, Apple, etc. to exploit the fact and become the biggest and richest corporations in history. When companies can become that large and so rich in so little time, it doesn't require Einstein to determine something is very wrong with commerce and the regulation thereof.
Those at least have to be downloaded and installed by the user, which indicates a high level of intent/consent and is difficult to do accidentally. In the browser environment, malicious content can be navigated to without any user intent or consent whatsoever, which when combined with holes punched in browser sandboxes for the sake of fancy features makes for danger with a dramatically larger scope.
Right now, most untrusted code runs in the browser's sandbox, and that's great - outside of the realm of fancy 0 days, the damage is limited.
But if downloading apps becomes the norm again (like every online store asking you to get their app and an extra app for a discount program), I expect that socially engineering less technical users into downloading malware will become much easier.
Honestly, the end of everything being a chrome-based app and people making actual native desktop apps that run at 10x the speed with 1/10th the energy usage would be excellent. I really hope that does happen.
"that run at 10x the speed with 1/10th the energy usage"
How many current developers optimize their products for speed and energy usage?
I can see the very opposite happening: half-baked apps, whose massive portions were written using free-tier AI output, hogging gigabytes of RAM and four processor cores while the cursor is idly spinning and the laptop is becoming hot.
Compared to the past (and my memory goes back to Netscape Navigator 3, old person that I am), modern browsers seem to be technologically fine.
Aside from a few rushed features, all the things that have been coming to web are really lovely. I'll be very sad if this all slows down. We were just about at feature parity with native mobile apps.
But practically? How many sites actually offer an innovative and/or mobile-first version of their website anymore?
There was definitely a time when we had websites delivering various layouts based on the viewport size of the user agent. CSS media queries, flexible layouts, etc. were all very important innovations, for a very short-lived period of time.
Now, every serious web presence has moved on to offering their own mobile app, pushing users that direction. The browser was stubborn and erred on the side of privacy. So it didn't quite offer all the integrated (ahem, intrusive) means to interact with the user's device in order to bleed every penny and every bit of data mined from your usage and behaviors.
So I don't see anything lovely in the current situation at all. The traditional web -- you know the one where you surf with a web browser to discover the world -- has been dying for quite some time. It might even be dead and we just don't realize it yet.
We don't need web browser parity with mobile apps. We just need the web to be what the web is good at. It's a lost cause thinking that the web browser will ever integrate with a portable device quite the same as a native app. Those days are gone.
>Now, every serious web presence has moved on to offering their own mobile app, pushing users that direction.
If Apple didn't do everything in their power to slow down the adoption of PWAs, you might have seen it take off by now. They still won't allow you to easily install a PWA to your homescreen; you basically have to be a power user (a reader here, and maybe not even then) to know about it.
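For context, the install hook Chromium-based browsers expose looks roughly like the sketch below (the `installButton` element is an assumed bit of page UI); Safari simply never fires this event, so there is nothing equivalent to hook:

    // Chromium fires `beforeinstallprompt` once a site meets its PWA
    // install criteria; Safari never fires it. `installButton` is a
    // hypothetical button in the page's own UI.
    let deferredPrompt;
    window.addEventListener("beforeinstallprompt", (event) => {
      event.preventDefault();  // suppress the default mini-infobar
      deferredPrompt = event;  // stash it to trigger from our own UI
    });
    installButton.addEventListener("click", async () => {
      if (!deferredPrompt) return;       // e.g. Safari: never fired
      deferredPrompt.prompt();           // show the browser's install dialog
      const { outcome } = await deferredPrompt.userChoice;
      console.log(`install ${outcome}`); // "accepted" or "dismissed"
    });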
That would be nice but up to now there's been no real consequences for Apple, the operators of the biggest walled garden. MS has also been a pretty bad actor in many ways, although their platform is slightly open, for now.
Clearly you’re not doing much front line web development.
Web browsers are incredibly capable, and all the features they add are making browsers better, life easier for developers, and the experience better for users.
This is the sort of comment that back end developers make, who hate front end development.
You do realize a terrible company will buy Chrome, and we will be forced to wait until something better arrives (Yahoo is interested at the moment). It's going to get much worse before it gets better.
1) chromium is open sourced and there are plenty of forks
2) You're being facetious if you're saying Firefox is much worse. Feature sets and performance are very similar. Most people would not notice the difference if reskinned
2.1) ditto for Safari or any of the chromium browsers.
3) a monopoly is good for no one (even the monopoly)
1) all those forks are soft forks that rely on Google's maintenance of Chromium. So unless they are willing to invest a LOT more into development, or someone else does a hard fork and puts enough resources into it, whoever buys Chrome will inherit a lot of power over those forks.
2) Without the funding from Google search, Firefox's future is very much in question. Unlike Apple and MS, Mozilla doesn't really have other funds to pull from to maintain a browser.
There are forks but they're very limited in how far they can deviate from what Google wants. The Manifest v3 discussions show this. Extension APIs aren't a big part of browsers compared to all the other things they do, and there was clearly demand to keep Manifest v2 alive, so you'd expect at least one or two forkers to differentiate by doing that.
In practice the rebasing costs are so high that everyone shrugged and said they had no choice but to go along with it.
Chromium is open source, but not designed for anyone except Google to develop it. Nothing malicious about it, it's just that building a healthy contributor community is a different thing to uploading some source code. If you've ever worked with the Chromium codebase you'll find you have to reverse engineer it to work things out. The codebase is deliberately under-architected (a legacy of the scars of working on Gecko), so many things you'd expect to be well defined re-usable modules that could be worked on independently of Google are actually leaky hacks whose API changes radically depending on what platform you're compiling for, what day of the week it is, etc. Even very basic things like opening a window isn't properly abstracted in a cross platform manner, last time I looked, and at any moment Google might land a giant rewrite they were working on in private for months, obliterating all your work at a stroke.
There are reasons for all of this... Chrome is a native app that tries to exploit the native OS as much as possible, and they don't want to be tied down by lowest-common-denominator internal abstractions or API commitments. But if you view Chrome as an OS then the API churn makes Linux look like a stroll through a grassy meadow.
> The codebase is deliberately under-architected (a legacy of the scars of working on Gecko), so many things you'd expect to be well defined re-usable modules that could be worked on independently of Google are actually leaky hacks
I'm guessing you're specifically referring to Gecko's early over-use of XPCOM, which the Gecko team itself had to clean up in a long process of deCOMtamination [1].
I'm hopeful that if Servo ever gets enough funding to become a serious contender among browser engines (hey, KHTML was once an underdog too), that it might walk a middle path between overuse of COM-ish ABIs and what you described about Chromium. Servo is already decomposed into many smaller Rust crates; this provides a pretty strong compile-time boundary between modules. Yet those modules are all statically linked, and in a release build, that combined program gets the full benefit of LTO. Of course, where trait objects are used, there's still dynamic dispatch via vtables, but the point is that one can get strong modularity without using something COM-ish everywhere.
Incidentally, the first time I built Chromium (or more specifically, CEF) from source in late 2012, I was impressed as I watched hundreds of static libraries being linked into one big binary. Then as I studied the code (though not deeply enough to learn the things you described), I saw that Chromium didn't use anything COM-ish internally (though CEF provided a COM-ish ABI on top). That striking contrast with Gecko's architecture (which I had previously worked with) stuck with me. I wonder how much the heavy reliance on static linking and LTO (meaning whole-program optimization), combined with the complete lack of COMtamination, contributed to Chrome's speed, which was one of its early selling points.
[1]: Mozilla used to have a dedicated wiki page about deCOMtamination, but I can no longer find it.