It says "Browser doesn't support WebGPU" in both Chrome and Firefox in Linux
What's the situation of WebGPU, is it supposed to work in Linux or not?
I'm sure that I've seen other GPU-like things (water simulations, etc.) run in my browser before, so I'm not sure what's wrong this time, or how many different GPU APIs exist for browsers other than WebGPU (like WebGL).
I sometimes enable and/or disable "hardware acceleration" in Chrome and/or Firefox because one or the other occasionally causes video problems (having hardware acceleration on can sometimes make videos slower, despite what you'd think). Is this hardware acceleration setting related to WebGPU?
> What's the situation of WebGPU, is it supposed to work in Linux or not?
Chrome officially supports WebGPU on all platforms except Linux for now; you can force it on with a flag, but it's obviously not ready for prime time yet. Firefox has yet to officially ship WebGPU at all, but it's supposed to be coming in version 141, due at the end of July. Safari is still dragging its feet, with no indication at all of when they'll get around to shipping.
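(For what it's worth, the "Browser doesn't support WebGPU" message usually just means feature detection on navigator.gpu failed; a minimal sketch of that kind of check, not the demo's actual code:)

    // Minimal sketch of typical WebGPU feature detection (an assumption, not this demo's code).
    async function webgpuAvailable(): Promise<boolean> {
      const gpu = (navigator as any).gpu;   // typed via @webgpu/types in a real project
      if (!gpu) return false;               // API not exposed at all (e.g. Chrome on Linux without flags)
      const adapter = await gpu.requestAdapter();
      return adapter !== null;              // API exposed, but no usable adapter/driver
    }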
I see. I thought 3D graphics in browsers had already existed and worked on Linux for a very long time, so this must be something new then... what's the difference between WebGPU and WebGL, which has allowed demos like this one for ages? https://madebyevan.com/webgl-water/
Is there some reason WebGPU has difficulty getting implemented on Linux, like DRM features or something?
Different API: WebGPU is supposed to be a safe common denominator for Vulkan, DirectX 12, and Metal.
WebGL is the same thing, but for OpenGL ES 2.0, and WebGL 2 is that but for OpenGL ES 3.0. WebGL is OK for graphics, but pretty hard to adapt effectively for compute / simulations.
WebGL has a bunch of limitations, but a big one is that you can only use the GPU to move around triangle vertices (vertex shader) and color them in (pixel shader).
If you want to do something like that cool water simulation, you have to do some painful hackery to pretend all your data is actually vertices or colors in a texture. Even with hackery, there are still lots of things you simply can't do.
WebGPU supports compute shaders, which let you read and write whatever data you want.
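For a concrete sense of the difference, here's a minimal sketch (my own illustration, not from any demo; buffer size and shader are made up) of a WebGPU compute shader doing arbitrary reads and writes on a storage buffer, which WebGL has no direct equivalent for:

    // Sketch: double every value in a storage buffer with a WebGPU compute shader.
    const adapter = await navigator.gpu.requestAdapter();
    const device = await adapter!.requestDevice();

    const shader = device.createShaderModule({
      code: /* wgsl */ `
        @group(0) @binding(0) var<storage, read_write> data: array<f32>;

        @compute @workgroup_size(64)
        fn main(@builtin(global_invocation_id) id: vec3<u32>) {
          if (id.x < arrayLength(&data)) {
            data[id.x] = data[id.x] * 2.0;   // arbitrary read-modify-write, no texture tricks needed
          }
        }
      `,
    });

    const pipeline = device.createComputePipeline({
      layout: "auto",
      compute: { module: shader, entryPoint: "main" },
    });

    const buffer = device.createBuffer({
      size: 1024 * 4,   // 1024 f32 values (arbitrary)
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC | GPUBufferUsage.COPY_DST,
    });

    const bindGroup = device.createBindGroup({
      layout: pipeline.getBindGroupLayout(0),
      entries: [{ binding: 0, resource: { buffer } }],
    });

    const encoder = device.createCommandEncoder();
    const pass = encoder.beginComputePass();
    pass.setPipeline(pipeline);
    pass.setBindGroup(0, bindGroup);
    pass.dispatchWorkgroups(Math.ceil(1024 / 64));
    pass.end();
    device.queue.submit([encoder.finish()]);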
If I had to guess, the reason WebGPU is not implemented on Linux is that nobody is paid to add features to the Linux builds. That means new features are delayed by several years. WebGL is old and well supported; WebGPU is newer and has less support.
Google cares about Chrome Linux support and pays people to work on it, for two reasons. One is Chrome OS, and the other is that most Google engineers use Linux desktops for work.
Most likely minor behaviour differences or bugs in specific Linux Vulkan drivers or driver versions and the Chrome WebGPU team first wanting to get other platforms in shape. Chrome for Android has WebGPU enabled and that also runs on top of Vulkan, but I guess Google has more control over driver quality in the Android ecosystem than on Linux.
Huh, disappointing that it hasn't shipped in Chrome on Linux. There's no pressure for other browsers to implement it when it's not even on all of Chrome's supported platforms yet.
Chromium is typically the bundled browser, but around 2019 Google restricted certain aspects of Chromium under Linux, and in order to continue using things like browser login under Ubuntu, I needed to install Chrome instead.
Hardware acceleration perks seem like another feature that may have a Chrome/Chromium divide.
It's much simpler in design, though. Each particle has a constant forward speed and a varying move angle. At each step it can change its turn angle to be either more left or more right. The turn angle then determines how strong the 'glow' of a particle is: if it turns strongly to the left, the glow is strongly red, and vice versa for blue and right.
Then, for the next iteration, the point under the particle is evaluated by colour. If the colour the particle is over is red -> become more red. If it's blue -> become more blue. So it's a dynamically shifting system where particles turn from blue to red depending on their surroundings.
All calculations are done on the GPU. The positions are encoded into textures (the last 3 positions, to calculate the current moving angle), along with the previous step's render.
Often particles become very isolated due to their strong turning, and clusters form. But then a huge pack of opposite-turners may come and disrupt this. Sometimes one colour wins.
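(Roughly, the per-particle rule described above looks like the sketch below. This is just a CPU-side illustration with made-up names and constants; the actual demo does it in shaders with positions packed into textures.)

    interface Particle { x: number; y: number; heading: number; turn: number; }

    const SPEED = 1.0;       // constant forward speed (assumed value)
    const TURN_STEP = 0.02;  // how much the turn angle can drift per step (assumed value)

    function step(p: Particle, under: { r: number; b: number }) {
      // Red underneath -> turn more left (stronger red glow); blue underneath -> more right.
      p.turn += under.r > under.b ? -TURN_STEP : TURN_STEP;

      p.heading += p.turn;
      p.x += Math.cos(p.heading) * SPEED;
      p.y += Math.sin(p.heading) * SPEED;

      // Glow colour/strength follows the turn: strong left = strong red, strong right = strong blue.
      const glow = Math.abs(p.turn);
      return p.turn < 0 ? { r: glow, b: 0 } : { r: 0, b: glow };
    }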
I never knew WebGL or OpenGL; my first experience was learning WebGPU, because I planned to use it for the drawing in 90s.dev, which is meant to be future-first. But WebGPU is just not there yet. And from what I read about it, it looks like it has only slightly better performance than WebGL2, being the common denominator of all post-2015 native GPU frameworks. So it's disproportionately harder to write, doesn't have nearly all the features native would, and all this to be only slightly faster. Which is why I switched to WebGL2 as the graphics backend (WIP). Now I have to learn how all the GL state functions work (ugh). I admit that was easier to reason about in WebGPU, mainly because it's easier to deeply digest and understand APIs when they have clear ins/outs that link together clearly and cleanly. Which GL doesn't. At all. Just a buncha confusing globals. But this helps, kinda: https://webgl2fundamentals.org/webgl/lessons/resources/webgl...
Yeah, GL's global state is awful. The least-bad way to deal with it is to build your own pipeline abstraction on top, similar to the native pipeline constructs of newer APIs like WebGPU, so most of the messy global state manipulation is centralized in your "bindPipeline" function. Then the rest of your codebase can mostly pretend it's running on a sane (albeit dated) API instead of a giant ball of spaghetti.
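A minimal sketch of that idea on top of WebGL2 (the name bindPipeline and the particular state flags are just for illustration):

    interface Pipeline {
      program: WebGLProgram;
      blend: boolean;
      depthTest: boolean;
      cullFace: boolean;
    }

    // All the messy global toggles live here and nowhere else.
    function bindPipeline(gl: WebGL2RenderingContext, p: Pipeline) {
      gl.useProgram(p.program);
      p.blend ? gl.enable(gl.BLEND) : gl.disable(gl.BLEND);
      p.depthTest ? gl.enable(gl.DEPTH_TEST) : gl.disable(gl.DEPTH_TEST);
      p.cullFace ? gl.enable(gl.CULL_FACE) : gl.disable(gl.CULL_FACE);
    }

    // The rest of the renderer just does: bindPipeline(gl, spritePipeline); gl.drawArrays(...);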
While WebGPU is supposed to be more efficient, the real appeal is that it supports new features like compute shaders, in a form that Apple will actually agree to implement (they refuse to implement newer versions of OpenGL due to legal disputes with Nvidia). And we care whether Apple ships it because of their monopoly on iOS browser engines. Of course, Apple hasn't actually shipped it yet either...
Shameless plug: sokol_gfx.h has an API quite similar to WebGPU and has a WebGL2 backend (most notably WebGPU-style immutable pipeline objects which on WebGL2 use a state cache to avoid redundant calls into WebGL2), I'd say that the API is even easier to use than WebGPU, but of course I'm biased ;)
...this is only an option if you're comfortable with C or Zig, though (for native build targets you'd have more language options), although I keep rolling the idea around in the back of my head to eventually add JS/TS to the language bindings generator.
PS: You're spot on with the performance comparison. It's baffling that WebGPU in some areas isn't any faster or even slower than WebGL2 (but tbf, D3D11 - which WebGL2 uses on Windows - is hard to beat when it comes to raw throughput - I wonder if a WebGPU D3D11 backend would beat the current D3D12 backend).
It runs worse, but because it's new, everyone believed and hoped it would be an overall improvement and perform better. Graphics is extremely performance-bound.
It runs just fine and adds important functionality that is entirely absent in WebGL (storage buffers and compute shaders). But I disagree with a lot of the design decisions, which eventually made me give up on WebGPU. Like, what's the point if it's still 10 years behind current desktop capabilities, just like WebGL was when it came out. And it adopted way too much of Vulkan's needless complexity, some of which isn't even necessary in Vulkan anymore but is still in WebGPU, like render passes.
Render passes in WebGPU are more like in Metal and Vulkan's Dynamic Rendering extension (which is pretty much a direct copy of Metal's render pass concept). The problem with Vulkan 1.0 render pass objects is that they are baked objects, and worse, that pipeline objects required a render pass object reference (although it didn't have to be an actual render pass used for rendering, just a 'compatible' render pass - still a pretty bad design wart).
Metal-style transient render passes are actually a very good thing, especially for tiler gpus.
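For comparison, this is roughly what a transient render pass looks like in the WebGPU JS API: it's recorded fresh each frame rather than baked into an object up front (this sketch assumes device, context, and renderPipeline are already set up):

    const encoder = device.createCommandEncoder();
    const pass = encoder.beginRenderPass({
      colorAttachments: [{
        view: context.getCurrentTexture().createView(),
        loadOp: "clear",                  // load/store ops are what tiler GPUs care about
        storeOp: "store",
        clearValue: { r: 0, g: 0, b: 0, a: 1 },
      }],
    });
    pass.setPipeline(renderPipeline);     // the pipeline doesn't reference a render pass object
    pass.draw(3);
    pass.end();
    device.queue.submit([encoder.finish()]);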
Storage buffers are blocks of 32-bit memory, so why not just use a 32-bit texture as a storage buffer? Is their implementation actually different?
You can't do random writes to textures in WebGL, which is required by the vast majority of algorithms. Some hacks exist, all of which come with severe limitations and performance penalties.
Treating a texture as a 'poor man's storage buffer' often works but is much more awkward than populating a storage buffer in a compute shader, and you're giving up on a lot of usage scenarios where WebGPU is simply more flexible (even simple things like populating a storage buffer in a compute shader and then binding it as vertex- or index-buffer).
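A sketch of that last scenario (names like simulatePipeline and the buffer layout are assumptions): one buffer gets both the STORAGE and VERTEX usage flags, a compute pass fills it, and the render pass then consumes it directly as vertex data, with no copies or texture tricks:

    const particleBuffer = device.createBuffer({
      size: NUM_PARTICLES * 4 * 4,   // e.g. one vec4<f32> per particle (assumed layout)
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.VERTEX,
    });

    const encoder = device.createCommandEncoder();

    // Compute pass writes the particle data...
    const compute = encoder.beginComputePass();
    compute.setPipeline(simulatePipeline);
    compute.setBindGroup(0, simulateBindGroup);   // binds particleBuffer as var<storage, read_write>
    compute.dispatchWorkgroups(Math.ceil(NUM_PARTICLES / 64));
    compute.end();

    // ...and the render pass reads it straight back as a vertex buffer.
    const render = encoder.beginRenderPass(renderPassDescriptor);
    render.setPipeline(drawPipeline);
    render.setVertexBuffer(0, particleBuffer);
    render.draw(NUM_PARTICLES);
    render.end();
    device.queue.submit([encoder.finish()]);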
It would have made sense 5 years ago when it wasn't clear that WebGPU would be delayed for so long, but now that WebGPU support in browsers is actually close to the finish line it's probably not worth the hassle.
I'm not a big fan of it, but that's quite an exaggeration. If it were "completely unusable", then people wouldn't be using it, and it wouldn't have been an issue. The fact that the situation isn't what we want it to be, but nevertheless is still tenable makes it a much harder problem to deal with.
It's really why enshittification can be a thing. Services with network effects can do random walks in quality, or degrade significantly, while still keeping their staying power.
Enshittification, as described by Cory Doctorow and others, is just one potential, but likely, path for a popular service to take.
I'm European, so: heavy-ass regulation. Don't trust the market.
We don’t trust capitalism with all important bits of civilization: government, water and food quality, care, education, etc. I don’t see why it has to be let loose here.
Social media has a special place in my heart. A hateful place. If it were up to me I’d ban it outright. It’s eroding the fundaments of society while contributing nothing of value. It’s like crystal meth in that regard.
WebGPU is experimental in Firefox on all platforms, but especially on Linux. Chrome on Linux should have it, but I've not gotten it to work - it might be Chromium, might be a flag, or something else.
Just wanted to mention huge props for including a video in your post. Most people browse the web on mobile, which makes most of these WebGPU demos literally unwatchable.
Really cool! We're building a marketplace for high-quality WebGPU games at Wavedash. We made this little demo of Unity's Viking Village so you can see how it runs on your machine and whether it uses WebGL or WebGPU (we're also still optimizing load times right now, sorry about that). Press "c" to walk around.
That's cool and all, but both WebGL and WebGPU builds of that Unity demo were already done 2 years ago by someone... so your post doesn't offer anything new or groundbreaking, sorry to say. See the links below: