There's plenty of room for both approaches: a lot of projects can benefit from using a platform-agnostic API like WebGPU (web or native) directly, while others might want to use engines. Anecdotally, I use WebGPU (through wgpu) in a commercial application for a visualization, and I would never have bothered with Vulkan or DX12 for that.
Documentation will keep improving with time. There have already been a number of high-quality tutorials and references created over the past few years, for example:
Is there even a good middleware engine that can also target the web? My impression is that both Unity and Unreal have expressed interest in WebGPU support, but what's publicly available is pretty barebones.
Also, I imagine the people working with WebGPU today are either hobbyists, people looking to learn it for a professional role, or people who are in fact building that future middleware (be it proprietary or open source).
I thought Vulkan was getting better? I'm mostly interested from a platform-agnostic compute perspective, and I've used some libraries that employed WGSL for that. I'd added WebGPU to my backlog of things to learn, but lately it has seemed like Vulkan might be the better approach.
The problem is that people are reaching for it as a means of avoiding the Vulkan mess, when the real alternative is middleware engines.