
Yes, but not a single open-source driver can render with Blender 3D.



The default renderer Eevee is a GPU-only renderer that works fine on my RX 570 with the FOSS amdgpu driver.

You can also run Cycles' CUDA backend with ROCm on amdgpu (somewhat...).


> The default renderer Eevee is a GPU-only renderer that works fine on my RX 570 with the FOSS amdgpu driver.

However, you need Cycles, not Eevee. There is a very significant difference in quality between them.

https://all3dp.com/2/blender-render-from-eevee-simply-explai...

> You can also run Cycles' CUDA backend with ROCm on amdgpu (somewhat..).

No. No. No, you can't.

They support only some GPUs, and even only some CPUs! No support for the RX 5xxx-6xxx XT series. They are very picky about the hardware they support.

https://github.com/ROCm/ROCm.github.io/blob/master/hardware....

https://github.com/ROCm/ROCm.github.io/blob/master/hardware....


The RX 570 is in that list, so it seems they can?


I assume that's because Blender uses OpenCL?

I think a lot of progress is being made on that front.


Well, they just removed it entirely...

https://code.blender.org/2021/04/cycles-x/

> Deprecation

As part of the new architecture, we are removing some functionality. Most notably:

OpenCL rendering kernels. The combination of the limited Cycles split kernel implementation, driver bugs, and stalled OpenCL standard has made maintenance too difficult. We can only make the kinds of bigger changes we are working on now by starting from a clean slate.


Vulkan Compute shaders (or GLSL compute shaders, if supported by the OpenGL version) should be quite a bit more robust than OpenCL kernels. The main issue with shaders compared to OpenCL kernels is that the former lack some features, which will have to be optionally supported via standard extensions.

The programming model is also quite different, so code will have to be rewritten.


I'll piggyback on your comment for a question: I only very recently started getting into the basics of GPGPU coding. I hate vendor lock-in, so I started out with OpenCL. I had a good time, but I do understand that more and more of the non-CUDA world (however small that may be) is moving away from OpenCL. What are they moving to? Where should one direct one's "compute learning energy" (if one already excludes CUDA or other proprietary solutions)?


Not OP, but: Unfortunately you need to pick a platform and target that. Which API you use depends on the platform. DirectX, OpenGL, Vulkan and Metal all have compute capability.

So, I think you need to abstract your code at the services level, and provide an implementation for each platform you need to support (the number may be 1 or more than 1).

I think you need to direct your "compute learning" energy at the algorithm level. Numerics and all that. The GPU APIs are just an obtuse implementation detail, not the meat of the thing.
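The services-level abstraction suggested above can be sketched roughly like this (a minimal Python illustration; the `ComputeBackend` interface, the backend names, and `pick_backend` are all hypothetical, not part of any real API):

```python
from abc import ABC, abstractmethod

class ComputeBackend(ABC):
    """Hypothetical service interface: application code depends on
    this, never directly on DirectX, Vulkan, Metal, etc."""

    @abstractmethod
    def saxpy(self, a: float, xs: list, ys: list) -> list:
        """Compute a*x + y element-wise."""

class CpuBackend(ComputeBackend):
    """Reference implementation. A VulkanBackend or MetalBackend
    would implement the same interface with API-specific code."""

    def saxpy(self, a, xs, ys):
        return [a * x + y for x, y in zip(xs, ys)]

def pick_backend(platform: str) -> ComputeBackend:
    # In a real app, each platform maps to its native compute API;
    # only the CPU fallback is implemented in this sketch.
    backends = {"cpu": CpuBackend}  # e.g. "vulkan": VulkanBackend, ...
    return backends.get(platform, CpuBackend)()

backend = pick_backend("cpu")
print(backend.saxpy(2.0, [1.0, 2.0], [10.0, 20.0]))  # [12.0, 24.0]
```

The point is that the algorithm (`saxpy` here) is expressed once against the service interface, and the obtuse per-API details stay hidden behind each backend.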


In addition to picking a platform, you also essentially need to pick a generation. Vulkan 1.0 for example is missing subgroups and has lots of other limitations. Vulkan 1.2 on a modern GPU with an updated driver is pretty darned powerful, with an explicit memory model, control over subgroup size, a form of pointers, and a range of scalar datatype sizes (f16 is especially useful for machine learning, commonly having 2x the throughput as f32). It is possible to have multiple versions and select at runtime, but it's an additional complexity that needs to be considered, and can constrain your architecture to accommodate the compatibility modes.
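The runtime selection mentioned above can be sketched like this (a hedged Python illustration; the feature names and kernel-variant names are invented for the example and do not correspond to real Vulkan queries):

```python
def detect_features(device_caps: dict) -> set:
    """Map reported device capabilities to the feature flags we care
    about (illustrative names, not real driver queries)."""
    wanted = {"subgroups", "f16", "pointers"}
    return wanted & set(device_caps.get("features", []))

# Kernel variants ordered from most to least demanding; the last
# entry is the lowest-common-denominator (e.g. Vulkan 1.0) path.
VARIANTS = [
    ({"subgroups", "f16"}, "fast_f16_subgroup_kernel"),
    ({"subgroups"}, "subgroup_kernel"),
    (set(), "baseline_vulkan10_kernel"),
]

def select_kernel(features: set) -> str:
    """Pick the most capable variant whose requirements are met."""
    for required, name in VARIANTS:
        if required <= features:
            return name
    return "baseline_vulkan10_kernel"

caps = {"features": ["subgroups"]}
print(select_kernel(detect_features(caps)))  # subgroup_kernel
```

This is the "additional complexity" being described: every compatibility tier you keep in `VARIANTS` is another code path to write, test, and architect around.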


> So, I think you need to abstract your code at the services level, and provide an implementation for each platform you need to support (the number may be 1 or more than 1).

There are some cross-platform capabilities via SPIRV-Cross. The LLVM folks are working on MLIR, which might provide even better features with regard to a common abstraction layer going forward.


Thanks! That's a bit of a sad state of things :-(


Do you know what the missing features are? I'm very interested, as I'm rewriting code from OpenCL to Vulkan.


See https://news.ycombinator.com/item?id=27396634 for a basic intro, but IIRC the main differences involved are (1) support for general pointers in OpenCL, whereas data indirection in compute shaders involves varying levels of messiness, and (2) OpenCL introduces support for some mathematical functions that are not in Vulkan Compute. (However, note that OpenCL 3.0 has also made many features optional, so the basic feature set may be more comparable as a result.)


Note that they are not removing GPU rendering. Just the OpenCL kernels. They'll still have NVIDIA and AMD kernels. As long as the open source drivers keep working towards supporting the required features those will eventually work on them.


> They'll still have ... and AMD kernels.

Do they? Blender uses an OpenCL implementation for AMD GPUs.


Oh did I misremember that? It has been a hot minute since I last owned an AMD card. Apologies for the misinformation.



