
What's the cheapest way to get the same memory and memory bandwidth as a Mac Studio but also CUDA support?


CUDA is only on Nvidia GPUs. An RTX Pro 6000 would get you close; two of them are 192GB in total, with vastly more memory bandwidth too. Maybe two to four of the older A100s/A6000s could do the trick as well.
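
If it helps to put numbers side by side, here's a tiny Python sketch tallying total memory and rough aggregate bandwidth for a couple of setups. The spec figures are my own assumptions from vendor datasheets (and aggregate bandwidth across separate cards only matters if your model is actually sharded across them), so treat it as a back-of-envelope, not gospel:

    # Back-of-envelope comparison. All figures are approximate assumptions
    # pulled from vendor spec sheets; adjust to whatever config you price out.
    configs = {
        "Mac Studio (M3 Ultra, 256GB)": {"mem_gb": 256,     "bw_gbps": 819},
        "2x RTX Pro 6000 (96GB each)":  {"mem_gb": 2 * 96,  "bw_gbps": 2 * 1792},
        "4x A6000 (48GB each)":         {"mem_gb": 4 * 48,  "bw_gbps": 4 * 768},
    }

    for name, c in configs.items():
        # Note: multi-GPU "aggregate" bandwidth is per-card bandwidth summed,
        # not unified memory bandwidth like the Mac's.
        print(f"{name}: {c['mem_gb']} GB total, ~{c['bw_gbps']} GB/s aggregate")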


The RTX Pro doesn't have NVLink, though, because money. Otherwise people might not have to drop $40,000 for a true inference GPU.


Somehow, it is still cheaper to own 10x RTX 3060s than it is to buy a 120GB Mac.
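
Rough cost-per-GB arithmetic, if anyone wants to plug in real numbers. The prices below are guesses on my part (street/used prices vary a lot), and the 120GB Mac figure is just taken from the comment above:

    # Cost per GB of (V)RAM. Prices are assumed ballpark figures -- swap in
    # whatever you actually see listed.
    rtx_3060_price, rtx_3060_vram = 250, 12    # USD, GB (assumed used price)
    mac_price, mac_mem = 3500, 120             # USD, GB (assumed; figure from comment)

    systems = {
        "10x RTX 3060": (10 * rtx_3060_vram, 10 * rtx_3060_price),
        "Mac":          (mac_mem, mac_price),
    }

    for name, (gb, usd) in systems.items():
        print(f"{name}: {gb} GB for ${usd} -> ${usd / gb:.0f}/GB")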


The Mac will be much smaller and use less power, though.


What do the introspection/debugging tools look like for Apple/Mac hardware when it comes to GPU programming?


Would almost be a no-brainer if the Mac GPU wasn't a walled garden.


Is that any different from Nvidia?


Yes? Apple does not document their GPUs or provide any avenue for low-level API design. They cut ties with Khronos, refuse to implement open GPU standards and deliberately funnel developers into a proprietary and non-portable raster API.

Nvidia cooperates with Khronos, implements open-source and proprietary APIs simultaneously, documents their GPU hardware, and directly supports community reverse-engineering projects like nouveau and NOVA with their salaried engineers.

Pretty much the only proprietary part is CUDA, and Nvidia emphatically supports the CUDA alternatives. Apple doesn't even let you run them.
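
For what it's worth, at the framework level most people route around the split entirely: PyTorch, for example, exposes CUDA on Nvidia and Metal (via the MPS backend) on Apple Silicon behind the same tensor API. A minimal sketch, assuming a reasonably recent PyTorch build:

    import torch

    # Pick whichever accelerator backend this build of PyTorch can see:
    # CUDA on Nvidia hardware, MPS (Metal Performance Shaders) on Apple Silicon.
    if torch.cuda.is_available():
        device = torch.device("cuda")
    elif torch.backends.mps.is_available():
        device = torch.device("mps")
    else:
        device = torch.device("cpu")

    # The same tensor code runs on either backend; only the device string differs.
    x = torch.randn(4096, 4096, device=device)
    y = x @ x
    print(device, y.shape)

That hides the vendor API, but it doesn't change the point above: on the Nvidia side you can drop below the framework into documented, open interfaces, on the Mac you can't.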



