The open-source ZLUDA project has picked up support for AMD’s ROCm 7 series, expanding its reach on both Windows and Linux.
The goal has not changed, even if the project has worn several coats of paint over the years.
ZLUDA exists to get CUDA software up and running on hardware that Nvidia would prefer stayed locked out.
Earlier incarnations targeted Intel GPUs, followed by a period of AMD-backed development aimed squarely at Radeon hardware and ROCm. The current effort is broader, positioning ZLUDA as a multi-vendor CUDA implementation with a strong emphasis on AI workloads.
At its core, ZLUDA acts as a drop-in replacement for CUDA. It intercepts CUDA API calls and redirects them to a different GPU runtime, allowing software written for Nvidia hardware to execute on other GPUs.
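To make the idea concrete, the sketch below shows the general shape of such an interception layer in C. It is not ZLUDA’s actual code (ZLUDA is written in Rust and also translates GPU kernels, which is far more involved); it only illustrates how a shared library exporting CUDA driver API symbols could forward calls to AMD’s HIP runtime. The CUDA entry points (cuInit, cuDeviceGet) and HIP functions (hipInit, hipDeviceGet) are real APIs; the shim itself is a hypothetical minimal example.

```c
// Minimal sketch of a CUDA-to-HIP interception shim, for illustration only.
// Built as a shared library named libcuda.so (nvcuda.dll on Windows), it
// would satisfy an application's dynamic link against the CUDA driver API
// and route each call to the ROCm/HIP runtime instead.

#include <hip/hip_runtime.h>  // requires a ROCm installation

typedef int CUresult;
typedef int CUdevice;

#define CUDA_SUCCESS 0
#define CUDA_ERROR_UNKNOWN 999

// The application calls cuInit(); we forward it to hipInit().
CUresult cuInit(unsigned int flags) {
    return hipInit(flags) == hipSuccess ? CUDA_SUCCESS : CUDA_ERROR_UNKNOWN;
}

// The application asks for a CUDA device handle; we hand back a HIP one
// (both are plain integer ordinals, so the handle types line up here).
CUresult cuDeviceGet(CUdevice *device, int ordinal) {
    return hipDeviceGet(device, ordinal) == hipSuccess ? CUDA_SUCCESS
                                                       : CUDA_ERROR_UNKNOWN;
}
```

A real implementation has to cover hundreds of API entry points, translate compiled GPU kernels (PTX) into something AMD hardware can run, and preserve CUDA’s error and memory semantics, which is where most of ZLUDA’s complexity lives.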
According to Phoronix, the latest milestone is full support for the ROCm 7 series. Until now, ZLUDA had been limited to ROCm 6.x, which has increasingly looked dated as AMD pushed its software stack forward.
With ROCm 7 support in place, ZLUDA can now target modern AMD GPUs across both Microsoft Windows and Linux. That matters if the project wants to be taken seriously outside of niche experiments.
CUDA remains one of the deepest moats in the AI industry, built over nearly two decades of tooling, libraries and developer habits. Breaking that grip has proven far harder than simply matching hardware performance.
There is still no clear data on how well translated workloads perform, and ZLUDA remains a work in progress rather than a production-ready solution. That uncertainty is a primary reason it has not gone mainstream.
Interest in CUDA translation layers is growing, especially as hyperscalers and software vendors look to loosen their dependence on a single GPU supplier. ZLUDA’s ROCm 7 support does not change the balance of power overnight, but it does show the wall is still being chipped at.