Ryzen limitations and recommended settings
This section provides information on software and configuration limitations.
6.4.4 release
Note
ROCm 6.4.4 is a preview release, meaning that stability and performance are not yet optimized. Furthermore, only PyTorch is currently available on Windows; the rest of the ROCm stack is supported only on Linux.
AMD is aware of these limitations and is actively working to resolve them in future releases.
Linux
Known issues
Intermittent script failures may be observed when suspending the system while Stable Diffusion inference workloads are running on AMD Ryzen™ AI Max 300 series processors.
Intermittent script failures may be observed when running text-to-image inference workloads with PyTorch.
Windows
Note
If you encounter errors related to missing .dll libraries, install the Microsoft Visual C++ 2015-2022 Redistributable.
Known issues
If you encounter an error about an Application Control Policy blocking DLL loading, check that Smart App Control is turned off. Note that re-enabling Smart App Control after turning it off requires reinstalling Windows; a future release will remove this requirement.
Intermittent script failures due to out-of-memory errors may be observed when running inference workloads with PyTorch on Windows on Ryzen™ AI Max 300 series processors.
Intermittent application crashes or driver timeouts may be observed when running inference workloads with PyTorch on Windows while other applications (such as games or web browsers) are running.
ComfyUI may fail to launch after installation if Smart App Control is enabled.
Limitations
No support for the backward pass, which is essential for ML training; only inference is supported (see the inference sketch after this list).
Only Python 3.12 is supported.
Only PyTorch is supported, not the entire ROCm stack.
Install the latest version of transformers via pip; versions of transformers older than 4.55.5 might not be supported.
The torch.distributed module is currently not supported, which may affect some functionality in the diffusers and accelerate libraries (the sanity-check sketch after this list shows how to verify both of these conditions).
On Linux, if you encounter an error regarding convolutions while running generative AI workloads, setting the MIOPEN_DEBUG_CONV_DIRECT_NAIVE_CONV_FWD environment variable to 1 may help (see the workaround sketch after this list).
On Windows, only an LLM batch size of 1 is officially supported.
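The sketch below illustrates these constraints together: a single-prompt (batch size 1) text-generation run wrapped in torch.inference_mode(), with no backward pass. The model name is a placeholder rather than a validated configuration, and the "cuda" device string assumes the usual PyTorch-on-ROCm convention of exposing the HIP device under that name.

```python
# Minimal inference-only sketch. The model name below is a placeholder;
# substitute any causal LM available in your environment.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder, not a validated configuration

tokenizer = AutoTokenizer.from_pretrained(model_name)
# PyTorch on ROCm exposes the HIP device through the "cuda" device string.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

# Batch size 1 only: encode a single prompt, not a list of prompts.
inputs = tokenizer("Hello, ROCm!", return_tensors="pt").to(device)

# Inference only: no autograd graph is built, and .backward() is never called.
with torch.inference_mode():
    output_ids = model.generate(**inputs, max_new_tokens=32)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```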
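A small sanity check for the transformers version and torch.distributed limitations above can be written directly in Python; the 4.55.5 threshold is the one stated in this list.

```python
# Sanity-check sketch for the transformers and torch.distributed limitations.
import torch
import transformers
from packaging.version import Version  # packaging is installed as a transformers dependency

print("transformers version:", transformers.__version__)
if Version(transformers.__version__) < Version("4.55.5"):
    print("Warning: transformers versions older than 4.55.5 might not be supported.")

# torch.distributed is not supported in this release, so diffusers and
# accelerate code paths that depend on it should be avoided.
print("torch.distributed available:", torch.distributed.is_available())
```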
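On Linux, the convolution workaround can be applied in the shell before launching the script, or from Python before the workload first initializes MIOpen, as sketched below.

```python
# Workaround sketch: set the MIOpen variable before any convolution runs,
# ideally before importing torch, so MIOpen sees it at initialization.
import os
os.environ["MIOPEN_DEBUG_CONV_DIRECT_NAIVE_CONV_FWD"] = "1"

import torch  # imported after the variable is set

# ... run the generative AI workload as usual ...
```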