7.1.1 release#

Linux#

Known issues#

  • Intermittent script failure may be observed while running text-to-image inference workloads with PyTorch.

  • ComfyUI on Strix Halo may experience stability issues with Flux.1 Schnell and SD 3.5 Large Turbo. A full resolution requires changes pending in the next ROCm Ryzen release and a future OEM kernel fix (6.14.0-1017.17).

    • Advanced users can manually install the correct kernel and find the latest ROCm releases with TheRock. A quick kernel-version check is sketched after this list.
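
As a quick check, you can print the running kernel from Python and compare it against the fixed OEM kernel. This is a minimal sketch; the uname release string may be formatted differently from the package version quoted above (6.14.0-1017.17), so compare the version numbers rather than the exact string.

```python
# Print the running kernel release for comparison against the fixed
# OEM kernel (6.14.0-1017.17). Note: the uname string may not use the
# same formatting as the package version, so compare version numbers.
import platform

print("Running kernel:", platform.release())
```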

Windows#

Note
The following Windows known issues and limitations apply to the 7.1.1 release. Only PyTorch is currently available on Windows; the rest of the ROCm stack is supported only on Linux.
AMD is aware of these issues and is actively working to resolve them in future releases.

Note
If you encounter errors related to missing .dll libraries, install the Microsoft Visual C++ 2015-2022 Redistributable.
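
One quick way to confirm the runtime is present is to try loading vcruntime140.dll from Python; this DLL ships with the Visual C++ 2015-2022 Redistributable (a minimal sketch, not an official diagnostic):

```python
# Attempt to load the Visual C++ runtime DLL (Windows only).
# An OSError here usually means the Visual C++ 2015-2022
# Redistributable is not installed.
import ctypes

try:
    ctypes.WinDLL("vcruntime140")
    print("Visual C++ runtime found.")
except OSError:
    print("vcruntime140.dll not found; install the Visual C++ "
          "2015-2022 Redistributable.")
```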

Known issues#

  • If you encounter an error about an Application Control Policy blocking DLL loading, check that Smart App Control is OFF (see the sketch after this list). Note that re-enabling Smart App Control requires reinstalling Windows; a future release will remove this requirement.

  • Intermittent script failure due to out-of-memory error may be observed while running inference workloads with PyTorch on Windows on Ryzen™ AI Max 300 series processors.

  • Intermittent application crash or driver timeout may be observed while running inference workloads with PyTorch on Windows while also running other applications (such as games or web browsers).

  • ComfyUI may fail to launch after installation when Smart App Control is enabled.
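
To check the current Smart App Control state, the sketch below reads it from the registry. The registry path and value meanings (0 = off, 1 = on, 2 = evaluation) are assumptions based on commonly documented Windows behavior, not part of this release note; verify them on your system.

```python
# Read the Smart App Control state from the Windows registry.
# ASSUMPTION: the key path and the value meanings (0 = off, 1 = on,
# 2 = evaluation) follow commonly documented behavior; verify locally.
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\CI\Policy"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    state, _ = winreg.QueryValueEx(key, "VerifiedAndReputablePolicyState")

print("Smart App Control:",
      {0: "off", 1: "on", 2: "evaluation"}.get(state, f"unknown ({state})"))
```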

Limitations#

  • No ML training support.

  • Only Python 3.12 is supported.

  • Only PyTorch is supported, not the entire ROCm stack.

  • The latest version of transformers should be installed via pip (pip install --upgrade transformers). Older versions of transformers (<4.55.5) might not be supported.

  • The torch.distributed module is currently not supported. Some functionality in the diffusers and accelerate libraries may be affected.

  • On Linux, if you encounter an error about convolutions while running generative AI workloads, setting the MIOPEN_DEBUG_CONV_DIRECT_NAIVE_CONV_FWD environment variable to 1 may help. A combined check for this and the items above is sketched after this list.

  • On Windows, only an LLM batch size of 1 is officially supported.
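
Several of the limitations above can be verified up front. The sketch below checks the Python and transformers versions, reports whether torch.distributed is available, and sets the MIOpen workaround variable before importing PyTorch. The 4.55.5 threshold and the environment variable come from the items above; whether MIOpen reads the variable at import time is an assumption, and the variable applies to Linux only.

```python
# Sanity checks for the limitations listed above.
import os
import sys

# Only Python 3.12 is supported.
if sys.version_info[:2] != (3, 12):
    print(f"Warning: Python {sys.version_info[0]}.{sys.version_info[1]} "
          "detected; only Python 3.12 is supported.")

# Linux-only convolution workaround; set before importing torch so
# MIOpen sees it (assumption: the variable is read when MIOpen loads).
os.environ["MIOPEN_DEBUG_CONV_DIRECT_NAIVE_CONV_FWD"] = "1"

import torch
import transformers
from packaging import version  # packaging is a transformers dependency

# transformers releases older than 4.55.5 might not be supported.
if version.parse(transformers.__version__) < version.parse("4.55.5"):
    print(f"Warning: transformers {transformers.__version__} is older "
          "than 4.55.5; run: pip install --upgrade transformers")

# torch.distributed is not supported in this release, so expect False
# here and plan around it in diffusers/accelerate code paths.
print("torch.distributed available:", torch.distributed.is_available())
```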