7.2 release

Linux

Known issues

  • Failures or instability may be observed while running SD3.5XL or FLUX inference workloads on configurations with lower system memory (e.g., 32 GB). Users experiencing this issue should try adding the --lowvram and --disable-pinned-memory parameters to the run command, as shown in the sketch below.
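
A minimal launch sketch, assuming ComfyUI is started from its checkout directory via the standard main.py entry point (the entry point and working directory are assumptions; the two flags come from the note above):

```shell
# Launch ComfyUI with reduced VRAM usage and pinned host memory disabled.
# Run from the ComfyUI checkout directory (path assumed).
python main.py --lowvram --disable-pinned-memory
```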

Windows

Note
The following Windows known issues and limitations apply to the 7.2 release. Only PyTorch is currently available on Windows; the rest of the ROCm stack is supported only on Linux.
AMD is aware of these issues and is actively working to resolve them in future releases.

Note
If you encounter errors related to missing .dll libraries, install the Visual C++ 2015-2022 Redistributable (an install sketch follows this note).
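
A hedged install sketch; the winget package ID and download URL are the commonly published Microsoft sources rather than anything taken from these release notes, so verify them for your environment:

```shell
# Option 1: install via winget (package ID assumed from the public winget repository).
winget install Microsoft.VCRedist.2015+.x64

# Option 2: download the x64 installer from Microsoft and run it silently.
curl -L -o vc_redist.x64.exe https://aka.ms/vs/17/release/vc_redist.x64.exe
vc_redist.x64.exe /install /quiet /norestart
```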

Known issues

  • Disable the following Windows security features, as they can interfere with ROCm functionality (a command-line sketch for Application Guard follows this list):

    • Turn off WDAG (Windows Defender Application Guard)

      • Control Panel > Programs > Programs and Features > Turn Windows features on or off > Clear “Microsoft Defender Application Guard”

    • Turn off SAC (Smart App Control)

      • Settings > Privacy & security > Windows Security > App & browser control > Smart App Control settings > Off
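
For the Application Guard step, an elevated-PowerShell equivalent of the Control Panel path above is sketched below; the cmdlet and feature name are standard Windows tooling, not something documented in these release notes. Smart App Control has no supported command-line switch and should be turned off from the Settings page listed above.

```shell
# Run from an elevated PowerShell prompt; removes the Application Guard feature.
# A reboot is typically required for the change to take effect.
Disable-WindowsOptionalFeature -Online -FeatureName Windows-Defender-ApplicationGuard
```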

Limitations

  • No ML training support.

  • Only Python 3.12 is supported.

  • Only PyTorch is supported, not the entire ROCm stack.

  • The latest version of transformers should be installed via pip. Some older versions of transformers (earlier than 4.55.5) might not be supported. See the install sketch after this list.

  • The torch.distributed module is currently not supported. Some functions from the diffusers and accelerate libraries may be affected as a result.

  • For ComfyUI, adding the --lowvram and --disable-pinned-memory parameters may help on lower-memory configurations (see the launch sketch under the Linux known issues above).

  • On Linux, if you encounter an error regarding convolutions while running generative AI workloads, setting the environment variable MIOPEN_DEBUG_CONV_DIRECT_NAIVE_CONV_FWD to 1 may help; a sketch follows this list.

  • On Windows, only a batch size of 1 is officially supported for LLM workloads.
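
For the transformers item above, a minimal install sketch; the version floor mirrors the 4.55.5 threshold mentioned in this list:

```shell
# Upgrade transformers to the latest release, with a floor matching the note above.
pip install --upgrade "transformers>=4.55.5"
```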
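
For the convolution workaround above, a minimal sketch for Linux; the variable name comes from the note, while the workload command is a placeholder:

```shell
# Enable MIOpen's naive direct forward-convolution path for this shell session
# (behavior inferred from the variable name).
export MIOPEN_DEBUG_CONV_DIRECT_NAIVE_CONV_FWD=1
python your_workload.py  # placeholder for the actual run command
```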