Limitations and recommended settings#

This section provides information on software and configuration limitations.

Note
For ROCm on Instinct known issues, refer to the AMD ROCm documentation.

For OpenMPI limitations, see ROCm UCX OpenMPI on GitHub.

6.3.4 release known issues#

  • Intermittent script failure may be observed while running Stable Diffusion training workloads using TensorFlow

  • Intermittent script failure may be observed while running Triton examples

  • Increased memory consumption may be observed while running TensorFlow ResNet50 training workloads

  • Performance drop may be observed while running ONNX Runtime scripts with INT8 precision

  • Script hang may be observed while running RetinaNet training workloads with batch size 32 using TensorFlow

  • Performance drop may be observed while running BERT training workloads across multi-GPU configurations

  • A black image may be generated while running Stable Diffusion 2.1 FP16 workloads using PyTorch

WSL specific issues#

  • Intermittent application crash or driver timeout may be observed while using ComfyUI with WSL2 on some AMD Graphics Products, such as the Radeon™ RX 7900 Series

  • Intermittent script hang may be observed while running RetinaNet training workloads using TensorFlow

  • Intermittent application freeze may be observed when running ChatGLM workloads with ONNX Runtime and MIGraphX using FP16 or FP32 precision

  • Intermittent application crash or driver timeout may be observed while running Blender Cycles rendering alongside PyTorch Inception V3 training scripts

  • Intermittent build error may be observed when running ROCm/HIP workloads using CMake. As a temporary workaround, users experiencing this issue should replace the native Linux library filename (for example, libhsa-runtime64.so.1.14.60304) in /opt/rocm/lib/cmake/hsa-runtime64/hsa-runtime64Targets-relwithdebinfo.cmake with the WSL library filename libhsa-runtime64.so.1.14.0, as sketched after this list.
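
A scripted form of this temporary workaround might look like the following Python sketch (run with sufficient privileges). The path and versioned filenames are the examples from the note above and may differ between ROCm releases; check /opt/rocm/lib for the names actually present on your system.

from pathlib import Path

# Path and filenames taken from the workaround above; adjust the version
# suffixes to match the libraries actually installed under /opt/rocm/lib.
cmake_file = Path("/opt/rocm/lib/cmake/hsa-runtime64/hsa-runtime64Targets-relwithdebinfo.cmake")
text = cmake_file.read_text()
# Point CMake at the WSL runtime library instead of the native Linux one
cmake_file.write_text(text.replace("libhsa-runtime64.so.1.14.60304", "libhsa-runtime64.so.1.14.0"))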

Important!
Radeon™ PRO Series graphics cards are not designed nor recommended for datacenter usage. Use in a datacenter setting may adversely affect manageability, efficiency, reliability, and/or performance. GD-239.

Important!
ROCm is not officially supported on any mobile SKUs.

Multi-GPU configuration#

See mGPU known issues and limitations.

Windows Subsystem for Linux (WSL)#

This section covers WSL recommended settings and limitations.

WSL recommended settings#

Optimizing GPU utilization
WSL overhead is a noted bottleneck for GPU utilization. Increasing the batch size of operations loads the GPU more fully, reducing the time required for AI workloads. Optimal batch sizes vary by model and macro-parameters.
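
As an illustration, in PyTorch the batch size is a single argument on the DataLoader. The sketch below uses a dummy dataset (the shapes and values are placeholders, not from this guide) to show the one knob to raise:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset standing in for a real training set; shapes are placeholders
dataset = TensorDataset(torch.randn(1024, 3, 64, 64), torch.randint(0, 10, (1024,)))
# A larger batch_size issues fewer, larger GPU dispatches per epoch, which
# amortizes per-dispatch WSL overhead; tune it per model and available VRAM.
loader = DataLoader(dataset, batch_size=256, shuffle=True, num_workers=4)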

ROCm support in WSL environments#

Due to WSL architectural limitations for the native Linux User Kernel Interface (UKI), rocm-smi is not supported.

Issue: UKI does not currently support rocm-smi

Limitations: No current support for:

  • Active compute processes
  • GPU utilization
  • Modifiable state features

Running PyTorch in virtual environments#

Running PyTorch in virtual environments requires a manual update of libhsa-runtime64.so.

When using the WSL use case with the hsa-runtime-rocr4wsl-amdgpu package (installed with PyTorch wheels), users must update to a WSL-compatible runtime library.

Solution:

Enter the following commands:

# Locate the installed torch package
location=$(pip show torch | grep Location | awk -F ": " '{print $2}')
cd "${location}/torch/lib/"
# Remove the native Linux HSA runtime bundled with the PyTorch wheel
rm libhsa-runtime64.so*
# Replace it with the WSL-compatible runtime shipped with ROCm
cp /opt/rocm/lib/libhsa-runtime64.so.1.2 libhsa-runtime64.so
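
After replacing the library, a quick check (a generic PyTorch verification, not specific to this guide) confirms that the GPU is still visible from the virtual environment:

import torch

# ROCm builds of PyTorch expose the GPU through the torch.cuda API surface
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))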
