Contents

  • Multi-GPU configuration
  • Windows Subsystem for Linux (WSL)
    • WSL recommended settings
    • ROCm support in WSL environments
    • Running PyTorch in virtual environments
  • 6.2.3 release known issues
    • WSL-specific issues

Limitations and recommended settings

This section provides information on software and configuration limitations.

Important!
Radeon™ PRO Series graphics cards are neither designed nor recommended for datacenter usage. Use in a datacenter setting may adversely affect manageability, efficiency, reliability, and/or performance. GD-239.

Important!
ROCm is not officially supported on any mobile SKUs.

Multi-GPU configuration

See mGPU known issues and limitations.

Windows Subsystem for Linux (WSL)

The following sections cover recommended settings and known limitations for WSL.

WSL recommended settings

Optimizing GPU utilization
WSL overhead is a known bottleneck for GPU utilization. Increasing the batch size of operations loads the GPU more fully and reduces the time required for AI workloads. Optimal batch sizes vary by model and macro-parameters.
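
As a minimal sketch of where batch size enters a typical PyTorch input pipeline (the random dataset, model-free loop, and batch size of 64 below are illustrative assumptions, not tuned recommendations):

import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset: 1,000 random image-like samples.
data = torch.randn(1_000, 3, 64, 64)
labels = torch.randint(0, 10, (1_000,))
dataset = TensorDataset(data, labels)

# Larger batches amortize WSL's per-dispatch overhead across more GPU work;
# 64 is an arbitrary starting point to tune against the model and memory budget.
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=2)

device = torch.device("cuda")  # ROCm builds of PyTorch expose the GPU as "cuda"
for batch, target in loader:
    batch = batch.to(device, non_blocking=True)
    # ... forward/backward pass for the model under test ...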

ROCm support in WSL environments

Due to WSL architectural limitations around the native Linux User Kernel Interface (UKI), rocm-smi is not supported.

Issue: UKI does not currently support rocm-smi

Limitation: No current support for:
  • Active compute processes
  • GPU utilization
  • Modifiable state features

Running PyTorch in virtual environments

Running PyTorch in a virtual environment requires a manual update of libhsa-runtime64.so.

In the WSL use case, the hsa-runtime-rocr4wsl-amdgpu package (installed with the PyTorch wheels) must be replaced with a WSL-compatible runtime library.

Solution:

Enter the following commands:

# Locate the installed torch package
location=`pip show torch | grep Location | awk -F ": " '{print $2}'`
cd ${location}/torch/lib/
# Remove the runtime library shipped with the wheel
rm libhsa-runtime64.so*
# Replace it with the WSL-compatible runtime from the system ROCm install
cp /opt/rocm/lib/libhsa-runtime64.so.1.2 libhsa-runtime64.so
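
After replacing the library, a quick sanity check (a minimal sketch, assuming a ROCm build of PyTorch in the active virtual environment) is to confirm the GPU is visible:

import torch

# Should report the Radeon device if the runtime swap succeeded.
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
else:
    print("GPU not visible to PyTorch")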

6.2.3 release known issues

  • The ONNX Runtime execution provider (EP) will fall back to CPU with the Llama2-7B model

  • Performance drop seen when running separate TensorFlow workloads across multi-GPU configurations

  • Performance drop observed with RetinaNet when using MIGraphX

  • Hang observed with Ollama or llama.cpp when loading Llama3-70BQ4 on the W7900

WSL-specific issues

  • Some long-running rocSPARSE kernels may trigger a timeout detection and recovery (TDR) event.
