Install PyTorch for ROCm#

Preparing Your System#

PyTorch on Ryzen requires the 6.14 OEM kernel.

  1. To install the kernel, please run the following command:

    sudo apt-get install linux-oem-24.04c
    
  2. Once installation is complete, please reboot your system and ensure that you’ve booted into the 6.14 OEM kernel:

    uname -r
    

    Note
    The command should return a 6.14.0-based string.

  3. If the kernel is correct, ensure the system is up to date:

    sudo apt update && sudo apt upgrade -y
    
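The kernel check in step 2 can be scripted. A minimal sketch, assuming the helper name `check_kernel` and the exact messages (both illustrative, not part of the official tooling):

```shell
# Minimal sketch: report whether the running kernel is in the 6.14 OEM series.
check_kernel() {
    case "$1" in
        6.14.*) echo "kernel OK: $1" ;;
        *)      echo "unexpected kernel: $1 (expected 6.14.x)"; return 1 ;;
    esac
}

check_kernel "$(uname -r)" || true   # report, but do not abort the shell
```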

Install PyTorch via PIP#

AMD recommends the PIP install method to create a PyTorch environment when working with ROCm™ for machine learning development.

Check PyTorch.org for the latest PIP install instructions and availability. See Compatibility matrices for support information.

PCIe Atomics

ROCm is an extension of the HSA platform architecture and shares its queuing model, memory model, signaling, and synchronization protocols.

Platform atomics are integral to perform queuing and signaling memory operations, where there may be multiple writers across CPU and GPU agents.

For more details, see How ROCm uses PCIe atomics.

  1. Enter the following command to unpack and begin setup:

    sudo apt install python3-pip -y
    
  2. Enter this command to update the pip wheel:

    pip3 install --upgrade pip wheel
    
  3. Enter the following commands to install torch, torchvision, torchaudio, and pytorch-triton-rocm for AMD ROCm GPU support. This may take several minutes.

    Important: AMD recommends proceeding with the ROCm WHLs available on repo.radeon.com. The ROCm WHLs available from the PyTorch Foundation are not extensively tested by AMD, as those WHLs change regularly when the nightly builds are updated. When manually downloading WHLs from repo.radeon.com, be sure to select the WHLs compatible with your specific Python version. See Compatibility matrices for support information.

    Ubuntu 24.04

    wget https://repo.radeon.com/rocm/manylinux/rocm-rel-6.4.4/torch-2.8.0%2Brocm6.4.4.git36fa4b24-cp312-cp312-linux_x86_64.whl
    wget https://repo.radeon.com/rocm/manylinux/rocm-rel-6.4.4/torchvision-0.23.0%2Brocm6.4.4.git824e8c87-cp312-cp312-linux_x86_64.whl
    wget https://repo.radeon.com/rocm/manylinux/rocm-rel-6.4.4/pytorch_triton_rocm-3.4.0%2Brocm6.4.4.gitf9e5bf54-cp312-cp312-linux_x86_64.whl
    wget https://repo.radeon.com/rocm/manylinux/rocm-rel-6.4.4/torchaudio-2.8.0%2Brocm6.4.4.git6e1c7fe9-cp312-cp312-linux_x86_64.whl
    pip3 uninstall -y torch torchvision torchaudio pytorch-triton-rocm
    pip3 install torch-2.8.0+rocm6.4.4.git36fa4b24-cp312-cp312-linux_x86_64.whl torchvision-0.23.0+rocm6.4.4.git824e8c87-cp312-cp312-linux_x86_64.whl torchaudio-2.8.0+rocm6.4.4.git6e1c7fe9-cp312-cp312-linux_x86_64.whl pytorch_triton_rocm-3.4.0+rocm6.4.4.gitf9e5bf54-cp312-cp312-linux_x86_64.whl
    

    Note: The --break-system-packages flag must be added when installing wheels for Python 3.12 in a non-virtual environment.
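If you prefer not to pass `--break-system-packages`, installing the wheels inside a Python virtual environment is an alternative. A minimal sketch, assuming the environment path `~/rocm-torch` (arbitrary):

```shell
# Create and activate a virtual environment so the ROCm wheels stay out of
# the system Python (avoids --break-system-packages on Python 3.12).
python3 -m venv "$HOME/rocm-torch"
. "$HOME/rocm-torch/bin/activate"
python -c 'import sys; print(sys.prefix)'   # prints the venv path when active
# Then install the downloaded wheels inside the environment, e.g.:
# pip install torch-*.whl torchvision-*.whl torchaudio-*.whl pytorch_triton_rocm-*.whl
```

Note that on Ubuntu this requires the python3-venv package.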

Verify Your PyTorch Installation#

Confirm that PyTorch is correctly installed.

  1. Verify that PyTorch is installed and importable:

    python3 -c 'import torch' 2> /dev/null && echo 'Success' || echo 'Failure'
    

    Expected result:

    Success
    
  2. Enter the following command to check whether the GPU is accessible from PyTorch. In the PyTorch framework, torch.cuda is the generic way to access the GPU; on a ROCm build, it accesses an AMD GPU if one is available.

    python3 -c 'import torch; print(torch.cuda.is_available())'
    

    Expected result:

    True
    
  3. Enter the command to display the installed GPU device name:

    python3 -c "import torch; print(f'device name [0]:', torch.cuda.get_device_name(0))"
    

    Expected result (for example, device name [0]: AMD Radeon Graphics):

    device name [0]: <Supported AMD GPU>
    
  4. Enter the command to display component information within the current PyTorch environment:

    python3 -m torch.utils.collect_env
    

    Expected result:

    PyTorch version
    ROCM used to build PyTorch
    OS
    Is CUDA available
    GPU model and configuration
    HIP runtime version
    MIOpen runtime version
    

The environment setup is complete, and the system is ready to use PyTorch for machine learning models and algorithms.