Install ONNX Runtime for Radeon GPUs#
Refer to this section to install ONNX Runtime via the PIP installation method.
Overview#
Ensure that the following prerequisite installations are successful before proceeding to install ONNX Runtime for use with ROCm™ on Radeon™ GPUs.
Prerequisites#
Radeon Software for Linux (with ROCm) is installed.
MIGraphX is installed. This enables ONNX Runtime to build the correct MIGraphX Execution Provider (EP).
The half library is installed. See Verify if MIGraphX is installed with the half library.
If the prerequisite installations are successful, proceed to install ONNX Runtime.
NOTE Unless adding custom features, use the pre-built Python wheel files provided in the PIP installation method.
Verify if MIGraphX is installed with the half library#
$ dpkg -l | grep migraphx
$ dpkg -l | grep half
Expected result:
root@aus-navi3x-02:/workspace/AMDMIGraphX# dpkg -l | grep migraphx
ii migraphx 2.9.0 amd64 AMD's graph optimizer
ii migraphx-dev 2.9.0 amd64 AMD's graph optimizer
ii migraphx-tests 2.9.0 amd64 AMD's graph opt
$ dpkg -l | grep half
ii half 1.12.0.60000-91~20.04 amd64 HALF-PRECISION FLOATING POINT LIBRARY
NOTE Versions may vary between ROCm builds and installed versions of MIGraphX, but the desired result is the same.
NOTE The half library should come packaged with MIGraphX. If it is not installed, install it with the following command:
$ sudo apt install half
Install ONNX Runtime#
Important!
Use the provided pre-built Python wheel files from the PIP installation method, unless adding custom features.
The wheel file contains the MIGraphX and ROCm Execution Providers (EP). Refer to Install MIGraphX for ONNX RT for more information.
Refer to ONNX Runtime Documentation for additional information on ONNX Runtime topics.
See ONNX Runtime Tutorials to try out real applications and tutorials on how to get started.
Option A: ONNX Runtime install via PIP installation method (Recommended)
AMD recommends the PIP install method to create an ONNX Runtime environment when working with ROCm for machine learning development.
Note
The latest version of the numpy Python module (v2.0) is incompatible with the ONNX Runtime wheels for this release. Downgrading to an older version is required.
Example: pip3 install numpy==1.26.4
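If unsure which numpy version is active, the short check below (an illustrative snippet, not part of the official steps; the file name check_numpy.py is hypothetical) confirms a 1.x release before ONNX Runtime is imported:
# check_numpy.py -- illustrative guard: verify numpy 1.x before using the ROCm wheel
import numpy
if int(numpy.__version__.split(".")[0]) >= 2:
    raise RuntimeError("numpy " + numpy.__version__ + " detected; run: pip3 install numpy==1.26.4")
print("numpy", numpy.__version__, "is compatible")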
To install via PIP:
Enter this command to download and install the ONNX Runtime wheel.
pip3 uninstall onnxruntime-rocm numpy
pip3 install https://repo.radeon.com/rocm/manylinux/rocm-rel-6.1.3/onnxruntime_rocm-1.17.0-cp310-cp310-linux_x86_64.whl numpy==1.26.4
Option B: Build from source for your environment, followed by local wheel file installation (Advanced)
Use this method for advanced customization use cases. It requires that the user install the desired ROCm and MIGraphX versions and create a softlink before starting the build.
NOTE The build time typically takes ~45 minutes.
Prerequisites to build ONNX Runtime from source
Radeon Software for Linux (with ROCm) is installed
MIGraphX is installed
A softlink is created
To create a softlink for /opt/rocm, enter the following command:
ln -s /opt/rocm* /opt/rocm
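Optionally, the snippet below (an illustrative check; the versioned path shown in the comment is an assumption that depends on the installed ROCm release) confirms that the softlink resolves:
# verify_rocm_link.py -- illustrative: print where /opt/rocm points
import os
# Expect a versioned directory such as /opt/rocm-6.1.3 (version depends on your install)
print(os.path.realpath("/opt/rocm"))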
To build from source:
Clone the ONNX Runtime repository into the root directory.
cd /
git clone https://github.com/microsoft/onnxruntime.git
Git clone AMDMIGraphX into the home folder.
cd ~
git clone https://github.com/ROCm/AMDMIGraphX.git
Create a docker image for MIGraphX.
Note
Refer to AMDMIGraphX Github for up-to-date ONNX Runtime and MIGraphX dependencies.
MIGraphX can also be built from source or installed from apt.
For MIGraphX package builds via RBuild, refer to these MIGraphX Github instructions to build within the docker container environment.
For MIGraphX package builds via CMake, refer to these MIGraphX Github instructions to build within the docker container environment.
Use the groups command to ensure that the user is part of the video, render, and docker groups in Linux to run the docker container.
Expected result:
tthemist@aus-navi3x-02:~$ groups
tthemist sudo video render docker
Run the following for a simple MIGraphX apt install:
cd AMDMIGraphX
docker build -t migraphx .
docker run --device='/dev/kfd' --device='/dev/dri' -v=`pwd`:/code/AMDMIGraphX -v /onnxruntime:/onnxruntime -w /code/AMDMIGraphX --group-add video -it migraphx
apt install migraphx migraphx-dev half
Run rocm-smi to ensure that ROCm is installed and detects the supported GPU(s).
$ rocm-smi
Expected result:
======================================= ROCm System Management Interface =======================================
================================================= Concise Info =================================================
Device  [Model : Revision]    Temp    Power  Partitions      SCLK     MCLK   Fan     Perf  PwrCap  VRAM%  GPU%
        Name (20 chars)       (Edge)  (Avg)  (Mem, Compute)
================================================================================================================
0       [0x0e0d : 0x00]       32.0°C  73.0W  N/A, N/A        1526Mhz  96Mhz  31.76%  auto  241.0W  0%     50%
        0x7448
================================================================================================================
============================================= End of ROCm SMI Log ==============================================
Configure Git to treat all directories as safe to use and run the build script.
cd AMDMIGraphX
git config --global --add safe.directory "*"
tools/build_and_test_onnxrt.sh
This builds ONNX Runtime with ROCm and MIGraphX EP support added to the ONNX Runtime interface; multiple external repositories are checked out automatically prior to the build.
Install ONNX Runtime once the build is complete.
$ pip3 install /onnxruntime/build/Linux/Release/dist/*.whl
Verify ONNX Runtime installation#
Confirm if ONNX Runtime is correctly installed.
$ python3
>>> import onnxruntime as ort
>>> ort.get_available_providers()
Expected result: The following EPs are displayed.
>>> ort.get_available_providers()
['MIGraphXExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider']
Environment setup is complete, and the system is ready to use ONNX Runtime with machine learning models and algorithms.
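As a quick next step, the sketch below shows how a session can target the MIGraphX EP with automatic fallback. It is illustrative only: model.onnx, the input shape, and the file name run_model.py are placeholders for your own model.
# run_model.py -- illustrative inference sketch; model path and input shape are placeholders
import numpy as np
import onnxruntime as ort

# Providers are tried in order; operators unsupported by an earlier EP fall back to later ones.
providers = ["MIGraphXExecutionProvider", "ROCMExecutionProvider", "CPUExecutionProvider"]
session = ort.InferenceSession("model.onnx", providers=providers)

input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input shape

outputs = session.run(None, {input_name: dummy})
print("providers in use:", session.get_providers())
print("output shape:", outputs[0].shape)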