Install MIGraphX for Radeon GPUs#
MIGraphX is AMD’s graph inference engine for accelerating machine learning model inference. It can be used to accelerate workloads within two backend frameworks:
Torch-MIGraphX, which integrates MIGraphX with PyTorch
MIGraphX for ONNX Runtime backend, which integrates MIGraphX with ONNX Runtime
ONNX Runtime can be driven by either the ROCm™ Execution Provider (EP) or the MIGraphX EP
Introduction to MIGraphX#
MIGraphX is a graph optimizer that accelerates inference for deep learning models. It provides C++ and Python APIs that are integrated within frameworks like Torch-MIGraphX, ONNX Runtime, and other user solutions. The following summarizes the procedures that occur under the hood during the optimization and runtime compilation process.
MIGraphX accelerates machine learning models by applying several graph-level transformations and optimizations, including:
Operator fusion
Arithmetic simplifications
Dead-code elimination
Common subexpression elimination (CSE)
Constant propagation
After these optimizations are applied, MIGraphX emits code for the AMD GPU by calling MIOpen or rocBLAS, or by generating HIP kernels for a particular operator. MIGraphX can also target CPUs using the DNNL or ZenDNN libraries.
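The same flow is exposed through the MIGraphX Python API. The following is a minimal sketch that parses an ONNX model, compiles it for the GPU target, and runs inference; the model path, input name, and input shape are placeholders that depend on your own model.
import numpy as np
import migraphx

# Parse the ONNX model into a MIGraphX program (the path is a placeholder)
program = migraphx.parse_onnx("resnet50.onnx")

# Apply the graph-level optimizations and compile for the AMD GPU target
program.compile(migraphx.get_target("gpu"))

# Run inference; this assumes the first parameter is the model input
input_name = program.get_parameter_names()[0]
data = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = program.run({input_name: migraphx.argument(data)})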
For more information on how to install MIGraphX, refer to the AMD MIGraphX GitHub repository.
Prerequisites#
Radeon™ Software for Linux (with ROCm) is installed
Install MIGraphX#
Install MIGraphX on your computer. Once the installation is complete and verified, proceed to install Torch-MIGraphX or MIGraphX for ONNX Runtime.
Run the following command to install MIGraphX:
$ sudo apt install migraphx
Next, verify the MIGraphX installation.
Verify MIGraphX installation#
Perform a test to verify the MIGraphX installation.
migraphx-driver perf --test
Expected result:
The test completes with no errors.
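In addition to the built-in test, migraphx-driver perf can compile and benchmark a specific model file; for example, with a hypothetical ONNX model:
migraphx-driver perf resnet50.onnx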
Next, proceed to install Torch-MIGraphX or MIGraphX for ONNX Runtime as applicable.
Install Torch-MIGraphX#
Install Torch-MIGraphX using the Docker installation method, or build from source.
Using Docker provides portability and access to a prebuilt Docker container that has been rigorously tested within AMD. Docker also cuts down compilation time and should perform as expected without installation issues.
Clone the torch_migraphx repository.
git clone https://github.com/ROCmSoftwarePlatform/torch_migraphx.git
Change directory into the cloned repository.
cd torch_migraphx/
Build image using the provided script.
sudo ./build_image.sh
Run container.
sudo docker run -it --network=host --device=/dev/kfd --device=/dev/dri --group-add=video --ipc=host --cap-add=SYS_PTRACE --security-opt seccomp=unconfined torch_migraphx
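In this command, --device=/dev/kfd and --device=/dev/dri expose the ROCm compute and graphics devices to the container, and --group-add=video grants the container permission to use them.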
To build from source in a custom environment, refer to the torch_migraphx repository for build steps.
Verify Torch-MIGraphX installation#
Verify that the Torch-MIGraphX installation is successful.
Check that torch_migraphx can be imported as a Python module.
python3 -c 'import torch_migraphx' 2> /dev/null && echo 'Success' || echo 'Failure'
Run unit tests.
pytest ./torch_migraphx/tests
Installation is complete, and the system is able to run PyTorch through the Python interface library, as well as scripts that invoke PyTorch inference sessions.
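As a quick end-to-end check, the following minimal sketch compiles a small model through the MIGraphX backend; it assumes that importing torch_migraphx registers a "migraphx" backend for torch.compile, as described in the torch_migraphx repository, and that a ROCm-enabled PyTorch build is installed.
import torch
import torch_migraphx  # registers the "migraphx" backend for torch.compile

# A toy model; on ROCm, PyTorch exposes the GPU through the "cuda" device string
model = torch.nn.Linear(64, 10).to("cuda").eval()
x = torch.randn(8, 64, device="cuda")

# Lower the model through MIGraphX and run one inference pass
mgx_model = torch.compile(model, backend="migraphx")
with torch.no_grad():
    print(mgx_model(x).shape)  # expected: torch.Size([8, 10])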
Install MIGraphX for ONNX Runtime#
Install onnxruntime-rocm for ROCm by following these steps.
Prerequisites#
MIGraphX is installed
Latest PyTorch ROCm release is installed
See Install PyTorch for ROCm for the latest pip install instructions and availability.
Important! These instructions are validated for an Ubuntu 22.04 environment. Refer to the OS support matrix for more information.
To install MIGraphX for ONNX Runtime:
Verify that the base MIGraphX installation works correctly by performing a simple test.
migraphx-driver perf --test
Ensure that the half package is installed.
$ sudo apt install half
Install onnxruntime-rocm via pip.
$ pip3 install onnxruntime-rocm -f https://repo.radeon.com/rocm/manylinux/rocm-rel-6.0/
Verify MIGraphX installation for ONNX Runtime#
Verify that the installation works correctly by checking the execution providers available to ONNX Runtime.
$ python3
import onnxruntime as ort
ort.get_available_providers()
Expected result:
>>> import onnxruntime as ort
>>> ort.get_available_providers()
['MIGraphXExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider']
This indicates that the MIGraphXExecutionProvider and ROCMExecutionProvider are now available on the system, and that the proper ONNX Runtime package has been installed.
Installation is complete, and ONNX Runtime is available through the Python interface library, as well as through scripts that invoke ONNX Runtime inference sessions.
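For example, a simple inference session that prefers the MIGraphX EP and falls back to the CPU can be sketched as follows; the model path and input shape are placeholders for your own model.
import numpy as np
import onnxruntime as ort

# Prefer the MIGraphX EP; fall back to the CPU EP if it is unavailable
session = ort.InferenceSession(
    "resnet50.onnx",  # placeholder path
    providers=["MIGraphXExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
data = np.random.rand(1, 3, 224, 224).astype(np.float32)  # shape depends on the model
outputs = session.run(None, {input_name: data})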
For more information on the ONNX Runtime Python library, refer to Get started with ONNX Runtime in Python.