Installing rocSHMEM#

This topic describes how to install rocSHMEM.

Requirements#

  • ROCm 6.4.0 or later, including the HIP runtime. For more information, see ROCm installation for Linux. (A quick version check is shown after this list.)

  • The following AMD GPUs have been fully tested for compatibility with rocSHMEM:

    • MI250X

    • MI300X

    • MI350X (Requires ROCm 7.0 or later)

    Note

    Other AMD GPUs might function with unknown limitations. For the complete list of supported hardware, see ROCm System Requirements.

  • ROCm-aware Open MPI and UCX. For more information, see Building dependencies.

  • Inter-node communication requires MPI and is tested with Open MPI and CX7 InfiniBand NICs.
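
To quickly confirm the base ROCm requirement, you can query the installed version and the detected GPUs. This is a minimal check; the version file path assumes a standard installation under /opt/rocm:

# Print the installed ROCm version (path assumes a standard /opt/rocm install)
cat /opt/rocm/.info/version
# List detected AMD GPUs and their gfx architectures
rocminfo | grep -E "Marketing Name|gfx"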

Available network backends#

rocSHMEM supports two network backends:

  • The IPC (Inter-Process Communication) backend enables fast communication between GPUs on the same host using ROCm inter-process mechanisms. It does not support inter-node communication.

  • The RO (Reverse Offload) backend enables communication between GPUs on different nodes through a NIC, using a host-based proxy to forward communication requests to and from the GPU. In this release, RO is the only inter-node communication backend and is built on an MPI-RMA compatibility layer.

You can activate both the IPC and RO backends in the same rocSHMEM build. In this case, IPC handles intra-node communication, while RO handles inter-node communication.

Note

When RO is active, all atomic operations use the RO backend, even for intra-node communication.

Installing from a package manager#

On Ubuntu, you can install rocSHMEM by running:

sudo apt install rocshmem-dev
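
To confirm that the package installed and to see where its files are located, you can query dpkg. This assumes the package name matches the install command above:

# List the files installed by the rocshmem-dev package
dpkg -L rocshmem-dev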

Note

This installation method requires ROCm 6.4 or later. You must manually build dependencies such as Open MPI and UCX, because the distribution-packaged versions don’t include full accelerator support. For more information, see Building dependencies.

Building dependencies#

rocSHMEM requires ROCm-aware Open MPI and UCX. Other MPI implementations, such as MPICH, have not been fully tested.

To build and install ROCm-aware UCX 1.17.0 or later, run:

git clone https://github.com/ROCm/ucx.git -b v1.17.x
cd ucx
./autogen.sh
./configure --prefix=<prefix_dir> --with-rocm=<rocm_path> --enable-mt
make -j 8
make -j 8 install
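
After installing UCX, you can check that it detects ROCm devices. The ucx_info tool ships with UCX; the exact transport names in the output (such as rocm_cpy or rocm_ipc) vary by UCX version:

# List UCX transports and filter for ROCm support
<prefix_dir>/bin/ucx_info -d | grep -i rocm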

To build Open MPI 5.0.7 or later with UCX support, run:

git clone --recursive https://github.com/open-mpi/ompi.git -b v5.0.x
cd ompi
./autogen.pl
./configure --prefix=<prefix_dir> --with-rocm=<rocm_path> --with-ucx=<ucx_path>
make -j 8
make -j 8 install
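
You can verify that Open MPI was built with UCX and ROCm support using ompi_info. The component names shown here are typical for Open MPI 5.x but can vary by version and configuration:

# Check for the UCX PML and the ROCm accelerator component
<prefix_dir>/bin/ompi_info | grep -E "ucx|accelerator"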

Alternatively, you can use a script to install dependencies:

export BUILD_DIR=/path/to/not_rocshmem_src_or_build/dependencies
/path/to/rocshmem_src/scripts/install_dependencies.sh

Note

Configuration options vary by platform. Review the script to ensure it is compatible with your system.
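
Whichever method you use, make the resulting Open MPI and UCX installations visible to the rocSHMEM build steps below. A minimal sketch, assuming the script placed them in ompi and ucx subdirectories of $BUILD_DIR (the subdirectory names are assumptions; check the script for the actual layout):

# Put the MPI compiler wrappers and runtime libraries on the search paths
export PATH=$BUILD_DIR/ompi/bin:$PATH
export LD_LIBRARY_PATH=$BUILD_DIR/ompi/lib:$BUILD_DIR/ucx/lib:$LD_LIBRARY_PATH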

For more information about Open MPI and UCX support, see GPU-enabled Message Passing Interface.

Installing from source#

You can select between two communication backends at build time for rocSHMEM: RO and IPC. The default configuration enables both backends, using IPC for intra-node communication and RO for inter-node communication at runtime. In this configuration, rocSHMEM atomic operations always use the RO backend.

rocSHMEM also supports the IPC-only configuration, which allows atomic operations to use the IPC backend only.

RO and IPC backend build#

To build and install rocSHMEM with the hybrid RO (off-node) and IPC (on-node) backends, run:

git clone [email protected]:ROCm/rocSHMEM.git
cd rocSHMEM
mkdir build
cd build
../scripts/build_configs/ro_ipc

The build script passes configuration options to CMake to set up a canonical build.

Note

The only officially supported configuration for the RO backend uses Open MPI and UCX with a CX7 InfiniBand adapter. For more information, see Building dependencies. Other configurations, such as MPI implementations that are thread-safe and support GPU buffers, might work but are considered experimental.
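
Once built and installed, rocSHMEM programs launch as ordinary MPI jobs, typically with one process per GPU. A minimal sketch, assuming a hostfile listing your nodes and a hypothetical executable named rocshmem_app:

# Launch 16 processes across two 8-GPU nodes (adjust counts to your cluster)
mpirun -np 16 --hostfile hostfile ./rocshmem_app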

IPC-only backend build#

To build and install rocSHMEM with the IPC on-node, GPU-to-GPU backend, run:

git clone [email protected]:ROCm/rocSHMEM.git
cd rocSHMEM
mkdir build
cd build
../scripts/build_configs/ipc_single

The build script passes configuration options to CMake to set up a single-node build. This is similar to the default build in ROCm 6.4.
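
With the IPC-only build, all processes must run on a single host. A minimal launch sketch, again with a hypothetical executable named rocshmem_app:

# Launch one process per GPU on the local host (adjust -np to your GPU count)
mpirun -np 4 ./rocshmem_app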

Note

The default configuration changed from IPC-only in ROCm 6.4 (built with the ipc_single script) to RO and IPC in ROCm 7.0 (built with the ro_ipc script). Other experimental configuration scripts are available in ./scripts/build_configs, but only ipc_single and ro_ipc are officially supported.

Installation prefix#

By default, the build scripts install the library to ~/rocshmem. You can customize the installation path by passing the desired path as an argument to the script. For example, to relocate the default configuration:

../scripts/build_configs/ro_ipc /path/to/install
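
To compile a program against a relocated installation, point the compiler and linker at that prefix. A minimal sketch, assuming headers in include/ and a library named librocshmem in lib/ under the prefix (these names are assumptions; check your install's layout and any additional MPI link requirements):

export ROCSHMEM_DIR=/path/to/install
# Hypothetical compile line; the library name and layout are assumptions
hipcc -I$ROCSHMEM_DIR/include my_app.cpp -L$ROCSHMEM_DIR/lib -lrocshmem -o my_app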