Device software glossary

2026-02-20

4 min read time

Applies to Linux and Windows

This section provides brief definitions of software abstractions and programming models that run on AMD GPUs.

AMDGPU assembly

AMDGPU assembly (GFX ISA) is the low-level assembly format for programs running on AMD GPUs, generated by the ROCm compiler toolchain. See AMDGPU assembly for instruction set details.
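
As an illustrative fragment only (exact mnemonics and register usage vary by GFX IP version), a few lines of AMDGPU assembly might look like:

```
; Illustrative GFX ISA fragment -- not tied to a specific GFX IP version.
s_load_dwordx2 s[0:1], s[4:5], 0x0   ; scalar load of a 64-bit kernel argument
s_waitcnt      lgkmcnt(0)            ; wait for the scalar load to complete
v_add_f32      v2, v0, v1            ; per-lane single-precision vector add
s_endpgm                             ; end of kernel program
```

Scalar (`s_`) instructions execute once per wavefront, while vector (`v_`) instructions execute once per SIMD lane.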

AMDGPU intermediate representation

AMDGPU IR is an intermediate representation for GPU code, serving as a virtual instruction set between high-level languages and architecture-specific assembly. See AMDGPU intermediate representation for compilation details.
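
A sketch of what such IR can look like, using LLVM's `amdgpu_kernel` calling convention and address space 1 (global memory); the kernel name and body are purely illustrative:

```
; Illustrative LLVM IR fragment: a kernel taking one global-memory pointer.
define amdgpu_kernel void @set_first(ptr addrspace(1) %out) {
entry:
  store float 1.0, ptr addrspace(1) %out   ; write 1.0 to the first element
  ret void
}
```

The backend lowers IR like this to architecture-specific AMDGPU assembly for the selected GFX IP version.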

Global memory

Global memory is the device-wide memory accessible to all threads, physically implemented as HBM or GDDR. See Memory model for global memory details.

Grid

A grid represents the collection of all work-groups executing a single kernel across the entire GPU. See Grid for grid execution details.

HIP kernel

A HIP kernel is the unit of GPU code that executes in parallel across many threads, distributed across the GPU’s compute units. See Device programming for kernel programming details.
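
A minimal HIP kernel sketch is shown below. It assumes a ROCm installation and compilation with hipcc; the kernel name and launch sizes are illustrative choices, not prescribed values:

```
#include <hip/hip_runtime.h>

// Each thread computes one element; __global__ marks code that runs on the GPU.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // flat global thread index
    if (i < n) c[i] = a[i] + b[i];                  // guard the partial last block
}

// Launch over n elements with 256 threads per block (illustrative choice).
void launch(const float* a, const float* b, float* c, int n) {
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(a, b, c, n);
    hipDeviceSynchronize();  // wait for the kernel to finish
}
```

The same source compiles for the host and the device; only functions marked `__global__` (or `__device__`) execute on the GPU.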

HIP memory hierarchy

The memory hierarchy pairs each level of the thread hierarchy with a corresponding memory scope: per-thread registers, the local data share (LDS) visible to a block, and device-wide global memory in GPU RAM. See Memory model for memory architecture details.

HIP thread hierarchy

The thread hierarchy structures parallel work from individual threads to blocks to grids, mapping onto hardware from SIMD lanes to compute units to the entire GPU. See Hierarchical thread model for complete details.

LLVM target name

The LLVM target name is a string identifier for a specific GFX IP version (for example, gfx90a). It is passed to the HIP compiler toolchain to select the target GPU architecture for code generation. See ROCm compiler reference for details.
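
For example, a HIP source file can be compiled for a particular GFX IP version by passing the target name to hipcc; the file name is illustrative, and gfx90a is one real target among many:

```
# Compile saxpy.hip for the gfx90a target; --offload-arch selects the
# LLVM target name (GFX IP version) used for device code generation.
hipcc --offload-arch=gfx90a saxpy.hip -o saxpy
```

Multiple `--offload-arch` flags can be passed to produce a binary containing device code for several architectures.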

ROCm programming model

The ROCm programming model defines how AMD GPUs execute massively parallel programs using hierarchical work-groups, memory scopes, and barrier synchronization. See Introduction to the HIP programming model for complete details.