# AMD ROCm LLMExt documentation
2026-03-20
AMD ROCm LLMExt (ROCm-LLMExt) is an open-source software toolkit built on the ROCm platform for large language model (LLM) extensions, integrations, and performance enablement on AMD GPUs. The domain brings together training, post-training, inference, and orchestration components to make modern LLM stacks practical and reproducible on AMD hardware.
| LLM Task | Features |
|---|---|
| Training | Stanford Megatron-LM, Megablocks |
| Post-training and alignment | verl |
| Inference and serving | llama.cpp, FlashInfer |
| Distributed execution | Ray |
The ROCm-LLMExt source code is hosted on GitHub at ROCm/ROCm-LLMExt.
> **Note**
> ROCm-LLMExt 25.09 includes targeted updates to one component (llama.cpp) and introduces another (FlashInfer); four components remain unchanged (verl, Stanford Megatron-LM, Megablocks, and Ray).
ROCm-LLMExt documentation is organized into the categories listed above.
To contribute to the documentation, see Contributing to ROCm.
You can find licensing information on the Licensing page.