ROCm™ Software 6.2.1
ROCm Documentation
What is ROCm?
Release notes
Install
    ROCm on Linux
    HIP SDK on Windows
    Deep learning frameworks
    Build ROCm from source
How to
    Using ROCm for AI
        Installation
        Training a model
        Running models from Hugging Face
        Deploying your model
    Using ROCm for HPC
    Fine-tuning LLMs and inference optimization
        Conceptual overview
        Fine-tuning and inference
            Using a single accelerator
            Using multiple accelerators
        Model quantization techniques
        Model acceleration libraries
        LLM inference frameworks
        Optimizing with Composable Kernel
        Optimizing Triton kernels
        Profiling and debugging
    System optimization
        AMD Instinct MI300X
        AMD Instinct MI300A
        AMD Instinct MI200
        AMD Instinct MI100
        AMD RDNA 2
    AMD MI300X performance validation and tuning
        Performance validation
        System tuning
        Workload tuning
    System debugging
    Using MPI
    Using advanced compiler features
        ROCm compiler infrastructure
        Using AddressSanitizer
        OpenMP support
    Setting the number of CUs
    ROCm examples
Compatibility
    Compatibility matrix
    Linux system requirements
    Windows system requirements
    Third-party support
    User and kernel-space support matrix
    Docker image support matrix
    Use ROCm on Radeon GPUs
Conceptual
    GPU architecture overview
        MI300 microarchitecture
            AMD Instinct MI300/CDNA3 ISA
            White paper
            MI300 and MI200 Performance counters
        MI250 microarchitecture
            AMD Instinct MI200/CDNA2 ISA
            White paper
        MI100 microarchitecture
            AMD Instinct MI100/CDNA1 ISA
            White paper
    GPU memory
    File structure (Linux FHS)
    GPU isolation techniques
    Using CMake
    ROCm & PCIe atomics
    Inception v3 with PyTorch
    Inference optimization with MIGraphX
Reference
    ROCm libraries
    ROCm tools, compilers, and runtimes
    Accelerator and GPU hardware specifications
    Precision support
Contribute
    Contribute to ROCm docs
        Documentation structure
        Documentation toolchain
        Build our documentation
    Provide feedback
    ROCm license
Index