Fine-tuning and inference

Applies to Linux

Fine-tuning with ROCm uses AMD's GPU-accelerated libraries and tools to train and optimize deep learning models. ROCm provides a comprehensive ecosystem for deep learning development, including open-source libraries for optimized deep learning operations and ROCm-aware versions of deep learning frameworks such as PyTorch, TensorFlow, and JAX.
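Before fine-tuning, it is worth confirming that your framework build actually sees the accelerator. The following is a minimal sketch assuming a ROCm-enabled PyTorch build, which exposes AMD GPUs through the familiar torch.cuda API:

```python
# Minimal sketch: verify that a ROCm-enabled PyTorch build can see the accelerator.
# On ROCm, PyTorch exposes AMD GPUs through the same torch.cuda API used elsewhere.
import torch

print(torch.__version__)           # ROCm wheels typically report a "+rocm" version suffix
print(torch.version.hip)           # not None on a ROCm build
print(torch.cuda.is_available())   # True if an AMD accelerator is visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```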

A single-accelerator system is a machine equipped with one accelerator or GPU. Such systems are commonly used for smaller-scale deep learning tasks, including fine-tuning pre-trained models and running inference on moderately sized datasets. See Fine-tuning and inference using a single accelerator.
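For illustration, here is a minimal sketch of a single-accelerator fine-tuning loop in PyTorch; the toy model and random batch are hypothetical stand-ins for a real pre-trained checkpoint and dataset:

```python
# Minimal single-accelerator fine-tuning sketch (toy model and random data
# stand in for a real pre-trained checkpoint and dataset).
import torch
import torch.nn as nn

device = torch.device("cuda:0")  # on ROCm, AMD GPUs are addressed via the cuda device type

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(32, 128, device=device)       # hypothetical batch of features
labels = torch.randint(0, 2, (32,), device=device)  # hypothetical labels

for step in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()
```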

Multi-accelerator systems, on the other hand, consist of multiple accelerators working in parallel. These systems are typically used for LLMs and other large-scale deep learning tasks where performance, scalability, and the ability to handle massive datasets are crucial. See Fine-tuning and inference using multiple accelerators.
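As one common multi-accelerator pattern, the sketch below uses PyTorch's DistributedDataParallel with one process per GPU; on ROCm, the "nccl" backend name maps to AMD's RCCL library. The toy model and the script name in the launch command are assumptions for the example:

```python
# Minimal multi-accelerator data-parallel sketch. Launch with, e.g.:
#   torchrun --nproc_per_node=<num_gpus> ddp_sketch.py
# (script name is hypothetical). On ROCm, the "nccl" backend maps to RCCL.
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    dist.init_process_group(backend="nccl")     # RCCL on ROCm
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    model = nn.Linear(128, 2).to(local_rank)    # toy stand-in for a real model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    # Each rank trains on its own shard of data; DDP averages gradients
    # across accelerators during backward().
    inputs = torch.randn(32, 128, device=local_rank)
    labels = torch.randint(0, 2, (32,), device=local_rank)
    for _ in range(10):
        optimizer.zero_grad()
        loss_fn(model(inputs), labels).backward()
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```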