Fine-tuning LLMs and inference optimization

Applies to Linux
ROCm empowers the fine-tuning and optimization of large language models, making them accessible and efficient for specialized tasks. ROCm supports the broader AI ecosystem to ensure seamless integration with open frameworks, models, and tools.

For more information, see What is ROCm?

This guide discusses the goals and challenges of fine-tuning a large language model like Llama 2, then introduces common methods of optimizing fine-tuning using techniques like LoRA with libraries like PEFT. The sections that follow provide practical guides on libraries and tools to accelerate your fine-tuning.
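To give a sense of why LoRA is effective before the detailed guides, here is a minimal sketch of its parameter arithmetic. LoRA freezes the original weight matrix and learns only a low-rank update, so the trainable parameter count drops sharply. The dimensions below are illustrative assumptions (a 4096×4096 projection is typical of Llama 2 7B attention layers); the rank `r` and the helper function are hypothetical, not part of any library API.

```python
# LoRA idea: instead of updating a full weight W (d x k), learn a
# low-rank update delta_W = B @ A, where B is (d x r) and A is (r x k)
# with r << min(d, k). Only A and B are trained; W stays frozen.

def lora_trainable_params(d: int, k: int, r: int) -> tuple[int, int]:
    """Return (full fine-tune params, LoRA params) for one weight matrix."""
    full = d * k        # every entry of W is trainable
    lora = r * (d + k)  # only the entries of A and B are trainable
    return full, lora

# Illustrative example: a 4096 x 4096 projection with rank r = 8.
full, lora = lora_trainable_params(4096, 4096, r=8)
print(f"full: {full:,}  lora: {lora:,}  ratio: {lora / full:.2%}")
```

For this single matrix, LoRA trains well under 1% of the parameters a full fine-tune would update, which is what makes fine-tuning large models feasible on a single accelerator.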