# Radeon Usecases

Refer to the applicable guides to optimize Radeon-specific usecase performance.

## Usecases

- LLM
  - vLLM Docker image for Llama2 and Llama3
    - Prerequisites
    - Download and install Docker image
  - GEMM tuning for model inferencing with vLLM
    - Collect GEMM shape details
    - Conduct GEMM tuning
    - Run vLLM inference with tuned GEMM
  - Install Huggingface transformers
    - Install Huggingface transformers
    - LLM inference
    - Model support matrix
- ComfyUI
  - Install ComfyUI and MIGraphX extension
    - Prerequisites
    - Installation
    - Installing the MIGraphX node for ComfyUI
    - Using the MIGraphX node for ComfyUI