torchtune
PyTorch-native library for fine-tuning LLMs on consumer and enterprise GPUs
torchtune is a PyTorch-native fine-tuning library by Meta. It supports full fine-tuning, LoRA, and QLoRA with memory-efficient training that runs on consumer GPUs with as little as 24 GB of VRAM. It covers the Llama, Mistral, Gemma, Phi, and Qwen model families, and includes recipes for SFT, DPO, and knowledge distillation with built-in evaluation.
Pricing: Free
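torchtune drives training through YAML recipe configs paired with a CLI. As a minimal sketch (the component paths, field names, and defaults below are assumptions that vary by torchtune version and model, so treat names like `torchtune.models.llama3_1.lora_llama3_1_8b` as illustrative), a LoRA recipe config looks roughly like:

```yaml
# Illustrative torchtune LoRA recipe config (field names are a sketch,
# not guaranteed to match your installed torchtune version).
model:
  _component_: torchtune.models.llama3_1.lora_llama3_1_8b
  lora_attn_modules: ['q_proj', 'v_proj']  # which attention projections get adapters
  lora_rank: 8                             # low-rank adapter dimension
  lora_alpha: 16                           # LoRA scaling factor
optimizer:
  _component_: torch.optim.AdamW
  lr: 3e-4
batch_size: 2
epochs: 1
```

A run is typically launched with the `tune` CLI, e.g. downloading weights via `tune download` and then starting a recipe with `tune run lora_finetune_single_device --config <config>.yaml` (exact recipe and config names depend on the version installed; `tune ls` lists what is available).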
torchtune Alternatives
Explore 21 products in the Fine-tuning category.
Hugging Face
The open-source AI platform with 500K+ models, inference endpoints, and fine-tuning tools
fal
Build the next generation of creativity with fal. Lightning-fast inference.
OpenAI
API access to GPT, o-series reasoning, DALL-E, and Whisper models
Amazon Bedrock
Managed API access to foundation models on AWS with built-in fine-tuning and agent tooling