vLLM
High-throughput LLM inference engine with PagedAttention for efficient GPU memory usage
Open Source
vLLM is an open-source inference and serving engine for large language models, originally developed at UC Berkeley. Its core technique, PagedAttention, manages the GPU KV cache in fixed-size blocks, much like virtual-memory paging, which in the original vLLM benchmarks delivered up to 24x higher serving throughput than Hugging Face Transformers. It supports most popular open-weight models, including Llama, Mixtral, and DeepSeek, as well as multimodal models such as LLaVA. vLLM ships both a fast offline inference engine and a production-ready, OpenAI-compatible API server, making it a popular choice for self-hosted LLM deployments.
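A minimal sketch of offline batched inference with vLLM's Python API, assuming vLLM is installed (`pip install vllm`) and a CUDA-capable GPU is available; the model name below is an illustrative choice, not a recommendation:

```python
from vllm import LLM, SamplingParams

# Load a model; vLLM allocates the KV cache in paged blocks
# (PagedAttention) automatically, no configuration needed.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

sampling_params = SamplingParams(temperature=0.8, max_tokens=128)

# Batched generation: vLLM schedules all prompts together,
# which is where its throughput advantage comes from.
prompts = [
    "Explain PagedAttention in one sentence.",
    "What is continuous batching?",
]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```

To expose the same model over the OpenAI-compatible HTTP API instead, the bundled server can be started with `vllm serve <model-name>` (or `python -m vllm.entrypoints.openai.api_server` in older releases).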
Pricing: Free