📖 A curated list of awesome LLM inference papers with code: TensorRT-LLM, vLLM, streaming-llm, AWQ, SmoothQuant, WINT8/4, Continuous Batching, FlashAttention, PagedAttention, etc.
Updated Sep 9, 2024
A fast, lightweight, parallel inference server for Llama LLMs.