A high-throughput and memory-efficient inference and serving engine for LLMs