[2024/12] 🔥 DashInfer: Announcing the release of v2.0, now with enhanced GPU (CUDA) support! This version adds prefix caching (with GPU & CPU swapping), guided decoding, optimized attention for GQA, a lockless reactor engine, and newly added support for VLM models (e.g., Qwen-VL) and MoE models. For more details, please refer to the release notes.
[2024/06] DashInfer: v1.0 release with x86 & ARMv9 CPU and CPU flash attention support.
Built around a C++ runtime, DashInfer aims to deliver production-level implementations highly optimized for various hardware architectures, including CUDA, x86, and ARMv9.
DashInfer is a highly optimized LLM inference engine with the following core features:
Lightweight Architecture: DashInfer requires minimal third-party dependencies and statically links almost all of its dependency libraries. With both C++ and Python interfaces, DashInfer can be easily integrated into your existing system.
High Precision: DashInfer has been rigorously tested for accuracy and delivers inference results consistent with PyTorch and other GPU engines (e.g., vLLM).
High Performance: DashInfer employs optimized kernels to provide high-performance LLM serving, along with many standard LLM inference techniques, including:
Continuous Batching: DashInfer allows for the immediate insertion of new requests and supports streaming outputs.
Paged Attention: Our self-developed paged attention technique (which we call SpanAttention), combined with int8 and uint4 KV cache quantization and built on highly efficient GEMM and GEMV implementations, achieves efficient acceleration of the attention operator.
Prefix Cache: DashInfer supports a highly efficient prefix cache for prompts, using both GPU and CPU memory, which accelerates standard LLMs and multimodal LMs (MMLMs) like Qwen-VL.
Quantization Support: Using DashInfer's InstantQuant (IQ), weight-only quantization acceleration can be achieved without fine-tuning, improving deployment efficiency. Accuracy evaluations show that IQ has almost no impact on model accuracy; for details, see the weight and activation quantization documentation.
Asynchronous Interface: Request-based asynchronous interfaces offer individual control over the generation parameters and status of each request.
Supported Models:
Mainstream Open-Source LLMs: DashInfer supports mainstream open-source LLMs including Qwen, LLaMA, ChatGLM, etc., and supports loading models in the Hugging Face format.
MultiModal LMs: DashInfer supports MultiModal Language Models (MMLMs) including Qwen-VL, Qwen-AL, and Qwen2-VL.
OpenAI API Server: DashInfer can be easily deployed with FastChat to provide an OpenAI-compatible API server.
Multi-Programming-Language API: Both C++ and Python interfaces are provided. The C++ interface can be extended to Java, Rust, and other programming languages via standard cross-language bindings; a rough sketch of Python-side usage follows below.
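As a rough illustration of the request-based, asynchronous usage style described above, here is a minimal Python sketch. All names in it (dashinfer_engine, Engine, GenerationConfig, start_request, wait) are hypothetical placeholders rather than DashInfer's actual API; see the examples directory and documentation for the real interface.

```python
# Minimal sketch of a request-based asynchronous generation loop.
# NOTE: every name below (dashinfer_engine, Engine, GenerationConfig,
# start_request, wait) is a HYPOTHETICAL placeholder for illustration only;
# consult <path_to_dashinfer>/examples for the real Python API.
from dashinfer_engine import Engine, GenerationConfig  # hypothetical module

engine = Engine(model="Qwen2-7B-Instruct", device="cuda:0")

# Each request carries its own generation parameters and is tracked individually.
prompts = [
    "Explain paged attention in one sentence.",
    "Write a haiku about GPUs.",
]
handles = [
    engine.start_request(p, GenerationConfig(max_new_tokens=128, top_p=0.9))
    for p in prompts
]

# New requests join the running batch immediately (continuous batching);
# each handle can be polled, streamed, or awaited independently.
for h in handles:
    print(h.wait())
```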
DashInfer provides a variety of quantization techniques for LLM weights, such as int8 and int4 weight-only quantization and int8 activation quantization, along with many customized fused kernels to deliver the best performance on the target device.
To put it simply, models fine-tuned with GPTQ will provide better accuracy, while our InstantQuant (IQ) technique, which does not require fine-tuning, offers a faster deployment experience. Detailed explanations of IQ quantization can be found at the end of this article.
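For intuition about what weight-only quantization does, below is a minimal NumPy sketch of symmetric per-channel int8 quantization, the general technique behind approaches such as IQ. It illustrates only the math; it is not DashInfer's actual quantization format or kernels.

```python
import numpy as np

# Toy FP32 weight matrix: [out_features, in_features]
w = np.random.randn(4, 8).astype(np.float32)

# Symmetric per-channel (one scale per output row) int8 quantization:
# each row's largest absolute value is mapped to 127.
scale = np.abs(w).max(axis=1, keepdims=True) / 127.0            # shape [4, 1]
w_int8 = np.clip(np.round(w / scale), -128, 127).astype(np.int8)

# At inference time the int8 weights are dequantized on the fly
# (or consumed directly by an int8 GEMM) using the stored scales.
w_dequant = w_int8.astype(np.float32) * scale
print("max abs quantization error:", np.abs(w - w_dequant).max())
```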
In terms of supported quantization algorithms, DashInfer supports models fine-tuned with GPTQ as well as dynamic quantization using the IQ technique, in two ways:
The quantization strategies introduced here can be broadly divided into two categories:
In terms of quantization granularity, there are two types:
For the detailed user manual, please refer to the documentation: Documentation Link.
Examples for the C++ and Python interfaces can be found in <path_to_dashinfer>/examples; please refer to the documentation in <path_to_dashinfer>/documents/EN to run them.
VLM Support: the multimodal folder contains a toolkit for Vision Language Model (VLM) inference built on the DashInfer engine. It is compatible with the OpenAI Chat Completion API and supports text and image/video inputs.
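Because the server exposes an OpenAI-compatible API, any standard OpenAI client can talk to it. The snippet below is a sketch using the official openai Python client; the base URL, API key, model name, and image URL are placeholders that depend on how you deploy the server.

```python
from openai import OpenAI

# Point the standard OpenAI client at a locally deployed, OpenAI-compatible
# server; the URL, API key, and model name below are placeholders.
client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="qwen-vl",  # placeholder; use the model name your server registers
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```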
We have conducted several benchmarks to compare the performance of mainstream LLM inference engines.
We compared the performance of Qwen-VL with vLLM across various model sizes:
Benchmarks were conducted on a single A100-80G for the 2B and 7B sizes, and on four A100-80G GPUs for the 72B model. For more details, please refer to the benchmark documentation.
We evaluated the performance of the prefix cache at different cache hit rates:
The chart above shows the reduction in TTFT (Time To First Token) at varying prefix cache hit rates in DashInfer. Since cached prefix tokens skip prefill computation, TTFT drops roughly in proportion to the hit rate.
Test Setup:
We compared guided output (in JSON format) across different engines using the same request with a customized JSON schema (context length: 45, generated length: 63):
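For readers unfamiliar with guided decoding: the engine constrains token sampling so that the generated text is guaranteed to parse as JSON and conform to a user-supplied schema. The snippet below shows an illustrative schema (not the one used in the benchmark above) and checks a sample output against it with the jsonschema package.

```python
import jsonschema  # pip install jsonschema

# Illustrative schema of the kind passed to a guided-decoding engine.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer", "minimum": 0},
        "skills": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["name", "age"],
}

# With guided decoding, the engine only ever emits outputs that conform,
# such as this one.
generated = {"name": "Alice", "age": 30, "skills": ["CUDA", "C++"]}
jsonschema.validate(instance=generated, schema=schema)  # raises if non-conforming
print("output conforms to the schema")
```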
The high-performance implementation of the DashInfer MoE operator is introduced in the paper below, and DashInfer employs the efficient top-k operator RadiK. If you find them useful, please feel free to cite these papers:
@misc{dashinfermoe2025,
title = {Static Batching of Irregular Workloads on GPUs: Framework and Application to Efficient MoE Model Inference},
author = {Yinghan Li and Yifei Li and Jiejing Zhang and Bujiao Chen and Xiaotong Chen and Lian Duan and Yejun Jin and Zheng Li and Xuanyu Liu and Haoyu Wang and Wente Wang and Yajie Wang and Jiacheng Yang and Peiyang Zhang and Laiwen Zheng and Wenyuan Yu},
year = {2025},
eprint = {2501.16103},
archivePrefix = {arXiv},
primaryClass = {cs.DC},
url = {https://arxiv.org/abs/2501.16103}
}
@inproceedings{radik2024,
title = {RadiK: Scalable and Optimized GPU-Parallel Radix Top-K Selection},
author = {Li, Yifei and Zhou, Bole and Zhang, Jiejing and Wei, Xuechao and Li, Yinghan and Chen, Yingda},
booktitle = {Proceedings of the 38th ACM International Conference on Supercomputing},
year = {2024}
}
The DashInfer source code is licensed under the Apache 2.0 license, and you can find the full text of the license in the root of the repository.