Abstract
Large language models (LLMs) place heavy demands on computing resources and memory due to their substantial size, which makes inference inefficient on moderate GPU systems. Techniques like quantization or pruning can shrink model size but often impair accuracy, limiting their practical use. In this work, we introduce ScheInfer, a high-performance inference engine designed to speed up LLM inference without compromising model accuracy. ScheInfer incorporates three methods to increase inference efficiency: 1) model partitioning that allows asynchronous processing of tasks across CPU computation, GPU computation, and CPU-GPU communication; 2) an adaptive partition algorithm that optimizes the use of CPU, GPU, and PCIe communication capabilities; and 3) a token assignment strategy that handles the distinct prompt and generation phases of LLM inference. Comprehensive experiments were conducted with various LLMs, including Mixtral, LLaMA-2, Qwen, and PhiMoE, across three test environments featuring different CPUs and GPUs. The results show that ScheInfer achieves speedups of 1.11× to 1.80× in the generation phase and 1.69× to 6.33× in the prompt phase, for an overall speedup of 1.25× to 2.04× over the state-of-the-art solutions llama.cpp and Fiddler.
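The core idea sketched in the abstract, partitioning a model between host and device so that CPU computation, GPU computation, and CPU-GPU transfer can proceed asynchronously, can be illustrated with a minimal toy example. This is not the authors' implementation; the function names, the partition rule, and the thread-pool scheduling below are illustrative assumptions only.

```python
# Toy sketch of asynchronous CPU/GPU task scheduling for a partitioned
# model layer. "Experts" assigned to the GPU partition incur a simulated
# transfer; experts on the CPU partition run directly. A thread pool
# stands in for the engine's asynchronous task scheduler, letting the
# three kinds of work overlap instead of running sequentially.
from concurrent.futures import ThreadPoolExecutor


def cpu_expert(x):
    # Expert weights kept in host memory, computed on the CPU.
    return x * 2


def gpu_expert(x):
    # Expert resident on the GPU (simulated here by plain arithmetic).
    return x + 10


def transfer(x):
    # Simulated PCIe copy of activations to the device; a real engine
    # would overlap this copy with ongoing CPU and GPU computation.
    return x


def run_layer(tokens, on_gpu):
    """Dispatch each token to its assigned partition and overlap the
    resulting CPU work, GPU work, and transfers via a thread pool."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = []
        for t in tokens:
            if on_gpu(t):
                futures.append(pool.submit(lambda v=t: gpu_expert(transfer(v))))
            else:
                futures.append(pool.submit(cpu_expert, t))
        # Results are gathered in submission order, so the output
        # ordering matches the input token ordering.
        return [f.result() for f in futures]


# Hypothetical partition rule: even token values go to the GPU
# partition, odd token values stay on the CPU.
out = run_layer([1, 2, 3, 4], on_gpu=lambda t: t % 2 == 0)
```

In a real engine the partition rule would come from the adaptive partition algorithm, balancing CPU throughput, GPU throughput, and PCIe bandwidth rather than a fixed predicate as assumed here.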
| Original language | English |
|---|---|
| Title of host publication | Euro-Par 2025 |
| Subtitle of host publication | Parallel Processing - 31st European Conference on Parallel and Distributed Processing, 2025, Proceedings |
| Editors | Wolfgang E. Nagel, Diana Goehringer, Pedro C. Diniz |
| Publisher | Springer Science and Business Media Deutschland GmbH |
| Pages | 327-340 |
| Number of pages | 14 |
| ISBN (Electronic) | 9783031998720 |
| ISBN (Print) | 9783031998713 |
| DOIs | |
| Publication status | Published - 22 Aug 2025 |
| Externally published | Yes |
| Event | 31st International Conference on Parallel and Distributed Computing, Euro-Par 2025, Dresden, Germany. Duration: 25 Apr 2025 → 29 Apr 2025 |
Publication series
| Name | Lecture Notes in Computer Science |
|---|---|
| Volume | 15902 LNCS |
| ISSN (Print) | 0302-9743 |
| ISSN (Electronic) | 1611-3349 |
Conference
| Conference | 31st International Conference on Parallel and Distributed Computing, Euro-Par 2025 |
|---|---|
| Country/Territory | Germany |
| City | Dresden |
| Period | 25/04/25 → 29/04/25 |
Bibliographical note
Publisher Copyright: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2026.
Keywords
- Efficient Inference
- Large Language Models
- Model Partitioning
- Scheduling
Title
ScheInfer: Efficient Inference of Large Language Models with Task Scheduling on Moderate GPUs