ScheInfer: Efficient Inference of Large Language Models with Task Scheduling on Moderate GPUs

Wenxiang Lin, Xinglin Pan, Shaohuai Shi*, Xuan Wang, Xiaowen Chu

*Corresponding author for this work

Research output: Chapter in Book/Conference Proceeding/Report › Conference Paper published in a book › peer-review

Abstract

Large language models (LLMs) are known for their high demand on computing resources and memory due to their substantial model size, which leads to inefficient inference on moderate GPU systems. Techniques like quantization or pruning can shrink model sizes but often impair accuracy, making them unsuitable for practical applications. In this work, we introduce ScheInfer, a high-performance inference engine designed to speed up LLM inference without compromising model accuracy. ScheInfer incorporates three innovative methods to increase inference efficiency: 1) model partitioning to allow asynchronous processing of tasks across CPU computation, GPU computation, and CPU-GPU communication, 2) an adaptive partition algorithm to optimize the use of CPU, GPU, and PCIe communication capabilities, and 3) a token assignment strategy to handle diverse prompt and generation tasks during LLM inference. Comprehensive experiments were conducted with various LLMs such as Mixtral, LLaMA-2, Qwen, and PhiMoE across three test environments featuring different CPUs and GPUs. The experimental findings demonstrate that ScheInfer achieves speedups of 1.11×–1.80× in the generation phase and 1.69×–6.33× in the prompt phase, leading to an overall speedup of 1.25×–2.04× compared to state-of-the-art solutions, llama.cpp and Fiddler.
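The first method in the abstract partitions the model so that CPU computation, GPU computation, and CPU-GPU communication proceed asynchronously on different tokens at once. The sketch below is an illustrative pipeline in plain Python threads, not the authors' implementation: the stage functions (`cpu_part`, `transfer`, `gpu_part`) are hypothetical stand-ins for the CPU-side layer partition, the PCIe activation transfer, and the GPU-side partition. With one thread per stage, stage k can process token i+1 while stage k+1 processes token i, which is the overlap ScheInfer schedules.

```python
import queue
import threading

_SENTINEL = object()  # marks the end of the token stream


def _stage(fn, inq, outq):
    # Each stage pulls an item, applies its work function, and pushes
    # the result downstream; the sentinel is forwarded to shut down.
    while True:
        item = inq.get()
        if item is _SENTINEL:
            outq.put(_SENTINEL)
            return
        outq.put(fn(item))


def pipeline(tokens, stages):
    """Run `tokens` through `stages` (a list of callables), one thread
    per stage, so different tokens occupy different stages at once."""
    queues = [queue.Queue() for _ in range(len(stages) + 1)]
    threads = [
        threading.Thread(target=_stage, args=(fn, qi, qo))
        for fn, qi, qo in zip(stages, queues, queues[1:])
    ]
    for t in threads:
        t.start()
    for tok in tokens:
        queues[0].put(tok)
    queues[0].put(_SENTINEL)
    results = []
    while True:
        item = queues[-1].get()
        if item is _SENTINEL:
            break
        results.append(item)
    for t in threads:
        t.join()
    return results


if __name__ == "__main__":
    # Hypothetical per-token work; a real engine would run attention/FFN
    # kernels and PCIe transfers in these slots.
    cpu_part = lambda x: x + 1  # CPU-side partition of a layer
    transfer = lambda x: x      # CPU-to-GPU activation hand-off
    gpu_part = lambda x: x * 2  # GPU-side partition of a layer
    print(pipeline(range(4), [cpu_part, transfer, gpu_part]))
    # -> [2, 4, 6, 8]
```

Because each queue is FIFO and each stage is single-threaded, token order is preserved while the three kinds of work overlap in time; the paper's adaptive partition algorithm would additionally decide how much of each layer lands on the CPU versus the GPU side.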

Original language: English
Title of host publication: Euro-Par 2025
Subtitle of host publication: Parallel Processing - 31st European Conference on Parallel and Distributed Processing, 2025, Proceedings
Editors: Wolfgang E. Nagel, Diana Goehringer, Pedro C. Diniz
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 327-340
Number of pages: 14
ISBN (Electronic): 9783031998720
ISBN (Print): 9783031998713
DOIs
Publication status: Published - 22 Aug 2025
Externally published: Yes
Event: 31st International Conference on Parallel and Distributed Computing, Euro-Par 2025 - Dresden, Germany
Duration: 25 Apr 2025 – 29 Apr 2025

Publication series

Name: Lecture Notes in Computer Science
Volume: 15902 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 31st International Conference on Parallel and Distributed Computing, Euro-Par 2025
Country/Territory: Germany
City: Dresden
Period: 25/04/25 – 29/04/25

Bibliographical note

Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2026.

Keywords

  • Efficient Inference
  • Large Language Models
  • Model Partitioning
  • Scheduling
