
Local Model Inference Speed Benchmark

MasakiMu319

Introduction

This test explores the inference speed of locally deployed ONNX models and simulates request handling under high-concurrency production workloads.

Embedding model used: Alibaba-NLP/gte-multilingual-base

Service endpoint tested: an embedding service deployed with TEI.

ONNX Benchmark (CPU)

When running ONNX inference on CPU, the per-request latency (ms) is shown below.

Notes:

  1. To reduce long-tail noise, total request volume for CPU tests is 10,000.
  2. Table values are P99/P75/P50 latencies under 4/8/16/32 CPU-core limits. For example, with 4 cores, P99 is 188.04ms.
| Percentile \ CPU cores | 4      | 8     | 16    | 32    |
|------------------------|--------|-------|-------|-------|
| P99                    | 188.04 | 93.17 | 38.90 | 31.80 |
| P75                    | 106.71 | 84.01 | 29.85 | 25.76 |
| P50                    | 104.33 | 75.29 | 26.38 | 21.30 |
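
For reference, the P99/P75/P50 values above can be computed from raw per-request latencies with a nearest-rank method. A minimal sketch (the Gaussian samples are placeholders, not the real measurements):

```python
# Nearest-rank percentile over recorded per-request latencies.
# The random samples are stand-ins; in the real test the list
# holds 10,000 measured latencies in ms.
import math
import random

random.seed(0)
latencies = sorted(abs(random.gauss(30, 5)) for _ in range(10_000))

def percentile(sorted_samples, p):
    """Nearest-rank percentile: the value at rank ceil(p/100 * n)."""
    k = max(0, math.ceil(p / 100 * len(sorted_samples)) - 1)
    return sorted_samples[k]

p99, p75, p50 = (percentile(latencies, p) for p in (99, 75, 50))
```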

Key conclusions:

  1. Inference gets faster as physical core count increases.
  2. The gap between 16 and 32 cores is small because the test machine has only 16 physical cores: with 32 logical cores, work is still scheduled onto the same 16 physical cores. For compute-bound inference, moving from 16 to 32 logical cores adds no real compute resources, so hyper-threading gains are limited; in some mixed-workload situations, extra logical cores can still improve scheduling slightly.
  3. ONNX-optimized models show excellent tokenization speed: around 200–400 µs per request.
  4. ONNX Runtime also has efficient scheduling: around 400–600 µs dispatch overhead per request.
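The tokenization and dispatch numbers above come from per-stage timing. A minimal client-side harness that produces the same breakdown might look like this (`tokenize` and `infer` here are placeholder stubs, not the real tokenizer or ONNX Runtime session):

```python
# Per-request timing harness: measure tokenization and inference
# separately with a monotonic high-resolution clock.
import time

def tokenize(text):
    return text.split()  # stand-in for the real tokenizer

def infer(tokens):
    time.sleep(0.001)    # stand-in for ONNX Runtime inference
    return [0.0] * len(tokens)

def timed_request(text):
    t0 = time.perf_counter()
    tokens = tokenize(text)
    t1 = time.perf_counter()
    _embedding = infer(tokens)
    t2 = time.perf_counter()
    return {
        "tokenization_ms": (t1 - t0) * 1000,
        "inference_ms": (t2 - t1) * 1000,
        "total_ms": (t2 - t0) * 1000,
    }

stats = timed_request("hello world")
```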

Concurrency Test Data

| Percentile \ concurrency | 2  | 3  | 6   | 10  |
|--------------------------|----|----|-----|-----|
| P99 (ms)                 | 60 | 86 | 158 | 213 |
| P50 (ms)                 | 49 | 73 | 113 | 162 |
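
A concurrency test like this can be driven by a simple thread-pool client. The sketch below uses a stub in place of the HTTP round trip to the TEI endpoint (`request_once` is a stand-in, not the real client call):

```python
# Minimal client-side concurrency harness: N worker threads each
# issue requests and record per-request latency in ms.
import time
from concurrent.futures import ThreadPoolExecutor

def request_once(_):
    t0 = time.perf_counter()
    time.sleep(0.002)  # stand-in for the HTTP round trip
    return (time.perf_counter() - t0) * 1000

def run(concurrency, total_requests):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return sorted(pool.map(request_once, range(total_requests)))

lat = run(concurrency=6, total_requests=60)
p50 = lat[len(lat) // 2]
```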

Under concurrent load, single-request latency increases sharply as concurrency rises. Based on logs, likely causes are:

  1. Scheduling delay grows significantly under concurrency, e.g.: `total_time="95.838661ms" tokenization_time="250.414µs" queue_time="63.300541ms" inference_time="32.212502ms"`
  2. Inference itself remains stable around 20–40 ms, indicating the main bottleneck is scheduling and queueing rather than pure compute.
  3. By queueing theory, as utilization approaches 100%, waiting time can grow exponentially, matching the observed “explosive growth.”

GPU Benchmark

Because CPU inference degrades under high concurrency, we tested GPU inference for these reasons:

  1. GPUs have far more lightweight cores and stronger parallel throughput.
  2. GPUs can process multiple inference tasks concurrently, improving total capacity.
  3. Modern GPUs include deep-learning-specific optimizations that accelerate inference.

For GPU concurrency tests, we increased total request count from 10k to 1 million. The reason: in early tests, GPU concurrency was strong enough that low request volume did not reflect behavior under sustained high load.

Specifically, with only 10k requests, percentile spread was very large. At concurrency 512, P99 reached 483.93ms, while P90 was only 90.36ms. This suggests a small number of high-latency outliers disproportionately affected results.


To obtain more representative results, we scaled to 1 million requests. This better simulates real high-load production, provides more stable statistics, and helps distinguish random outliers from systematic behavior.

1M-Request Test

| Percentile \ concurrency | 128 | 256 | 384 | 512 |
|--------------------------|-----|-----|-----|-----|
| P99 (ms)                 | 44  | 61  | 79  | 96  |
| P95 (ms)                 | 38  | 52  | 70  | 85  |
| P90 (ms)                 | 36  | 49  | 65  | 79  |
| P75 (ms)                 | 33  | 44  | 59  | 71  |
| P50 (ms)                 | 30  | 40  | 52  | 64  |

After scaling to 1 million requests, metrics became much more stable. As concurrency increased from 128 to 512, percentile latencies rose roughly linearly. Importantly, P99 and P50 remained relatively close (e.g., 96ms vs 64ms at 512), indicating good consistency under load.

Compared with the 10k test, latency distribution became more even. At concurrency 512, the P99–P90 gap narrowed substantially (96ms vs 79ms), likely due to better GPU utilization and more effective batching at scale.
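
Server-side batching is one plausible mechanism for this evening-out: requests arriving close together get grouped into a single GPU pass. The sketch below is a generic dynamic batcher (collect up to `max_batch` items or wait at most `max_wait_s`); it is an illustration of the idea, not TEI's actual scheduler:

```python
# Toy dynamic batcher: a consumer thread drains a request queue,
# flushing a batch when it is full or when the wait deadline passes.
import queue
import threading
import time

def batcher(in_q, out, max_batch=8, max_wait_s=0.01):
    while True:
        item = in_q.get()
        if item is None:          # shutdown sentinel
            return
        batch = [item]
        deadline = time.monotonic() + max_wait_s
        while len(batch) < max_batch:
            timeout = deadline - time.monotonic()
            if timeout <= 0:
                break
            try:
                nxt = in_q.get(timeout=timeout)
            except queue.Empty:
                break
            if nxt is None:       # flush what we have, then stop
                out.append(batch)
                return
            batch.append(nxt)
        out.append(batch)

in_q, batches = queue.Queue(), []
t = threading.Thread(target=batcher, args=(in_q, batches))
t.start()
for i in range(20):
    in_q.put(i)
in_q.put(None)
t.join()
```

Larger concurrency fills batches faster, so per-request queue time stops being dominated by a few unlucky stragglers.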

Although latency still grows with concurrency, growth is relatively controlled, suggesting the system may not yet be saturated at concurrency 512. That leaves room for further concurrency scaling.

Summary

Based on these tests:

  1. CPU inference scales with physical core count, but gains flatten once logical cores exceed the machine's 16 physical cores.
  2. Under concurrent CPU load, queueing rather than compute dominates latency: inference stays around 20–40 ms while queue time grows explosively as utilization approaches 100%.
  3. GPU inference handles high concurrency far better: with 1 million requests, latency grows roughly linearly up to concurrency 512 with a tight percentile spread, and the system does not yet appear saturated.
