vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions from 0.8.0 up to, but not including, 0.8.5 are affected by a performance vulnerability in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio_1|>, <|image_1|>) with repeated tokens based on precomputed lengths. Because the expanded sequence is built through repeated list concatenation, the algorithm exhibits quadratic time complexity (O(n²)), allowing a malicious actor to trigger resource exhaustion (a denial of service) via specially crafted inputs. This issue has been patched in version 0.8.5.
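To make the quadratic pattern concrete, the sketch below contrasts the vulnerable list-concatenation idiom with a linear alternative. This is a minimal illustration of the pattern described above, not vLLM's actual preprocessing code; all names (PLACEHOLDER_IDS, REPEAT_LEN, expand_quadratic, expand_linear) are hypothetical.

```python
# Illustrative sketch only -- NOT vLLM's actual code. It reproduces the
# general pattern: each placeholder token is replaced by a precomputed
# number of repeated tokens.

PLACEHOLDER_IDS = {1001, 1002}           # stand-ins for <|audio_1|>, <|image_1|>
REPEAT_LEN = {1001: 500, 1002: 500}      # precomputed expansion lengths


def expand_quadratic(token_ids: list[int]) -> list[int]:
    """Vulnerable pattern: `out = out + piece` copies the entire
    accumulated list on every iteration, so total work grows as
    O(n^2) in the output length."""
    out: list[int] = []
    for tok in token_ids:
        if tok in PLACEHOLDER_IDS:
            out = out + [tok] * REPEAT_LEN[tok]  # full copy each time
        else:
            out = out + [tok]
    return out


def expand_linear(token_ids: list[int]) -> list[int]:
    """Fixed-style pattern: in-place extend/append amortizes to O(n)."""
    out: list[int] = []
    for tok in token_ids:
        if tok in PLACEHOLDER_IDS:
            out.extend([tok] * REPEAT_LEN[tok])
        else:
            out.append(tok)
    return out


if __name__ == "__main__":
    import timeit

    # An input packed with placeholders makes the copy cost dominate:
    # each additional placeholder re-copies everything built so far.
    crafted = [1001] * 500
    print("quadratic:", timeit.timeit(lambda: expand_quadratic(crafted), number=1))
    print("linear:   ", timeit.timeit(lambda: expand_linear(crafted), number=1))
```

Run with a placeholder-heavy input like `crafted` above, the quadratic version's runtime grows much faster than the output size, which is the resource-exhaustion lever the advisory describes; the 0.8.5 patch removes the repeated-copy behavior from the expansion step.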
CVE ID: CVE-2025-46560
CVSS Base Severity: MEDIUM
CVSS Base Score: 6.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
Vendor: vllm-project
Product: vllm
EPSS Score: 0.05% (estimated probability of exploitation activity in the next 30 days)
EPSS Percentile: 14.14% (proportion of vulnerabilities scored at or below this one)
EPSS Date: 2025-05-29 (date the score was calculated)
SSVC Exploitation: poc
SSVC Technical Impact: partial
SSVC Automatable: false