vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, the vLLM backend serving the /v1/chat/completions OpenAPI endpoint fails to validate unexpected or malformed input in the "pattern" and "type" fields when the tools functionality is invoked. Because these inputs are not validated before being compiled or parsed, a single request can crash the inference worker, and the worker remains down until it is restarted. Version 0.9.0 fixes the issue.
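The sketch below illustrates the general failure class in Python, assuming the crash stems from compiling or parsing attacker-supplied "pattern" and "type" values from a tool's JSON-schema parameters without a guard. The helper names (validate_parameters_schema, SchemaValidationError, ALLOWED_TYPES) and the payload shape are illustrative, not vLLM's actual code path; the point is that pre-validating these fields turns a worker-killing exception into a rejected request.

```python
# Minimal sketch: pre-validate the "pattern" and "type" fields of a tool's
# parameter schema before any compile/parse step. Illustrative only; this
# is not vLLM's actual implementation.
import re

# JSON-schema types a guided-decoding backend can reasonably be expected
# to handle (assumption for this sketch).
ALLOWED_TYPES = {"object", "array", "string", "number", "integer", "boolean", "null"}


class SchemaValidationError(ValueError):
    """Malformed schema input; should map to an HTTP 400, not a crash."""


def validate_parameters_schema(schema: dict) -> None:
    """Recursively check 'type' and 'pattern' fields, failing fast on
    malformed input instead of crashing later during compilation."""
    type_value = schema.get("type")
    if type_value is not None and type_value not in ALLOWED_TYPES:
        raise SchemaValidationError(f"unsupported schema type: {type_value!r}")
    pattern = schema.get("pattern")
    if pattern is not None:
        if not isinstance(pattern, str):
            raise SchemaValidationError("'pattern' must be a string")
        try:
            re.compile(pattern)  # fail fast on an invalid regex
        except re.error as exc:
            raise SchemaValidationError(f"invalid regex pattern: {exc}") from exc
    # Recurse into nested property schemas.
    for sub in schema.get("properties", {}).values():
        if isinstance(sub, dict):
            validate_parameters_schema(sub)


# Hypothetical tool definition with an unterminated character class in
# "pattern", the kind of single malformed request described above.
malformed_tool = {
    "type": "function",
    "function": {
        "name": "lookup",
        "parameters": {
            "type": "object",
            "properties": {"q": {"type": "string", "pattern": "[unclosed"}},
        },
    },
}

try:
    validate_parameters_schema(malformed_tool["function"]["parameters"])
except SchemaValidationError as exc:
    print(f"rejected with 400: {exc}")  # instead of an unhandled worker crash
```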
CVE ID: CVE-2025-48944
CVSS Base Severity: MEDIUM
CVSS Base Score: 6.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
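Reading the vector: the flaw is reachable over the network (AV:N) with low attack complexity (AC:L) by a low-privileged authenticated user (PR:L), requires no user interaction (UI:N), and does not change scope (S:U); it has no confidentiality (C:N) or integrity (I:N) impact but high availability impact (A:H), consistent with the remote denial of service described above.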
Vendor: vllm-project
Product: vllm
EPSS Score: 0.07% (estimated probability of exploitation activity in the next 30 days)
EPSS Percentile: 20.79% (share of all scored vulnerabilities with an EPSS score at or below this one)
EPSS Date: 2025-06-22 (date the score was calculated)
SSVC Exploitation: poc (a public proof of concept exists)
SSVC Technical Impact: partial (denial of service rather than full system control)
SSVC Automatable: false (exploitation is not readily automatable at scale)