Episode notes
In this week's episode, Jeremy covers seven significant stories and academic findings that reveal the escalating risks and new attack methods targeting Large Language Models (LLMs) and the broader AI ecosystem.
Key stories include:
- Prompt Flux Malware: Google Threat Intelligence Group (GTIG) discovered a new malware family called Prompt Flux that uses the Google Gemini API to continuously rewrite and modify its own code to evade detection—a notable evolution in malware capabilities.
- ChatGPT Leak: User conversations with ChatGPT have been observed leaking into Google Analytics and Google Search Console data on third-party websites, potentially exposing the content of users' queries to those sites' operators.
Keywords
AI, AI Security, This Week in AI Security, AI News