AWS re:Invent 2025 - AI in the Cloud: Surveillance, Sabotage, and Security (AIM242)

AI Security in the Cloud: Protecting Against Surveillance, Sabotage, and Threats

Overview

This presentation from AWS re:Invent 2025 explores the evolving landscape of AI security, highlighting the new battlegrounds and attack vectors that organizations must address as AI becomes increasingly prevalent in cloud-based applications and infrastructure. The speaker delves into the various layers of AI security, providing a comprehensive framework for proactively protecting against emerging threats.

The AI Security Landscape

  • AI is accelerating innovation, but also expanding the attack surface for adversaries
  • AI applications and workloads are becoming new vectors for attackers if not properly secured
  • The new "battleground" extends beyond just infrastructure, encompassing data, models, AI pipelines, and runtime environments

Attacking the AI Lifecycle

  1. Data Poisoning: Adversaries can inject malicious data into training sets or fine-tuning processes, manipulating model behavior and outputs
  2. Model Theft: Models can be stolen and redeployed in malicious infrastructure, exposing sensitive intellectual property
  3. AI Pipeline Compromise: Attackers can introduce backdoors, prompt injection vulnerabilities, and other exploits throughout the AI development lifecycle
  4. Runtime Manipulation: AI agents and models can be hijacked to perform unauthorized actions, steer outputs in unsafe directions, or bypass safety mechanisms
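To make the runtime-manipulation risk concrete, here is a minimal sketch (illustrative only, not a technique from the talk) of a naive heuristic that flags common prompt-injection phrasings before input reaches a model. Real injection attacks are far more varied, so pattern matching alone is insufficient, but it shows where such a check would sit in a pipeline:

```python
import re

# Illustrative patterns only: a production defense would combine
# classifiers, input isolation, and output validation.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
    r"disregard (the )?(above|earlier)",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

A screen like this would run on every user turn, with flagged inputs routed to stricter handling rather than silently dropped.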

Securing the AI Lifecycle

  1. Data Security: Establishing a data security posture management (DSPM) approach to monitor and protect sensitive data used in AI applications
  2. Supply Chain Security: Continuously monitoring for vulnerabilities, secrets, and malware in AI-related components and dependencies
  3. Model and Agent Security: Implementing customized guardrails and runtime validation to detect and mitigate model poisoning, manipulation, and rogue agent behavior
  4. Infrastructure Security: Securing AI-specific cloud resources, monitoring for misconfigurations, and applying threat intelligence-driven detection and response
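One simple form the "guardrails and runtime validation" idea can take is an allow-list on agent tool calls, so an agent cannot invoke capabilities outside its approved scope. The sketch below is hypothetical (the names are illustrative, not an AWS API):

```python
# Tools this agent is permitted to invoke; anything else is a violation.
ALLOWED_TOOLS = {"search_docs", "summarize", "translate"}

class GuardrailViolation(Exception):
    """Raised when an agent attempts an out-of-scope action."""

def validate_tool_call(tool_name: str, allowed=frozenset(ALLOWED_TOOLS)) -> str:
    """Permit only pre-approved tools; raise on anything else."""
    if tool_name not in allowed:
        raise GuardrailViolation(f"tool '{tool_name}' is outside the agent's scope")
    return tool_name
```

The design choice here is default-deny: new tools must be explicitly added to the allow-list, which limits the blast radius of a hijacked or misbehaving agent.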

Proactive AI Security

  • Traditional, reactive security approaches are insufficient to keep pace with the rapid evolution of AI threats
  • Organizations must adopt a more proactive, adaptive security strategy that can learn and evolve alongside the AI landscape
  • Comprehensive AI security requires a unified approach that connects and secures every layer of the AI lifecycle, from data to users

Practical Implementation

  • The presenter introduces a new "AI Scanner" tool that can scan AI applications for a range of security risks, including sensitive data disclosure, prompt injection, and malicious code generation
  • This tool integrates with an "LLM Judge" system to automatically apply virtual patching and protection against the identified vulnerabilities and threats
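The general "LLM judge" pattern the speaker describes can be sketched as follows. This is a hypothetical illustration, not the presented tool: a second scorer rates a candidate response for risk, and only responses below a threshold are returned. Here `call_judge_model` is a keyword stand-in for what would really be a separate model prompted to rate risk:

```python
RISK_THRESHOLD = 0.5

def call_judge_model(response: str) -> float:
    """Stand-in scorer: a real judge would be an LLM rating the response."""
    risky_markers = ("password", "api key", "rm -rf")
    hits = sum(marker in response.lower() for marker in risky_markers)
    return min(1.0, hits / len(risky_markers) * 2)

def moderate(response: str) -> str:
    """Return the response if the judge deems it safe, else withhold it."""
    if call_judge_model(response) >= RISK_THRESHOLD:
        return "[response withheld by safety filter]"
    return response
```

In a real deployment the judge's verdict would also feed detection and response pipelines, so repeated near-threshold outputs surface as a signal rather than being silently filtered.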

Business Impact

  • Securing AI infrastructure and applications is critical to prevent resource exhaustion, data leakage, and other attacks that can have severe financial and reputational consequences for organizations
  • Proactive AI security helps organizations stay ahead of evolving threats, maintain the integrity of their AI-powered systems, and protect their intellectual property and sensitive data

Real-World Examples

  • The presentation references a research paper by Trend Micro that details a case of a malicious actor extracting and manipulating a model from an exposed container, highlighting the need for comprehensive container and runtime security
  • The speaker also shares an anecdote about a large software organization struggling to control the scope and behavior of their AI agents, underscoring the importance of robust agent security mechanisms

Key Takeaways

  1. AI security must be addressed holistically, securing the entire AI lifecycle from data to users
  2. Proactive, adaptive security strategies are necessary to keep pace with the rapid evolution of AI-based threats
  3. Comprehensive visibility and security controls are required across AI infrastructure, pipelines, models, and runtime environments
  4. Practical tools and techniques, such as the AI Scanner and LLM Judge, can help organizations implement effective AI security measures
  5. Securing AI is crucial to prevent resource exhaustion, data breaches, and other attacks that can have significant business impact
