AWS re:Invent 2025 - Red Team vs Blue Team: Securing AI Agents (DEV317)

Securing AI Agents: Defending Against Emerging Threats

Introduction

  • Presenters: Brian Tarbox and Brian Huff, AWS Heroes from Boston
  • Topic: Securing AI-powered chatbots and agents against various attack vectors

Understanding AI Agents

Components of an AI-Powered Chatbot

  • Front-end: TypeScript-based React chatbot
  • Back-end: Python FastAPI
  • Language Model: Unspecified large language model (LLM)
  • Integrations: Slack, Jira, Confluence, GitHub, knowledge base (Pinecone)

Defining AI Agents

  • Agents have memory and context to carry on conversations
  • Agents can utilize various tools and take actions
  • Agents can plan, self-reflect, and exhibit emergent behaviors

Challenges of Agentic Systems

  • Distributed system challenges: Unauthorized calls, timeouts, inconsistent responses
  • Increased complexity: Nondeterministic behavior, temperature-based randomness
  • Potential for malicious agents: Agents can be compromised, or designed to drift toward malicious behavior over time

Attack Vectors and Defense Strategies

Prompt Injection

  • Attackers can inject malicious prompts to bypass security measures
  • Defense strategies:
    • Guardrails and input sanitization
    • Prompt validation using models such as NVIDIA NeMo Guardrails and Meta's Llama Guard

Tool Poisoning

  • Attackers can manipulate agent access to tools and APIs
  • Defense strategies:
    • Strict identity and access management (IAM) policies for tools
    • Centralized gateway to control and validate tool access
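The centralized gateway can be sketched as a deny-by-default allow-list, mirroring least-privilege IAM policies. The class and method names below are illustrative, not from the talk's sample code.

```python
from dataclasses import dataclass, field

@dataclass
class ToolGateway:
    """Hypothetical gateway: every tool call is checked against an explicit
    per-agent allow-list before the underlying handler runs."""
    policies: dict = field(default_factory=dict)  # agent_id -> set of tool names

    def grant(self, agent_id: str, tool: str) -> None:
        """Explicitly allow one agent to call one tool (least privilege)."""
        self.policies.setdefault(agent_id, set()).add(tool)

    def invoke(self, agent_id: str, tool: str, handler, *args):
        """Deny by default: unknown agents and ungranted tools are rejected."""
        if tool not in self.policies.get(agent_id, set()):
            raise PermissionError(f"{agent_id} is not allowed to call {tool}")
        return handler(*args)
```

Because all tool access flows through one choke point, the gateway is also the natural place to log and audit every invocation.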

Agent-to-Agent Escalation

  • Attackers can chain agent actions to escalate privileges and hijack workflows
  • Defense strategies:
    • Deterministic, step-based workflows (e.g., AWS Step Functions)
    • Limiting agent decision-making and enforcing strict process flows
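A deterministic workflow can be approximated in plain code: the sequence of steps is fixed up front (as it would be in an AWS Step Functions state machine), so an agent cannot reorder or skip stages to escalate privileges. The step functions here are hypothetical placeholders.

```python
from typing import Callable

def run_workflow(steps: list[tuple[str, Callable[[dict], dict]]], state: dict) -> dict:
    """Execute a fixed, ordered list of steps; the agent cannot alter the order.
    Each transition is recorded in an audit trail for later inspection."""
    for name, step in steps:
        state = step(state)
        state.setdefault("audit", []).append(name)
    return state

# Hypothetical steps for a ticket-handling flow.
def validate_request(state: dict) -> dict:
    if not state.get("ticket_id"):
        raise ValueError("missing ticket_id")
    return state

def summarize(state: dict) -> dict:
    state["summary"] = f"ticket {state['ticket_id']} processed"
    return state

result = run_workflow(
    [("validate", validate_request), ("summarize", summarize)],
    {"ticket_id": "JIRA-42"},
)
```

The agent's LLM can still generate content inside a step, but the control flow itself stays deterministic and auditable.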

Supply Chain Corruption

  • Attackers can poison the knowledge base or data sources used by agents
  • Defense strategies:
    • Throttle and validate document ingestion
    • Implement content filters and trust boundaries
    • Leverage observability and logging for detection
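The throttling and trust-boundary bullets can be sketched together as an ingestion guard that sits in front of the knowledge base. The class, limits, and source names are assumptions for illustration.

```python
import time

class IngestionGuard:
    """Hypothetical ingestion guard: rate-limits document intake and applies
    a simple trust-boundary check before anything reaches the knowledge base."""

    def __init__(self, max_docs_per_window: int, window_seconds: float,
                 trusted_sources: set):
        self.max_docs = max_docs_per_window
        self.window = window_seconds
        self.trusted = trusted_sources
        self.timestamps = []  # admission times within the current window

    def admit(self, source: str, now: float = None) -> bool:
        """Return True only for trusted sources within the rate limit."""
        now = time.monotonic() if now is None else now
        if source not in self.trusted:
            return False  # outside the trust boundary
        # Drop timestamps that have aged out of the window, then throttle.
        self.timestamps = [t for t in self.timestamps if now - t < self.window]
        if len(self.timestamps) >= self.max_docs:
            return False  # throttled
        self.timestamps.append(now)
        return True
```

Rejections from a guard like this are exactly the events worth logging, since a burst of denied documents is an early signal of a poisoning attempt.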

Secure AI System Design

Production Playbook

  • Code scanning and static analysis
  • Secure file uploads and API protections
  • Comprehensive logging and observability
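The logging bullet can be illustrated with a structured audit log: every agent action is emitted as one JSON line, so downstream detection tooling can parse it. The field names are a hypothetical schema, not the talk's.

```python
import json
import logging
import sys

# Emit one JSON object per line so log pipelines can ingest events directly.
logger = logging.getLogger("agent_audit")
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def audit_event(agent_id: str, action: str, outcome: str) -> str:
    """Log a structured record of an agent action and return the JSON line."""
    record = json.dumps({"agent": agent_id, "action": action, "outcome": outcome})
    logger.info(record)
    return record
```

Structured records like this make it straightforward to alert on anomalies, such as an agent suddenly invoking tools it rarely uses.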

Importance of Guardrails

  • Agents are powerful tools that require careful management and security controls
  • Analogous to giving a young driver access to a high-performance car without proper safeguards

Resources

  • GitHub repository with sample code and documentation: [link]

Key Takeaways

  • AI agents introduce new security challenges beyond traditional distributed systems
  • Prompt injection, tool poisoning, agent-to-agent escalation, and supply chain corruption are critical attack vectors to address
  • Comprehensive security measures, including guardrails, IAM, deterministic workflows, and observability, are essential for building secure AI systems
  • Securing AI agents requires a proactive, defense-in-depth approach to mitigate emerging threats
