AWS re:Invent 2025 - Unlocking agentic AI access for microservices (ARC314)
Introduction to Agentic AI and Microservices
Agentic AI applications have emerged as a new pattern, with key differences from traditional microservices:
Highly stateful, data-intensive, and conversational
Require new DevOps and MLOps practices
Focus on enhancing human abilities through automation and insights
Agentic applications can automate and streamline existing microservices and workflows
Example: a telco provider used an "S-Sur" agent to reduce incident response time from 30-40 minutes to 5 minutes
Microservices Principles and Challenges
Key principles of microservices:
Service independence
Single responsibility
Decentralized communication
Agnostic tech stack
System resiliency
Independent deployability
Challenges arise when integrating agentic AI with microservices:
Agents only understand what is in their context window, so they need explicit, structured context to interact with microservices
Exposing microservices logic in a structured, contextual way for agents to leverage
Infusing Agentic Capabilities into Microservices
Agentic core constructs that can be infused into microservices:
Reasoning: Agents' ability to solve problems and explain their chain of thought
Memory: Short-term, long-term, and episodic memory to maintain context
Learning: Agents' ability to improvise and improve through feedback loops
Key differences from traditional, hardcoded microservices workflows:
Agents can dynamically determine which services to call and in what order to achieve a goal
Enables more flexible, adaptive, and self-improving automation
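The contrast above can be sketched in a toy example. The service functions and the keyword-based planner below are illustrative stand-ins (a real agent would use an LLM for the planning step), not any particular framework:

```python
"""Hardcoded workflow vs. run-time tool selection (toy sketch)."""

def check_inventory(order_id: str) -> dict:
    # Stand-in for a call to an inventory microservice.
    return {"order_id": order_id, "in_stock": True}

def create_shipment(order_id: str) -> dict:
    # Stand-in for a call to a shipping microservice.
    return {"order_id": order_id, "status": "shipped"}

def refund(order_id: str) -> dict:
    return {"order_id": order_id, "status": "refunded"}

# Traditional workflow: the call order is fixed at development time.
def fulfil_order(order_id: str) -> dict:
    if check_inventory(order_id)["in_stock"]:
        return create_shipment(order_id)
    return refund(order_id)

# Agentic variant: a planner picks tools at run time from a catalog.
# A trivial keyword match stands in for the LLM's reasoning step.
TOOLS = {"inventory": check_inventory, "ship": create_shipment, "refund": refund}

def plan(goal: str) -> list:
    return [name for name in TOOLS if name in goal]

def run_agent(goal: str, order_id: str) -> list:
    return [TOOLS[name](order_id) for name in plan(goal)]

results = run_agent("check inventory then ship the order", "ord-42")
```

The point of the second variant is that adding a new capability only means registering a new tool; no workflow code has to change.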
When to Use Agentic AI
Start with real, high-impact business use cases (e.g. clinical workflow automation, market analysis)
Define key metrics upfront (e.g. turnaround time reduction, staff productivity)
Assess both business value and risk potential
Building Agents
Agents are code (e.g. Python, TypeScript) that combine goals, instructions, and context
Context engineering is crucial to guide agents' decisions on tools, data, and actions
Agents need access to various resources:
Web scraping, navigation, and interactions
LLMs and text files
Workflows, automations, and knowledge bases
Structured and unstructured data
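Context engineering, as described above, amounts to assembling the goal, instructions, tool descriptions, and retrieved data into one prompt. A minimal sketch (the helper name and all field values are made up for illustration):

```python
# Hypothetical context-assembly helper: merges goal, instructions,
# tool descriptions, and retrieved facts into one prompt-ready string.
def build_context(goal, instructions, tools, retrieved):
    tool_lines = "\n".join(f"- {name}: {desc}" for name, desc in tools.items())
    facts = "\n".join(f"- {fact}" for fact in retrieved)
    return (
        f"Goal: {goal}\n"
        f"Instructions: {instructions}\n"
        f"Available tools:\n{tool_lines}\n"
        f"Relevant facts:\n{facts}"
    )

ctx = build_context(
    goal="Resolve billing ticket #1201",
    instructions="Use read-only tools first; escalate refunds over $100.",
    tools={
        "get_invoice": "Fetch an invoice by id",
        "issue_refund": "Refund an invoice",
    },
    retrieved=["Customer is on the annual plan", "Last invoice: $49"],
)
```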
Model Context Protocol (MCP) for Agent-to-Tool Integration
MCP is an open protocol that standardizes agent-to-tool communication
Follows a client-host-server architecture:
Host: Agents, LLMs, and IDEs requesting data access
MCP Client: Maintains 1-to-1 connection with MCP Server
MCP Server: Exposes microservices capabilities through standardized MCP protocol
Data Sources: Local and remote resources accessible via MCP
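On the wire, MCP messages are JSON-RPC 2.0; a client discovers a server's tools with `tools/list` and invokes one with `tools/call`. The tool name and arguments below are made up for illustration:

```python
import json

# MCP client -> server messages (JSON-RPC 2.0 per the MCP spec).
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_customer",                 # tool exposed by the MCP server
        "arguments": {"customer_id": "c-123"},  # schema defined by that tool
    },
}

wire = json.dumps(call_request)  # what actually travels over stdio/HTTP
```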
Benefits of MCP:
Simplifies integration complexity compared to custom API integrations
Enables rapid development and deployment of AI features
Agent-to-Agent (A2A) Communication
Agents need to discover, manage tasks, and collaborate with other agents
A2A protocol addresses key challenges:
Discovery: Mechanism to find relevant agents
Task Management: Coordinating different agent capabilities
Collaboration: Sharing context, security, and other metadata
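The discovery mechanism in A2A is the "Agent Card", a JSON document an agent publishes (conventionally at a well-known URL) so other agents can find it and learn its skills. A sketch with illustrative field values:

```python
import json

# Illustrative A2A Agent Card; names, URL, and skills are invented.
agent_card = {
    "name": "billing-agent",
    "description": "Answers billing questions and issues refunds",
    "url": "https://agents.example.com/billing",
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "refund",
            "name": "Issue refund",
            "description": "Refund an invoice up to policy limits",
        }
    ],
}

card_json = json.dumps(agent_card, indent=2)  # served to discovering agents
```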
Exposing Microservices for Agentic Access
Define and describe APIs (e.g. using Swagger, OpenAPI, gRPC)
Implement security and guardrails for accessing microservices
Incorporate feedback loops, observability, and retry mechanisms
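The retry point above can be sketched as a small wrapper with jittered exponential backoff; this is a minimal illustration, not a production library:

```python
import random
import time

# Retry a microservice call with jittered exponential backoff.
def with_retries(fn, attempts=3, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # back off before the next attempt; jitter avoids thundering herds
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

calls = {"n": 0}

def flaky_service():
    # Fails twice with a transient error, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retries(flaky_service)  # succeeds on the third attempt
```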
Amazon Bedrock AgentCore Platform
Key components of Amazon Bedrock AgentCore:
AgentCore Runtime: Serverless environment for running agents
AgentCore Memory: Stores short-term, long-term, and episodic context
AgentCore Identity: Provides secure, delegated access to enterprise services
AgentCore Observability: Comprehensive visibility into agent execution and performance
AgentCore Gateway: Exposes existing APIs, Lambda functions, and MCP servers to agents
Enables rapid development and production-ready deployment of agentic applications
Strands Agents: Open-Source SDK for Building Agents
Strands Agents is an open-source SDK that simplifies agent development:
Ease of use: Just 4 lines of code to create a working agent
Robust capabilities: Pre-built tools and MCP/A2A support
Extensibility: Supports multiple LLM providers
Key Challenges and Considerations
Cardinality and token costs: Managing interactions with thousands of microservices
Auditability requirements: Handling policy changes and precise, deterministic logic
Integration with existing DevOps and MLOps practices
Conclusion
Business-driven use cases are key, focusing on high-impact problems and defined metrics
Integrating agentic AI with microservices requires careful planning and best practices
Amazon Bedrock AgentCore provides a modular, production-ready platform to unlock agentic access