AWS re:Invent 2025 - AI Native Development: Strategies and Impact across Amazon and AWS (DEV323)

AI Native Development: Strategies and Impact at Amazon and AWS

Transitioning from AI-Assisted to AI-Native Development

  • Presenters shared lessons, challenges, and breakthroughs in moving from isolated AI experiments to AI-native transformation at Prime Video and AWS.
  • AI-assisted development uses AI tools in an ad hoc, isolated manner, with impact limited to individual productivity gains.
  • AI-native development fundamentally reimagines the software development lifecycle, embedding AI capabilities across all roles and phases.

Key Pillars of AI-Native Transformation

  1. Access and Infrastructure:

    • Providing cross-functional teams (PMs, designers, developers) with access to best-in-class AI tools and shared context via the Model Context Protocol (MCP); a minimal MCP sketch follows this list.
    • Leveraging AWS services like Amazon Bedrock, SageMaker, and Bedrock AgentCore to build the AI infrastructure; a hedged Bedrock invocation sketch also follows this list.
    • Balancing centralized and decentralized investments to enable both common and domain-specific capabilities.
  2. Culture and Learning:

    • Fostering a grassroots "AI native learning flywheel" of experimentation, sharing, and healthy skepticism.
    • Formalizing mechanisms like AI champions, bar-raising programs, and quick-start guides to scale adoption.
    • Designing role-specific learning paths to upskill PMs, designers, and other stakeholders on AI-native techniques.
  3. Trust and Rigor:

    • Addressing early challenges like code review overload and policy/process friction.
    • Embedding governance and controls at every layer of the AI infrastructure stack.
    • Enforcing permissions, capacity controls, and rigorous model evaluation to ensure safe experimentation and adoption.
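The talk did not show how the shared-context layer was implemented, but a minimal sketch of the idea using the open-source MCP Python SDK might look like the following. The server name, resource URIs, and the in-memory document store are hypothetical, not from the talk:

```python
from mcp.server.fastmcp import FastMCP

# Hypothetical shared-context server: exposes team documents so that PMs,
# designers, and developers all query the same source of truth from their
# AI tools. Names and the in-memory store are illustrative.
mcp = FastMCP("team-context")

DOCS = {
    "roadmap": "Q1: playback reliability. Q2: personalization experiments.",
    "style-guide": "Buttons use the primary palette; errors are non-blocking.",
}

@mcp.resource("context://docs/{doc_id}")
def get_doc(doc_id: str) -> str:
    """Return a shared team document by id."""
    return DOCS.get(doc_id, "unknown document")

@mcp.tool()
def list_docs() -> list[str]:
    """List the document ids available to every role's AI tooling."""
    return sorted(DOCS)

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```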
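Centralized access with governance baked in (pillar 3) can be as simple as routing model calls through Amazon Bedrock's Converse API with a guardrail attached. A minimal boto3 sketch, assuming credentials and model access are configured; the model ID is one of several available on Bedrock, and the guardrail IDs are placeholders:

```python
import boto3

# Assumes AWS credentials and Bedrock model access are already configured.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": [{"text": "Draft a test plan for the seek feature."}]}],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    # Guardrails enforce content policies centrally; the IDs are placeholders.
    guardrailConfig={"guardrailIdentifier": "example-guardrail-id", "guardrailVersion": "1"},
)
print(response["output"]["message"]["content"][0]["text"])
```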

Measuring the Impact of AI-Native Transformation

  • Tracking adoption metrics across development lifecycle phases and roles.
  • Monitoring velocity and efficiency improvements using the industry-standard DORA metrics, while also controlling for incident rates; a toy calculation follows this list.
  • Diving into specific use case metrics like time/effort savings for PM workflows.
  • Gathering qualitative feedback through surveys, interviews, and leadership listening sessions.
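The talk referenced DORA metrics rather than formulas, but as a toy illustration, deployment frequency, change lead time, and a change-failure-rate guardrail can be computed from deployment records like this (the data is invented):

```python
from datetime import datetime, timedelta

# Hypothetical deployment records over a one-week window.
deployments = [
    {"merged": datetime(2025, 11, 3, 9, 0), "deployed": datetime(2025, 11, 3, 15, 0), "incident": False},
    {"merged": datetime(2025, 11, 4, 10, 0), "deployed": datetime(2025, 11, 5, 11, 0), "incident": True},
    {"merged": datetime(2025, 11, 6, 8, 0), "deployed": datetime(2025, 11, 6, 9, 30), "incident": False},
]

window_days = 7
deploy_frequency = len(deployments) / window_days  # deployments per day
lead_times = [d["deployed"] - d["merged"] for d in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)  # change lead time
change_failure_rate = sum(d["incident"] for d in deployments) / len(deployments)

print(f"Deployment frequency: {deploy_frequency:.2f}/day")
print(f"Average lead time: {avg_lead_time}")
print(f"Change failure rate: {change_failure_rate:.0%}")
```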

Demonstration: Strands Agent SOPs

  • Introduced "Strands Agent SOPs" - a new open-source framework for authoring and executing reusable, standardized AI agent workflows.
  • Demonstrated how the SOP format allows agents to follow detailed, constrained steps to accomplish tasks like triaging GitHub issues.
  • Showed how the SOP framework enables easy authoring and customization of agent workflows without having to manage low-level prompts; a hedged sketch follows this list.
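The demo itself is not reproduced here, and the published SOP format may differ, but the core idea, a reusable markdown SOP that constrains an agent's steps, can be sketched with the open-source Strands Agents SDK. The SOP text, file name, and triage task below are hypothetical:

```python
from pathlib import Path
from strands import Agent

# Hypothetical SOP: a markdown document of constrained, ordered steps.
# The real Strands Agent SOPs format and runner may differ; this sketch
# only shows driving an agent from a reusable SOP instead of an ad-hoc prompt.
SOP = """\
# SOP: Triage a GitHub issue
## Steps
1. Read the issue title and body.
2. Classify it as bug, feature request, or question.
3. Check for duplicates among open issues.
4. Propose labels and a one-paragraph triage summary.
## Constraints
- Never close an issue; only recommend actions.
"""
Path("triage_issue.sop.md").write_text(SOP)

# Seed an agent with the SOP as its operating procedure. The model defaults
# to the SDK's configured provider; GitHub tools are omitted for brevity.
agent = Agent(system_prompt=Path("triage_issue.sop.md").read_text())
result = agent("Triage this issue: 'Player crashes when seeking past the end of a live stream.'")
print(result)
```

The appeal of the format as presented in the talk is exactly this separation: the workflow lives in a reviewable document that teams can version and share, while the agent runtime stays generic.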

Key Takeaways

  • Successful AI-native transformation requires a balanced approach of grassroots energy and top-down organizational support.
  • Providing access, building culture, and ensuring trust/rigor are critical pillars for scaling AI adoption across an enterprise.
  • Measuring impact holistically, across productivity metrics and qualitative feedback, is essential for driving continuous improvement.
  • The Strands Agent SOP framework offers a new paradigm for encapsulating and reusing AI agent workflows in a robust, transparent manner.
