AWS re:Invent 2025 - Measuring AI Impact with Amazon Q Developer and Jellyfish (DVT219)
Measuring the Impact of AI in Software Development
Evolving Approaches to Developer Productivity
The conversation around developer productivity has shifted from individual productivity to team and system-level productivity.
Historically, the focus was on individual developer tools like autocomplete, but now the emphasis is on more advanced AI-powered experiences.
This transition has challenged the traditional notion of what "productivity" means in the age of AI.
Myths About AI and Productivity
The presentation debunked several common myths about AI and productivity:
AI cannot fix inherent process complexity; teams must first eliminate manual steps and optimize their workflows.
AI is not a silver bullet for creating "100x engineers" - team performance tends to gravitate to the average.
There is no single metric that can capture the full impact of AI - a basket of qualitative and quantitative measures is required.
Simply "turning on" AI tools is not enough - structured enablement and training are crucial for adoption.
Strategies for Applying AI Effectively
The presenters outlined a framework for applying AI thoughtfully:
Eliminate: Identify and remove manual steps, tools, processes, and checklists that no longer add value.
Automate: Use AI to automate lower-value, repetitive tasks like documentation and testing.
Assist: Leverage AI as a thought partner and assistant for complex, infrequent tasks.
Accelerating AI Impact
The presenters highlighted several key accelerators for driving AI impact:
Giving developers "permission to play" and experiment with new tools
Focusing on team-level performance, not just individual productivity
Embracing "sense-making" to help new hires onboard faster
Developing "AI fluency" through prompting, structured problem-solving, and context engineering
Measuring AI Impact
Measuring the impact of AI is a journey that requires:
Establishing a baseline of current productivity metrics
Tracking both leading (adoption, engagement) and lagging (quantitative impact) indicators
Aligning on a framework for measuring impact, such as Amazon's "cost to serve" model
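The baseline-then-indicators loop described above could be sketched in code. This is an illustrative example only, not code from the talk, Amazon Q Developer, or Jellyfish; the `Snapshot` fields and metric names are assumptions chosen to show leading indicators (adoption, engagement) alongside lagging ones (throughput, cycle time) measured against a baseline.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    """Weekly metrics for one team (hypothetical schema)."""
    active_ai_users: int      # leading indicator: adoption
    total_devs: int           # denominator for adoption rate
    ai_suggestions_used: int  # leading indicator: engagement
    prs_merged: int           # lagging indicator: throughput
    cycle_time_days: float    # lagging indicator: delivery speed

def adoption_rate(s: Snapshot) -> float:
    """Leading indicator: share of developers actively using AI tools."""
    return s.active_ai_users / s.total_devs if s.total_devs else 0.0

def impact_vs_baseline(baseline: Snapshot, current: Snapshot) -> dict:
    """Lagging indicators expressed as relative change against the baseline."""
    return {
        "pr_throughput_change":
            (current.prs_merged - baseline.prs_merged) / baseline.prs_merged,
        "cycle_time_change":
            (current.cycle_time_days - baseline.cycle_time_days) / baseline.cycle_time_days,
    }

# Example: establish a baseline first, then compare a later snapshot to it.
baseline = Snapshot(active_ai_users=5, total_devs=50,
                    ai_suggestions_used=120, prs_merged=40, cycle_time_days=6.0)
current = Snapshot(active_ai_users=35, total_devs=50,
                   ai_suggestions_used=900, prs_merged=48, cycle_time_days=5.1)

print(adoption_rate(current))            # leading: 0.7
print(impact_vs_baseline(baseline, current))  # lagging: +20% throughput, -15% cycle time
```

The point of the sketch is the ordering the presenters stressed: the baseline snapshot must exist before any "change" number is meaningful.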
Jellyfish's Approach to Measuring AI Impact
Jellyfish, an AWS partner, shared their framework for measuring AI impact:
Adoption: Tracking the breadth and depth of AI tool usage across the organization
Productivity: Measuring key engineering output metrics like pull request throughput
Business Outcomes: Aligning AI impact to strategic business goals, such as increased innovation
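The "breadth and depth" distinction in the adoption pillar can be made concrete with a small sketch. This is not Jellyfish's implementation; the input shape (a per-developer count of AI tool interactions) and the function name are assumptions used to illustrate the two dimensions: breadth asks how many people use the tool at all, depth asks how intensively the active users rely on it.

```python
def breadth_and_depth(usage: dict[str, int], org_size: int) -> tuple[float, float]:
    """Breadth: fraction of the organization with any AI tool usage.
    Depth: mean interactions per active user.
    `usage` maps developer -> AI tool interactions in the period (hypothetical)."""
    active = [count for count in usage.values() if count > 0]
    breadth = len(active) / org_size if org_size else 0.0
    depth = sum(active) / len(active) if active else 0.0
    return breadth, depth

# Example: 4 tracked developers in a 5-person org, one of whom never used the tool.
usage = {"ana": 40, "bo": 0, "chi": 10, "dev": 30}
breadth, depth = breadth_and_depth(usage, org_size=5)
print(breadth)  # 0.6  -> 3 of 5 developers are active
print(depth)    # ~26.7 interactions per active developer
```

Tracking the two numbers separately matters: high breadth with low depth suggests shallow experimentation, while low breadth with high depth suggests a few power users whose practices could be spread through enablement.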
Genesis' AI Journey
Genesis, an AWS customer, shared their experience adopting and measuring the impact of AWS tools like Amazon Q Developer and Kiro:
Initial challenges with visibility and understanding tool usage
Transitioning from homegrown metrics to Jellyfish's unified AI dashboard
Embedding AI across the software development lifecycle to amplify developer capabilities
Achieving measurable productivity gains and cultural shifts towards creativity and speed
Key Takeaways
Measuring AI impact is a journey that requires a structured, multi-faceted approach
Baseline current productivity, then track leading and lagging indicators of AI adoption and impact
Map tool usage to internal taxonomies to understand variation in impact across the organization
Develop a clear vision and messaging around AI's role in amplifying, not replacing, developer capabilities
Leverage partner solutions like Jellyfish to provide a unified view of AI usage and business outcomes