Talks AWS re:Invent 2025 - From principles to practice: Scaling AI responsibly with Indeed (AIM3323) VIDEO
AWS re:Invent 2025 - From principles to practice: Scaling AI responsibly with Indeed (AIM3323) Scaling AI Responsibly with Indeed
Overview
Presentation by Mike Diamond, Principal Product Lead for Responsible AI at AWS, and Lewis Baker, Senior Data Science Manager and Head of AI at Indeed
Discussed the importance of responsible AI practices and how Indeed has implemented them at scale
The Need for Responsible AI
Every AI system has inherent technical properties that impact its responsible operation
Examples:
Real estate company generating property descriptions - need to ensure fairness, accuracy, and privacy
E-commerce shopping agent - need to provide equitable recommendations and prevent unauthorized charges
Consequences of not addressing these proactively:
OECD AI incident tracker shows a 95% increase in AI-related incidents in October 2025, coinciding with the rise of generative AI
AWS Responsible AI Framework
AWS defines 8 key dimensions of responsible AI:
Controllability
Privacy and security
Safety
Fairness
Veracity and robustness
Explainability
Transparency
Governance
Challenges in addressing responsible AI at scale:
Expertise required for each technical property
Piecing together disparate tools into a holistic solution
Perceived as a bottleneck to innovation
Overwhelmed responsible AI teams due to increasing AI use cases
Responsible AI Strategies
Three overarching strategies:
Baking: Building desired behavior into the AI system
Filtering: Blocking undesirable inputs and outputs
Guiding: Providing transparency and steering users on proper use
Three Lines of Defense Model
Builder teams: Responsible for building safeguards and controls into the AI system
AI expert teams: Guide and support the builder teams, set up practices
Internal audit and assurance teams: Provide overall security and compliance assurance
AWS Responsible AI Best Practices Framework
Spans the AI/ML lifecycle: Design, Develop, Operate
Key practices:
Narrowly define the use case to minimize risk exposure
Identify inherent risks for the stakeholders
Establish release criteria and work backwards from metrics
Design test data sets to evaluate risks
Implement baking, filtering, and guiding strategies
Provide guidance tools like data cards, model cards, and AI system cards
Continuously monitor and improve based on metrics
Responsible AI in Practice: Indeed's Career Scout
Indeed operates at massive scale with 635M job seeker profiles and 3.3M employers
Career Scout is an AI-powered chatbot to help job seekers explore career options
Challenges:
Potential for harmful or unintended outputs (e.g., recommending unsafe actions)
Reputational and financial risks from misuse
Approach:
Establish an "AI Constitution" to define values and acceptable behaviors
Conduct adversarial AI "red teaming" to stress test the system
Implement content moderation and contextual guardrails
Continuously monitor, log, and analyze for unknown issues
Key Takeaways
Responsible AI must be embedded from the beginning, not bolted on later
Defining organizational values and aligning AI systems to them is critical
Proactive testing, guardrails, and monitoring are essential to scaling AI responsibly
Responsible AI should be treated as infrastructure, not just a policy concern
Your Digital Journey deserves a great story. Build one with us.