TalksAWS re:Invent 2025 - From principles to practice: Scaling AI responsibly with Indeed (AIM3323)

AWS re:Invent 2025 - From principles to practice: Scaling AI responsibly with Indeed (AIM3323)

Scaling AI Responsibly with Indeed

Overview

  • Presentation by Mike Diamond, Principal Product Lead for Responsible AI at AWS, and Lewis Baker, Senior Data Science Manager and Head of AI at Indeed
  • Discussed the importance of responsible AI practices and how Indeed has implemented them at scale

The Need for Responsible AI

  • Every AI system has inherent technical properties that impact its responsible operation
  • Examples:
    • Real estate company generating property descriptions - need to ensure fairness, accuracy, and privacy
    • E-commerce shopping agent - need to provide equitable recommendations and prevent unauthorized charges
  • Consequences of not addressing these proactively:
    • OECD AI incident tracker shows a 95% increase in AI-related incidents in October 2025, coinciding with the rise of generative AI

AWS Responsible AI Framework

  • AWS defines 8 key dimensions of responsible AI:
    1. Controllability
    2. Privacy and security
    3. Safety
    4. Fairness
    5. Veracity and robustness
    6. Explainability
    7. Transparency
    8. Governance
  • Challenges in addressing responsible AI at scale:
    • Expertise required for each technical property
    • Piecing together disparate tools into a holistic solution
    • Perceived as a bottleneck to innovation
    • Overwhelmed responsible AI teams due to increasing AI use cases

Responsible AI Strategies

  • Three overarching strategies:
    1. Baking: Building desired behavior into the AI system
    2. Filtering: Blocking undesirable inputs and outputs
    3. Guiding: Providing transparency and steering users on proper use

Three Lines of Defense Model

  1. Builder teams: Responsible for building safeguards and controls into the AI system
  2. AI expert teams: Guide and support the builder teams, set up practices
  3. Internal audit and assurance teams: Provide overall security and compliance assurance

AWS Responsible AI Best Practices Framework

  • Spans the AI/ML lifecycle: Design, Develop, Operate
  • Key practices:
    1. Narrowly define the use case to minimize risk exposure
    2. Identify inherent risks for the stakeholders
    3. Establish release criteria and work backwards from metrics
    4. Design test data sets to evaluate risks
    5. Implement baking, filtering, and guiding strategies
    6. Provide guidance tools like data cards, model cards, and AI system cards
    7. Continuously monitor and improve based on metrics

Responsible AI in Practice: Indeed's Career Scout

  • Indeed operates at massive scale with 635M job seeker profiles and 3.3M employers
  • Career Scout is an AI-powered chatbot to help job seekers explore career options
  • Challenges:
    • Potential for harmful or unintended outputs (e.g., recommending unsafe actions)
    • Reputational and financial risks from misuse
  • Approach:
    1. Establish an "AI Constitution" to define values and acceptable behaviors
    2. Conduct adversarial AI "red teaming" to stress test the system
    3. Implement content moderation and contextual guardrails
    4. Continuously monitor, log, and analyze for unknown issues

Key Takeaways

  • Responsible AI must be embedded from the beginning, not bolted on later
  • Defining organizational values and aligning AI systems to them is critical
  • Proactive testing, guardrails, and monitoring are essential to scaling AI responsibly
  • Responsible AI should be treated as infrastructure, not just a policy concern

Your Digital Journey deserves a great story.

Build one with us.

Cookies Icon

These cookies are used to collect information about how you interact with this website and allow us to remember you. We use this information to improve and customize your browsing experience, as well as for analytics.

If you decline, your information won’t be tracked when you visit this website. A single cookie will be used in your browser to remember your preference.