A detailed summary of the video transcript, organized by section:
Introduction to Responsible AI
- Denis Batalov is a worldwide Tech Leader for AI/ML at AWS, responsible for organizing the work of hundreds of AI/ML specialists.
- He has led responsible AI efforts since before the emergence of generative AI, as the area requires additional work across science, engineering, and practitioner understanding.
- Denis takes part in standardization efforts around responsible AI, contributing to standards on explainability, transparency, bias, and related topics.
Risks and Challenges of Generative AI
- Generative AI has tremendous potential, but also introduces new risks and challenges.
- Existing issues, such as hallucinations, are still present, and generative AI has introduced a whole new set of problems on top of them.
- The breadth and versatility of foundation models, and the ability to apply a single model to many use cases, amplify these challenges.
- Hallucinations can lead to real-world failures, such as fabricated legal research, chatbots misstating company policies, and medical transcription errors.
- However, hallucinations can also be welcome in certain applications, such as creative tasks, where invention is the point.
Defining Responsible AI
- Responsible AI is often discussed, but the definition can vary across organizations.
- The eight pillars of responsible AI at AWS are controllability, privacy and security, safety, bias and fairness, veracity and robustness, explainability, transparency, and governance.
Assessing AI Risk
- Assessing the risk of AI systems is important, as regulations like the EU AI Act focus on identifying prohibited, high-risk, and low-risk AI systems.
- AWS advises customers on how to conduct AI risk assessments, including identifying stakeholders, determining severity levels, and enumerating potential harmful events (a minimal sketch of such an assessment record follows this list).
- AWS has published a blog post and organized workshops to help customers with this process.
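The talk does not prescribe a concrete format for these assessments. As a minimal sketch, assuming a simple in-house record (all names here are hypothetical, not an AWS artifact), the stakeholder/severity/event enumeration could be captured like this:

```python
from dataclasses import dataclass, field
from enum import Enum

class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class HarmfulEvent:
    description: str                  # e.g. "chatbot misstates refund policy"
    affected_stakeholders: list[str]  # who is harmed if this happens
    severity: Level                   # how bad the harm would be
    likelihood: Level                 # how plausible the event is

@dataclass
class AIRiskAssessment:
    use_case: str
    stakeholders: list[str]
    events: list[HarmfulEvent] = field(default_factory=list)

    def highest_risk(self) -> HarmfulEvent:
        # Coarse heuristic: rank enumerated events by severity x likelihood.
        return max(self.events, key=lambda e: e.severity.value * e.likelihood.value)
```

Ranking by severity times likelihood is a common coarse heuristic for prioritizing which harmful events to mitigate first; a real assessment would add mitigations and review dates.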
Fairness and Bias
- Fairness and bias are technically tractable problems in traditional machine learning, with tools like Amazon SageMaker Clarify providing a suite of bias detection metrics (a hand-rolled illustration of one such metric follows this list).
- However, defining fairness and deciding which fairness criteria to optimize for remain hard, as there can be trade-offs between accuracy and fairness.
- Generative AI poses additional challenges: the fairness of free-form output is difficult to measure, and there are concerns about individual-level fairness.
- The industry should move toward more outcome-based assessments of fairness for generative AI systems.
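As a hand-rolled illustration (not Clarify itself, which computes a fuller suite at scale), here is one of the pre-training bias metrics Clarify reports, the difference in proportions of labels (DPL), computed on a toy loan-approval dataset:

```python
import pandas as pd

# Toy dataset; "group" is the facet (protected attribute) being checked.
df = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b", "b", "a"],
    "approved": [ 1,   1,   0,   1,   0,   0,   0,   1 ],
})

# Difference in Proportions of Labels (DPL): positive values mean group "a"
# receives the favorable label more often than group "b".
p_a = df.loc[df.group == "a", "approved"].mean()
p_b = df.loc[df.group == "b", "approved"].mean()
print(f"DPL = {p_a - p_b:+.2f}")  # 0.75 - 0.25 = +0.50 here
```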
Explainability
- Explainability is about understanding how a machine learning model arrived at a particular output, as opposed to just understanding the model architecture or algorithm.
- For traditional machine learning, post-hoc techniques such as LIME and SHAP can explain the decision-making of classification models (see the SHAP sketch after this list).
- Explaining why a generative model produced an entire completion is an active area of research, with some approaches probing the internal workings of transformer architectures and others recording the model's chain of thought.
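A minimal sketch of the traditional-ML case, using SHAP's tree explainer on a scikit-learn classifier (requires the `shap` and `scikit-learn` packages):

```python
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a simple classifier on a standard dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP values attribute one prediction to per-feature contributions,
# explaining how the model arrived at that particular output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])

# Depending on the shap version this is a list (one array per class) or a
# 3-D array; either way, each entry is one feature's contribution.
print(np.asarray(shap_values).shape)
```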
Controllability, Privacy, and Security
- Controllability involves monitoring and steering AI systems, with features like the ability to allow or deny actions in agent-based architectures (a simplified gate is sketched after this list).
- Privacy and security are key concerns, with Amazon Bedrock ensuring that customer data is not stored or used to improve models, and providing encryption, access management, and compliance features.
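Illustrative only, not the Amazon Bedrock Agents API: a simple allow/deny policy gate in front of agent tool calls, with hypothetical tool names, shows the controllability idea in miniature:

```python
ALLOWED_ACTIONS = {"search_docs", "summarize"}       # hypothetical tool names
ACTIONS_NEEDING_APPROVAL = {"send_email", "refund"}  # human-in-the-loop

def gate_action(action: str, approve) -> bool:
    """Return True if the agent may execute `action`."""
    if action in ALLOWED_ACTIONS:
        return True
    if action in ACTIONS_NEEDING_APPROVAL:
        return approve(action)   # defer to a human reviewer callback
    return False                 # deny by default

# Example: a callback that approves nothing; a real system would prompt an operator.
print(gate_action("search_docs", approve=lambda a: False))  # True
print(gate_action("refund", approve=lambda a: False))       # False
```

Denying by default means any tool the agent invents or was never granted is automatically blocked.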
Toxicity, Safety, and Robustness
- Toxicity and safety are new challenges with generative AI, as the generated language could contain harmful, hateful, or threatening content.
- Robustness is about ensuring that small perturbations to the input do not lead to significant changes in the output.
- AWS provides tools like Foundation Model Evaluations and Amazon Bedrock Guardrails to help customers address these issues (a Guardrails call is sketched below).
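A sketch of screening a completion with a pre-created guardrail via Bedrock's ApplyGuardrail API in boto3; the guardrail identifier and version below are placeholders for one you have already configured:

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.apply_guardrail(
    guardrailIdentifier="YOUR_GUARDRAIL_ID",  # placeholder
    guardrailVersion="1",                     # placeholder
    source="OUTPUT",                          # screen model output ("INPUT" for prompts)
    content=[{"text": {"text": "Some model completion to screen."}}],
)

# "GUARDRAIL_INTERVENED" means a configured policy (e.g. toxicity or a denied
# topic) blocked or masked content; "NONE" means the text passed.
print(response["action"])
```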
Transparency and Automated Reasoning Checks
- Transparency is important, with features like invisible watermarks and digital signatures in generated content, as well as service cards and technical reports providing details about the models.
- Automated Reasoning checks, a new feature in Amazon Bedrock, apply explicit logic-based rules to validate the completions generated by language models, and explain why a given output is invalid (a simplified illustration of the idea follows).
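The exact mechanics are internal to Bedrock; as a simplified illustration of the logic-based idea (hypothetical rules, not the Bedrock API), validating a structured claim extracted from a completion against explicit rules might look like this:

```python
RULES = [
    # (explanation, predicate over the extracted claim)
    ("leave days must be non-negative", lambda c: c["leave_days"] >= 0),
    ("annual leave is capped at 30 days", lambda c: c["leave_days"] <= 30),
]

def validate(claim: dict) -> list[str]:
    """Return the explanation for every rule the claim violates."""
    return [explanation for explanation, rule in RULES if not rule(claim)]

# A completion asserting "you are entitled to 45 days of leave" fails a rule,
# and the violated rule itself serves as the explanation of invalidity.
print(validate({"leave_days": 45}))  # ['annual leave is capped at 30 days']
```

Unlike a statistical judge model, the rule set gives a deterministic verdict plus a human-readable reason, which matches the feature's stated goal of explaining why an output is invalid.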
Cisco's Responsible AI Journey
- Matt Frey, Director of Software Engineering at Cisco, discusses Cisco's approach to responsible AI, covering foundations (models), infrastructure (platforms), and end-to-end use cases.
- Cisco has a robust process for assessing AI features in products, internal use cases, and customer-facing offerings, considering factors like data privacy, security, model lifecycle, content moderation, and end-user empowerment.
- Cisco has pioneered the use of transparency notes to provide customers with information about the AI features in their products.