Understanding security & privacy on Amazon Bedrock, featuring Remitly (AIM360)
Security and Privacy with Amazon Bedrock
Introduction
Security and privacy are major concerns for organizations of all sizes, especially when it comes to adopting AI/ML technologies.
This session focuses on addressing the security aspects of using generative AI, such as Amazon Bedrock.
Key Takeaways
GI Security Framework:
Security considerations can be broken down into three stages:
Secure model training and deployment
Controlled model access
End-to-end application security
Security with Amazon Bedrock
Data Privacy and Protection:
Bedrock does not store customer inference or training data, only operational metrics.
Data is isolated per customer and remains in the same region.
Customers can enable specific models in their accounts and encrypt their fine-tuned models.
Bedrock is compliant with various security and compliance programs.
Connectivity and Access Control:
Bedrock offers both public and private connectivity options to ensure secure access.
IAM policies can be used to control and fine-tune model access permissions.
Application Inference Profiles provide granular access control.
Observability and Auditability:
Bedrock integrates with CloudWatch and CloudTrail to provide observability and auditability.
Model invocation logging can be enabled to track requests and responses.
Responsible AI with Bedrock Guardrails:
Bedrock Guardrails provide a set of safeguards to ensure responsible AI policies are enforced.
Guardrails check both the input prompts and the model-generated outputs.
Remitly's Experience with Bedrock
Remitly is a digital-first fintech company that uses Bedrock to enhance their customer support experience.
Key challenges include fast resolution time, multi-language support, and compliance with personal and financial data security.
Remitly used a hybrid approach of traditional search and neural network-based search, coupled with Bedrock's Converse API and Guardrails, to build a secure and effective customer support solution.
Patterns for Secure Generative AI Applications
Prompt Engineering:
Designing the right system and user prompts is crucial to ensure model behavior and output is aligned with security requirements.
Retrieval-Augmented Generation:
Combining trusted data sources with language models to generate relevant and secure responses.
Tool Use and Agentic Behavior:
Allowing language models to interact with external systems and take actions, while maintaining security and control.
Conclusion
Security should be a starting point, not an afterthought, when building generative AI applications.
Combining deterministic controls (e.g., IAM, KMS) and probabilistic controls (e.g., Bedrock Guardrails) is key to building secure and reliable systems.
Leveraging Bedrock's security features, such as data isolation, access control, and Guardrails, can help accelerate the development of secure generative AI applications.
These cookies are used to collect information about how you interact with this website and allow us to remember you. We use this information to improve and customize your browsing experience, as well as for analytics.
If you decline, your information won’t be tracked when you visit this website. A single cookie will be used in your browser to remember your preference.