AWS re:Invent 2025 - Boost performance and reduce costs in Amazon Aurora and Amazon RDS (DAT312)
Optimizing Performance and Cost in Amazon RDS and Amazon Aurora
Balancing Performance and Cost Challenges
AnyCompany, a fictional fast-growing e-commerce startup, faced common challenges in balancing performance needs against cost-consciousness as its workload scaled.
The presentation covered techniques to optimize performance and reduce costs across various dimensions, including compute, storage, and backups, for both Amazon RDS and Amazon Aurora.
Optimizing Compute Costs in Amazon RDS
Identified and resolved poorly performing SQL queries using Amazon CloudWatch Database Insights, which provides a unified observability experience.
Optimized a slow join query by creating an index, reducing query time from 30 seconds to 46 milliseconds.
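The index fix above can be sketched in miniature with SQLite standing in for the actual RDS engine. The table and column names (`orders`, `customer_id`) are illustrative assumptions, not details from the talk; the point is how adding an index turns a full scan into an index search, which is the same mechanism behind the 30 s → 46 ms improvement.

```python
# Minimal sketch of the index fix, using SQLite in place of the RDS engine.
# Table/column names are illustrative, not from the talk.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def plan(sql: str) -> str:
    # Collapse the EXPLAIN QUERY PLAN rows into one string for inspection.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)  # without an index, SQLite reports a table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # with the index, it reports an index search
print(before)
print(after)
```

On a real workload you would confirm the same change of plan with the engine's own `EXPLAIN` output before and after creating the index.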
Migrated to the latest Graviton-based instance types (from r6g to a newer generation), achieving a 46% cost reduction while maintaining performance.
Offloaded reporting workloads to read replicas to isolate them from the core customer-facing application, improving API response times by 50%.
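Offloading reads like this comes down to routing: reporting traffic goes to the replica endpoint, transactional traffic stays on the primary. A minimal sketch of that routing decision follows; the endpoint hostnames and workload labels are made-up placeholders, and in practice the replica itself would be created with boto3's `rds.create_db_instance_read_replica`.

```python
# Hypothetical read-offload routing. Endpoint hostnames are placeholders;
# the replica would be created via rds.create_db_instance_read_replica.
PRIMARY = "anycompany-db.xxxxxxxx.us-east-1.rds.amazonaws.com"
REPLICA = "anycompany-db-replica.xxxxxxxx.us-east-1.rds.amazonaws.com"

def endpoint_for(workload: str) -> str:
    # Reporting/analytics queries hit the read replica so they cannot
    # contend with the customer-facing transactional workload.
    if workload in ("reporting", "analytics"):
        return REPLICA
    return PRIMARY
```

Keeping the routing rule in one place makes it easy to add more replicas or a caching tier later without touching application code paths.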
Utilized the RDS Optimized Reads feature with r6gd instances to leverage local NVMe SSD storage, delivering 2x faster dashboard load times than vertical scaling alone.
Optimizing Storage Costs in Amazon RDS
Identified and resolved I/O latency issues by migrating from gp2 to the more performant and durable io2 storage option, reducing latency from 500ms to sub-millisecond.
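The storage migration itself is a single API call. The sketch below builds the parameters for boto3's `rds.modify_db_instance`; the instance identifier and the provisioned IOPS value are assumptions for illustration, and the actual call (commented out) requires AWS credentials.

```python
# Hedged sketch: parameters for a gp2 -> io2 storage migration via
# rds.modify_db_instance. Identifier and IOPS figures are assumptions.
def io2_migration_params(instance_id: str, iops: int) -> dict:
    return {
        "DBInstanceIdentifier": instance_id,
        "StorageType": "io2",       # provisioned-IOPS SSD, higher durability
        "Iops": iops,               # io2 requires an explicit IOPS value
        "ApplyImmediately": True,   # otherwise waits for the maintenance window
    }

params = io2_migration_params("anycompany-db", 12000)
# boto3.client("rds").modify_db_instance(**params)  # needs AWS credentials
```

Note that `ApplyImmediately=True` starts the storage modification right away; storage changes can take time to optimize but do not require downtime.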
Optimized backup costs by using a combination of automated backups for recent data and lower-cost snapshot exports to Amazon S3 for longer-term archival, achieving a 30% reduction in backup costs.
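The archival leg of that backup strategy maps to boto3's `rds.start_export_task`, which writes a snapshot to S3 in Parquet form. The sketch below only assembles the request parameters; every identifier, ARN, bucket name, and key in it is a placeholder.

```python
# Sketch of exporting a snapshot to S3 for low-cost archival via
# rds.start_export_task. All names and ARNs below are placeholders.
def export_task_params(snapshot_arn: str, bucket: str,
                       role_arn: str, kms_key_id: str) -> dict:
    return {
        "ExportTaskIdentifier": "anycompany-archive-2025-12",
        "SourceArn": snapshot_arn,     # ARN of the snapshot to export
        "S3BucketName": bucket,
        "IamRoleArn": role_arn,        # role that may write to the bucket
        "KmsKeyId": kms_key_id,        # start_export_task requires a KMS key
    }

task = export_task_params(
    "arn:aws:rds:us-east-1:111122223333:snapshot:anycompany-db-snap",
    "anycompany-db-archive",
    "arn:aws:iam::111122223333:role/rds-s3-export",
    "alias/anycompany-export-key",
)
# boto3.client("rds").start_export_task(**task)  # needs AWS credentials
```

Once exported, S3 lifecycle rules can move the Parquet files to cheaper storage classes, which is where the longer-term savings accrue.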
Optimizing Compute and Storage in Amazon Aurora
Aurora Storage Optimization
Aurora's shared storage architecture automatically scales storage as needed, without requiring upfront provisioning.
Utilized Aurora's "pay-for-what-you-use" storage model, which charges based on storage size and I/O operations, to achieve cost savings.
Leveraged Aurora Optimized Reads to store temporary objects in local NVMe SSD, reducing I/O costs by 90% compared to using larger instances.
Aurora Compute Optimization
Used Aurora Serverless to automatically scale compute resources up and down based on demand, reducing costs for development and test environments.
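For a dev/test cluster, that scaling behavior is configured by setting a capacity range in ACUs via `ServerlessV2ScalingConfiguration` on boto3's `rds.modify_db_cluster`. The 0.5–4 ACU range below is an illustrative assumption, not a figure from the talk.

```python
# Hedged sketch: an Aurora Serverless v2 capacity range for a dev/test
# cluster. The 0.5-4 ACU values are illustrative assumptions.
def serverless_v2_scaling(min_acu: float, max_acu: float) -> dict:
    # Capacity is expressed in Aurora Capacity Units (ACUs);
    # the minimum must not exceed the maximum.
    if min_acu > max_acu:
        raise ValueError("min_acu must be <= max_acu")
    return {
        "ServerlessV2ScalingConfiguration": {
            "MinCapacity": min_acu,
            "MaxCapacity": max_acu,
        }
    }

config = serverless_v2_scaling(0.5, 4.0)
# boto3.client("rds").modify_db_cluster(
#     DBClusterIdentifier="anycompany-dev", **config)  # needs credentials
```

A low minimum lets idle dev clusters settle near the floor, so cost tracks actual usage instead of peak provisioning.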
Leveraged Aurora Fast Clones to create virtual copies of databases for testing, reducing storage costs by 90% compared to full volume restores.
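A fast clone is a copy-on-write restore: the clone shares unchanged storage pages with its source, which is why it costs a fraction of a full restore. The sketch below builds the parameters for boto3's `rds.restore_db_cluster_to_point_in_time`; cluster identifiers are placeholders.

```python
# Sketch of an Aurora fast clone via restore_db_cluster_to_point_in_time.
# Cluster identifiers are placeholders for illustration.
def fast_clone_params(source_cluster: str, clone_id: str) -> dict:
    return {
        "SourceDBClusterIdentifier": source_cluster,
        "DBClusterIdentifier": clone_id,
        "RestoreType": "copy-on-write",   # fast clone: shares unchanged pages
        "UseLatestRestorableTime": True,  # clone from the current state
    }

clone = fast_clone_params("anycompany-prod", "anycompany-test-clone")
# boto3.client("rds").restore_db_cluster_to_point_in_time(**clone)
```

Only pages the test workload subsequently modifies are copied and billed, so a clone of a large production cluster can start out nearly free.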
Global Resiliency with Aurora Global Database
Implemented Aurora Global Database to enable low-latency reads across multiple AWS regions, with an RPO of up to 1 second.
Achieved disaster recovery capabilities while optimizing costs by using asymmetric cluster configurations across regions.
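One way to express an asymmetric configuration is to size readers in secondary regions smaller than the primary's writer, since they only serve local reads until a failover. The instance classes and region names below are illustrative assumptions, not recommendations from the talk.

```python
# Hypothetical asymmetric sizing for an Aurora Global Database: a larger
# writer in the primary region, smaller readers elsewhere to trim cost.
# Instance classes and regions are illustrative assumptions.
PRIMARY_REGION = "us-east-1"

def instance_class_for(region: str) -> str:
    if region == PRIMARY_REGION:
        return "db.r6g.4xlarge"   # full-size writer for the primary workload
    return "db.r6g.xlarge"        # smaller readers in secondary regions

plan_by_region = {r: instance_class_for(r)
                  for r in ("us-east-1", "eu-west-1", "ap-southeast-1")}
```

Before relying on this for disaster recovery, the secondary sizing should be validated against the load it would carry after a regional failover.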
Key Takeaways
Comprehensive observability through tools like Amazon CloudWatch Database Insights is crucial for identifying and resolving performance bottlenecks.
Leveraging the latest instance types, storage options, and Aurora-specific features can lead to significant cost savings without compromising performance.
Offloading workloads, caching, and serverless architectures can help isolate and optimize costs for different application components.
Aurora's unique storage and compute models provide opportunities for further cost optimization compared to traditional relational databases.
Global resiliency can be achieved with Aurora Global Database, with flexible price-performance configurations across regions.