Why it matters
RDS storage is billed per GB-month (and, for some engines, per I/O), so over-allocating storage or choosing a more expensive storage class than needed can become a large, persistent cost. Picking the right storage type and keeping volumes lean is one of the easiest ways to reduce steady-state RDS spend.
Storage options at a glance
- General Purpose SSD (gp2 / gp3)
- gp2 ties baseline IOPS to volume size (3 IOPS per GiB, with burst behavior for smaller volumes).
- gp3 lets you configure IOPS and throughput separately from volume size, which can be more cost-effective when you need performance but not huge volumes. In many regions, AWS pricing shows gp3 as cheaper per GB than gp2 while offering more flexible performance tuning.
- Because of this, gp3 is usually the preferred default for most dev/test and many production workloads.
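To make the trade-off concrete, here is a minimal sketch comparing gp2's baseline formula (3 IOPS per GiB, floor of 100, cap of 16,000) against gp3's included 3,000 IOPS baseline. The helper names are hypothetical, not part of any AWS SDK:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """gp2 baseline scales with size: 3 IOPS per GiB, floor 100, cap 16,000."""
    return min(max(3 * size_gib, 100), 16_000)

GP3_BASELINE_IOPS = 3_000  # included with gp3 regardless of volume size

def gp2_size_needed_for_iops(needed_iops: int) -> int:
    """Minimum gp2 size (GiB) whose baseline meets needed_iops -- the classic
    anti-pattern of buying extra storage just to reach an IOPS target."""
    return max((needed_iops + 2) // 3, 1)  # ceil(needed_iops / 3)

# A workload needing 3,000 IOPS would require a 1,000 GiB gp2 volume,
# while gp3 includes 3,000 IOPS at any size:
assert gp2_size_needed_for_iops(3_000) == 1_000
```

This is why gp3 tends to win whenever the IOPS you need exceeds what your actual data size would earn on gp2.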
- Provisioned IOPS SSD (io1 / io2)
- Designed for latency-sensitive, I/O-intensive workloads where you need consistently high IOPS and throughput.
- You explicitly provision IOPS up to engine/instance limits; mis-sizing here can be expensive.
- Older magnetic options
- Legacy and not recommended for new workloads. Avoid magnetic storage for any new RDS deployment unless you have a specific, documented reason and understand the performance trade-offs.
Quick Wins
- Right-size storage type
- Start with gp3 for most workloads, and move to io1/io2 only when you have clear evidence you need higher or more consistent IOPS than gp3 can provide.
- For existing io1/io2 volumes, confirm that you actually need the provisioned IOPS; if metrics show sustained usage well below your provisioned level, consider moving to gp3 or reducing IOPS.
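One way to check is to pull ReadIOPS and WriteIOPS from CloudWatch and compare the combined peak against the provisioned level. A sketch using boto3 (`peak_utilization` and `fetch_avg_iops` are hypothetical helper names; `get_metric_statistics` is the standard CloudWatch API call):

```python
from datetime import datetime, timedelta, timezone

def peak_utilization(read_avgs, write_avgs, provisioned_iops):
    """Busiest sampled point as a fraction of provisioned IOPS.
    read_avgs / write_avgs: parallel lists of period-average IOPS."""
    combined = [r + w for r, w in zip(read_avgs, write_avgs)]
    return max(combined) / provisioned_iops if combined else 0.0

def fetch_avg_iops(instance_id, metric, days=14, period=3600):
    """Hourly averages for an RDS metric such as ReadIOPS or WriteIOPS."""
    import boto3  # deferred import: needs AWS credentials only when called
    cw = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)
    resp = cw.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=metric,
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": instance_id}],
        StartTime=end - timedelta(days=days),
        EndTime=end,
        Period=period,
        Statistics=["Average"],
    )
    points = sorted(resp["Datapoints"], key=lambda d: d["Timestamp"])
    return [p["Average"] for p in points]
```

If the peak stays well below 1.0 for weeks, the provisioned level is a downsizing candidate. Note that period averages understate short spikes, so it is worth checking the `Maximum` statistic as well before cutting IOPS.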
- Avoid over-allocating volume size
- For RDS engines that use fixed-volume storage, allocate enough capacity for growth plus a safety margin, not an arbitrary large number “just in case”—you pay for every GB, every month.
- Where available, use storage autoscaling so volumes grow only when needed instead of front-loading capacity.
For Aurora, the storage model is different: storage is shared and auto-growing, so tuning is less about initial volume size and more about data hygiene, query/index patterns, and backup/snapshot practices that drive how much data you keep in that shared storage layer.
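For the fixed-volume engines, storage autoscaling is enabled through the standard `ModifyDBInstance` API by setting `MaxAllocatedStorage`. A sketch, assuming boto3 and a hypothetical helper name:

```python
def autoscaling_params(instance_id: str, max_storage_gib: int) -> dict:
    """Kwargs for rds.modify_db_instance that enable storage autoscaling:
    the volume grows on demand up to max_storage_gib instead of being
    front-loaded at creation time."""
    return {
        "DBInstanceIdentifier": instance_id,
        "MaxAllocatedStorage": max_storage_gib,  # autoscaling ceiling, in GiB
        "ApplyImmediately": True,
    }

def enable_storage_autoscaling(instance_id: str, max_storage_gib: int):
    import boto3  # deferred import: needs AWS credentials only when called
    rds = boto3.client("rds")
    return rds.modify_db_instance(**autoscaling_params(instance_id, max_storage_gib))
```

You still pay for what is currently allocated, so pair this with a modest initial allocation rather than treating the ceiling as the starting size.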
- Monitor I/O patterns and latency
- In CloudWatch, track metrics such as ReadIOPS, WriteIOPS, and Read/WriteLatency to understand how your workload uses storage.
- Combine this with Cost Explorer views of RDS storage and I/O usage types to see where storage and I/O charges are concentrated.
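A sketch of that Cost Explorer view, grouping RDS spend by usage type via the `GetCostAndUsage` API. The helper names are hypothetical, and the usage-type strings matched below are illustrative examples; actual names vary by region and engine:

```python
def storage_cost_by_usage_type(groups):
    """Collapse Cost Explorer 'Groups' entries into {usage_type: cost},
    keeping only storage- and I/O-related usage types (substring match)."""
    out = {}
    for g in groups:
        key = g["Keys"][0]
        if any(tag in key for tag in ("Storage", "IOUsage", "PIOPS")):
            out[key] = out.get(key, 0.0) + float(
                g["Metrics"]["UnblendedCost"]["Amount"]
            )
    return out

def fetch_rds_usage_groups(start, end):
    """Monthly RDS cost grouped by usage type ("YYYY-MM-DD" date strings)."""
    import boto3  # deferred import: needs AWS credentials only when called
    ce = boto3.client("ce")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
        Filter={"Dimensions": {"Key": "SERVICE",
                               "Values": ["Amazon Relational Database Service"]}},
    )
    return [g for r in resp["ResultsByTime"] for g in r["Groups"]]
```

Seeing, say, storage usage types dominating instance-hour charges is a strong signal that the quick wins in this section are worth pursuing first.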
- Clean up unused data and environments
- Archive or delete old logs, rarely accessed historical tables, and temporary data that lives in your primary database volumes.
- Regularly remove unused dev/test databases and snapshots instead of letting them accumulate indefinitely.
- Align backups and snapshots with retention needs
- Tune automated backup retention to match actual recovery requirements, not the maximum available.
- Periodically prune manual snapshots that are no longer needed; large databases can accumulate significant snapshot storage cost over time.
- See Backups and snapshots for detailed backup cost optimization guidance.
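The snapshot-pruning step above can be sketched with boto3: list manual snapshots, select those older than a retention window, and delete only after review. `stale_manual_snapshots` and `prune_snapshots` are hypothetical helper names; the `describe_db_snapshots` and `delete_db_snapshot` calls are the standard RDS API:

```python
from datetime import datetime, timedelta, timezone

def stale_manual_snapshots(snapshots, max_age_days, now=None):
    """Identifiers of manual snapshots older than max_age_days.
    snapshots: entries shaped like rds.describe_db_snapshots()['DBSnapshots']."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [
        s["DBSnapshotIdentifier"]
        for s in snapshots
        if s.get("SnapshotType") == "manual" and s["SnapshotCreateTime"] < cutoff
    ]

def prune_snapshots(max_age_days=180, dry_run=True):
    """List (and, when dry_run=False, delete) stale manual snapshots."""
    import boto3  # deferred import: needs AWS credentials only when called
    rds = boto3.client("rds")
    snaps = rds.describe_db_snapshots(SnapshotType="manual")["DBSnapshots"]
    stale = stale_manual_snapshots(snaps, max_age_days)
    for snap_id in stale:
        if not dry_run:
            rds.delete_db_snapshot(DBSnapshotIdentifier=snap_id)
    return stale
```

Defaulting to a dry run is deliberate: snapshot deletion is irreversible, so review the returned list against your recovery requirements before flipping `dry_run=False`.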