Why it matters
- CloudWatch Logs keeps data forever by default, so you pay $0.03/GB-month for storage plus $0.50/GB ingested even for ancient Lambda output. These numbers are example values from a specific AWS region and point in time—always check the current CloudWatch Logs pricing page for your region.
- A practical starting point is 7–30 days of searchable logs, with anything older moved to cheaper storage such as S3 Glacier, Deep Archive, or S3-based log pipelines that use tiered pricing—then adjust up or down based on your own compliance and troubleshooting needs.
Set retention as code
- Use `put-retention-policy` (CLI), `RetentionInDays` (CloudFormation/Terraform), or an AWS Organizations account policy to enforce defaults the moment log groups are created.
- Pick retention per environment based on how far back you realistically need to debug issues or satisfy audit needs, and push anything older into cheaper storage tiers.
```shell
# Set 7-day retention
aws logs put-retention-policy \
  --log-group-name /aws/lambda/my-function \
  --retention-in-days 7
```
Supported retention windows include 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653 days; check the `put-retention-policy` reference for the full current list.
Use advanced logging controls
- Enable JSON structured logs in Lambda so CloudWatch Logs Insights can filter by keys without regex parsing; this now works natively for Node.js, Python, and Java runtimes using Lambda-managed logging libraries.
- Set application vs. system log levels (DEBUG/INFO/ERROR) per function to drop noisy events without code changes—turn verbosity up in dev, down in prod.
- When it helps with governance, point related functions at a shared log group so retention/encryption policies live in one place (each role still needs `logs:CreateLogStream`/`logs:PutLogEvents`).
- Define these settings in IaC with the `LoggingConfig` block so every deployment inherits the same JSON/level/log group defaults.
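These `LoggingConfig` settings can also be applied to an existing function from the CLI; a sketch, where the function name, shared log group path, and chosen log levels are all placeholder values:

```shell
# Switch a function to JSON logs, set per-type log levels, and route it
# to a shared log group. Function name and log group are illustrative.
aws lambda update-function-configuration \
  --function-name my-function \
  --logging-config '{
    "LogFormat": "JSON",
    "ApplicationLogLevel": "INFO",
    "SystemLogLevel": "WARN",
    "LogGroup": "/shared/payments-functions"
  }'
```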
Pick the right destination
- For new logs, keep real-time troubleshooting flows in CloudWatch Logs to tap into Logs Insights and Live Tail.
- For long-term retention or third-party analytics, configure Lambda to send logs straight to S3 or Amazon Data Firehose—these destinations use a tiered pricing model down to $0.05/GB.
- If you choose S3/Firehose, remember the CloudWatch “delivery log group” still exists for routing; set its retention low so it doesn’t become a surprise cost center.
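For log groups that already exist in CloudWatch, the same S3/Firehose path can be wired up with a subscription filter; a sketch with placeholder ARNs (the delivery stream, account ID, and IAM role are assumptions—the role must be assumable by `logs.amazonaws.com` and allow `firehose:PutRecord`/`firehose:PutRecordBatch`):

```shell
# Stream everything from one log group to a Firehose delivery stream that lands in S3.
# An empty filter pattern matches all events; stream/role ARNs are placeholders.
aws logs put-subscription-filter \
  --log-group-name /aws/lambda/my-function \
  --filter-name archive-to-s3 \
  --filter-pattern "" \
  --destination-arn arn:aws:firehose:us-east-1:123456789012:deliverystream/logs-to-s3 \
  --role-arn arn:aws:iam::123456789012:role/CWLtoFirehoseRole
```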
Export existing CloudWatch logs to S3
When you already have large log groups in CloudWatch and must keep them for audits, export them before you lower retention:
```shell
aws logs create-export-task \
  --log-group-name /aws/lambda/my-function \
  --from 1640000000000 \
  --to 1642000000000 \
  --destination s3-bucket-name
```
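`--from` and `--to` take Unix epoch milliseconds; with GNU `date` you can derive them from readable timestamps (the dates below are arbitrary examples):

```shell
# Epoch seconds from GNU date, with "000" appended for milliseconds
FROM=$(date -u -d '2024-01-01 00:00:00 UTC' +%s)000
TO=$(date -u -d '2024-02-01 00:00:00 UTC' +%s)000
echo "$FROM $TO"   # 1704067200000 1706745600000
```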
- Use S3 lifecycle policies so data cools from Standard (30 days) → Glacier (90+ days) → Deep Archive (1 year) automatically.
- Consider Firehose subscriptions for near-real-time delivery to S3 if you never need CloudWatch search.
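The lifecycle tiering above can be expressed as a bucket configuration; a sketch assuming a bucket named `my-log-archive`, a `lambda-logs/` prefix, and a 30-day/1-year reading of the tiers (adjust thresholds to your own policy):

```shell
# Write the lifecycle rule, then apply it to the archive bucket (names are placeholders)
cat > lifecycle.json <<'EOF'
{
  "Rules": [{
    "ID": "tier-lambda-logs",
    "Status": "Enabled",
    "Filter": {"Prefix": "lambda-logs/"},
    "Transitions": [
      {"Days": 30, "StorageClass": "GLACIER"},
      {"Days": 365, "StorageClass": "DEEP_ARCHIVE"}
    ]
  }]
}
EOF
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-log-archive \
  --lifecycle-configuration file://lifecycle.json
```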
Cost comparison
| Scenario | Monthly cost for 100 GB |
|---|---|
| Keep everything in CloudWatch Logs | $3.00 |
| 30-day retention + S3 Glacier archive | $0.50 |
| Savings | ≈83% |
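The savings row follows directly from the two scenario costs in the table; a quick check of the arithmetic:

```shell
# (3.00 - 0.50) / 3.00 * 100 = 83.3, rounded to the table's ≈83%
cw_only=3.00
tiered=0.50
savings=$(awk -v a="$cw_only" -v b="$tiered" 'BEGIN{printf "%.0f", (a-b)/a*100}')
echo "${savings}%"   # 83%
```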
Automate governance
- Add retention to every `AWS::Logs::LogGroup` (CloudFormation), `aws_cloudwatch_log_group` (Terraform), or IaC module so defaults can't be skipped.
- Run periodic audits with `aws logs describe-log-groups --query 'logGroups[?retentionInDays==null]'` and remediate via scripts or Config rules.
- Hook a Lambda function to the `CreateLogGroup` CloudTrail event to auto-attach the right policy across accounts.
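The audit and remediation steps can be combined into one pass; a sketch, where the 30-day default is an assumption to adjust per environment:

```shell
# Find log groups with no retention policy and apply a default (assumed: 30 days)
aws logs describe-log-groups \
  --query 'logGroups[?retentionInDays==null].logGroupName' \
  --output text | tr '\t' '\n' | while read -r lg; do
  [ -n "$lg" ] || continue
  echo "Setting 30-day retention on $lg"
  aws logs put-retention-policy --log-group-name "$lg" --retention-in-days 30
done
```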
Best practices checklist
- `RetentionInDays` defined for every log group
- Export/Firehose pipeline in place before lowering retention
- S3 lifecycle policy moves data through cheaper tiers
- Document retention tiers (dev, prod, regulated) and review quarterly
- Pair this strategy with Reduce Log Output so you control both how much you log and how long you keep it.