What is a Local Secondary Index (LSI)?
An LSI is a secondary index that shares the base table’s partition key but uses a different sort key. It must be created alongside the table (you cannot add one later), and it:
- Reuses the table’s partition key
- Adds an alternate sort key for new query patterns
- Supports strongly consistent reads (GSIs support only eventually consistent reads)
Because LSIs must be planned up front, include them in your initial schema design; additional access patterns discovered later generally require GSIs.
LSIs are useful when you need multiple query patterns over items that share the same partition key. Queries against an LSI must still include that partition key (for example, CustomerID = 'CUST-001'), then DynamoDB uses the alternate sort key to order/filter the results within that partition.
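As a sketch, a low-level Query request against such an index might look like the dict below. The helper function is purely illustrative (not part of any SDK); the table and index names follow the Orders example used throughout this article.

```python
# Sketch of a low-level DynamoDB Query request targeting an LSI.
# The partition key condition is mandatory; the index only supplies
# an alternate sort key within that partition.

def lsi_query_request(table, index, pk_name, pk_value):
    """Build request parameters for a Query against a local secondary index."""
    return {
        "TableName": table,
        "IndexName": index,
        # Every LSI query must pin down a single partition key value:
        "KeyConditionExpression": f"{pk_name} = :pk",
        "ExpressionAttributeValues": {":pk": {"S": pk_value}},
    }

req = lsi_query_request("Orders", "OrderDateIndex", "CustomerID", "CUST-001")
```

Omitting the partition key condition is not an option here: unlike a Scan, a Query (with or without an index) always requires one.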
The Problem:
DynamoDB can efficiently query items only by the primary key (partition key + sort key). If you want the same data sorted by a different attribute, you’re stuck with slow, expensive Scan operations.
How LSI Solves It
LSI creates an alternate view of your data with a different sort key, allowing efficient queries on the same partition key but sorted by different attributes.
Example – E-commerce order history
Imagine an Orders table:
- Partition key (PK): CustomerID
- Sort key (SK): OrderID
You need to show a customer’s orders:
- Newest orders first for account dashboards
- Orders within a date range (e.g., this quarter)
- Orders paginated 10 at a time with totals/status visible
Base table
| CustomerID (PK) | OrderID (SK) | OrderDate | Status | TotalAmount |
|---|---|---|---|---|
| CUST-001 | ORD-105 | 2025-11-20 | Delivered | $299.99 |
| CUST-001 | ORD-104 | 2025-10-05 | Shipped | $149.00 |
| CUST-001 | ORD-103 | 2025-09-12 | Processing | $89.50 |
By default, queries on this table return orders sorted by OrderID. To support “newest orders first” efficiently, you create an LSI such as:
- Index name: OrderDateIndex
- Partition key: CustomerID (same as table)
- Sort key: OrderDate
Now you can query CustomerID = 'CUST-001' and get results pre-sorted by OrderDate, without scanning and sorting in your application.
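A hedged sketch of those query shapes as low-level request parameters (names match the example above; ScanIndexForward=False requests descending sort-key order, and the Q4 date values are illustrative):

```python
# "Newest orders first": descending OrderDate, ten at a time.
newest_first = {
    "TableName": "Orders",
    "IndexName": "OrderDateIndex",
    "KeyConditionExpression": "CustomerID = :cust",
    "ExpressionAttributeValues": {":cust": {"S": "CUST-001"}},
    "ScanIndexForward": False,  # descending sort-key order -> newest first
    "Limit": 10,
}

# "Orders within a date range": a BETWEEN condition on the LSI sort key.
q4_range = {
    "TableName": "Orders",
    "IndexName": "OrderDateIndex",
    "KeyConditionExpression": "CustomerID = :cust AND OrderDate BETWEEN :s AND :e",
    "ExpressionAttributeValues": {
        ":cust": {"S": "CUST-001"},
        ":s": {"S": "2025-10-01"},
        ":e": {"S": "2025-12-31"},
    },
}
```

Both are ordinary Query calls; only the IndexName and the sort-key condition change between access patterns.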
Without LSI vs with LSI
| Feature | WITHOUT LSI | WITH LSI (OrderDateIndex) |
|---|---|---|
| Show newest orders | Scan every order for the customer, sort in app (slow, high read cost) | Direct Query returns newest-first (fast, low read cost) |
| Filter orders from Q4 | Scan + filter many items, pay for irrelevant reads | Query by date range on index, read only Q4 orders |
| Pagination | Client-side skip/limit over large result sets | Native DynamoDB pagination via LastEvaluatedKey |
| Cost (for heavy workloads) | Higher total read units due to repeated scans and sorting in app | Fewer read units because each query reads only the relevant items |
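The pagination row can be illustrated with a toy in-memory stand-in for the Query API. fake_query and all_pages are invented names that mimic the real Items / LastEvaluatedKey / ExclusiveStartKey contract (where the real API returns a key map, this sketch uses a plain offset):

```python
# 25 fake orders, already in sort-key order as an index would store them.
ORDERS = [{"OrderDate": f"2025-11-{d:02d}", "OrderID": f"ORD-{d}"}
          for d in range(1, 26)]

def fake_query(limit, exclusive_start_key=None):
    """Return one page plus a continuation token when more items remain."""
    start = 0 if exclusive_start_key is None else exclusive_start_key
    resp = {"Items": ORDERS[start:start + limit]}
    if start + limit < len(ORDERS):
        resp["LastEvaluatedKey"] = start + limit  # real API returns a key dict
    return resp

def all_pages(limit=10):
    """Standard DynamoDB pagination loop: resume until no LastEvaluatedKey."""
    key, pages = None, []
    while True:
        resp = fake_query(limit, key)
        pages.append(resp["Items"])
        key = resp.get("LastEvaluatedKey")
        if key is None:
            break
    return pages

pages = all_pages(limit=10)  # 25 items -> pages of 10, 10, 5
```

The shape of the loop is the point: no client-side skip/limit, just feeding each LastEvaluatedKey back as the next ExclusiveStartKey.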
Attribute projections
Every LSI copies a subset of the base-table attributes into the index. That subset is called the projection. Projected attributes are returned directly from the index; anything not projected triggers an extra fetch from the base table, which increases latency and read cost.
Projection types at a glance
| Projection type | What is stored | Storage impact | Common use case |
|---|---|---|---|
| KEYS_ONLY | Base table partition key, base table sort key, and the LSI’s sort key | Smallest | Workflows that only need identifiers first, then fetch full records when a user drills into details |
| INCLUDE | Keys plus specific attributes you list | Medium | Workflows that always need a handful of non-key attributes from the index without going back to the table |
| ALL | Every attribute from the base table | Largest | Read-heavy workloads where every query must see the complete item and you accept the storage trade-off |
Visual example
Base Orders table:
| CustomerID (PK) | OrderID (SK) | OrderDate | TotalAmount | Status |
|---|---|---|---|---|
| CUST-001 | ORD-100 | 2025-11-20 | 299.99 | Delivered |
Assume you add an LSI named OrderDateIndex to sort orders by OrderDate.
| Attribute | KEYS_ONLY projection | INCLUDE (TotalAmount) | ALL projection |
|---|---|---|---|
| CustomerID (table PK) | ✅ | ✅ | ✅ |
| OrderDate (index SK) | ✅ | ✅ | ✅ |
| OrderID (table SK) | ✅ | ✅ | ✅ |
| TotalAmount | ❌ | ✅ | ✅ |
| Status | ❌ | ❌ | ✅ |
OrderDate is the LSI’s sort key in this example, so it is always present regardless of projection type.
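One way to make the projection rules concrete is a small toy model. The project function is purely illustrative, not a DynamoDB API; the attribute names follow the Orders example:

```python
# Key attributes (table PK, table SK, and the LSI sort key) are always
# stored in the index, regardless of projection type.
KEY_ATTRS = {"CustomerID", "OrderID", "OrderDate"}

def project(item, projection, include=()):
    """Return the attributes a KEYS_ONLY / INCLUDE / ALL projection would store."""
    if projection == "ALL":
        return dict(item)
    kept = KEY_ATTRS | (set(include) if projection == "INCLUDE" else set())
    return {k: v for k, v in item.items() if k in kept}

order = {"CustomerID": "CUST-001", "OrderID": "ORD-100",
         "OrderDate": "2025-11-20", "TotalAmount": "299.99",
         "Status": "Delivered"}
```

Anything this function drops would have to be fetched from the base table at query time, at extra cost.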
Write amplification
DynamoDB charges WCUs differently depending on what changed:
| Operation | Condition | Extra WCU per affected LSI | Notes |
|---|---|---|---|
| Insert or delete item with projected attributes | Attribute newly appears in (or disappears from) the LSI | +1 | The index stores or removes the copy once |
| Update projected attribute (not the LSI sort key) | Value changes but the LSI sort key stays the same | +1 | Index row is updated in place |
| Update the LSI sort key attribute | Sort key value changes (e.g., OrderDate moves) | +2 | DynamoDB issues a delete of the old entry plus an insert of the new one |
| Update attribute not projected into that LSI | Attribute is absent from the projection | +0 | Only the base table write is billed |
So the total WCU cost of an operation is 1 (base table write) plus the sum of the per-LSI costs. If a single update touches projected attributes in two indexes (+1 each) and changes the sort key of a third (+2), the total is 1 + (1 + 1 + 2) = 5 WCUs. Note that changing an LSI’s sort key already rewrites that index entry as a delete plus an insert, so other projected-attribute changes within the same index are absorbed by those two writes rather than billed on top.
Because writes scale with the number and type of LSIs that store a given attribute, project only the fields you read frequently enough to justify the extra write cost.
Example: e-commerce orders with three LSIs
Orders table attributes: CustomerID, OrderID, OrderDate, Status, TotalAmount, ShippingAddress
- LSI #1 (OrderDateIndex): projects OrderDate, Status, TotalAmount
- LSI #2 (StatusIndex): projects Status, TotalAmount
- LSI #3 (TotalIndex): projects TotalAmount
| Update | Projected? | WCUs consumed | Reason |
|---|---|---|---|
| ShippingAddress | Not in any LSI | 1 (table only) | Attribute exists only on base table |
| Status | In LSI #1 and #2 | 3 (table + 2 LSIs) | Must update base table, OrderDateIndex, and StatusIndex |
| TotalAmount | In all LSIs | 4 (table + 3 LSIs) | Written to base table plus every index storing it |
| OrderDate | LSI #1 sort key | 3 (table + 2 LSI ops) | Changing the LSI sort key requires deleting the old entry and inserting the new one |
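The numbers above can be reproduced with a rough cost model. One assumption to flag: only OrderDate is treated as an LSI sort key here, because the sort keys of StatusIndex and TotalIndex are not spelled out in the example; the other attributes are modeled as non-key projections, matching the table’s figures.

```python
# Rough WCU model for the three-LSI Orders example.
LSIS = {
    "OrderDateIndex": {"sort_key": "OrderDate",
                       "projected": {"OrderDate", "Status", "TotalAmount"}},
    "StatusIndex":    {"sort_key": None,  # sort key not specified in the example
                       "projected": {"Status", "TotalAmount"}},
    "TotalIndex":     {"sort_key": None,
                       "projected": {"TotalAmount"}},
}

def update_wcus(attr):
    """Estimate WCUs to update one attribute: base write + per-LSI costs."""
    total = 1  # the base-table write is always billed
    for lsi in LSIS.values():
        if attr == lsi["sort_key"]:
            total += 2  # delete the old index entry + insert the new one
        elif attr in lsi["projected"]:
            total += 1  # index entry updated in place
    return total
```

Running this for each row of the table yields 1, 3, 4, and 3 WCUs respectively.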
Cost at scale: 1 million status updates/month at on-demand pricing (~$1.25 per million WRUs)
| Status projection | WCU per update | Monthly WCUs | Monthly cost |
|---|---|---|---|
| Not projected | 1 | 1 M | ≈ $1.25 |
| Projected into 1 LSI | 2 | 2 M | ≈ $2.50 |
| Projected into 2 LSIs | 3 | 3 M | ≈ $3.75 |
| Projected into 3 LSIs | 4 | 4 M | ≈ $5.00 |
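A quick back-of-envelope check of the table, assuming the ~$1.25 per million write units figure quoted above:

```python
# Monthly cost = write units per update x updates, priced per million units.
def monthly_cost(wcu_per_update, updates=1_000_000, price_per_million=1.25):
    return wcu_per_update * updates / 1_000_000 * price_per_million

# One row per projection scenario: 1 WCU (not projected) up to 4 (3 LSIs).
costs = {n: monthly_cost(n) for n in (1, 2, 3, 4)}
```

Each additional LSI that projects the attribute adds a full extra copy of the write cost, which is why the monthly figure scales linearly.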
Takeaway: only project attributes that you read consistently on your hot paths. Leave rarely used fields on the base table so their updates stay inexpensive.
Sparse indexes
LSIs are automatically sparse: only items where the LSI’s sort key attribute is defined are copied into the index. Items missing that attribute never appear, so the index can contain just the slice of data you care about and stay much smaller than the base table.
Example – Tracking VIP customers
- Base table: Customers, partitioned by Region with CustomerID as the sort key
- All customers store Name, Email, JoinDate, TotalSpent
- Only VIPs (spend > $10K) get a VIPSince attribute
If you create VIPCustomersIndex (partition key Region, sort key VIPSince):
- Only the handful of customers with VIPSince are copied into the index
- Everyone else (no VIPSince) consumes zero index storage
Why it helps
- If only 3% of customers are VIPs, the LSI is 97% smaller than the table
- “Show VIPs in us-east by VIP tenure” reads a few hundred items instead of tens of thousands
- Storage charges apply only to the VIP subset, and you don’t need extra filter expressions (TotalSpent > 10000) because non-VIPs never enter the index
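Sparse-index behavior is easy to mimic in a few lines. The data and attribute names follow the VIP example; the vip_index function is illustrative only, standing in for what DynamoDB does automatically:

```python
# Three customers in one Region; only two carry the VIPSince attribute.
customers = [
    {"Region": "us-east", "CustomerID": "C-1", "TotalSpent": 500},
    {"Region": "us-east", "CustomerID": "C-2", "TotalSpent": 15000,
     "VIPSince": "2023-04-01"},
    {"Region": "us-east", "CustomerID": "C-3", "TotalSpent": 22000,
     "VIPSince": "2021-09-15"},
]

def vip_index(items):
    """Simulate a sparse LSI: items lacking the sort key never enter it."""
    return sorted((i for i in items if "VIPSince" in i),
                  key=lambda i: i["VIPSince"])

vips = vip_index(customers)  # C-1 is simply absent from the index
```

No filter expression on TotalSpent is needed: the absence of VIPSince is the filter.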
When NOT to Use LSIs
- Unbounded growth per partition – If any partition key might exceed 10 GB (user-generated content, long-lived time-series), you will eventually hit the hard limit. Consider GSIs or sharding the partition key instead.
- Cross-partition queries – LSIs only reorder data within a single partition key value. If you need to query across many different partition keys (e.g., “recent orders across all customers” or “highest-value carts across regions”), use a GSI that can span partitions, or export data to an analytics store.
- Different partition key required – LSIs cannot change the partition key. If the new access pattern needs a new partition key, that is a GSI use case.
Limits and gotchas
- Maximum of five LSIs per table. Plan which access patterns are most important, because you cannot add more later without recreating the table. Tip: if you might need more than five sort patterns, consider composite sort keys (for example, STATUS#DATE, so values look like ACTIVE#2025-11-22). This common DynamoDB pattern lets one LSI answer multiple queries (using begins_with or range filters) instead of creating a dedicated index per attribute.
- 10 GB limit per partition key (base table + all LSIs combined). If all items that share one partition key value exceed 10 GB of storage (counting the base record and every indexed copy), DynamoDB returns ItemCollectionSizeLimitExceededException on writes. Wide partitions or large projections hit this faster than expected; monitor sizes with ReturnItemCollectionMetrics in your write operations and alert around ~8 GB so you have time to react (or emit custom CloudWatch metrics).
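The composite sort key tip can be sketched like this. The helper names are invented; begins_with here stands in for the DynamoDB key condition function of the same name:

```python
# Pack Status and OrderDate into one sort-key value (STATUS#DATE) so a
# single LSI can serve several query shapes.
def composite_sk(status, date):
    return f"{status}#{date}"

def begins_with(items, prefix):
    """Stand-in for a KeyConditionExpression like begins_with(SK, :prefix)."""
    return [i for i in items if i["SK"].startswith(prefix)]

rows = [
    {"SK": composite_sk("ACTIVE", "2025-11-22")},
    {"SK": composite_sk("ACTIVE", "2025-10-01")},
    {"SK": composite_sk("SHIPPED", "2025-11-20")},
]

active = begins_with(rows, "ACTIVE#")  # one index, status-scoped query
```

The same index also supports date-range queries within a status, since values like ACTIVE#2025-10-01 sort lexicographically by date inside each status prefix.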
Common pitfalls
- Forgetting to define LSIs up front – You cannot add or delete an LSI after the table exists. Recreating a production table to change indexes often means downtime or a complex migration, so front-load the design work.
- Over-projecting attributes – Copying attributes that you rarely read wastes storage and increases write cost. If a field is referenced on most queries, project it; otherwise leave it on the base table and pay the occasional fetch.
- Ignoring item collection size – Without monitoring the 10 GB per-partition limit, you may suddenly hit ItemCollectionSizeLimitExceededException once a partition grows too large.
- Using LSIs for cross-partition queries – LSIs only reorder data inside a single partition key. Use a GSI or another data store if you need to search across multiple partitions.