A curated list of database news from authoritative sources

May 28, 2025

Chapter 5: Multiversion Concurrency Control (Concurrency Control Book)

Chapter 5 of Concurrency Control and Recovery in Database Systems (1987) introduces multiversion concurrency control (MVCC), a fundamental advance over single-version techniques. Instead of overwriting data, each write operation creates a new version of the data item. Readers can access older committed versions without blocking concurrent writes or being blocked by concurrent writes.

MVCC removes read-write conflicts and increases concurrency significantly. Having multiple versions around gives the scheduler flexibility: if a read arrives "too late" to see the latest write, it can still proceed by accessing an older version. This avoids unnecessary aborts. Writes may still abort due to write-write conflicts, but reads are largely unimpeded. This is especially beneficial in read-heavy workloads.

This chapter presents three broad classes of multiversion methods: Multiversion Timestamp Ordering (MVTO), Multiversion Two-Phase Locking (MV2PL), and Multiversion Mixed Methods.

For all the benefits MVCC provides, the trade-off is additional storage and complexity in managing versions and garbage collection. This is a good tradeoff to take, and MVCC is the dominant concurrency control approach today. Oracle uses MV2PL. PostgreSQL uses MVCC natively, and MySQL uses MVCC in InnoDB; in both, reads get a consistent snapshot without locking, while writes create new versions and require locking at commit time. Microsoft Hekaton implements an MVTO-style engine in its in-memory OLTP system (see my 2022 post on Hekaton). Spanner may be best viewed as MVTO plus external certification: it uses multiversion reads and assigns commit timestamps via TrueTime, while writes acquire locks and are certified at commit to ensure strict serializability. Unlike MV2PL, reads never block, and unlike pure MVTO, writes are serialized through locking and timestamp-based validation.

Let's dig into the sections.


Multiversion Serializability Theory

To reason about the correctness of multiversion schemes, this section extends classical serializability theory. It defines MV histories (which include explicit version metadata) and 1V histories, which reflect what users see: a single logical version per item. The key challenge is to ensure that MV histories are equivalent to 1-serial 1V histories. A 1-serial MV history is one in which each read observes the latest committed version at the read's logical time. This avoids anomalies like fractured reads (e.g., reading a stale x but a fresh y written by the same transaction).

Correctness is characterized using a Multiversion Serialization Graph (MVSG). An MV history is one-copy serializable (1SR) iff its MVSG is acyclic. This parallels classical serializability theory and extends it with versioning. The rest of the section develops the correctness proof via MVSGs.
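
To make the MVSG condition concrete, here is a minimal sketch (plain Python; the edge representation is mine, not the book's) that checks acyclicity with a depth-first search:

# Minimal sketch: check whether an MVSG is acyclic via depth-first search.
# Nodes are transaction ids; edges stand for the reads-from and version-order
# dependencies described in the chapter (the representation is illustrative).

def mvsg_is_acyclic(edges):
    """edges: dict mapping a transaction id to the set of transactions it must precede."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in edges}

    def visit(t):
        color[t] = GRAY
        for succ in edges.get(t, ()):
            if color.get(succ, WHITE) == GRAY:   # back edge -> cycle
                return False
            if color.get(succ, WHITE) == WHITE and not visit(succ):
                return False
        color[t] = BLACK
        return True

    return all(visit(t) for t in edges if color[t] == WHITE)

# Example: T1 -> T2 -> T3 has no cycle, so the MV history is one-copy serializable.
print(mvsg_is_acyclic({"T1": {"T2"}, "T2": {"T3"}, "T3": set()}))  # True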


Multiversion Timestamp Ordering (MVTO)

MVTO generalizes timestamp ordering by storing multiple versions. Each transaction is assigned a unique timestamp at start. When a transaction Ti reads x, it finds the latest version of x with a timestamp less than TS(Ti). When Ti writes x, it buffers the write and, at commit, creates a new version tagged with TS(Ti).

MVTO guarantees serializability: transactions appear to execute in timestamp order. The main difference from single-version TO is that MVTO avoids aborting reads. Since older versions are preserved, reads always succeed. However, MVTO still aborts transactions on write-write conflicts. If Ti tries to write x, but another transaction Tj with TS(Tj) > TS(Ti) has already read an older version of x, Ti must abort to preserve timestamp order.
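
Here is a minimal sketch of those two rules, assuming a simple in-memory version list per item; the names and structures are illustrative, not the book's pseudocode, and the write is installed immediately rather than buffered until commit:

# Minimal MVTO sketch (illustrative names, not the book's pseudocode).
# Each item keeps a list of versions ordered by writer timestamp.

class Version:
    def __init__(self, value, wts):
        self.value = value
        self.wts = wts        # timestamp of the transaction that wrote it
        self.max_rts = wts    # largest timestamp of any transaction that read it

versions = {"x": [Version(0, 0)]}   # item -> versions, sorted by wts

def latest_before(item, ts):
    return max((v for v in versions[item] if v.wts < ts), key=lambda v: v.wts)

def read(item, ts):
    # Read rule: return the latest version with wts < TS(Ti); never blocks or aborts.
    v = latest_before(item, ts)
    v.max_rts = max(v.max_rts, ts)
    return v.value

def write(item, value, ts):
    # Write rule: if a younger transaction already read the version this write
    # would have superseded, the write arrives "too late" and Ti must abort.
    prev = latest_before(item, ts)
    if prev.max_rts > ts:
        raise RuntimeError(f"abort T{ts}: a later reader saw the preceding version")
    versions[item].append(Version(value, ts))
    versions[item].sort(key=lambda v: v.wts)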

MVTO is good for read-mostly workloads but struggles with high write contention. Garbage collection also becomes a concern. Old versions can be discarded only after all transactions that might read them complete.


Multiversion Two-Phase Locking (MV2PL)

MV2PL extends strict 2PL by adding versioning. Unlike 2PL, where transactions block when accessing locked items, MV2PL lets readers proceed by using older versions (e.g., accessing the latest committed version as of the lock time). 2PL systems block on read-write conflicts; MV2PL avoids this by separating read and write versions.

MV2PL also extends 2PL for writes by introducing certify locks: instead of overwriting in place, writers in MV2PL buffer their updates and acquire certify locks at commit time to serialize version creation. A certify lock is exclusive: only one transaction can hold it on a data item at any time. This prevents races and ensures only one new version per item per commit.
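
As a rough illustration (not the book's pseudocode), a certify lock can be modeled as a per-item exclusive lock that a writer must hold on every item in its write set before installing new versions:

# Rough illustration of MV2PL commit-time certification. Each writer buffers
# its updates, then holds the certify lock on every item in its write set
# while the new versions are installed.

import threading

certify_locks = {}              # item -> threading.Lock(), created lazily
registry_lock = threading.Lock()

def certify_lock_for(item):
    with registry_lock:
        return certify_locks.setdefault(item, threading.Lock())

def commit(txn_id, buffered_writes, install_version):
    """buffered_writes: dict item -> new value; install_version: callback that
    creates the new version once certification succeeds."""
    items = sorted(buffered_writes)               # fixed order avoids deadlocks
    locks = [certify_lock_for(i) for i in items]
    for lock in locks:
        lock.acquire()                             # exclusive: one certifier per item
    try:
        for item in items:
            install_version(item, buffered_writes[item], txn_id)
    finally:
        for lock in reversed(locks):
            lock.release()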


Multiversion Mixed Method

Multiversioning gives the scheduler more flexibility, especially for read-only transactions. If the system knows in advance which transactions are queries (read-only) and which are updaters (perform writes), it can increase concurrency by handling them differently. This method uses MVTO for queries and Strict 2PL for updaters.

Queries behave as in MVTO: they are assigned timestamps at start, read the latest version older than their timestamp, and never block. Updaters use Strict 2PL for mutual exclusion during execution. At commit, the transaction manager assigns each updater a timestamp consistent with its position in the serialization graph, ensuring global consistency. This hybrid approach prevents the out-of-order write-write conflicts seen in pure MVTO. It also resembles concurrency control in modern OLAP systems: large analytical reads proceed without blocking, while updates are carefully serialized.

A key innovation here is the commit list. Each transaction maintains a commit list of versions it plans to write. When committing, the transaction must check for conflicts: It cannot write a version if a transaction with a later timestamp has already read an earlier version of that item. This would violate timestamp order. In a centralized system, the commit list can be scanned at commit time to detect such anomalies. In distributed systems, where this check can’t be performed atomically, the system needs to use a distributed coordination protocol like 2PC.
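
A minimal sketch of that centralized check, with an illustrative read log rather than the book's exact data structures:

# Sketch of the mixed method's commit-time check (centralized case).
# An updater cannot install a version of an item if a query with a later
# timestamp has already read an earlier version of that item.

read_log = []   # list of (reader_ts, item, version_ts) recorded by queries

def can_commit(updater_ts, commit_list):
    """commit_list: set of items the updater intends to write at updater_ts."""
    for reader_ts, item, version_ts in read_log:
        if item in commit_list and reader_ts > updater_ts and version_ts < updater_ts:
            # A later query already read an older version of this item; installing
            # a version "in its past" now would violate timestamp order.
            return False
    return True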

Solve a Geospatial (GIS) problem with CedarDB!

Motivation

If you share my interest in finding things, then I hope you will find this brief post worthwhile. I’ve been interested in databases for a while now and, during this time, I’ve consistently been intrigued by text and spatial data. When I got my hands on CedarDB I was very excited about its potential, and then I heard a request from a user for a geospatial-related feature. He was kind enough to share his specific need with me, which was essentially: given a point, find all other points located within a specified distance. (In this exercise, we’ll show we can do this in about 10 ms on a 9M row table.)

May 27, 2025

Postgres 18 beta1: small server, cached Insert Benchmark

I recently published results for Postgres 18 beta1 on a small server using sysbench with a cached and IO-bound workload. This post has results for the Insert Benchmark on a small server with a cached workload and low concurrency.

tl;dr - for 17.5 vs 18 beta1

  • the l.i1 benchmark step (write-only with inserts and deletes) was ...
    • 5% slower in 18 beta1 with io_method=sync
    • ~10% slower in 18 beta1 with io_method=worker or io_uring
  • the point query benchmark steps (qp100, qp500, qp1000) were ...
    • 1% or 2% slower in 18 beta1 when using io_method=sync or worker
    • ~6% slower in 18 beta1 when using io_method=io_uring
tl;dr for 14.0 through 18 beta1
  • l.x (create index) is ~1.2X faster in 17.5 vs 14.0
  • l.i1, l.i2 (write-only) are ~5% slower in 17.5 vs 14.0
  • qp100, qp500, qp1000 (point query) are 1% to 3% slower in 17.5 vs 14.0

Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions 14.0, 14.18, 15.0, 15.13, 16.0, 16.9, 17.0, 17.5 and 18 beta1.

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM and one NVMe device for the database. The OS has been updated to Ubuntu 24.04 -- I used 22.04 prior to that. More details on it are here.

For Postgres versions 14.0 through 17.5 the configuration files are in the pg* subdirectories here with the name conf.diff.cx10a_c8r32. For Postgres 18 beta1 I used 3 variations of the configuration, which are here:
  • conf.diff.cx10b_c8r32
    • uses io_method='sync' to match Postgres 17 behavior
  • conf.diff.cx10c_c8r32
    • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
  • conf.diff.cx10d_c8r32
    • uses io_method='io_uring' to do async IO via io_uring
The Benchmark

The benchmark is explained here and is run with 1 client and 1 table with 20M rows.

The benchmark steps are:

  • l.i0
    • insert 20 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 40M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 10M rows are inserted and deleted per table.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
Results: overview

The performance reports are here for:
  • All versions -- 14.0 through 18 beta1
    • See here, this uses the results from 14.0 as the base version
  • Only 17.5 and 18 beta1
    • See here, this uses the results from 17.5 as the base version and there are three results for 18 beta1, one for each of the configurations listed above.
The summary sections linked above from the performance report have 3 tables. The first shows absolute throughput per DBMS tested and benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts; all systems sustained the target rates. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from either 14.0 or 17.5.

When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.
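
For reference, the relative QPS and the color buckets described above amount to the following (a trivial sketch; the thresholds are the ones stated in the post):

# Relative QPS and the color buckets used in the report.
def relative_qps(qps_me, qps_base):
    return qps_me / qps_base

def bucket(rel):
    if rel <= 0.95:
        return "red"      # regression
    if rel >= 1.05:
        return "green"    # improvement
    return "grey"         # within noise

print(bucket(relative_qps(940, 1000)))   # 0.94 -> red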

Results: 17.5 and 18 beta1

The performance summary is here.

The summary is:
  • the l.i0 (initial load) step was ...
    • 1% or 2% faster in 18 beta1 vs 17.5
  • the create index step (l.x) was ...
    • as fast with 18 beta1 as with 17.5 when using io_method=sync
    • 2% slower in 18 beta1 when using the new io_method=worker or io_uring
  • the l.i1 step was ...
    • 5% slower in 18 beta1 with io_method=sync
    • ~10% slower in 18 beta1 with io_method=worker or io_uring
  • the range query steps (qr100, qr500, qr1000) were ...
    • 1% to 3% slower in 18 beta1
  • the point query steps (qp100, qp500, qp1000) were ...
    • 1% or 2% slower in 18 beta1 when using io_method=sync or worker
    • ~6% slower in 18 beta1 when using io_method=io_uring
For regressions in the l.i1 step
  • This step does inserts and deletes as fast as possible with 50 rows per transaction. The regressions were smaller for the l.i2 step that only changes 5 rows per transaction.
  • From vmstat and iostat metrics 18 beta1 uses more CPU per operation (see cpupq here)
For regressions in the point query steps (qp100, qp500, qp1000)
  • The worst regression is from 18 beta1 with io_method=io_uring and the CPU/operation there is the largest. See cpupq for qp100, qp500 and qp1000.
Results: 14.0 through 18 beta1

The performance summary is here.

For 17.5 vs 18 beta1 see the previous section.

For 14.0 through 17.5, QPS on ...
  • l.i0 (the initial load) is stable
  • l.x (create index) is ~1.2X faster in 17.5 vs 14.0
  • l.i1, l.i2 (write-only) are ~5% slower in 17.5 vs 14.0
  • qr100, qr500, qr1000 (range query) are similar between 17.5 and 14.0
  • qp100, qp500, qp1000 (point query) are 1% to 3% slower in 17.5 vs 14.0

Google Firestore with MongoDB compatibility

In this series, I tested multiple MongoDB emulations on top of SQL databases, and all failed to be compatible with a simple query like listing the last orders for one product in one country:

db.orders.find(
 { 
   country_id: 1, 
   order_details: { $elemMatch: { product_id: 15 } } 
 } ).sort({ 
   created_at: -1 
 }).limit(10)

Those emulations are syntax-compatible, but not behavior-compatible when it comes to performance and scalability. With MongoDB, such a query immediately finds the ten documents using the following index:

db.orders.createIndex({ 
   "country_id": 1, 
   "order_details.product_id": 1,
   "created_at": -1 
});

It is simple: you index for the equality predicates, on country and product, and add the creation date to get the keys ordered. That's how you guarantee predictable performance in OLTP: the response time depends on the result, not on the size of the collection.

I tried the same on Google Firestore. Note that the simple find().sort().limit() syntax was not accepted by the Firestore Studio Editor, so I've run the equivalent aggregation pipeline:

db.orders.aggregate([  
{  
    $match: {  
      country_id: 1,  
      order_details: {  
        $elemMatch: { product_id: 15 }  
      }  
    }  
  },  
  {  
    $sort: { created_at: -1 }  
  },  
  {  
    $limit: 10  
  },  
])

Without an index, such a query does a full collection scan, sorts all documents, and discards all except the first ten:

Billing Metrics:
 read units: 0

Execution Metrics:
 results returned: 0
 request peak memory usage: 4.00 KiB (4,096 B)
 entity row scanned: 0

Tree:
• Drop
|  fields to drop: [__$3__]
|  records returned: 0
|  total latency: 3.00 ms
|
└── • Drop
    |  fields to drop: [__$6__, __$7__]
    |  records returned: 0
    |  total latency: 2.98 ms
    |
    └── • MajorSort
        |  fields: [__$6__ DESC]
        |  limit: 10
        |  peak memory usage: 4.00 KiB (4,096 B)
        |  records returned: 0
        |  total latency: 2.98 ms
        |
        └── • Extend
            |  expressions: [array_offset(__$7__, 0L) AS __$6__]
            |  records returned: 0
            |  total latency: 2.89 ms
            |
            └── • Extend
                |  expressions: [sortPaths([created_at DESC]) AS __$7__]
                |  records returned: 0
                |  total latency: 2.87 ms
                |
                └── • Drop
                    |  fields to drop: [__key__, __row_id__]
                    |  records returned: 0
                    |  total latency: 2.87 ms
                    |
                    └── • Extend
                        |  expressions: [_id(__name__) AS __id__]
                        |  records returned: 0
                        |  total latency: 2.87 ms
                        |
                        └── • Filter
                            |  expression: ($eq(country_id, 1) AND $eq(order_details, 15))
                            |  records returned: 0
                            |  total latency: 2.86 ms
                            |
                            └── • TableScan
                                   order: STABLE
                                   properties: * - { __create_time__, __update_time__ }
                                   source: **/orders
                                   records returned: 0
                                   records scanned: 0
                                   total latency: 2.84 ms

I've run this on an empty collection solely to examine the execution plan's shape and understand its scalability.

I attempted to create the index using mongosh, because Firestore provides protocol compatibility, but Google Cloud requires a credit card even for the free trial, which I did not accept. As a result, billing is not enabled and I can't use it:

firestore> db.orders.createIndex(
   { "country_id": 1, "order_details .product_id": 1, "created_at": -1 }
  );
MongoServerError[PermissionDenied]: Request is prohibited because billing is not enabled.
firestore> 

No problem, I was able to create the index from the console:

It was created after a few minutes:

I've run my query again and here is the execution plan (called EXPLANATION in Google Firestore):

Billing Metrics:
 read units: 0

Execution Metrics:
 results returned: 0
 request peak memory usage: 12.00 KiB (12,288 B)
 entity row scanned: 0
 index row scanned: 0

Tree:
• Drop
|  fields to drop: [__$3__]
|  records returned: 0
|  total latency: 2.04 s (2,040 ms)
|
└── • Drop
    |  fields to drop: [__$8__, __$9__]
    |  records returned: 0
    |  total latency: 2.04 s (2,040 ms)
    |
    └── • MajorSort
        |  fields: [__$8__ DESC]
        |  limit: 10
        |  peak memory usage: 4.00 KiB (4,096 B)
        |  records returned: 0
        |  total latency: 2.04 s (2,040 ms)
        |
        └── • Extend
            |  expressions: [array_offset(__$9__, 0L) AS __$8__]
            |  records returned: 0
            |  total latency: 2.04 s (2,040 ms)
            |
            └── • Extend
                |  expressions: [sortPaths([created_at DESC]) AS __$9__]
                |  records returned: 0
                |  total latency: 2.04 s (2,040 ms)
                |
                └── • Drop
                    |  fields to drop: [__key__, __row_id__]
                    |  records returned: 0
                    |  total latency: 2.04 s (2,040 ms)
                    |
                    └── • Extend
                        |  expressions: [_id(__name__) AS __id__]
                        |  records returned: 0
                        |  total latency: 2.04 s (2,040 ms)
                        |
                        └── • Filter
                            |  expression: $eq(order_details, 15)
                            |  records returned: 0
                            |  total latency: 2.04 s (2,040 ms)
                            |
                            └── • TableAccess
                                |  order: PRESERVE_INPUT_ORDER
                                |  properties: * - { __create_time__, __update_time__ }
                                |  peak memory usage: 4.00 KiB (4,096 B)
                                |  records returned: 0
                                |  total latency: 2.04 s (2,040 ms)
                                |
                                └── • UniqueScan
                                       index: **/orders (country_id ASC, order_details.product_id ASC, created_at DESC)@[id = CICAgJjF9oIK]
                                       keys: [country_id ASC, __$5__ ASC, created_at DESC, __key__ ASC]
                                       properties: Selection { __key__ }
                                       ranges: /
                                               |----[1]
                                       records returned: 0
                                       records scanned: 0
                                       total latency: 2.04 s (2,038 ms)

The index was used to scan a range (ranges: / |----[1]) for the "country_id" filter, apparently preserving some order (order: PRESERVE_INPUT_ORDER). This could be beneficial for pagination queries, allowing it to stop when the result limit is reached.
However, the product filter ($eq(order_details, 15)) is applied after fetching documents, resulting in unnecessary reads for a filter that was not covered by the index.
Next, projections are performed to include the "id" and remove the "rowid". It appears the preserved order does not relate to the key as expected for pagination, since some computation occurs to determine the sorting field (sortPaths([created_at DESC])).
Ultimately, a sort is performed on this calculated field (fields: [__$8__ DESC]). This execution plan reads all orders from a country before it can return the ten expected in the result. This is not scalable.

I ran this on an empty collection, and the table scan took two seconds (total latency: 2.04 s or 2,038 ms). Given this result, adding data to test larger workloads is unnecessary. The issue lies not in the quantity of data but in the quality of the compatibility, which is limited to very simple key-value queries and lacks the advantages of MongoDB’s flexible schema document model and multi-key index performance.

When creating the index, I checked 'multi-key' because, contrary to MongoDB, it's not the same index that can be created on scalars and arrays. Let's try a non-multi-key one, even if it doesn't make sense as the goal is to have multiple products per order:

The execution plan is similar except that it shows a SequentialScan on the index instead of UniqueScan:

Billing Metrics:
 read units: 0

Execution Metrics:
 results returned: 0
 request peak memory usage: 8.00 KiB (8,192 B)
 entity row scanned: 0
 index row scanned: 0

Tree:
• Drop
|  fields to drop: [__$3__]
|  records returned: 0
|  total latency: 20.36 ms
|
└── • Drop
    |  fields to drop: [__$8__, __$9__]
    |  records returned: 0
    |  total latency: 20.35 ms
    |
    └── • MajorSort
        |  fields: [__$8__ DESC]
        |  limit: 10
        |  peak memory usage: 4.00 KiB (4,096 B)
        |  records returned: 0
        |  total latency: 20.35 ms
        |
        └── • Extend
            |  expressions: [array_offset(__$9__, 0L) AS __$8__]
            |  records returned: 0
            |  total latency: 20.29 ms
            |
            └── • Extend
                |  expressions: [sortPaths([created_at DESC]) AS __$9__]
                |  records returned: 0
                |  total latency: 20.27 ms
                |
                └── • Drop
                    |  fields to drop: [__key__, __row_id__]
                    |  records returned: 0
                    |  total latency: 20.27 ms
                    |
                    └── • Extend
                        |  expressions: [_id(__name__) AS __id__]
                        |  records returned: 0
                        |  total latency: 20.26 ms
                        |
                        └── • Filter
                            |  expression: $eq(order_details, 15)
                            |  records returned: 0
                            |  total latency: 20.26 ms
                            |
                            └── • TableAccess
                                |  order: PRESERVE_INPUT_ORDER
                                |  properties: * - { __create_time__, __update_time__ }
                                |  peak memory usage: 4.00 KiB (4,096 B)
                                |  records returned: 0
                                |  total latency: 20.25 ms
                                |
                                └── • SequentialScan
                                       index: **/orders (country_id ASC, order_details.product_id ASC, created_at DESC)@[id = CICAgJjFqZMK]
                                       key ordering length: 4
                                       keys: [country_id ASC, __$5__ ASC, created_at DESC, __key__ ASC]
                                       properties: Selection { __key__ }
                                       ranges: /
                                               |----[1]
                                       records returned: 0
                                       records scanned: 0
                                       total latency: 20.16 ms

My interpretation is that, in all cases, Firestore with MongoDB compatibility cannot use indexes to cover a sort. Either it is multi-key, and entries have to be deduplicated, or it is single key, but the index only covers the filtering on the country, not the product or the creation date.

In summary, another database claims compatibility with MongoDB by using its protocol and offering a similar API for storing documents. This reinforces MongoDB's status as the de facto standard for document databases. However, as discussed in previous posts, storing JSON in a relational database does not convert it into a document database, and similarly, storing JSON in a key-value data store does not replace MongoDB. If your MongoDB application runs on one of those emulations, you are likely using it as a key-value datastore without using the full potential of a general-purpose document database.

Beyond Guesswork: Enterprise-Grade PostgreSQL Tuning with pg_stat_statements

Something’s slowing your database down, and everyone feels it. Dashboards drag. Reports run late. Engineers start rebooting services just to buy time. Nobody’s saying “the database is broken,” but something isn’t right. You know there’s a problem. What you don’t have is visibility. PostgreSQL isn’t going to raise its hand and tell you which queries […]

Sort on Array with Multi-Key Index

In the previous post, we discussed how MongoDB indexes retrieve documents ordered by a specific field, using published dates from a collection of YouTube video statistics as an example. Unlike many databases that allow only a single value per document, MongoDB's flexible schema also supports indexing within nested arrays.

The dataset I imported was not designed for searching on the video views, only to store them per video. It contains two arrays: one for days ("day.data") and another for corresponding daily view counts ("views.daily.data"). This schema was intended to retrieve all stats for a specific video at once. However, this blog series aims to demonstrate how a MongoDB document model can support a variety of additional use cases through secondary indexes, without modifying the collection schema.

Access pattern: videos with the highest daily views

This is straightforward, just create an index on the array of daily views data, using dot notation to define the path:

db.youstats.createIndex({ 
 "views.daily.data": -1      , // for Sort on maximum daily view
 commentsNumber: 1           , // for additional filter on comments
}); 

I created a descending index since most of my queries focus on the highest number of views. MongoDB indexes can be scanned both forward and backward.
When the indexed field is an array, it has multiple keys per document, but sorting must use one key. The semantics are easy:

  • A descending sort orders documents by the highest value in the array.
  • An ascending sort orders documents by the lowest value in the array.

Another way to think about it is that the index scan uses the first key encountered for each document and skips the others. On a forward scan with a descending index, it is the greatest value.
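
To see this semantic in isolation, here is a tiny simulation in plain Python (not MongoDB code): a descending sort ranks each document by the largest value in its array, an ascending sort by the smallest.

# Tiny simulation of multi-key sort semantics.
docs = [
    {"_id": "a", "views": [10, 500, 30]},
    {"_id": "b", "views": [400, 20]},
    {"_id": "c", "views": [5, 5, 5]},
]

top_desc = sorted(docs, key=lambda d: max(d["views"]), reverse=True)
top_asc  = sorted(docs, key=lambda d: min(d["views"]))

print([d["_id"] for d in top_desc])  # ['a', 'b', 'c'] -> ordered by max value
print([d["_id"] for d in top_asc])   # ['c', 'a', 'b'] -> ordered by min value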

To find the Top-3 documents by daily views, here is a simple query:

db.youstats.find({},{
  author: 1               ,
  title: 1                ,
  duration: 1             ,
  type: 1                 ,
  publishedDate: 1        ,
  commentsNumber: 1       ,
  // "views.daily": 1        ,
  // "day": 1                ,
}).sort({
  "views.daily.data": -1 
}).limit(3)

[
  {
    _id: 'ASO_zypdnsQ',
    commentsNumber: 1122436,
    author: 'officialpsy',
    publishedDate: '2013-04-13T11:59:04.000Z',
    title: 'PSY - GENTLEMAN M/V',
    duration: 234,
    type: 'video/3gpp'
  },
  {
    _id: 'My2FRPA3Gf8',
    commentsNumber: 1125301,
    author: 'MileyCyrusVEVO',
    publishedDate: '2013-09-09T16:00:38.000Z',
    title: 'Miley Cyrus - Wrecking Ball',
    duration: 222,
    type: 'application/x-shockwave-flash'
  },
  {
    _id: 'YoB8t0B4jx4',
    commentsNumber: 39370,
    author: 'SA Wardega',
    publishedDate: '2014-09-04T14:28:22.000Z',
    title: 'Mutant Giant Spider Dog (SA Wardega)',
    duration: 239,
    type: 'video/3gpp'
  }
]

A document shows up in the results when one of the daily view values in its array is among the highest. For example, the 3rd result was:

  {
    _id: 'YoB8t0B4jx4',
    commentsNumber: 39370,
    author: 'SA Wardega',
    publishedDate: '2014-09-04T14:28:22.000Z',
    title: 'Mutant Giant Spider Dog (SA Wardega)',
    day: {
      data: [
        Long('1409788800000'), Long('1409875200000'), Long('1409961600000'), Long('1410048000000'), Long('1410134400000'), Long('1410220800000'), Long('1410307200000'), Long('1410393600000'), Long('1410480000000'), Long('1410566400000'), Long('1410652800000'), Long('1410739200000'), Long('1410825600000'), Long('1410912000000'), Long('1410998400000'), Long('1411084800000'), Long('1411171200000'), Long('1411257600000'), Long('1411344000000'), Long('1411430400000'), Long('1411516800000'), Long('1411603200000'), Long('1411689600000'), Long('1411776000000'), Long('1411862400000'), Long('1411948800000'), Long('1412035200000'), Long('1412121600000'), Long('1412208000000'), Long('1412294400000'), Long('1412380800000'), Long('1412467200000'), Long('1412553600000'), Long('1412640000000'), Long('1412726400000'), Long('1412812800000'), Long('1412899200000'), Long('1412985600000'), Long('1413072000000'), Long('1413158400000'), Long('1413244800000'), Long('1413331200000'), Long('1413417600000'), Long('1413504000000'), Long('1413590400000'), Long('1413676800000'), Long('1413763200000'), Long('1413849600000'), Long('1413936000000'), Long('1414022400000'), Long('1414108800000'), Long('1414195200000'), Long('1414281600000'), Long('1414368000000')
      ]
    },
    duration: 239,
    type: 'video/3gpp',
    views: {
      daily: {
        data: [
          2964062, 17799094, 19526335, 14604160, 9606241, 5959851,  4090643,  3419126,  2907521, 2626169, 2195691,  1518943,  1251086,  1128994,  958318, 861349,   785797,   628364,   506154,  564079, 445417,   474349,   498093,   589038,  444256, 363379,   329318,   313375,   333627,  335226, 354050,   322087,   239715,   228562,  213420, 201771,   213078,   247715,   228587,  183759, 168511,   169992,   199282,   326091,  347602, 335237,   290271,   242939,   223959,  219971, 249009,   277773,   279301,   220609
        ]
      }
    }
  }

As this ran in a couple of milliseconds, it obviously didn't scan millions of documents and used the index efficiently. This is easy to check with explain():

db.youstats.find({},{
  author: 1               ,
  title: 1                ,
  duration: 1             ,
  type: 1                 ,
  publishedDate: 1        ,
  commentsNumber: 1       ,
  "views.daily": 1        ,
  "day": 1                ,
}).sort({
  "views.daily.data": -1 
}).limit(3).explain("executionStats").executionStats

...
  nReturned: 3,
  executionTimeMillis: 1,
  totalKeysExamined: 7,
  totalDocsExamined: 3,
...
    stage: 'LIMIT',
    nReturned: 3,
    limitAmount: 3,
...
      stage: 'PROJECTION_SIMPLE',
      nReturned: 3,
...
        stage: 'FETCH',
        nReturned: 3,
...
          stage: 'IXSCAN',
          nReturned: 3,
          keyPattern: { 'views.daily.data': -1, commentsNumber: 1 },
          isMultiKey: true,
          multiKeyPaths: {
            'views.daily.data': [ 'views.daily.data' ],
            commentsNumber: []
          },
          direction: 'forward',
          indexBounds: {
            'views.daily.data': [ '[MaxKey, MinKey]' ],
            commentsNumber: [ '[MinKey, MaxKey]' ]
          },
          keysExamined: 7,
          seeks: 1,
          dupsTested: 7,
          dupsDropped: 4
        }
      }
    }
  }
}

The index contains an entry for each "views.daily.data" value of all documents, starting with the highest daily view. The first key examined returns the recordid of the document with the highest daily view. The next key may correspond to the same video, which may have a high view count on another day, or to another one. To return 3 distinct documents, the scan must eliminate duplicate recordids. Here 4 duplicates were dropped, so 3+4=7 keys were read in total.

MongoDB utilizes WiredTiger B-Tree indexes not only to locate specific values, but also to navigate efficiently through the ranges of index bounds. Even if we were less lucky and had to examine 365 keys per document, reading one thousand index entries is still fast, as only 3 documents have to be fetched.

Access pattern: videos with the lowest daily views and no comments

The same index can also identify videos with the lowest daily views. To exclude those with no data, I only include entries that are available:

db.youstats.find({
  "views.daily.data": { $exists: true } ,
  "commentsNumber": { $lt: 1 } ,
},{
  author: 1               ,
  title: 1                ,
  duration: 1             ,
  type: 1                 ,
  publishedDate: 1        ,
  commentsNumber: 1       ,
  "views.daily": 1        ,
  //"day": 1                ,
}).sort({
  "views.daily.data": 1 
}).limit(3)

This query may not yield useful results, as it returns videos with no views on a given day, even if they had many views previously. Remember, ascending sort retrieves the lowest field value in the array, while descending sort retrieves the highest.

In this series on indexing MongoDB collections, I aim to demonstrate that a document model can support more use cases than initially anticipated during schema design. Although I imported a dataset not optimized for specific queries, introducing a couple of secondary indexes can enhance performance for various access patterns.
What's crucial during schema design is to ensure that all fields used for filtering are contained within the same document, allowing the index to retrieve the minimum set of documents efficiently.

For information on how the sort order behaves with different datatypes, please refer to the documentation: Comparison/Sort Order. This applies only to MongoDB, not to its emulations that claim compatibility. I tested Amazon DocumentDB, Azure CosmosDB, Oracle Database, and FerretDB, but none could effectively cover the sort operation with an index, and they all ended up scanning the entire collection for the queries presented, which is slower and cannot scale.

May 26, 2025

Equality with Multiple Values, Preserving Sort for Pagination

In the previous post, I created the following index to quickly find the last videos in one category (reminder: 1 means ASC, -1 means DESC):

db.youstats.createIndex({ 
 category: 1               , // for Equality on category
 publishedDate: -1         , // for Sort within each category
 duration: 1               , // for additional Range filtering
}); 

MongoDB can use this index even for queries that do not have an equality predicate on "category". Not many databases offer this possibility, and it helps to limit the number of indexes to create for an OLTP application.

Access pattern: videos in a list of categories

I ran the same query as in the previous post, but with a list of categories instead of one: Music, Entertainment and Film. This fits in the same access pattern and doesn't need another index. With up to 200 values in the list, MongoDB runs an index scan for each value and merges their sorted results:

db.youstats.find({  
  category: { $in: ["Music",  "Entertainment", "Film"] }  ,
  duration: { $gt: 10 }                                   , 
},{
  category: 1                 ,
  author: 1                   ,
  title: 1                    ,
  duration: 1                 ,
  type: 1                     ,
  publishedDate: 1            ,
}).sort({ publishedDate: -1 }).limit(10).explain("executionStats")

In total, 12 keys were read to find the 10 documents for the result, which is very efficient:

  nReturned: 10,
  executionTimeMillis: 5,
  totalKeysExamined: 12,
  totalDocsExamined: 10,

Each value from the list was a seek into the B-Tree for each category:

                indexBounds: {
                  category: [ '["Entertainment", "Entertainment"]' ],
                  publishedDate: [ '[MaxKey, MinKey]' ],
                  duration: [ '(10, inf.0]' ]
                },
                keysExamined: 6,
                seeks: 1,
...
                indexBounds: {
                  category: [ '["Film", "Film"]' ],
                  publishedDate: [ '[MaxKey, MinKey]' ],
                  duration: [ '(10, inf.0]' ]
                },
                keysExamined: 1,
                seeks: 1,
...
                indexBounds: {
                  category: [ '["Music", "Music"]' ],
                  publishedDate: [ '[MaxKey, MinKey]' ],
                  duration: [ '(10, inf.0]' ]
                },
                keysExamined: 5,
                seeks: 1,

Such OR-Expansion (the IN could be an OR), exists in other databases, but MongoDB goes further by using a sort-merge algorithm on top of it to preserve the order from each index scan (called "explode for sort optimization"), visible in the execution plan with a SORT_MERGE stage. This sort preservation is vital in OLTP scenarios, especially for pagination queries, as it avoids scanning all documents to sort them for page retrieval.
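
The effect of that SORT_MERGE stage can be approximated by merging already-sorted streams, one per category. Here is a sketch in plain Python using heapq.merge (illustrative only, not MongoDB internals):

# Sketch of the explode-for-sort idea: one stream per category value, each
# already sorted by publishedDate DESC (as an index scan on
# (category, publishedDate DESC) would return it), merged while preserving
# order so only a handful of entries are consumed for the first page.

import heapq
from itertools import islice

music         = [("2024-05-01", "m1"), ("2024-03-11", "m2"), ("2023-12-25", "m3")]
entertainment = [("2024-04-20", "e1"), ("2024-01-02", "e2")]
film          = [("2024-02-14", "f1")]

merged = heapq.merge(music, entertainment, film,
                     key=lambda row: row[0], reverse=True)

page = [doc for _, doc in islice(merged, 4)]   # limit(4) without a full sort
print(page)   # ['m1', 'e1', 'm2', 'f1']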

Such a technique requires a list of values for the query planner to define the index bounds, so it cannot be used directly for the opposite $nin or $ne. However, another optimization allows getting such a list very efficiently.

Access pattern: distinct categories (with Loose Index Scan)

This gets the distinct value for "category":

db.youstats.distinct("category")
[
  '3',         '4',
  '5',         'Animals',
  'Autos',     'Comedy',
  'Education', 'Entertainment',
  'Film',      'Games',
  'Howto',     'Music',
  'News',      'Nonprofit',
  'People',    'Shows',
  'Sports',    'Tech',
  'Trailers',  'Travel'
]

It is extremely fast, and to explain why, I ran the same in an aggregation pipeline with explain:

db.youstats.aggregate([  
  { $group: { _id: "$category" } }  
]).explain("executionStats").stages[0]["$cursor"].executionStats

{
  executionSuccess: true,
  nReturned: 20,
  executionTimeMillis: 0,
  totalKeysExamined: 20,
  totalDocsExamined: 0,
  executionStages: {
    isCached: false,
    stage: 'PROJECTION_COVERED',
    nReturned: 20,
    executionTimeMillisEstimate: 0,
    works: 21,
    advanced: 20,
    transformBy: { category: 1, _id: 0 },
    inputStage: {
      stage: 'DISTINCT_SCAN',
      nReturned: 20,
      executionTimeMillisEstimate: 0,
      works: 21,
      advanced: 20,
      keyPattern: { category: 1, publishedDate: -1, duration: 1 },
      indexName: 'category_1_publishedDate_-1_duration_1',
      isMultiKey: false,
      multiKeyPaths: { category: [], publishedDate: [], duration: [] },
      direction: 'forward',
      indexBounds: {
        category: [ '[MinKey, MaxKey]' ],
        publishedDate: [ '[MaxKey, MinKey]' ],
        duration: [ '[MinKey, MaxKey]' ]
      },
      keysExamined: 20
    }
  }
}

By examining only 20 keys, MongoDB was able to find the distinct values of the first field of the index, skipping ahead while scanning the B-Tree. Not many databases can do that (as exposed in the PostgreSQL Wiki), and it can be combined with the sort-merge optimization we have seen above.

Access pattern: for any category, sorted by publishing date

When you know that there are fewer than 200 distinct values in the first field of the key, you can use the same index even without a filter on that field, by injecting the result of db.youstats.distinct("category"), which is fast:

db.youstats.find({  
  category: { $in: db.youstats.distinct("category") },
  duration: { $gt: 10 }, 
},{
  category: 1                 ,
  author: 1                   ,
  title: 1                    ,
  duration: 1                 ,
  type: 1                     ,
  publishedDate: 1            ,
}).sort({ publishedDate: -1 }).limit(10).explain("executionStats")

It had to examine only 29 keys out of one million to get the ten documents for the result:

    nReturned: 10,
    executionTimeMillis: 1,
    totalKeysExamined: 29,
    totalDocsExamined: 10,

This can also be used to find all categories except one:

db.youstats.distinct("category").filter(
 category => category !== "People"
)
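
As a sketch, this filtered list can then be passed to $in in the same query shape as above, which keeps the merge-sort plan while excluding the "People" category:

db.youstats.find({
  // the distinct list minus "People" emulates a $ne filter on category
  category: { $in: db.youstats.distinct("category").filter(c => c !== "People") },
  duration: { $gt: 10 },
}).sort({ publishedDate: -1 }).limit(10)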

Of course, an index without "category" in front will serve this query slightly better, but not having to create a new index is a big advantage for general purpose applications that cover multiple use cases.

Skip Scan for all categories

To run my query for all categories, I extracted the list with a DISTINCT_SCAN and injected it so that the plan can use a SORT_MERGE. I could instead read all categories with category: { $gt: MinKey }, but that is a single IXSCAN that returns entries sorted by category before published date, so it must go through a SORT:

db.youstats.find({  
  category: { $gt: MinKey }   ,
  duration: { $gt: 10 }       , 
},{
  category: 1                 ,
  author: 1                   ,
  title: 1                    ,
  duration: 1                 ,
  type: 1                     ,
  publishedDate: 1            ,
}).sort({ publishedDate: -1 }).limit(10).explain("executionStats")

...
        stage: 'FETCH',
        nReturned: 10,
        executionTimeMillisEstimate: 406,
        works: 1006099,
        advanced: 10,
        needTime: 1006088,
        docsExamined: 10,
...
          stage: 'SORT',
          nReturned: 10,
          executionTimeMillisEstimate: 405,
          works: 1006099,
          advanced: 10,
          needTime: 1006088,
          sortPattern: { publishedDate: -1 },
          memLimit: 104857600,
          limitAmount: 10,
          totalDataSizeSorted: 1506,
          inputStage: {
            stage: 'IXSCAN',
            nReturned: 991323,
            executionTimeMillisEstimate: 237,
            works: 1006088,
            advanced: 991323,
            needTime: 14764,
            keyPattern: { category: 1, publishedDate: -1, duration: 1 },
            indexName: 'category_1_publishedDate_-1_duration_1',
            isMultiKey: false,
            multiKeyPaths: { category: [], publishedDate: [], duration: [] },
            direction: 'forward',
            indexBounds: {
              category: [ '(MinKey, MaxKey]' ],
              publishedDate: [ '[MaxKey, MinKey]' ],
              duration: [ '(10, inf.0]' ]
            },
            keysExamined: 1006087,
            seeks: 14765,
...

Even though it read and sorted a million index entries, this query executed in less than half a second because it didn't have to fetch the documents: the index covered the filtering and sort fields. Even if an index is not the best one for avoiding a sort, it may still be sufficient for pagination queries. Always look at the execution plan and ensure that the FETCH operation doesn't read more documents than necessary.

This post illustrates that you may not need to create separate indexes for new queries. An optimal index for one use case can still effectively serve others. In these examples, the most selective filtering was achieved through sort().limit() pagination, making the best index include the sorting field in its key. A prefix with few values still provides good performance, whether used with $eq or $in, and a suffix can cover additional filters or projections.

Can I use Supabase for analytics?

Supabase is a popular managed Postgres with a bunch of great features. Learn how to use Supabase to build simple user-facing analytics systems, and when to pair Supabase with technologies optimized for analytics.

May 25, 2025

Postgres 18 beta1: large server, sysbench

This has performance results for Postgres 18 beta1 and 17.4 from the sysbench benchmark and a large server. Results like this from me are usually boring because Postgres has done a great job at avoiding performance regressions over time. This work was done by Small Datum LLC and not sponsored.

The workload here is cached by Postgres and my focus is on regressions from new CPU overhead or mutex contention.

tl;dr

  • scan is faster for 18 beta1 
  • some range queries without aggregation are ~3% slower for 18 beta1
  • some writes are 2% to 5% slower for 18 beta1

I have work in progress to reproduce and/or explain this. There might be regressions, or this might just be noise that is to be expected from a complex system. But when all three of the 18 beta1 results show a problem, I suspect there is a regression. The many cases where 18 beta1 is slower than 17.4 are accompanied by an increase in CPU and context switches, so I hope to explain why 18 beta1 does that.

Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions 17.4 and 18 beta1. I got the source for 18 beta1 from github using the REL_18_BETA1 tag. I started this benchmark effort a few days before the official release.

The server is an ax162-s from Hetzner with an AMD EPYC 9454P processor, 48 cores, AMD SMT disabled and 128G RAM. The OS is Ubuntu 22.04. Storage is 2 NVMe devices with SW RAID 1 and ext4. More details on it are here.

The config file for 17.4 is conf.diff.cx10a_c32r128.

The config files for 18 beta1 are:
  • conf.diff.cx10b_c8r32
    • uses io_method='sync' to match Postgres 17 behavior
  • conf.diff.cx10c_c8r32
    • uses io_method='worker' and io_workers=32 to do async IO via a thread pool. I eventually learned that 32 is too large but I don't think it matters much on this workload.
  • conf.diff.cx10d_c8r32
    • uses io_method='io_uring' to do async IO via io_uring

Benchmark

I used sysbench and my usage is explained here. To save time I only run 27 of the 42 microbenchmarks and most test only 1 type of SQL statement. Benchmarks are run with the database cached by Postgres.

The tests run with 40 clients and 8 tables with 10M rows per table. The read-heavy microbenchmarks run for 300 seconds and write-heavy run for 600 seconds.

The command line to run all tests is:  bash r.sh 8 10000000 300 600 $deviceName 1 1 40

Results

I don't provide graphs in this post to save time and because there are few to no regressions from Postgres 17.4 to 18 beta1. The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

I do provide tables below with relative QPS. The relative QPS is the following:
(QPS for PG 18beta1) / (QPS for PG 17.4)
When the relative QPS is > 1 then 18 beta1 is faster than 17.4. When it is < 1 then there might be a regression. Values from iostat and vmstat divided by QPS are also provided. These can help explain why something is faster or slower because they show how much HW is used per request.
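
As a small illustration (my own sketch, not part of the benchmark scripts), both derived metrics are simple ratios:

# Sketch of the two derived metrics used in the tables below (not part of r.sh).
def relative_qps(qps_new: float, qps_base: float) -> float:
    # > 1 means the new version is faster than the base version
    return qps_new / qps_base

def per_query(counter_per_sec: float, qps: float) -> float:
    # e.g. context switches/s divided by queries/s gives cs/o (per query)
    return counter_per_sec / qps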

Results: Postgres 18 beta1 vs 17.4

For the results I compare throughput from Postgres 17.4 with 18 beta1 using the three configurations listed above: cx10b_c8r32, cx10c_c8r32 and cx10d_c8r32. 

The hardware efficiency metrics, counters from iostat and vmstat normalized by QPS, are here.

For these tests the database should be cached, the database directory (size on disk) is ~24G and shared_buffers is set to 96G.

The results are interesting and require more debugging to explain: there might be regressions, or this might be natural jitter in my results. However, when the jitter makes all three of the 18 beta1 results worse, I don't think it is jitter.
  • scan is faster in pg18 beta1 than 17.4
  • there might be a small regression (about 3%) in 18 beta1 for range queries without aggregation
    • using range-covered-pk* as the example, CPU/query is larger in 18 beta1, see cpu/o here
  • there might be small regressions for writes in 18 beta1
    • for update-nonindex* the CPU, context switches and KB written to storage /query are larger in 18 beta1, see cpu/o, cs/o, and wKB/o here
    • While update-nonindex* is the outlier for an increase in wKB/o, the increases in CPU/query occur in most cases for 18 beta1

Relative to: PG 17.4 with cx10a_c32r128
col-1 : PG 18b1git with cx10b_c32r128
col-2 : PG 18b1git with cx10c_c32r128
col-3 : PG 18b1git with cx10d_c32r128

col-1   col-2   col-3   --> point queries
0.99    1.02    1.00    hot-points_range=100
0.99    0.98    1.01    point-query_range=100
1.00    1.01    0.99    points-covered-pk_range=100
1.03    1.00    1.03    points-covered-si_range=100
0.99    1.00    0.99    points-notcovered-pk_range=100
1.02    1.00    1.02    points-notcovered-si_range=100
1.00    1.01    1.00    random-points_range=1000
1.00    1.01    0.99    random-points_range=100
1.00    1.02    1.00    random-points_range=10

col-1   col-2   col-3   --> range queries without aggregation
0.96    0.96    0.97    range-covered-pk_range=100
0.96    0.96    0.96    range-covered-si_range=100
0.97    0.98    0.97    range-notcovered-pk_range=100
0.99    1.00    0.99    range-notcovered-si_range=100
1.13    1.02    1.13    scan_range=100

col-1   col-2   col-3   --> range queries with aggregation
1.01    1.00    1.01    read-only-count_range=1000
1.00    1.00    1.00    read-only-distinct_range=1000
0.99    1.00    0.99    read-only-order_range=1000
1.01    1.01    1.01    read-only_range=10000
0.99    1.00    0.99    read-only_range=100
0.97    0.98    0.98    read-only_range=10
0.99    1.00    1.00    read-only-simple_range=1000
1.00    1.00    1.00    read-only-sum_range=1000

col-1   col-2   col-3   --> writes
0.97    1.00    0.99    delete_range=100
0.97    0.98    0.97    insert_range=100
0.98    0.98    0.98    read-write_range=100
0.96    0.98    0.97    read-write_range=10
1.00    0.99    0.99    update-index_range=100
1.00    1.00    1.00    update-inlist_range=100
0.94    0.96    0.94    update-nonindex_range=100
0.90    1.03    0.98    update-one_range=100
0.96    0.96    0.95    update-zipf_range=100
1.00    1.00    0.99    write-only_range=10000

Postgres 18 beta1: small server, sysbench, IO-bound

This has performance results for Postgres 18 beta1 and several older Postgres releases using the sysbench benchmark and a small server. The difference between this post and the previous post is that the working set here is larger than memory while it was cached by Postgres in the previous post.

I almost always run sysbench with a cached workload so there might be more noise in the results here.

Results like this from me are usually boring because Postgres has done a great job at avoiding performance regressions over time. This work was done by Small Datum LLC and not sponsored.

tl;dr - with io_method='worker' in Postgres 18 beta1

  • IO-bound scans are 1.53X faster
  • Context switches per query are up ~10X. I don't know if this is a problem.
  • But I had set io_workers to 16 which was probably too big

tl;dr - with io_method='io_uring' in Postgres 18 beta1

  • IO-bound scans are 1.29X faster
  • Context switches per query are up ~5X. I don't know if this is a problem.

tl;dr - with io_method='sync' in Postgres 18 beta1

  • Performance is similar to Postgres 17.5 which is good

Builds, configuration and hardware

I used a small server with 8 cores, 32G of RAM and 1 NVMe device. More details are in the previous post.

For Postgres versions 14.0 through 17.5 the configuration files are in the pg* subdirectories here with the name conf.diff.cx10a_c8r32. For Postgres 18 beta1 the configuration files are here and I used 3 variations, which are here:
  • conf.diff.cx10b_c8r32
    • uses io_method='sync' to match Postgres 17 behavior
  • conf.diff.cx10c_c8r32
    • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large but I don't think it matters much on this workload.
  • conf.diff.cx10d_c8r32
    • uses io_method='io_uring' to do async IO via io_uring

Benchmark

I used sysbench and my usage is explained here. To save time I only run 27 of the 42 microbenchmarks and most test only 1 type of SQL statement. For most of the microbenchmarks the working set is larger than memory.

The tests run with 1 client and 1 table with 500M rows. The read-heavy microbenchmarks run for 300 seconds and write-heavy run for 600 seconds.

The command line to run all tests is:  bash r.sh 1 500000000 300 600 $deviceName 1 1 1

Results

I don't provide graphs in this post to save time and because there are few to no regressions from Postgres 17.5 to 18 beta1. The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

I do provide tables below with relative QPS. The relative QPS is the following:
(QPS for $some_version) / (QPS for $base_version)
And $base_version is either Postgres 17.5 or Postgres 14.0 as specified below. When the relative QPS is > 1 then $some_version is faster than $base_version.  When it is < 1 then there might be a regression.

Values from iostat and vmstat divided by QPS are also provided. These can help explain why something is faster or slower because they show how much HW is used per request.

Results: Postgres 18 beta1 vs 17.5

For the results here $base_version is Postgres 17.5 and that is compared with Postgres 18 beta1 using the three configurations listed above: cx10b_c8r32, cx10c_c8r32 and cx10d_c8r32. 

The hardware efficiency metrics, counters from iostat and vmstat normalized by QPS, are here.

Results for the scan microbenchmark are much better with new configs I can use with 18 beta1:
  • for the cx10c_c8r32 config that uses io_method='worker' and io_workers=16 
    • throughput is 1.53X better than Postgres 17.5
    • the rate of context switches per query is almost 10X larger with this config, see cs/o here
    • the IO efficiency rates (reads/query & read KB /query) are similar to 17.5
    • I assume that 16 is too large for io_workers
  • for the cx10d_c8r32 config that uses  io_method='io_uring'  
    • throughput is 1.29X better than Postgres 17.5
    • the rate of context switches per query is about 5X larger with this config, see cs/o here
    • the IO efficiency rates (reads/query & read KB /query) are similar to 17.5

More iostat and vmstat metrics from the scan microbenchmark

Legend:
* r/s - iostat reads /s
* rMB/s - iostat read MB /s
* r/o - iostat reads /query
* rKB/o - iostat read KB /query
* o/s - queries /s
* cs/s - context switches /s
* cpu/s - cpu utilization (vmstat us + sy)
* cs/o - context switches /query
* cpu/o - cpu microseconds /query

r/s     rMB/s   r/o     rKB/o   o/s
3717.7  434.5   1.787   213.909 2080
6845.4  651.3   2.170   211.381 3155
5015.9  556.7   1.878   213.417 2671

cs/s    cpu/s   cs/o    cpu/o
1699    13.1    0.817   0.006282
23639   20.2    7.493   0.006395
12487   17.1    4.675   0.006417

Relative QPS per microbenchmark

Relative to: PG 17.5 with cx10a_c8r32
col-1 : PG 18 beta1 with cx10b_c8r32
col-2 : PG 18beta1 with cx10c_c8r32
col-3 : PG 18beta1 with cx10d_c8r32

col-1   col-2   col-3   --> point queries
1.00    1.00    1.01    hot-points_range=100
0.99    0.99    0.99    point-query_range=100
1.00    1.00    1.02    points-covered-pk_range=100
1.01    1.02    1.00    points-covered-si_range=100
1.00    0.99    0.99    points-notcovered-pk_range=100
0.99    0.98    0.99    points-notcovered-si_range=100
1.00    1.00    1.00    random-points_range=1000
1.00    0.99    1.00    random-points_range=100
0.99    0.95    1.02    random-points_range=10

col-1   col-2   col-3   --> range queries, part 1, no aggregation
0.97    0.97    0.99    range-covered-pk_range=100
0.98    0.98    1.00    range-covered-si_range=100
0.99    0.99    0.99    range-notcovered-pk_range=100
0.99    0.99    1.00    range-notcovered-si_range=100
1.01    1.53    1.29    scan_range=100

col-1   col-2   col-3   --> range queries, part 2, with aggregation
0.99    1.00    1.04    read-only-count_range=1000
0.99    0.99    0.99    read-only-distinct_range=1000
0.98    0.99    0.99    read-only-order_range=1000
0.99    0.98    0.98    read-only_range=10000
0.99    0.99    0.99    read-only_range=100
0.99    0.99    0.99    read-only_range=10
0.99    0.99    0.99    read-only-simple_range=1000
0.99    0.99    0.99    read-only-sum_range=1000

col-1   col-2   col-3   --> writes
0.99    1.00    1.00    delete_range=100
0.99    0.98    0.98    insert_range=100
0.99    0.99    0.99    read-write_range=100
1.00    0.99    0.99    read-write_range=10
1.00    1.00    0.99    update-index_range=100
0.99    0.97    1.00    update-inlist_range=100
1.02    1.02    0.99    update-nonindex_range=100
0.98    0.98    0.99    update-one_range=100
1.00    1.00    0.98    update-zipf_range=100
1.00    1.00    1.00    write-only_range=10000

Results: Postgres 14.0 through 18 beta1

For the results here $base_version is Postgres 14.0 and that is compared with more recent Postgres releases. The purpose for this is to see how performance changes over time. Postgres 18beta1 is between 7% slower and 13% faster than 14.0 and there are more improvements than regressions. This is yet another boring result from Postgres, but it is great to see that it continues to focus on CPU efficiency.

The hardware efficiency metrics, counters from iostat and vmstat normalized by QPS, are here.

The data below is also here which can be easier to read.

The results for 18 beta1 are similar to 17.5.

Relative to: PG 14.0 with cx10a_c8r32
col-1 : PG 14.18 with cx10a_c8r32
col-2 : PG 15.0 with cx10a_c8r32
col-3 : PG 15.13 with cx10a_c8r32
col-4 : PG 16.0 with cx10a_c8r32
col-5 : PG 16.9 with cx10a_c8r32
col-6 : PG 17.0 with cx10a_c8r32
col-7 : PG 17.5 with cx10a_c8r32
col-8 : PG 18 beta1 with cx10b_c8r32

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   --> point queries
0.99    1.00    1.00    0.99    1.00    2.03    2.00    2.00    hot-points_range=100
0.99    0.99    0.99    0.99    0.99    0.99    1.00    0.98    point-query_range=100
0.98    1.01    1.00    1.02    1.01    0.99    1.00    1.01    points-covered-pk_range=100
1.00    0.98    0.97    1.00    1.00    0.97    0.97    0.98    points-covered-si_range=100
1.00    1.00    1.00    1.00    1.00    0.98    1.00    1.00    points-notcovered-pk_range=100
1.01    1.01    1.01    1.01    1.01    1.00    1.01    1.00    points-notcovered-si_range=100
1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00    random-points_range=1000
1.00    1.00    1.00    1.00    1.00    0.96    1.00    1.00    random-points_range=100
1.00    1.00    1.00    1.00    1.00    1.00    1.00    0.99    random-points_range=10

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   --> range queries, no aggregation
0.98    0.98    0.96    0.97    0.96    0.99    0.98    0.95    range-covered-pk_range=100
0.97    0.97    0.97    0.98    0.96    0.99    0.97    0.95    range-covered-si_range=100
1.00    0.99    1.00    0.99    0.99    0.98    0.99    0.98    range-notcovered-pk_range=100
1.01    1.00    1.01    1.01    1.01    1.00    1.01    1.00    range-notcovered-si_range=100
0.94    0.87    0.85    0.94    0.96    0.97    0.96    0.97    scan_range=100

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   --> range queries, with aggregation
1.00    1.00    1.00    1.00    0.99    0.99    1.00    0.99    read-only-count_range=1000
0.99    0.99    0.99    1.01    1.00    1.00    1.01    0.99    read-only-distinct_range=1000
0.99    1.00    1.01    1.02    1.00    1.01    1.02    1.00    read-only-order_range=1000
1.00    1.02    1.01    1.03    1.03    1.03    1.05    1.03    read-only_range=10000
1.00    0.99    0.99    1.00    1.00    1.00    1.00    0.99    read-only_range=100
0.99    0.99    0.99    1.00    0.99    1.00    1.00    0.99    read-only_range=10
0.99    0.98    0.99    0.99    0.99    0.99    1.00    0.98    read-only-simple_range=1000
1.00    0.99    1.00    1.00    0.99    1.00    1.00    0.99    read-only-sum_range=1000

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   --> writes
0.99    0.99    0.99    0.99    0.99    0.99    1.00    1.00    delete_range=100
1.01    0.99    0.99    0.99    1.00    1.01    1.01    1.00    insert_range=100
1.02    1.05    1.03    1.04    1.01    0.97    0.97    0.97    read-write_range=100
1.03    1.03    1.04    1.04    1.03    0.94    0.94    0.94    read-write_range=10
1.03    1.03    1.03    1.04    1.02    1.05    1.04    1.04    update-index_range=100
0.99    0.99    0.99    0.99    0.99    1.05    1.05    1.04    update-inlist_range=100
1.01    1.03    1.02    1.02    1.01    1.03    1.02    1.05    update-nonindex_range=100
0.98    0.99    0.99    0.99    0.99    1.11    1.11    1.08    update-one_range=100
1.00    0.99    1.00    1.00    1.00    0.98    0.98    0.98    update-zipf_range=100
1.00    1.00    1.00    1.00    1.00    0.89    0.89    0.89    write-only_range=10000


How to build CI/CD pipelines for real-time analytics projects

Learn how to build a proper CI/CD pipeline for your real-time analytics project. This follow-up post shows you how to implement automated testing, versioning, and deployment for your real-time data APIs.

May 24, 2025

No Index for LIKE on JSONB with Array in the Path

Here is an example where using PostgreSQL as a document database will quickly fail: indexing. Either use an RDBMS as a relational database, or use a document database. In the previous post, we learned that B-Tree indexes can be created when there is no array in the JSON path, but this approach may lead to reduced performance since it relies on an expression-based index. However, when you use a document database, you embed some arrays, and the indexing possibilities are even more limited when emulating a document database with SQL and JSONB.

Here is an example where users may have more than one e-mail address:

create table users (
  id bigserial primary key,
  data jsonb not null
);

insert into users (data) values (
 jsonb_build_object(  
    'name', 'Homer Simpson',  
    'email', jsonb_build_array(  
      'donutlover@springfieldusa.com',  
      'homerdoh@simpsons.com',  
      'lazy.sofa.guy@tvcharacters.net'  
    )  
  )  
 );

INSERT INTO users (data)   
SELECT   
  jsonb_build_object(  
    'name', 'u' || n::text,  
    'email', jsonb_build_array(  
      'u' || n::text || '@compuserve.com'  
    )  
  )  
FROM generate_series(1, 1000000) n;  

PostgreSQL has a JSON operator @> to find the document that contains a value in an array:

SELECT *   
FROM users  
WHERE data->'email' @> '"donutlover@springfieldusa.com"'
;  

An expression index on (data->'email') cannot be used for such a query, as the indexed value would be the whole array. To index each item, I need to create an inverted index:

CREATE INDEX idx_users_data_email ON users USING GIN (
 (data->'email') jsonb_path_ops
);  

I check that the index can be used for a query by e-mail:

set enable_seqscan to off;

explain (analyze, verbose, buffers, costs off, serialize text)
SELECT *   
FROM users  
WHERE data->'email' @> '"donutlover@springfieldusa.com"'
;  

                                          QUERY PLAN                                            
-------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on public.users (actual time=0.014..0.015 rows=1 loops=1)
   Output: id, data
   Recheck Cond: ((users.data -> 'email'::text) @> '"donutlover@springfieldusa.com"'::jsonb)
   Heap Blocks: exact=1
   Buffers: shared hit=3
   ->  Bitmap Index Scan on idx_users_data_email (actual time=0.006..0.006 rows=1 loops=1)
         Index Cond: ((users.data -> 'email'::text) @> '"donutlover@springfieldusa.com"'::jsonb)
         Buffers: shared hit=2
 Planning:
   Buffers: shared hit=27
 Planning Time: 0.169 ms
 Serialization: time=0.005 ms  output=1kB  format=text
 Execution Time: 0.034 ms
(13 rows)

This is fast because I was looking for an exact match. However, indexes are supposed to optimize more than that. Querying for a partial match where I know only the prefix is much more complex to write, and it cannot use the index:

set enable_seqscan to off;

explain (analyze, verbose, buffers, costs off, serialize text)
SELECT *   
FROM users  
WHERE EXISTS (  
  SELECT 1  
  FROM jsonb_array_elements_text(data->'email') AS email  
  WHERE email LIKE 'donutlover@%'  
);

                                                     QUERY PLAN                                                      
---------------------------------------------------------------------------------------------------------------------
 Seq Scan on public.users (actual time=45.032..844.109 rows=1 loops=1)
   Output: users.id, users.data
   Filter: EXISTS(SubPlan 1)
   Rows Removed by Filter: 1000000
   Buffers: shared hit=12346
   SubPlan 1
     ->  Function Scan on pg_catalog.jsonb_array_elements_text email (actual time=0.001..0.001 rows=0 loops=1000001)
           Function Call: jsonb_array_elements_text((users.data -> 'email'::text))
           Filter: (email.value ~~ 'donutlover@%'::text)
           Rows Removed by Filter: 1
 Planning Time: 0.071 ms
 Serialization: time=0.007 ms  output=1kB  format=text
 Execution Time: 844.523 ms
(13 rows)

If you want to index such a query in PostgreSQL, you need to forget about the JSONB structure and process it as plain text, with a text-search index using trigrams:

CREATE EXTENSION IF NOT EXISTS pg_trgm;  
CREATE INDEX idx_users_data_email_trgm ON users USING GIN (
 (data->>'email') gin_trgm_ops
); 

The JSON operator ->> returns text, and the gin_trgm_ops operator class extracts trigrams from it to build an inverted index. I can pre-filter on the array expanded as text, with data->>'email', and search for the '%"donutlover@%' pattern with a LIKE. This uses the index but may return false positives, so I need to add my EXISTS with JSONB_ARRAY_ELEMENTS_TEXT to re-check before returning the result. Finally, here is the query that can find a user with an e-mail starting with "donutlover", and its execution plan:

explain (analyze, verbose, buffers, costs off, serialize text)
SELECT *  
FROM users  
WHERE data->>'email' LIKE '%"donutlover@%'
AND EXISTS (  
  SELECT 1  
  FROM jsonb_array_elements_text(data->'email') AS email  
  WHERE email LIKE 'donutlover@%'  
); 

                                                  QUERY PLAN                                                   
---------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on public.users (actual time=0.035..0.036 rows=1 loops=1)
   Output: users.id, users.data
   Recheck Cond: ((users.data ->> 'email'::text) ~~ '%"donutlover@%'::text)
   Filter: EXISTS(SubPlan 1)
   Heap Blocks: exact=1
   Buffers: shared hit=13
   ->  Bitmap Index Scan on idx_users_data_email_trgm (actual time=0.015..0.015 rows=1 loops=1)
         Index Cond: ((users.data ->> 'email'::text) ~~ '%"donutlover@%'::text)
         Buffers: shared hit=12
   SubPlan 1
     ->  Function Scan on pg_catalog.jsonb_array_elements_text email (actual time=0.007..0.007 rows=1 loops=1)
           Function Call: jsonb_array_elements_text((users.data -> 'email'::text))
           Filter: (email.value ~~ 'donutlover@%'::text)
 Planning:
   Buffers: shared hit=1
 Planning Time: 0.102 ms
 Serialization: time=0.003 ms  output=1kB  format=text
 Execution Time: 0.060 ms
(18 rows)

PostgreSQL can logically function as a document database, but nobody wants to go there given the complexity of indexing and querying it. If you use a SQL database, such data must be normalized, requiring two tables for the One-to-Many relationship between users and their e-mails. Regular indexes are then sufficient for effective LIKE predicates. The query will use a join, with the query planner ideally starting with the appropriate table if filters are applied to both.
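
As a minimal sketch of that normalized alternative (table and index names here are hypothetical), the e-mails move to a child table where a plain B-Tree index with text_pattern_ops supports the prefix LIKE:

-- hypothetical normalized schema: one row per e-mail address
create table user_emails (
  user_id bigint not null references users(id),
  email   text   not null,
  primary key (user_id, email)
);
-- text_pattern_ops lets a B-Tree index serve LIKE 'prefix%' predicates
create index idx_user_emails_email on user_emails (email text_pattern_ops);

-- prefix search uses the regular index, then joins back to users
select u.*
from users u
join user_emails e on e.user_id = u.id
where e.email like 'donutlover@%';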

There are valid reasons to prefer a document model: its simplicity, alignment with business entities and application objects, and the ease of querying a flexible structure. The previous solution is at the opposite of the expected simplicity. In MongoDB, you don't even need to know if the "email" field is a single value or an array:

db.users.insertMany([
{  
  "_id": 1,  
  "data": {  
    "name": "Homer Simpson",  
    "email": [  
      "donutlover@springfieldusa.com",  
      "homerdoh@simpsons.com",  
      "lazy.sofa.guy@tvcharacters.net"  
    ]  
  }  
},  
{  
  "_id": 2,  
  "data": {  
    "name": "Marge Simpson",  
    "email": "marge@springfieldusa.com"
  }  
}
]);

// Insert one million
const bulkUsers = [];  
for (let n = 3; n <= 1000002; n++) {  
  bulkUsers.push({  
    _id: n,  
    data: {  
      name: "u" + n,  
      email: [  
        "u" + n + "@compuserve.com"  
      ]  
    }  
  });  
  // Insert in batches of 10,000 for efficiency  
  if (bulkUsers.length === 10000) {  
    db.users.insertMany(bulkUsers);  
    bulkUsers.length = 0; // Clear the array  
  }  
}  
// Insert any remaining documents  
if (bulkUsers.length > 0) {  
  db.users.insertMany(bulkUsers);  
}  

I create a simple index:


db.users.createIndex({ "data.email": 1 });  

The query is straightforward, whether it involves an exact search or a regular expression.

db.users.find({  
  "data.email": "donutlover@springfieldusa.com" 
});

[
  {
    _id: 1,
    data: {
      name: 'Homer Simpson',
      email: [
        'donutlover@springfieldusa.com',
        'homerdoh@simpsons.com',
        'lazy.sofa.guy@tvcharacters.net'
      ]
    }
  }
]

db.users.find({  
  "data.email": {  
    $regex: "^donutlover@"  
  }  
});

[
  {
    _id: 1,
    data: {
      name: 'Homer Simpson',
      email: [
        'donutlover@springfieldusa.com',
        'homerdoh@simpsons.com',
        'lazy.sofa.guy@tvcharacters.net'
      ]
    }
  }
]

The execution plan shows the fastest possible access to the document:

...
 executionSuccess: true,
  nReturned: 1,
  executionTimeMillis: 1,
  totalKeysExamined: 2,
  totalDocsExamined: 1,
...
      stage: 'IXSCAN',
      nReturned: 1,
      executionTimeMillisEstimate: 0,
      works: 3,
      keyPattern: { 'data.email': 1 },
      indexName: 'data.email_1',
      isMultiKey: true,
      multiKeyPaths: { 'data.email': [ 'data.email' ] },
      direction: 'forward',
      indexBounds: {
        'data.email': [
          '["donutlover@", "donutloverA")',
          '[/^donutlover@/, /^donutlover@/]'
        ]
      },
      keysExamined: 2,
      seeks: 2,
...

With MongoDB, you don't have to choose between regular and inverted indexes or deal with their limitations. A single index on { 'data.email': 1 } can handle both scalar values and arrays. For arrays, MongoDB recognizes this as a multi-key (isMultiKey: true) and retrieves documents containing values that meet the filter criteria. This index can be used for equality and range queries, and regular expressions with a known prefix are automatically optimized by the query planner into index bounds.

When you hear that JSONB transforms PostgreSQL into a document database, consider trying simple queries beyond just equality predicates. Adding a MongoDB API on top of an SQL database addresses syntax complexity, but it does not resolve the limitations of the underlying indexes.

Chapter 4: Non-Locking Schedulers (Concurrency Control Book)

Chapter 4 of the Concurrency Control and Recovery in Database Systems book (1987) opens with a sentence that doesn't quite pass the grammar test: "In this chapter we will examine two scheduling techniques that do not use locks, timestamp ordering (TO) and serialization graph testing (SGT)." That comma is trying to do the job of a colon and failing at it. Precision matters, more so in technical writing.

The writing is otherwise clear and careful. And, as is par for the course with this book, it is ahead of its time. The chapter presents a spectrum of non-locking schedulers, starting from Basic TO, expanding into certifiers (which are essentially optimistic concurrency control), and ending with modular, composable scheduler designs that cleanly separate synchronization concerns between read-write and write-write synchronization.

Let's dig into the details.


Timestamp Ordering (TO)

Timestamp Ordering (TO) uses transaction start timestamps to impose a serial order on conflicting operations. For every data item x, it maintains the maximum timestamps of the Reads and Writes on x that it has sent to the data manager (DM), denoted max-r-scheduled[x] and max-w-scheduled[x], and coordinates with the DM to preserve ordering constraints.

Timestamp management in TO is a disguised form of watermarking and garbage collection.  The system purges "stale" max-r/max-w entries using a time threshold ts_min, and rejects operations too old to be safely scheduled.
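
A minimal sketch of the Basic TO accept/reject rules in Python (the data structures and names are my own simplification of the book's max-r-scheduled/max-w-scheduled):

# Basic TO (sketch): track, per item, the largest timestamps of reads and
# writes already sent to the DM, and reject operations that arrive too late.
max_r = {}  # item -> largest read timestamp scheduled so far
max_w = {}  # item -> largest write timestamp scheduled so far

def schedule_read(ts, x):
    if ts < max_w.get(x, 0):
        return "reject"                      # a younger transaction already wrote x
    max_r[x] = max(max_r.get(x, 0), ts)
    return "send r[x] to DM"

def schedule_write(ts, x):
    if ts < max_r.get(x, 0) or ts < max_w.get(x, 0):
        return "reject"                      # a younger transaction already read or wrote x
    max_w[x] = ts
    return "send w[x] to DM"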

Strict TO extends Basic TO with delayed visibility to ensure recoverability, cascading abort avoidance, and strictness. Reads and writes are withheld until prior conflicting writes are resolved via commit or abort acknowledgment. I was happy to predict this modification while reading. It’s the same trick repeated from the 2PL chapter: keep the locks all the way and delay read/write visibility until commit/abort. This technique also shows up in Conservative SGT and certifiers later in this chapter. It's a useful pattern.

Despite being aggressive (rejecting operations that arrive "late"), Strict TO is still more permissive than two-phase locking (2PL). It accepts more interleavings as serializable (see the example below), and reads never wait. This is great for read-heavy workloads.

The section on distributed TO is particularly interesting. TO is trivially parallelizable: each scheduler can operate independently using local metadata. Unlike distributed 2PL, it avoids inter-scheduler communication and deadlocks entirely. The global timestamp order serves as a natural tiebreaker. This is simple and powerful. With loosely synchronized clocks/hybrid logical clocks, you get a deadlock-free system that preserves serializability with minimal coordination. 

For more on this topic, see my review of the TO paper from VLDB'90 by Bernstein and Goodman here. I also wrote about how Amazon DynamoDB implements Basic TO before. This illustrates how Basic TO made it into production systems. It is a sweet deal: Synchronization helps performance, but safety does not depend on it.

It might be illustrative to contrast TO with 2PL here.

  • TO avoids locking overhead and deadlocks.
  • TO doesn't serialize operations unnecessarily.
  • TO is a natural fit for distributed systems, especially with loosely synchronized clocks. And it is great when conflicts are rare.


Serialization Graph Testing (SGT)

SGT builds the serialization graph (SG)  incrementally and aborts transactions to prevent cycles. This can, in theory, yield maximum concurrency under SER constraints, but it comes at a cost. It requires maintaining the SG, detecting cycles, and carefully pruning old transactions. These incur space and computation overhead.

Maybe the quickest way to introduce SGT is again to compare it with 2PL.

  • 2PL prevents cycles via locking. It blocks and delays operations, can deadlock, and is conservative. SGT detects cycles dynamically. It allows speculative progress, aborts instead of blocking, and is more permissive.
  • SGT is more general and precise than 2PL but costs more in metadata and coordination. 2PL, on the other hand, is simpler, and naturally enforces recoverability and strictness.

The Basic SGT approach has no space bounds unless you prune carefully. Even committed transactions can't be garbage-collected immediately. 


Conservative SGT can avoid aborts by requiring predeclared read/write sets and using readiness rules to delay operations. But this only works in restricted environments where access sets can be declared in advance. 

Trying to distribute SGT hits a wall. Global cycle detection is hard. Unlike deadlock detection (which can take its sweet time because deadlocked transactions are blocked) SGT must check cycles before committing. Since that's a high-frequency, high-cost operation, it doesn't scale.


Certifiers

Certifiers are optimistic schedulers. They execute operations without delay and validate serializability only at commit time. In other words, transactions proceed speculatively/optimistically, and the scheduler checks for conflicts at commit time. This model avoids blocking under light contention but wastes work under heavy contention.

The section presents three certifier styles: 2PL-based, TO-based, and SGT-based.

  • The 2PL certifier is textbook OCC. It tracks read/write sets and validates at commit. There's no locking, just late conflict detection.
  • TO certification is also straightforward, but redundant: it applies the same checks as Basic TO but defers them till the end. So Basic TO is better.
  • SGT certification builds the SG dynamically and checks for cycles at commit. It mirrors SGT scheduling but pushes all cycle detection to the end.

Certifiers seem to help for distributed transactions, as you can delay the cost of validation until the transaction decides to commit. Distributed certifiers would then engage in coordination, and this brings us straight into two-phase commit (2PC) territory. Each certifier votes on whether to commit or abort. The TM collects votes, makes a global decision, and informs all participants.
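
A rough Python sketch of the 2PL-based certifier mentioned above (textbook OCC), under my own simplifications: record read/write sets during execution and, at commit, abort if they conflict with any concurrently active transaction:

# 2PL-based certifier (sketch): operations execute immediately, the scheduler
# only records read/write sets, and validation happens at commit time.
active = {}  # txn id -> {"reads": set(), "writes": set()}

def begin(t):
    active[t] = {"reads": set(), "writes": set()}

def read(t, x):
    active[t]["reads"].add(x)    # no delay, no lock: just remember the access

def write(t, x):
    active[t]["writes"].add(x)   # same for writes

def certify(t):
    me = active.pop(t)           # the transaction leaves the active set either way
    for s in active.values():
        # any rw, wr or ww conflict with a still-active transaction forces an abort
        if (me["reads"] & s["writes"]) or (me["writes"] & s["reads"]) or (me["writes"] & s["writes"]):
            return "abort"
    return "commit"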


Integrated Schedulers

This is conceptually the most exciting part of the chapter. This section suggests decomposing concurrency control into two independent subproblems: read-write (rw) synchronization and write-write (ww) synchronization. Each subproblem can then be solved with any of the core techniques (2PL, TO, or SGT), and the solutions can be composed into a composite scheduler.

Thomas Write Rule (TWR) makes a cameo appearance here as a ww synchronizer. TWR simply discards a late write if it has already been overwritten by a newer write. No reordering or rejection is needed: just drop the write and move on, because it would be overwritten anyway. But TWR doesn't provide serializability on its own. It must be paired with a proper rw synchronizer to ensure proper transaction isolation. The section discusses how to combine TWR with TO-based rw synchronization to build an integrated scheduler.
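
A minimal sketch of TWR as a ww synchronizer (again, my own pseudocode, not the book's):

# Thomas Write Rule (sketch): a write that arrives with a timestamp older than
# the newest applied write is silently dropped instead of rejected.
last_w = {}   # item -> timestamp of the newest write applied so far

def twr_write(ts, x, value, storage):
    if ts < last_w.get(x, 0):
        return "ignored"          # a newer write already overwrote this value
    last_w[x] = ts
    storage[x] = value
    return "applied"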

The mixed scheduler (2PL for rw + TWR for ww) is also interesting. Reads are protected by locks, ensuring strictness and recoverability, while writes proceed optimistically and avoid unnecessary aborts. This enables non-blocking writes without sacrificing the guarantees of strict 2PL for reads.

The chapter claims that hundreds of correct schedulers can be built from these compositions, but it stops there; only two examples are given. I think, in practice, many modern systems (especially MVCC-based ones) implicitly implement composite schedulers. Snapshot reads impose TO-like constraints, while writes are often validated with locking. 

Postgres 18 beta1: small server, sysbench

This has performance results for Postgres 18 beta1 and several older Postgres releases using the sysbench benchmark and a small server. Results like this from me are usually boring because Postgres has done a great job at avoiding performance regressions over time. This work was done by Small Datum LLC and not sponsored.

tl;dr - from Postgres 17.5 to 18 beta1

  • there might be small regressions (1% to 4%) in a few range query microbenchmarks
  • there might be a few small improvements (<= 3%) in other microbenchmarks
  • overall there are neither big improvements nor big regressions which is great news

tl;dr - from Postgres 14.0 to 18 beta1

  • 18 beta1 ranges from 7% slower to 13% faster
  • there are more improvements than regressions
  • the largest regressions occur on range query microbenchmarks

Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions  14.0, 14.18, 15.0, 15.13, 16.0, 16.9, 17.0, 17.5 and 18 beta1.

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM and one NVMe device for the database. The OS has been updated to Ubuntu 24.04 -- I used 22.04 prior to that. More details on it are here.

For Postgres versions 14.0 through 17.5 the configuration files are in the pg* subdirectories here with the name conf.diff.cx10a_c8r32. For Postgres 18 beta1 the configuration files are here and I used 3 variations, which are here:
  • conf.diff.cx10b_c8r32
    • uses io_method='sync' to match Postgres 17 behavior
  • conf.diff.cx10c_c8r32
    • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large but I don't think it matters much on this workload.
  • conf.diff.cx10d_c8r32
    • uses io_method='io_uring' to do async IO via io_uring

Benchmark

I used sysbench and my usage is explained here. To save time I only run 27 of the 42 microbenchmarks and most test only 1 type of SQL statement. Benchmarks are run with the database cached by Postgres.

The tests run with 1 client and 1 table with 50M rows. The read-heavy microbenchmarks run for 300 seconds and write-heavy run for 600 seconds.

The command line to run all tests is:  bash r.sh 1 50000000 300 600 $deviceName 1 1 1

Results

I don't provide graphs in this post to save time and because there are few to no regressions from Postgres 17.5 to 18 beta1. The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

I do provide tables below with relative QPS. The relative QPS is the following:
(QPS for $some_version) / (QPS for $base_version)
And $base_version is either Postgres 17.5 or Postgres 14.0 as specified below. When the relative QPS is > 1 then $some_version is faster than $base_version.  When it is < 1 then there might be a regression.

Values from iostat and vmstat divided by QPS are also provided. These can help explain why something is faster or slower because they show how much HW is used per request.

Results: Postgres 18 beta1 vs 17.5

For the results here $base_version is Postgres 17.5 and that is compared with Postgres 18 beta1 using the three configurations listed above: cx10b_c8r32, cx10c_c8r32 and cx10d_c8r32. 

The hardware efficiency metrics, counters from iostat and vmstat normalized by QPS, are here.

From the relative QPS results below, there might be small regressions for the microbenchmarks highlighted in yellow -- range-covered-pk*, range-covered-si*, range-notcovered-pk*, read-only-distinct*. All of these do range queries, and the last also does aggregation. If there is a regression it is small, between 1% and 4%. However, that requires more investigation because small regressions are not easy to detect with sysbench.

The HW efficiency metrics show there is more CPU per query. The results are here for range-covered-pk*, range-covered-si*, range-notcovered-pk* and read-only-distinct*.

I will revisit this.

The results also show possible small improvements for many of the microbenchmarks. I highlight them in green and again am not certain about the results. The changes (both good and bad) are small enough in most cases that they can be from normal variance.

Relative to: PG 17.5 with cx10a_c8r32
col-1 : PG 18beta1 with cx10b_c8r32
col-2 : PG 18beta1 with cx10c_c8r32
col-3 : PG 18beta1 with cx10d_c8r32

col-1   col-2   col-3   --> point queries
1.01    1.01    0.99    hot-points_range=100
1.01    1.00    1.01    point-query_range=100
1.00    0.99    0.98    points-covered-pk_range=100
0.99    0.99    0.99    points-covered-si_range=100
1.00    1.01    1.01    points-notcovered-pk_range=100
1.01    1.00    1.00    points-notcovered-si_range=100
1.00    1.01    1.01    random-points_range=1000
1.03    1.01    1.02    random-points_range=100
1.01    1.01    1.02    random-points_range=10

col-1   col-2   col-3   --> range queries, part 1, no aggregation
0.98    0.98    0.98    range-covered-pk_range=100
0.98    0.99    0.99    range-covered-si_range=100
0.99    0.98    0.98    range-notcovered-pk_range=100
0.99    0.99    1.00    range-notcovered-si_range=100
1.02    1.02    1.03    scan_range=100

col-1   col-2   col-3   --> range queries, part 2, with aggregation
1.04    1.03    1.02    read-only-count_range=1000
0.97    0.96    0.96    read-only-distinct_range=1000
1.00    1.00    0.99    read-only-order_range=1000
1.00    1.00    1.00    read-only_range=10000
1.00    0.99    0.99    read-only_range=100
1.00    0.99    1.00    read-only_range=10
1.01    1.00    1.00    read-only-simple_range=1000
1.02    1.02    1.02    read-only-sum_range=1000

col-1   col-2   col-3   --> writes
0.99    0.99    0.99    delete_range=100
0.99    0.99    0.99    insert_range=100
1.00    0.99    0.99    read-write_range=100
1.00    0.99    1.00    read-write_range=10
0.99    1.00    1.00    update-index_range=100
1.00    0.99    0.99    update-inlist_range=100
1.00    1.00    1.00    update-nonindex_range=100
1.00    1.00    1.00    update-one_range=100
1.00    0.99    1.00    update-zipf_range=100
0.99    0.99    1.00    write-only_range=10000

Results: Postgres 14.0 through 18 beta1

For the results here $base_version is Postgres 14.0 and that is compared with more recent Postgres releases. The purpose for this is to see how performance changes over time. Postgres 18beta1 is between 7% slower and 13% faster than 14.0 and there are more improvements than regressions. This is yet another boring result from Postgres, but it is great to see that it continues to focus on CPU efficiency.

The hardware efficiency metrics, counters from iostat and vmstat normalized by QPS, are here.

The data below is also here which can be easier to read.

Relative to: PG 14.0 cx10a_c8r32
col-1 : PG 14.18 cx10a_c8r32
col-2 : PG 15.0 cx10a_c8r32
col-3 : PG 15.13 cx10a_c8r32
col-4 : PG 16.0 cx10a_c8r32
col-5 : PG 16.9 cx10a_c8r32
col-6 : PG 17.0 cx10a_c8r32
col-7 : PG 17.5 cx10a_c8r32
col-8 : PG 18beta1 cx10b_c8r32

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   --> point queries
1.00    1.03    1.03    1.03    1.03    1.98    1.94    1.95    hot-points_range=100
1.04    1.03    1.02    1.03    1.04    1.05    1.04    1.05    point-query_range=100
1.00    1.02    1.02    1.03    1.02    1.00    1.01    1.01    points-covered-pk_range=100
0.99    0.99    1.00    1.01    0.99    0.99    0.99    0.98    points-covered-si_range=100
1.00    1.00    1.00    1.02    1.01    0.99    0.99    0.99    points-notcovered-pk_range=100
0.99    0.99    0.99    0.99    0.99    0.99    0.98    0.99    points-notcovered-si_range=100
0.99    1.01    1.00    1.02    1.02    0.99    0.99    0.98    random-points_range=1000
1.00    1.00    1.00    1.01    1.01    0.99    0.98    1.00    random-points_range=100
1.01    1.02    1.01    1.03    1.03    1.02    1.00    1.01    random-points_range=10

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   --> range queries, part 1, no aggregation
0.98    0.98    0.95    0.97    0.97    0.98    0.98    0.95    range-covered-pk_range=100
0.97    0.97    0.96    0.98    0.97    0.97    0.97    0.95    range-covered-si_range=100
1.01    1.00    1.00    0.98    0.98    0.98    0.98    0.97    range-notcovered-pk_range=100
0.99    0.99    0.98    0.99    0.98    1.00    0.97    0.96    range-notcovered-si_range=100
0.90    0.80    0.78    0.89    1.02    0.91    0.91    0.93    scan_range=100

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   --> range queries, part 2, with aggregation
0.99    0.98    0.99    1.00    0.97    0.96    0.96    0.99    read-only-count_range=1000
1.01    1.03    1.02    1.03    1.02    1.01    1.02    0.99    read-only-distinct_range=1000
1.00    1.03    1.03    1.04    1.03    1.05    1.06    1.06    read-only-order_range=1000
1.00    1.03    1.03    1.05    1.04    1.04    1.04    1.04    read-only_range=10000
1.00    1.00    1.00    1.01    1.01    1.01    1.01    1.01    read-only_range=100
1.00    1.01    1.00    1.00    1.00    1.01    1.01    1.00    read-only_range=10
1.01    1.00    1.01    1.00    1.00    1.00    1.00    1.00    read-only-simple_range=1000
1.01    1.00    1.01    1.01    1.00    1.00    0.99    1.01    read-only-sum_range=1000

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   --> writes
1.00    1.02    0.98    0.98    0.98    1.09    1.10    1.09    delete_range=100
0.98    1.00    1.00    0.98    0.98    1.07    1.06    1.05    insert_range=100
1.00    1.00    1.00    1.01    1.01    1.02    1.03    1.02    read-write_range=100
1.00    1.01    1.00    1.01    1.01    1.01    1.01    1.01    read-write_range=10
0.99    1.01    0.99    1.00    1.00    1.07    1.07    1.06    update-index_range=100
1.01    1.02    1.01    1.01    1.02    0.99    1.02    1.02    update-inlist_range=100
1.01    1.03    1.00    1.01    1.02    1.07    1.09    1.09    update-nonindex_range=100
1.00    1.01    0.98    1.00    1.01    1.13    1.12    1.11    update-one_range=100
1.01    1.04    1.00    1.01    1.03    1.11    1.13    1.13    update-zipf_range=100
1.00    1.02    1.00    1.00    1.01    1.06    1.07    1.06    write-only_range=10000

May 23, 2025

Connect Amazon Bedrock Agents with Amazon Aurora PostgreSQL using Amazon RDS Data API

In this post, we describe a solution to integrate generative AI applications with relational databases like Amazon Aurora PostgreSQL-Compatible Edition using RDS Data API (Data API) for simplified database interactions, Amazon Bedrock for AI model access, Amazon Bedrock Agents for task automation and Amazon Bedrock Knowledge Bases for context information retrieval.

B-Tree for Equality, Sort, Range

When creating an index, you don't need the full details of the queries it will serve, but you do need the following access pattern characteristics (a SQL analogue is sketched right after the list):

  1. A key prefix with fields used for equality predicates to query one or multiple discrete values. This key can also be used for partitioning, sharding, hashing, distributing, or merging sorted ranges.
  2. The sorting fields to enable filtering over a continuous range of values, facilitate efficient sort and pagination, or simply get some entries clustered together.
  3. Additional selective fields that can be used by range predicates.
  4. Additional fields included to cover additional filters and projection and avoid fetching the document.
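
For readers more familiar with SQL, here is a minimal sketch of the same Equality, Sort, Range ordering as a relational composite index. The videos table and its columns are assumptions for illustration, not part of the dataset used in this post:

-- Equality field first, then the sort field, then the range field
-- (table and column names are hypothetical):
CREATE INDEX videos_category_published_duration
  ON videos (category, publishedDate DESC, duration);

-- Served by that index: equality on category, results already ordered
-- by publishedDate, with an additional range filter on duration:
SELECT author, title, duration, publishedDate
  FROM videos
 WHERE category = 'Music' AND duration > 30
 ORDER BY publishedDate DESC
 LIMIT 10;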

Maintaining indexes during inserts, deletes, and updates is synchronous, to guarantee consistency when querying through secondary indexes. Therefore, it's important to limit the number of indexes to a small set that supports your application's query performance.

In this post, I'll consider only the fields that have no array in the JSON path, so that there is at most one index entry per document.

Access pattern: videos by category sorted by publishing date

In the previous post, with a search index, I queried the "Music" category for videos longer than 30 seconds that were published recently. To rerun this query with regular indexes on the current state, and let the query planner select the appropriate index, I can adjust the aggregation pipeline to:

db.youstats.aggregate([  
  {  
    $match: {  
      category: "Music",  
      duration: { $gt: 30 }  
    }  
  },  
  {  
    $sort: {  
      publishedDate: -1  
    }  
  },  
  {  
    $limit: 10  
  },  
  {  
    $project: {  
      _id: 0,  
      author: 1,  
      title: 1,  
      duration: 1,  
      type: 1,  
      publishedDate: 1  
    }  
  }  
]);  

I can also run the same with a simple find():

db.youstats.find({  
  category: "Music",  
  duration: { $gt: 30 }       ,  
},{
  author: 1                   ,
  title: 1                    ,
  duration: 1                 ,
  type: 1                     ,
  publishedDate: 1            ,
}).sort({ publishedDate: -1 }).limit(10)

This is a simple query with Equality (to get one category), Sort (to fetch only the latest published), and Range (to bound the duration) on top-level fields, for which each document has a single value. Without an index, the query is slow.

I create an index with the key fields following the Equality, Sort, Range rule:

db.youstats.createIndex({ 
 category: 1               , // for Equality on category
 publishedDate: -1         , // for Sort within each category
 duration: 1               , // for additional Range filtering
}); 

Queries with a category filter go directly to the relevant B-Tree leaves, returning entries in order without extra seeks. While reading, it filters by duration and fetches the documents, stopping after finding 10 entries, as shown in the execution plan (.explain("executionStats").executionStats):

db.youstats.find({  
  category: "Music"       , 
  duration: { $gt: 10 }   ,  
},{
  author: 1               ,
  title: 1                ,
  duration: 1             ,
  type: 1                 ,
  publishedDate: 1        ,
}).sort({
  publishedDate: -1 }).limit(10).explain("executionStats").executionStats

The summary indicates that 10 keys were examined and 10 documents were fetched to return the 10 documents of the result:

  nReturned: 10,
  executionTimeMillis: 0,
  totalKeysExamined: 10,
  totalDocsExamined: 10,

The IXSCAN details how it did it:

         indexBounds: {
            category: [ '["Music", "Music"]' ],
            publishedDate: [ '[MaxKey, MinKey]' ],
            duration: [ '(10, inf.0]' ]
          },
          keysExamined: 10,
          seeks: 1,

The first index bound selects a single range for category: "Music", the second reads that range in order of publishedDate, and the third skips entries with duration outside (10, infinity), without changing the order.

It is often suggested that indexes should begin with the most selective field, but that is a misconception. Here I have the best index for this query even though the number of categories is limited. The misconception arises because an index on a selective field can serve multiple queries, since selective fields are often used for filters. However, we will see in the next post that this index can also serve different filters.
Additionally, MongoDB indexes in the WiredTiger engine use prefix compression, which is more effective when keys share a common prefix. The less space they take, the more index pages stay in memory.

In the next post, we will explore how such indexes can still be utilized with less selective filtering on categories, and that the low cardinality of the first field in the key can be advantageous.

What Oracle Missed, We Fixed: More Performant Query Processing in Percona Server for MySQL

At Percona, we constantly search for ways to make query processing more performant. Our activities include continuous monitoring of Percona Server for MySQL performance by doing performance regression tests. We also challenge Percona Server for MySQL with newly designed tests and analyze bottlenecks for possible improvements. Among our activities in this area is monitoring what […]

No Index Only Scan on JSONB Fields (even with scalar values)

On reddit, a PostgreSQL user was trying to use the SQL database as a document database, with all data in a JSONB column, but faced a performance issue with a simple GROUP BY on 100 million rows:

Select (data->>'referenceId'), count(*) 
From items 
Group by (data->>'referenceId') 
having count(*) > 1;

Some suggested creating an expression index, which the user had already implemented, while others proposed moving this field to a table column, a solid suggestion. Additionally, many requested the execution plan. Overall, this thread features valuable insights from knowledgeable contributors. Depesz added a wise recommendation:

For starters, putting everything in json is going to cause serious performance and storage problems sooner or later.

You can create GIN indexes on JSONB fields in PostgreSQL, but those will not help with ORDER BY or GROUP BY because they are accessed through a Bitmap Scan, which doesn't provide range or sorted access like regular B-Tree indexes.
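
As a sketch of that limitation, assuming the items table created below, a GIN index can answer containment predicates, but the bitmap it produces carries no key order, so it cannot feed an ORDER BY or GROUP BY without a sort:

-- GIN index on the whole JSONB document (containment queries only);
-- lookups produce a bitmap in heap order, not in key order:
CREATE INDEX items_data_gin ON items USING gin (data jsonb_path_ops);

-- Can use the GIN index:
SELECT count(*) FROM items WHERE data @> '{"referenceId": "42"}';

-- Cannot use it for ordering or grouping; those still need a sort:
SELECT data->>'referenceId', count(*) FROM items GROUP BY 1;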

It is possible to use a B-Tree index on JSONB fields in PostgreSQL, but only if there's no array in the path. Fortunately, that is the case here.

To investigate, I create the table with the field stored both in the JSONB document and as a regular column, to compare both possibilities:

create table items (
  id bigserial primary key,
  referenceId text,
  data jsonb not null
);

insert into items (referenceId,data)
 select '42', jsonb_build_object( 'referenceId', '42', 'filler', repeat('x', 4000) )
  from generate_series(1,1000000)
;

On the regular column, I can create a regular index. On the JSON field, I can create an expression index:

create index on items( referenceId );

create index on items( ((data->>'referenceId')) );

To avoid a full table scan without having to insert too many rows, I disable sequential scans (there are no query planner hints in PostgreSQL, unfortunately) and I vacuum the table to ensure a fresh visibility map:

set enable_seqscan to off;
vacuum analyze items;

A GROUP BY on the regular column uses the index efficiently, with an Index Only Scan (because all necessary columns are covered by the index entry) and no Heap Fetches (because it was freshly vacuumed):

explain (analyze, verbose, buffers, costs off, serialize text)
Select referenceId, count(*)
From items
Group by referenceId
having count(*) > 1;

                                                     QUERY PLAN                                                     
--------------------------------------------------------------------------------------------------------------------
 GroupAggregate (actual time=181.750..181.750 rows=1 loops=1)
   Output: referenceid, count(*)
   Group Key: items.referenceid
   Filter: (count(*) > 1)
   Buffers: shared hit=1 read=844
   ->  Index Only Scan using items_referenceid_idx on public.items (actual time=0.045..84.469 rows=1000000 loops=1)
         Output: referenceid
         Heap Fetches: 0
         Buffers: shared hit=1 read=844
 Planning:
   Buffers: shared hit=8
 Planning Time: 0.113 ms
 Serialization: time=0.006 ms  output=1kB  format=text
 Execution Time: 181.777 ms
(14 rows)

This is the best plan, reading 844 pages to aggregate one million rows. Can we have the equivalent with an expression-based index on a JSON field? Unfortunately not:

explain (analyze, verbose, buffers, costs off, serialize text)
Select (data->>'referenceId'), count(*)
From items
Group by (data->>'referenceId')
having count(*) > 1;

                                               QUERY PLAN                                                
---------------------------------------------------------------------------------------------------------
 GroupAggregate (actual time=1081.667..1081.668 rows=1 loops=1)
   Output: ((data ->> 'referenceId'::text)), count(*)
   Group Key: (items.data ->> 'referenceId'::text)
   Filter: (count(*) > 1)
   Buffers: shared hit=9108 read=8978 written=1819
   ->  Index Scan using items_expr_idx on public.items (actual time=0.030..983.316 rows=1000000 loops=1)
         Output: (data ->> 'referenceId'::text)
         Buffers: shared hit=9108 read=8978 written=1819
 Planning:
   Buffers: shared hit=6
 Planning Time: 0.101 ms
 Serialization: time=0.004 ms  output=1kB  format=text
 Execution Time: 1081.693 ms
(13 rows)

There's no Index Only Scan: the Index Scan had to read the table, with more than 10x as many pages read. In my case the JSON is not too large and compresses well, but this could be worse.

The issue arises not from the JSONB datatype itself, but because it must be indexed with an expression. The PostgreSQL query planner does not expand this expression to recognize that the index is covering the query. This is documented as:

PostgreSQL's planner is currently not very smart about such cases. It considers a query to be potentially executable by index-only scan only when all columns needed by the query are available from the index

The workaround is to add the underlying columns in the INCLUDE clause, but you cannot have expressions there, and adding the whole JSON field would defeat the goal.
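
If restructuring is not an option, one possible workaround (a sketch under assumptions, not something tested in this post) is a stored generated column, available since PostgreSQL 12, which extracts the scalar into a real column that can then be indexed and covered like any other:

-- Promote the JSONB scalar to a stored generated column, then index it
-- (the column name referenceId_gen is a hypothetical choice):
ALTER TABLE items
  ADD COLUMN referenceId_gen text
  GENERATED ALWAYS AS (data->>'referenceId') STORED;

CREATE INDEX ON items (referenceId_gen);

The document stays the source of truth, at the cost of storing the extracted scalar twice; grouping on the generated column can then use an Index Only Scan like the regular column above.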

The solution is to use a SQL database for what it's made for, with columns in tables, especially for what must be indexed, the schema-on-write part. If you want to use a document database, then MongoDB has no problem indexing fields within JSON documents:

db.items.createIndex( { referenceId: 1 } )

db.items.aggregate([  
  { $match: { referenceId: { $gt: MinKey } } },  // to skip documents missing the field
  { $group: {  _id: "$referenceId",  count: { $sum: 1 }  }  },  
  { $match: {  count: { $gt: 1 } } },  
  { $project: { referenceId: "$_id", count: 1 } }  
]).explain();  

...
 queryPlan: {
   stage: 'GROUP',
   planNodeId: 3,
   inputStage: {
     stage: 'PROJECTION_COVERED',
     planNodeId: 2,
     transformBy: { referenceId: true, _id: false },
     inputStage: {
       stage: 'IXSCAN',
       planNodeId: 1,
       keyPattern: { referenceId: 1 },
       indexName: 'referenceId_1',
       isMultiKey: false,
       multiKeyPaths: { referenceId: [] },
       isUnique: false,
       isSparse: false,
       isPartial: false,
       indexVersion: 2,
       direction: 'forward', 
       indexBounds: { referenceId: [ '(MinKey, MaxKey]' ] }
     }
   }
 }
...

MongoDB doesn't need expression indexes on document fields, and can efficiently get the entries sorted to cover the projection and grouping without additional hashing or sorting. The explain() output above shows how the index is used.

In PostgreSQL, you need to normalize the indexed keys into columns instead of JSONB fields to achieve comparable indexing performance. Note that MongoDB emulations on top of PostgreSQL use GIN or RUM indexes, which are not suitable for range, sort, or group optimizations, leaving the issue unresolved. Only MongoDB provides native indexing performance on documents.
To use databases effectively, employ relational databases for what they are designed for, a normalized, use-case-agnostic schema, and document databases for what they are designed for, a model optimized for the domain access patterns. This plays to the strengths of each database type.

PostgreSQL (with JSONB) and MongoDB (with schema)

In recent years, MongoDB and PostgreSQL have gained popularity, representing the NoSQL and SQL camps, respectively. Each is optimized for a different data modeling approach: PostgreSQL serves as a general-purpose relational database, while MongoDB functions as a general-purpose document database. While one is more data-centric and the other more application-centric, both can handle similar workloads. PostgreSQL offers flexibility beyond the relational model with its JSONB datatype, and MongoDB's flexible schema allows some normalization through references and schema validation.

However, designing a document database as a relational one is inefficient, and vice versa. Although PostgreSQL can superficially resemble a document database with its JSON datatypes, it is not optimized for that role: its block format, query planner behavior, indexing options, and other components were not designed for document structures.

In this series of blog posts, I'll address some of the problems people may face when they use PostgreSQL as a document database, to bring some facts to discussions like those I've seen on Reddit:

The idealist argues that JSON in PostgreSQL can replace a document database (reddit):

Because you can just create a table in postgres that is a key and a JSON field and boom, you have a document store. It's really hard to find an advantage that mongo brings at that point, postgres is better in almost every way even at being a document store

The pragmatist knows that it cannot work at scale (reddit):

I'm not a DBA or anything but this is terrible data design (which you inherited so obviously not your fault). Just chuck the entire data structure as a single json column SMH!

Please follow along for the next posts. Here are the slides from my talk at PostgreSQL Conference Germany on this topic:

Schedule - PGConf.DE 2025 — PostgreSQL Conference Germany 2025

Normalize or De-normalize? Relational SQL Columns or JSON Document Attributes?

postgresql.eu

Be cautious of unverified claims in marketing presentations or social media. Always fact-check information, as execution plans reveal the true scalability of data access. Note that indexing on JSONB has several limitations compared to MongoDB Indexes.

May 22, 2025

Amazon Aurora Global Database introduces support for up to 10 secondary Regions

In this post, we dive deep into Amazon Aurora Global Database's new support for up to 10 secondary Regions and explore use cases it unlocks. An Aurora Global Database consists of one primary Region and up to 10 read-only secondary Regions for low-latency local reads.

Search Index for Reporting

In the first post of this series, I've imported a sample dataset, and I'll show how adding a few indexes can open up new use cases. Before looking at regular indexes for OLTP, I'll show that it is easy to create one Search Index for on-demand reporting queries.

With Search Index: OLAP Ad-hoc queries for reporting

For analytic reporting, the best option is to create an Atlas Search index, which is maintained asynchronously, isolated from the operational workload, for near-real-time queries. I describe the fields I want to index:

db.youstats.createSearchIndex(
  "SearchYoustats", {  
    mappings: {
      dynamic: false,   
      fields: {  
        type: {  type: "token"   },
        duration: {  type: "number"  },
        commentsNumber: {  type: "number"  },
        publishedDate: {  type: "token"   },
        category: {  type: "token"  },
        title: {  type: "string"  },
        description: {  type: "string"  },
        author: {  type: "string"  },
        views: {  
          type: "document",
          fields: {  
            daily: {
              type: "document",  
              fields: {  
                data: {  type: "number"  }
              }  
            }  
          }
        }  
      }  
    }  
  }  
);  

There is no need to know the access patterns in advance: just list the fields you want to index, without thinking about an ordered key.

To find videos in the "Music" category that are longer than 30 seconds and have been published recently (I want only the Top-10), here's an efficient query on this index:

db.youstats.aggregate([  
  {      
    $search: {      
      index: "SearchYoustats",     
      compound: {      
        filter: [  
          { equals: { path: "category", value: "Music" } },    
          { range: { path: "duration", gt: 30 } }  
        ]      
      },  
      sort: {  
        "publishedDate": -1  
      }  
    }  
  },  
  {      
    $limit: 10  
  },      
  {      
    $project: {  
      _id: 0,  
      author: 1,              
      title: 1,  
      duration: 1,  
      type: 1,  
      publishedDate: 1  
    }    
  }  
]);  

The same index can also be used for text search, for example looking for "MongoDB" in the description:

db.youstats.aggregate([  
  {          
    $search: {          
      index: "SearchYoustats",     
      text: {         
        query: "MongoDB",  // The search term  
        path: "description"      // Targeting the 'description' field  
      }      
    }  
  },  
  {          
    $group: {  
      _id: "$category",                      // Group by category  
      count: { $sum: 1 },                    // Count the number of documents in each category  
      totalDuration: { $sum: "$duration" }   // Sum the duration for each category  
    }      
  },  
  {  
    $sort: { totalDuration: -1 }            // Sort by total duration in descending order  
  }  
]); 

  { _id: 'Tech', count: 2, totalDuration: 308 },
  { _id: 'Education', count: 1, totalDuration: 118 }

Many new use cases can be served by Atlas Search with little effort. This is perfect for analytic reporting queries that must not impact the operational workload. However, for OLTP systems that query the current state with specific access patterns, it's essential to create dedicated indexes. We will cover that in the next posts.

Sampling Without Index

I won't create any indexes in this post yet, but I can still run a query in milliseconds. However, since I haven't indexed any values, the only efficient filter is a random sample, which doesn't select on any particular value.

Data was imported in the first post of this series.

Without indexes, queries on a sample

Without indexes, scanning the whole collection would be inefficient, but I can scan a sample to get some insights about the data. For example, I list and count the categories of videos from a random sample of ten thousand:

db.youstats.aggregate([  
  { $sample: { size: 10000 } },  
  { $group: { _id: "$category", count: { $sum: 1 } , maxDuration: { $max: "$duration" } } },  
  {  $sort: { count: -1 } }  
]).explain('executionStats')

[
  { _id: 'Music', count: 1841, maxDuration: '99' },
  { _id: 'People', count: 1358, maxDuration: '99' },
  { _id: 'Entertainment', count: 1024, maxDuration: '99' },
  { _id: 'Education', count: 949, maxDuration: '99' },
  { _id: 'Tech', count: 698, maxDuration: '99' },
  { _id: 'Howto', count: 668, maxDuration: '99' },
  { _id: 'Sports', count: 498, maxDuration: '99' },
  { _id: 'News', count: 492, maxDuration: '99' },
  { _id: 'Animals', count: 477, maxDuration: '99' },
  { _id: 'Comedy', count: 434, maxDuration: '98' },
  { _id: 'Games', count: 415, maxDuration: '99' },
  { _id: 'Film', count: 384, maxDuration: '99' },
  { _id: 'Travel', count: 294, maxDuration: '97' },
  { _id: 'Autos', count: 276, maxDuration: '99' },
  { _id: 'Nonprofit', count: 151, maxDuration: '96' },
  { _id: '3', count: 28, maxDuration: '97' },
  { _id: '4', count: 10, maxDuration: '60' },
  { _id: '5', count: 3, maxDuration: '71' }
]

An aggregation pipeline is like a SQL query where you have more control over the processing flow. The first stage filters (here a simple $sample), the second stage defines the grouping ($group, with an identifier built from the grouping fields, plus the calculated fields), and the last stage is an order by ($sort, with -1 for descending).
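
For comparison, here is a rough SQL analogue of that pipeline. It is only a sketch: it assumes a relational youstats table with category and duration columns, and TABLESAMPLE takes a percentage of pages rather than an exact row count like $sample:

-- Sample, group by category, aggregate, and order by count descending:
SELECT category,
       count(*)      AS count,
       max(duration) AS maxDuration
  FROM youstats TABLESAMPLE SYSTEM (1)
 GROUP BY category
 ORDER BY count DESC;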

This allows you to take a quick look at the data and get an idea of its shape. You can also use MongoDB Compass to do so. And when connected to Atlas, you can type your query in natural language and get the aggregation pipeline generated for you.

In the next posts, we will create some indexes to serve more use cases.

Indexing for New Use Cases Within the MongoDB Document Model (tutorial)

When designing a schema for MongoDB, it’s crucial to understand your domain access patterns. The document modeling approach shows its simplicity and efficiency over the relational model when building a database for a bounded context where main business objects and microservice access patterns are defined. Knowing which documents to manipulate allows you to determine what information to embed and what to reference by identifier.

The question arises: does this mean that integrating a new use case with the same database is difficult? Not at all. Just as relational databases require new secondary indexes for new use cases, MongoDB provides numerous indexing options on its document model. In this series, I will demonstrate how a collection designed for OLAP reporting, specifically YouTube video statistics, can efficiently support OLTP queries with a small set of indexes—without needing to alter the document model.

I selected a random dataset by searching for "mongoimport" on Kaggle, without a specific access pattern in mind. My goal is to show the diverse use cases it can serve with a couple of indexes, without changing the schema. Let's go ahead and initialize the lab.

I start a local Atlas cluster with the Atlas CLI:

curl https://fastdl.mongodb.org/mongocli/mongodb-atlas-cli_1.41.1_linux_arm64.tar.gz | 
 tar -xzvf - &&
 alias atlas=$PWD/mongodb-atlas-cli_1.41.1_linux_arm64/bin/atlas

atlas deployments setup  atlas --type local --port 27017 --force

I import the YouTube video statistics for 1 million videos made available on Kaggle by Mattia Zeni, Daniele Miorandi, and Francesco De Pellegrini (YOUStatAnalyzer: a Tool for Analysing the Dynamics of YouTube Content Popularity, 7th International Conference on Performance Evaluation Methodologies and Tools, Valuetools, Torino, Italy, December 2013).

The following downloads, unzips, cleans (converting duration and number of comments to integers rather than text), and imports the million videos:

curl -L -o youstatanalyzer1000k.zip\
  https://www.kaggle.com/api/v1/datasets/download/mattiazeni/youtube-video-statistics-1million-videos && 
unzip youstatanalyzer1000k.zip &&
rm -f youstatanalyzer1000k.zip &&
sed -E -i.bak 's/("duration" : |"commentsNumber" : )"([0-9]+)"(,)/\1\2\3/g' youstatanalyzer1000k.json &&
mongoimport --db yt --collection youstats --file youstatanalyzer1000k.json -j 10 --drop

Output:

% mongoimport --db yt --collection youstats --file youstatanalyzer1000k.json -j 10 --drop
2025-05-21T16:10:31.136+0200    connected to: mongodb://localhost/
2025-05-21T16:10:31.137+0200    dropping: yt.youstats
2025-05-21T16:10:34.136+0200    [........................] yt.youstats     301MB/29.3GB (1.0%)
2025-05-21T16:10:37.139+0200    [........................] yt.youstats     572MB/29.3GB (1.9%)
2025-05-21T16:10:40.145+0200    [........................] yt.youstats     713MB/29.3GB (2.4%)
2025-05-21T16:10:43.136+0200    [........................] yt.youstats     924MB/29.3GB (3.1%)
2025-05-21T16:10:46.136+0200    [........................] yt.youstats     1.18GB/29.3GB (4.0%)
2025-05-21T16:10:49.137+0200    [#.......................] yt.youstats     1.33GB/29.3GB (4.5%)
2025-05-21T16:10:52.136+0200    [#.......................] yt.youstats     1.51GB/29.3GB (5.1%)
2025-05-21T16:10:55.137+0200    [#.......................] yt.youstats     1.77GB/29.3GB (6.0%)
2025-05-21T16:10:58.136+0200    [#.......................] yt.youstats     2.01GB/29.3GB (6.8%)
...
2025-05-21T16:17:46.134+0200    [#######################.] yt.youstats     29.1GB/29.3GB (99.1%)
2025-05-21T16:17:49.135+0200    [#######################.] yt.youstats     29.2GB/29.3GB (99.6%)
2025-05-21T16:17:51.755+0200    [########################] yt.youstats     29.3GB/29.3GB (100.0%)
2025-05-21T16:17:51.755+0200    1006469 document(s) imported successfully. 0 document(s) failed to import.

Without any index, a query must scan the collection:

db.youstats.find().explain("executionStats").executionStats

 executionStats: {
    executionSuccess: true,
    nReturned: 1006469,
    executionTimeMillis: 75494,
    totalKeysExamined: 0,
    totalDocsExamined: 1006469,
    executionStages: {
      isCached: false,
      stage: 'COLLSCAN',
      nReturned: 1006469,
      executionTimeMillisEstimate: 74846,
      works: 1006470,
      advanced: 1006469,
      needTime: 0,
      needYield: 0,
      saveState: 4874,
      restoreState: 4874,
      isEOF: 1,
      direction: 'forward',
      docsExamined: 1006469
    }
  },

This query returns one million documents (nReturned: 1006469), so it is expected to examine the same number of documents (totalDocsExamined: 1006469), and it takes more than a minute in my lab (executionTimeMillis: 75494). However, if I wanted to filter a subset to return only ten documents, it would still have to read everything because I have no index.

Without an index, the collection serves only two access patterns efficiently:

  • read all documents
  • find one document by "_id"

This was the last query I ran that takes more than a second on this dataset. The next posts of this series will introduce ways to find documents without their "_id" and without scanning the whole collection.