A curated list of database news from authoritative sources

June 03, 2025

June 02, 2025

Streamline code conversion and testing from Microsoft SQL Server and Oracle to PostgreSQL with Amazon Bedrock

Organizations are increasingly seeking to modernize their database infrastructure by migrating from legacy database engines such as Microsoft SQL Server and Oracle to more cost-effective and scalable open source alternatives such as PostgreSQL. This transition not only reduces licensing costs but also unlocks the flexibility and innovation offered by PostgreSQL’s rich feature set. In this post, we demonstrate how to convert and test database code from Microsoft SQL Server and Oracle to PostgreSQL using the generative AI capabilities of Amazon Bedrock.

Understanding the Client-Output Buffer Limit for Replicas in Valkey

Valkey (a community-driven fork of Redis) uses a high-performance replication model to synchronize data between a primary node and its replicas. One critical component of this system is the client-output buffer, especially the configuration of its limits for replicas. This blog explores the client-output buffer, how its limits work in the context of replicas, and […]

May 31, 2025

No HOT updates on JSONB (write amplification)

PostgreSQL's Multi-Version Concurrency Control (MVCC) works around the challenge of in-place updates in fixed block storage by avoiding it. Instead of updating rows, it processes them as deletes and inserts, prioritizing simplicity of implementation over performance. Updating fields in a JSONB document can be problematic due to significant write amplification.

What are Heap Only Tuple (HOT) updates?

When a table row is updated, the entire row is marked for deletion by setting its xmax value, indicating the end of its visibility period. A new version of the row is then created with a fresh xmin value to signify the start of its visibility. Write amplification arises not only from copying the entire row but also from the need to update all indexes associated with the table. PostgreSQL indexes reference rows using their physical location (ctid), meaning that any change in the row's physical location requires new index entries to find the latest version of the row, even if the indexed column values remain unchanged. Over time, when older versions of rows are no longer visible to any active transaction—having passed the xmin horizon—they are eligible for garbage collection by the vacuum process, which removes outdated row versions and their associated index entries.

Given that many SQL applications have multiple indexes on their tables, frequent updates can exacerbate write amplification, with detrimental consequences for checkpoints and replication, especially when every index must be updated regardless of whether the indexed values changed. To mitigate this, PostgreSQL introduces an optimization called Heap-Only Tuple (HOT) updates, which avoids adding new index entries for keys that didn't change, in cases where the new version of the row fits in the same block as the previous version. If a column is frequently updated and the old versions are frequently vacuumed, some free space may be constantly available in the block for new versions (this free space can be reserved up front with a lower fillfactor), and the HOT optimization can kick in.
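
As a side note (not part of the original post), PostgreSQL exposes counters that distinguish HOT updates from regular updates, so you can check whether a workload benefits from this optimization. A minimal sketch, using the users table created below:

select relname, n_tup_upd, n_tup_hot_upd,
       round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 1) as hot_pct
from pg_stat_user_tables
where relname = 'users';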

This blog post series is about using PostgreSQL as a document database, with all data in JSONB, but there's no Heap-Only Tuple optimization for indexes on JSONB fields.

Test it with EXPLAIN (ANALYZE, WAL, BUFFERS)

I create a table similar to the one in the previous post, storing user profiles, and add a login sub-object to record the last login date and a login counter:

create table users (
  id bigserial primary key,
  data jsonb not null
);
insert into users (data) values (
 jsonb_build_object(
    'name', 'Homer Simpson',
    'login',
    jsonb_build_object(
      'last', to_char(current_timestamp, 'YYYY-MM-DD HH24:MI:SS'),
      'count', 0
    ),
    'email', jsonb_build_array(
      'donutlover@springfieldusa.com',
      'homerdoh@simpsons.com',
      'lazy.sofa.guy@tvcharacters.net'
    )
  )
 );

This is the PostgreSQL equivalent of the following MongoDB call to insert a document:

// MongoDB equivalent query
db.users.insertOne({  
    "_id": 1,
    name: "Homer Simpson",  
    login: {  
      last: new Date(),  
      count: 0  
    },  
    email: [  
      "donutlover@springfieldusa.com",  
      "homerdoh@simpsons.com",  
      "lazy.sofa.guy@tvcharacters.net"  
    ]  
});  

My use-case is the equivalent of the following to increase the login counter and update the last login date:

// MongoDB equivalent query
db.users.updateOne(  
  { _id: 1 },  
  {  
    $set: { "login.last": new Date() },  
    $inc: { "login.count": 1 }  
  }  
);  

In SQL, there's no increment operation. Instead, an update sets the new values. When the data is stored in a JSONB field in PostgreSQL, we must replace the document with a new one using jsonb_set() to modify the fields.

I run some updates to increase the login counter and update the last login date and show the execution plan with statistics:

explain (analyze, verbose, buffers, wal, serialize text, costs off)
UPDATE users
SET data = jsonb_set(
  data,
  '{login}',
  jsonb_build_object(
    'last', to_char(current_timestamp, 'YYYY-MM-DD'),
    'count', (COALESCE((data->'login'->>'count')::int, 0) + 1)
  )
)
where id=1
\watch

Here is the execution plan, showing two buffer hits to find the row via the index and one Write-Ahead Logging (WAL) record for the update of the row (71 bytes):

QUERY PLAN                                                                                                           
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Update on public.users (actual time=0.057..0.057 rows=0 loops=1)
   Buffers: shared hit=4
   WAL: records=1 bytes=71
   ->  Index Scan using users_pkey on public.users (actual time=0.040..0.041 rows=1 loops=1)
         Output: jsonb_set(data, '{login}'::text[], jsonb_build_object('last', to_char(CURRENT_TIMESTAMP, 'YYYY-MM-DD'::text), 'count', (COALESCE((((data -> 'login'::text) ->> 'count'::text))::integer, 0) + 1)), true), ctid
         Index Cond: (users.id = 1)
         Buffers: shared hit=2
 Planning Time: 0.063 ms
 Serialization: time=0.000 ms  output=0kB  format=text
 Execution Time: 0.077 ms

You can run that for a while on a large table and observe the same. Even if it writes more than necessary, because the whole row and JSON document are rewritten, the performance is predictable.

Note that you may observe some executions with one more WAL record generated by the Index Scan as reads may do some delayed cleanup:

QUERY PLAN                                                                                                           
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Update on public.users (actual time=0.062..0.063 rows=0 loops=1)
   Buffers: shared hit=4
   WAL: records=2 bytes=157
   ->  Index Scan using users_pkey on public.users (actual time=0.047..0.048 rows=1 loops=1)
         Output: jsonb_set(data, '{login}'::text[], jsonb_build_object('last', to_char(CURRENT_TIMESTAMP, 'YYYY-MM-DD'::text), 'count', (COALESCE((((data -> 'login'::text) ->> 'count'::text))::integer, 0) + 1)), true), ctid
         Index Cond: (users.id = 1)
         Buffers: shared hit=2
         WAL: records=1 bytes=86
 Planning Time: 0.063 ms
 Serialization: time=0.000 ms  output=0kB  format=text
 Execution Time: 0.083 ms

While storing all data in JSONB, similar to a document database, may seem appealing, this table lacks indexes. In a real-world application, documents will contain more fields and sub-documents and require multiple indexes, which are likely to evolve as the application develops.

Adding indexes

During the lifecycle of an application, more indexes are created. I add an index on the user name:

create index on users(
 (data->>'name')
);

In PostgreSQL, adding an index on fields that are never updated still impacts updates, unlike in many other databases. For instance, my login update now produces two additional WAL records, resulting in a total WAL size that is three times larger, along with increased buffer reads.

QUERY PLAN                                                                                                           
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Update on public.users (actual time=0.091..0.092 rows=0 loops=1)
   Buffers: shared hit=9
   WAL: records=3 bytes=207
   ->  Index Scan using users_pkey on public.users (actual time=0.059..0.060 rows=1 loops=1)
         Output: jsonb_set(data, '{login}'::text[], jsonb_build_object('last', to_char(CURRENT_TIMESTAMP, 'YYYY-MM-DD'::text), 'count', (COALESCE((((data -> 'login'::text) ->> 'count'::text))::integer, 0) + 1)), true), ctid
         Index Cond: (users.id = 1)
         Buffers: shared hit=3
 Planning Time: 0.068 ms
 Serialization: time=0.000 ms  output=0kB  format=text
 Execution Time: 0.113 ms

PostgreSQL requires an expression index to index JSON fields. We have seen one limitation of expression indexes in a previous post (No Index Only Scan on JSONB Fields) and here is another one: PostgreSQL doesn't detect when the indexed value has not changed. This prevents it from applying HOT optimization, even if the new row fits within the same block.

This was with an expression index on a scalar value (with no array in the JSON path), but the same problem exists with GIN indexes. I create the same index as in the previous post (No Index for LIKE on JSONB):

CREATE INDEX idx_users_data_email ON users USING GIN (
 (data->'email') jsonb_path_ops
);  

My update that doesn't touch this field shows one more WAL record, a larger WAL size, and more buffer reads:

                                                                                                           QUERY PLAN                                                                                                           
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Update on public.users (actual time=0.080..0.080 rows=0 loops=1)
   Buffers: shared hit=11
   WAL: records=4 bytes=397
   ->  Index Scan using users_pkey on public.users (actual time=0.039..0.041 rows=1 loops=1)
         Output: jsonb_set(data, '{login}'::text[], jsonb_build_object('last', to_char(CURRENT_TIMESTAMP, 'YYYY-MM-DD'::text), 'count', (COALESCE((((data -> 'login'::text) ->> 'count'::text))::integer, 0) + 1)), true), ctid
         Index Cond: (users.id = 1)
         Buffers: shared hit=3
 Planning Time: 0.070 ms
 Serialization: time=0.000 ms  output=0kB  format=text
 Execution Time: 0.100 ms

The issue at hand is that you might prefer document data modeling over relational data modeling due to its simplicity in matching your domain access patterns, and you may have come across some "Just use PostgreSQL" advocacy claiming that JSONB can turn PostgreSQL into a document database. If you started such a design with all data in a JSONB field, your initial performance targets might be met, but as your application grows and more indexes are added, critical use cases may struggle to scale.

I emphasized the importance of WAL records and size, as they are significant bottlenecks in PostgreSQL's scalability due to single-threaded WAL replication. Additionally, write amplification leads to other complications, including increased checkpoint work and higher pressure on vacuum. Scaling up with more CPUs won't resolve the issue, and adding read replicas won't help either since all indexes need to be created on the primary database.

PostgreSQL is a relational database that incorporates JSONB for added flexibility, but it doesn't convert it into a document database. In an SQL RDBMS, frequently updated or indexed fields should be in their own columns, maybe their own tables, while JSON can be used for additional flexible data accessed as a whole. If a document model is preferred, consider using a document database like MongoDB, which performs in-place updates to documents in memory and updates only the relevant indexes (FAQ: Indexes) and is not limited by fixed block size storage (documents are stored in a B-Tree with variable leaf size, and secondary indexes reference them with the key in this B-Tree).
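
To make that last recommendation concrete, here is a hypothetical relational variant of this example (my sketch, not from the post): the frequently updated login fields become plain columns, so the hot-path update touches no indexed values, can use a real increment, and can remain eligible for HOT (space permitting), while less-structured data stays in a JSONB column:

create table users_relational (
  id          bigserial primary key,
  name        text,
  login_last  timestamptz,
  login_count integer not null default 0,
  email       text[],
  extra       jsonb
);

-- the frequent update becomes a plain column update with a real increment
update users_relational
set login_last  = current_timestamp,
    login_count = login_count + 1
where id = 1;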

May 29, 2025

The Future of Comments is Lies, I Guess

I’ve been involved in content moderation since roughly 2004. I’ve built spam prevention for corporate and personal e-mail, moderated open-source mailing lists and IRC channels, worked at a couple social media networks, and help moderate a Mastodon instance for a few hundred people. In the last few years I’ve wasted more time fighting blog comment spam, and I’m pretty sure Large Language Models (LLMs) are to blame.

I think of spam as a space with multiple equilibria. Producing spam takes work. Spammers are willing to invest that work because each message has a small chance to make money, or achieve political or emotional goals. Some spam, like the endless identical Viagra scams in my email spam folder, or the PHPBB comment spam I filter out here on aphyr.com, is cheap to generate and easy to catch. I assume the spammers make it up in volume. Other spam, like spear phishing attacks, is highly time-consuming: the spammer must identify a target, carefully craft a plausible message using, say, the identity of the target’s co-workers, or construct a facade of a bank’s log-in page, and so on. This kind of spam is more likely to make it through filters, but because it takes a lot of human work, is generally only worth it for high-value targets.

LLMs seem to be changing these equilibria. Over the last year I’ve seen a new class of comment spam, using what I’m fairly sure is LLM-generated text. These comments make specific, plausible remarks about the contents of posts and images, and work in a link to some web site or mention a product. Take this one I caught a few months back:

"Walking down a sidewalk lined with vibrant flowers is one of life’s simple joys! It reminds me of playing the [link redacted] slope game, where you have to navigate through colorful landscapes while dodging obstacles.

Before 2023, you’d likely have paid a human a few cents to write and post that. Now, thanks to LLMs, this sort of thing is trivially automated. The model will happily fabricate relatable personal experiences in service of a spam campaign:

That photo reminds me of the first time I tried macro photography in my backyard. I spent an hour trying to get a clear shot of a red flower, experimenting with angles and lighting. It was so much fun discovering the little details up close! If you ever need a break from photography, I recommend playing Snow Rider 3D for a bit of quick, light-hearted fun.

Other spam seems to glue together LLM remarks with what I think is a hand-written snippet of ad copy. Note the abrupt shift in grammar, diction, and specificity.

This piece masterfully blends technical depth with mythological storytelling, transforming a standard Haskell programming interview into an epic narrative. It cleverly critiques the complexity and absurdity of some technical interviews by illustrating how type-level Haskell can be pushed to esoteric extremes beautiful, powerful, and largely impractical. A fascinating and relevant read for anyone interested in the intersection of programming, language design, and narrative. I’m James Maicle, working at Cryptoairhub where we run a clear and insightful crypto blog. I’ll be bookmarking your site and following the updates. Glad to see so much valuable information shared here looking forward to exploring more strategies together. Thanks for sharing. If you interest about Crypto please visit my website and read my article [link redacted] Crypto Blog.

Of course this is not news. Product reviews are inundated with LLM slop, as are social media networks. LLMs allow for cheap, fast, and automated generation of unique spam which is difficult for machines and humans to identify. The cost falls on me and other moderators, who must sift through LLM bullshit trying to sieve “awkward but sincere human” from “automated attack”. Thanks to OpenAI et al I read more spam, and each message takes longer to check.

This problem is only going to get worse as LLMs improve and spammers develop more sophisticated ways to use them. In recent weeks I’ve received vague voice messages from strangers with uncanny speech patterns just asking to catch up—a sentence which, had I uttered it prior to 2023, would have been reasonably interpreted as a sign of psychosis. I assume these too are LLM-generated scams, perhaps a pig butchering scheme. So far these are from strangers, but it’s not hard to imagine an attacker using text and voice synthesis to impersonate a friend, colleague, or lover in a detailed conversation. Or one’s doctor, or bank.

As the cost of generating slop decreases, it’s easy to imagine LLMs generating personae, correspondence, even months-long relationships with real humans before being deployed for commercial or political purposes. Creating plausible accounts and selling them has been a successful business model in social media for some time; likewise, we have strong clues that LLMs are already used for social media bots. Social networks have responded to these attacks via out-of-band mechanisms: IP reputation analysis, javascript and mobile app fingerprinting, statistical correlation across multiple accounts, and so on. I’m not sure how to translate these defenses to less centralized and more privacy-oriented networks, like email or blog spam. I strongly suspect the only reason Mastodon hasn’t been eaten alive by LLM spambots is because we’re just not big enough to be lucrative. But those economics are shifting, and even obscure ecological niches can be worth filling.

As a moderator, that keeps me up at night.

$graphLookup (Connect By / Recursive Query)

In this series, I present various access patterns for a specific document model. These patterns are supported by a limited set of secondary indexes designed to make queries efficient, without modifying the document schema.

This article explores recursive searches through graph-like relationships between documents, with each video in this collection showcasing related content with an array of related videos:

[
  {
    _id: '---U8lzusKE',
    category: 'Entertainment',
    relatedVideos: [
      'x9LRHlMdZmA', '5P5nxdJAFdE',
      'jdg8Sp1HUKM', 'xdxVBiJe8Co',
      'qLSA0gQ9z28', 'WHZPEkZCqwA',
      'y3VMhFCLxRc', 'hHjGtBnSv50',
      '_vx1OVLX5Rc', 'V4LnorVVxfw',
      'l56K8eAtCig', 'dHpCoFyMCHU',
      'XO5BYR39te8', 'yWy0cuxNWDw',
      '4SiXdhL7wxU', '5EaZTxQeQMQ',
      'mOvmBNLQIi4', 'fa2CvFa2xY8',
      'CpbYBZKdi3s', 'lBxzoqTSILc',
      'RBumgq5yVrA', 'EoN8RKubbO0',
      'zIHQPgz_Iwg', '7PCkvCPvDXk',
      't1NVJlm5THo'
    ],
...

With this structure, I can easily navigate from one video to its related ones, and from there to further related content, effectively building a graph of interconnected videos. There’s no need for an additional index since each video references the "_id" of its related videos, which is always indexed.

Access Patterns: forward traversal of related documents

The following query identifies a video and explores down to three levels of related videos, constructing a graph of connections based on the associated video array. It filters these connections by daily views and restructures the output for improved readability:

db.youstats.aggregate([  
  {  
    $match: { _id: 'YoB8t0B4jx4' }   
  },  
  {  
    $graphLookup: {  
      from: "youstats",  
      startWith: "$relatedVideos", 
      connectFromField: "relatedVideos", 
      connectToField: "_id", 
      as: "allRelatedVideos", 
      maxDepth: 3,  
      restrictSearchWithMatch: {  
        "views.daily.data": { $gt: 1e6 } 
      },  
      depthField: "level" 
    }  
  },  
  {  
    $project: {  
      _id: 1,  
      title: 1,  
      author: 1,  
      allRelatedVideos: {  
        $map: {  
          input: "$allRelatedVideos",  
          as: "video",  
          in: {  
            number: { $add: [ { $indexOfArray: ["$allRelatedVideos", "$$video"] }, 1 ] },  
            _id: "$$video._id",  
            title: "$$video.title",  
            author: "$$video.author",  
            level: "$$video.level"  
          }  
        }  
      }  
    }  
  }  
])

The execution plan shows the IXSCAN only during the $match stage, but the subsequent iterations utilize the same method.

    stage: 'EXPRESS_IXSCAN',
    keyPattern: '{ _id: 1 }',

With $graphLookup, you need to have an index on connectToField.

Access Patterns: backward traversal of related documents

To navigate the graph in the opposite direction and find the parent that has the _id in its related videos array, an index on that field is essential for quick access. In MongoDB, indexes are created similarly for both scalar fields and arrays:

db.youstats.createIndex({ 
 relatedVideos: 1, _id: 1 
});  

The following query, where the connectToField is the related videos array, is fast:

db.youstats.aggregate([  
  {      
    $match: { _id: 'x9LRHlMdZmA' }       
  },      
  {      
    $graphLookup: {      
      from: "youstats",      
      startWith: "$_id",      
      connectFromField: "_id",      
      connectToField: "relatedVideos",      
      as: "parentVideos",      
      maxDepth: 3,       
      depthField: "level",  
      restrictSearchWithMatch: {      
        "views.daily.data": { $gt: 1e6 }      
      }      
    }      
  }      
]);  

Using $graphLookup in an aggregation pipeline effectively retrieves a limited number of documents, as long as the work area remains within the 100MB memory limit and results do not exceed the BSON limit of 16MB. To query MongoDB data as a graph database, consider PuppyGraph (Querying MongoDB Atlas Data as a Graph). The same indexes allow fast recursive search and can be on a scalar identifier or an array of children, depending on whether you implemented the one-to-many relationship on the one side or the many side.

How to Safely Upgrade InnoDB Cluster From MySQL 8.0 to 8.4

In this blog, we continue from where we left off in the previous post, InnoDB Cluster Setup: Building a 3-Node High Availability Architecture, where we demonstrated how to set up a MySQL InnoDB Cluster with three nodes to achieve high availability. Here, we walk through the step-by-step process of performing a rolling upgrade of that […]

Building on open table formats

Open table formats like Apache Iceberg, Delta Lake, and Apache Hudi are transforming how developers manage large-scale data on object storage systems.

Postgres 18 beta1: large server, Insert Benchmark, bad configurations

While testing Postgres 18 beta1 on a large server I used several configurations with io_workers set to values that are too large, and performance suffered. The default value for io_workers is 3 and that appears to be a great default. Perhaps other people won't repeat my mistakes.

tl;dr

  • the default value for io_workers is 3 and that is a good value to use
  • be careful about using larger values for io_workers as the performance penalty ranges from 0% (no penalty) to 24% (too much penalty)

Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for version 18 beta1. I got the source for 18 beta1 from github using the REL_18_BETA1 tag. I started this benchmark effort a few days before the official release.

The server is an ax162-s from Hetzner with an AMD EPYC 9454P processor, 48 cores, AMD SMT disabled and 128G RAM. The OS is Ubuntu 22.04. Storage is 2 NVMe devices with SW RAID 1 and ext4. More details on it are here.

The config files for 18 beta1 use names like conf.diff.cx10cw${Z}_c32r128 where $Z is the value for io_workers. All of these use io_method=worker. The files are here. I repeated tests for io_workers set to 2, 4, 6, 8, 16 and 32.
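
As a quick sanity check (my own sketch, not from the post), the settings in effect for a given run can be confirmed from pg_settings:

select name, setting, context
from pg_settings
where name in ('io_method', 'io_workers');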

The Benchmark

The benchmark is explained here and is run with 20 clients and 20 tables (one table per client) and 200M rows per table.

The benchmark steps are:

  • l.i0
    • insert 200 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 4M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 1M rows are inserted and deleted per table.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
Results: overview

The performance report is here.

The summary section has 3 tables. The first shows absolute throughput for each DBMS tested and benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps that have background inserts; all systems sustained the target rates. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for a given configuration and $base is the result with io_workers=2.

When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.

Results: details

The performance summary is here.

The summary of the summary is that larger values for io_workers ...
  • increase throughput by up to 4% for the initial load (l.i0) 
  • increase throughput by up to 12% for create index (l.x)
  • decrease throughput by up to 6% for write heavy (l.i1)
  • decrease throughput by up to 16% for write heavy (l.i2)
  • decrease throughput by up to 3% for range queries, note that this step is CPU-bound
  • decrease throughput by up to 24% for point queries, note that this step is IO-bound
The summary is:
  • the initial load step (l.i0)
    • rQPS for io_workers in (4, 6, 8, 16) was (1.03, 1.03, 1.03, 1.02, 1.04) so these were slightly faster than io_workers=2.
    • rQPS for io_workers=32 was 1.00
  • the create index step (l.x)
    • rQPS for io_workers in (4, 6, 8, 16, 32) was (1.06, 1.05, 1.07, 1.12, 1.11) so these were all faster than io_workers=2.
  • the write-heavy steps (l.i1, l.i2)
    • for l.i1 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.98, 0.99, 0.99, 0.96, 0.94)
    • for l.i2 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.84, 0.95, 0.90, 0.88, 0.88)
    • I am surprised that larger values for io_workers don't help here but did help during the previous steps (l.i0, l.x), which are also write heavy.
  • the range query steps (qr100, qr500, qr1000)
    • for qr100 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.99, 0.99, 0.99, 0.99, 0.99)
    • for qr500 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.98, 0.98, 0.98, 0.97, 0.97)
    • for qr1000 the rQPS for io_workers in (4, 6, 8, 16, 32) was (1.01, 1.00, 0.99, 0.98, 0.97)
    • note that this step is usually CPU-bound for Postgres because the indexes fit in memory
  • the point query steps (qp100, qp500, qp1000)
    • for qp100 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.98, 0.98, 0.97, 0.94, 0.90)
    • for qp500 the rQPS for io_workers in (4, 6, 8, 16, 32) was (1.00, 0.98, 0.97, 0.89, 0.81)
    • for qp1000 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.99, 0.95, 0.93, 0.86, 0.76)
    • these steps are IO-bound
For the regressions in one of the write-heavy steps (l.i2) I don't see an obvious problem in the vmstat and iostat metrics -- the amount of CPU, context switches and IO per operation has some variance, but there isn't a difference that explains the change.

For the regressions in the point query steps (qp100, qp500, qp1000) the vmstat and iostat metrics for qp1000 help to explain the problem. Metrics that increase as io_workers increases include:
  • CPU/operation (see cpupq) has a large increase
  • context switches /operation (see cspq) has a small increase
  • iostat reads /operation (rpq) and KB read /operation (rkbpq) have small increases
Finally, average rates from iostat. These are not normalized by QPS. There aren't many differences, although rps (reads/s) is higher for io_workers=2 because throughput was higher in that case.

Legend:
* rps, wps - read /s and write /s
* rKBps, wKBps - KB read /s & KB written /s
* rawait, wawait - read & write latency
* rareqsz, wareqsz - read & write request size

-- from l.i2 benchmark step

rps     rKBps   rawait  rareqsz wps     wKBps   wawait  wareqsz io_workers
3468    34622   0.08    8.9     5374    85567   1.41    17.3     2
2959    24026   0.08    8.3     4866    74547   0.05    17.5    32

-- from qp1000 benchmark step

rps     rKBps   rawait  rareqsz wps     wKBps   wawait  wareqsz io_workers
81949   659030  0.13    8.0     39546   589789  168.21  16.5     2
68257   549016  0.12    8.0     36005   549028  130.44  16.2    32
 

May 28, 2025

Postgres 18 beta1: small server, IO-bound Insert Benchmark

I recently published results for Postgres 18 beta1 on a small server using the Insert Benchmark with a cached workload and low concurrency. Here I share results for it with an IO-bound workload.

tl;dr - for 17.5 vs 18 beta

  • the write-heavy steps (l.i1, l.i2), are up to 5% slower in 18 beta1 vs 17.5
  • the range query steps (qr100, qr500, qr1000) are up to 3% slower in 18 beta1 vs 17.5
  • the point query steps (qp100, qp500, qp1000) are up to 2% slower in 18 beta1 vs 17.5
tl;dr for 14.0 through 18 beta1
  • the write-heavy steps (l.i1, l.i2), are up to 15% slower in 18 beta1 vs 14.0
  • the range query steps (qr100, qr500, qr1000) are up to 4% slower in 18 beta1 vs 14.0
  • the point query steps (qp100, qp500, qp1000) are up to 1% faster in 18 beta1 vs 14.0
Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions  14.0, 14.18, 15.0, 15.13, 16.0, 16.9, 17.0, 17.5 and 18 beta1.

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM and one NVMe device for the database. The OS has been updated to Ubuntu 24.04 -- I used 22.04 prior to that. More details on it are here.

For Postgres versions 14.0 through 17.5 the configuration files are in the pg* subdirectories here with the name conf.diff.cx10a_c8r32. For Postgres 18 beta1 the configuration files are here and I used 3 variations, which are here (a sketch of applying one of them follows the list):
  • conf.diff.cx10b_c8r32
    • uses io_method='sync' to match Postgres 17 behavior
  • conf.diff.cx10c_c8r32
    • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
  • conf.diff.cx10d_c8r32
    • uses io_method='io_uring' to do async IO via io_uring
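
As an aside (my sketch, not the author's workflow), one way to switch between these variants is ALTER SYSTEM; note that io_method is a server-start setting, so a restart is typically required for it to take effect:

-- pick one of the three variants tested above
alter system set io_method = 'worker';
alter system set io_workers = 16;
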
The Benchmark

The benchmark is explained here and is run with 1 client and 1 table with 800M rows.

The benchmark steps are:

  • l.i0
    • insert 800 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 4M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 1M rows are inserted and deleted per table.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
Results: overview

The performance reports are here for:
  • All versions -- 14.0 through 18 beta1
    • See here, this uses the results from 14.0 as the base version
  • Only 17.5 and 18 beta1
    • See here, this uses the results from 17.5 as the base version and there are three results for 18 beta1, one for each of the configurations listed above.
The summary sections linked above from the performance report have 3 tables. The first shows absolute throughput for each DBMS tested and benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps that have background inserts; all systems sustained the target rates. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from either 14.0 or 17.5.

When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.

Results: 17.5 and 18 beta1

The performance summary is here.

Below I use relativeQPS (rQPS) to compare 18 beta1 with 17.5, when rQPS is > 1 then 18 beta1 is faster than 17.5, when rQPS is < 1 then 18 beta1 is slower, when it is 1.0 then they have the same throughput. When rQPS is 0.90 then I might say that 18 beta1 is 10% slower.

The summary of the summary is:
  • the write-heavy steps (l.i1, l.i2), are up to 5% slower in 18 beta1 vs 17.5
  • the range query steps (qr100, qr500, qr1000) are up to 3% slower in 18 beta1 vs 17.5
  • the point query steps (qp100, qp500, qp1000) are up to 2% slower in 18 beta1 vs 17.5
The summary is:
  • the initial load step (l.i0)
    • rQPS is (1.00, 0.99, 1.00) with io_method= (sync, worker, io_uring) vs 17.5
  • the create index step (l.x)
    • rQPS is (1.00, 1.02, 1.00) with io_method= (sync, worker, io_uring) vs 17.5
  • the write-heavy steps (l.i1, l.i2)
    • rQPS is (0.95, 0.98) in 18 beta1 with io_method=sync vs 17.5
    • rQPS is (0.98, 0.96) in 18 beta1 with io_method=worker vs 17.5
    • rQPS is (0.99, 0.98) in 18 beta1 with io_method=io_uring vs 17.5
  • the range query steps (qr100, qr500, qr1000)
    • rQPS is (0.98, 0.97, 0.98) in 18 beta1 with io_method=sync vs 17.5
    • rQPS is (0.99, 0.97, 0.97) in 18 beta1 with io_method=worker vs 17.5
    • rQPS is (0.99, 0.99, 0.99) in 18 beta1 with io_method=io_uring vs 17.5
  • the point query steps (qp100, qp500, qp1000)
    • rQPS is (1.00, 1.00, 0.99) in 18 beta1 with io_method=sync vs 17.5
    • rQPS is (0.99, 0.99, 0.98) in 18 beta1 with io_method=worker vs 17.5
    • rQPS is (1.00, 0.99, 0.98) in 18 beta1 with io_method=io_uring vs 17.5
The regressions in the write-heavy steps (l.i1, l.i2) might be explained by new CPU overhead. See the cpupq column here (cpupq is CPU/operation). Otherwise, the vmstat and iostat metrics, when divided by throughput, look similar. From the throughput vs time charts, the performance bottleneck was the response time for deletes.

The regressions in the range query steps might also be explained by new CPU overhead. See the cpupq column here (cpupq is CPU/operation) for qr100, qr500 and qr1000. Otherwise the iostat and vmstat metrics look similar.

Results: 14.0 through 18 beta1

The performance summary is here.

Below I use relativeQPS (rQPS) to compare 18 beta1 with 14.0: when rQPS is > 1 then 18 beta1 is faster than 14.0, when rQPS is < 1 then 18 beta1 is slower, and when it is 1.0 then they have the same throughput. When rQPS is 0.90 then I might say that 18 beta1 is 10% slower.

The summary of the summary is:
  • the write-heavy steps (l.i1, l.i2), are up to 15% slower in 18 beta1 vs 14.0
  • the range query steps (qr100, qr500, qr1000) are up to 4% slower in 18 beta1 vs 14.0
  • the point query steps (qp100, qp500, qp1000) are up to 1% faster in 18 beta1 vs 14.0
Comparing 18 beta1 with io_method=sync vs 14.0
  • the initial load step (l.i0)
    • rQPS is 1.01 for 18 beta1 vs 14.0
  • the create index step (l.x)
    • rQPS is 1.14 for 18 beta1 vs 14.0
  • the write-heavy steps (l.i1, l.i2)
    • rQPS is (0.87, 0.85) for 18 beta1 vs 14.0
    • Regressions for these steps are not new; they started in the 14.x releases
  • the range query steps (qr100, qr500, qr1000)
    • rQPS is (0.99, 0.98, 0.96) for 18 beta1 vs 14.0
  • the point query steps (qp100, qp500, qp1000)
    • rQPS is (1.01, 1.00, 1.00) for 18 beta1 vs 14.0
The regressions in the write-heavy steps (l.i1, l.i2) might be explained by new CPU overhead. See the cpupq column here (cpupq is CPU/operation). Otherwise, the vmstat and iostat metrics, when divided by throughput, look similar. From the throughput vs time charts, the performance bottleneck was the response time for deletes.

The regressions in the range query steps might also be explained by new CPU overhead. See the cpupq column here (cpupq is CPU/operation) for qr100, qr500 and qr1000. Otherwise the iostat and vmstat metrics look similar.


$elemMatch and Multi-Key Indexes

In the previous post, I used the following index on the daily views data, which is an array of integers for each video:

db.youstats.createIndex({ 
 "views.daily.data": -1      , // for Sort on maximum daily view
 commentsNumber: 1           , // for additional filter on comments
}); 

The index was used to find the videos that had more than ten million views in a day:

db.youstats.find({
  "views.daily.data": { $gt: 1e7 } ,
}).explain("executionStats").executionStats

Such filter optimization is easy to understand as there's one index key for each value in the array, and the search simply looks for the values within the bound defined by $gt: 1e7:

      direction: 'forward',
      indexBounds: {
        'views.daily.data': [ '[inf.0, 10000000)' ],
        commentsNumber: [ '[MinKey, MaxKey]' ]
      },

If you use multiple filters in the find() command, they apply to the same document but not to the same array value. For example, the following query will not find videos with daily views between ten and twenty million. Instead, it retrieves videos that had at least one day with views over ten million and one day with views under twenty million:

db.youstats.find({  
  $and: [  
    { "views.daily.data": { $gt: 1e7 } },  
    { "views.daily.data": { $lt: 2e7 } }  
  ]  
}).explain("executionStats").executionStats

This is visible in the execution plan: the most selective filter is used for the index scan, and the other is applied as a filter after fetching the document:

   stage: 'FETCH',
    filter: { 'views.daily.data': { '$lt': 20000000 } },
    nReturned: 8,
...
      stage: 'IXSCAN',
      nReturned: 8,
      indexBounds: {
        'views.daily.data': [ '[inf.0, 10000000)' ],
        commentsNumber: [ '[MinKey, MaxKey]' ]
      },
      keysExamined: 44,
      seeks: 1,

Note that the following is exactly the same, with two filters that may apply to different keys, and there's no 'between' operator in MongoDB:

db.youstats.find({  
  $and: [  
    { "views.daily.data": { $gt: 1e7 , $lt: 2e7} },  
  ]  
}).explain("executionStats").executionStats

If you want to apply multiple filters to the same key (the same array element) you must use $elemMatch so that the filters apply to an array element rather than the document:

db.youstats.find({
  "views.daily.data": {   
    $elemMatch: { $gt: 1e7 , $lt: 2e7 }   
  } ,
}).explain("executionStats").executionStats

In case of doubt, the execution plan makes it clear in the index bounds:

      indexBounds: {
        'views.daily.data': [ '(20000000, 10000000)' ],
        commentsNumber: [ '[MinKey, MaxKey]' ]
      },
      keysExamined: 38,

There's no 'between' operator in MongoDB, but you don't need it because the MongoDB query planner can combine the two bounds [ '(20000000, -inf.0]' ] and [ '[inf.0, 10000000)' ] into [ '(20000000, 10000000)' ], which is effectively a between. It also has the advantage that the bounds inclusion follows from the choice of $gt/$lt or $gte/$lte.

This query planner transformation is known as index bound intersection.

Once again, the same index was used to serve different queries. On this field, daily views data, each array had a single value and my filters applied on the same field.

My sample dataset also has an array whose entries are objects with multiple fields, recording the video sharing activity:

...
    gplus: [
      {
        activityLanguage: 'en',
        activityReshared: 'z120it0xupygjt2hm04cctoodsjmttkwrow0k',
        authorID: '118074003327949301125',
        activityType: 'share',
        authorName: 'Liz Lyon',
        activityID: 'z12hu5cgnwrscznhb04ccrtprnbeupqwicc',
        activityTimestamp: '1391094295'
      },
      {
        activityLanguage: 'en',
        activityReshared: 'z120it0xupygjt2hm04cctoodsjmttkwrow0k',
        authorID: '118074003327949301125',
        activityType: 'share',
        authorName: 'Liz Lyon',
        activityID: 'z12hu5cgnwrscznhb04ccrtprnbeupqwicc',
        activityTimestamp: '1391094295'
      },
...

I have some use cases that need to find what a user has shared, so I create the following index for this access pattern:

db.youstats.createIndex({ 
 "gplus.activityType": 1       ,  
 "gplus.authorName": 1         ,  
});

This index can be used to list the activity types, as we have seen in a previous post:

db.youstats.distinct("gplus.activityType")

[ null, 'reshare', 'share' ]

I can filter for the 'share' activity type and the author name. I use a regular expression to find a prefix in the name. I use $elemMatch as the two filters must apply on the same array element:

db.youstats.aggregate([  
  { $match: { 
      gplus: { $elemMatch: { 
                             activityType: "share",
                             authorName: { $regex: /^Franck.*/ } 
      } } 
  } },  
  { $unwind: "$gplus" },  
  { $group: { _id: { video: "$title", author: "$gplus.authorName" }, shareCount: { $sum: 1 } } },  
  { $project: { _id: 0, videoTitle: "$_id.video", "share.author": "$_id.author", shareCount: 1 } },  
  { $sort: { shareCount: -1 } }, 
  { $limit: 5 }                   
]).explain("executionStats").stages[0]['$cursor'].executionStats

The query planner has combined the filters into two index bounds, to get fast access to the index entries for the desired document:

        direction: 'forward',
        indexBounds: {
          'gplus.activityType': [ '["share", "share"]' ],
          'gplus.authorName': [ '["Franck", "Francl")', '[/^Franck.*/, /^Franck.*/]' ]
        },
        keysExamined: 17,
        seeks: 2,

This is known as compound index bounds.
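
Note that these tight bounds on "gplus.authorName" depend on the regular expression being anchored with ^. As a sketch of the difference (my variation, not a query from this post):

db.youstats.find({
  gplus: { $elemMatch: {
    activityType: "share",
    authorName: { $regex: /Franck/ }   // no ^ anchor
  } }
}).explain("executionStats").executionStats

Without the anchor, the planner cannot turn the regex into a prefix range on the author name, so expect the bounds on that field to open up and many more keys to be examined.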

In this post, I continued to add new use cases to a document model that was initially designed without specific access patterns in mind. Although optimized for a particular domain, this general-purpose database can adapt to various access patterns, thanks to its powerful multi-key indexes and query planner.
You can reproduce this on a MongoDB database; the first post of this series explains how to set up this lab and load data. If you encounter a database that claims MongoDB compatibility, you can try the same queries, but you will not observe the same performance, because MongoDB is unique in providing multi-key indexes that can cover equality, sort, and range efficiently.

The Open Source Ripple Effect: How Valkey Is Redefining the Future of Caching, and Why It Matters

Open wins again: What Valkey’s meteoric rise tells us about the future A product manager’s perspective on navigating an ecosystem in flux. When Redis Inc. changed its core product license, few anticipated how fast the aftershocks would reshape the caching world. But the emergence of Valkey – a Linux Foundation–governed fork – has sparked one […]

Chapter 5: Multiversion Concurrency Control (Concurrency Control Book)

Chapter 5 of Concurrency Control and Recovery in Database Systems (1987) introduces multiversion concurrency control (MVCC), a fundamental advance over single-version techniques. Instead of overwriting data, each write operation creates a new version of the data item. Readers can access older committed versions without blocking concurrent writes or being blocked by concurrent writes.

MVCC removes read-write conflicts and increases concurrency significantly. Having multiple versions around gives the scheduler flexibility: if a read arrives "too late" to see the latest write, it can still proceed by accessing an older version. This avoids unnecessary aborts. Writes may still abort due to write-write conflicts, but reads are largely unimpeded. This is especially beneficial in read-heavy workloads.

This chapter presents three broad classes of multiversion methods: Multiversion Timestamp Ordering (MVTO), Multiversion Two-Phase Locking (MV2PL), and Multiversion Mixed Methods.

For all the benefits MVCC provides, the trade-off is additional storage and complexity in managing versions and garbage collection. This is a good tradeoff to take, and MVCC is the dominant concurrency control approach today. Oracle uses MV2PL. PostgreSQL uses MVCC natively. MySQL uses MVCC in InnoDB. In both PostgreSQL and InnoDB, reads get a consistent snapshot without locking and writes create new versions and require locking at commit time. Microsoft Hekaton implements an MVTO-style engine in its in-memory OLTP system (see my 2022 post on Hekaton). Spanner may be best viewed as MVTO plus external certification: it uses multiversion reads and assigns commit timestamps via TrueTime, while writes acquire locks and are certified at commit to ensure strict serializability. Unlike MV2PL, reads never block, and unlike pure MVTO, writes are serialized through locking and timestamp-based validation.

Let's dig in to the sections.


Multiversion Serializability Theory

To reason about the correctness of multiversion schemes, this section extends classical serializability theory. It defines MV histories (which include explicit version metadata) and 1V histories, which reflect what users see: a single logical version per item. The key challenge is to ensure that MV histories are equivalent to 1-serial 1V histories. A 1-serial MV history is one in which each read observes the latest committed version at the read's logical time. This avoids anomalies like fractured reads (e.g., reading stale x and fresh y from the same transaction).

Correctness is characterized using a Multiversion Serialization Graph (MVSG). An MV history is 1-serial iff its MVSG is acyclic. This parallels the classical serializability theory, and extends it with versioning. The rest of the section develops the correctness proof via MVSGs. 


Multiversion Timestamp Ordering (MVTO)

MVTO generalizes timestamp ordering by storing multiple versions. Each transaction is assigned a unique timestamp at start. When a transaction Ti reads x, it finds the latest version of x with a timestamp less than TS(Ti). When Ti writes x, it buffers the write and, at commit, creates a new version tagged with TS(Ti).

MVTO guarantees serializability: transactions appear to execute in timestamp order. The main difference from single-version TO is that MVTO avoids aborting reads. Since older versions are preserved, reads always succeed. However, MVTO still aborts transactions on write-write conflicts. If Ti tries to write x, but another transaction Tj with TS(Tj) > TS(Ti) has already read an older version of x, Ti must abort to preserve timestamp order.
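
To make the read and write rules concrete, here is a minimal sketch in JavaScript (my illustration, not from the book), with one in-memory version list per item (objects of the form { value, writeTS, maxReadTS }); it ignores write buffering until commit and garbage collection:

function mvtoRead(versions, ts) {
  // Ti reads the latest committed version of x with writeTS < TS(Ti)
  const visible = versions
    .filter(v => v.writeTS < ts)
    .sort((a, b) => b.writeTS - a.writeTS)[0];
  if (!visible) return undefined;                      // nothing committed before ts
  visible.maxReadTS = Math.max(visible.maxReadTS, ts); // remember the youngest reader
  return visible.value;
}

function mvtoWrite(versions, ts, value) {
  // the version that Ti's write would immediately follow in timestamp order
  const prev = versions
    .filter(v => v.writeTS < ts)
    .sort((a, b) => b.writeTS - a.writeTS)[0];
  // abort if a younger transaction already read that version:
  // inserting this write now would invalidate that read
  if (prev && prev.maxReadTS > ts) return 'abort';
  versions.push({ value, writeTS: ts, maxReadTS: 0 });
  return 'ok';
}

Reads never fail in this sketch; the only rejection path is the late write, which matches the discussion above.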

MVTO is good for read-mostly workloads but struggles with high write contention. Garbage collection also becomes a concern. Old versions can be discarded only after all transactions that might read them complete.


Multiversion Two-Phase Locking (MV2PL)

MV2PL extends strict 2PL by adding versioning. Unlike 2PL, where transactions block when accessing locked items, MV2PL lets readers proceed by using older versions (e.g., accessing the latest committed version as of the lock time). 2PL systems block on read-write conflicts; MV2PL avoids this by separating read and write versions.

MV2PL also extends 2PL for writes by introducing certify locks: Instead of overwriting in-place, the writers in MV2PL buffer and acquire certify locks at commit time to serialize version creation. A certify lock is exclusive: only one transaction can hold it on a data item at any time. This prevents races and ensures only one new version per item per commit. 


Multiversion Mixed Method

Multiversioning gives the scheduler more flexibility, especially for read-only transactions. If the system knows in advance which transactions are queries (read-only) and which are updaters (perform writes), it can increase concurrency by handling them differently. This method uses MVTO for queries and Strict 2PL for updaters.

Queries behave like in MVTO: they are assigned timestamps at start, read the latest version less than their timestamp, and never block. Updaters use Strict 2PL for mutual exclusion during execution. At commit, the transaction manager assigns each updater a timestamp consistent with its position in the serialization graph, ensuring global consistency. This hybrid approach prevents the out-of-order write-write conflicts seen in pure MVTO. This also resembles concurrency control in modern OLAP systems: large analytical reads proceed without blocking, while updates are carefully serialized.

A key innovation here is the commit list. Each transaction maintains a commit list of versions it plans to write. When committing, the transaction must check for conflicts: It cannot write a version if a transaction with a later timestamp has already read an earlier version of that item. This would violate timestamp order. In a centralized system, the commit list can be scanned at commit time to detect such anomalies. In distributed systems, where this check can’t be performed atomically, the system needs to use a distributed coordination protocol like 2PC.

Solve a Geospatial (GIS) problem with CedarDB!

Motivation

If you share my interest in finding things, then I hope you will find this brief post worthwhile. I’ve been interested in databases for a while now and, during this time, I’ve consistently been intrigued by text and spatial data. When I got my hands on CedarDB I was very excited about its potential; then I heard a request from a user for a geospatial-related feature. He was kind enough to share his specific need with me, which was essentially: given a point, find all other points located within a specified distance. (In this exercise, we’ll show we can do this in about 10 ms on a 9M-row table.)

May 27, 2025

Postgres 18 beta1: small server, cached Insert Benchmark

I recently published results for Postgres 18 beta1 on a small server using sysbench with a cached and IO-bound workload. This post has results for the Insert Benchmark on a small server with a cached workload and low concurrency.

tl;dr - for 17.5 vs 18 beta

  • the l.i1 benchmark step (write-only with inserts and deletes) was ...
    • 5% slower in 18 beta1 with io_method=sync
    • ~10% slower in 18 beta1 with io_method= worker or io_uring
  • the point query benchmark steps (qp100, qp500, qp1000) were ...
    • 1% or 2% slower in 18 beta1 when using io_method= sync or worker
    • ~6% slower in 18 beta1 when using io_method=io_uring
tl;dr for 14.0 through 18 beta1
  • l.x (create index) is ~1.2X faster in 17.5 vs 14.0
  • l.i1, l.i2 (write-only) are ~5% slower in 17.5 vs 14.0
  • qp100, qp500, qp1000 (point query) are 1% to 3% slower in 17.5 vs 14.0

Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions  14.0, 14.18, 15.0, 15.13, 16.0, 16.9, 17.0, 17.5 and 18 beta1.

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM and one NVMe device for the database. The OS has been updated to Ubuntu 24.04 -- I used 22.04 prior to that. More details on it are here.

For Postgres versions 14.0 through 17.5 the configuration files are in the pg* subdirectories here with the name conf.diff.cx10a_c8r32. For Postgres 18 beta1 the configuration files are here and I used 3 variations, which are here:
  • conf.diff.cx10b_c8r32
    • uses io_method='sync' to match Postgres 17 behavior
  • conf.diff.cx10c_c8r32
    • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
  • conf.diff.cx10d_c8r32
    • uses io_method='io_uring' to do async IO via io_uring
The Benchmark

The benchmark is explained here and is run with 1 client and 1 table with 20M rows.

The benchmark steps are:

  • l.i0
    • insert 20 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 40M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 10M rows are inserted and deleted per table.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
Results: overview

The performance reports are here for:
  • All versions -- 14.0 through 18 beta1
    • See here, this uses the results from 14.0 as the base version
  • Only 17.5 and 18 beta1
    • See here, this uses the results from 17.5 as the base version and there are three results for 18 beta1, one for each of the configurations listed above.
The summary sections linked above from the performance report have 3 tables. The first shows absolute throughput for each DBMS tested and each benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts and all systems sustained the target rates. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from either 14.0 or 17.5.
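
For example, with made-up numbers: if 18 beta1 does 950 point queries/s and 17.5 does 1000, the relative QPS is 950 / 1000 = 0.95, i.e. a 5% regression.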

When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.

Results: 17.5 and 18 beta1

The performance summary is here.

The summary is:
  • the l.i0 (initial load) step was ...
    • 1% or 2% faster in 18 beta1 vs 17.5
  • the create index step (l.x) was ...
    • as fast with 18 beta1 as with 17.5 when using io_method=sync
    • 2% slower in 18 beta1 when using the new io_method= worker or io_uring
  • the l.i1 step was ...
    • 5% slower in 18 beta1 with io_method=sync
    • ~10% slower in 18 beta1 with io_method =worker or =io_uring
  • the range query steps (qr100, qr500, qr1000) were ...
    • 1% to 3% slower in 18 beta1
  • the point query steps (qp100, qp500, qp1000) were ...
    • 1% or 2% slower in 18 beta1 when using io_method =sync or =worker
    • ~6% slower in 18 beta1 when using io_method=io_uring
For regressions in the l.i1 step
  • This step does inserts and deletes as fast as possible with 50 rows per transaction. The regressions were smaller for the l.i2 step that only changes 5 rows per transaction.
  • From vmstat and iostat metrics 18 beta1 uses more CPU per operation (see cpupq here)
For regressions in the point query steps (qp100, qp500, qp1000)
  • The worst regression is from 18 beta1 with io_method=io_uring and the CPU /operation there is the largest. See cpupq for qp100, qp500 and qp1000.
Results: 14.0 through 18 beta1

The performance summary is here.

For 17.5 vs 18 beta1 see the previous section.

For 14.0 through 17.5, QPS on ...
  • l.i0 (the initial load) is stable
  • l.x (create index) is ~1.2X faster in 17.5 vs 14.0
  • l.i1, l.i2 (write-only) is ~5% slower in 17.5 vs 14.0
  • qr100, qr500, qr1000 (range query) is similar between 17.5 and 14.0
  • qp100, qp500, qp1000 (point query) is 1% to 3% slower in 17.5 vs 14.0

Google Firestore with MongoDB compatibility

In this series, I tested multiple MongoDB emulations on top of SQL databases, and all failed to be compatible with a simple query like listing the last orders for one product in one country:

db.orders.find(
 { 
   country_id: 1, 
   order_details: { $elemMatch: { product_id: 15 } } 
 } ).sort({ 
   created_at: -1 
 }).limit(10)

Those emulations are syntax-compatible, but not behavior-compatible when it comes to performance and scalability. With MongoDB, such a query immediately finds the ten documents using the following index:

db.orders.createIndex({ 
   "country_id": 1, 
   "order_details.product_id": 1,
   "created_at": -1 
});

It is simple: you index for the equality predicates, on country and product, and add the creation date to get the keys ordered. That's how you guarantee predictable performance in OLTP: the response time depends on the size of the result, not on the size of the collection.
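
On MongoDB itself, this is easy to verify with explain; as a sketch (expected plan shape, assuming the same orders collection and index):

db.orders.find({
  country_id: 1,
  order_details: { $elemMatch: { product_id: 15 } }
}).sort({
  created_at: -1
}).limit(10).explain("executionStats").executionStats

The expectation is an IXSCAN on the index above, no in-memory SORT stage, and totalKeysExamined in the same order of magnitude as the ten documents returned.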

I tried the same on Google Firestore. Note that the simple find().sort().limit() syntax was not accepted by the Firestore Studio Editor, so I've run the equivalent aggregation pipeline:

db.orders.aggregate([  
{  
    $match: {  
      country_id: 1,  
      order_details: {  
        $elemMatch: { product_id: 15 }  
      }  
    }  
  },  
  {  
    $sort: { created_at: -1 }  
  },  
  {  
    $limit: 10  
  },  
])

Without an index, such a query does a full collection scan, sorts all documents, and discards all except the first ten:

Billing Metrics:
 read units: 0

Execution Metrics:
 results returned: 0
 request peak memory usage: 4.00 KiB (4,096 B)
 entity row scanned: 0

Tree:
• Drop
|  fields to drop: [__$3__]
|  records returned: 0
|  total latency: 3.00 ms
|
└── • Drop
    |  fields to drop: [__$6__, __$7__]
    |  records returned: 0
    |  total latency: 2.98 ms
    |
    └── • MajorSort
        |  fields: [__$6__ DESC]
        |  limit: 10
        |  peak memory usage: 4.00 KiB (4,096 B)
        |  records returned: 0
        |  total latency: 2.98 ms
        |
        └── • Extend
            |  expressions: [array_offset(__$7__, 0L) AS __$6__]
            |  records returned: 0
            |  total latency: 2.89 ms
            |
            └── • Extend
                |  expressions: [sortPaths([created_at DESC]) AS __$7__]
                |  records returned: 0
                |  total latency: 2.87 ms
                |
                └── • Drop
                    |  fields to drop: [__key__, __row_id__]
                    |  records returned: 0
                    |  total latency: 2.87 ms
                    |
                    └── • Extend
                        |  expressions: [_id(__name__) AS __id__]
                        |  records returned: 0
                        |  total latency: 2.87 ms
                        |
                        └── • Filter
                            |  expression: ($eq(country_id, 1) AND $eq(order_details, 15))
                            |  records returned: 0
                            |  total latency: 2.86 ms
                            |
                            └── • TableScan
                                   order: STABLE
                                   properties: * - { __create_time__, __update_time__ }
                                   source: **/orders
                                   records returned: 0
                                   records scanned: 0
                                   total latency: 2.84 ms

I've run this on an empty collection solely to examine the execution plan's shape and understand its scalability.

I attempted to create the index using mongosh, because Firestore provides protocol compatibility, but Google Cloud requires a credit card even for the free trial, which I did not accept. As a result, billing is not enabled and I can't use it:

firestore> db.orders.createIndex(
   { "country_id": 1, "order_details .product_id": 1, "created_at": -1 }
  );
MongoServerError[PermissionDenied]: Request is prohibited because billing is not enabled.
firestore> 

No problem, I was able to create the index from the console:

It was created after a few minutes:

I've run my query again and here is the execution plan (called EXPLANATION in Google Firestore):

Billing Metrics:
 read units: 0

Execution Metrics:
 results returned: 0
 request peak memory usage: 12.00 KiB (12,288 B)
 entity row scanned: 0
 index row scanned: 0

Tree:
• Drop
|  fields to drop: [__$3__]
|  records returned: 0
|  total latency: 2.04 s (2,040 ms)
|
└── • Drop
    |  fields to drop: [__$8__, __$9__]
    |  records returned: 0
    |  total latency: 2.04 s (2,040 ms)
    |
    └── • MajorSort
        |  fields: [__$8__ DESC]
        |  limit: 10
        |  peak memory usage: 4.00 KiB (4,096 B)
        |  records returned: 0
        |  total latency: 2.04 s (2,040 ms)
        |
        └── • Extend
            |  expressions: [array_offset(__$9__, 0L) AS __$8__]
            |  records returned: 0
            |  total latency: 2.04 s (2,040 ms)
            |
            └── • Extend
                |  expressions: [sortPaths([created_at DESC]) AS __$9__]
                |  records returned: 0
                |  total latency: 2.04 s (2,040 ms)
                |
                └── • Drop
                    |  fields to drop: [__key__, __row_id__]
                    |  records returned: 0
                    |  total latency: 2.04 s (2,040 ms)
                    |
                    └── • Extend
                        |  expressions: [_id(__name__) AS __id__]
                        |  records returned: 0
                        |  total latency: 2.04 s (2,040 ms)
                        |
                        └── • Filter
                            |  expression: $eq(order_details, 15)
                            |  records returned: 0
                            |  total latency: 2.04 s (2,040 ms)
                            |
                            └── • TableAccess
                                |  order: PRESERVE_INPUT_ORDER
                                |  properties: * - { __create_time__, __update_time__ }
                                |  peak memory usage: 4.00 KiB (4,096 B)
                                |  records returned: 0
                                |  total latency: 2.04 s (2,040 ms)
                                |
                                └── • UniqueScan
                                       index: **/orders (country_id ASC, order_details.product_id ASC, created_at DESC)@[id = CICAgJjF9oIK]
                                       keys: [country_id ASC, __$5__ ASC, created_at DESC, __key__ ASC]
                                       properties: Selection { __key__ }
                                       ranges: /
                                               |----[1]
                                       records returned: 0
                                       records scanned: 0
                                       total latency: 2.04 s (2,038 ms)

The index was used to scan a range (ranges: / |----[1]) for the "country_id" filter, apparently preserving some order (order: PRESERVE_INPUT_ORDER). This could be beneficial for pagination queries, allowing it to stop when the result limit is reached.
However, the product filter ($eq(order_details, 15)) is applied after fetching documents, resulting in unnecessary reads for a filter that was not covered by the index.
Next, projections are performed to include the "id" and remove the "rowid". It appears the preserved order does not relate to the key as expected for pagination, since some computation occurs to determine the sorting field (sortPaths([created_at DESC])).
Ultimately, a sort is performed on this calculated field (fields: [__$8__ DESC]). This execution plan reads all orders from a country before it can return the ten expected by the query. This is not scalable.

I ran this on an empty collection, and the index scan alone took two seconds (total latency: 2.04 s or 2,038 ms). Given this result, adding data to test larger workloads is unnecessary. The issue lies not in the quantity but in the quality of the compatibility, which is limited to very simple key-value queries, lacking the advantages of MongoDB’s flexible schema document model and multi-key index performance.

When creating the index, I checked 'multi-key' because, unlike MongoDB, it is not the same index that can be created on scalars and arrays. Let's try a non-multi-key one, even if it doesn't make sense, as the goal is to have multiple products per order:

The execution plan is similar except that it shows a SequentialScan on the index instead of UniqueScan:

Billing Metrics:
 read units: 0

Execution Metrics:
 results returned: 0
 request peak memory usage: 8.00 KiB (8,192 B)
 entity row scanned: 0
 index row scanned: 0

Tree:
• Drop
|  fields to drop: [__$3__]
|  records returned: 0
|  total latency: 20.36 ms
|
└── • Drop
    |  fields to drop: [__$8__, __$9__]
    |  records returned: 0
    |  total latency: 20.35 ms
    |
    └── • MajorSort
        |  fields: [__$8__ DESC]
        |  limit: 10
        |  peak memory usage: 4.00 KiB (4,096 B)
        |  records returned: 0
        |  total latency: 20.35 ms
        |
        └── • Extend
            |  expressions: [array_offset(__$9__, 0L) AS __$8__]
            |  records returned: 0
            |  total latency: 20.29 ms
            |
            └── • Extend
                |  expressions: [sortPaths([created_at DESC]) AS __$9__]
                |  records returned: 0
                |  total latency: 20.27 ms
                |
                └── • Drop
                    |  fields to drop: [__key__, __row_id__]
                    |  records returned: 0
                    |  total latency: 20.27 ms
                    |
                    └── • Extend
                        |  expressions: [_id(__name__) AS __id__]
                        |  records returned: 0
                        |  total latency: 20.26 ms
                        |
                        └── • Filter
                            |  expression: $eq(order_details, 15)
                            |  records returned: 0
                            |  total latency: 20.26 ms
                            |
                            └── • TableAccess
                                |  order: PRESERVE_INPUT_ORDER
                                |  properties: * - { __create_time__, __update_time__ }
                                |  peak memory usage: 4.00 KiB (4,096 B)
                                |  records returned: 0
                                |  total latency: 20.25 ms
                                |
                                └── • SequentialScan
                                       index: **/orders (country_id ASC, order_details.product_id ASC, created_at DESC)@[id = CICAgJjFqZMK]
                                       key ordering length: 4
                                       keys: [country_id ASC, __$5__ ASC, created_at DESC, __key__ ASC]
                                       properties: Selection { __key__ }
                                       ranges: /
                                               |----[1]
                                       records returned: 0
                                       records scanned: 0
                                       total latency: 20.16 ms

My interpretation is that, in all cases, Firestore with MongoDB compatibility cannot use indexes to cover a sort. Either it is multi-key, and entries have to be deduplicated, or it is single key, but the index only covers the filtering on the country, not the product or the creation date.

In summary, another database claims compatibility with MongoDB by using its protocol and offering a similar API for storing documents. This reinforces MongoDB's status as the de facto standard for document databases. However, as discussed in previous posts, storing JSON in a relational database does not convert it into a document database, and similarly, storing JSON in a key-value data store does not replace MongoDB. If your MongoDB application runs on one of those emulations, you are likely using it as a key-value datastore without using the full potential of a general-purpose document database.

Beyond Guesswork: Enterprise-Grade PostgreSQL Tuning with pg_stat_statements

Something’s slowing your database down, and everyone feels it. Dashboards drag. Reports run late. Engineers start rebooting services just to buy time. Nobody’s saying “the database is broken,” but something isn’t right. You know there’s a problem. What you don’t have is visibility. PostgreSQL isn’t going to raise its hand and tell you which queries […]

Sort on Array with Multi-Key Index

In the previous post, we discussed how MongoDB indexes retrieve documents ordered by a specific field, using published dates from a collection of YouTube video statistics as an example. Unlike many databases that allow only a single value per document, MongoDB's flexible schema also supports indexing within nested arrays.

The dataset I imported was not designed for searching on the video views, only to store them per video. It contains two arrays: one for days ("day.data") and another for corresponding daily view counts ("views.daily.data"). This schema was intended to retrieve all stats for a specific video at once. However, this blog series aims to demonstrate how a MongoDB document model can support a variety of additional use cases through secondary indexes, without modifying the collection schema.

Access pattern: videos with the highest daily views

This is straightforward, just create an index on the array of daily views data, using dot notation to define the path:

db.youstats.createIndex({ 
 "views.daily.data": -1      , // for Sort on maximum daily view
 commentsNumber: 1           , // for additional filter on comments
}); 

I created a descending index since most of my queries focus on the highest number of views. MongoDB indexes can be scanned both forward and backward.
When the indexed field is an array, the index has multiple keys per document, but sorting must use one key per document. The semantics are simple:

  • A descending sort orders documents by the highest value in the array.
  • An ascending sort orders documents by the lowest value in the array.

Another way to think about it is that the index scan uses the first key encountered for each document and skips the others. On a forward scan with a descending index, it is the greatest value.
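
A tiny, hypothetical example (a throw-away collection, not part of this dataset) shows both rules:

db.arraydemo.insertMany([
  { _id: "A", views: { daily: { data: [ 5, 90, 10 ] } } },
  { _id: "B", views: { daily: { data: [ 40, 60 ] } } }
])
// descending: "A" comes first because its highest value (90) beats "B"'s (60)
db.arraydemo.find({}, { _id: 1 }).sort({ "views.daily.data": -1 })
// ascending: "A" comes first again because its lowest value (5) is below "B"'s (40)
db.arraydemo.find({}, { _id: 1 }).sort({ "views.daily.data": 1 })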

To find the Top-3 documents by daily views, here is a simple query:

db.youstats.find({},{
  author: 1               ,
  title: 1                ,
  duration: 1             ,
  type: 1                 ,
  publishedDate: 1        ,
  commentsNumber: 1       ,
  // "views.daily": 1        ,
  // "day": 1                ,
}).sort({
  "views.daily.data": -1 
}).limit(3)

[
  {
    _id: 'ASO_zypdnsQ',
    commentsNumber: 1122436,
    author: 'officialpsy',
    publishedDate: '2013-04-13T11:59:04.000Z',
    title: 'PSY - GENTLEMAN M/V',
    duration: 234,
    type: 'video/3gpp'
  },
  {
    _id: 'My2FRPA3Gf8',
    commentsNumber: 1125301,
    author: 'MileyCyrusVEVO',
    publishedDate: '2013-09-09T16:00:38.000Z',
    title: 'Miley Cyrus - Wrecking Ball',
    duration: 222,
    type: 'application/x-shockwave-flash'
  },
  {
    _id: 'YoB8t0B4jx4',
    commentsNumber: 39370,
    author: 'SA Wardega',
    publishedDate: '2014-09-04T14:28:22.000Z',
    title: 'Mutant Giant Spider Dog (SA Wardega)',
    duration: 239,
    type: 'video/3gpp'
  }
]

A document shows up when one of the daily view values in its array is among the highest. For example, the 3rd result was:

  {
    _id: 'YoB8t0B4jx4',
    commentsNumber: 39370,
    author: 'SA Wardega',
    publishedDate: '2014-09-04T14:28:22.000Z',
    title: 'Mutant Giant Spider Dog (SA Wardega)',
    day: {
      data: [
        Long('1409788800000'), Long('1409875200000'), Long('1409961600000'), Long('1410048000000'), Long('1410134400000'), Long('1410220800000'), Long('1410307200000'), Long('1410393600000'), Long('1410480000000'), Long('1410566400000'), Long('1410652800000'), Long('1410739200000'), Long('1410825600000'), Long('1410912000000'), Long('1410998400000'), Long('1411084800000'), Long('1411171200000'), Long('1411257600000'), Long('1411344000000'), Long('1411430400000'), Long('1411516800000'), Long('1411603200000'), Long('1411689600000'), Long('1411776000000'), Long('1411862400000'), Long('1411948800000'), Long('1412035200000'), Long('1412121600000'), Long('1412208000000'), Long('1412294400000'), Long('1412380800000'), Long('1412467200000'), Long('1412553600000'), Long('1412640000000'), Long('1412726400000'), Long('1412812800000'), Long('1412899200000'), Long('1412985600000'), Long('1413072000000'), Long('1413158400000'), Long('1413244800000'), Long('1413331200000'), Long('1413417600000'), Long('1413504000000'), Long('1413590400000'), Long('1413676800000'), Long('1413763200000'), Long('1413849600000'), Long('1413936000000'), Long('1414022400000'), Long('1414108800000'), Long('1414195200000'), Long('1414281600000'), Long('1414368000000')
      ]
    },
    duration: 239,
    type: 'video/3gpp',
    views: {
      daily: {
        data: [
          2964062, 17799094, 19526335, 14604160, 9606241, 5959851,  4090643,  3419126,  2907521, 2626169, 2195691,  1518943,  1251086,  1128994,  958318, 861349,   785797,   628364,   506154,  564079, 445417,   474349,   498093,   589038,  444256, 363379,   329318,   313375,   333627,  335226, 354050,   322087,   239715,   228562,  213420, 201771,   213078,   247715,   228587,  183759, 168511,   169992,   199282,   326091,  347602, 335237,   290271,   242939,   223959,  219971, 249009,   277773,   279301,   220609
        ]
      }
    }
  }

As this ran in a couple of milliseconds, it is obvious that it didn't scan millions of documents and that it used the index efficiently. This is easy to check with explain():

db.youstats.find({},{
  author: 1               ,
  title: 1                ,
  duration: 1             ,
  type: 1                 ,
  publishedDate: 1        ,
  commentsNumber: 1       ,
  "views.daily": 1        ,
  "day": 1                ,
}).sort({
  "views.daily.data": -1 
}).limit(3).explain("executionStats").executionStats

...
  nReturned: 3,
  executionTimeMillis: 1,
  totalKeysExamined: 7,
  totalDocsExamined: 3,
...
    stage: 'LIMIT',
    nReturned: 3,
    limitAmount: 3,
...
      stage: 'PROJECTION_SIMPLE',
      nReturned: 3,
...
        stage: 'FETCH',
        nReturned: 3,
...
          stage: 'IXSCAN',
          nReturned: 3,
          keyPattern: { 'views.daily.data': -1, commentsNumber: 1 },
          isMultiKey: true,
          multiKeyPaths: {
            'views.daily.data': [ 'views.daily.data' ],
            commentsNumber: []
          },
          direction: 'forward',
          indexBounds: {
            'views.daily.data': [ '[MaxKey, MinKey]' ],
            commentsNumber: [ '[MinKey, MaxKey]' ]
          },
          keysExamined: 7,
          seeks: 1,
          dupsTested: 7,
          dupsDropped: 4
        }
      }
    }
  }
}

The index contains an entry for each "views.daily.data" of all documents, starting with the highest daily view. The first key examined returns the recordid of the document with the highest daily view. The next key may correspond to the same video, which may have a high view count on another day, or another one. To return 3 distinct documents, the scan must eliminate duplicate recordid. Here 4 duplicates were dropped so that 3+4=7 keys have been read in total.

MongoDB utilizes WiredTiger B-Tree indexes not only to locate specific values, but also to navigate efficiently through the ranges of index bounds. Even if we were less lucky and had to examine 365 keys per document, reading one thousand index entries is still fast, as only 3 documents have to be fetched.

Access pattern: videos with the lowest daily views and no comments

The same index can also identify videos with the lowest daily views. To exclude documents with no data, I only keep those where the field exists:

db.youstats.find({
  "views.daily.data": { $exists: true } ,
  "commentsNumber": { $lt: 1 } ,
},{
  author: 1               ,
  title: 1                ,
  duration: 1             ,
  type: 1                 ,
  publishedDate: 1        ,
  commentsNumber: 1       ,
  "views.daily": 1        ,
  //"day": 1                ,
}).sort({
  "views.daily.data": 1 
}).limit(3)

This query may not yield useful results, as it returns videos with no views on a given day, even if they had many views previously. Remember, ascending sort retrieves the lowest field value in the array, while descending sort retrieves the highest.

In this series on indexing MongoDB collections, I aim to demonstrate that a document model can support more use cases than initially anticipated during schema design. Although I imported a dataset not optimized for specific queries, introducing a couple of secondary indexes can enhance performance for various access patterns.
What's crucial during schema design is to ensure that all fields used for filtering are contained within the same document, allowing the index to retrieve the minimum set of documents efficiently.

For information on how the sort order behaves with different datatypes, please refer to the documentation: Comparison/Sort Order. This applies only to MongoDB, not to its emulations that claim compatibility. I tested Amazon DocumentDB, Azure CosmosDB, Oracle Database, and FerretDB, but none could effectively cover the sort operation with an index, and they all ended up scanning the entire collection for the queries presented, which is slower and cannot scale.

May 26, 2025

Equality with Multiple Values, Preserving Sort for Pagination

In the previous post, I've created the following index to quickly find the last videos in one category (reminder: 1 means ASC, -1 means DESC):

db.youstats.createIndex({ 
 category: 1               , // for Equality on category
 publishedDate: -1         , // for Sort within each category
 duration: 1               , // for additional Range filtering
}); 

MongoDB can use this index even for queries that do not have an equality predicate on "category". Not many databases offer this possibility, and it helps to limit the number of indexes to create for an OLTP application.

Access pattern: videos in a list of categories

I ran the same query as in the previous post, but with a list of categories instead of one: Music, Entertainment, and Film. This fits the same access pattern and doesn't need another index. With up to 200 values in the list, MongoDB runs an index scan for each value and merges the sorted results:

db.youstats.find({  
  category: { $in: ["Music",  "Entertainment", "Film"] }  ,
  duration: { $gt: 10 }                                   , 
},{
  category: 1                 ,
  author: 1                   ,
  title: 1                    ,
  duration: 1                 ,
  type: 1                     ,
  publishedDate: 1            ,
}).sort({ publishedDate: -1 }).limit(10).explain("executionStats")

In total, 12 keys were read to find the 10 documents for the result, which is very efficient:

  nReturned: 10,
  executionTimeMillis: 5,
  totalKeysExamined: 12,
  totalDocsExamined: 10,

Each value from the list was a seek into the B-Tree for each category:

                indexBounds: {
                  category: [ '["Entertainment", "Entertainment"]' ],
                  publishedDate: [ '[MaxKey, MinKey]' ],
                  duration: [ '(10, inf.0]' ]
                },
                keysExamined: 6,
                seeks: 1,
...
                indexBounds: {
                  category: [ '["Film", "Film"]' ],
                  publishedDate: [ '[MaxKey, MinKey]' ],
                  duration: [ '(10, inf.0]' ]
                },
                keysExamined: 1,
                seeks: 1,
...
                indexBounds: {
                  category: [ '["Music", "Music"]' ],
                  publishedDate: [ '[MaxKey, MinKey]' ],
                  duration: [ '(10, inf.0]' ]
                },
                keysExamined: 5,
                seeks: 1,

Such OR-expansion (the $in could be written as an $or) exists in other databases, but MongoDB goes further by using a sort-merge algorithm on top of it to preserve the order of each index scan (the "explode for sort" optimization), visible in the execution plan as a SORT_MERGE stage. This sort preservation is vital in OLTP scenarios, especially for pagination queries, as it avoids scanning and sorting all documents just to retrieve one page.
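
To see this in the plan, here is a minimal mongosh sketch (assumptions: the stage layout matches the explain output above, and internalQueryMaxScansToExplode is the server parameter controlling the 200-value threshold on recent versions). It walks the winning plan and prints its stages, where a SORT_MERGE should appear above one IXSCAN per category:

// Sketch: print the stage tree of the winning plan for the $in query above.
const winning = db.youstats.find({
  category: { $in: ["Music", "Entertainment", "Film"] },
  duration: { $gt: 10 }
}).sort({ publishedDate: -1 }).limit(10).explain().queryPlanner.winningPlan;

// With the slot-based engine, the classic stage tree may sit under queryPlan.
const root = winning.queryPlan ?? winning;

// Collect stage names recursively: single-child stages use inputStage,
// multi-child stages such as SORT_MERGE use inputStages.
function stageNames(stage) {
  const children = stage.inputStages ?? (stage.inputStage ? [stage.inputStage] : []);
  return [stage.stage, ...children.flatMap(stageNames)];
}
print(stageNames(root).join(" <- "));

// The "explode for sort" threshold is exposed as a server parameter
// (parameter name assumed; reading it requires admin privileges):
db.adminCommand({ getParameter: 1, internalQueryMaxScansToExplode: 1 });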

This technique requires an explicit list of values so that the query planner can define the index bounds, which means it cannot be used directly with the opposite predicates $nin or $ne. However, another optimization makes it very efficient to obtain such a list.

Access pattern: distinct categories (with Loose Index Scan)

This gets the distinct values of "category":

db.youstats.distinct("category")
[
  '3',         '4',
  '5',         'Animals',
  'Autos',     'Comedy',
  'Education', 'Entertainment',
  'Film',      'Games',
  'Howto',     'Music',
  'News',      'Nonprofit',
  'People',    'Shows',
  'Sports',    'Tech',
  'Trailers',  'Travel'
]

It is extremely fast. To explain why, I run the same grouping in an aggregation pipeline with explain:

db.youstats.aggregate([  
  { $group: { _id: "$category" } }  
]).explain("executionStats").stages[0]["$cursor"].executionStats

{
  executionSuccess: true,
  nReturned: 20,
  executionTimeMillis: 0,
  totalKeysExamined: 20,
  totalDocsExamined: 0,
  executionStages: {
    isCached: false,
    stage: 'PROJECTION_COVERED',
    nReturned: 20,
    executionTimeMillisEstimate: 0,
    works: 21,
    advanced: 20,
    transformBy: { category: 1, _id: 0 },
    inputStage: {
      stage: 'DISTINCT_SCAN',
      nReturned: 20,
      executionTimeMillisEstimate: 0,
      works: 21,
      advanced: 20,
      keyPattern: { category: 1, publishedDate: -1, duration: 1 },
      indexName: 'category_1_publishedDate_-1_duration_1',
      isMultiKey: false,
      multiKeyPaths: { category: [], publishedDate: [], duration: [] },
      direction: 'forward',
      indexBounds: {
        category: [ '[MinKey, MaxKey]' ],
        publishedDate: [ '[MaxKey, MinKey]' ],
        duration: [ '[MinKey, MaxKey]' ]
      },
      keysExamined: 20
    }
  }
}

By examining only 20 keys, MongoDB was able to find the distinct values of the first field of the index, skipping over duplicate values while scanning the B-Tree. Not many databases can do that (as explained in the PostgreSQL Wiki), and this loose index scan can be combined with the sort-preserving $in access pattern we have seen above.

Access pattern: for any category, sorted by publishing date

When you know that there are fewer than 200 distinct values in the first field of the key, you can use the same index even without a filter on that field, by injecting the result of db.youstats.distinct("category"), which is fast:

db.youstats.find({  
  category: { $in: db.youstats.distinct("category") },
  duration: { $gt: 10 }, 
},{
  category: 1                 ,
  author: 1                   ,
  title: 1                    ,
  duration: 1                 ,
  type: 1                     ,
  publishedDate: 1            ,
}).sort({ publishedDate: -1 }).limit(10).explain("executionStats")

It had to examine only 29 keys out of one million to get the ten documents for the result:

    nReturned: 10,
    executionTimeMillis: 1,
    totalKeysExamined: 29,
    totalDocsExamined: 10,

This can also be used to find all categories except one:

db.youstats.distinct("category").filter(
 category => category !== "People"
)
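
As a sketch that only composes the commands already shown, the filtered list can be fed back into the same $in access pattern, so an "all but one category" query still benefits from the sort-preserving merge:

// Sketch: exclude one category while keeping the sort-merge access pattern.
const cats = db.youstats.distinct("category").filter(c => c !== "People");
db.youstats.find({
  category: { $in: cats }     ,
  duration: { $gt: 10 }       ,
},{
  category: 1                 ,
  author: 1                   ,
  title: 1                    ,
  duration: 1                 ,
  type: 1                     ,
  publishedDate: 1            ,
}).sort({ publishedDate: -1 }).limit(10)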

Of course, an index that doesn't start with "category" would serve this query slightly better, but not having to create a new index is a big advantage for general-purpose applications that cover multiple use cases.
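
For illustration only, such an index could look like the following (hypothetical; it is not created in this series, and adding it is exactly the extra index the paragraph above argues you can avoid):

db.youstats.createIndex({
 publishedDate: -1         , // leading sort field: read in order and stop after 10 matches
 category: 1               , // filter out the excluded category from the index entries
 duration: 1               , // cover the range filter on duration
});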

Skip Scan for all categories

To run my query for all categories, I extracted the list of values with a DISTINCT_SCAN and injected it into the $in so the plan could use a SORT_MERGE. I could instead read all categories with category: { $gt: MinKey }, but that is a single IXSCAN returning entries sorted by category before publishedDate, so the result must go through a SORT stage:

db.youstats.find({  
  category: { $gt: MinKey }   ,
  duration: { $gt: 10 }       , 
},{
  category: 1                 ,
  author: 1                   ,
  title: 1                    ,
  duration: 1                 ,
  type: 1                     ,
  publishedDate: 1            ,
}).sort({ publishedDate: -1 }).limit(10).explain("executionStats")

...
        stage: 'FETCH',
        nReturned: 10,
        executionTimeMillisEstimate: 406,
        works: 1006099,
        advanced: 10,
        needTime: 1006088,
        docsExamined: 10,
...
          stage: 'SORT',
          nReturned: 10,
          executionTimeMillisEstimate: 405,
          works: 1006099,
          advanced: 10,
          needTime: 1006088,
          sortPattern: { publishedDate: -1 },
          memLimit: 104857600,
          limitAmount: 10,
          totalDataSizeSorted: 1506,
          inputStage: {
            stage: 'IXSCAN',
            nReturned: 991323,
            executionTimeMillisEstimate: 237,
            works: 1006088,
            advanced: 991323,
            needTime: 14764,
            keyPattern: { category: 1, publishedDate: -1, duration: 1 },
            indexName: 'category_1_publishedDate_-1_duration_1',
            isMultiKey: false,
            multiKeyPaths: { category: [], publishedDate: [], duration: [] },
            direction: 'forward',
            indexBounds: {
              category: [ '(MinKey, MaxKey]' ],
              publishedDate: [ '[MaxKey, MinKey]' ],
              duration: [ '(10, inf.0]' ]
            },
            keysExamined: 1006087,
            seeks: 14765,
...

Even though it read and sorted a million index entries, this query executed in less than half a second because it didn't have to fetch those documents: the index covered the filtering and sort fields, and only the final ten documents were fetched. Even when an index is not ideal for avoiding a sort, it may still be sufficient for pagination queries. Always look at the execution plan and check that the FETCH stage doesn't read more documents than necessary.
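
A minimal sketch of that check, using the same field names as in the executionStats output above:

// Sketch: spot over-fetching by comparing docsExamined with nReturned.
const stats = db.youstats.find({
  category: { $gt: MinKey }  ,
  duration: { $gt: 10 }      ,
}).sort({ publishedDate: -1 }).limit(10)
  .explain("executionStats").executionStats;
print(`returned=${stats.nReturned} keysExamined=${stats.totalKeysExamined} docsExamined=${stats.totalDocsExamined}`);
// docsExamined close to nReturned (10 here) means only the final page was fetched.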

This post illustrates that you may not need to create separate indexes for new queries. An optimal index for one use case can still effectively serve others. In these examples, the most selective filtering was achieved through sort().limit() pagination, making the best index include the sorting field in its key. A prefix with few values still provides good performance, whether used with $eq or $in, and a suffix can cover additional filters or projections.

Can I use Supabase for analytics?

Supabase is a popular managed Postgres with a bunch of great features. Learn how to use Supabase to build simple user-facing analytics systems, and when to pair Supabase with technologies optimized for analytics.