a curated list of database news from authoritative sources

April 30, 2025

Querying embedded arrays in JSON (PostgreSQL JSONB and MongoDB documents)

When working with document-based data structures, the fields at the root can be indexed using simple expression indexes. However, when an array is present in the path, representing a One-to-Many relationship, PostgreSQL requires a GIN index and queries written with JSON path operators, which can use that index and are therefore more efficient than standard SQL/JSON queries.

Example

I create the following table to store books. I decided to embed more information with a flexible schema and added a "data" column to store JSON data:

create table books (  
   primary key(book_id),  
   book_id bigint,  
   title text,  
   data jsonb  
);

I insert one book and add some reviews in my flexible schema document:

insert into books values (  
   8675309,  
   'Brave New World',  
   '{ "reviews":[  
      { "name": "John", "text": "Amazing!" },  
      { "name": "Jane", "text": "Incredible book!" }  
   ] }'
);

There’s no need for another table, as reviews are inherently linked to the books they discuss. A review is never displayed without its book, making a separate table unnecessary. It may look like a violation of the first normal form, but no update anomaly is possible here because there is no duplication. From a normalization point of view, this is not very different from storing text, which is an array of characters, or embeddings, which are arrays of numbers.

Inefficient query with SQL join

If you're comfortable with SQL, you might want to query this structure using SQL. Simply unnest the JSON document array and use it like a relational table:

SELECT DISTINCT title FROM books    
JOIN LATERAL jsonb_array_elements(books.data->'reviews') AS review 
ON review->>'name' = 'John'
;  

jsonb_array_elements expands a JSON array into rows for SQL queries. The lateral join adds book information, the ON or WHERE clause filters by reviewer name, and DISTINCT removes duplicate titles. This is familiar SQL syntax, but it cannot use an index to filter on the reviewer name before unnesting, requiring a read of all rows and documents:

                              QUERY PLAN                               
-----------------------------------------------------------------------
 Unique
   ->  Sort
         Sort Key: books.title
         ->  Nested Loop
               ->  Seq Scan on books
               ->  Function Scan on jsonb_array_elements review
                     Filter: ((value ->> 'name'::text) = 'John'::text)

While this is valid SQL syntax, and JSON is a valid SQL datatype, they do not work well together, because a relational database is not a document database. When using documents in PostgreSQL, you must learn how to query and index them.

Note that jsonb_array_elements is not part of the SQL standard, but PostgreSQL 17 introduced JSON_TABLE, which is. The query can be rewritten as:

SELECT books.title  
FROM books  
JOIN JSON_TABLE(  
  books.data->'reviews',  
  '$[*]' COLUMNS (  
    name TEXT PATH '$.name' 
  )  
) AS review  
ON review.name = 'John'
;  

This is the standard SQL/JSON way to query documents. Unfortunately, it is not efficient as no index scan is possible. Don't forget that SQL indexes are not part of the SQL standard.

Efficient query with JSON operators

To efficiently query JSONB data for reviews by a specific person, we need to utilize PostgreSQL's containment operator @> instead of relying on standard SQL:

SELECT title FROM books  
WHERE data->'reviews' @> '[{"name": "John"}]'
;

Now that I filter directly on the table without transforming the document, I can create an index. Since there can be multiple keys per table row, an inverted index is necessary:

CREATE INDEX ON books USING gin ((data->'reviews') jsonb_path_ops)
;

With an index for the JSON path operators, each key corresponds to an item in the array. This can be utilized when querying with an equality filter on the embedded array field:

                                   QUERY PLAN                                   
--------------------------------------------------------------------------------
 Bitmap Heap Scan on books
   Recheck Cond: ((data -> 'reviews'::text) @> '[{"name": "John"}]'::jsonb)
   ->  Bitmap Index Scan on books_expr_idx
         Index Cond: ((data -> 'reviews'::text) @> '[{"name": "John"}]'::jsonb)

GIN (Generalized Inverted Index) is designed for datatypes that contain multiple keys, such as array items, or words, stems, and trigrams in text. While powerful, GIN has limitations: it cannot support range queries, optimize ORDER BY clauses, or cover projections (no Index Only Scan).
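To illustrate the range-query limitation, here is a sketch against the books table above: a prefix search on the reviewer name cannot use the jsonb_path_ops GIN index, because that operator class only indexes containment (@>), so the query has to fall back to unnesting and filtering every document:

```sql
-- No index support here: jsonb_path_ops only indexes containment (@>),
-- so this prefix filter must read and unnest all documents
SELECT DISTINCT title
FROM books
JOIN LATERAL jsonb_array_elements(books.data->'reviews') AS review
  ON review->>'name' LIKE 'Joh%';
```

This is the same access pattern that MongoDB, with its regular multi-key indexes, can serve with a range scan, as shown later with the $regex example.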

Comparison with a document database

While PostgreSQL offers flexibility in storing and indexing JSON documents, it does not replace a document database where documents are native types. For instance, in MongoDB the fields within an array are used like any other fields. I insert a similar document in MongoDB:

db.books.insertOne({  
  book_id: 8675309,  
  title: "Brave New World",  
  reviews: [  
    { name: "John", text: "Amazing!" },  
    { name: "Jane", text: "Incredible book!" }  
  ]  
}); 

There is no need for special operators, and I can query the embedded field like any other field:

db.books.find(
  { "reviews.name": "John" } // filter
, { title: 1, _id: 0 }      // projection
);  

[ { title: 'Brave New World' } ]

There is no need for special index type, and I can index the embedded field like any other field:

db.books.createIndex({ "reviews.name": 1 })
;  

The execution plan confirms that the index is used to filter on "reviews.name":

db.books.find(
  { "reviews.name": "John" } // filter
, { title: 1, _id: 0 }      // projection
).explain().queryPlanner.winningPlan
; 

{
  isCached: false,
  stage: 'PROJECTION_SIMPLE',
  transformBy: { title: 1, _id: 0 },
  inputStage: {
    stage: 'FETCH',
    inputStage: {
      stage: 'IXSCAN',
      keyPattern: { 'reviews.name': 1 },
      indexName: 'reviews.name_1',
      isMultiKey: true,
      multiKeyPaths: { 'reviews.name': [ 'reviews' ] },
      isUnique: false,
      isSparse: false,
      isPartial: false,
      indexVersion: 2,
      direction: 'forward',
      indexBounds: { 'reviews.name': [ '["John", "John"]' ] }
    }
  }
}

It is a regular index; its only particularity is that it allows multi-key entries.

Unlike PostgreSQL's GIN index, which requires a Bitmap Scan that doesn't maintain entry order, MongoDB employs a regular index that supports range queries. For instance, if I only know the beginning of a name, I can use a regular expression to filter the results efficiently:

db.books.find(  
  { "reviews.name": { $regex: "^Joh" } }, // filter using regex  
  { title: 1, _id: 0 }                    // projection  
).explain().queryPlanner.winningPlan
;  

{
  isCached: false,
  stage: 'PROJECTION_SIMPLE',
  transformBy: { title: 1, _id: 0 },
  inputStage: {
    stage: 'FETCH',
    inputStage: {
      stage: 'IXSCAN',
      keyPattern: { 'reviews.name': 1 },
      indexName: 'reviews.name_1',
      isMultiKey: true,
      multiKeyPaths: { 'reviews.name': [ 'reviews' ] },
      isUnique: false,
      isSparse: false,
      isPartial: false,
      indexVersion: 2,
      direction: 'forward',
      indexBounds: { 'reviews.name': [ '["Joh", "Joi")', '[/^Joh/, /^Joh/]' ] }
    }
  }
}

MongoDB utilized the index efficiently, as the query planner transformed the regular expression /^Joh/ into a range scan, specifically ["Joh", "Joi").

Conclusion

When comparing PostgreSQL and MongoDB, it is essential to understand their querying and indexing mechanisms, and not rely only on their ability to store JSON.
Like other RDBMS, PostgreSQL excels as a centralized, monolithic database utilized by multiple applications. With its specialized JSONB functions and GIN indexes, it adds some flexibility to the normalized tables.
MongoDB is ideal for development agility, particularly in microservices and domain-driven design, where access patterns are well-defined, but the application evolves with high velocity. Its document model aligns well with business objects.

Ultimately, the choice of a database should be based on your team's expertise, comfort with the database's syntax, data modeling, and optimal indexing, and on the availability of new hires and educational resources. The best database for a specific workload will not perform as expected if there's no expertise to write efficient queries, read execution plans, and index the access paths. Another database may be good enough when it fits better with the development organization, provides a better developer experience, and simplifies optimization.

Benchmarking PostgreSQL: The Hidden Cost of Over-Indexing

Indexing is one of PostgreSQL’s most effective tools for improving query performance, but like any powerful tool, it can cause real problems when overused. A while ago, my colleague Jobin wrote a blog post exploring the negative side effects of over-indexing in PostgreSQL: PostgreSQL Indexes Can Hurt Performance: Exploring the Negative Effects and the Costs […]

April 29, 2025

Announcing Vitess 22

Announcing Vitess 22 # The Vitess maintainers are happy to announce the release of version 22.0.0, along with version 2.15.0 of the Vitess Kubernetes Operator. This release is the first to benefit from a 6-month-long development cycle, after our recent change to the release cadence. Version 22.0.0 comes with significant enhancements to query serving and cluster management. These changes have allowed Vitess to be more performant and easier to operate compared to version 21.

April 28, 2025

PostgreSQL aborts the transactions on error

You may be surprised by this in PostgreSQL:

postgres=!# commit;
ROLLBACK
postgres=#

Yes, I issued a COMMIT but got a ROLLBACK!
I'll demo how it happened and how to avoid it. In short, the transaction was already rolled back and the only possible command to run is a ROLLBACK, which is implicit when terminating the transaction.

I created a table, started a transaction and inserted one row:

postgres=# create table demo ( id int primary key );
CREATE TABLE
postgres=# begin transaction;
BEGIN
postgres=*# insert into demo values (1);
INSERT 0 1
postgres=*#

The * in the prompt (which comes from the %x in the default %/%R%x%# PROMPT1) shows that I'm still in a transaction.

I try to add the same row, with the same key, which violates the primary key constraint:

postgres=*# insert into demo values (1);
ERROR:  duplicate key value violates unique constraint "demo_pkey"
DETAIL:  Key (id)=(1) already exists.
postgres=!#

The ! in the prompt shows that the transaction failed.

I can check its status from another transaction (trying it within a failed transaction would have raised ERROR: current transaction is aborted, commands ignored until end of transaction block):

postgres=!# \! psql -c "select pid, application_name, state from pg_stat_activity where wait_event='ClientRead'"

  pid  | application_name |             state             
-------+------------------+-------------------------------
 66420 | psql             | idle in transaction (aborted)
(1 row)

postgres=!#

As the transaction has been aborted, it has released all its locks. To verify this, I insert the same row from another session:

postgres=!# \! psql -c "insert into demo values (1)"

INSERT 0 1

postgres=!#

The only thing I can do is end the transaction block, with ABORT, ROLLBACK, or even COMMIT:

postgres=!# commit;
ROLLBACK
postgres=# 

If you're used to Oracle Database, you might find it surprising that in an interactive transaction, you must restart from the beginning even when only one statement failed after others completed. Oracle Database automatically creates a savepoint before each user call and rolls back to it in case of an error, so that the user can continue with another statement once aware of the error.

In PostgreSQL, creating implicit savepoints is the client's or driver's responsibility.

For instance, PgJDBC can enable autosave=on to achieve this. However, it's important to note that using savepoints in PostgreSQL may be more resource-intensive than in other databases.

Another example is a PL/pgSQL statement with an exception block that creates an implicit savepoint to roll back the main block before running the exception block. This differs from Oracle Database, which rolls back only the statement that failed when continuing to the exception block of PL/SQL.

If the exceptions are managed by your application code, you must use savepoints to achieve the same.
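Managed in application code, this pattern looks like the following sketch, reusing the demo table from above: an explicit savepoint is set before the statement that may fail, and on error the transaction rolls back only to that savepoint and continues.

```sql
begin transaction;
insert into demo values (1);
savepoint before_risky_statement;              -- set by the application before each risky statement
insert into demo values (1);                   -- fails: duplicate key violates the primary key
rollback to savepoint before_risky_statement;  -- undo only the failed statement
insert into demo values (2);                   -- the transaction is still usable
commit;
```

The savepoint name is arbitrary; what matters is that the rollback targets the savepoint, not the whole transaction.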

With an interactive user interface, like psql, it might be preferable to create an implicit savepoint before each statement, and this is possible with ON_ERROR_ROLLBACK. Here is an example:

postgres=# drop table demo;
DROP TABLE

postgres=# create table demo ( id int primary key );
CREATE TABLE

postgres=# \set ON_ERROR_ROLLBACK on

postgres=# begin transaction;
BEGIN

postgres=*# insert into demo values (1);
INSERT 0 1

postgres=*# insert into demo values (1);
ERROR:  duplicate key value violates unique constraint "demo_pkey"
DETAIL:  Key (id)=(1) already exists.

postgres=*# \! psql -c "select pid, application_name, state from pg_stat_activity where wait_event='ClientRead'"  

  pid  | application_name |        state
-------+------------------+---------------------
 66461 | psql             | idle in transaction
(1 row)

postgres=*# insert into demo values (2);
INSERT 0 1

postgres=*# commit;
COMMIT

In this interactive transaction, with ON_ERROR_ROLLBACK set to on, I was able to continue with another value after learning that the one I tried to insert was a duplicate.

When managing exceptions in your application, such as implementing retry logic for serializable errors, consider creating a savepoint before executing a statement. This allows you to continue with the same transaction if an exception is caught. However, be cautious as it is not always the right solution. In cases of deadlocks, one transaction must abort to release its locks. The scope of rollback on errors depends on what has been executed before, so it makes sense that the application controls it rather than relying on defaults.

In interactive usage with psql, setting ON_ERROR_ROLLBACK is advisable to prevent a simple typo from rolling back all your previous work. It is unnecessary if you rely on autocommit and never start an explicit transaction, but running interactive commands without the ability to verify outcomes before committing changes is not recommended anyway.

April 27, 2025

Index Only Scan on JSON Documents, with Covering and Multi-Key Indexes in MongoDB

Storing JSON documents in a SQL database does not make it a document database. The strength of any database lies in its indexing capabilities, and SQL databases, even with JSON datatypes, lack the flexibility of document databases, particularly when dealing with arrays in embedded One-to-Many relationships.

In SQL databases, normalized tables are first-class citizens, with single-key indexes optimized for handling equality and range filters (WHERE), sorting (ORDER BY), and projection coverage (SELECT) for individual tables. To avoid joining before filtering, a denormalized model is preferred when those operations involve multiple tables.
While SQL databases can store documents, including arrays for One-to-Many relationships, working with arrays in JSON necessitates the use of inverted indexes like PostgreSQL's GIN, which do not cover range filters, sorting, and projection like regular indexes.

In contrast, MongoDB treats documents as the core of its data model. Its indexing mechanisms naturally extend to handle documents and their arrays, retaining functionality for filtering, sorting, and projection coverage, as regular indexes can handle multiple keys per document.

In previous posts, we examined how a multi-key index supports sort operations. Now, let's explore the conditions under which a query projection is covered by the index, eliminating the need to fetch the document from the collection.

Here is the table of contents of the tests:

  • With default projection: IXSCAN ➤ FETCH
  • Partial projection: IXSCAN ➤ FETCH ➤ PROJECTION_SIMPLE
  • Full projection: IXSCAN ➤ PROJECTION_COVERED
  • Query on a single-key entry: IXSCAN ➤ FETCH ➤ PROJECTION_SIMPLE
  • Query on a multi-key entry: IXSCAN ➤ FETCH ➤ PROJECTION_SIMPLE
  • Projection of "_id" : IXSCAN ➤ FETCH ➤ PROJECTION_SIMPLE
  • Projection of "$recordId" : IXSCAN ➤ PROJECTION_DEFAULT

Here is a collection of friends, with their first name, last name, and phone number.

db.friends.insertMany([
  { firstName: "Rachel", lastName: "Green", phoneNumber: "555-1234" },
  { firstName: "Ross", lastName: "Geller", phoneNumber: "555-2345" },
  { firstName: "Monica", lastName: "Geller", phoneNumber: "555-3456" },
  { firstName: "Chandler", lastName: "Bing", phoneNumber: "555-4567" },
  { firstName: "Joey", lastName: "Tribbiani", phoneNumber: "555-6789" },
  { firstName: "Janice", lastName: "Hosenstein", phoneNumber: "555-7890" },
  { firstName: "Gunther", lastName: "Centralperk", phoneNumber: "555-8901" },
  { firstName: "Carol", lastName: "Willick", phoneNumber: "555-9012" },
  { firstName: "Susan", lastName: "Bunch", phoneNumber: "555-0123" },
  { firstName: "Mike", lastName: "Hannigan", phoneNumber: "555-1123" },
  { firstName: "Emily", lastName: "Waltham", phoneNumber: "555-2234" }
])

In any database, relational or document, implementing new use cases often requires an index for effective access patterns. For instance, for a reverse phone directory, I create an index where the key starts with the phone number. I add the names to the key in order to allow for index-only scans to benefit from the O(log n) scalability of B-Tree indexes:

db.friends.createIndex(
 { phoneNumber:1, firstName:1, lastName:1 } 
)

To confirm that an index-only scan is occurring, I examine the execution plan, looking for PROJECTION_COVERED rather than FETCH.

With default projection: IXSCAN ➤ FETCH

Due to its flexible schema, MongoDB cannot assume that all fields in every document within a collection are covered by the index. As a result, it must fetch the entire document:

db.friends.find( 
 { phoneNumber:"555-6789" } 
).explain('executionStats').executionStats


{
  executionSuccess: true,
  nReturned: 1,
  executionTimeMillis: 0,
  totalKeysExamined: 1,
  totalDocsExamined: 1,
  executionStages: {
    isCached: false,
    stage: 'FETCH',
    nReturned: 1,
    executionTimeMillisEstimate: 0,
    works: 2,
    advanced: 1,
    needTime: 0,
    needYield: 0,
    saveState: 0,
    restoreState: 0,
    isEOF: 1,
    docsExamined: 1,
    alreadyHasObj: 0,
    inputStage: {
      stage: 'IXSCAN',
      nReturned: 1,
      executionTimeMillisEstimate: 0,
      works: 2,
      advanced: 1,
      needTime: 0,
      needYield: 0,
      saveState: 0,
      restoreState: 0,
      isEOF: 1,
      keyPattern: { phoneNumber: 1, firstName: 1, lastName: 1 },
      indexName: 'phoneNumber_1_firstName_1_lastName_1',
      isMultiKey: false,
      multiKeyPaths: { phoneNumber: [], firstName: [], lastName: [] },
      isUnique: false,
      isSparse: false,
      isPartial: false,
      indexVersion: 2,
      direction: 'forward',
      indexBounds: {
        phoneNumber: [ '["555-6789", "555-6789"]' ],
        firstName: [ '[MinKey, MaxKey]' ],
        lastName: [ '[MinKey, MaxKey]' ]
      },
      keysExamined: 1,
      seeks: 1,
      dupsTested: 0,
      dupsDropped: 0
    }
  }
}

Looking at the result, I can see the "_id" which is stored in the document:

db.friends.find( 
 { phoneNumber:"555-6789" } 
)

[
  {
    _id: ObjectId('680d46a1672e2e146dd4b0c6'),
    firstName: 'Joey',
    lastName: 'Tribbiani',
    phoneNumber: '555-6789'
  }
]

I can remove it from the projection as I don't need it.

Partial projection: IXSCAN ➤ FETCH ➤ PROJECTION_SIMPLE

I add a projection to exclude the "_id" from the result, but it doesn't remove the FETCH that gets the document with all fields:

db.friends.find( 
 { phoneNumber:"555-6789" } 
 , { "_id":0 }
).explain('executionStats').executionStats


{
  executionSuccess: true,
  nReturned: 1,
  executionTimeMillis: 0,
  totalKeysExamined: 1,
  totalDocsExamined: 1,
  executionStages: {
    isCached: false,
    stage: 'PROJECTION_SIMPLE',
    nReturned: 1,
    executionTimeMillisEstimate: 0,
    works: 2,
    advanced: 1,
    needTime: 0,
    needYield: 0,
    saveState: 0,
    restoreState: 0,
    isEOF: 1,
    transformBy: { _id: 0 },
    inputStage: {
      stage: 'FETCH',
      nReturned: 1,
      executionTimeMillisEstimate: 0,
      works: 2,
      advanced: 1,
      needTime: 0,
      needYield: 0,
      saveState: 0,
      restoreState: 0,
      isEOF: 1,
      docsExamined: 1,
      alreadyHasObj: 0,
      inputStage: {
        stage: 'IXSCAN',
        nReturned: 1,
        executionTimeMillisEstimate: 0,
        works: 2,
        advanced: 1,
        needTime: 0,
        needYield: 0,
        saveState: 0,
        restoreState: 0,
        isEOF: 1,
        keyPattern: { phoneNumber: 1, firstName: 1, lastName: 1 },
        indexName: 'phoneNumber_1_firstName_1_lastName_1',
        isMultiKey: false,
        multiKeyPaths: { phoneNumber: [], firstName: [], lastName: [] },
        isUnique: false,
        isSparse: false,
        isPartial: false,
        indexVersion: 2,
        direction: 'forward',
        indexBounds: {
          phoneNumber: [ '["555-6789", "555-6789"]' ],
          firstName: [ '[MinKey, MaxKey]' ],
          lastName: [ '[MinKey, MaxKey]' ]
        },
        keysExamined: 1,
        seeks: 1,
        dupsTested: 0,
        dupsDropped: 0
      }
    }
  }
}

Even if I know that my documents have no other fields, the query planner doesn't know it and must plan to get the document.

Full projection: IXSCAN ➤ PROJECTION_COVERED

When the projection declares all fields, and they are in the index key, there's no need to fetch the document as the projection is covered:

db.friends.find( 
 { phoneNumber:"555-6789" } 
 , { "_id":0 , firstName:1, lastName:1, phoneNumber:1 }
).explain('executionStats').executionStats

{
  executionSuccess: true,
  nReturned: 1,
  executionTimeMillis: 0,
  totalKeysExamined: 1,
  totalDocsExamined: 0,
  executionStages: {
    isCached: false,
    stage: 'PROJECTION_COVERED',
    nReturned: 1,
    executionTimeMillisEstimate: 0,
    works: 2,
    advanced: 1,
    needTime: 0,
    needYield: 0,
    saveState: 0,
    restoreState: 0,
    isEOF: 1,
    transformBy: { _id: 0, firstName: 1, lastName: 1, phoneNumber: 1 },
    inputStage: {
      stage: 'IXSCAN',
      nReturned: 1,
      executionTimeMillisEstimate: 0,
      works: 2,
      advanced: 1,
      needTime: 0,
      needYield: 0,
      saveState: 0,
      restoreState: 0,
      isEOF: 1,
      keyPattern: { phoneNumber: 1, firstName: 1, lastName: 1 },
      indexName: 'phoneNumber_1_firstName_1_lastName_1',
      isMultiKey: false,
      multiKeyPaths: { phoneNumber: [], firstName: [], lastName: [] },
      isUnique: false,
      isSparse: false,
      isPartial: false,
      indexVersion: 2,
      direction: 'forward',
      indexBounds: {
        phoneNumber: [ '["555-6789", "555-6789"]' ],
        firstName: [ '[MinKey, MaxKey]' ],
        lastName: [ '[MinKey, MaxKey]' ]
      },
      keysExamined: 1,
      seeks: 1,
      dupsTested: 0,
      dupsDropped: 0
    }
  }
}

Such a plan is an index-only scan, optimal as it doesn't need to read the documents.

Adding an array instead of a scalar value

Now that we have examined how a query is covered by a single-key index, where each document has a unique index entry, let's explore the implications of a multi-key index. In MongoDB, a field can contain a single value in one document and an array of values in another. I add such a document, where one of the friends has three phone numbers:

db.friends.insertOne({
  firstName: "Phoebe",
  lastName: "Buffay",
  phoneNumber: ["555-3344", "555-4455", "555-5566"]
})

We refer to the index as a multi-key index, but in reality, it remains the same index in MongoDB. The distinction lies in its capacity to hold multiple entries per document, rather than solely single-key entries.

Query on a single-key entry: IXSCAN ➤ FETCH ➤ PROJECTION_SIMPLE

When I query the same single-key document as before, the index is now flagged as multi-key, and the projection is no longer covered: because "phoneNumber" may hold an array in some documents, the document must be fetched even when its value here is a scalar:

db.friends.find( 
 { phoneNumber:"555-6789" } 
 , { "_id":0 , firstName:1, lastName:1, phoneNumber:1 }
).explain('executionStats').executionStats

{
  executionSuccess: true,
  nReturned: 1,
  executionTimeMillis: 0,
  totalKeysExamined: 1,
  totalDocsExamined: 1,
  executionStages: {
    isCached: false,
    stage: 'PROJECTION_SIMPLE',
    nReturned: 1,
    executionTimeMillisEstimate: 0,
    works: 2,
    advanced: 1,
    needTime: 0,
    needYield: 0,
    saveState: 0,
    restoreState: 0,
    isEOF: 1,
    transformBy: { _id: 0, firstName: 1, lastName: 1, phoneNumber: 1 },
    inputStage: {
      stage: 'FETCH',
      nReturned: 1,
      executionTimeMillisEstimate: 0,
      works: 2,
      advanced: 1,
      needTime: 0,
      needYield: 0,
      saveState: 0,
      restoreState: 0,
      isEOF: 1,
      docsExamined: 1,
      alreadyHasObj: 0,
      inputStage: {
        stage: 'IXSCAN',
        nReturned: 1,
        executionTimeMillisEstimate: 0,
        works: 2,
        advanced: 1,
        needTime: 0,
        needYield: 0,
        saveState: 0,
        restoreState: 0,
        isEOF: 1,
        keyPattern: { phoneNumber: 1, firstName: 1, lastName: 1 },
        indexName: 'phoneNumber_1_firstName_1_lastName_1',
        isMultiKey: true,
        multiKeyPaths: { phoneNumber: [ 'phoneNumber' ], firstName: [], lastName: [] },
        isUnique: false,
        isSparse: false,
        isPartial: false,
        indexVersion: 2,
        direction: 'forward',
        indexBounds: {
          phoneNumber: [ '["555-6789", "555-6789"]' ],
          firstName: [ '[MinKey, MaxKey]' ],
          lastName: [ '[MinKey, MaxKey]' ]
        },
        keysExamined: 1,
        seeks: 1,
        dupsTested: 1,
        dupsDropped: 0
      }
    }
  }
}

A key advantage of MongoDB's flexible document model is that changes in structure, as the business evolves, do not impact existing documents. This is more agile than SQL databases where changing a One-to-One relationship to a One-to-Many requires complete refactoring of the model and extensive non-regression testing.

Query on a multi-key entry: IXSCAN ➤ FETCH ➤ PROJECTION_SIMPLE

There's a difference when I query the document with an array of values, visible with isMultiKey: true in the IXSCAN, and a FETCH stage:

db.friends.find( 
 { phoneNumber:"555-4455" } 
 , { "_id":0 , firstName:1, lastName:1, phoneNumber:1 }
).explain('executionStats').executionStats

{
  executionSuccess: true,
  nReturned: 1,
  executionTimeMillis: 0,
  totalKeysExamined: 1,
  totalDocsExamined: 1,
  executionStages: {
    isCached: false,
    stage: 'PROJECTION_SIMPLE',
    nReturned: 1,
    executionTimeMillisEstimate: 0,
    works: 2,
    advanced: 1,
    needTime: 0,
    needYield: 0,
    saveState: 0,
    restoreState: 0,
    isEOF: 1,
    transformBy: { _id: 0, firstName: 1, lastName: 1, phoneNumber: 1 },
    inputStage: {
      stage: 'FETCH',
      nReturned: 1,
      executionTimeMillisEstimate: 0,
      works: 2,
      advanced: 1,
      needTime: 0,
      needYield: 0,
      saveState: 0,
      restoreState: 0,
      isEOF: 1,
      docsExamined: 1,
      alreadyHasObj: 0,
      inputStage: {
        stage: 'IXSCAN',
        nReturned: 1,
        executionTimeMillisEstimate: 0,
        works: 2,
        advanced: 1,
        needTime: 0,
        needYield: 0,
        saveState: 0,
        restoreState: 0,
        isEOF: 1,
        keyPattern: { phoneNumber: 1, firstName: 1, lastName: 1 },
        indexName: 'phoneNumber_1_firstName_1_lastName_1',
        isMultiKey: true,
        multiKeyPaths: { phoneNumber: [ 'phoneNumber' ], firstName: [], lastName: [] },
        isUnique: false,
        isSparse: false,
        isPartial: false,
        indexVersion: 2,
        direction: 'forward',
        indexBounds: {
          phoneNumber: [ '["555-4455", "555-4455"]' ],
          firstName: [ '[MinKey, MaxKey]' ],
          lastName: [ '[MinKey, MaxKey]' ]
        },
        keysExamined: 1,
        seeks: 1,
        dupsTested: 1,
        dupsDropped: 0
      }
    }
  }
}

Understanding the behavior is simplified by recognizing that there is one index entry for each key, with only one entry being read (keysExamined: 1). However, the projection requires access to all associated values. Even if a single value is used to locate the document, the result must display all relevant values:

db.friends.find( 
 { phoneNumber:"555-4455" } 
 , { "_id":0 , firstName:1, lastName:1, phoneNumber:1 }
)

[
  {
    firstName: 'Phoebe',
    lastName: 'Buffay',
    phoneNumber: [ '555-3344', '555-4455', '555-5566' ]
  }
]

Index Only Scan on JSON Documents, with Covering and Multi-Key Indexes in MongoDB

Storing JSON documents in a SQL database does not make it a document database. The strength of any database lies in its indexing capabilities, and SQL databases, even with JSON datatypes, lack the flexibility of document databases, particularly when dealing with arrays in an embedded One-to-Many relationships.

In SQL databases, normalized tables are first-class citizens, with single-key indexes optimized for handling equality and range filters (WHERE), sorting (ORDER BY), and projection coverage (SELECT) for individual tables. To avoid joining before filtering, a denormalized model is preferred when those operations involve multiple tables.
While SQL databases can store documents, including arrays for One-to-Many relationships, working with arrays in JSON necessitates the use of inverted indexes like PostgreSQL's GIN, which do not cover range filters, sorting, and projection like regular indexes.

In contrast, MongoDB treats documents as the core of its data model. Its indexing mechanisms naturally extend to handle documents and their arrays, retaining functionality for filtering, sorting, and projection coverage, as regular indexes can hangle multiple keys per documents.

In previous posts, we examined how a multi-key index supports sort operations. Now, let's explore the conditions under which a query projection is covered by the index, eliminating the need to fetch the document from the collection.

Here are the execution plans tested:

  • With default projection: IXSCAN ➤ FETCH
  • Partial projection: IXSCAN ➤ FETCH ➤ PROJECTION_SIMPLE
  • Full projection: IXSCAN ➤ PROJECTION_COVERED
  • Query on a single-key entry: IXSCAN ➤ PROJECTION_COVERED
  • Query on a multi-key entry: IXSCAN ➤ FETCH ➤ PROJECTION_SIMPLE
  • Projection of "_id" : IXSCAN ➤ FETCH ➤ PROJECTION_SIMPLE
  • Projection of "$recordId" : IXSCAN ➤ PROJECTION_DEFAULT

Here is a collection of friends, with their first name, last name, and phone number.

db.friends.insertMany([
  { firstName: "Rachel", lastName: "Green", phoneNumber: "555-1234" },
  { firstName: "Ross", lastName: "Geller", phoneNumber: "555-2345" },
  { firstName: "Monica", lastName: "Geller", phoneNumber: "555-3456" },
  { firstName: "Chandler", lastName: "Bing", phoneNumber: "555-4567" },
  { firstName: "Joey", lastName: "Tribbiani", phoneNumber: "555-6789" },
  { firstName: "Janice", lastName: "Hosenstein", phoneNumber: "555-7890" },
  { firstName: "Gunther", lastName: "Centralperk", phoneNumber: "555-8901" },
  { firstName: "Carol", lastName: "Willick", phoneNumber: "555-9012" },
  { firstName: "Susan", lastName: "Bunch", phoneNumber: "555-0123" },
  { firstName: "Mike", lastName: "Hannigan", phoneNumber: "555-1123" },
  { firstName: "Emily", lastName: "Waltham", phoneNumber: "555-2234" }
])

In any database, relational or document, implementing new use cases often requires an index for effective access patterns. For instance, for a reverse phone directory, I create an index whose key starts with the phone number. I add the names to the key to allow index-only scans, which benefit from the O(log n) scalability of B-Tree indexes:

db.friends.createIndex(
 { phoneNumber:1, firstName:1, lastName:1 } 
)

To confirm that an index-only scan is occurring, I examine the execution plan and look for PROJECTION_COVERED rather than FETCH.
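As a mental model of what a covered query does, here is a toy sketch in plain JavaScript (not MongoDB internals): when every projected field is part of the index key, the result can be built from the key alone and no document fetch is needed.

```javascript
// Toy model: index entries are (phoneNumber, firstName, lastName) tuples,
// sorted by phoneNumber like the compound index created above.
const indexKeys = [
  ["555-1234", "Rachel", "Green"],
  ["555-6789", "Joey", "Tribbiani"]
];

function coveredLookup(phone) {
  // a real B-Tree would binary-search here in O(log n)
  const key = indexKeys.find(k => k[0] === phone);
  if (!key) return null;
  const [phoneNumber, firstName, lastName] = key;
  // PROJECTION_COVERED: the result comes from the key, with no FETCH
  return { phoneNumber, firstName, lastName };
}

console.log(coveredLookup("555-6789"));
```

This is only an illustration of the covering principle; MongoDB decides at planning time whether the projection can be served from the index key.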

With default projection: IXSCAN ➤ FETCH

Due to its flexible schema, MongoDB cannot assume that all fields in every document within a collection are covered by the index. As a result, it must fetch the entire document:

db.friends.find( 
 { phoneNumber:"555-6789" } 
).explain('executionStats').executionStats


{
  executionSuccess: true,
  nReturned: 1,
  executionTimeMillis: 0,
  totalKeysExamined: 1,
  totalDocsExamined: 1,
  executionStages: {
    isCached: false,
    stage: 'FETCH',
    nReturned: 1,
    executionTimeMillisEstimate: 0,
    works: 2,
    advanced: 1,
    needTime: 0,
    needYield: 0,
    saveState: 0,
    restoreState: 0,
    isEOF: 1,
    docsExamined: 1,
    alreadyHasObj: 0,
    inputStage: {
      stage: 'IXSCAN',
      nReturned: 1,
      executionTimeMillisEstimate: 0,
      works: 2,
      advanced: 1,
      needTime: 0,
      needYield: 0,
      saveState: 0,
      restoreState: 0,
      isEOF: 1,
      keyPattern: { phoneNumber: 1, firstName: 1, lastName: 1 },
      indexName: 'phoneNumber_1_firstName_1_lastName_1',
      isMultiKey: false,
      multiKeyPaths: { phoneNumber: [], firstName: [], lastName: [] },
      isUnique: false,
      isSparse: false,
      isPartial: false,
      indexVersion: 2,
      direction: 'forward',
      indexBounds: {
        phoneNumber: [ '["555-6789", "555-6789"]' ],
        firstName: [ '[MinKey, MaxKey]' ],
        lastName: [ '[MinKey, MaxKey]' ]
      },
      keysExamined: 1,
      seeks: 1,
      dupsTested: 0,
      dupsDropped: 0
    }
  }
}

Looking at the result, I can see the "_id", which is stored in the document:

db.friends.find( 
 { phoneNumber:"555-6789" } 
)

[
  {
    _id: ObjectId('680d46a1672e2e146dd4b0c6'),
    firstName: 'Joey',
    lastName: 'Tribbiani',
    phoneNumber: '555-6789'
  }
]

I can remove it from the projection as I don't need it.

Partial projection: IXSCAN ➤ FETCH ➤ PROJECTION_SIMPLE

I add a projection to exclude the "_id" from the result, but it doesn't remove the FETCH that gets the document with all fields:

db.friends.find( 
 { phoneNumber:"555-6789" } 
 , { "_id":0 }
).explain('executionStats').executionStats


{
  executionSuccess: true,
  nReturned: 1,
  executionTimeMillis: 0,
  totalKeysExamined: 1,
  totalDocsExamined: 1,
  executionStages: {
    isCached: false,
    stage: 'PROJECTION_SIMPLE',
    nReturned: 1,
    executionTimeMillisEstimate: 0,
    works: 2,
    advanced: 1,
    needTime: 0,
    needYield: 0,
    saveState: 0,
    restoreState: 0,
    isEOF: 1,
    transformBy: { _id: 0 },
    inputStage: {
      stage: 'FETCH',
      nReturned: 1,
      executionTimeMillisEstimate: 0,
      works: 2,
      advanced: 1,
      needTime: 0,
      needYield: 0,
      saveState: 0,
      restoreState: 0,
      isEOF: 1,
      docsExamined: 1,
      alreadyHasObj: 0,
      inputStage: {
        stage: 'IXSCAN',
        nReturned: 1,
        executionTimeMillisEstimate: 0,
        works: 2,
        advanced: 1,
        needTime: 0,
        needYield: 0,
        saveState: 0,
        restoreState: 0,
        isEOF: 1,
        keyPattern: { phoneNumber: 1, firstName: 1, lastName: 1 },
        indexName: 'phoneNumber_1_firstName_1_lastName_1',
        isMultiKey: false,
        multiKeyPaths: { phoneNumber: [], firstName: [], lastName: [] },
        isUnique: false,
        isSparse: false,
        isPartial: false,
        indexVersion: 2,
        direction: 'forward',
        indexBounds: {
          phoneNumber: [ '["555-6789", "555-6789"]' ],
          firstName: [ '[MinKey, MaxKey]' ],
          lastName: [ '[MinKey, MaxKey]' ]
        },
        keysExamined: 1,
        seeks: 1,
        dupsTested: 0,
        dupsDropped: 0
      }
    }
  }
}

Even if I know that my documents have no other fields, the query planner doesn't know it and must plan to get the document.

Full projection: IXSCAN ➤ PROJECTION_COVERED

When the projection declares all fields, and they are in the index key, there's no need to fetch the document as the projection is covered:

db.friends.find( 
 { phoneNumber:"555-6789" } 
 , { "_id":0 , firstName:1, lastName:1, phoneNumber:1 }
).explain('executionStats').executionStats

{
  executionSuccess: true,
  nReturned: 1,
  executionTimeMillis: 0,
  totalKeysExamined: 1,
  totalDocsExamined: 0,
  executionStages: {
    isCached: false,
    stage: 'PROJECTION_COVERED',
    nReturned: 1,
    executionTimeMillisEstimate: 0,
    works: 2,
    advanced: 1,
    needTime: 0,
    needYield: 0,
    saveState: 0,
    restoreState: 0,
    isEOF: 1,
    transformBy: { _id: 0, firstName: 1, lastName: 1, phoneNumber: 1 },
    inputStage: {
      stage: 'IXSCAN',
      nReturned: 1,
      executionTimeMillisEstimate: 0,
      works: 2,
      advanced: 1,
      needTime: 0,
      needYield: 0,
      saveState: 0,
      restoreState: 0,
      isEOF: 1,
      keyPattern: { phoneNumber: 1, firstName: 1, lastName: 1 },
      indexName: 'phoneNumber_1_firstName_1_lastName_1',
      isMultiKey: false,
      multiKeyPaths: { phoneNumber: [], firstName: [], lastName: [] },
      isUnique: false,
      isSparse: false,
      isPartial: false,
      indexVersion: 2,
      direction: 'forward',
      indexBounds: {
        phoneNumber: [ '["555-6789", "555-6789"]' ],
        firstName: [ '[MinKey, MaxKey]' ],
        lastName: [ '[MinKey, MaxKey]' ]
      },
      keysExamined: 1,
      seeks: 1,
      dupsTested: 0,
      dupsDropped: 0
    }
  }
}

Such a plan is an index-only scan, which is optimal because it doesn't need to read the documents.

Adding an array instead of a scalar value

Now that we have examined how a query is covered by a single-key index, where each document has a unique index entry, let's explore the implications of a multi-key index. In MongoDB, a field can contain a single value in one document and an array of values in another. I add such a document, where one of the friends has three phone numbers:

db.friends.insertOne({
  firstName: "Phoebe",
  lastName: "Buffay",
  phoneNumber: ["555-3344", "555-4455", "555-5566"]
})

We refer to the index as a multi-key index, but in MongoDB it remains the same index. The distinction is that it can now hold multiple entries per document, rather than a single key entry.
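The entry generation can be sketched with a toy model in plain JavaScript (an illustration, not MongoDB internals): the index stores one (key, recordId) pair per value, so an array field produces one entry per element.

```javascript
// Toy model: build the index entries a single field would contribute
// for one document. A scalar yields one entry, an array one per element.
function indexEntries(doc, recordId, field) {
  const value = doc[field];
  const values = Array.isArray(value) ? value : [value];
  return values.map(key => ({ key, recordId }));
}

const joey = { firstName: "Joey", phoneNumber: "555-6789" };
const phoebe = { firstName: "Phoebe",
                 phoneNumber: ["555-3344", "555-4455", "555-5566"] };

console.log(indexEntries(joey, 1, "phoneNumber").length);   // 1 entry
console.log(indexEntries(phoebe, 2, "phoneNumber").length); // 3 entries
```

All entries for Phoebe's document point to the same record, which is why the scan may need to deduplicate them (dupsTested in the plans below).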

Query on a single-key entry: IXSCAN ➤ FETCH ➤ PROJECTION_SIMPLE

When I query the same single-key document as before, the index is now flagged as multi-key, and the projection is no longer covered: the plan includes a FETCH even though this document holds a single scalar value:

db.friends.find( 
 { phoneNumber:"555-6789" } 
 , { "_id":0 , firstName:1, lastName:1, phoneNumber:1 }
).explain('executionStats').executionStats

{
  executionSuccess: true,
  nReturned: 1,
  executionTimeMillis: 0,
  totalKeysExamined: 1,
  totalDocsExamined: 1,
  executionStages: {
    isCached: false,
    stage: 'PROJECTION_SIMPLE',
    nReturned: 1,
    executionTimeMillisEstimate: 0,
    works: 2,
    advanced: 1,
    needTime: 0,
    needYield: 0,
    saveState: 0,
    restoreState: 0,
    isEOF: 1,
    transformBy: { _id: 0, firstName: 1, lastName: 1, phoneNumber: 1 },
    inputStage: {
      stage: 'FETCH',
      nReturned: 1,
      executionTimeMillisEstimate: 0,
      works: 2,
      advanced: 1,
      needTime: 0,
      needYield: 0,
      saveState: 0,
      restoreState: 0,
      isEOF: 1,
      docsExamined: 1,
      alreadyHasObj: 0,
      inputStage: {
        stage: 'IXSCAN',
        nReturned: 1,
        executionTimeMillisEstimate: 0,
        works: 2,
        advanced: 1,
        needTime: 0,
        needYield: 0,
        saveState: 0,
        restoreState: 0,
        isEOF: 1,
        keyPattern: { phoneNumber: 1, firstName: 1, lastName: 1 },
        indexName: 'phoneNumber_1_firstName_1_lastName_1',
        isMultiKey: true,
        multiKeyPaths: { phoneNumber: [ 'phoneNumber' ], firstName: [], lastName: [] },
        isUnique: false,
        isSparse: false,
        isPartial: false,
        indexVersion: 2,
        direction: 'forward',
        indexBounds: {
          phoneNumber: [ '["555-6789", "555-6789"]' ],
          firstName: [ '[MinKey, MaxKey]' ],
          lastName: [ '[MinKey, MaxKey]' ]
        },
        keysExamined: 1,
        seeks: 1,
        dupsTested: 1,
        dupsDropped: 0
      }
    }
  }
}

A key advantage of MongoDB's flexible document model is that structural changes, as the business evolves, do not impact existing documents. This is more agile than SQL databases, where changing a One-to-One relationship to a One-to-Many requires refactoring the model and extensive non-regression testing.
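One reason this evolution is painless is MongoDB's equality semantics: a match on a field succeeds whether the field holds a scalar or an array containing the value. A small plain-JavaScript sketch of that matching rule (an illustration, not MongoDB's implementation):

```javascript
// Toy model of MongoDB equality matching on a possibly-array field:
// the same query matches a scalar value or any element of an array.
function matchesPhone(doc, number) {
  const v = doc.phoneNumber;
  return Array.isArray(v) ? v.includes(number) : v === number;
}

console.log(matchesPhone({ phoneNumber: "555-6789" }, "555-6789"));               // true
console.log(matchesPhone({ phoneNumber: ["555-3344", "555-4455"] }, "555-4455")); // true
```

The application query stays identical before and after a field becomes an array.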

Query on a multi-key entry: IXSCAN ➤ FETCH ➤ PROJECTION_SIMPLE

When I query the document that stores an array of values, the IXSCAN reports isMultiKey: true and is followed by a FETCH stage:

db.friends.find( 
 { phoneNumber:"555-4455" } 
 , { "_id":0 , firstName:1, lastName:1, phoneNumber:1 }
).explain('executionStats').executionStats

{
  executionSuccess: true,
  nReturned: 1,
  executionTimeMillis: 0,
  totalKeysExamined: 1,
  totalDocsExamined: 1,
  executionStages: {
    isCached: false,
    stage: 'PROJECTION_SIMPLE',
    nReturned: 1,
    executionTimeMillisEstimate: 0,
    works: 2,
    advanced: 1,
    needTime: 0,
    needYield: 0,
    saveState: 0,
    restoreState: 0,
    isEOF: 1,
    transformBy: { _id: 0, firstName: 1, lastName: 1, phoneNumber: 1 },
    inputStage: {
      stage: 'FETCH',
      nReturned: 1,
      executionTimeMillisEstimate: 0,
      works: 2,
      advanced: 1,
      needTime: 0,
      needYield: 0,
      saveState: 0,
      restoreState: 0,
      isEOF: 1,
      docsExamined: 1,
      alreadyHasObj: 0,
      inputStage: {
        stage: 'IXSCAN',
        nReturned: 1,
        executionTimeMillisEstimate: 0,
        works: 2,
        advanced: 1,
        needTime: 0,
        needYield: 0,
        saveState: 0,
        restoreState: 0,
        isEOF: 1,
        keyPattern: { phoneNumber: 1, firstName: 1, lastName: 1 },
        indexName: 'phoneNumber_1_firstName_1_lastName_1',
        isMultiKey: true,
        multiKeyPaths: { phoneNumber: [ 'phoneNumber' ], firstName: [], lastName: [] },
        isUnique: false,
        isSparse: false,
        isPartial: false,
        indexVersion: 2,
        direction: 'forward',
        indexBounds: {
          phoneNumber: [ '["555-4455", "555-4455"]' ],
          firstName: [ '[MinKey, MaxKey]' ],
          lastName: [ '[MinKey, MaxKey]' ]
        },
        keysExamined: 1,
        seeks: 1,
        dupsTested: 1,
        dupsDropped: 0
      }
    }
  }
}

The behavior is easier to understand once you recognize that there is one index entry per value, and only one entry is read (keysExamined: 1). However, the projection requires all the values of the array: even though a single value was enough to locate the document, the result must return the complete phoneNumber array:

db.friends.find( 
 { phoneNumber:"555-4455" } 
 , { "_id":0 , firstName:1, lastName:1, phoneNumber:1 }
)

[
  {
    firstName: 'Phoebe',
    lastName: 'Buffay',
    phoneNumber: [ '555-3344', '555-4455', '555-5566' ]
  }
]
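The two-step plan can be sketched with a toy lookup in plain JavaScript (a hypothetical model, not MongoDB internals): the matching index entry holds only one of the array's values, so returning the complete phoneNumber array requires fetching the document by its record id.

```javascript
// Toy model of IXSCAN ➤ FETCH on a multi-key index.
const docs = new Map([
  [2, { firstName: "Phoebe", lastName: "Buffay",
        phoneNumber: ["555-3344", "555-4455", "555-5566"] }]
]);
const multiKeyIndex = [            // one entry per array element
  { key: "555-3344", recordId: 2 },
  { key: "555-4455", recordId: 2 },
  { key: "555-5566", recordId: 2 }
];

function findByPhone(number) {
  const entry = multiKeyIndex.find(e => e.key === number); // IXSCAN: one key examined
  if (!entry) return null;
  return docs.get(entry.recordId);                         // FETCH: full document
}

console.log(findByPhone("555-4455").phoneNumber); // the complete array
```

The index entry alone cannot reconstruct the sibling values, which is why the FETCH stage appears even when keysExamined is 1.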

April 25, 2025

pgvector: The Critical PostgreSQL Component for Your Enterprise AI Strategy

You’re likely racing to enhance your applications with more intelligent, data-driven capabilities, whether through AI-powered models (which have moved into “must implement now!” territory), advanced search functions, real-time fraud detection, or geospatial analysis. As these demands grow, you face a significant challenge: efficiently storing, managing, and querying high-dimensional vector data within your existing database infrastructure. […]

April 24, 2025

What It Takes to Be PostgreSQL Compatible

We, and many other database enthusiasts, are big fans of PostgreSQL. Even though we built a database system from scratch, we believe there are many good reasons for using PostgreSQL. In fact, we like PostgreSQL so much that we made sure to build CedarDB to be compatible with PostgreSQL.

Because of PostgreSQL’s popularity, we were not the first to develop a PostgreSQL compatible database system. CMU’s “Database of Databases” lists over 40 database systems that claim to be PostgreSQL compatible. Among them you can find database systems from large cloud vendors such as AlloyDB from Google or Aurora from AWS.

April 23, 2025

Set Up Transactional Email in Convex with Bluefox and AWS SES

Learn how to integrate Bluefox—a modern developer-first email API built on AWS SES—with Convex, the reactive backend database. This guide covers everything from sending transactional and triggered emails to setting up real-time webhooks and preparing for production. You’ll walk away knowing how to configure your environment, build reusable email clients, handle errors safely, and manage multiple deployment environments with best practices for SES authentication, bounce handling, and email tracking.

Does FCV Have Any Impact on MongoDB Performance?

It is generally known and established that different database versions provide different feature sets and performance characteristics. Along with the binary version, MongoDB has a property called Feature Compatibility Version (FCV), which allows administrators to run a binary version without enabling any features that are incompatible with the previous one. This property is generally helpful […]