a curated list of database news from authoritative sources

June 13, 2025

Percona Earns Kubernetes Certified Services Provider Status for All Three Major Open Source Databases

As a member of the Kubernetes Certified Services Provider program, Percona is now part of a select, “pre-qualified tier of vetted service providers who have deep experience helping enterprises successfully adopt Kubernetes…” Kubernetes (frequently abbreviated as K8s) has come a long way over the past decade. From being used almost exclusively for orchestrating stateless container […]

June 11, 2025

Postgres 18 beta1: small server, IO-bound Insert Benchmark (v2)

This is my second attempt at IO-bound Insert Benchmark results with a small server. The first attempt is here and has been deprecated because sloppy programming by me meant the benchmark client was creating too many connections and that hurt results in some cases for Postgres 18 beta1.

There might be regressions from 17.5 to 18 beta1

  • QPS decreases by ~5% and CPU increases by ~5% on the l.i2 (write-only) step
  • QPS decreases by <= 2% and CPU increases by ~2% on the qr* (range query) steps
There might be regressions from 14.0 to 18 beta1
  • QPS decreases by ~6% and ~18% on the write-heavy steps (l.i1, l.i2)

Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions 14.0, 14.18, 15.0, 15.13, 16.0, 16.9, 17.0, 17.5 and 18 beta1.

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM and one NVMe device for the database. The OS has been updated to Ubuntu 24.04. More details on it are here.

For Postgres versions 14.0 through 17.5 the configuration files are in the pg* subdirectories here with the name conf.diff.cx10a_c8r32. For Postgres 18 beta1 I used 3 variations, which are here (a sketch of the settings they change follows the list):
  • conf.diff.cx10b_c8r32
    • uses io_method='sync' to match Postgres 17 behavior
  • conf.diff.cx10c_c8r32
    • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
  • conf.diff.cx10d_c8r32
    • uses io_method='io_uring' to do async IO via io_uring
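
The conf.diff files themselves are not reproduced here. As a rough sketch only, based on the settings named above (everything beyond io_method and io_workers is an assumption), the three 18 beta1 variations differ along these lines:

# cx10b_c8r32 -- synchronous IO, matches Postgres 17 behavior
io_method = 'sync'

# cx10c_c8r32 -- async IO via a worker pool; 16 later proved to be too large
io_method = 'worker'
io_workers = 16

# cx10d_c8r32 -- async IO via io_uring
io_method = 'io_uring'
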
The Benchmark

The benchmark is explained here and is run with 1 client and 1 table with 800M rows. I provide two performance reports:
  • one to compare Postgres 14.0 through 18 beta1, all using synchronous IO
  • one to compare Postgres 17.5 with 18 beta1 using 3 configurations for 18 beta1 -- one for each of io_method= sync, workers and io_uring.
The benchmark steps are (a rough SQL sketch of the core operations follows the list):

  • l.i0
    • insert 20 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 4M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 1M rows are inserted and deleted per table.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
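
The Insert Benchmark's real schema and client are not shown in this post. Purely as an illustration of the steps above (the table and column names here are invented), the core operations look roughly like this in SQL:

-- l.i0: load in PK order; the table has a PK index and no secondary indexes
create table t (pk bigint primary key, a int, b int, c int, pad varchar(100));
insert into t values (1, 10, 20, 30, 'x');   -- repeated with an ascending pk
-- l.x: create 3 secondary indexes
create index t_a on t(a);
create index t_b on t(b);
create index t_c on t(c);
-- l.i1, l.i2: concurrent inserts and deletes, 50 or 5 rows per transaction
delete from t where pk < 100;
-- qr*: short range queries on a covering secondary index; qp*: point queries on the PK
select a, pk from t where a between 100 and 110;
select * from t where pk = 12345;
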
Results: overview

The performance report is here for Postgres 14 through 18 and here for Postgres 18 configurations.

The summary sections (here, here and here) have 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for the benchmark steps. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS (rQPS) to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result for the base version (Postgres 14.0 in the first report, 17.5 in the second).

When rQPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.97, green for >= 1.03 and grey for values between 0.98 and 1.02.
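
Restating the rQPS definition above as a formula, with made-up throughput numbers for the example:

\[
\text{rQPS} = \frac{\text{QPS}_{me}}{\text{QPS}_{base}}, \qquad \text{e.g. } \frac{2700}{3000} = 0.90 \Rightarrow \text{a 10\% regression}
\]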

Results: Postgres 14.0 through 18 beta1

The performance summary is here

See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 14.0 is the base version and that is compared with more recent Postgres versions. The results here are similar to what I reported prior to fixing the too many connections problem in the benchmark client.

For 14.0 through 18 beta1, QPS on ...
  • the initial load (l.i0)
    • Performance is stable across versions
    • 18 beta1 and 17.5 have similar performance
    • rQPS for (17.5, 18 beta1 with io_method=sync) is (1.00, 0.99)
  • create index (l.x)
    • ~10% faster starting in 15.0
    • 18 beta1 and 17.5 have similar performance
    • rQPS for (17.5, 18 beta1 with io_method=sync) is (1.11, 1.12)
  • first write-only step (l.i1)
    • Performance decreases ~7% from version 16.9 to 17.0. CPU overhead (see cpupq here) increases by ~5% in 17.0.
    • 18 beta1 and 17.5 have similar performance
    • rQPS for (17.5, 18 beta1 with io_method=sync) is (0.93, 0.94)
  • second write-only step (l.i2)
    • Performance decreases ~6% in 15.0, ~8% in 17.0 and then ~5% in 18 beta1. CPU overhead (see cpupq here) increases ~5%, ~6% and ~5% in 15.0, 17.0 and 18 beta1. Of all benchmark steps, this has the largest perf regression from 14.0 through 18 beta1, which is ~20%.
    • 18 beta1 is ~4% slower than 17.5
    • rQPS for (17.5, 18 beta1 with io_method=sync) is (0.86, 0.82)
  • range query steps (qr100, qr500, qr1000)
    • 18 beta1 and 17.5 have similar performance, but 18 beta1 is slightly slower
    • rQPS for (17.5, 18 beta1 with io_method=sync) is (1.00, 0.99) for qr100, (0.97, 0.98) for qr500 and (0.97, 0.95) for qr1000. The issue is new CPU overhead, see cpupq here.
  • point query steps (qp100, qp500, qp1000)
    • 18 beta1 and 17.5 have similar performance but 18 beta1 is slightly slower
    • rQPS for (17.5, 18 beta1 with io_method=sync) is (1.00, 0.98) for qp100, (0.99, 0.98) for qp500 and (0.97, 0.96) for qp1000. The issue is new CPU overhead, see cpupq here.
Results: Postgres 17.5 vs 18 beta1

The performance summary is here.

See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 17.5 is the base version and that is compared with results from 18 beta1 using the three configurations explained above:
  • x10b with io_method=sync
  • x10c with io_method=worker and io_workers=16
  • x10d with io_method=io_uring
The summary is:
  • initial load step (l.i0)
    • rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
  • create index step (l.x)
    • rQPS for (x10b, x10c, x10d) was (1.01, 1.02, 1.02)
  • first write-heavy step (l.i1)
    • for l.i1 the rQPS for (x10b, x10c, x10d) was (1.00, 0.99, 1.01)
  • second write-heavy step (l.i2)
    • for l.i2 the rQPS for (x10b, x10c, x10d) was (0.96, 0.93, 0.94)
    • CPU overhead (see cpupq here) increases by ~5% in 18 beta1
  • range query steps (qr100, qr500, qr1000)
    • for qr100 the rQPS for (x10b, x10c, x10d) was (1.00, 0.99, 0.99)
    • for qr500 the rQPS for (x10b, x10c, x10d) was (1.00, 0.97, 0.99)
    • for qr1000 the rQPS for (x10b, x10c, x10d) was (0.99, 0.98, 0.97)
    • CPU overhead (see cpupq here, here and here) increases by ~2% in 18 beta1
  • point query steps (qp100, qp500, qp1000)
    • for qp100 the rQPS for (x10b, x10c, x10d) was (0.98, 0.99, 0.99)
    • for qp500 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
    • for qp1000 the rQPS for (x10b, x10c, x10d) was (0.99, 0.99, 0.99)

June 10, 2025

Percona Named to DBTA’s 2025 List of 100 Companies That Matter Most in Data

We’re proud to share that Percona has been named to the 2025 DBTA 100—Database Trends and Applications’ annual list of “The Companies That Matter Most in Data.” This recognition highlights our success in empowering organizations to build, scale, and optimize open source database environments for today’s most demanding applications. At Percona, we believe open source […]

June 09, 2025

Using the PostgreSQL extension tds_fdw to validate data migration from SQL Server to Amazon Aurora PostgreSQL

Data validation is an important process during data migrations, helping to verify that the migrated data matches the source data. In this post, we present alternatives you can use for data validation when dealing with tables that lack primary keys. We discuss alternative approaches, best practices, and potential solutions to make sure that your data migration process remains thorough and reliable, even in the absence of traditional primary key-based validation methods. Specifically, we demonstrate how to perform data validation after a full load migration from SQL Server to Amazon Aurora PostgreSQL-Compatible Edition using the PostgreSQL tds_fdw extension.
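
The AWS post's actual setup is not reproduced here. As a rough, hypothetical sketch of the approach (the server address, credentials, column list and table names are invented), the foreign table and a simple row-count comparison might look like this:

-- expose the SQL Server source table in Aurora PostgreSQL via tds_fdw
CREATE EXTENSION IF NOT EXISTS tds_fdw;

CREATE SERVER sqlserver_src FOREIGN DATA WRAPPER tds_fdw
  OPTIONS (servername 'mssql.example.internal', port '1433', database 'hr');

CREATE USER MAPPING FOR CURRENT_USER SERVER sqlserver_src
  OPTIONS (username 'validation_user', password 'secret');

CREATE FOREIGN TABLE src_employee (id int, name text, salary int)
  SERVER sqlserver_src OPTIONS (table_name 'dbo.employee');

-- basic validation: compare row counts (column aggregates or checksums follow the same pattern)
SELECT (SELECT count(*) FROM src_employee) AS source_rows,
       (SELECT count(*) FROM employee)     AS target_rows;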

Comparison of JOINS 👉🏻 aggregation pipeline and CTEs

In a blog post titled Comparison of JOINS: MongoDB vs. PostgreSQL on EDB's site, Michael Stonebraker runs a join with $lookup in an aggregation pipeline to compare with a PostgreSQL join. I take this opportunity to discuss some effective design practices for working with a document database.

A common mistake vendors make is publishing comparisons between one database, where the author of the article is an expert, and another database they are unfamiliar with and unwilling to learn about. This leads to biased conclusions, as they contrast best practices from one database with a database where the design is incorrect. In the EDB blog post, the following aggregation pipeline is used to claim that "JOINS are Brittle in MongoDB" - do you spot the problem?

db.employee.aggregate([
  {
    $lookup: {
      from: "department",
      localField: "department",
      foreignField: "_id",
      as: "dept"
    }
  },
  {
    $unwind: "$dept"
  },
  {
    $group: {
      "_id": "$dept.dname",
      "salary": { "$sum": "$salary" },
    }
  }
]);

This query reads the "employee" collection, which contains a "department" field referencing the _id field in the "department" collection. It performs a lookup into the "department" collection to retrieve additional information about each employee's department, specifically the department name ("dname"). After unwinding the "dept" array (from the lookup; in our case there's a single value because it's a many-to-one relationship, but MongoDB can embed a many-to-many as well), it groups the data by department name ("dept.dname") and calculates the total salary of employees for each department by summing up the salary field.

In a document model, the department name should be included as a field within the employee document instead of using a reference. While one might argue that normalizing it to another collection simplifies updates, this operation is infrequent. Additionally, any department renaming is likely part of a broader enterprise reorganization, which would prompt updates to the employee collection regardless.
The model does not account for departments without employees, as it is inherently tied to a specific business domain, HR in this case. It focuses on employees and does not share sensitive information like salary details with other domains. In this bounded context, the department information is an employee attribute.
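
As a purely illustrative example (the field names and values are invented, not taken from the EDB post), an employee document in this bounded context can embed the department name, or carry an extended reference with both the _id and the name:

db.employee.insertOne({
  _id: 1,
  ename: "Smith",
  salary: 40000,
  // extended reference: keep the department name with the employee,
  // along with the department _id if a reference is still wanted
  department: { _id: 7, dname: "Research" }
});

With this shape, the salary-per-department report is a single $group on "$department.dname", with no lookup at all.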

But let's say that we need to avoid duplication and normalize it with a department relation that implements the dependency between the department surrogate key "_id" and the department name. Still, the department name must be unique, as it is the natural key, and this can be enforced with a unique index ({ dname: 1 }, { unique: true }). Knowing this, grouping by "_id" or grouping by "dname" is the same. There's no need to look up the name for each employee. In an aggregation pipeline, it is better to aggregate first and lookup after, once per department rather than once per employee.
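
For reference, the unique index mentioned above is created like this (collection and field names as used in the post):

db.department.createIndex({ dname: 1 }, { unique: true });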

Here is the correct aggregation pipeline:

db.employee.aggregate([  
  {    
    $group: {  
      _id: "$department",  
      salary: { $sum: "$salary" }  
    }  
  },  
  {  
    $lookup: {  
      from: "department",  
      localField: "_id",  
      foreignField: "_id",  
      as: "dept"  
    }  
  },  
  {  
    $unwind: "$dept"  
  },  
  {  
    $project: {  
      _id: "$dept.dname",  
      salary: 1  
    }  
  }  
]);  

For an expert in relational theory, coding the order of execution might seem surprising, as RDBMS are built to optimize access paths from declarative queries on a logical model. However, similar considerations apply to SQL databases, where those concerns are usually deferred until production, when data grows, and you need to analyze the execution plan and "tune" the query.

For example, the article compared with the following query in PostgreSQL:

create table department (
    dname varchar    primary key
,   floor            integer
,   budget           integer
);

create table employee (
    ename varchar
,   age              integer
,   salary           integer
,   department       varchar references department(dname)
);

select dname, sum(salary)
from employee as e
            inner join
            department as d
            on e.department = d.dname
group by dname
;

There are two main differences between the queries they use in MongoDB and PostgreSQL. First, PostgreSQL utilizes a natural key instead of a surrogate key, which simplifies joins but does not resolve the issue of cascading updates to the department names. This is equivalent to embedding or an extended reference in MongoDB. While the author may have ignored it, MongoDB, like SQL, can reference and join on columns other than the generated "_id", and secondary indexes make that fast.
Second, MongoDB performs a left outer join for lookups, because they are lookups, not relational joins. However, the author used an inner join in the PostgreSQL example. Given the data, the outer join makes sense because you don't want to lose an employee's salary just because the department row is missing. PostgreSQL does not optimize this either and executes the join prior to aggregation:

explain (costs off)
select dname, sum(salary)
from employee as e
            left outer join
            department as d
            on e.department = d.dname
group by dname
;

                         QUERY PLAN                          
-------------------------------------------------------------
 HashAggregate
   Group Key: d.dname
   ->  Hash Left Join
         Hash Cond: ((e.department)::text = (d.dname)::text)
         ->  Seq Scan on employee e
         ->  Hash
               ->  Seq Scan on department d

In PostgreSQL, you also need to look at the execution plan and change the query. For example, using the department name from the employee table instead of the one in the department table eliminates the join:

explain (costs off)
select
 department,
 sum(salary)
from employee as e
      left outer join
      department as d
      on e.department = d.dname
group by
 department
;

          QUERY PLAN          
------------------------------
 HashAggregate
   Group Key: e.department
   ->  Seq Scan on employee e

This is not different from MongoDB. In PostgreSQL as well:

  • You must understand the access path.
  • You accept some update complexity to eliminate joins.

The join was eliminated because no column is read from the inner table, which is only possible because the natural key, the department name, was chosen. If you query an additional column from department, like "floor", the query becomes more complex: that column must be added to the GROUP BY clause even though the normalized model doesn't allow more than one floor per department, and the join happens before the aggregation:

explain (costs off)
select 
 department,
 floor,
 sum(salary)
from employee as e
      left outer join
      department as d
      on e.department = d.dname
group by 
 department,
 floor
;
                         QUERY PLAN                          
-------------------------------------------------------------
 HashAggregate
   Group Key: e.department, d.floor
   ->  Hash Left Join
         Hash Cond: ((e.department)::text = (d.dname)::text)
         ->  Seq Scan on employee e
         ->  Hash
               ->  Seq Scan on department d

In PostgreSQL, you can adopt a style similar to the MongoDB aggregation pipeline by declaring stages within a Common Table Expression (CTE) using a WITH clause. This approach executes the GROUP BY before the JOIN, making the code's intent clearer:

explain (costs off)
with employee_agg as (  
  select 
   department,
   sum(salary) as total_salary  
  from employee  
  group by department  
)  
select 
  d.dname as department_name, 
  d.floor as department_floor,
  ea.total_salary as total_salary  
from employee_agg as ea  
left outer join department as d  
on ea.department = d.dname; 

                       QUERY PLAN                       
--------------------------------------------------------
 Hash Right Join
   Hash Cond: ((d.dname)::text = (ea.department)::text)
   ->  Seq Scan on department d
   ->  Hash
         ->  Subquery Scan on ea
               ->  HashAggregate
                     Group Key: employee.department
                     ->  Seq Scan on employee

This method is more efficient as it aggregates before the join. Using Common Table Expressions (CTEs) imitates the MongoDB aggregation pipeline, which provides greater control over data access optimization. Both are high-level languages that enable developers to decompose queries into logical steps effectively.

When writing an SQL query, I prefer to start with aggregations and projections in Common Table Expressions (CTEs) before performing natural joins. This method is valid as long as all projections are clearly defined, ensuring an organized and efficient query structure:

explain (costs off)
with "EmployeesPerDepartment" as (  
  select 
   department     as "DepartmentName",
   sum(salary)    as "TotalSalary"  
  from employee  
  group by department  
),  "Departments" as (
 select 
  dname         as "DepartmentName", 
  floor         as "DepartmentFloor"
 from department
)  select 
  "DepartmentName", 
  "DepartmentFloor",
  "TotalSalary"  
from "EmployeesPerDepartment"
natural left join "Departments"
;
                              QUERY PLAN                               
-----------------------------------------------------------------------
 Hash Right Join
   Hash Cond: ((department.dname)::text = (employee.department)::text)
   ->  Seq Scan on department
   ->  Hash
         ->  HashAggregate
               Group Key: employee.department
               ->  Seq Scan on employee

Because a SQL query returns a single tabular result, it is possible to declare the projection (column aliases) with the final column names first. This eliminates the need for table aliases and complex join clauses. It is also easier to debug, since the intermediate steps can be run on their own.

To conclude, it is true that joins in PostgreSQL are generally faster than lookups in MongoDB. This is because PostgreSQL is designed for normalized schemas, where a single business query can retrieve data from multiple tables, while MongoDB is optimized for document models that align with business domain entities, and adding more complexity to the query planner to optimize joins is not a priority.
In SQL databases, the challenge lies not in executing joins but in the complexity faced by developers when crafting optimal queries. To achieve acceptable response times, SQL databases must utilize multiple join algorithms. This requires the query planner to perform cost-based optimization, which relies heavily on accurate cardinality estimations. As the number of tables to join increases, so does the risk of obtaining a poor execution plan. This complexity, which affects both the developer and the optimizer, can create the perception that joins are slow.

If MongoDB lookups are causing slow queries, consider improving your data model and aggregation pipelines first. Embed the one-to-one or one-to-many that belong to the same business object. Filter and aggregate before joining, and utilize multi-key indexes on well-designed document schemas.
Additionally, be aware that MongoDB's aggregation pipeline includes an optimization phase to reshape the pipeline for enhanced performance.

If you find the aggregation pipeline syntax complex, it's easy to learn and developers like it (other databases like Google BigQuery or DuckDB adopted a similar pipeline approach). It resembles Common Table Expressions in SQL, making it straightforward to test each stage. Additionally, you can use the Atlas UI or Compass to construct it with a wizard and view the output of each stage, as shown in the header of this post. But do not abuse it: the document model should avoid lookups on many documents, and aggregation pipelines should filter (on indexes) and aggregate first.

"Schema Later" considered harmful 👉🏻 schema validation

In a blog post titled "Schema Later" Considered Harmful on EDB's site, Michael Stonebraker demonstrates that inserting junk data can be harmful to queries. While this conclusion is evident, it’s important to note that all databases have some form of schema and a full "schema later" doesn't exist. Otherwise, indexes couldn't be added, and data couldn't be queried and processed.
Once the structure has been established and is used by the application, it is recommended to declare the schema-on-write part in the database in addition to the application code.

A common mistake vendors make is comparing one database, where the author is an expert, to another database they are unfamiliar with and unwilling to learn about. This leads to biased conclusions, as they contrast best practices from one database with a database where the design was incorrect. In the EDB blog post, the schema for PostgreSQL was defined using data types and check constraints. However, nothing similar was done for the example on MongoDB.

In MongoDB, you can begin with a flexible schema defined by your application. Once the structure is established, MongoDB schema validation ensures that there are no unintended changes or improper data types, maintaining the integrity of your data. Although this feature has been available since MongoDB 3.6, released in 2017, it remains overlooked due to persistent myths about NoSQL.

In the EDB blog, they created the PostgreSQL table as:

create table employee (
 name varchar,
 age int4,
 salary int4 check (salary > 0)
);

To compare, they should have created the MongoDB collection as:

db.createCollection("employee", {
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["name", "age", "salary"],
      properties: {
        name:   {  bsonType: "string",            description: "VARCHAR equivalent"  },                   
        age:    {  bsonType: "int",               description: "INT4 equivalent"  },                     
        salary: {  bsonType: "int",  minimum: 0,  description: "CHECK salary > 0 equivalent"                                 
        }
      }
    }
  },
  validationAction: "error" // Strict validation: reject invalid documents
});

With such schema validation, the incorrect inserts are rejected:

db.employee.insertOne ({name : "Stonebraker", age : 45, salary : -99})

MongoServerError: Document failed validation
Additional information: {
  failingDocumentId: ObjectId('6845cfe3f9e37e21a1d4b0c8'),
 ...
        propertyName: 'salary',
            description: 'CHECK salary > 0 equivalent',
            details: [
              {
                operatorName: 'minimum',
                specifiedAs: { minimum: 0 },
                reason: 'comparison failed',
                consideredValue: -99
...


db.employee.insertOne ({name : "Codd", age : "old", salary : 40000})

MongoServerError: Document failed validation
Additional information: {
  failingDocumentId: ObjectId('6845d041f9e37e21a1d4b0c9'),
...
            propertyName: 'age',
            description: 'INT4 equivalent',
            details: [
              {
                operatorName: 'bsonType',
                specifiedAs: { bsonType: 'int' },
                reason: 'type did not match',
                consideredValue: 'old',
                consideredType: 'string'
              }
...

The application receives all information regarding the violation, in JSON that is parsable by the exception handling. Unlike many SQL databases that provide only the constraint name in a text message, this approach avoids exposing parts of the physical data model to the application, enhancing logical data independence.
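
As an illustration (the handler below is hypothetical; mongosh surfaces the server's structured details on the error object), the violation can be inspected programmatically instead of being parsed out of a message:

try {
  db.employee.insertOne({ name: "Codd", age: "old", salary: 40000 });
} catch (e) {
  // the failing rule arrives as structured JSON, not just a constraint name
  printjson(e.errInfo);
}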

Note that the examples are taken from the EDB blog post. You should probably store the date of birth rather than the age (validated as { bsonType: 'date' } and with a range of acceptable dates), and a currency along with the salary (with a sub-object { salary: { amount: 40000, currency: "CHF" } }).

MongoDB schema validation is declarative, and the Atlas UI or Compass can help you start with an existing collection by populating rules from sample data (the screenshot in the header of this post used rule generation from this example).
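
Validation can also be added to a collection that already exists. Here is a minimal sketch with collMod, reusing the validator shown earlier (validationLevel "strict" and validationAction "error" are the defaults, shown for clarity):

db.runCommand({
  collMod: "employee",
  validator: { $jsonSchema: { /* same schema as in createCollection above */ } },
  validationLevel: "strict",     // also check updates to existing documents
  validationAction: "error"      // reject, rather than just warn
});
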
How to Perform a Disaster Recovery Switchover with Patroni for PostgreSQL

Patroni is a Python-based template for managing high availability PostgreSQL clusters. Originally a fork of the Governor project by Compose, Patroni has evolved significantly with many new features and active community development. It supports integration with various Distributed Configuration Stores (DCS) like etcd, Consul, and ZooKeeper, and provides simple setup and robust failover management. This blog […]

June 08, 2025

Postgres 18 beta1: small server, CPU-bound Insert Benchmark (v2)

This is my second attempt at CPU-bound Insert Benchmark results with a small server. The first attempt is here and has been deprecated because sloppy programming by me meant the benchmark client was creating too many connections and that hurt results in some cases for Postgres 18 beta1.

tl;dr

  • Performance between 17.5 and 18 beta1 is mostly similar on read-heavy steps
  • 18 beta1 might have small regressions from new CPU overheads on write-heavy steps

Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions 14.0, 14.18, 15.0, 15.13, 16.0, 16.9, 17.0, 17.5 and 18 beta1.

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM and one NVMe device for the database. The OS has been updated to Ubuntu 24.04 -- I used 22.04 prior to that. More details on it are here.

For Postgres versions 14.0 through 17.5 the configuration files are in the pg* subdirectories here with the name conf.diff.cx10a_c8r32. For Postgres 18 beta1 the configuration files are here and I used 3 variations, which are here:
  • conf.diff.cx10b_c8r32
    • uses io_method='sync' to match Postgres 17 behavior
  • conf.diff.cx10c_c8r32
    • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
  • conf.diff.cx10d_c8r32
    • uses io_method='io_uring' to do async IO via io_uring
The Benchmark

The benchmark is explained here and is run with 1 client and 1 table with 20M rows. I provide two performance reports:
  • one to compare Postgres 14.0 through 18 beta1, all using synchronous IO
  • one to compare Postgres 17.5 with 18 beta1 using 3 configurations for 18 beta1 -- one for each of io_method= sync, workers and io_uring.
The benchmark steps are:

  • l.i0
    • insert 20 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 40M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 10M rows are inserted and deleted per table.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
Results: overview

The performance report is here for Postgres 14 through 18 and here for Postgres 18 configurations.

The summary sections (here and here) have 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for the benchmark steps that have background inserts; all systems sustained the target rates. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS (rQPS) to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result for the base version (Postgres 14.0 in the first report, 17.5 in the second).

When rQPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.97, green for >= 1.03 and grey for values between 0.98 and 1.02.

Results: Postgres 14.0 through 18 beta1

The performance summary is here

See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 14.0 is the base version and that is compared with more recent Postgres versions.

For 14.0 through 18 beta1, QPS on ...
  • l.i0 (the initial load)
    • Slightly faster starting in 15.0
    • Throughput was ~4% faster starting in 15.0 and that drops to ~2% in 18 beta1
    • 18 beta1 and 17.5 have similar performance
  • l.x (create index) 
    • Faster starting in 15.0
    • Throughput is between 9% and 17% faster in 15.0 through 18 beta1
    • 18 beta1 and 17.5 have similar performance
  • l.i1 (write-only)
    • Slower starting in 15.0
    • It is ~3% slower in 15.0 and that increases to between 6% and 10% in 18 beta1
    • 18 beta1 and 17.5 have similar performance
  • l.i2 (write-only)
    • Slower starting in 15.13 with a big drop in 17.0
    • 18 beta1 with io_method= sync and io_uring is worse than 17.5. It isn't clear but one problem might be more CPU/operation (see cpupq here)
  • qr100, qr500, qr1000 (range query)
    • Stable from 14.0 through 18 beta1
  • qp100, qp500, qp1000 (point query) 
    • Stable from 14.0 through 18 beta1
Results: Postgres 17.5 vs 18 beta1

The performance summary is here

See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 17.5 is the base version and that is compared with results from 18 beta1 using the three configurations explained above:
  • x10b with io_method=sync
  • x10c with io_method=worker and io_workers=16
  • x10d with io_method=io_uring
The summary of the summary is:
  • initial load step (l.i0)
    • 18 beta1 is 1% to 3% slower than 17.5
    • This step is short running so I don't have a strong opinion on the change
  • create index step (l.x)
    • 18 beta1 is 0% to 2% faster than 17.5
    • This step is short running so I don't have a strong opinion on the change
  • write-heavy step (l.i1)
    • 18 beta1 with io_method= sync and workers has similar perf as 17.5
    • 18 beta1 with io_method=io_uring is ~4% slower than 17.5. The problem might be more CPU/operation, see cpupq here
  • write-heavy step (l.i2)
    • 18 beta1 with io_method=workers is ~2% faster than 17.5
    • 18 beta1 with io_method= sync and io_uring is 6% and 8% slower than 17.5. The problem might be more CPU/operation, see cpupq here
  • range query steps (qr100, qr500, qr1000)
    • 18 beta1 and 17.5 have similar performance
  • point query steps (qp100, qp500, qp1000)
    • 18 beta1 and 17.5 have similar performance
The summary is:
  • initial load step (l.i0)
    • rQPS for (x10b, x10c, x10d) was (0.98, 0.99, 0.97)
  • create index step (l.x)
    • rQPS for (x10b, x10c, x10d) was (1.00, 1.02, 1.00)
  • write-heavy steps (l.i1, l.i2)
    • for l.i1 the rQPS for (x10b, x10c, x10d) was (1.01, 1.00, 0.96)
    • for l.i2 the rQPS for (x10b, x10c, x10d) was (0.94, 1.02, 0.92)
  • range query steps (qr100, qr500, qr1000)
    • for qr100 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
    • for qr500 the rQPS for (x10b, x10c, x10d) was (0.99, 1.01, 1.00)
    • for qr1000 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
  • point query steps (qp100, qp500, qp1000)
    • for qp100 the rQPS for (x10b, x10c, x10d) was (1.00, 1.00, 1.00)
    • for qp500 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
    • for qp1000 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 0.98)

June 06, 2025

Postgres 18 beta1: large server, IO-bound Insert Benchmark

This has results for an IO-bound Insert Benchmark with Postgres on a large server. A blog post about a CPU-bound workload on the same server is here.

tl;dr

  • initial load step (l.i0)
    • 18 beta1 is 4% faster than 17.4
  • create index step (l.x)
    • 18 beta1 with io_method =sync and =workers has similar perf as 17.4, and with =io_uring it is 7% faster than 17.4
  • write-heavy steps (l.i1, l.i2)
    • 18 beta1 and 17.4 have similar performance except for l.i2 with 18 beta1 and io_method=workers where 18 beta1 is 40% faster. This is an odd result and I am repeating the benchmark.
  • range query steps (qr100, qr500, qr1000)
    • 18 beta1 is up to (3%, 2%, 3%) slower than 17.4 with io_method= (sync, workers, io_uring). The issue might be new CPU overhead.
  • point query steps (qp100, qp500, qp1000)
    • 18 beta1 is up to (3%, 5%, 2%) slower than 17.4 with io_method= (sync, workers, io_uring). The issue might be new CPU overhead.

Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for version 18 beta1 and 17.4. I got the source for 18 beta1 from github using the REL_18_BETA1 tag because I started this benchmark effort a few days before the official release.

The server is an ax162-s from Hetzner with an AMD EPYC 9454P processor, 48 cores, AMD SMT disabled and 128G RAM. The OS is Ubuntu 22.04. Storage is 2 NVMe devices with SW RAID 1 and ext4. More details on it are here.

The config file for Postgres 17.4 is here and named conf.diff.cx10a_c32r128.

For 18 beta1 I tested 3 configuration files, and they are here:
  • conf.diff.cx10b_c32r128 (x10b) - uses io_method=sync
  • conf.diff.cx10cw4_c32r128 (x10cw4) - uses io_method=worker with io_workers=4
  • conf.diff.cx10d_c32r128 (x10d) - uses io_method=io_uring
The Benchmark

The benchmark is explained here and is run with 20 clients and 20 tables (table per client) with 200M rows per table. The database is larger than memory. In some benchmark steps the working set is larger than memory (see the point query steps qp100, qp500, qp1000) while the working set is cached for other benchmark steps (see the range query steps qr100, qr500 and qr1000).

The benchmark steps are:

  • l.i0
    • insert 10 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 4M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 1M rows are inserted and deleted per table.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
Results: overview

The performance report is here.

The summary section has 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for the benchmark steps that have background inserts and whether the systems sustained the target rates. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS (rQPS) to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result for Postgres 17.4, the base version.

When rQPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.97, green for >= 1.03 and grey for values between 0.98 and 1.02.

Results: details

The performance summary is here

See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 17.4 is the base version and that is compared with results from 18 beta1 using the three configurations explained above:
  • x10b with io_method=sync
  • x10cw4 with io_method=worker and io_workers=4
  • x10d with io_method=io_uring
The summary of the summary is:
  • initial load step (l.i0)
    • 18 beta1 is 4% faster than 17.4
    • From metrics, 18 beta1 has a lower context switch rate (cspq) and sustains a higher write rate to storage (wmbps).
  • create index step (l.x)
    • 18 beta1 with io_method =sync and =workers has similar perf as 17.4, and with =io_uring it is 7% faster than 17.4
    • From metrics, 18 beta1 with io_method=io_uring sustains a higher write rate (wmbps)
  • write-heavy steps (l.i1, l.i2)
    • 18 beta1 and 17.4 have similar performance except for l.i2 with 18 beta1 and io_method=workers where 18 beta1 is 40% faster. This is an odd result and I am repeating the benchmark.
    • From metrics for l.i1 and l.i2, in the case where 18 beta1 is 40% faster, there is much less CPU/operation (cpupq).
  • range query steps (qr100, qr500, qr1000)
    • 18 beta1 is up to (3%, 2%, 3%) slower than 17.4 with io_method= (sync, workers, io_uring)
    • From metrics for qr100, qr500 and qr1000 the problem might be more CPU/operation (cpupq)
    • Both 17.4 and 18 beta1 failed to sustain the target rate of 20,000 inserts and 20,000 deletes/s. They were close and did ~18,000/s for each. See the third table here.
  • point query steps (qp100, qp500, qp1000)
    • 18 beta1 is up to (3%, 5%, 2%) slower than 17.4 with io_method= (sync, workers, io_uring).
    • From metrics for qp100, qp500 and qp1000 the problem might be more CPU/operation (cpupq)
    • Both 17.4 and 18 beta1 failed to sustain the target rate of 20,000 inserts and 20,000 deletes/s. They were close and did ~18,000/s for each. See the third table here.
The summary is:
  • initial load step (l.i0)
    • rQPS for (x10b, x10cw4, x10d) was (1.04, 1.04, 1.04)
  • create index step (l.x)
    • rQPS for (x10b, x10cw4, x10d) was (0.99, 0.99, 1.07)
  • write-heavy steps (l.i1, l.i2)
    • for l.i1 the rQPS for (x10b, x10cw4, x10d) was (1.01, 0.99, 1.02)
    • for l.i2 the rQPS for (x10b, x10cw4, x10d) was (1.00, 1.40, 0.99)
  • range query steps (qr100, qr500, qr1000)
    • for qr100 the rQPS for (x10b, x10cw4, x10d) was (0.97, 0.98, 0.97)
    • for qr500 the rQPS for (x10b, x10cw4, x10d) was (0.98, 0.98, 0.97)
    • for qr1000 the rQPS for (x10b, x10cw4, x10d) was (1.00, 0.99, 0.98)
  • point query steps (qp100, qp500, qp1000)
    • for qp100 the rQPS for (x10b, x10cw4, x10d) was (1.00, 0.99, 0.98)
    • for qp500 the rQPS for (x10b, x10cw4, x10d) was (1.00, 0.95, 0.98)
    • for qp1000 the rQPS for (x10b, x10cw4, x10d) was (0.97, 0.95, 0.99)

Isolation Level for MongoDB Multi-Document Transactions

Many outdated or imprecise claims about transaction isolation levels in MongoDB persist. These claims are outdated because they may be based on an old version where multi-document transactions were introduced, MongoDB 4.0, such as the old Jepsen report, and issues have been fixed since then. They are also imprecise because people attempt to map MongoDB's transaction isolation to SQL isolation levels, which is inappropriate, as the SQL Standard definitions ignore Multi-Version Concurrency Control (MVCC), utilized by most databases, including MongoDB.
Martin Kleppmann has discussed this issue and provided tests to assess transaction isolation and potential anomalies. I will conduct these tests on MongoDB to explain how multi-document transactions work and how they avoid anomalies.

I followed the structure of Martin Kleppmann's tests on PostgreSQL and ported them to MongoDB. The read isolation level in MongoDB is controlled by the Read Concern, and the "snapshot" read concern is the only one comparable to other Multi-Version Concurrency Control SQL databases; it maps to Snapshot Isolation, improperly called Repeatable Read when using the closest SQL standard term. As I test on a single-node lab, I use "majority" to show that it does more than Read Committed. The write concern should also be set to "majority" to ensure that at least one node is common between the read and write quorums.
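
For reference, this is how a multi-document transaction is started with the snapshot read concern (a sketch using the same session API as the tests below; test_db and test are the names used throughout this post):

const session = db.getMongo().startSession();
const T = session.getDatabase("test_db");
session.startTransaction({
  readConcern: { level: "snapshot" },  // one timeline-consistent snapshot, even across shards
  writeConcern: { w: "majority" }      // write quorum overlaps the read quorum
});
// ... reads and writes through T.test ...
session.commitTransaction();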

Recap on Isolation Levels

Let me quickly explain the other isolation levels and why they cannot be mapped to the SQL standard:

  • readConcern: { level: "local" } is sometimes compared to Uncommitted Reads because it may show a state that can be later rolled back in case of failure. However, some SQL databases may show the same behavior in some rare conditions (example here) and still call that Read Committed
  • readConcern: { level: "majority" } is sometimes compared to Read Committed, because it avoids uncommitted reads. However, Read Committed was defined for wait-on-conflict databases to reduce the lock duration in two-phase locking, but MongoDB multi-document transactions use fail-on-conflict to avoid waits. Some databases consider that Read Committed can allow reads from multiple states (example here) while some others consider it must be a statement-level snapshot isolation (examples here). In a multi-shard transaction, majority may show a result from multiple states, as snapshot is the one being timeline consistent.
  • readConcern: { level: "snapshot" } is the real equivalent to Snapshot Isolation, and prevents more anomalies than Read Committed. Some databases even call that "serializable" (example here) because the SQL standard ignores the write-skew anomaly.
  • readConcern: { level: "linearlizable" } is comparable to serializable, but for a single document, not available for multi-document transactions, similar to many SQL databases that do not provide serializable as it re-introduces scalability the problems of read locks, that MVCC avoids.

Read Committed basic requirements (G0, G1a, G1b, G1c)

Here are some tests for anomalies typically prevented in Read Committed. I'll run them with readConcern: { level: "majority" } but keep in mind that readConcern: { level: "snapshot" } may be better if you want a consistent snapshot across multiple shards.

MongoDB Prevents Write Cycles (G0) with conflict error

// init
use test_db;
db.test.drop();
db.test.insertMany([
  { _id: 1, value: 10 },
  { _id: 2, value: 20 }
]);

// T1
const session1 = db.getMongo().startSession();
const T1 = session1.getDatabase("test_db");
session1.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T2
const session2 = db.getMongo().startSession();
const T2 = session2.getDatabase("test_db");
session2.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

T1.test.updateOne({ _id: 1 }, { $set: { value: 11 } });

T2.test.updateOne({ _id: 1 }, { $set: { value: 12 } });

MongoServerError[WriteConflict]: Caused by :: Write conflict during plan execution and yielding is disabled. :: Please retry your operation or multi-document transaction.

In a two-phase locking database, with wait-on-conflict behavior, the second transaction would wait for the first one to avoid anomalies. MongoDB explicit transactions are fail-on-conflict instead and raise a retriable error to avoid the anomaly.

Each transaction touched only one document, but it was declared explicitly with a session and startTransaction() to allow multi-document operations, which is why we observed the fail-on-conflict behavior: the application is expected to apply its own retry logic for complex transactions.

If the conflicting update were run as a single-document transaction, equivalent to an auto-commit statement, it would use wait-on-conflict behavior. I can test this by immediately running the following while the T1 transaction is still active:


// run the same update outside any explicit transaction (implicit, auto-commit)
// and measure how long it waits for the conflicting transaction T1
const autoDb = db.getMongo().getDB("test_db");
print(`Elapsed time: ${
    ((startTime = new Date())
    && autoDb.test.updateOne({ _id: 1 }, { $set: { value: 12 } }))
    && (new Date() - startTime)
} ms`);

Elapsed time: 72548 ms

I've run the updateOne({ _id: 1 }) outside any explicit transaction. It waited for the other transaction to terminate, which happened after a 60-second timeout, and then the update succeeded. The first transaction, which timed out, is aborted:

session1.commitTransaction();

MongoServerError[NoSuchTransaction]: Transaction with { txnNumber: 2 } has been aborted.
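
The 60-second limit presumably comes from the server's transactionLifetimeLimitSeconds parameter, which defaults to 60 seconds; one way to check it:

// Server-side limit on how long a multi-document transaction may stay open;
// transactions older than this are aborted in the background.
db.adminCommand({ getParameter: 1, transactionLifetimeLimitSeconds: 1 });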

The conflict behavior therefore differs by transaction type:

  • wait-on-conflict for implicit single-document transactions
  • fail-on-conflict for explicit multi-document transactions, which return a transient error immediately, without waiting, so that the application can roll back and retry (a minimal retry sketch follows this list).
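
As an illustration only, here is one possible shape of that application-side retry loop in mongosh. The helper name runWithRetry and its retry policy are mine, not a MongoDB API; a driver's withTransaction() helper typically plays this role.

// Hypothetical helper (not a MongoDB API): re-run the whole transaction body
// when the server raises a transient error such as WriteConflict.
function runWithRetry(session, txnBody, maxRetries = 5) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    session.startTransaction({
      readConcern: { level: "majority" },
      writeConcern: { w: "majority" }
    });
    try {
      txnBody();                    // all reads and writes of the transaction
      session.commitTransaction();
      return;                       // success
    } catch (e) {
      try { session.abortTransaction(); } catch (ignored) { /* already aborted */ }
      const transient = (e.hasErrorLabel && e.hasErrorLabel("TransientTransactionError"))
        || e.codeName === "WriteConflict";
      if (!transient) throw e;      // not retriable: surface to the caller
    }
  }
  throw new Error(`transaction failed after ${maxRetries} attempts`);
}

// Usage: retry the update that conflicted in the G0 example above.
const sessionR = db.getMongo().startSession();
const TR = sessionR.getDatabase("test_db");
runWithRetry(sessionR, () => {
  TR.test.updateOne({ _id: 1 }, { $set: { value: 12 } });
});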

MongoDB prevents Aborted Reads (G1a)

// init
use test_db;
db.test.drop();
db.test.insertMany([
  { _id: 1, value: 10 },
  { _id: 2, value: 20 }
]);

// T1
const session1 = db.getMongo().startSession();
const T1 = session1.getDatabase("test_db");
session1.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T2
const session2 = db.getMongo().startSession();
const T2 = session2.getDatabase("test_db");
session2.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

T1.test.updateOne({ _id: 1 }, { $set: { value: 101 } });

T2.test.find();

[ { _id: 1, value: 10 }, { _id: 2, value: 20 } ]

session1.abortTransaction();

T2.test.find();

[ { _id: 1, value: 10 }, { _id: 2, value: 20 } ]

session2.commitTransaction();

With read concern 'majority' or 'snapshot', only committed values are visible, so MongoDB prevents reading data written by a transaction that later aborts.

MongoDB prevents Intermediate Reads (G1b)

// init
use test_db;
db.test.drop();
db.test.insertMany([
  { _id: 1, value: 10 },
  { _id: 2, value: 20 }
]);

// T1
const session1 = db.getMongo().startSession();
const T1 = session1.getDatabase("test_db");
session1.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T2
const session2 = db.getMongo().startSession();
const T2 = session2.getDatabase("test_db");
session2.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

T1.test.updateOne({ _id: 1 }, { $set: { value: 101 } });

T2.test.find();

[ { _id: 1, value: 10 }, { _id: 2, value: 20 } ]

The non-committed change from T1 is not visible to T2.


T1.test.updateOne({ _id: 1 }, { $set: { value: 11 } });

session1.commitTransaction();  // T1 commits

T2.test.find();

[ { _id: 1, value: 10 }, { _id: 2, value: 20 } ]

The committed change from T1 is still not visible to T2 because it happened after T2 started.

This is different from most Multi-Version Concurrency Control SQL databases. To minimize the performance impact of wait-on-conflict, they reset the read time before each statement in Read Committed, since phantom reads are allowed at that level, and they would have shown the newly committed value in this example.
MongoDB never does that: the read time is always the start of the transaction, so no phantom read anomaly can happen. On the other hand, it does not wait to find out whether the conflict resolves or ends in a deadlock; it fails immediately and lets the application retry.

MongoDB prevents Circular Information Flow (G1c)

// init
use test_db;
db.test.drop();
db.test.insertMany([
  { _id: 1, value: 10 },
  { _id: 2, value: 20 }
]);

// T1
const session1 = db.getMongo().startSession();
const T1 = session1.getDatabase("test_db");
session1.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T2
const session2 = db.getMongo().startSession();
const T2 = session2.getDatabase("test_db");
session2.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

T1.test.updateOne({ _id: 1 }, { $set: { value: 11 } });

T2.test.updateOne({ _id: 2 }, { $set: { value: 22 } });

T1.test.find({ _id: 2 });

[ { _id: 2, value: 20 } ]

T2.test.find({ _id: 1 });

[ { _id: 1, value: 10 } ]

session1.commitTransaction();

session2.commitTransaction();

In both transactions, the uncommitted changes of the other transaction are not visible.

MongoDB prevents Observed Transaction Vanishes (OTV)

// init
use test_db;
db.test.drop();
db.test.insertMany([
  { _id: 1, value: 10 },
  { _id: 2, value: 20 }
]);

// T1
const session1 = db.getMongo().startSession();
const T1 = session1.getDatabase("test_db");
session1.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T2
const session2 = db.getMongo().startSession();
const T2 = session2.getDatabase("test_db");
session2.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T3
const session3 = db.getMongo().startSession();
const T3 = session3.getDatabase("test_db");
session3.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

T1.test.updateOne({ _id: 1 }, { $set: { value: 11 } });

T1.test.updateOne({ _id: 2 }, { $set: { value: 19 } });

T2.test.updateOne({ _id: 1 }, { $set: { value: 12 } });

MongoServerError[WriteConflict]: Caused by :: Write conflict during plan execution and yielding is disabled. :: Please retry your operation or multi-document transaction.

This anomaly is prevented by fail-on-conflict with explicit transactions. With an implicit single-document transaction, it would have waited for the conflicting transaction to end.

MongoDB prevents Predicate-Many-Preceders (PMP)

In a SQL database, preventing this anomaly would require the Snapshot Isolation level, because Read Committed uses a different read time per statement. However, MongoDB prevents it even with the 'majority' read concern; 'snapshot' is required only for cross-shard snapshot consistency.

// init
use test_db;
db.test.drop();
db.test.insertMany([
  { _id: 1, value: 10 },
  { _id: 2, value: 20 }
]);

// T1
const session1 = db.getMongo().startSession();
const T1 = session1.getDatabase("test_db");
session1.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T2
const session2 = db.getMongo().startSession();
const T2 = session2.getDatabase("test_db");
session2.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

T1.test.find({ value: 30 }).toArray();

[]

T2.test.insertOne(  { _id: 3, value: 30 }  );

session2.commitTransaction();

T1.test.find({ value: { $mod: [3, 0] } }).toArray();

[]

The newly inserted document is not visible to T1 because T2 committed it after T1 started.

Martin Kleppmann's tests include some variations with a delete statement and a write predicate:

// init
use test_db;
db.test.drop();
db.test.insertMany([
  { _id: 1, value: 10 },
  { _id: 2, value: 20 }
]);

// T1
const session1 = db.getMongo().startSession();
const T1 = session1.getDatabase("test_db");
session1.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T2
const session2 = db.getMongo().startSession();
const T2 = session2.getDatabase("test_db");
session2.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

T1.test.updateMany({}, { $inc: { value: 10 } });

T2.test.deleteMany({ value: 20 });

MongoServerError[WriteConflict]: Caused by :: Write conflict during plan execution and yielding is disabled. :: Please retry your operation or multi-document transaction.

As this is an explicit transaction, the delete does not block: it detects the conflict and raises a retriable exception to prevent the anomaly. Compared to PostgreSQL, which prevents this in Repeatable Read, MongoDB saves the waiting time before failure but requires the application to implement retry logic.

MongoDB prevents Lost Update (P4)

// init
use test_db;
db.test.drop();
db.test.insertMany([
  { _id: 1, value: 10 },
  { _id: 2, value: 20 }
]);

// T1
const session1 = db.getMongo().startSession();
const T1 = session1.getDatabase("test_db");
session1.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T2
const session2 = db.getMongo().startSession();
const T2 = session2.getDatabase("test_db");
session2.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

T1.test.find({ _id: 1 });

[ { _id: 1, value: 10 } ]

T2.test.find({ _id: 1 });

[ { _id: 1, value: 10 } ]

T1.test.updateOne({ _id: 1 }, { $set: { value: 11 } });

T2.test.updateOne({ _id: 1 }, { $set: { value: 11 } });

MongoServerError[WriteConflict]: Caused by :: Write conflict during plan execution and yielding is disabled. :: Please retry your operation or multi-document transaction.

As this is an explicit transaction, the update does not wait: it raises a retriable exception, so the other update cannot be silently overwritten.

MongoDB prevents Read Skew (G-single)

// init
use test_db;
db.test.drop();
db.test.insertMany([
  { _id: 1, value: 10 },
  { _id: 2, value: 20 }
]);

// T1
const session1 = db.getMongo().startSession();
const T1 = session1.getDatabase("test_db");
session1.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T2
const session2 = db.getMongo().startSession();
const T2 = session2.getDatabase("test_db");
session2.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});


PgBouncer for PostgreSQL: How Connection Pooling Solves Enterprise Slowdowns

If your database is getting slower, your users don’t care why. They just want it to work. Meanwhile, you’re stuck dealing with the symptoms: sluggish apps, spiking resource usage, and support tickets piling up faster than your monitoring alerts can handle. Why PostgreSQL struggles under load Often, the problem isn’t with your queries or hardware; […]

June 05, 2025

Analyzing Metastable Failures in Distributed Systems

So it goes: your system is purring like a tiger, devouring requests, until, without warning, it slumps into existential dread. Not a crash. Not a bang. A quiet, self-sustaining collapse. The system doesn’t stop. It just refuses to get better. Metastable failure is what happens when the feedback loops in the system go feral. Retries pile up, queues overflow, recovery stalls. Everything runs but nothing improves. The system is busy and useless.

In an earlier post, I reviewed the excellent OSDI ’22 paper on metastable failures, which dissected real-world incidents and laid the theoretical groundwork. If you haven’t read that one, start there.

This HotOS ’25 paper picks up the thread. It introduces tooling and a simulation framework to help engineers identify potential metastable failure modes before disaster strikes. It’s early stage work. A short paper. But a promising start. Let’s walk through it.


Introduction

Like most great tragedies, metastable failure doesn't begin with villainy; it begins with good intentions. Systems are built to be resilient: retries, queues, timeouts, backoffs. An immune system for failure, so to speak. But occasionally that immune system misfires and attacks the system itself. Retries amplify load. Timeouts cascade. Error handling makes more errors. Feedback loops go feral and you get an Ouroboros, a snake that eats its tail in an eternal cycle. The system gets stuck in degraded mode, and refuses to get better.

This paper takes on the problem of identifying where systems are vulnerable to such failures. It proposes a modeling and simulation framework to give operators a macroscopic view: where metastability can strike, and how to steer clear of it.


Overview


The paper proposes a modeling pipeline that spans levels of abstraction: from queueing theory models (CTMC), to simulations (DES), to emulations, and finally, to stress tests on real systems. The further down the stack you go, the more accurate and more expensive the analysis becomes.

The key idea is a chain of simulations: each stage refines the previous one. Abstract models suggest where trouble might be, and concrete experiments confirm or calibrate. The pipeline is bidirectional: data from low-level runs improves high-level models, and high-level predictions guide where to focus concrete testing.

The modeling is done using a Python-based DSL. It captures common abstractions: thread pools, queues, retries, service times. Crucially, the authors claim that only a small number of such components are needed to capture the essential dynamics of many production services. Business logic is abstracted away as service-time distributions.

Figure 2 shows a simple running example used throughout the paper: a single-threaded server handling API requests at 10 RPS, serving a client that sends requests at 5 RPS, with a 5s timeout and five retries. The queue bound is 150. The goal is to understand when this setup tips into metastability and how to tune parameters to avoid that.
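
To see why there is a tipping point at all, a back-of-the-envelope calculation on these parameters is enough (my arithmetic, not the paper's model): once queueing delay exceeds the timeout, retries multiply the offered load well past the server's capacity, so the overload sustains itself even after the original trigger disappears.

// Retry amplification on the running-example parameters (illustrative arithmetic only).
const serviceRate = 10;  // server capacity, requests/s
const arrivalRate = 5;   // client load, requests/s
const maxRetries  = 5;   // extra attempts per request beyond the first

// Healthy regime: latency stays below the 5 s timeout, so no retries fire.
const healthyLoad = arrivalRate;                      // 5 RPS, half of capacity

// Degraded regime: every attempt exceeds the timeout, so each request
// is attempted up to 1 + maxRetries times.
const degradedLoad = arrivalRate * (1 + maxRetries);  // 30 RPS, 3x capacity

console.log(`healthy:  ${healthyLoad} RPS offered vs ${serviceRate} RPS capacity`);
console.log(`degraded: ${degradedLoad} RPS offered vs ${serviceRate} RPS capacity`);
// Past the tipping point, retry traffic alone keeps the server saturated:
// the self-sustaining feedback loop that defines metastability.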


Continuous-Time Markov Chains (CTMC)


CTMC provides an abstract average-case view of a system, eliding the operational details of the constructs. Figure 3 shows a probabilistic heatmap of queue length vs. retry count (called orbit). Arrows show the most likely next state; lighter color means higher probability. You can clearly see a tipping point: once the queue exceeds 40 and retries hit 30, the system is likely to spiral into a self-sustaining feedback loop. Below that threshold, it trends toward recovery. This model doesn't capture fine-grained behaviors like retry timers, but it's useful for quickly flagging dangerous regions of the state space.


Simulation (DES)

Discrete event simulation (DES) hits a sweet spot between abstract math and real-world mess. It validates CTMC predictions but also opens up the system for inspection. You can trace individual requests, capture any metric, and watch metastability unfold. The paper claims that operators often get their "aha" moment here, seeing exactly how retries and queues spiral out of control.


Emulation


Figure 4 shows the emulator results. This stage runs a stripped-down version of the service on production infrastructure. This is not the real system, but its lazy cousin. It doesn't do real work (it just sleeps on request) but it behaves like the real thing under stress. The emulator confirms that the CTMC and DES models are on track: the fake server fails in the same way as the real one.


Testing

The final stage is real stress tests on real servers. It's slow, expensive, and mostly useless unless you already know where to look. And that's the point of the whole pipeline: make testing less dumb. Feed it model predictions, aim it precisely, and maybe catch the metastable failure before it catches you.


Discussion

There may be a connection between metastability and self-stabilization. If we think in terms of a global variant function (say, system stress), then metastability is when that function stops decreasing and the system slips into an attractor basin from which recovery is unlikely. Real-world randomness might kick the system out. But sometimes it is already stuck so badly that it doesn't. Probabilistic self-stabilization once explored this terrain, and it may still have lessons here.

The paper nods at composability, but doesn't deliver. In practice, feedback loops cross the component boundaries. Metastability often emerges from these interdependencies. Component-wise analysis may miss the whole. As we know from self-stabilization: composition is hard. It works by layering or superposition, not naive composition.

The running example in the paper is useful but tiny. The authors claim this generalizes, but don't show how. For a real system, like Kafka or Spanner, how many components do you need to simulate? What metrics matter? What fidelity is enough? This feels like a "marking territory" paper that maps a problem space.

There's also a Black Swan angle here. Like Taleb's rare, high-impact events, metastable failures are hard to predict, easy to explain in hindsight, and often ignored until too late. Like Black Swan detection, I think metastability is less about prediction and more about preparation: structuring our systems and minds to notice fragility before it breaks. The paper stops at identifying potential metastability risks; recovery is not considered. Load shedding would work, but we need some theoretical and analytical guidance, otherwise it is too easy to do harm with load shedding instead of helping recovery. Which actions would help nudge the system out of the metastable basin? In what order, so as not to cause further harmful feedback loops? What runtime signals suggest you're close?

Aleksey Charapko, the lead author of the OSDI '22 paper on metastability, is helping MongoDB identify potential metastability risks and address them with preventive strategies and defenses.

Introducing Experimental Support for Stored Programs in JS in Percona Server for MySQL

TL;DR Percona Server for MySQL now offers experimental support for stored programs in the JS language. This free and open source alternative to Oracle’s Enterprise/Cloud-only feature enables users to write stored programs in a more modern, convenient, and often more familiar language. It is still in active development, and we would very much like your […]