A curated list of database news from authoritative sources

June 06, 2025

Isolation Level for MongoDB Multi-Document Transactions

Many outdated or imprecise claims about transaction isolation levels in MongoDB persist. They are outdated because they are often based on MongoDB 4.0, the version in which multi-document transactions were introduced (for example, the old Jepsen report), and the issues found then have since been fixed. They are imprecise because people attempt to map MongoDB's transaction isolation to SQL isolation levels, which is inappropriate: the SQL Standard definitions ignore Multi-Version Concurrency Control (MVCC), used by most databases, including MongoDB.
Martin Kleppmann has discussed this issue and provided tests to assess transaction isolation and potential anomalies. I will conduct these tests on MongoDB to explain how multi-document transactions work and avoid anomalies.

I followed the structure of Martin Kleppmann's tests on PostgreSQL and ported them to MongoDB. The read isolation level in MongoDB is controlled by the Read Concern, and the "snapshot" read concern is the only one comparable to other Multi-Version Concurrency Control SQL databases: it maps to Snapshot Isolation, often improperly called Repeatable Read when using the closest SQL-standard term. As I test on a single-node lab, I use "majority" to show that it does more than Read Committed. The write concern should also be set to "majority" to ensure that at least one node is common between the read and write quorums.

Recap on Isolation Levels

Let me quickly explain the other isolation levels and why they cannot be mapped to the SQL standard:

  • readConcern: { level: "local" } is sometimes compared to Uncommitted Reads because it may show a state that can be later rolled back in case of failure. However, some SQL databases may show the same behavior in some rare conditions (example here) and still call that Read Committed
  • readConcern: { level: "majority" } is sometimes compared to Read Committed, because it avoids uncommitted reads. However, Read Committed was defined for wait-on-conflict databases to reduce the lock duration in two-phase locking, but MongoDB multi-document transactions use fail-on-conflict to avoid waits. Some databases consider that Read Committed can allow reads from multiple states (example here) while some others consider it must be a statement-level snapshot isolation (examples here). In a multi-shard transaction, majority may show a result from multiple states, as snapshot is the one being timeline consistent.
  • readConcern: { level: "snapshot" } is the real equivalent to Snapshot Isolation, and prevents more anomalies than Read Committed. Some databases even call that "serializable" (example here) because the SQL standard ignores the write-skew anomaly.
  • readConcern: { level: "linearlizable" } is comparable to serializable, but for a single document, not available for multi-document transactions, similar to many SQL databases that do not provide serializable as it re-introduces scalability the problems of read locks, that MVCC avoids.

Read Committed basic requirements (G0, G1a, G1b, G1c)

Here are some tests for anomalies typically prevented in Read Committed. I'll run them with readConcern: { level: "majority" } but keep in mind that readConcern: { level: "snapshot" } may be better if you want a consistent snapshot across multiple shards.
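For reference, the only change needed to get a cross-shard-consistent snapshot in these tests is the read concern level in the transaction options (a minimal variant of the session setup used below):

const session = db.getMongo().startSession();
const T = session.getDatabase("test_db");
session.startTransaction({
  readConcern: { level: "snapshot" },
  writeConcern: { w: "majority" }
});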

MongoDB Prevents Write Cycles (G0) with conflict error

// init
use test_db;
db.test.drop();
db.test.insertMany([
  { _id: 1, value: 10 },
  { _id: 2, value: 20 }
]);

// T1
const session1 = db.getMongo().startSession();
const T1 = session1.getDatabase("test_db");
session1.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T2
const session2 = db.getMongo().startSession();
const T2 = session2.getDatabase("test_db");
session2.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

T1.test.updateOne({ _id: 1 }, { $set: { value: 11 } });

T2.test.updateOne({ _id: 1 }, { $set: { value: 12 } });

MongoServerError[WriteConflict]: Caused by :: Write conflict during plan execution and yielding is disabled. :: Please retry your operation or multi-document transaction.

In a two-phase locking database, with wait-on-conflict behavior, the second transaction would wait for the first one to avoid anomalies. However, MongoDB multi-document transactions are fail-on-conflict and raise a retriable error to avoid the anomaly.

Each transaction touched only one document, but it was declared explicitly with a session and startTransaction() to allow multi-document operations. This is why we observed the fail-on-conflict behavior, which lets the application apply its retry logic for complex transactions.

If the conflicting update were run as a single-document transaction, equivalent to an auto-commit statement, it would use a wait-on-conflict behavior. I can test it by immediately running this while the T1 transaction is still active:


const db = db.getMongo().getDB("test_db");
print(`Elapsed time: ${
    ((startTime = new Date())
    && db.test.updateOne({ _id: 1 }, { $set: { value: 12 } }))
    && (new Date() - startTime)
} ms`);

Elapsed time: 72548 ms

I ran updateOne({ _id: 1 }) outside of an explicit transaction. It waited for the other transaction to terminate, which happened after a 60-second timeout, and then the update was successful. The first transaction, which timed out, is aborted:

session1.commitTransaction();

MongoServerError[NoSuchTransaction]: Transaction with { txnNumber: 2 } has been aborted.

The behavior of conflict in transactions differs:

  • wait-on-conflict for implicit single-document transactions
  • fail-on-conflict for explicit multi-document transactions: a transient error is raised immediately, without waiting, so the application can roll back and retry (a retry sketch follows below).
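Here is a minimal sketch of such retry logic in mongosh (my own example, assuming the sessions created above; the driver's session.withTransaction() helper wraps the same pattern and also handles commit retries):

// Minimal retry loop for an explicit multi-document transaction (sketch).
// A "TransientTransactionError" label means the whole transaction can be
// retried from the start after the conflict aborted it.
function runWithRetry(session, sessionDb, txnFn) {
  while (true) {
    session.startTransaction({
      readConcern: { level: "majority" },
      writeConcern: { w: "majority" }
    });
    try {
      txnFn(sessionDb);                 // all reads and writes of the transaction
      session.commitTransaction();      // a production version would also handle
      return;                           // the UnknownTransactionCommitResult label
    } catch (e) {
      try { session.abortTransaction(); } catch (ignored) { /* already aborted */ }
      if (e.errorLabels && e.errorLabels.includes("TransientTransactionError")) {
        continue;                       // conflict detected: simply retry
      }
      throw e;                          // not retriable: propagate
    }
  }
}

// Example: the conflicting update from the G0 test, now retried on conflict.
runWithRetry(session2, T2, (dbTx) => {
  dbTx.test.updateOne({ _id: 1 }, { $set: { value: 12 } });
});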

MongoDB prevents Aborted Reads (G1a)

// init
use test_db;
db.test.drop();
db.test.insertMany([
  { _id: 1, value: 10 },
  { _id: 2, value: 20 }
]);

// T1
const session1 = db.getMongo().startSession();
const T1 = session1.getDatabase("test_db");
session1.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T2
const session2 = db.getMongo().startSession();
const T2 = session2.getDatabase("test_db");
session2.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

T1.test.updateOne({ _id: 1 }, { $set: { value: 101 } });

T2.test.find();

[ { _id: 1, value: 10 }, { _id: 2, value: 20 } ]

session1.abortTransaction();

T2.test.find();

[ { _id: 1, value: 10 }, { _id: 2, value: 20 } ]

session2.commitTransaction();

MongoDB prevents reading an aborted transaction by reading only the committed value when Read Concern is 'majority' or 'snapshot'.

MongoDB prevents Intermediate Reads (G1b)

// init
use test_db;
db.test.drop();
db.test.insertMany([
  { _id: 1, value: 10 },
  { _id: 2, value: 20 }
]);

// T1
const session1 = db.getMongo().startSession();
const T1 = session1.getDatabase("test_db");
session1.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T2
const session2 = db.getMongo().startSession();
const T2 = session2.getDatabase("test_db");
session2.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

T1.test.updateOne({ _id: 1 }, { $set: { value: 101 } });

T2.test.find();

[ { _id: 1, value: 10 }, { _id: 2, value: 20 } ]

The non-committed change from T1 is not visible to T2.


T1.test.updateOne({ _id: 1 }, { $set: { value: 11 } });

session1.commitTransaction();  // T1 commits

T2.test.find();

[ { _id: 1, value: 10 }, { _id: 2, value: 20 } ]

The committed change from T1 is still not visible to T2 because it happened after T2 started.

This is different from the majority of Multi-Version Concurrency Control SQL databases. To minimize the performance impact of wait-on-conflict, they reset the read time before each statement in Read Committed, as phantom reads are allowed. They would have displayed the newly committed value in this example.
MongoDB never does that: the read time is always the start of the transaction, and no phantom read anomaly happens. However, it doesn't wait to see whether the conflict resolves or must fail with a deadlock; it fails immediately to let the application retry.

MongoDB prevents Circular Information Flow (G1c)

// init
use test_db;
db.test.drop();
db.test.insertMany([
  { _id: 1, value: 10 },
  { _id: 2, value: 20 }
]);

// T1
const session1 = db.getMongo().startSession();
const T1 = session1.getDatabase("test_db");
session1.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T2
const session2 = db.getMongo().startSession();
const T2 = session2.getDatabase("test_db");
session2.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

T1.test.updateOne({ _id: 1 }, { $set: { value: 11 } });

T2.test.updateOne({ _id: 2 }, { $set: { value: 22 } });

T1.test.find({ _id: 2 });

[ { _id: 2, value: 20 } ]

T2.test.find({ _id: 1 });

[ { _id: 1, value: 10 } ]

session1.commitTransaction();

session2.commitTransaction();

In both transactions, the uncommitted changes are not visible to the other.

MongoDB prevents Observed Transaction Vanishes (OTV)

// init
use test_db;
db.test.drop();
db.test.insertMany([
  { _id: 1, value: 10 },
  { _id: 2, value: 20 }
]);

// T1
const session1 = db.getMongo().startSession();
const T1 = session1.getDatabase("test_db");
session1.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T2
const session2 = db.getMongo().startSession();
const T2 = session2.getDatabase("test_db");
session2.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T3
const session3 = db.getMongo().startSession();
const T3 = session3.getDatabase("test_db");
session3.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

T1.test.updateOne({ _id: 1 }, { $set: { value: 11 } });

T1.test.updateOne({ _id: 2 }, { $set: { value: 19 } });

T2.test.updateOne({ _id: 1 }, { $set: { value: 12 } });

MongoServerError[WriteConflict]: Caused by :: Write conflict during plan execution and yielding is disabled. :: Please retry your operation or multi-document transaction.

This anomaly is prevented by fail-on-conflict with explicit transactions. With an implicit single-document transaction, it would have waited for the conflicting transaction to end.

MongoDB prevents Predicate-Many-Preceders (PMP)

With a SQL database, preventing this anomaly would require the Snapshot Isolation level because Read Committed uses different read times per statement. However, I can show it in MongoDB with the 'majority' read concern, 'snapshot' being required only to get cross-shard snapshot consistency.

// init
use test_db;
db.test.drop();
db.test.insertMany([
  { _id: 1, value: 10 },
  { _id: 2, value: 20 }
]);

// T1
const session1 = db.getMongo().startSession();
const T1 = session1.getDatabase("test_db");
session1.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T2
const session2 = db.getMongo().startSession();
const T2 = session2.getDatabase("test_db");
session2.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

T1.test.find({ value: 30 }).toArray();

[]

T2.test.insertOne(  { _id: 3, value: 30 }  );

session2.commitTransaction();

T1.test.find({ value: { $mod: [3, 0] } }).toArray();

[]

The newly inserted row is not visible because it was committed by T2 after the start of T1.

Martin Kleppmann's tests include some variations with a delete statement and a write predicate:

// init
use test_db;
db.test.drop();
db.test.insertMany([
  { _id: 1, value: 10 },
  { _id: 2, value: 20 }
]);

// T1
const session1 = db.getMongo().startSession();
const T1 = session1.getDatabase("test_db");
session1.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T2
const session2 = db.getMongo().startSession();
const T2 = session2.getDatabase("test_db");
session2.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

T1.test.updateMany({}, { $inc: { value: 10 } });

T2.test.deleteMany({ value: 20 });

MongoServerError[WriteConflict]: Caused by :: Write conflict during plan execution and yielding is disabled. :: Please retry your operation or multi-document transaction.

As it is an explicit transaction, rather than blocking, the delete detects the conflict and raises a retriable exception to prevent the anomaly. Compared to PostgreSQL, which prevents this in Repeatable Read by blocking, it saves the waiting time before failure but requires the application to implement retry logic.

MongoDB prevents Lost Update (P4)

// init
use test_db;
db.test.drop();
db.test.insertMany([
  { _id: 1, value: 10 },
  { _id: 2, value: 20 }
]);

// T1
const session1 = db.getMongo().startSession();
const T1 = session1.getDatabase("test_db");
session1.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T2
const session2 = db.getMongo().startSession();
const T2 = session2.getDatabase("test_db");
session2.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

T1.test.find({ _id: 1 });

[ { _id: 1, value: 10 } ]

T2.test.find({ _id: 1 });

[ { _id: 1, value: 10 } ]

T1.test.updateOne({ _id: 1 }, { $set: { value: 11 } });

T2.test.updateOne({ _id: 1 }, { $set: { value: 11 } });

MongoServerError[WriteConflict]: Caused by :: Write conflict during plan execution and yielding is disabled. :: Please retry your operation or multi-document transaction.

As it is an explicit transaction, the update doesn't wait and raises a retriable exception, so it cannot silently overwrite the other update.
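Note that for a simple counter, the idiomatic MongoDB approach outside of an explicit transaction is a single atomic update: the increment happens server-side, so no concurrent update is lost. A minimal sketch on the same collection:

// Atomic single-document increment: no client-side read-modify-write,
// so two concurrent increments both take effect.
db.test.updateOne({ _id: 1 }, { $inc: { value: 1 } });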

MongoDB prevents Read Skew (G-single)

// init
use test_db;
db.test.drop();
db.test.insertMany([
  { _id: 1, value: 10 },
  { _id: 2, value: 20 }
]);

// T1
const session1 = db.getMongo().startSession();
const T1 = session1.getDatabase("test_db");
session1.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

// T2
const session2 = db.getMongo().startSession();
const T2 = session2.getDatabase("test_db");
session2.startTransaction({
  readConcern: { level: "majority" },
  writeConcern: { w: "majority" }
});

PgBouncer for PostgreSQL: How Connection Pooling Solves Enterprise Slowdowns

If your database is getting slower, your users don’t care why. They just want it to work. Meanwhile, you’re stuck dealing with the symptoms: sluggish apps, spiking resource usage, and support tickets piling up faster than your monitoring alerts can handle. Why PostgreSQL struggles under load Often, the problem isn’t with your queries or hardware; […]

June 05, 2025

Analyzing Metastable Failures in Distributed Systems

So it goes: your system is purring like a tiger, devouring requests, until, without warning, it slumps into existential dread. Not a crash. Not a bang. A quiet, self-sustaining collapse. The system doesn’t stop. It just refuses to get better. Metastable failure is what happens when the feedback loops in the system go feral. Retries pile up, queues overflow, recovery stalls. Everything runs but nothing improves. The system is busy and useless.

In an earlier post, I reviewed the excellent OSDI ’22 paper on metastable failures, which dissected real-world incidents and laid the theoretical groundwork. If you haven’t read that one, start there.

This HotOS ’25 paper picks up the thread. It introduces tooling and a simulation framework to help engineers identify potential metastable failure modes before disaster strikes. It’s early stage work. A short paper. But a promising start. Let’s walk through it.


Introduction

Like most great tragedies, metastable failure doesn't begin with villainy; it begins with good intentions. Systems are built to be resilient: retries, queues, timeouts, backoffs. An immune system for failure, so to speak. But occasionally that immune system misfires and attacks the system itself. Retries amplify load. Timeouts cascade. Error handling makes more errors. Feedback loops go feral and you get an Ouroboros, a snake that eats its tail in an eternal cycle. The system gets stuck in degraded mode, and refuses to get better.

This paper takes on the problem of identifying where systems are vulnerable to such failures. It proposes a modeling and simulation framework to give operators a macroscopic view: where metastability can strike, and how to steer clear of it.


Overview


The paper proposes a modeling pipeline that spans levels of abstraction: from queueing theory models (CTMC), to simulations (DES), to emulations, and finally, to stress tests on real systems. The further down the stack you go, the more accurate and more expensive the analysis becomes.

The key idea is a chain of simulations: each stage refines the previous one. Abstract models suggest where trouble might be, and concrete experiments confirm or calibrate. The pipeline is bidirectional: data from low-level runs improves high-level models, and high-level predictions guide where to focus concrete testing.

The modeling is done using a Python-based DSL. It captures common abstractions: thread pools, queues, retries, service times. Crucially, the authors claim that only a small number of such components are needed to capture the essential dynamics of many production services. Business logic is abstracted away as service-time distributions.

Figure 2 shows a simple running example used throughout the paper: a single-threaded server handling API requests at 10 RPS, serving a client that sends requests at 5 RPS, with a 5s timeout and five retries. The queue bound is 150. The goal is to understand when this setup tips into metastability and how to tune parameters to avoid that.
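To make the feedback loop concrete, here is a toy discrete-time simulation of that running example in JavaScript. It is my own simplification for illustration, not the paper's DSL: a short load pulse pushes the queue past the point where requests start timing out, and the resulting retries keep the server saturated long after the pulse ends.

// A toy discrete-time sketch (1-second steps) of the running example:
// 10 RPS server, 5 RPS client, 5 s timeout, up to 5 retries, queue bound 150.
const CAPACITY = 10, LOAD = 5, TIMEOUT = 5, MAX_RETRIES = 5, QUEUE_BOUND = 150;

let queue = 0;                        // requests waiting or in service
const orbit = new Map();              // arrival time -> pending retries

for (let t = 0; t <= 120; t++) {
  // base load, plus a short overload pulse that triggers the transition
  const burst = (t >= 10 && t < 15) ? 20 : 0;
  const incoming = Array.from({ length: LOAD + burst }, () => ({ attempt: 0 }))
    .concat(orbit.get(t) || []);
  orbit.delete(t);

  for (const req of incoming) {
    if (queue >= QUEUE_BOUND) continue;          // rejected at the queue bound
    queue++;
    const expectedWait = queue / CAPACITY;
    if (expectedWait > TIMEOUT && req.attempt < MAX_RETRIES) {
      // the client gives up and retries later, but the stale request
      // still sits in the queue and wastes server capacity
      const due = t + TIMEOUT;
      if (!orbit.has(due)) orbit.set(due, []);
      orbit.get(due).push({ attempt: req.attempt + 1 });
    }
  }
  queue = Math.max(0, queue - CAPACITY);         // the server drains 10 req/s

  if (t % 10 === 0) {
    const pending = [...orbit.values()].reduce((n, a) => n + a.length, 0);
    console.log(`t=${t}s queue=${queue} retries-in-flight=${pending}`);
  }
}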


Continuous-Time Markov Chains (CTMC)


CTMC provides an abstract average-case view of a system, eliding the operational details of the constructs. Figure 3 shows a probabilistic heatmap of queue length vs. retry count (called orbit). Arrows show the most likely next state; lighter color means higher probability. You can clearly see a tipping point: once the queue exceeds 40 and retries hit 30, the system is likely to spiral into a self-sustaining feedback loop. Below that threshold, it trends toward recovery. This model doesn't capture fine-grained behaviors like retry timers, but it's useful for quickly flagging dangerous regions of the state space.


Simulation (DES)

Discrete event simulation (DES) hits a sweet spot between abstract math and real-world mess. It validates CTMC predictions but also opens up the system for inspection. You can trace individual requests, capture any metric, and watch metastability unfold. The paper claims that operators often get their "aha" moment here, seeing exactly how retries and queues spiral out of control.


Emulation


Figure 4 shows the emulator results. This stage runs a stripped-down version of the service on production infrastructure. This is not the real system, but its lazy cousin. It doesn't do real work (it just sleeps on request) but it behaves like the real thing under stress. The emulator confirms that the CTMC and DES models are on track: the fake server fails in the same way as the real one.


Testing

The final stage is real stress tests on real servers. It's slow, expensive, and mostly useless unless you already know where to look. And that's the point of the whole pipeline: make testing less dumb. Feed it model predictions, aim it precisely, and maybe catch the metastable failure before it catches you.


Discussion

There may be a connection between metastability and self-stabilization. If we think in terms of a global variant function (say, system stress) then metastability is when that function stops decreasing and the system slips into an attractor basin from which recovery is unlikely. Real-world randomness might kick the system out. But sometimes it is already stuck so bad that it doesn't. Probabilistic self-stabilization once explored this terrain, and it may still have lessons here.

The paper nods at composability, but doesn't deliver. In practice, feedback loops cross the component boundaries. Metastability often emerges from these interdependencies. Component-wise analysis may miss the whole. As we know from self-stabilization: composition is hard. It works by layering or superposition, not naive composition.

The running example in the paper is useful but tiny. The authors claim this generalizes, but don't show how. For a real system, like Kafka or Spanner, how many components do you need to simulate? What metrics matter? What fidelity is enough? This feels like a "marking territory" paper that maps a problem space.

There's also a Black Swan angle here. Like Taleb's rare, high-impact events, metastable failures are hard to predict, easy to explain in hindsight, and often ignored until too late. Like Black Swan detection, I think metastability is less about prediction and more about preparation: structuring our systems and minds to notice fragility before it breaks. The paper stops at identifying potential metastability risks, and recovery is not considered. Load shedding would work, but we need some theoretical and analytical guidance, otherwise it is too easy to do harm instead of recovery via load shedding. Which actions would help nudge the system out of the metastable basin? In what order, as not to cause further harmful feedback loops? What runtime signals suggest you're close?

Aleksey Charapko, the leading author of the OSDI'22 paper on metastability, is helping MongoDB to identify potential metastability risks and address them with preventive strategies and defenses. 

Introducing Experimental Support for Stored Programs in JS in Percona Server for MySQL

TL;DR Percona Server for MySQL now offers experimental support for stored programs in the JS language. This free and open source alternative to Oracle’s Enterprise/Cloud-only feature enables users to write stored programs in a more modern, convenient, and often more familiar language. It is still in active development, and we would very much like your […]

June 04, 2025

Migrate Google Cloud SQL for PostgreSQL to Amazon RDS and Amazon Aurora using pglogical

In this post, we provide the steps to migrate a PostgreSQL database from Google Cloud SQL to RDS for PostgreSQL and Aurora PostgreSQL using the pglogical extension. We also demonstrate the necessary connection attributes required to support the database migration. The pglogical extension works for the community PostgreSQL version 9.4 and higher, and is supported on RDS for PostgreSQL and Aurora PostgreSQL as of version 12+.

Postgres 18 beta1: large server, CPU-bound Insert Benchmark

This has results for a CPU-bound Insert Benchmark with Postgres on a large server. A blog post about a similar workload on a small server is here.

This report was delayed because I had to debug a performance regression (see below) and then repeat tests after implementing a workaround in my benchmark client.

tl;dr - 

  • creating connections
    • this is slower in 18 beta1 with io_method=io_uring vs 17.4
  • initial load step (l.i0)
    • 18 beta1 is 1% to 3% faster than 17.4
    • This step is short running so I don't have a strong opinion on the change
  • create index step (l.x)
    • 18 beta1 is 1% to 3% slower than 17.4
    • This step is short running so I don't have a strong opinion on the change
  • write-heavy steps (l.i1, l.i2)
    • 18 beta1 is 0% to 4% faster
  • range query steps (qr100, qr500, qr1000)
    • 18 beta1 and 17.4 have similar performance
  • point query steps (qp100, qp500, qp1000)
    • 18 beta1 is 0% to 2% faster
Performance regression

Connection create is much slower in Postgres 18 beta1, at least with io_method=io_uring. On my large server it takes ~2.3X longer when the client runs on the same server as Postgres (no network latency) and the CPU overhead on the postmaster process is ~3.5X larger. When the benchmark client shares the server with Postgres it used to take ~3 milliseconds to get a connection and that increases to ~7 milliseconds with 18 beta1 when using io_method=io_uring.

More details on the regression are here. By postmaster I mean this process, because the docs claim that postmaster is deprecated:
/home/mdcallag/d/pg174_o2nofp/bin/postgres -D /data/m/pg
I stumbled onto this bug by accident because my benchmark client was intermittently creating connections on a performance-critical path. I have since fixed the benchmark client to avoid that. But I suspect that this regression might be an issue in production for some workloads -- one risk is that the postmaster process can run out of CPU.

I assumed my benchmark wasn't creating many connections as the connections used for the inserts, deletes and queries are created at the start of a benchmark step and closed at the end. But I missed one place in the benchmark client where it ran an extra query once every 100 point queries during the point query benchmark steps (qp100, qp500, qp1000) and the new overhead from connection create in that workflow reduced QPS by ~20% for 18 beta1 with io_method=io_uring.

From some debugging it looks like there is just more time spent in the kernel dealing with the VM (page tables, etc).

Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for version 18 beta1 and 17.4. I got the source for 18 beta1 from github using the REL_18_BETA1 tag because I started this benchmark effort a few days before the official release.

The server is an ax162-s from Hetzner with an AMD EPYC 9454P processor, 48 cores, AMD SMT disabled and 128G RAM. The OS is Ubuntu 22.04. Storage is 2 NVMe devices with SW RAID 1 and ext4. More details on it are here.

The config file for Postgres 17.4 is here and named conf.diff.cx10a_c32r128.

For 18 beta1 I tested 3 configuration files, and they are here:
  • conf.diff.cx10b_c32r128 (x10b) - uses io_method=sync
  • conf.diff.cx10cw4_c32r128 (x10cw4) - uses io_method=worker with io_workers=4
  • conf.diff.cx10d_c32r128 (x10d) - uses io_method=io_uring
The Benchmark

The benchmark is explained here and is run with 20 clients and 20 tables (one table per client) and 10M rows per table.

The benchmark steps are:

  • l.i0
    • insert 10 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 16M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 4M rows are inserted and deleted per table.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
Results: overview

The performance report is here for a setup without the regression and here for a setup with the regression. The rest of this post will focus on the results without the regression.

The summary section has 3 tables. The first shows absolute throughput for each DBMS tested and benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts; all systems sustained the target rates. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS (rQPS) to explain how performance changes. It is: (QPS for $me / QPS for $base), where $me is the result for some version and $base is the result for the base version.

When rQPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.97, green for >= 1.03 and grey for values between 0.98 and 1.02.

Results: details

The performance summary is here

See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 17.4 is the base version and that is compared with results from 18 beta1 using the three configurations explained above:
  • x10b with io_method=sync
  • x10cw4 with io_method=worker and io_workers=4
  • x10d with io_method=io_uring
The summary of the summary is:
  • initial load step (l.i0)
    • 18 beta1 is 1% to 3% faster than 17.4
    • This step is short running so I don't have a strong opinion on the change
  • create index step (l.x)
    • 18 beta1 is 1% to 3% slower than 17.4
    • This step is short running so I don't have a strong opinion on the change
  • write-heavy steps (l.i1, l.i2)
    • 18 beta1 is 0% to 4% faster
  • range query steps (qr100, qr500, qr1000)
    • 18 beta1 and 17.4 have similar performance
  • point query steps (qp100, qp500, qp1000)
    • 18 beta1 is 0% to 2% faster
The summary is:
  • initial load step (l.i0)
    • rQPS for (x10b, x10cw4, x10d) was (1.01, 1.03, 1.02)
  • create index step (l.x)
    • rQPS for (x10b, x10cw4, x10d) was (0.99, 0.97, 0.97)
  • write-heavy steps (l.i1, l.i2)
    • for l.i1 the rQPS for (x10b, x10cw4, x10d) was (1.02, 1.04, 1.03)
    • for l.i2 the rQPS for (x10b, x10cw4, x10d) was (1.00, 1.04, 1.01)
  • range query steps (qr100, qr500, qr1000)
    • for qr100 the rQPS for (x10b, x10cw4, x10d) was (1.00, 0.99, 1.01)
    • for qr500 the rQPS for (x10b, x10cw4, x10d) was (1.00, 1.00, 1.01)
    • for qr1000 the rQPS for (x10b, x10cw4, x10d) was (1.00, 0.99, 1.01)
  • point query steps (qp100, qp500, qp1000)
    • for qp100 the rQPS for (x10b, x10cw4, x10d) was (1.00, 1.00, 1.02)
    • for qp500 the rQPS for (x10b, x10cw4, x10d) was (1.00, 1.00, 1.02)
    • for qp1000 the rQPS for (x10b, x10cw4, x10d) was (1.00, 1.00, 1.01)

JSONB DeTOASTing (read amplification)

PostgreSQL limits tuple sizes to a quarter of the block size, generally capping at 2KB. In document data modeling, where documents represent business transactions, sizes often exceed this limit. Storing entire transaction documents as a single JSONB can lead to compression and splitting via TOAST (The Oversized-Attribute Storage Technique). While this is suitable for static documents that are infrequently accessed, it is less optimal for queries on documents. Let's take an example to detect this antipattern of using PostgreSQL as a document database.

I create a user profile table similar to the previous post, but adding a bio field with large text:


create table users (
  id bigserial primary key,
  data jsonb not null
);

INSERT INTO users (data)
SELECT
  jsonb_build_object(
    'name', 'u' || n::text,
    'bio', (SELECT string_agg(chr(trunc(random() * (126 - 33) + 33)::int), '')   FROM generate_series(1, 5000)),
    'email', jsonb_build_array(
      'u' || n::text || '@compuserve.com'
    ),
    'login', jsonb_build_object(
      'last', to_char(current_timestamp, 'YYYY-MM-DD'),
      'count', 1
    )
  )
FROM generate_series(1, 100000) n
;
vacuum analyze users
;

I check the size of the table, and also the TOAST overflow:

with users_cte as (
  select * from pg_class
  where oid = 'users'::regclass
)
select oid, relname, relpages, reltuples, reltoastrelid
 from users_cte
union all
select oid, relname, relpages, reltuples, reltoastrelid
 from pg_class
where oid = (select reltoastrelid from users_cte)
;

  oid  |    relname     | relpages | reltuples | reltoastrelid 
-------+----------------+----------+-----------+---------------
 32314 | users          |      736 |    100000 |         32318
 32318 | pg_toast_32314 |    71430 |    300000 |             0

The table contains 100000 rows across 736 pages, with three chunks per row stored externally in the TOAST table. This results in a total of 736 + 71430 = 72166 pages. Each chunk is approximately (71430 * 8192) / 300000 = 1951 bytes, ensuring that tuples remain under 2KB.

I use EXPLAIN ANALYZE to query all documents, which shows the number of pages accessed. To retrieve the JSON document, I apply SERIALIZE, as EXPLAIN does not fetch it by default:

explain (analyze, verbose, buffers, wal, serialize text, costs off)
select id, data
from users
;

                                QUERY PLAN                                
--------------------------------------------------------------------------
 Seq Scan on public.users (actual time=0.040..10.973 rows=100000 loops=1)
   Output: id, data
   Buffers: shared read=736
 Planning Time: 0.038 ms
 Serialization: time=2482.359 ms  output=509831kB  format=text
   Buffers: shared hit=384887 read=72253
 Execution Time: 2504.164 ms

The table scan read 736 pages from the base table (shared read=736), while reading the JSONB content accessed 72253 pages from the TOAST table (shared read=72253). Each TOAST page was accessed an average of 384887 / 72253 = 5 times, fortunately staying in the shared buffer cache (shared hit=384887), but accessing shared buffers costs CPU and lightweight locks.

We observe a fivefold read amplification when querying the JSONB column, as it requires de-toasting.

In a document database, you can retrieve the entire document or access specific fields for filtering and sorting. For example, I include a projection of the "username" field.

explain (analyze, verbose, buffers, wal, serialize text, costs off)
select id, data
, (data->>'username')
from users
;

                                QUERY PLAN                                 
---------------------------------------------------------------------------
 Seq Scan on public.users (actual time=0.085..532.367 rows=100000 loops=1)
   Output: id, data, (data ->> 'username'::text)
   Buffers: shared hit=384887 read=72989
 Planning Time: 0.039 ms
 Serialization: time=2276.025 ms  output=510222kB  format=text
   Buffers: shared hit=457140
 Execution Time: 2819.235 ms

PostgreSQL lacks optimization for accessing JSONB, which may lead to multiple de-toasting operations. As indicated in the Output of EXPLAIN VERBOSE, the scan has two projections in addition to the whole document, and it accesses the TOAST pages many times, as indicated by shared hit=384887 read=72989. Retrieving the whole document causes an additional shared hit=457140.

I continue by projecting one more field, "login.last":

explain (analyze, verbose, buffers, wal, serialize text, costs off)
select id, data
, (data->'login'->>'last')
, (data->>'username')
from users
;

                                         QUERY PLAN                                          
---------------------------------------------------------------------------------------------
 Seq Scan on public.users (actual time=0.079..855.246 rows=100000 loops=1)
   Output: id, data, ((data -> 'login'::text) ->> 'last'::text), (data ->> 'username'::text)
   Buffers: shared hit=842027 read=72989
 Planning Time: 0.040 ms
 Serialization: time=2261.679 ms  output=511589kB  format=text
   Buffers: shared hit=457140
 Execution Time: 3128.017 ms

Even when both projections occur in the same scan, the JSONB document is de-toasted twice: once for each field, resulting in shared hit=842027 read=72989.

To avoid retrieving the entire document, I project only the fields I need. I also run the query with an additional field, "login.count", but exclude "data" from the SELECT statement:

explain (analyze, verbose, buffers, wal, serialize text, costs off)
select id
, (data->'login'->>'count')::int
, (data->'login'->>'last')
, (data->>'username')
from users
;

                                                                                                                                       QUERY PLAN                                                                   
-----------------------------------------------------------------------------------------------------------------------------------------------
 Seq Scan on public.users (actual time=0.087..1149.260 rows=100000 loops=1)
   Output: id, (((data -> 'login'::text) ->> 'count'::text))::integer, ((data -> 'login'::text) ->> 'last'::text), (data ->> 'username'::text)
   Buffers: shared hit=1299167 read=72989
 Planning Time: 0.042 ms
 Serialization: time=21.837 ms  o

I saved the reads for serializing the result, but the scan indicated shared hit=1299167 read=72989. It accessed only the necessary data from disk: 72989 pages from the base and TOAST tables. However, to read only three fields, it accessed buffers 1299167 / 72989 = 18 times per page on average, leading to excessive CPU usage and potential lightweight lock contention during concurrent access.

In PostgreSQL, JSONB is a datatype for a single column, designed to store and retrieve the entire document. In contrast, a document database like MongoDB reads and writes individual fields, keeping the document in memory for quick access, like an object cache. Its WiredTiger storage engine decompresses the on-disk representation when loading to memory and compresses it during cache eviction or checkpoint.
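For comparison, here is a minimal mongosh sketch of that access pattern, assuming a users collection shaped like the documents above: a projection returns only the needed fields instead of the whole document.

// Project only the fields needed, rather than serializing whole documents
db.users.find(
  {},                                                    // all documents
  { "data.name": 1, "data.login.last": 1, "data.login.count": 1 }
);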

It is not recommended to use PostgreSQL as a document database. Instead, fields requiring individual access should be normalized into SQL columns rather than being embedded in a JSONB document. To identify suboptimal designs, examine the shared buffer hits using EXPLAIN (ANALYZE, SERIALIZE). If you choose a document model, then use a document database.

Chapter 6: Centralized Recovery (Concurrency Control Book)

With Chapter 6, the Concurrency Control and Recovery in Database Systems book shifts focus from concurrency control to recovery! This chapter addresses how to preserve the atomicity and durability of transactions in the presence of failures, and how to restore the system to a consistent state afterward.

The book offers a remarkably complete foundation for transactional recovery, covering undo/redo logging, checkpointing, and crash recovery. While it doesn't use the phrase "write-ahead logging", the basic concepts are there, including log-before-data and dual-pass recovery. When the book was written, the full WAL abstraction was still five years away, arriving with ARIES in 1992 (see my review here). I revisit this discussion/comparison at the end of the post.


System Model and Architecture

In earlier chapters, we had reviewed the architecture: transactions pass through a transaction manager (TM), which sends operations to a scheduler and then to a data manager (DM). For recovery, the DM is split into:

  • A Cache Manager (CM), which handles reading/writing between memory and disk, with a volatile cache and a stable storage backend.
  • A Recovery Manager (RM), which interprets high-level operations (Read, Write, Commit, Abort, Restart) and ensures durability and atomicity. What is Restart? Well, we will discuss it in the recovery algorithm section. 

The model distinguishes between volatile storage (lost on crash) and stable storage (assumed to survive crashes). Cache slots have a "dirty" bit to track whether their content has diverged from stable storage. The focus of the chapter is primarily on system failures that involve a crash or reboot that wipes out volatile memory but leaves the disk intact.


Stable Storage and Shadowing

For maintaining data in stable storage, in-place updating means that overwrites destroy prior values. Shadowing, instead, writes new versions to different locations and atomically switches a directory to point the system at the new state, which also allows fast copies.

Shadowing avoids UNDO, but complicates consistency. In-place updating is simpler and more compatible with buffering, but requires careful logging. I had seen a shadowing solution being developed in S3 for a "fast copy" feature, and I was amazed at how much complexity it introduces for the correctness of deletion/garbage collection. It is surprisingly tricky.


Logging and Recovery

This section lays the groundwork for WAL in modern systems. The log contains entries of the form [T_i, x, v], which record that transaction T_i wrote value v to data item x.

There are two key rules to ensure safe recovery.

  • Undo Rule: Before a transaction overwrites a committed value in the database, the old value must be saved.
  • Redo Rule: A transaction's writes must be logged before the transaction is considered committed.

These collectively ensure that the system can UNDO the effects of uncommitted transactions and REDO the effects of committed ones. 


Recovery Algorithm Types

The chapter categorizes recovery algorithms by their needs:

  • Undo/Redo: This can handle both missing and uncommitted updates. This is the most flexible category as it allows steal/no-force as in ARIES. The chapter focuses on Undo/Redo algorithms as the most general and efficient in runtime operation.
  • Undo/No-Redo: This requires forcing all updates to disk before commit, eliminating redo. This is steal/force.
  • No-Undo/Redo: This is no-steal and no-force. So it disallows overwriting before commit, eliminating undo.
  • No-Undo/No-Redo: This is the least flexible one: no-steal, force. It requires full data persistence at commit.

Remember the Restart operation, which we said the RM is responsible for. A Restart is invoked after a crash to restore the database to the last consistent committed state. This involves:

  • Scanning the log backward to undo changes from uncommitted transactions.
  • Scanning forward to redo changes from committed ones.

The section on restart discusses how this process is idempotent, meaning it can be safely retried if another crash interrupts it.
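As a toy illustration of the two rules and of Restart (my own sketch in JavaScript, not the book's notation): the undo rule is enforced by logging the before-image before the cache is overwritten, the redo rule by appending a commit record only after all of the transaction's writes are in the log, and Restart undoes uncommitted transactions backward, then redoes committed ones forward.

// Toy in-memory model of undo/redo logging and Restart (illustration only).
const stable = { x: 1, y: 2 };       // "disk": survives a crash
let cache = { ...stable };           // volatile cache: lost on a crash
const log = [];                      // stable log, forced before commit

function write(txn, item, value) {
  // Undo rule: the old value is saved in the log before it is overwritten.
  log.push({ txn, item, before: cache[item], after: value });
  cache[item] = value;
}

function commit(txn) {
  // Redo rule: all writes of txn are already logged, so the commit record
  // makes them durable even if the dirty cache slots were never flushed.
  log.push({ txn, commit: true });
}

function crash() { cache = { ...stable }; }   // volatile state vanishes

function restart() {
  const committed = new Set(log.filter(e => e.commit).map(e => e.txn));
  // Backward pass: undo writes of uncommitted transactions.
  for (let i = log.length - 1; i >= 0; i--) {
    const e = log[i];
    if (e.item !== undefined && !committed.has(e.txn)) stable[e.item] = e.before;
  }
  // Forward pass: redo writes of committed transactions.
  for (const e of log) {
    if (e.item !== undefined && committed.has(e.txn)) stable[e.item] = e.after;
  }
  cache = { ...stable };
}

// T1 commits, T2 does not; after a crash, Restart keeps T1 and erases T2.
write("T1", "x", 10); commit("T1");
write("T2", "y", 20);
crash(); restart();
console.log(stable);   // { x: 10, y: 2 }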


Checkpointing

Without optimization, Restart must scan the entire log. Checkpointing minimizes recovery time by flushing the current database state to disk and marking that point in the log. Checkpointing approaches trade off runtime efficiency against recovery time. Types include:

  • Commit-consistent: Waits for all transactions to finish, then flushes all dirty pages.
  • Cache-consistent: Flushes all dirty cache slots but doesn't wait for transactions to finish.
  • Fuzzy checkpointing: Flushes only old dirty pages, reducing disruption.


Implementation of Undo/Redo with optimizations

The chapter discusses several performance optimizations for implementing Undo/Redo algorithm.

  • Partial data item logging: Log only modified bytes/fields within a page.
  • Buffering the log: To reduce I/O, log entries are staged in memory and flushed in batches.
  • Log Sequence Numbers (LSNs): Pages and log entries are tagged with sequence numbers to track which updates have been applied.
  • Delayed (group) commit: Allows batching commits to improve efficiency.

These optimization suggestions resemble those that the ARIES (1992) paper formalized and implemented efficiently and correctly. More on this below.


Logical Logging

Instead of logging low-level data changes, the system may log high-level operations like "insert record" or "update field". Logical logging is more compact but harder to undo or redo, since it may depend on the current state. 

Logical logging contrasts with ARIES-style logging, which uses physiological logging: a hybrid of logical and physical approaches. While logical logging records abstract operations without referencing specific data layouts, ARIES logs changes to specific pages along with logical intent (e.g., "insert at offset X in page P") and includes both before- and after-images to enable precise undo and redo. ARIES also relies on Log Sequence Numbers (LSNs) embedded in pages and Compensation Log Records (CLRs) to make recovery idempotent and efficient. This makes ARIES more robust under the steal/no-force buffer policy, whereas logical logging is lighter but more fragile.

ARIES critiques earlier approaches, like the one in this book, for performing undo before redo. It shows that this ordering becomes incorrect when combined with optimizations like LSNs and selective redo. In such hybrids, undo can write CLRs that raise page LSNs and cause the redo phase to mistakenly skip necessary committed updates.

ARIES reverses the order. It does redo first, and reapplies all committed changes that might be missing. Then it does the undo and erases changes from uncommitted transactions. This avoids the problem above. The redo is based on comparing page LSNs to log record LSNs. Since undo hasn't happened yet, page LSNs still reflect what's truly on disk. With this approach, you only advance page LSNs after you know the change is present. Tricky stuff.


June 02, 2025

Streamline code conversion and testing from Microsoft SQL Server and Oracle to PostgreSQL with Amazon Bedrock

Organizations are increasingly seeking to modernize their database infrastructure by migrating from legacy database engines such as Microsoft SQL Server and Oracle to more cost-effective and scalable open source alternatives such as PostgreSQL. This transition not only reduces licensing costs but also unlocks the flexibility and innovation offered by PostgreSQL’s rich feature set. In this post, we demonstrate how to convert and test database code from Microsoft SQL Server and Oracle to PostgreSQL using the generative AI capabilities of Amazon Bedrock.

Understanding the Client-Output Buffer Limit for Replicas in Valkey

Valkey (a community-driven fork of Redis) uses a high-performance replication model to synchronize data between a primary node and its replicas. One critical component of this system is the client-output buffer, especially the configuration of its limits for replicas. This blog explores the client-output buffer, how its limits work in the context of replicas, and […]

May 31, 2025

No HOT updates on JSONB (write amplification)

PostgreSQL's Multi-Version Concurrency Control (MVCC) works around the challenge of in-place updates in fixed block storage by avoiding it. Instead of updating rows, it processes them as deletes and inserts, prioritizing simplicity of implementation over performance. Updating fields in a JSONB document can be problematic due to significant write amplification.

What are Heap Only Tuple (HOT) updates?

When a table row is updated, the entire row is marked for deletion by setting its xmax value, indicating the end of its visibility period. A new version of the row is then created with a fresh xmin value to signify the start of its visibility. Write amplification arises not only from copying the entire row but also from the need to update all indexes associated with the table. PostgreSQL indexes reference rows using their physical location (ctid), meaning that any change in the row's physical location requires new index entries to find the latest version of the row, even if the indexed column values remain unchanged. Over time, when older versions of rows are no longer visible to any active transaction—having passed the xmin horizon—they are eligible for garbage collection by the vacuum process, which removes outdated row versions and their associated index entries.

Given that many SQL applications have multiple indexes on their tables, frequent updates can exacerbate write amplification, with detrimental consequences for checkpoints and replication, especially when every index must be updated regardless of whether the indexed values changed. To mitigate this, PostgreSQL introduces an optimization called Heap-Only Tuple (HOT) updates, which avoids adding new index entries for keys that didn't change, in cases where the new version of the row fits in the same block as the previous version. If a column is frequently updated and the old versions are frequently vacuumed, some free space may be constantly available in the block for new versions (this can be encouraged with a lower fillfactor) and the HOT optimization can kick in.

This blog post series is about using PostgreSQL as a document database, with all data in JSONB, but there's no Heap-Only Tuple optimization for indexes on JSONB fields.

Test it with EXPLAIN (ANALYZE, WAL, BUFFERS)

I create a table similar to the one in the previous post, storing user profiles, and add a login sub-object to record the last login date and a login counter:

create table users (
  id bigserial primary key,
  data jsonb not null
);
insert into users (data) values (
 jsonb_build_object(
    'name', 'Homer Simpson',
    'login',
    jsonb_build_object(
      'last', to_char(current_timestamp, 'YYYY-MM-DD HH24:MI:SS'),
      'count', 0
    )  ,
    'email', jsonb_build_array(
      'donutlover@springfieldusa.com',
      'homerdoh@simpsons.com',
      'lazy.sofa.guy@tvcharacters.net'
    )
  )
 );

This is the PostgreSQL equivalent of the following MongoDB call to insert a document:

db.users.insertOne({  
  data: {  
    name: "Homer Simpson",  
    login: {  
      last: new Date(),  
      count: 0  
    },  
    email: [  
      "donutlover@springfieldusa.com",  
      "homerdoh@simpsons.com",  
      "lazy.sofa.guy@tvcharacters.net"  
    ]  
  }  
});  

My use-case is the equivalent of the following to increase the login counter and update the last login date:

db.users.updateOne(  
  { _id: 1 },  
  {  
    $set: { "data.login.last": new Date() },  
    $inc: { "data.login.count": 1 }  
  }  
);  

In SQL, there's no increment operation. Instead, an update sets the new values. When stored as a JSONB field in PostgreSQL, we must replace the document with a new one using jsonb_set() to modify the fields.

I run some updates to increase the login counter and update the last login date and show the execution plan with statistics:

explain (analyze, verbose, buffers, wal, serialize text, costs off)
UPDATE users
SET data = jsonb_set(
  data,
  '{login}',
  jsonb_build_object(
    'last', to_char(current_timestamp, 'YYYY-MM-DD'),
    'count', (COALESCE((data->'login'->>'count')::int, 0) + 1)
  )
)
where id=1
\watch

Here is the execution plan showing two buffer hits to find the row via index, and one Write-Ahead Logging (WAL) record for the update of the row (71 bytes)

QUERY PLAN                                                                                                           
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Update on public.users (actual time=0.057..0.057 rows=0 loops=1)
   Buffers: shared hit=4
   WAL: records=1 bytes=71
   ->  Index Scan using users_pkey on public.users (actual time=0.040..0.041 rows=1 loops=1)
         Output: jsonb_set(data, '{login}'::text[], jsonb_build_object('last', to_char(CURRENT_TIMESTAMP, 'YYYY-MM-DD'::text), 'count', (COALESCE((((data -> 'login'::text) ->> 'count'::text))::integer, 0) + 1)), true), ctid
         Index Cond: (users.id = 1)
         Buffers: shared hit=2
 Planning Time: 0.063 ms
 Serialization: time=0.000 ms  output=0kB  format=text
 Execution Time: 0.077 ms

You can run that for a while on a large table and observe the same. Even if it writes more than necessary, because the whole row and JSON document are re-written, the performance is predictable.

You may observe some executions with one more WAL record generated by the Index Scan as reads may do some delayed cleanup.

QUERY PLAN                                                                                                           
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Update on public.users (actual time=0.062..0.063 rows=0 loops=1)
   Buffers: shared hit=4
   WAL: records=2 bytes=157
   ->  Index Scan using users_pkey on public.users (actual time=0.047..0.048 rows=1 loops=1)
         Output: jsonb_set(data, '{login}'::text[], jsonb_build_object('last', to_char(CURRENT_TIMESTAMP, 'YYYY-MM-DD'::text), 'count', (COALESCE((((data -> 'login'::text) ->> 'count'::text))::integer, 0) + 1)), true), ctid
         Index Cond: (users.id = 1)
         Buffers: shared hit=2
         WAL: records=1 bytes=86
 Planning Time: 0.063 ms
 Serialization: time=0.000 ms  output=0kB  format=text
 Execution Time: 0.083 ms

While storing all data in JSONB, similar to a document database, may seem appealing, this table lacks indexes. In a real-world application, documents will contain more fields and sub-documents and require multiple indexes, which are likely to evolve as the application develops.

Adding indexes

During the lifecycle of an application, more indexes are created. I add an index on the user name:

create index on users(
 (data->>'name')
);

In PostgreSQL, unlike in many other databases, adding an index on a field that is not updated still impacts updates. For instance, my login update now produces two additional WAL records, resulting in a total WAL size that is three times larger, along with increased buffer reads.

QUERY PLAN                                                                                                           
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Update on public.users (actual time=0.091..0.092 rows=0 loops=1)
   Buffers: shared hit=9
   WAL: records=3 bytes=207
   ->  Index Scan using users_pkey on public.users (actual time=0.059..0.060 rows=1 loops=1)
         Output: jsonb_set(data, '{login}'::text[], jsonb_build_object('last', to_char(CURRENT_TIMESTAMP, 'YYYY-MM-DD'::text), 'count', (COALESCE((((data -> 'login'::text) ->> 'count'::text))::integer, 0) + 1)), true), ctid
         Index Cond: (users.id = 1)
         Buffers: shared hit=3
 Planning Time: 0.068 ms
 Serialization: time=0.000 ms  output=0kB  format=text
 Execution Time: 0.113 ms

PostgreSQL requires an expression index to index JSON fields. We have seen one limitation of expression indexes in a previous post (No Index Only Scan on JSONB Fields) and here is another one: PostgreSQL doesn't detect when the indexed value has not changed. This prevents it from applying HOT optimization, even if the new row fits within the same block.

This was with an expression index on a scalar value (with no array in the JSON path) but there's the same problem with GIN indexes. I create the same index as in the previous post (No Index for LIKE on JSONB):

CREATE INDEX idx_users_data_email ON users USING GIN (
 (data->'email') jsonb_path_ops
);  

My update, which doesn't touch this field, shows one more WAL record, a larger WAL size, and more buffer reads:

                                                                                                           QUERY PLAN                                                                                                           
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Update on public.users (actual time=0.080..0.080 rows=0 loops=1)
   Buffers: shared hit=11
   WAL: records=4 bytes=397
   ->  Index Scan using users_pkey on public.users (actual time=0.039..0.041 rows=1 loops=1)
         Output: jsonb_set(data, '{login}'::text[], jsonb_build_object('last', to_char(CURRENT_TIMESTAMP, 'YYYY-MM-DD'::text), 'count', (COALESCE((((data -> 'login'::text) ->> 'count'::text))::integer, 0) + 1)), true), ctid
         Index Cond: (users.id = 1)
         Buffers: shared hit=3
 Planning Time: 0.070 ms
 Serialization: time=0.000 ms  output=0kB  format=text
 Execution Time: 0.100 ms

The issue at hand is that you might prefer document data modeling over relational data modeling due to its simplicity in matching your domain access patterns. You may have come across some "Just use PostgreSQL" advocacy and claims that JSONB can transform PostgreSQL into a document database, and started such design with all data in a JSONB field. Your initial performance metrics might be met but, as your application grows and more indexes are added, critical use cases may struggle to scale.

I emphasized the importance of WAL records and size, as they are significant bottlenecks in PostgreSQL's scalability due to single-threaded WAL replication. Additionally, write amplification leads to other complications, including increased checkpoint work and higher pressure on vacuum. Scaling up with more CPUs won't resolve the issue, and adding read replicas won't help either since all indexes need to be created on the primary database.

PostgreSQL is a relational database that incorporates JSONB for added flexibility, but it doesn't convert it into a document database. In an SQL RDBMS, frequently updated or indexed fields should be in their own columns, maybe their own tables, while JSON can be used for additional flexible data accessed as a whole. If a document model is preferred, consider using a document database like MongoDB, which performs in-place updates to documents in memory and updates only the relevant indexes (FAQ: Indexes) and is not limited by fixed block size storage (documents are stored in a B-Tree with variable leaf size, and secondary indexes reference them with the key in this B-Tree).

No HOT updates on JSONB (write amplification)

PostgreSQL's Multi-Version Concurrency Control (MVCC) works around the challenge of in-place updates in fixed-size block storage by avoiding them altogether: instead of updating rows in place, it processes each update as a delete plus an insert, prioritizing implementation simplicity over update performance. As a consequence, updating a few fields in a JSONB document can be problematic due to significant write amplification.

What are Heap Only Tuple (HOT) updates?

When a table row is updated, the entire row is marked for deletion by setting its xmax value, indicating the end of its visibility period. A new version of the row is then created with a fresh xmin value to signify the start of its visibility. Write amplification arises not only from copying the entire row but also from the need to update all indexes associated with the table. PostgreSQL indexes reference rows using their physical location (ctid), meaning that any change in the row's physical location requires new index entries to find the latest version of the row, even if the indexed column values remain unchanged. Over time, when older versions of rows are no longer visible to any active transaction—having passed the xmin horizon—they are eligible for garbage collection by the vacuum process, which removes outdated row versions and their associated index entries.

Given that many SQL applications have multiple indexes on their tables, frequent updates can exacerbate write amplification, with detrimental consequences for checkpoints and replication, especially when every index must be updated regardless of whether the indexed values changed. To mitigate this, PostgreSQL introduces an optimization called Heap-Only Tuple (HOT) updates, which avoids adding new index entries for keys that didn't change, in cases where the new version of the row fits in the same block as the previous version. If a column is frequently updated and the old versions are frequently vacuumed, some free space may be constantly available in the block for new versions (and this can be helped by a lower fillfactor), so the HOT optimization can kick in.
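As an aside that is not in the original post: fillfactor is an ordinary per-table storage parameter, so reserving free space for HOT looks like this (using the users table created in the test below):

-- leave ~30% of each heap block free so a new row version can stay in the same block,
-- one of the preconditions for a HOT update
ALTER TABLE users SET (fillfactor = 70);
-- only affects blocks written afterwards; a table rewrite (e.g. VACUUM FULL) applies it to existing data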

This blog post series is about using PostgreSQL as a document database, with all data in JSONB, but there's no Heap-Only Tuple optimization for indexes on JSONB fields.

Test it with EXPLAIN (ANALYZE, WAL, BUFFERS)

I create a table similar to the one in the previous post, storing user profiles, and add a login sub-object to record the last login date and a login counter:

create table users (
  id bigserial primary key,
  data jsonb not null
);
insert into users (data) values (
 jsonb_build_object(
    'name', 'Homer Simpson',
    'login',
    jsonb_build_object(
      'last', to_char(current_timestamp, 'YYYY-MM-DD HH24:MI:SS'),
      'count', 0
    )  ,
    'email', jsonb_build_array(
      'donutlover@springfieldusa.com',
      'homerdoh@simpsons.com',
      'lazy.sofa.guy@tvcharacters.net'
    )
  )
 );

This is the PostgreSQL equivalent of the following MongoDB call to insert a document:

// MongoDB equivalent query
db.users.insertOne({  
    "_id": 1,
    name: "Homer Simpson",  
    login: {  
      last: new Date(),  
      count: 0  
    },  
    email: [  
      "donutlover@springfieldusa.com",  
      "homerdoh@simpsons.com",  
      "lazy.sofa.guy@tvcharacters.net"  
    ]  
});  

My use-case is the equivalent of the following to increase the login counter and update the last login date:

// MongoDB equivalent query
db.users.updateOne(  
  { _id: 1 },  
  {  
    $set: { "login.last": new Date() },  
    $inc: { "login.count": 1 }  
  }  
);  

Unlike MongoDB's $inc, SQL has no in-place increment of a field inside a JSONB document: the update sets a new value. When stored as a JSONB field in PostgreSQL, we must replace the document with a new one built with jsonb_set() to modify the fields.

I run some updates to increase the login counter and update the last login date and show the execution plan with statistics:

explain (analyze, verbose, buffers, wal, serialize text, costs off)
UPDATE users
SET data = jsonb_set(
  data,
  '{login}',
  jsonb_build_object(
    'last', to_char(current_timestamp, 'YYYY-MM-DD'),
    'count', (COALESCE((data->'login'->>'count')::int, 0) + 1)
  )
)
where id=1
\watch

Here is the execution plan, showing two buffer hits to find the row via the index, and one Write-Ahead Logging (WAL) record for the update of the row (71 bytes):

QUERY PLAN                                                                                                           
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Update on public.users (actual time=0.057..0.057 rows=0 loops=1)
   Buffers: shared hit=4
   WAL: records=1 bytes=71
   ->  Index Scan using users_pkey on public.users (actual time=0.040..0.041 rows=1 loops=1)
         Output: jsonb_set(data, '{login}'::text[], jsonb_build_object('last', to_char(CURRENT_TIMESTAMP, 'YYYY-MM-DD'::text), 'count', (COALESCE((((data -> 'login'::text) ->> 'count'::text))::integer, 0) + 1)), true), ctid
         Index Cond: (users.id = 1)
         Buffers: shared hit=2
 Planning Time: 0.063 ms
 Serialization: time=0.000 ms  output=0kB  format=text
 Execution Time: 0.077 ms

You can run that for a while, and on a large table, and observe the same. Even though it writes more than necessary, because the whole row and its JSON document are re-written, the performance is predictable.
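As a quick check that is not in the original post, you can confirm whether these updates are Heap-Only Tuple updates while the loop runs, using the cumulative statistics view and the hidden system columns:

-- total updates vs. updates that were HOT (no new index entries written)
select n_tup_upd, n_tup_hot_upd from pg_stat_user_tables where relname = 'users';

-- ctid is the (block, offset) of the current row version; xmin/xmax bound its visibility
select ctid, xmin, xmax, id from users where id = 1;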

Note that you may observe some executions with one more WAL record generated by the Index Scan as reads may do some delayed cleanup:

QUERY PLAN                                                                                                           
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Update on public.users (actual time=0.062..0.063 rows=0 loops=1)
   Buffers: shared hit=4
   WAL: records=2 bytes=157
   ->  Index Scan using users_pkey on public.users (actual time=0.047..0.048 rows=1 loops=1)
         Output: jsonb_set(data, '{login}'::text[], jsonb_build_object('last', to_char(CURRENT_TIMESTAMP, 'YYYY-MM-DD'::text), 'count', (COALESCE((((data -> 'login'::text) ->> 'count'::text))::integer, 0) + 1)), true), ctid
         Index Cond: (users.id = 1)
         Buffers: shared hit=2
         WAL: records=1 bytes=86
 Planning Time: 0.063 ms
 Serialization: time=0.000 ms  output=0kB  format=text
 Execution Time: 0.083 ms

While storing all data in JSONB, similar to a document database, may seem appealing, this table has no secondary indexes yet. In a real-world application, documents will contain more fields and sub-documents and require multiple indexes, which are likely to evolve as the application develops.

Adding indexes

During the lifecycle of an application, more indexes are created. I add an index on the user name:

create index on users(
 (data->>'name')
);

In PostgreSQL, unlike in many other databases, adding an index on a field that is never updated still impacts updates. For instance, my login update now produces two additional WAL records, resulting in a total WAL size that is three times larger, along with increased buffer reads.

QUERY PLAN                                                                                                           
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Update on public.users (actual time=0.091..0.092 rows=0 loops=1)
   Buffers: shared hit=9
   WAL: records=3 bytes=207
   ->  Index Scan using users_pkey on public.users (actual time=0.059..0.060 rows=1 loops=1)
         Output: jsonb_set(data, '{login}'::text[], jsonb_build_object('last', to_char(CURRENT_TIMESTAMP, 'YYYY-MM-DD'::text), 'count', (COALESCE((((data -> 'login'::text) ->> 'count'::text))::integer, 0) + 1)), true), ctid
         Index Cond: (users.id = 1)
         Buffers: shared hit=3
 Planning Time: 0.068 ms
 Serialization: time=0.000 ms  output=0kB  format=text
 Execution Time: 0.113 ms

PostgreSQL requires an expression index to index JSON fields. We have seen one limitation of expression indexes in a previous post (No Index Only Scan on JSONB Fields), and here is another one: PostgreSQL doesn't detect that the indexed value has not changed, because the HOT eligibility check only looks at which columns were modified, and an expression index on data depends on the whole data column. This prevents the HOT optimization from applying, even if the new row version fits within the same block.

This was with an expression index on a scalar value (with no array in the JSON path) but there's the same problem with GIN indexes. I create the same index as in the previous post (No Index for LIKE on JSONB):

CREATE INDEX idx_users_data_email ON users USING GIN (
 (data->'email') jsonb_path_ops
);  

My update, which doesn't touch this field, shows one more WAL record, a larger WAL size, and more buffer reads:

                                                                                                           QUERY PLAN                                                                                                           
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Update on public.users (actual time=0.080..0.080 rows=0 loops=1)
   Buffers: shared hit=11
   WAL: records=4 bytes=397
   ->  Index Scan using users_pkey on public.users (actual time=0.039..0.041 rows=1 loops=1)
         Output: jsonb_set(data, '{login}'::text[], jsonb_build_object('last', to_char(CURRENT_TIMESTAMP, 'YYYY-MM-DD'::text), 'count', (COALESCE((((data -> 'login'::text) ->> 'count'::text))::integer, 0) + 1)), true), ctid
         Index Cond: (users.id = 1)
         Buffers: shared hit=3
 Planning Time: 0.070 ms
 Serialization: time=0.000 ms  output=0kB  format=text
 Execution Time: 0.100 ms

The issue at hand is that you might prefer document data modeling over relational data modeling because it maps more simply to your domain access patterns, and you may have come across some "Just use PostgreSQL" advocacy claiming that JSONB turns PostgreSQL into a document database, so you started such a design with all data in a JSONB field. Your initial performance metrics might be met, but as your application grows and more indexes are added, critical use cases may struggle to scale.

I emphasized the importance of WAL records and size, as they are significant bottlenecks in PostgreSQL's scalability due to single-threaded WAL replication. Additionally, write amplification leads to other complications, including increased checkpoint work and higher pressure on vacuum. Scaling up with more CPUs won't resolve the issue, and adding read replicas won't help either since all indexes need to be created on the primary database.

PostgreSQL is a relational database that incorporates JSONB for added flexibility, but it doesn't convert it into a document database. In an SQL RDBMS, frequently updated or indexed fields should be in their own columns, maybe their own tables, while JSON can be used for additional flexible data accessed as a whole. If a document model is preferred, consider using a document database like MongoDB, which performs in-place updates to documents in memory and updates only the relevant indexes (FAQ: Indexes) and is not limited by fixed block size storage (documents are stored in a B-Tree with variable leaf size, and secondary indexes reference them with the key in this B-Tree).
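To make that recommendation concrete, here is a minimal sketch of the hybrid model suggested above (my own illustration, with hypothetical column names): the frequently updated login fields get their own columns, so the hot-path update no longer rewrites the JSONB document, touches no indexed value, and stays eligible for HOT.

create table users (
  id          bigserial primary key,
  login_last  timestamptz,
  login_count integer not null default 0,
  data        jsonb not null  -- the rest of the profile, read and written as a whole
);
create index on users ((data->>'name'));

-- the frequent update touches only non-indexed scalar columns
update users
set login_last  = current_timestamp,
    login_count = login_count + 1
where id = 1;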

May 29, 2025

The Future of Comments is Lies, I Guess

I’ve been involved in content moderation since roughly 2004. I’ve built spam prevention for corporate and personal e-mail, moderated open-source mailing lists and IRC channels, worked at a couple social media networks, and help moderate a Mastodon instance for a few hundred people. In the last few years I’ve wasted more time fighting blog comment spam, and I’m pretty sure Large Language Models (LLMs) are to blame.

I think of spam as a space with multiple equilibria. Producing spam takes work. Spammers are willing to invest that work because each message has a small chance to make money, or achieve political or emotional goals. Some spam, like the endless identical Viagra scams in my email spam folder, or the PHPBB comment spam I filter out here on aphyr.com, is cheap to generate and easy to catch. I assume the spammers make it up in volume. Other spam, like spear phishing attacks, is highly time-consuming: the spammer must identify a target, carefully craft a plausible message using, say, the identity of the target’s co-workers, or construct a facade of a bank’s log-in page, and so on. This kind of spam is more likely to make it through filters, but because it takes a lot of human work, is generally only worth it for high-value targets.

LLMs seem to be changing these equilibria. Over the last year I’ve seen a new class of comment spam, using what I’m fairly sure is LLM-generated text. These comments make specific, plausible remarks about the contents of posts and images, and work in a link to some web site or mention a product. Take this one I caught a few months back:

"Walking down a sidewalk lined with vibrant flowers is one of life’s simple joys! It reminds me of playing the [link redacted] slope game, where you have to navigate through colorful landscapes while dodging obstacles.

Before 2023, you’d likely have paid a human a few cents to write and post that. Now, thanks to LLMs, this sort of thing is trivially automated. The model will happily fabricate relatable personal experiences in service of a spam campaign:

That photo reminds me of the first time I tried macro photography in my backyard. I spent an hour trying to get a clear shot of a red flower, experimenting with angles and lighting. It was so much fun discovering the little details up close! If you ever need a break from photography, I recommend playing Snow Rider 3D for a bit of quick, light-hearted fun.

Other spam seems to glue together LLM remarks with what I think is a hand-written snippet of ad copy. Note the abrupt shift in grammar, diction, and specificity.

This piece masterfully blends technical depth with mythological storytelling, transforming a standard Haskell programming interview into an epic narrative. It cleverly critiques the complexity and absurdity of some technical interviews by illustrating how type-level Haskell can be pushed to esoteric extremes beautiful, powerful, and largely impractical. A fascinating and relevant read for anyone interested in the intersection of programming, language design, and narrative. I’m James Maicle, working at Cryptoairhub where we run a clear and insightful crypto blog. I’ll be bookmarking your site and following the updates. Glad to see so much valuable information shared here looking forward to exploring more strategies together. Thanks for sharing. If you interest about Crypto please visit my website and read my article [link redacted] Crypto Blog.

Of course this is not news. Product reviews are inundated with LLM slop, as are social media networks. LLMs allow for cheap, fast, and automated generation of unique spam which is difficult for machines and humans to identify. The cost falls on me and other moderators, who must sift through LLM bullshit trying to sieve “awkward but sincere human” from “automated attack”. Thanks to OpenAI et al I read more spam, and each message takes longer to check.

This problem is only going to get worse as LLMs improve and spammers develop more sophisticated ways to use them. In recent weeks I’ve received vague voice messages from strangers with uncanny speech patterns just asking to catch up—a sentence which, had I uttered it prior to 2023, would have been reasonably interpreted as a sign of psychosis. I assume these too are LLM-generated scams, perhaps a pig butchering scheme. So far these are from strangers, but it’s not hard to imagine an attacker using text and voice synthesis to impersonate a friend, colleague, or lover in a detailed conversation. Or one’s doctor, or bank.

As the cost of generating slop decreases, it’s easy to imagine LLMs generating personae, correspondence, even months-long relationships with real humans before being deployed for commercial or political purposes. Creating plausible accounts and selling them has been a successful business model in social media for some time; likewise, we have strong clues that LLMs are already used for social media bots. Social networks have responded to these attacks via out-of-band mechanisms: IP reputation analysis, javascript and mobile app fingerprinting, statistical correlation across multiple accounts, and so on. I’m not sure how to translate these defenses to less centralized and more privacy-oriented networks, like email or blog spam. I strongly suspect the only reason Mastodon hasn’t been eaten alive by LLM spambots is because we’re just not big enough to be lucrative. But those economics are shifting, and even obscure ecological niches can be worth filling.

As a moderator, that keeps me up at night.

$graphLookup (Connect By / Recursive Query)

In this series, I present various access patterns for a specific document model. These patterns are supported by a limited set of secondary indexes designed to make queries efficient, without modifying the document schema.

This article explores recursive searches through graph-like relationships between documents. Each video in this collection showcases related content through an array of related videos:

[
  {
    _id: '---U8lzusKE',
    category: 'Entertainment',
    relatedVideos: [
      'x9LRHlMdZmA', '5P5nxdJAFdE',
      'jdg8Sp1HUKM', 'xdxVBiJe8Co',
      'qLSA0gQ9z28', 'WHZPEkZCqwA',
      'y3VMhFCLxRc', 'hHjGtBnSv50',
      '_vx1OVLX5Rc', 'V4LnorVVxfw',
      'l56K8eAtCig', 'dHpCoFyMCHU',
      'XO5BYR39te8', 'yWy0cuxNWDw',
      '4SiXdhL7wxU', '5EaZTxQeQMQ',
      'mOvmBNLQIi4', 'fa2CvFa2xY8',
      'CpbYBZKdi3s', 'lBxzoqTSILc',
      'RBumgq5yVrA', 'EoN8RKubbO0',
      'zIHQPgz_Iwg', '7PCkvCPvDXk',
      't1NVJlm5THo'
    ],
...

With this structure, I can easily navigate from one video to its related ones, and from there to further related content, effectively building a graph of interconnected videos. There’s no need for an additional index since each video references the "_id" of its related videos, which is always indexed.

Access Patterns: forward traversal of related documents

The following query identifies a video and explores down to three levels of related videos, constructing a graph of connections based on the associated video array. It filters these connections by daily views and restructures the output for improved readability:

db.youstats.aggregate([  
  {  
    $match: { _id: 'YoB8t0B4jx4' }   
  },  
  {  
    $graphLookup: {  
      from: "youstats",  
      startWith: "$relatedVideos", 
      connectFromField: "relatedVideos", 
      connectToField: "_id", 
      as: "allRelatedVideos", 
      maxDepth: 3,  
      restrictSearchWithMatch: {  
        "views.daily.data": { $gt: 1e6 } 
      },  
      depthField: "level" 
    }  
  },  
  {  
    $project: {  
      _id: 1,  
      title: 1,  
      author: 1,  
      allRelatedVideos: {  
        $map: {  
          input: "$allRelatedVideos",  
          as: "video",  
          in: {  
            number: { $add: [ { $indexOfArray: ["$allRelatedVideos", "$$video"] }, 1 ] },  
            _id: "$$video._id",  
            title: "$$video.title",  
            author: "$$video.author",  
            level: "$$video.level"  
          }  
        }  
      }  
    }  
  }  
])

The execution plan shows the IXSCAN only for the initial $match stage, but the subsequent $graphLookup iterations use the same access method.

    stage: 'EXPRESS_IXSCAN',
    keyPattern: '{ _id: 1 }',

With $graphLookup, you need to have an index on connectToField.

Access Patterns: backward traversal of related documents

To navigate the graph in the opposite direction and find the parents that reference a video's _id in their relatedVideos array, an index on that field is essential for quick access. In MongoDB, indexes are created the same way for scalar fields and arrays (they become multikey indexes):

db.youstats.createIndex({ 
 relatedVideos: 1, _id: 1 
});  

The following query, where the connectToField is the related videos array, is fast:

db.youstats.aggregate([  
  {      
    $match: { _id: 'x9LRHlMdZmA' }       
  },      
  {      
    $graphLookup: {      
      from: "youstats",      
      startWith: "$_id",      
      connectFromField: "_id",      
      connectToField: "relatedVideos",      
      as: "parentVideos",      
      maxDepth: 3,       
      depthField: "level",  
      restrictSearchWithMatch: {      
        "views.daily.data": { $gt: 1e6 }      
      }      
    }      
  }      
]);  

Using $graphLookup in an aggregation pipeline is effective for retrieving a limited number of documents, as long as the work area stays within the 100MB memory limit and the result does not exceed the 16MB BSON document limit. For using MongoDB as a graph database beyond that, consider PuppyGraph (Querying MongoDB Atlas Data as a Graph). The same indexes allow fast recursive search and can be on a scalar identifier or on an array of children, depending on whether you implemented the one-to-many relationship on the one side or the many side.

How to Safely Upgrade InnoDB Cluster From MySQL 8.0 to 8.4

In this blog, we continue from where we left off in the previous post, InnoDB Cluster Setup: Building a 3-Node High Availability Architecture, where we demonstrated how to set up a MySQL InnoDB Cluster with three nodes to achieve high availability. Here, we walk through the step-by-step process of performing a rolling upgrade of that […]

Building on open table formats

Open table formats like Apache Iceberg, Delta Lake, and Apache Hudi are transforming how developers manage large-scale data on object storage systems.

Postgres 18 beta1: large server, Insert Benchmark, bad configurations

While testing Postgres 18 beta1 on a large server I used several configurations with io_workers set to values that are too large, and performance suffered. The default value for io_workers is 3, and that appears to be a great default. Perhaps other people won't repeat my mistakes.

tl;dr

  • the default value for io_workers is 3 and that is a good value to use (see the snippet below for checking and changing it)
  • be careful about using larger values for io_workers, as the performance penalty ranges from 0% (no penalty) to 24% (a large penalty)
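For reference, and not from the original post, here is a minimal way to inspect and adjust the setting; io_method itself requires a restart, and a change to io_workers should be picked up on a configuration reload:

-- asynchronous I/O settings introduced in Postgres 18
SHOW io_method;   -- 'worker' means a pool of I/O worker processes
SHOW io_workers;  -- number of I/O workers, default 3

-- only raise it if benchmarks on your own workload justify it
ALTER SYSTEM SET io_workers = 4;
SELECT pg_reload_conf();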

Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for version 18 beta1. I got the source for 18 beta1 from github using the REL_18_BETA1 tag. I started this benchmark effort a few days before the official release.

The server is an ax162-s from Hetzner with an AMD EPYC 9454P processor, 48 cores, AMD SMT disabled and 128G RAM. The OS is Ubuntu 22.04. Storage is 2 NVMe devices with SW RAID 1 and ext4. More details on it are here.

The config files for 18 beta1 use names like conf.diff.cx10cw${Z}_c32r128 where $Z is the value for io_workers. All of these use io_method=worker. The files are here. I repeated tests for io_workers set to 2, 4, 6, 8, 16 and 32.

The Benchmark

The benchmark is explained here and is run with 20 clients and 20 tables (one table per client) and 200M rows per table.

The benchmark steps are:

  • l.i0
    • insert 200 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 4M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 1M rows are inserted and deleted per table.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
Results: overview

The performance report is here.

The summary section has 3 tables. The first shows absolute throughput for each DBMS tested and benchmark step. The second has throughput relative to the version in the first row of the table. The third shows the background insert rate for the benchmark steps that have background inserts; all systems sustained the target rates. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base), where $me is the result for some configuration and $base is the result with io_workers=2.

When relative QPS is > 1.0 then performance improved relative to the base config. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures:
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.

Results: details

The performance summary is here.

The summary of the summary is that larger values for io_workers ...
  • increase throughput by up to 4% for the initial load (l.i0) 
  • increase throughput by up to 12% for create index (l.x)
  • decrease throughput by up to 6% for write heavy (l.i1)
  • decrease throughput by up to 16% for write heavy (l.i2)
  • decrease throughput by up to 3% for range queries, note that this step is CPU-bound
  • decrease throughput by up to 24% for point queries, note that this step is IO-bound
The summary is:
  • the initial load step (l.i0)
    • rQPS for io_workers in (4, 6, 8, 16) was (1.03, 1.03, 1.03, 1.02, 1.04) so these were slightly faster than io_workers=2.
    • rQPS for io_workers=32 was 1.00
  • the create index step (l.x)
    • rQPS for io_workers in (4, 6, 8, 16, 32) was (1.06, 1.05, 1.07, 1.12, 1.11) so these were all faster than io_workers=2.
  • the write-heavy steps (l.i1, l.i2)
    • for l.i1 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.98, 0.99, 0.99, 0.96, 0.94)
    • for l.i2 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.84, 0.95, 0.90, 0.88, 0.88)
    • I am surprised that larger values for io_workers don't help here but did help during the previous steps (l.i0, l.x), which are also write heavy.
  • the range query steps (qr100, qr500, qr1000)
    • for qr100 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.99, 0.99, 0.99, 0.99, 0.99)
    • for qr500 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.98, 0.98, 0.98, 0.97, 0.97)
    • for qr1000 the rQPS for io_workers in (4, 6, 8, 16, 32) was (1.01, 1.00, 0.99, 0.98, 0.97)
    • note that this step is usually CPU-bound for Postgres because the indexes fit in memory
  • the point query steps (qp100, qp500, qp1000)
    • for qp100 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.98, 0.98, 0.97, 0.94, 0.90)
    • for qp500 the rQPS for io_workers in (4, 6, 8, 16, 32) was (1.00, 0.98, 0.97, 0.89, 0.81)
    • for qp1000 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.99, 0.95, 0.93, 0.86, 0.76)
    • these steps are IO-bound
For the regressions in one of the write-heavy steps (l.i2) I don't see an obvious problem in the vmstat and iostat metrics -- the amount of CPU, context switches and IO per operation has some variance, but there isn't a difference that explains the change.

For the regressions in the point query steps (qp100, qp500, qp1000) the vmstat and iostat metrics for qp1000 help to explain the problem. Metrics that increase as io_workers increases include:
  • CPU/operation (see cpupq) has a large increase
  • context switches /operation (see cspq) has a small increase
  • iostat reads /operation (rpq) and KB read /operation (rkbpq) have small increases
Finally, average rates from iostat. These are not normalized by QPS. There aren't many differences, although rps (reads/s) is higher for io_workers=2 because throughput was higher in that case.

Legend:
* rps, wps - read /s and write /s
* rKBps, wKBps - KB read /s & KB written /s
* rawait, wawait - read & write latency
* rareqsz, wareqsz - read & write request size

-- from l.i2 benchmark step

rps     rKBps   rawait  rareqsz wps     wKBps   wawait  wareqsz io_workers
3468    34622   0.08    8.9     5374    85567   1.41    17.3     2
2959    24026   0.08    8.3     4866    74547   0.05    17.5    32

-- from qp1000 benchmark step

rps     rKBps   rawait  rareqsz wps     wKBps   wawait  wareqsz io_workers
81949   659030  0.13    8.0     39546   589789  168.21  16.5     2
68257   549016  0.12    8.0     36005   549028  130.44  16.2    32