A curated list of database news from authoritative sources

December 04, 2025

"Horses" Feels Tame

From a letter to Valve Corporation’s CEO Gabe Newell, lightly edited for formatting and links.

Dear Mr. Newell,

Steam has been my main source for games for over twenty years. I am disheartened that you chose not to publish Santa Ragione’s recently released game, Horses.

I’ve read some substantive critiques, Polygon’s and Rock Paper Shotgun’s among them. I have also read Santa Ragione’s discussion of the ban. I bought Horses on Itch and played it through; I found it enjoyable and thought-provoking, though not transformative. I was surprised to find a much tamer experience than I had been led to believe. I am particularly concerned that Steam found this title dangerous enough to ban it.

Is Horses unsettling? Yes, though you would see far worse in popular horror films. I find Hostel or Saw gut-wrenching to near-unwatchable; Horses felt almost cute by comparison. It is nowhere near Argento’s classic Suspiria or Aster’s harrowing Midsommar, which deals with similar themes. The game’s pixelated censorship, silly animations, and cutting away from its worst moments come across as coy, even camp. I suspect this is intentional: the game is in large part concerned with authoritarianism and the reproductive dynamics (in all senses) of cinema and games themselves. It is also concerned with complicity: the character’s choices to voice disgust or approval apparently have no impact on the story. Its four explicit themes—laid out in the embedded narrative of a VHS tape you must watch and decode to progress—are the repression of patriarchy, religion, chastity, and silence.

Steam has long been willing to publish works engaging with brutal dehumanization and authoritarian violence. Games like Amnesia: A Machine for Pigs or SOMA depict humans involuntarily warped, both physically and mentally, beyond recognition. Like Horses, Amnesia uses horror as a lens for the industrial revolution and the power of the wealthy. Valve’s own work has not shied away from horror; Half-Life 2 is entirely about the visceral subjugation of political and bodily autonomy. Valve gave us the headcrab and the stalker—both instances of forcible objectification—and the game’s camera shies away from neither.

What, then, makes Horses unpublishable? Surely not violence, or you’d have pulled half the Steam catalogue by now. It can’t be body horror: I flinched far more at Dead Space 2’s eyeball scene. Could it be nudity? Half-Life 2’s stalkers are fully nude and completely uncensored; I find the image of their mutilated bodies far more visually disturbing than the titular horses. Is it sex? Steam publishes the wildly popular Cyberpunk 2077, which has no shortage of breasts, vaginas, penises, and first-person sex scenes. It also depicts rape, torture, murder, and young boys hooded, restrained, and tortured as livestock on a farm. Could Horses be at fault for flagellation? Surely not; Steam published Robert Yang’s charming Hurt Me Plenty, where players directly spank a simulated consensual partner. Is it complicity in authoritarian abuse? Lucas Pope’s highly regarded Papers, Please, also on Steam, puts players in the increasingly disturbing role of enforcing a fascist state’s border. It too contains pixelated nudity and violence.

I love cute, safe, and friendly games; Astroneer is my jam! And as an adult, I also enjoy challenging, even harrowing narratives. For me, Horses falls somewhere in the middle—one might call it the Animal Farm of fascist horror parables. I think Steam ought to publish it, and more transgressive works as well.

Yours truly,

Kyle Kingsbury

Cloud-Native MySQL High Availability: Understanding Virtually SYNC and ASYNC Replication

When we run databases in Kubernetes, we quickly learn one important truth: things will fail, and we need to be prepared for this. Pods are ephemeral; nodes can come and go, storage is abstracted behind PersistentVolumes and can be either local to a node or backed by network storage, and Kubernetes moves workloads as needed […]

Build "Sign in with Your App" using Supabase Auth

Supabase Auth now supports OAuth 2.1 and OpenID Connect server capabilities, turning your project into a full-fledged identity provider for AI agents, third-party developers, and enterprise SSO.

December 03, 2025

New Relic - CPU usage (%) and Load Average

At AWS re:Invent 2025, I had the opportunity to give a brief demonstration at the New Relic booth that opened the stage to some AWS Heroes. In Linux environments, administrators often rely on CPU usage and load average metrics to determine whether an instance is appropriately sized. An oversized instance that sits idle wastes resources and drives up the cloud bill, whereas an undersized instance pushed to its limits can degrade application performance—ultimately impacting a company’s revenue.

To set the stage for my demo, I began with the Summary tab’s first graph, which displays CPU usage as a percentage. I started a simple CPU-bound task using the yes command, which continuously outputs the character "y". Then, I moved on to a more complex workload: fio with 32 threads performing synchronous, direct I/O disk writes:

fio --rw=write --ioengine=psync --direct=1 --bs=1M --numjobs=32 --name=test --filename=/tmp/x --size=10G --thread

One could expect the yes command to saturate the CPU, and this fio workload to be I/O-bound, but surprisingly, both tasks show 100% CPU usage:

The load average graph doesn't help either: it counts the 32 threads as if they were all running on the CPU:

We don't get more detail on the load average. The Process tab shows yes, but only with low CPU usage, and doesn't show fio at all:

What could help explain the load average is the process state, which is not gathered here. The R state is real CPU usage, for running tasks, like the 12.5% of yes. The fio processes are mostly doing disk I/O in the D state, which doesn't consume CPU, despite what is displayed here.
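
As a quick cross-check outside of New Relic, one could list the process states directly on the host. This is my sketch, not part of the demo:

# state: R = running, D = uninterruptible sleep, S = interruptible sleep
ps -eo state,pcpu,comm --sort=-pcpu | head -20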

Back to the CPU usage graph, I can get more details. I open the query and, instead of cpuPercent, display its components:

SELECT --average(cpuPercent),
 average(cpuUserPercent), average(cpuSystemPercent),  -- running
 average(cpuStealPercent),                            -- hypervisor
 average(cpuIdlePercent), average(cpuIOWaitPercent)   -- idle (incl. wait I/O)
FROM SystemSample WHERE (entityGuid = 'NzM2NzA3MXxJTkZSQXxOQXw0MDAyNTY2MTYyODgwNDkyMzM0') 
TIMESERIES AUTO SINCE 5 minutes ago UNTIL now

The 100% CPU usage was in "Steal" for the running yes command, because I ran it on an overprovisioned virtual machine where the hypervisor grants only a quarter of the CPU cycles, and in "IO Wait" for fio, which was waiting on I/O completion rather than running on the CPU:

To show that "IO Wait" is really just like "Idle", I started yes again while fio was running, and the "IO Wait" disappeared:

The reason is that "IO Wait" is accounted when a CPU is idle because the process that was running is waiting on an I/O call and no other process needs to run on that CPU. The process stays scheduled on the CPU in this state, but if another process comes to run on the CPU, the time is no longer accounted as "IO Wait". The CPU usage (%) reflects the state of the processor, not the state of the processes:

  • when a process is running on a processor's thread, it is in R state and counts in cpuUserPercent or cpuSystemPercent, depending on whether it runs in userspace (the application) or in the kernel (system calls). If the hypervisor preempted the CPU cycles, the time is reported as "Steal"
  • when a process is in an uninterruptible call and not scheduled out of the processor's thread, it is in D state and counts as "IO Wait"
  • when the process on the processor's thread is waiting on something else, the time is accounted as "Idle"
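
As an aside of mine (not shown in the demo), these fields map to the familiar vmstat columns, so the same breakdown can be cross-checked directly on the host:

# us/sy ~ cpuUserPercent/cpuSystemPercent, st ~ cpuStealPercent,
# wa ~ cpuIOWaitPercent, id ~ cpuIdlePercent
vmstat 1 5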

Back to the load average, the reason why the D state ("IO Wait") is accounted in addition to the R state ("User", "Sys" and "Steal") is visible in the loadavg.c code:

 * The global load average is an exponentially decaying average of nr_running +
 * nr_uninterruptible.
 *
 * Once every LOAD_FREQ:
 *
 *   nr_active = 0;
 *   for_each_possible_cpu(cpu)
 *  nr_active += cpu_of(cpu)->nr_running + cpu_of(cpu)->nr_uninterruptible;
 *
 *   avenrun[n] = avenrun[0] * exp_n + nr_active * (1 - exp_n)

However, the most important comment is the one that explains that it is a silly number:

/*
 * kernel/sched/loadavg.c
 *
 * This file contains the magic bits required to compute the global loadavg
 * figure. Its a silly number but people think its important. We go through
 * great pains to make it work on big machines and tickless kernels.
 */

The load average metric traces back to early Unix systems (1970s), where it reflected how busy a system was by counting processes in the run queue. At that time, computers used periodic scheduling ticks—even while idle—to keep track of time and processes. Sysadmins would see a load average of 1 on a single CPU as one process running or waiting, thus equating it to CPU usage. Today, with multi-core processors, sophisticated scheduling, and tickless kernels, the load average is a far less reliable indicator of real‑time CPU usage and is often misunderstood without considering the real state of the processes.

The uninterruptible state D is not necessarily linked to disk activity. For example, asynchronous disk I/O operations that collect completed I/Os do not enter the D state. I demonstrated this by changing the fio job from psync to async: the I/O throughput and rate were identical, but there was less "IO Wait" and a lower load average. Furthermore, some system calls can appear as "IO Wait" and increase the load average even when they are idle and harmless. I also showed this by launching a thousand processes with {vfork();sleep(300);}.
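
For reference, the asynchronous variant of that fio job probably looked something like this (my sketch: the post only says async, so ioengine=libaio and the iodepth value are assumptions):

fio --rw=write --ioengine=libaio --iodepth=32 --direct=1 --bs=1M \
    --numjobs=32 --name=test --filename=/tmp/x --size=10G --thread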

The first step when using the New Relic dashboard for Linux is to replace 'cpuPercent' in the CPU usage charts with more detailed metrics:

  • cpuUserPercent and cpuSystemPercent for tasks running on the CPU, for user (application) and system (kernel) code respectively.
  • cpuStealPercent, cpuIOWaitPercent and cpuIdlePercent for idle CPU, either because the hypervisor didn't allow tasks to run (and steals CPU cycles) or because no task has anything to run (with or without a task waiting on an uninterruptible call)

Remember, the load average is not a reliable metric because it does not exclusively reflect actively running processes. It may also include various types of waits, but not all. The 'IO wait' percentage does not indicate high IO activity. Instead, it shows the CPU is idle while many processes are waiting for IO operations. In cloud environments where minimizing costs by reducing idle CPU time is crucial, you should focus on CPU usage for user and system processes, excluding waiting tasks, when sizing an instance. An inexperienced user might misinterpret the 100% usage shown in my fio demo as a sign the instance is too small, while in fact, it's the opposite.

Expanding Our Reach: Percona Server for MongoDB Now Officially Supports Rocky Linux 8 and 9!

Your stack, Your rules. That’s our belief, and it’s non-negotiable.  We see the landscape changing. With the massive community migration from CentOS and CentOS Stream to Rocky Linux, we heard your requests loud and clear. You need a trusted, enterprise-ready database on your preferred platform. Our telemetry data, which we receive from you, also confirms […]

Best of metadata in 2025

It is that time of year again to look back on a year of posts. I average about sixty posts annually. I don't explicitly plan for the number, and I sometimes skip weeks for travel or work, yet I somehow hit the number by December. Looking back, I always feel a bit proud. The posts make past Murat look sharp and sensible, and I will not argue with that. Here are some of the more interesting pieces from the roughly sixty posts of 2025.


Advice

Looks like I wrote several advice posts this year. I must be getting old.

The Invisible Curriculum of Research

Academic chat: On PhD

What I'd do as a College Freshman in 2025

My Time at MIT

What makes entrepreneurs entrepreneurial?

Publish and Perish: Why Ponder Stibbons Left the Ivory Tower


Databases

Concurrency Control book reading was fun. Also the series on use of time in distributed databases. And it seems like I got hyperfocused on transaction isolation this year. 

Use of Time in Distributed Databases (part 5): Lessons learned (Link to the index)

Morty: Scaling Concurrency Control with Re-Execution

Serializable Isolation for Snapshot Databases

Analysing Snapshot Isolation


Distributed systems

We started a live paper reading series with Aleksey Charapko. It has been a lot of fun and provably a better way to read papers. My summaries/reviews of the papers we read together are more insightful than the papers I read solo. The first seven paper reviews below are from our live reading sessions. 

Asymmetric Linearizable Local Reads

Cabinet: Dynamically Weighted Consensus Made Fast

Can a Client–Server Cache Tango Accelerate Disaggregated Storage?

Real Life Is Uncertain. Consensus Should Be Too!

Vive la Difference: Practical Diff Testing of Stateful Applications

Mitigating Application Resource Overload with Targeted Task Cancellation

Tiga: Accelerating Geo-Distributed Transactions with Synchronized Clocks

Analyzing Metastable Failures in Distributed Systems

Disaggregation: A New Architecture for Cloud Databases

Disaggregated Database Management Systems

Taurus Database: How to be Fast, Available, and Frugal in the Cloud

ATC/OSDI’25 Technical Sessions


AI

Of course AI!

Barbarians at the Gate: How AI is Upending Systems Research

Supporting our AI overlords: Redesigning data systems to be Agent-first

Neurosymbolic AI: Why, What, and How


Formal methods

And of course formal methods! Well mostly TLA+ in my case.

Modular verification of MongoDB Transactions using TLA+

Multi-Grained Specifications for Distributed System Model Checking and Verification

Notes from the TLA+ Community Event

Smart Casual Verification of the Confidential Consortium Framework

TLA+ Modeling of AWS outage DNS race condition



Best of previous years


Best of metadata in 2021

Best of metadata in 2020

Best of metadata in 2019

Best of metadata in 2018

Research, writing, and career advice

The new Supabase power for Kiro

Build full-stack applications faster with the Kiro IDE using deep knowledge of your Supabase project, best practices for database migrations, edge functions, and security policies.

December 02, 2025

Optimize for momentum

Progress comes from motion. Momentum is the invisible engine of any significant work. A project feels daunting when you face it as a blank page. It feels easier once you have built some momentum with a few next steps. So momentum makes the difference between blocked and flowing.

Think of a stalled truck on a desert road. You can't lift it with superhuman strength. But by rocking it with small periodic forces at the right rhythm (matching its natural frequency) you can get it rolling. Each tiny push adds to the previous one because the timing aligns with the system's response. The truck starts to move, and then the engine catches.

Projects behave the same way. A big project has its own rhythm. If you revisit it daily, even briefly, your pushes line up. Your brain stays warm. Context stays loaded. Ideas from yesterday are still alive today. Each session amplifies the last because you are operating in phase with your own momentum. When you produce something every day, you never feel stuck. You end each session with a clear record of progress. A researcher who touches their project daily is always a day away from a breakthrough. Skip too many days and you fall out of resonance. Then the project feels heavy again and needs a large effort to budge.

So the trick is to design your workflow around staying in motion. Don't wait for the perfect idea or the right mood. Act first. Clarity comes after. If a task feels intimidating, cut it down until it becomes trivial. Open the file. Draft one paragraph. Try freewriting. Run one experiment. Or sketch one figure. You want the smallest possible task that gets the wheel turning. Completion, even on tiny tasks, builds momentum and creates the energy to do the next thing. The goal is to get traction and stop getting blocked on an empty page.

A messy page is better than an empty page to get started. In an interesting Macintosh folklore story, Burrell Smith deliberately made a mess in the classic video game Defender. He shot his own humans and let all the mutants loose, just so he could figure out how to clean up the chaos wholesale. Fire and maneuver!


Practical tips

This is where I find LLMs help tremendously. (Haha. AI butts its head even in an advice column.) When you face a large messy problem, ask the model to break it into a sequence of concrete subtasks: "List the next ten actions for the experiment" or "Suggest a structure for this section". Then ask the LLM to do one of the easiest tasks in this list. The mediocrity will annoy you just enough to fix it. And now you are moving. We are getting somewhere.

A ten-minute timer is one of the simplest ways to get things going. Ten minutes is short enough that you can do almost anything for ten minutes. Pick a tiny task, and start. Most of the time you keep going after the timer ends because starting was the hard part. The timer lowers the activation energy and creates the first push on the flywheel.

Another way to build momentum is to work on the part of the project that feels most attractive at the moment. If you are not in the mood to write the introduction but feel curious about running a side experiment, do the experiment. If you feel more like drawing a diagram, draw it. Interest/love/curiosity is your fuel. Progress is progress. Nobody hands out medals for following a linear plan. The only requirement is that you keep adding small meaningful pieces to the project. 


Momentum is not a glamorous, sexy idea. But it is reliable. Think of your work as a flywheel. Each nudge adds speed, and over the weeks, the wheel becomes powerful and unstoppable. People often admire the end state but they don't see the messy daily pushes that built the momentum.

We Believe in Freedom

Percona is built on the belief that Freedom matters, especially in how organizations run and evolve their database infrastructure. Our mission is to help customers and the broader community maximize that Freedom by giving them control, transparency, and choice at every layer. Here is what that means in practice: Open Source Software At Percona, we […]

Introducing Analytics Buckets

Use Analytics Buckets to store huge datasets in Supabase Storage with Apache Iceberg and columnar Parquet format, optimized for analytical workloads.

Introducing Supabase ETL

A change-data-capture pipeline that replicates your Postgres tables to analytical destinations like Analytics Buckets and BigQuery in near real time.

Using db_bench to measure RocksDB performance with gcc and clang

This has results for db_bench, a benchmark for RocksDB, when compiling it with gcc and clang. On one of my servers I saw a regression on one of the tests (fillseq) when compiling with gcc. The result on that server didn't match what I measured on two other servers. So I repeated tests after compiling with clang to see if I could reproduce it.

tl;dr

  • a common outcome is
    • ~10% more QPS with clang+LTO than with gcc
    • ~5% more QPS with clang than with gcc
  • the performance gap between clang and gcc is larger in RocksDB 10.x than in earlier versions

Variance

I always worry about variance when I search for performance bugs. Variance can be misinterpreted as a performance regression and I strive to avoid that because I don't want to file bogus performance bugs.

Possible sources of variance are:

  • the compiler toolchain
    • a bad code layout might hurt performance by increasing cache and TLB misses
  • RocksDB
    • the overhead from compaction is intermittent and the LSM tree layout can help or hurt CPU overhead during reads
  • hardware
    • sources include noisy neighbors on public cloud servers, insufficient CPU cooling and CPU frequency management that is too clever
  • benchmark client
    • the way in which I run tests can create more or less variance and more information on that is here and here

Software

I used RocksDB versions 6.29.5, 7.10.2, 8.0, 8.4, 8.8, 8.11, 9.0, 9.4, 9.8, 9.11 and 10.0 through 10.8.

I compiled each version three times:

  • gcc - using version 13.3.0
  • clang - using version 18.3.1
  • clang+LTO - using version 18.3.1, where LTO is link-time optimization

The build command lines are below:

flags=( DISABLE_WARNING_AS_ERROR=1 DEBUG_LEVEL=0 V=1 VERBOSE=1 )

# for gcc
make "${flags[@]}" static_lib db_bench

# for clang
AR=llvm-ar-18 RANLIB=llvm-ranlib-18 CC=clang CXX=clang++ \
    make "${flags[@]}" static_lib db_bench

# for clang+LTO
AR=llvm-ar-18 RANLIB=llvm-ranlib-18 CC=clang CXX=clang++ \
    make USE_LTO=1 "${flags[@]}" static_lib db_bench

On the small servers I used the LRU block cache. On the large server I used hyper clock when possible:

  • lru_cache was used for versions 7.6 and earlier
  • hyper_clock_cache was used for versions 7.7 through 8.5
  • auto_hyper_clock_cache was used for versions 8.5+
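
If I read the db_bench options correctly, the cache implementation is selected with the --cache_type flag. A sketch of mine, not the author's command lines:

# LRU block cache (the default)
./db_bench --benchmarks=readrandom --cache_type=lru_cache
# hyper clock cache with automatic sizing of its hash table (RocksDB 8.5+)
./db_bench --benchmarks=readrandom --cache_type=auto_hyper_clock_cache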

Hardware

I used two small servers and one large server, all running Ubuntu 22.04:

  • pn-53
    • Ryzen 7 (AMD) CPU with 8 cores and 32G of RAM. It is v5 in the blog post
    • benchmarks are run with 1 client (thread)
  • arm
    • an ARM server from the Google cloud -- c4a-standard-8-lssd with 8 cores and 32G of RAM, 2 local SSDs using RAID 0 and ext4
    • benchmarks are run with 1 client (thread)
  • hetzner
    • an ax162s from Hetzner with an AMD EPYC 9454P 48-Core Processor with SMT disabled, 128G of RAM, 2 SSDs with RAID 1 (3.8T each) using ext4
    • benchmarks are run with 36 clients (threads)

Benchmark

Overviews on how I use db_bench are here and here.

Tests were run for a workload with the database cached by RocksDB that I call byrx in my scripts.

The benchmark steps that I focus on are:
  • fillseq
    • load RocksDB in key order with 1 thread
  • revrangeww, fwdrangeww
    • do reverse or forward range queries with a rate-limited writer. Report performance for the range queries
  • readww
    • do point queries with a rate-limited writer. Report performance for the point queries.
  • overwrite
    • overwrite (via Put) random keys
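
As an illustration, a minimal invocation of that load step might look like this (my sketch; the key count is arbitrary and the author's helper scripts set many more options):

# load the database in key order with 1 thread
./db_bench --benchmarks=fillseq --num=10000000 --threads=1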

Relative QPS

Many of the tables below (inlined and via URL) show the relative QPS which is:
    (QPS for my version / QPS for RocksDB 6.29 compiled with gcc)

The base version varies and is listed below. When the relative QPS is > 1.0 then my version is faster than the base version. When it is < 1.0 then there might be a performance regression or there might just be noise.

The spreadsheet with numbers and charts is here.

Results: fillseq

Results for the pn53 server

  • clang+LTO provides ~15% more QPS than gcc in RocksDB 10.8
  • clang provides ~11% more QPS than gcc in RocksDB 10.8

Results for the Arm server

  • I am fascinated by how stable the QPS is here for clang and clang+LTO
  • clang+LTO and clang provide ~3% more QPS than gcc in RocksDB 10.8

Results for the Hetzner server

  • I don't show results for 6.29 or 7.x to improve readability
  • the performance for RocksDB 10.8.3 with gcc is what motivated me to repeat tests with clang
  • clang+LTO and clang provide ~20% more QPS than gcc in RocksDB 10.8

Results: revrangeww

Results for the pn53 server

  • clang+LTO provides ~9% more QPS than gcc in RocksDB 10.8
  • clang provides ~6% more QPS than gcc in RocksDB 10.8

Results for the Arm server

  • clang+LTO provides ~11% more QPS than gcc in RocksDB 10.8
  • clang provides ~6% more QPS than gcc in RocksDB 10.8

Results for the Hetzner server

  • I don't show results for 6.29 or 7.x to improve readability
  • clang+LTO provides ~8% more QPS than gcc in RocksDB 10.8
  • clang provides ~3% more QPS than gcc in RocksDB 10.8

Results: fwdrangeww

Results for the pn53 server

  • clang+LTO provides ~9% more QPS than gcc in RocksDB 10.8
  • clang provides ~4% more QPS than gcc in RocksDB 10.8

Results for the Arm server

  • clang+LTO provides ~13% more QPS than gcc in RocksDB 10.8
  • clang provides ~7% more QPS than gcc in RocksDB 10.8

Results for the Hetzner server

  • I don't show results for 6.29 or 7.x to improve readability
  • clang+LTO provides ~4% more QPS than gcc in RocksDB 10.8
  • clang provides ~1% more QPS than gcc in RocksDB 10.8

Results: readww

Results for the pn53 server

  • clang+LTO provides ~6% more QPS than gcc in RocksDB 10.8
  • clang provides ~5% less QPS than gcc in RocksDB 10.8

Results for the Arm server

  • clang+LTO provides ~14% more QPS than gcc in RocksDB 10.8
  • clang provides ~2% more QPS than gcc in RocksDB 10.8

Results for the Hetzner server

  • I don't show results for 6.29 or 7.x to improve readability
  • clang+LTO provides ~4% more QPS than gcc in RocksDB 10.8
  • clang provides ~1% more QPS than gcc in RocksDB 10.8

Results: overwrite

Results for the pn53 server

  • clang+LTO provides ~6% less QPS than gcc in RocksDB 10.8
  • clang provides ~8% less QPS than gcc in RocksDB 10.8
  • but for most versions there is similar QPS for gcc, clang and clang+LTO

Results for the Arm server

  • QPS is similar for gcc, clang and clang+LTO

Results for the Hetzner server

  • I don't show results for 6.29 or 7.x to improve readability
  • clang+LTO provides ~2% more QPS than gcc in RocksDB 10.8
  • clang provides ~1% more QPS than gcc in RocksDB 10.8

December 01, 2025

Impact of Starting PostgreSQL Service Manually in an Active Patroni Cluster

A nightmare no DBA wants. One day, I had a nightmare, the kind every DBA fears. My manager called me and said, “Hey… Mr. Clever. We don’t need you on the team anymore.” I panicked. “What did I do wrong?!” He replied, “You tell me. Think about your ‘great’ work yesterday.” Trying to defend myself, […]

Introducing Vector Buckets

Introducing vector storage in Supabase: a durable storage layer with similarity search built-in.

November 29, 2025

Using sysbench to measure how Postgres performance changes over time, November 2025 edition

This has results for the sysbench benchmark on a small and big server for Postgres versions 12 through 18. Once again, Postgres is boring because I search for perf regressions and can't find any here. Results from MySQL are here and MySQL is not boring.

While I don't show the results here, I don't see regressions when comparing the latest point releases with their predecessors -- 13.22 vs 13.23, 14.19 vs 14.20, 15.14 vs 15.15, 16.10 vs 16.11, 17.6 vs 17.7 and 18.0 vs 18.1.

tl;dr for low-concurrency

  • a few small regressions
  • many more small improvements
  • for write-heavy tests at high-concurrency there are many large improvements starting in PG 17

Builds, configuration and hardware

I compiled Postgres from source for versions 12.22, 13.22, 13.23, 14.19, 14.20, 15.14, 15.15, 16.10, 16.11, 17.6, 17.7, 18.0 and 18.1.

I used two servers:

  • small
    • an ASUS ExpertCenter PN53 with AMD Ryzen 7735HS CPU, 32G of RAM, 8 cores with AMD SMT disabled, Ubuntu 24.04 and an NVMe device with ext4 and discard enabled
  • big
    • an ax162s from Hetzner with an AMD EPYC 9454P 48-Core Processor with SMT disabled
    • 2 Intel D7-P5520 NVMe storage devices with RAID 1 (3.8T each) using ext4
    • 128G RAM
    • Ubuntu 22.04 running the non-HWE kernel (5.15.0-118-generic)

Configuration files for the small server:

  • Configuration files are here for Postgres versions 12, 13, 14, 15, 16 and 17.
  • For Postgres 18 I used io_method=sync and the configuration file is here.

Configuration files for the big server:

  • Configuration files are here for Postgres versions 12, 13, 14, 15, 16 and 17.
  • For Postgres 18 I used io_method=sync and the configuration file is here.

Benchmark

I used sysbench and my usage is explained here. I now run 32 of the 42 microbenchmarks listed in that blog post. Most test only one type of SQL statement. Benchmarks are run with the database cached by Postgres.

The read-heavy microbenchmarks are run for 600 seconds and the write-heavy ones for 900 seconds. On the small server the benchmark is run with 1 client and 1 table with 50M rows. On the big server the benchmark is run with 12 clients and 8 tables with 10M rows per table.

The purpose is to search for regressions from new CPU overhead and mutex contention. I use the small server with low concurrency to find regressions from new CPU overheads, and then larger servers with high concurrency to find regressions from new CPU overheads and mutex contention.
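
For readers unfamiliar with sysbench, an invocation of one of its stock microbenchmarks looks roughly like this (a sketch under my assumptions; the author drives his runs with his own wrapper scripts and Lua tests):

# prepare one 50M-row table, then run a read-only microbenchmark with 1 client
sysbench oltp_point_select --db-driver=pgsql --pgsql-db=test \
    --tables=1 --table-size=50000000 --threads=1 prepare
sysbench oltp_point_select --db-driver=pgsql --pgsql-db=test \
    --tables=1 --table-size=50000000 --threads=1 --time=600 run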

Results

The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation.

I provide charts below with relative QPS. The relative QPS is the following:

    (QPS for some version) / (QPS for Postgres 12.22)

When the relative QPS is > 1 then some version is faster than Postgres 12.22. When it is < 1 then there might be a regression. When the relative QPS is 1.2 then some version is about 20% faster than Postgres 12.22.

Values from iostat and vmstat divided by QPS are here for the small server and the big server. These can help to explain why something is faster or slower because they show how much HW is used per request, including CPU overhead per operation (cpu/o) and context switches per operation (cs/o), which are often a proxy for mutex contention.

The spreadsheet and charts are here and in some cases are easier to read than the charts below. Converting the Google Sheets charts to PNG files does the wrong thing for some of the test names listed at the bottom of the charts below.

Results: point queries

This is from the small server.

  • a large improvement arrived in Postgres 17 for the hot-points test
  • otherwise results have been stable from 12.22 through 18.1

This is from the big server.

  • a large improvement arrived in Postgres 17 for the hot-points test
  • otherwise results have been stable from 12.22 through 18.1

Results: range queries without aggregation

This is from the small server.

  • there are small improvements for the scan test
  • otherwise results have been stable from 12.22 through 18.1

This is from the big server.

  • there are small improvements for the scan test
  • otherwise results have been stable from 12.22 through 18.1

Results: range queries with aggregation

This is from the small server.

  • there are small improvements for a few tests
  • otherwise results have been stable from 12.22 through 18.1

This is from the big server.

  • there might be small regressions for a few tests
  • otherwise results have been stable from 12.22 through 18.1

Results: writes

This is from the small server.

  • there are small improvements for most tests
  • otherwise results have been stable from 12.22 through 18.1

This is from the big server.

  • there are large improvements for half of the tests
  • otherwise results have been stable from 12.22 through 18.1

From the vmstat results for update-index, the per-operation CPU overhead and context switch rate are much smaller starting in Postgres 17.7. The CPU overhead is about 70% of what it was in 16.11 and the context switch rate is about 50% of the rate for 16.11. Note that context switch rates are often a proxy for mutex contention.

Script to list MongoDB collection URI (to map to WiredTiger files)

When tracing I/O to WiredTiger files, like with strace -y, the filenames do not give the collection name:

            strace -yy -e trace=pwrite64 -s100 -fqT -p $(pgrep -d, mongod)
            
            [pid   237] pwrite64(15</data/db/journal/WiredTigerLog.0000000001>, "\0\2\0\0\6\314kK\0\0\0\0\0\0\0\0\201\300\211\204\300%\202\265table:collection-2fb69242-1b95-4a08-9939-23172e5ea178+\0\0\0\22numRecords\0\3\0\0\0\0\0\0"..., 512, 200576) = 512 <0.000179>
            [pid   238] pwrite64(69</data/db/collection-7e76acd8-718e-4fd6-93f2-fd08eaac3ed1.wt>, "\0\0\0\0\0\0\0\0\10\0\0\0\0\0\0\0`\0\0\0\2\0\0\0\7\4\0\1\0\20\0\0\225\21\375t\1\0\0\0\5\201\214(\350i*%\214\377\377\337\303\300\207\200\247'\0\0\0\2_id\0\6\0\0\0test1\0\2marker\0\7\0\0\0Franck\0\0\0\0\0\0"..., 4096, 4096) = 4096 <0.000053>
            [pid   238] pwrite64(69</data/db/collection-7e76acd8-718e-4fd6-93f2-fd08eaac3ed1.wt>, "\0\0\0\0\0\0\0\0\t\0\0\0\0\0\0\0@\0\0\0\2\0\0\0\6 \0\1\0\20\0\0\22B\312\326\1\0\0\0\5\08*\350i*%\214\377\377\337\303\300\207\200\207\200\201\344t\374\361U\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096, 8192) = 4096 <0.000034>
            [pid   238] pwrite64(69</data/db/collection-7e76acd8-718e-4fd6-93f2-fd08eaac3ed1.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0002\0\0\0\n\0\0\0\1\0\0\1\0\20\0\0V\335F\25\1\0\0\0\342\365\32\200\317\300\337\300\200\200\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096, 12288) = 4096 <0.000028>
            [pid   238] pwrite64(69</data/db/collection-7e76acd8-718e-4fd6-93f2-fd08eaac3ed1.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\09\0\0\0\21\0\0\0\1\0\0\1\0\20\0\0m/\302\306\1\0\0\0\342\365\32\200\342\37\300\317\300\342/\300\342\37\300\200\201\212\342o\300\0\0\0\0\0\0\304Oencryption=(keyid=,name=),block"..., 4096, 16384) = 4096 <0.000022>
            [pid   238] pwrite64(37</data/db/index-58332e85-20c2-4b5d-a14c-4d2a62f2f7f4.wt>, "\0\0\0\0\0\0\0\0\5\0\0\0\0\0\0\0\322\0\0\0\4\0\0\0\7\4\0\1\0\20\0\0)\1=\20\1\0\0\0P\202FZid\0Z\20\4<\353\316\272\256\317E\17\240nK\233X~\233\6Zuid\0Z \0\343\260\304B\230\374\34\24\232\373\364\310\231o\271$'\256A\344d\233\223L\244\225"..., 4096, 20480) = 4096 <0.000069>
            [pid   238] pwrite64(37</data/db/index-58332e85-20c2-4b5d-a14c-4d2a62f2f7f4.wt>, "\0\0\0\0\0\0\0\0\6\0\0\0\0\0\0\0E\0\0\0\2\0\0\0\6 \0\1\0\20\0\0p\3142\240\1\0\0\0\5\08*\350i*%F\377\377\337\302\300\205\345E\377\377\337\300\207\204\201\344\20<\340\351\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096, 24576) = 4096 <0.000057>
            [pid   238] pwrite64(37</data/db/index-58332e85-20c2-4b5d-a14c-4d2a62f2f7f4.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0003\0\0\0\v\0\0\0\1\0\0\1\0\20\0\0\364'AB\1\0\0\0\342\365\32\200\342/\300\337\300\200\200\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096, 28672) = 4096 <0.000083>
            [pid   238] pwrite64(37</data/db/index-58332e85-20c2-4b5d-a14c-4d2a62f2f7f4.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0003\0\0\0\v\0\0\0\1\0\0\1\0\20\0\0\210\201\f\231\1\0\0\0\342\365\32\200\317\300\342\37\300\200\201\207\342o\300\0\0\0\0\0\0\304Gencryption=(keyid=,name=),block_metad"..., 4096, 32768) = 4096 <0.000012>
            [pid   238] pwrite64(36</data/db/collection-2f934890-bfba-4bf1-8e30-a4983d1e443e.wt>, "\0\0\0\0\0\0\0\0\5\0\0\0\0\0\0\0\320\1\0\0\4\0\0\0\7\4\0\1\0\20\0\0\3y\341\332\1\0\0\0\5\201\214(\350i*%F\377\377\337\302\300s\200\300\202\302\0\0\0\3_id\0H\0\0\0\5id\0\20\0\0\0\4^\v\252=\334\206IR\272\320\264\246v\353\27.\5uid"..., 4096, 20480) = 4096 <0.000051>
            [pid   238] pwrite64(36</data/db/collection-2f934890-bfba-4bf1-8e30-a4983d1e443e.wt>, "\0\0\0\0\0\0\0\0\6\0\0\0\0\0\0\0E\0\0\0\2\0\0\0\6 \0\1\0\20\0\0\241yx\34\1\0\0\0\5\08*\350i*%F\377\377\337\302\300\205\345E\377\377\337\300\207\204\201\344\332\341X\303\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096, 24576) = 4096 <0.000085>
            [pid   238] pwrite64(36</data/db/collection-2f934890-bfba-4bf1-8e30-a4983d1e443e.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0003\0\0\0\v\0\0\0\1\0\0\1\0\20\0\0\364'AB\1\0\0\0\342\365\32\200\342/\300\337\300\200\200\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096, 28672) = 4096 <0.000027>
            [pid   238] pwrite64(36</data/db/collection-2f934890-bfba-4bf1-8e30-a4983d1e443e.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0003\0\0\0\v\0\0\0\1\0\0\1\0\20\0\0d\3230Y\1\0\0\0\342\365\32\200\317\300\342\37\300\200\201\207\342o\300\0\0\0\0\0\0\304Oencryption=(keyid=,name=),block_metad"..., 4096, 32768) = 4096 <0.000041>
            [pid   238] pwrite64(5</data/db/index-0bdb5c19-b8d1-4329-a85e-dce59f589b66.wt>, "\0\0\0\0\0\0\0\0\10\0\0\0\0\0\0\0'\1\0\0\6\0\0\0\7\4\0\1\0\20\0\0\33\312\247\363\1\0\0\0P\202FZid\0Z\20\4<\353\316\272\256\317E\17\240nK\233X~\233\6Zuid\0Z \0\343\260\304B\230\374\34\24\232\373\364\310\231o\271$'\256A\344d\233\223L\244\225"..., 4096, 4096) = 4096 <0.000026>
            [pid   238] pwrite64(5</data/db/index-0bdb5c19-b8d1-4329-a85e-dce59f589b66.wt>, "\0\0\0\0\0\0\0\0\t\0\0\0\0\0\0\0E\0\0\0\2\0\0\0\6 \0\1\0\20\0\0\343\21\367&\1\0\0\0\5\08*\350i*$\374\377\377\337\303\300\205\345\217\377\377\337\277\207\200\201\344\363\247\251\333\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096, 8192) = 4096 <0.000271>
            [pid   238] pwrite64(5</data/db/index-0bdb5c19-b8d1-4329-a85e-dce59f589b66.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0002\0\0\0\n\0\0\0\1\0\0\1\0\20\0\0V\335F\25\1\0\0\0\342\365\32\200\317\300\337\300\200\200\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096, 12288) = 4096 <0.000055>
            [pid   238] pwrite64(5</data/db/index-0bdb5c19-b8d1-4329-a85e-dce59f589b66.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\09\0\0\0\21\0\0\0\1\0\0\1\0\20\0\0H\224$\305\1\0\0\0\342\365\32\200\342\37\300\317\300\342/\300\342\37\300\200\201\212\342o\300\0\0\0\0\0\0\304Gencryption=(keyid=,name=),block"..., 4096, 16384) = 4096 <0.000127>
            [pid   238] pwrite64(49</data/db/collection-2fb69242-1b95-4a08-9939-23172e5ea178.wt>, "\0\0\0\0\0\0\0\0\10\0\0\0\0\0\0\0P\2\0\0\6\0\0\0\7\4\0\1\0\20\0\0\243\222\264\346\1\0\0\0\5\201\214(\350i*$\374\377\377\337\303\300c\200\300f\246\0\0\0\3_id\0H\0\0\0\5id\0\20\0\0\0\4\243,\276\205\7\376F\244\205z\r\343\216\216\f\376\5uid"..., 4096, 4096) = 4096 <0.000124>
            [pid   238] pwrite64(49</data/db/collection-2fb69242-1b95-4a08-9939-23172e5ea178.wt>, "\0\0\0\0\0\0\0\0\t\0\0\0\0\0\0\0E\0\0\0\2\0\0\0\6 \0\1\0\20\0\0\25472\200\1\0\0\0\5\08*\350i*$\374\377\377\337\303\300\207\345\217\377\377\337\300\207\200\201\344\346\264rc\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096, 8192) = 4096 <0.000058>
            [pid   238] pwrite64(49</data/db/collection-2fb69242-1b95-4a08-9939-23172e5ea178.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0002\0\0\0\n\0\0\0\1\0\0\1\0\20\0\0V\335F\25\1\0\0\0\342\365\32\200\317\300\337\300\200\200\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096, 12288) = 4096 <0.000085>
            [pid   238] pwrite64(49</data/db/collection-2fb69242-1b95-4a08-9939-23172e5ea178.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\09\0\0\0\21\0\0\0\1\0\0\1\0\20\0\0\305\232\373\24\1\0\0\0\342\365\32\200\342\37\300\317\300\342/\300\342\37\300\200\201\212\342o\300\0\0\0\0\0\0\304Oencryption=(keyid=,name=),block"..., 4096, 16384) = 4096 <0.000010>
            [pid   238] pwrite64(41</data/db/collection-7dc3bad2-7f29-4853-b83a-a0d75ce4091c.wt>, "\0\0\0\0\0\0\0\0\v\0\0\0\0\0\0\0\345\33\0\0H\0\0\0\7\4\0\1\0 \0\0\317@\335\234\1\0\0\0%\350i*$\335\377\377\337\301\200\235]\0\0\0\2op\0\2\0\0\0n\0\2ns\0\1\0\0\0\0\3o\0\35\0\0\0\2msg\0\17\0\0\0initiatin"..., 8192, 20480) = 8192 <0.000038>
            [pid   238] pwrite64(41</data/db/collection-7dc3bad2-7f29-4853-b83a-a0d75ce4091c.wt>, "\0\0\0\0\0\0\0\0\f\0\0\0\0\0\0\0004\0\0\0\2\0\0\0\6 \0\1\0\20\0\0\2772\240\200\1\0\0\0\5\08\0\207\204\202\344\234\335 \217\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096, 28672) = 4096 <0.000027>
            [pid   238] pwrite64(41</data/db/collection-7dc3bad2-7f29-4853-b83a-a0d75ce4091c.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0004\0\0\0\f\0\0\0\1\0\0\1\0\20\0\0^\245\335\343\1\0\0\0\342\365\32\200\342/\300\342\17\300\200\200\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096, 32768) = 4096 <0.000017>
            [pid   238] pwrite64(41</data/db/collection-7dc3bad2-7f29-4853-b83a-a0d75ce4091c.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0=\0\0\0\25\0\0\0\1\0\0\1\0\20\0\0\372\257\271\374\1\0\0\0\342\365\32\200\342o\300\317\300\317\300\342\37\300\342\177\300\317\300\200\201\215\342\217\300\0\0\0\0\0\0\304jencryption=(keyid=,name=),b"..., 4096, 36864) = 4096 <0.000009>
            [pid   238] pwrite64(24</data/db/collection-d3c1ce81-a90b-434a-867e-d985cc474e98.wt>, "\0\0\0\0\0\0\0\0\10\0\0\0\0\0\0\0s\0\0\0\2\0\0\0\7\4\0\1\0\20\0\0?\330w8\1\0\0\0\5\201\200\207G\0\0\0\2_id\0\30\0\0\0oplogTruncateAfterPoint\0\21oplogTruncateAfter"..., 4096, 4096) = 4096 <0.000137>
            [pid   238] pwrite64(24</data/db/collection-d3c1ce81-a90b-434a-867e-d985cc474e98.wt>, "\0\0\0\0\0\0\0\0\t\0\0\0\0\0\0\0004\0\0\0\2\0\0\0\6 \0\1\0\20\0\0\271p\261a\1\0\0\0\5\08\0\207\200\201\3448w\267\377\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096, 8192) = 4096 <0.000087>
            [pid   238] pwrite64(24</data/db/collection-d3c1ce81-a90b-434a-867e-d985cc474e98.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0002\0\0\0\n\0\0\0\1\0\0\1\0\20\0\0V\335F\25\1\0\0\0\342\365\32\200\317\300\337\300\200\200\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096, 12288) = 4096 <0.000102>
            [pid   238] pwrite64(24</data/db/collection-d3c1ce81-a90b-434a-867e-d985cc474e98.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\09\0\0\0\21\0\0\0\1\0\0\1\0\20\0\0\203\206\33-\1\0\0\0\342\365\32\200\342\37\300\317\300\342/\300\342\37\300\200\201\212\342o\300\0\0\0\0\0\0\304Mencryption=(keyid=,name=),block"..., 4096, 16384) = 4096 <0.000074>
            [pid   238] pwrite64(18</data/db/sizeStorer.wt>, "\0\0\0\0\0\0\0\0\v\0\0\0\0\0\0\0a\5\0\0\34\0\0\0\7\4\0\1\0\20\0\0\256dT\370\1\0\0\0Itable:_mdb_catalog\257+\0\0\0\22numRecords\0\22\0\0\0\0\0\0\0\22dataSize\0\363\34\0\0\0\0"..., 4096, 20480) = 4096 <0.000064>
            [pid   238] pwrite64(18</data/db/sizeStorer.wt>, "\0\0\0\0\0\0\0\0\f\0\0\0\0\0\0\0004\0\0\0\2\0\0\0\6 \0\1\0\20\0\0W\243\10:\1\0\0\0\5\08\0\207\204\201\344\370TDn\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096, 24576) = 4096 <0.000014>
            [pid   238] pwrite64(18</data/db/sizeStorer.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0003\0\0\0\v\0\0\0\1\0\0\1\0\20\0\0\364'AB\1\0\0\0\342\365\32\200\342/\300\337\300\200\200\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096, 28672) = 4096 <0.000047>
            [pid   238] pwrite64(18</data/db/sizeStorer.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0003\0\0\0\v\0\0\0\1\0\0\1\0\20\0\0\34\265ch\1\0\0\0\342\365\32\200\317\300\342\37\300\200\201\215\342o\300\0\0\0\0\0\0\3045encryption=(keyid=,name=),block_metad"..., 4096, 32768) = 4096 <0.000027>
            [pid   238] pwrite64(16</data/db/WiredTigerHS.wt>, "\0\0\0\0\0\0\0\0\10\0\0\0\0\0\0\0h\n\0\0\24\0\0\0\7\4\0\1\0\20\0\0:9\224\230\1\0\0\0005\203\201\212\350i*$\335\377\377\337\303\200\214x\350i*$\335\377\377\337\303\300\21\201\207\200\301=\350i*$\335\377\377\337\304\350i*$\335\377\377\337\303\203j\1\0\0\3md\0\306"..., 4096, 4096) = 4096 <0.000129>
            [pid   238] pwrite64(16</data/db/WiredTigerHS.wt>, "\0\0\0\0\0\0\0\0\t\0\0\0\0\0\0\0M\0\0\0\2\0\0\0\6 \0\1\0\20\0\0\31\333Y\221\1\0\0\0\5\08~\350i*$\335\377\377\337\303\300\207\345\256\377\377\337\277\345\256\377\377\337\300\200\200\207\200\201\344\230\224\30\372\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096, 8192) = 4096 <0.000046>
            [pid   238] pwrite64(16</data/db/WiredTigerHS.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0002\0\0\0\n\0\0\0\1\0\0\1\0\20\0\0V\335F\25\1\0\0\0\342\365\32\200\317\300\337\300\200\200\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096, 12288) = 4096 <0.000023>
            [pid   238] pwrite64(16</data/db/WiredTigerHS.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\09\0\0\0\21\0\0\0\1\0\0\1\0\20\0\0$jAb\1\0\0\0\342\365\32\200\342\37\300\317\300\342/\300\342\37\300\200\201\212\342o\300\0\0\0\0\0\0\305\230encryption=(keyid=,name=),block"..., 4096, 16384) = 4096 <0.000062>
            [pid   238] pwrite64(15</data/db/journal/WiredTigerLog.0000000001>, "\200\0\0\0)\300\374y\0\0\0\0\0\0\0\0\204\207\202\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 13568, 201088) = 13568 <0.000091>
            [pid   238] pwrite64(13</data/db/WiredTiger.wt>, "\0\0\0\0\0\0\0\0\33\0\0\0\0\0\0\0\24n\0\0f\0\0\0\7\4\0\1\0p\0\0Y\232Q~\1\0\0\0Ycolgroup:_mdb_catalog\0\200\300^app_metadata=(formatVersion=1),ass"..., 28672, 4096) = 28672 <0.000034>
            [pid   238] pwrite64(13</data/db/WiredTiger.wt>, "\0\0\0\0\0\0\0\0\34\0\0\0\0\0\0\0\5m\0\0(\0\0\0\7\4\0\1\0p\0\0\253|\333*\1\0\0\0\341file:collection-af1de828-88d4-462d-8163-138c3e6a94de.wt\0\200\304\213"..., 28672, 102400) = 28672 <0.000184>
            [pid   238] pwrite64(13</data/db/WiredTiger.wt>, "\0\0\0\0\0\0\0\0\35\0\0\0\0\0\0\0*b\0\0l\0\0\0\7\4\0\1\0p\0\0002pLs\1\0\0\0\315file:index-679f8d22-1c9e-4852-8909-903c3809d2b4.wt\0\200\304\305acces"..., 28672, 131072) = 28672 <0.000018>
            [pid   238] pwrite64(13</data/db/WiredTiger.wt>, "\0\0\0\0\0\0\0\0\36\0\0\0\0\0\0\0o\0\0\0\6\0\0\0\6 \0\1\0\20\0\0\352=F\361\1\0\0\0\5\08 \300\212\207\200\207\344~Qz\31Ifile:collection-af8 \300\212\207\230\207\344*\333\\k5file:index-678"..., 4096, 32768) = 4096 <0.000046>
            [pid   238] pwrite64(13</data/db/WiredTiger.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0:\0\0\0\22\0\0\0\1\0\0\1\0\20\0\0\2\262\230\230\1\0\0\0\342\365\32\200\317\300\342_\300\343\1o\300\342\277\300\200\200\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096, 36864) = 4096 <0.000027>
            [pid   238] pwrite64(13</data/db/WiredTiger.wt>, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0@\0\0\0\30\0\0\0\1\0\0\1\0\20\0\0}c\245(\1\0\0\0\342\365\32\200\342\177\300\317\300\342\217\300\342\277\300\343\2O\300\342\177\300\200\201\237\343\2\357\300\0\0\0\0\0\305Hencryption=(keyid=,name="..., 4096, 40960) = 4096 <0.000124>
            [pid   238] pwrite64(64</data/db/WiredTiger.turtle.set>, "WiredTiger version string\nWiredTiger 12.0.0: (November 15, 2024)\nWiredTiger version\nmajor=12,minor=0"..., 1471, 0) = 1471 <0.000048>
            [pid   238] pwrite64(15</data/db/journal/WiredTigerLog.0000000001>, "\200\0\0\0D\376\33\222\0\0\0\0\0\0\0\0\200\201\343\2\361@\200\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 128, 214656) = 128 <0.000029>
            

Here is a quick script to run with mongosh that gets the URI for all collections:

            let results = [];  
            const dbNames = db.adminCommand({ listDatabases: 1 }).databases.map(d => d.name);
            // For each database
            for (const dbName of dbNames) {  
              const currentDB = db.getSiblingDB(dbName);  
              const collNames = currentDB.getCollectionNames();  
              // Get collections
              for (const collName of collNames) {  
                const stats = currentDB[collName].stats({ indexDetails: true });  
                // Collection URI  
                if (stats.wiredTiger?.uri) {  
                  results.push({  
                    id: (stats.wiredTiger.uri.replace(/^statistics:table:/, "")),  
                    name: `${dbName}.${collName}`  
                  });  
                }  
                // Index URIs  
                if (stats.indexDetails) {  
                  for (const [indexName, details] of Object.entries(stats.indexDetails)) {  
                    if (details.uri) {  
                      results.push({  
                        id:(details.uri.replace(/^statistics:table:/, "     ")),  
                        name: `${dbName}.${collName} "${indexName}"`  
                      });  
                    }  
                  }  
                }  
              }  
            }  
            // Sort by the identifier  
            results.sort((a, b) => a.id.localeCompare(b.id));  
            // Print  
            results.forEach(r => {  
              print(`${r.id} ${r.name}`);  
            });  
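
To run it, save the script to a file and pass it to mongosh; the filename below is my own choice:

mongosh --quiet -f list_collection_uris.js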
            
            

Here is an example of output:

                 index-0076544d-0e60-4613-ac74-798b08940e83 config.sampledQueriesDiff "_id_"
                 index-09da90b9-dcaf-42b6-ad0d-17fe23efa6f9 test.demo "_id_"
                 index-0bdb5c19-b8d1-4329-a85e-dce59f589b66 config.transactions "_id_"
                 index-11ae14c2-0298-413c-8ed4-148f806cfb2a admin.system.version "_id_"
                 index-11e8073c-b849-4c57-8abd-382d4df08cf0 local.replset.oplogTruncateAfterPoint "_id_"
                 index-1394e651-d8fa-43c3-8215-cc7d2c860e8f local.replset.initialSyncId "_id_"
                 index-4ac5f987-a260-4952-96fa-4dfd53ae2442 local.replset.election "_id_"
                 index-578076ff-91f4-4150-abaf-ab0d8ee99ca1 config.sampledQueries "SampledQueriesTTLIndex"
                 index-58332e85-20c2-4b5d-a14c-4d2a62f2f7f4 config.image_collection "_id_"
                 index-594731ae-f513-4443-afd8-7be83ee371dd config.transactions "parent_lsid"
                 index-63c4c350-4cbf-43e5-b963-2470388c13ea local.startup_log "_id_"
                 index-679f8d22-1c9e-4852-8909-903c3809d2b4 local.system.replset "_id_"
                 index-6e10b714-4320-46ad-8dd8-826e181a80c8 local.replset.minvalid "_id_"
                 index-7f7e27ba-c599-473c-ae45-098c95bf0fa5 admin.system.keys "_id_"
                 index-acf8df7a-79fd-4ca3-90af-2065c56731fb config.analyzeShardKeySplitPoints "_id_"
                 index-c3f19b26-8cfe-4162-b72e-0457e1290c91 config.analyzeShardKeySplitPoints "AnalyzeShardKeySplitPointsTTLIndex"
                 index-e3de6faa-ce85-49bd-811e-2718dbfe68c6 config.sampledQueries "_id_"
                 index-e435b3cd-214c-4e54-a3a0-2b502ad4e2f3 local.system.rollback.id "_id_"
                 index-f2ab8612-b224-4284-8d98-178241c2c40f config.system.indexBuilds "_id_"
                 index-f55353fc-6037-4919-9240-0832291acd86 config.sampledQueriesDiff "SampledQueriesDiffTTLIndex"
            collection-0a266347-b02a-4b6d-aad9-d5fea0978ecc admin.system.keys
            collection-2f934890-bfba-4bf1-8e30-a4983d1e443e config.image_collection
            collection-2fb69242-1b95-4a08-9939-23172e5ea178 config.transactions
            collection-44998cb5-8f3c-46fd-b793-bb7af13fc6fc local.replset.initialSyncId
            collection-7dc3bad2-7f29-4853-b83a-a0d75ce4091c local.oplog.rs
            collection-7e76acd8-718e-4fd6-93f2-fd08eaac3ed1 test.demo
            collection-7f227aa8-c421-4095-84ed-e188eca693b5 local.system.replset
            collection-94cfdac0-3772-45cc-8bf4-e93691a663b1 config.analyzeShardKeySplitPoints
            collection-aec65821-ce4d-4dcc-a753-b445dac4b7b8 admin.system.version
            collection-af1de828-88d4-462d-8163-138c3e6a94de config.sampledQueries
            collection-b7b41814-325d-4331-96b2-57896547a8d8 config.sampledQueriesDiff
            collection-cb1d9d74-51cd-404e-8baf-559a1f583864 local.startup_log
            collection-d3c1ce81-a90b-434a-867e-d985cc474e98 local.replset.oplogTruncateAfterPoint
            collection-e9e489e7-441d-449a-8972-71c6610d217a config.system.preimages
            collection-ed1a19e5-322f-4dbb-adaa-bbfc474a0d55 local.system.rollback.id
            collection-ee3b161c-625b-48c4-9d23-620aa7b776e1 config.system.indexBuilds
            collection-f5621f15-8658-4df7-b2c6-f6e0e51eec73 local.replset.minvalid
            collection-fccee054-c41f-4585-8354-163477ef91c2 local.replset.election
            

This helps me to understand my trace:

  • collection-7e76acd8-718e-4fd6-93f2-fd08eaac3ed1 is test.demo, which I updated
  • collection-2f934890-bfba-4bf1-8e30-a4983d1e443e is config.image_collection, which stores pre/post-image documents for retryable findAndModify operations, enabling idempotent retries across failovers
  • collection-2fb69242-1b95-4a08-9939-23172e5ea178 is config.transactions, which tracks session transaction state, including startOpTime for active transactions
  • collection-7dc3bad2-7f29-4853-b83a-a0d75ce4091c is local.oplog.rs, the capped collection containing all replication operations in timestamp order
  • collection-d3c1ce81-a90b-434a-867e-d985cc474e98 is local.replset.oplogTruncateAfterPoint, which marks the safe truncation point to remove oplog holes after an unclean shutdown (crash recovery)
  • collection-af1de828-88d4-462d-8163-138c3e6a94de is config.sampledQueries, which stores sampled query shapes for the query analyzer and shard key analysis

Using sysbench to measure how MySQL performance changes over time, November 2025 edition

This has results for the sysbench benchmark on a small and big server for MySQL versions 5.6 through 9.5. The good news is that the arrival rate of performance regressions has mostly stopped as of 8.0.43. The bad news is that there were large regressions from 5.6 through 8.0.

tl;dr for low-concurrency tests

  • for point queries
    • MySQL 5.7.44 gets about 10% less QPS than 5.6.51
    • MySQL 8.0 through 9.5 get about 30% less QPS than 5.6.51
  • for range queries without aggregation
    • MySQL 5.7.44 gets about 15% less QPS than 5.6.51
    • MySQL 8.0 through 9.5 get about 30% less QPS than 5.6.51
  • for range queries with aggregation
    • MySQL 5.7.44 is faster than 5.6.51 for two tests, as fast for one and gets about 15% less QPS for the other five
    • MySQL 8.0 to 9.5 are faster than 5.6.51 for one test, as fast for one and get about 30% less QPS for the other six
  • for writes
    • MySQL 5.7.44 gets between 10% and 20% less QPS than 5.6.51 for most tests
    • MySQL 8.0 to 9.5 get between 40% and 50% less QPS than 5.6.51 for most tests

tl;dr for high-concurrency tests

  • for point queries
    • for most tests MySQL 5.7 to 9.5 get at least 1.5X more QPS than 5.6.51
    • for tests that use secondary indexes MySQL 5.7 to 9.5 get about 25% less QPS than 5.6.51
  • for range queries without aggregation
    • MySQL 5.7.44 gets about 10% less QPS than 5.6.51
    • MySQL 8.0 through 9.5 get about 30% less QPS than 5.6.51
  • for range queries with aggregation
    • MySQL 5.7.44 is faster than 5.6.51 for six tests, as fast for one test and gets about 20% less QPS for one test
    • MySQL 8.0 to 9.5 are a lot faster than 5.6.51 for two tests, about as fast for three tests and get between 10% and 30% less QPS for the other three tests
  • for writes
    • MySQL 5.7.44 gets more QPS than 5.6.51 for all tests
    • MySQL 8.0 to 9.5 get more QPS than 5.6.51 for all tests

            Builds, configuration and hardware

            I build MySQL from source for versions 5.6.51, 5.7.44, 8.0.43, 8.0.44, 8.4.6, 8.4.7, 9.4.0 and 9.5.0.

            I used two servers:
            • small
              • an ASUS ExpertCenter PN53 with AMD Ryzen 7735HS CPU, 32G of RAM, 8 cores with AMD SMT disabled, Ubuntu 24.04 and an NVMe device with ext4 and discard enabled.
            • big
              • an ax162s from Hetzner with an AMD EPYC 9454P 48-Core Processor with SMT disabled
              • 2 Intel D7-P5520 NVMe storage devices with RAID 1 (3.8T each) using ext4
              • 128G RAM
              • Ubuntu 22.04 running the non-HWE kernel (5.15.0-118-generic)
            The config files are here.

            Benchmark

            I used sysbench and my usage is explained here. I now run 32 of the 42 microbenchmarks listed in that blog post; most of them test only one type of SQL statement. Benchmarks are run with the database cached by InnoDB.

            The read-heavy microbenchmarks are run for 600 seconds and the write-heavy for 900 seconds. On the small server the benchmark is run with 1 client and 1 table with 50M rows. On the big server the benchmark is run with 12 clients and 8 tables with 10M rows per table. 

            The purpose is to search for regressions from new CPU overhead and mutex contention. I use the small server with low concurrency to find regressions from new CPU overhead, and the big server with high concurrency to find regressions from both new CPU overhead and mutex contention.
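
            For readers who haven't used sysbench, a run has roughly the shape sketched below. This is a hedged, generic example using the stock oltp_point_select workload with the big-server parameters from this post (8 tables, 10M rows per table, 12 clients, 600 seconds); the post itself uses the author's own Lua microbenchmarks, which are linked above.

            # generic sketch with stock sysbench, not the exact harness from the post;
            # connection flags (host, user, password) vary by setup
            sysbench oltp_point_select --mysql-db=sbtest \
              --tables=8 --table-size=10000000 --threads=12 prepare
            sysbench oltp_point_select --mysql-db=sbtest \
              --tables=8 --table-size=10000000 --threads=12 --time=600 run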

            Results

            The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

            I provide charts below with relative QPS. The relative QPS is the following:
            (QPS for some version) / (QPS for MySQL 5.6.51)
            When the relative QPS is > 1, some version is faster than MySQL 5.6.51. When it is < 1, there might be a regression. When the relative QPS is 1.2, some version is about 20% faster than MySQL 5.6.51.

            Values from iostat and vmstat divided by QPS are here for the small server and the big server. These can help explain why something is faster or slower because they show how much hardware is used per request, including CPU overhead per operation (cpu/o) and context switches per operation (cs/o), which are often a proxy for mutex contention.

            The spreadsheet and charts are here and are in some cases easier to read than the charts below. Converting the Google Sheets charts to PNG files mangles some of the test names listed at the bottom of the charts below.

            Results: point queries

            This is from the small server.
            • MySQL 5.7.44 gets about 10% less QPS than 5.6.51
            • MySQL 8.0 through 9.5 get about 30% less QPS than 5.6.51
            • There are few regressions after MySQL 8.0
            • New CPU overheads explain the regressions. See the vmstat results for the hot-points test.
            This is from the large server.
            • For most point query tests MySQL 5.7 to 9.5 get at least 1.5X more QPS than 5.6.51
              • MySQL 5.7 to 9.5 use less CPU, see vmstat results for the hot-points test.
            • For tests that use secondary indexes (*-si) MySQL 5.7 to 9.5 get about 25% less QPS than 5.6.51.
              • This result is similar to what happens on the small server above.
              • The regressions are from extra CPU overhead, see vmstat results
            • MySQL 5.7 does better than 8.0 to 9.5. There are few regressions after MySQL 8.0.
            Results: range queries without aggregation

            This is from the small server.
            • MySQL 5.7.44 gets about 15% less QPS than 5.6.51
            • MySQL 8.0 through 9.5 get about 30% less QPS than 5.6.51
            • There are few regressions after MySQL 8.0
            • New CPU overheads explain the regressions. See the vmstat results for the scan test.
            This is from the large server.
            • MySQL 5.7.44 gets about 10% less QPS than 5.6.51
            • MySQL 8.0 through 9.5 get about 30% less QPS than 5.6.51
            • There are few regressions after MySQL 8.0
            • New CPU overheads explain the regressions. See the vmstat results for the scan test.
            Results: range queries with aggregation

            This is from the small server.
            • for the read-only-distinct test, MySQL 5.7 to 9.5 are faster than 5.6.51
            • for the read-only_range=X tests
              • with the longest range scan (*_range=10000), MySQL 5.7.44 is faster than 5.6.51 and 8.0 to 9.5 have the same QPS as 5.6.51
              • with shorter range scans (*_range=100 & *_range=10) MySQL 5.6.51 is faster than 5.7 to 9.5. This implies that the regressions are from code above the storage engine layer.
              • From vmstat results the perf differences are explained by CPU overheads
            • for the other tests
              • MySQL 5.7.44 gets about 15% less QPS than 5.6.51
              • MySQL 8.0 to 9.5 get about 30% less QPS than 5.6.51
              • From vmstat results for read-only-count the reason is new CPU overhead
            This is from the large server.
            • for the read-only-distinct test, MySQL 5.7 to 9.5 are faster than 5.6.51
            • for the read-only_range=X tests
              • MySQL 5.7.44 is as fast as 5.6.51 for the longest range scan and faster than 5.6.51 for the shorter range scans
              • MySQL 8.0 to 9.5 are much faster than 5.6.51 for the longest range scan and somewhat faster for the shorter range scans
              • From vmstat results the perf differences are explained by CPU overheads and possibly by changes in mutex contention
            • for the other tests
              • MySQL 5.7.44 gets about 20% less QPS than 5.6.51 for read-only-count and about 10% more QPS than 5.6.51 for read-only-simple and read-only-sum
              • MySQL 8.0 to 9.5 get about 30% less QPS than 5.6.51 for read-only-count and up to 20% less QPS than 5.6.51 for read-only-simple and read-only-sum
              • From vmstat results for read-only-count the reason is new CPU overhead
            Results: writes

            This is from the small server.
            • For most tests
              • MySQL 5.7.44 gets between 10% and 20% less QPS than 5.6.51
              • MySQL 8.0 to 9.5 get between 40% to 50% less QPS than 5.6.51
              • From vmstat results for the insert test, MySQL 5.7 to 9.5 use a lot more CPU
            • For the update-index test
              • MySQL 5.7.44 is faster than 5.6.51
              • MySQL 8.0 to 9.5 get about 10% less QPS than 5.6.51
              • From vmstat metrics MySQL 5.6.51 has more mutex contention
            • For the update-inlist test
              • MySQL 5.7.44 is as fast as 5.6.51
              • MySQL 8.0 to 9.5 get about 30% less QPS than 5.6.51
              • From vmstat metrics MySQL 5.6.51 has more mutex contention
            This is from the large server and the y-axis truncates the result for the update-index test to improve readability for the other results.
            • For all tests MySQL 5.7 to 9.5 get more QPS than 5.6.51
              • From vmstat results for the write-only test MySQL 5.6.51 uses more CPU and has more mutex contention.
            • For some tests (read-write_range=X) MySQL 8.0 to 9.5 get less QPS than 5.7.44
              • These are the classic sysbench transactions with different range scan lengths; performance is dominated by the range query response time, so 5.7 is fastest.
            • For most tests MySQL 5.7 to 9.5 have similar perf with two exceptions
              • For the delete test, MySQL 8.0 to 9.5 are faster than 5.7. From vmstat metrics 5.7 uses more CPU and has more mutex contention than 8.0 to 9.5.
              • For the update-inlist test, MySQL 8.0 to 9.5 are faster than 5.7. From vmstat metrics 5.7 uses more CPU than 8.0 to 9.5.
            This is also from the large server and does not truncate the update-index test result.

            November 27, 2025

            MongoDB Index Intersection (and PostgreSQL Bitmap-and)

            You won’t always have a perfect index for every query, but you may have single-field indexes for each filter. In such cases, PostgreSQL can use a Bitmap Scan to combine these indexes. MongoDB is also capable of merging multiple index bounds in a single scan, or of using index intersection to combine separate scans. Yet the MongoDB query planner rarely selects index intersection. Let’s look at the reasons why.

            TL;DR: if you think you need index intersection, you probably need better compound indexes.

            Test Setup

            I create a collection with one hundred thousand documents and two fields, with an index on each:

            
            let bulk = [];    
            for (let i = 0; i < 100000; i++) {    
              bulk.push({      
                a: Math.floor(Math.random()*100),     
                b: Math.floor(Math.random()*100)      
              });    
            }    
            db.demo.insertMany(bulk);
            
            // separate indexes
            db.demo.createIndex({ a: 1 });    
            db.demo.createIndex({ b: 1 });  
            
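            For contrast with the TL;DR above, the compound index that would avoid intersection entirely is a one-liner. It is deliberately not created here, so the plans below only have the single-field indexes to work with:

            // hypothetical alternative, not part of this test setup:
            // one compound index answers { a: 42, b: 42 } with a single IXSCAN
            // and no need for intersection or a residual filter
            // db.demo.createIndex({ a: 1, b: 1 });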

            In PostgreSQL, we’ll mirror the dataset as follows:

            CREATE TABLE demo AS
            SELECT id,
              (random()*100)::int AS a,
              (random()*100)::int AS b
            FROM generate_series(1,100000) id;
            
            CREATE INDEX demo_a_idx ON demo(a);
            CREATE INDEX demo_b_idx ON demo(b);
            

            In my MongoDB collection of 100,000 documents, only nine documents have both the "a" and "b" fields set to 42:

            mongo> db.demo.countDocuments()
            
            100000
            
            mongo> db.demo.find({ a: 42, b: 42 }).showRecordId()
            
            [
              { _id: ObjectId('6928697ae5fd2cdba9d53f54'), a: 42, b: 42, '$recordId': Long('36499') },
              { _id: ObjectId('6928697ae5fd2cdba9d54081'), a: 42, b: 42, '$recordId': Long('36800') },
              { _id: ObjectId('6928697ae5fd2cdba9d54a7c'), a: 42, b: 42, '$recordId': Long('39355') },
              { _id: ObjectId('6928697ae5fd2cdba9d55a3e'), a: 42, b: 42, '$recordId': Long('43389') },
              { _id: ObjectId('6928697ae5fd2cdba9d5a214'), a: 42, b: 42, '$recordId': Long('61779') },
              { _id: ObjectId('6928697ae5fd2cdba9d5e52a'), a: 42, b: 42, '$recordId': Long('78953') },
              { _id: ObjectId('6928697ae5fd2cdba9d5eeea'), a: 42, b: 42, '$recordId': Long('81449') },
              { _id: ObjectId('6928697ae5fd2cdba9d61f48'), a: 42, b: 42, '$recordId': Long('93831') },
              { _id: ObjectId('6928697ae5fd2cdba9d61f97'), a: 42, b: 42, '$recordId': Long('93910') }
            ]
            

            In my PostgreSQL database, there are 100,000 rows and, among them, nine rows have the value 42 in both the "a" and "b" columns:

            postgres=# select count(*) from demo;
             count
            --------
             100000
            (1 row)
            
            postgres=# SELECT *, ctid FROM demo WHERE a = 42 AND b = 42;
            
             a  | b  |  id   |   ctid
            ----+----+-------+-----------
             42 | 42 |  4734 | (25,109)
             42 | 42 | 15678 | (84,138)
             42 | 42 | 29464 | (159,49)
             42 | 42 | 29748 | (160,148)
             42 | 42 | 31139 | (168,59)
             42 | 42 | 37785 | (204,45)
             42 | 42 | 55112 | (297,167)
             42 | 42 | 85823 | (463,168)
             42 | 42 | 88707 | (479,92)
            

            I displayed the CTID for PostgreSQL and the RecordID for MongoDB, to see the distribution over the heap table (for PostgreSQL) or the WiredTiger B-Tree (for MongoDB).
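
            Before examining MongoDB's plans, it's worth noting what PostgreSQL does with this query. With two selective equality predicates on separately indexed columns, the planner typically builds a bitmap from each index and combines them with a BitmapAnd. The sketch below shows the typical plan shape (costs omitted; not captured from this exact dataset):

            EXPLAIN (COSTS OFF)
            SELECT * FROM demo WHERE a = 42 AND b = 42;

                            QUERY PLAN
            -------------------------------------------------
             Bitmap Heap Scan on demo
               Recheck Cond: ((a = 42) AND (b = 42))
               ->  BitmapAnd
                     ->  Bitmap Index Scan on demo_a_idx
                           Index Cond: (a = 42)
                     ->  Bitmap Index Scan on demo_b_idx
                           Index Cond: (b = 42)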

            MongoDB possible execution plans

            I have executed db.demo.find({ a: 42, b: 42 }), and multiple plans have been evaluated:

            mongo> db.demo.getPlanCache().list();
            [
              {
                version: '1',
                queryHash: 'BBC007A6',
                planCacheShapeHash: 'BBC007A6',
                planCacheKey: '51C56FDD',
                isActive: true,
                works: Long('968'),
                worksType: 'works',
                timeOfCreation: ISODate('2025-11-27T15:09:11.069Z'),
                createdFromQuery: { query: { a: 42, b: 42 }, sort: {}, projection: {} },
                cachedPlan: {
                  stage: 'FETCH',
                  filter: { b: { '$eq': 42 } },
                  inputStage: {
                    stage: 'IXSCAN',
                    keyPattern: { a: 1 },
                    indexName: 'a_1',
                    isMultiKey: false,
                    multiKeyPaths: { a: [] },
                    isUnique: false,
                    isSparse: false,
                    isPartial: false,
                    indexVersion: 2,
                    direction: 'forward',
                    indexBounds: { a: [ '[42, 42]' ] }
                  }
                },
                creationExecStats: [
                  {
                    nReturned: 12,
                    executionTimeMillisEstimate: 0,
                    totalKeysExamined: 967,
                    totalDocsExamined: 967,
                    executionStages: {
                      stage: 'FETCH',
                      filter: { b: { '$eq': 42 } },
                      nReturned: 12,
                      executionTimeMillisEstimate: 0,
                      works: 968,
                      advanced: 12,
                      needTime: 955,
                      needYield: 0,
                      saveState: 0,
                      restoreState: 0,
                      isEOF: 1,
                      docsExamined: 967,
                      alreadyHasObj: 0,
                      inputStage: {
                        stage: 'IXSCAN',
                        nReturned: 967,
                        executionTimeMillisEstimate: 0,
                        works: 968,
                        advanced: 967,
                        needTime: 0,
                        needYield: 0,
                        saveState: 0,
                        restoreState: 0,
                        isEOF: 1,
                        keyPattern: { a: 1 },
                        indexName: 'a_1',
                        isMultiKey: false,
                        multiKeyPaths: { a: [] },
                        isUnique: false,
                        isSparse: false,
                        isPartial: false,
                        indexVersion: 2,
                        direction: 'forward',
                        indexBounds: { a: [Array] },
                        keysExamined: 967,
                        seeks: 1,
                        dupsTested: 0,
                        dupsDropped: 0
                      }
                    }
                  },
                  {
                    nReturned: 10,
                    executionTimeMillisEstimate: 0,
                    totalKeysExamined: 968,
                    totalDocsExamined: 968,
                    executionStages: {
                      stage: 'FETCH',
                      filter: { a: { '$eq': 42 } },
                      nReturned: 10,
                      executionTimeMillisEstimate: 0,
                      works: 968,
                      advanced: 10,
                      needTime: 958,
                      needYield: 0,
                      saveState: 0,
                      restoreState: 0,
                      isEOF: 0,
                      docsExamined: 968,
                      alreadyHasObj: 0,
                      inputStage: {
                        stage: 'IXSCAN',
                        nReturned: 968,
                        executionTimeMillisEstimate: 0,
                        works: 968,
                        advanced: 968,
                        needTime: 0,
                        needYield: 0,
                        saveState: 0,
                        restoreState: 0,
                        isEOF: 0,
                        keyPattern: { b: 1 },
                        indexName: 'b_1',
                        isMultiKey: false,
                        multiKeyPaths: { b: [] },
                        isUnique: false,
                        isSparse: false,
                        isPartial: false,
                        indexVersion: 2,
                        direction: 'forward',
                        indexBounds: { b: [Array] },
                        keysExamined: 968,
                        seeks: 1,
                        dupsTested: 0,
                        dupsDropped: 0
                      }
                    }
                  },
                  {
                    nReturned: 7,
                    executionTimeMillisEstimate: 0,
                    totalKeysExamined: 968,
                    totalDocsExamined: 7,
                    executionStages: {
                      stage: 'FETCH',
                      filter: { '$and': [ [Object], [Object] ] },
                      nReturned: 7,
                      executionTimeMillisEstimate: 0,
                      works: 968,
                      advanced: 7,
                      needTime: 961,
                      needYield: 0,
                      saveState: 0,
                      restoreState: 0,
                      isEOF: 0,
                      docsExamined: 7,
                      alreadyHasObj: 0,
                      inputStage: {
                        stage: 'AND_SORTED',
                        nReturned: 7,
                        executionTimeMillisEstimate: 0,
                        works: 968,
                        advanced: 7,
                        needTime: 961,
                        needYield: 0,
                        saveState: 0,
                        restoreState: 0,
                        isEOF: 0,
                        failedAnd_0: 232,
                        failedAnd_1: 230,
                        inputStages: [ [Object], [Object] ]
                      }
                    }
                  }
                ],
                candidatePlanScores: [ 2.012596694214876, 1.0105305785123966, 1.0073314049586777 ],
                indexFilterSet: false,
                estimatedSizeBytes: Long('4826'),
                solutionHash: Long('6151768200665613849'),
                host: '40ae92e83a12:27017'
              }
            ]
            

            The cached plan uses only one index, on "a", but there are two additional possible plans in the cache: one using the index on "b" and another using a combination of both indexes with AND_SORTED. The scores (candidatePlanScores) are:

            • 2.012 for the index on "a"
            • 1.010 for the index on "b"
            • 1.007 for the AND_SORTED intersection of the indexes on "a" and "b"

            This may be surprising: given how the data was generated, we should expect similar costs for the two indexes. During the trial period, the plan scanning the index on "a" finished its scan (isEOF: 1), while the other two, although similarly productive and close to finishing, were cut off before the end (isEOF: 0). MongoDB adds an EOF bonus of 1 when a trial plan finishes before the others, which explains the higher score. So it is not that the index on "a" is better than the other plans; all three are good, but the first one to start and finish got the bonus.

            In addition to that, intersection plans take another small penalty in scoring. The productivity term is advanced/works from the trial statistics above: 12/968 ≈ 0.012, 10/968 ≈ 0.010, and 7/968 ≈ 0.007. The final scores are:

            • Index on "a": 1 (base) + 0.012 (productivity) + 1.0 (EOF bonus) = 2.012
            • Index on "b": 1 (base) + 0.010 (productivity) + 0 (no EOF) = 1.010
            • AND_SORTED: 1 (base) + 0.007 (productivity) + 0 (no EOF) = 1.007

            Even without the bonus and penalty, AND_SORTED would not have been chosen. The problem is that the score measures productivity in units of work (advanced/works) but does not account for how heavy each unit of work is: a single-index plan must fetch the document and apply the residual filter within one work unit, while AND_SORTED merely advances an index cursor and defers the fetch and filter until the record IDs intersect.

            To show the AND_SORTED plan, I'll force it on my lab database for the following examples in this article:

            // get the current parameters (defaults):
            const allParams = db.adminCommand({ getParameter: "*" });
            Object.keys(allParams)
              .filter(k => k.toLowerCase().includes("intersection"))
              .forEach(k => print(k + " : " + allParams[k]));
            
            internalQueryForceIntersectionPlans : false
            internalQueryPlannerEnableHashIntersection : false
            internalQueryPlannerEnableIndexIntersection : true
            
            // set all at true:
            db.adminCommand({
              setParameter: 1,
              internalQueryPlannerEnableIndexIntersection: true,
              internalQueryPlannerEnableHashIntersection: true,
              internalQueryForceIntersectionPlans: true
            });
            

            I have set internalQueryForceIntersectionPlans to force index intersection (the query planner still runs, but intersection plans get a 3-point boost to their score). AND_SORTED intersection is enabled by default; I also enabled hash intersection (AND_HASH) for a later test that cannot use AND_SORTED.
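
            For reference, the defaults printed earlier can be restored later with the same command:

            // restore the default values shown by getParameter above
            db.adminCommand({
              setParameter: 1,
              internalQueryPlannerEnableIndexIntersection: true,
              internalQueryPlannerEnableHashIntersection: false,
              internalQueryForceIntersectionPlans: false
            });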

            Index Intersection in MongoDB

            Now that I’ve forced index intersection, I can observe it with execution statistics:

            db.demo.find({ a: 42, b: 42 }).explain("executionStats").executionStats
            

            The execution plan shows that each index was scanned with a single range (seeks: 1), and the scans were combined with AND_SORTED before fetching the documents for the result:

            {
              executionSuccess: true,
              nReturned: 9,
              executionTimeMillis: 4,
              totalKeysExamined: 2009,
              totalDocsExamined: 9,
              executionStages: {
                isCached: false,
                stage: 'FETCH',
                filter: {
                  '$and': [ { a: { '$eq': 42 } }, { b: { '$eq': 42 } } ]
                },
                nReturned: 9,
                executionTimeMillisEstimate: 0,
                works: 2010,
                advanced: 9,
                needTime: 2000,
                needYield: 0,
                saveState: 0,
                restoreState: 0,
                isEOF: 1,
                docsExamined: 9,
                alreadyHasObj: 0,
                inputStage: {
                  stage: 'AND_SORTED',
                  nReturned: 9,
                  executionTimeMillisEstimate: 0,
                  works: 2010,
                  advanced: 9,
                  needTime: 2000,
                  needYield: 0,
                  saveState: 0,
                  restoreState: 0,
                  isEOF: 1,
                  failedAnd_0: 501,
                  failedAnd_1: 495,
                  inputStages: [
                    {
                      stage: 'IXSCAN',
                      nReturned: 964,
                      executionTimeMillisEstimate: 0,
                      works: 965,
                      advanced: 964,
                      needTime: 0,
                      needYield: 0,
                      saveState: 0,
                      restoreState: 0,
                      isEOF: 1,
                      keyPattern: { a: 1 },
                      indexName: 'a_1',
                      isMultiKey: false,
                      multiKeyPaths: { a: [] },
                      isUnique: false,
                      isSparse: false,
                      isPartial: false,
                      indexVersion: 2,
                      direction: 'forward',
                      indexBounds: { a: [ '[42, 42]' ] },
                      keysExamined: 964,
                      seeks: 1,
                      dupsTested: 0,
                      dupsDropped: 0
                    },
                    {
                      stage: 'IXSCAN',
                      nReturned: 1045,
                      executionTimeMillisEstimate: 0,
                      works: 1045,
                      advanced: 1045,
                      needTime: 0,
                      needYield: 0,
                      saveState: 0,
                      restoreState: 0,
                      isEOF: 0,
                      keyPattern: { b: 1 },
                      indexName: 'b_1',
                      isMultiKey: false,
                      multiKeyPaths: { b: [] },
                      isUnique: false,
                      isSparse: false,
                      isPartial: false,
                      indexVersion: 2,
                      direction: 'forward',
                      indexBounds: { b: [ '[42, 42]' ] },
                      keysExamined: 1045,
                      seeks: 1,
                      dupsTested: 0,
                      dupsDropped: 0
                    }
                  ]
                }
              }
            }

            Netflix consolidates relational database infrastructure on Amazon Aurora, achieving up to 75% improved performance

            Netflix operates a global streaming service that serves hundreds of millions of users through a distributed microservices architecture. In this post, we examine the technical and operational challenges encountered by their Online Data Stores (ODS) team with their self-managed, distributed PostgreSQL-compatible database, the evaluation criteria they used to select a replacement, and why they chose to migrate to Amazon Aurora PostgreSQL to meet their current and future performance needs. The migration to Aurora PostgreSQL improved their database infrastructure, achieving up to a 75% increase in performance and 28% cost savings across critical applications.

            November 26, 2025

            How Letta builds production-ready AI agents with Amazon Aurora PostgreSQL

            With the Letta Developer Platform, you can create stateful agents with built-in context management (compaction, context rewriting, and context offloading) and persistence. Using the Letta API, you can create agents that are long-lived or that achieve complex tasks, without worrying about context overflow or model lock-in. In this post, we guide you through setting up Amazon Aurora Serverless as the database for storing Letta’s long-term memory. We show how to create an Aurora cluster in the cloud, configure Letta to connect to it, and deploy agents that persist their memory to Aurora. We also explore how to query the database directly to view agent state.

            Let’s Rebuild the MySQL Community Together

            Where We Are: We can all agree that the MySQL ecosystem isn’t in great shape right now. Take a look at Julia’s blog post [Analyzing the Heartbeat of the MySQL Server: A Look at Repository Statistics], which confirms what many of us have felt: Oracle isn’t as committed to MySQL and its ecosystem as it […]

            ParadeDB 0.20.0: Simpler and Faster

            Introducing search aggregation, V2 API as default, and performance improvements that eliminate the complexity between search and analytics in a single Postgres-native system.