a curated list of database news from authoritative sources

December 10, 2025

The insert benchmark on a small server : MySQL 5.6 through 9.5

This has results for MySQL versions 5.6 through 9.5 with the Insert Benchmark on a small server. Results for Postgres on the same hardware are here.

tl;dr

  • good news - there are no large regressions after MySQL 8.0
  • bad news - there are many large regressions from 5.6 to 5.7 to 8.0

Builds, configuration and hardware

I compiled MySQL from source for versions 5.6.51, 5.7.44, 8.0.43, 8.0.44, 8.4.6, 8.4.7, 9.4.0 and 9.5.0.

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM. Storage is one NVMe device for the database using ext-4 with discard enabled. The OS is Ubuntu 24.04. More details on it are here.

The config files are here: 5.6.51, 5.7.44, 8.0.4x, 8.4.x, 9.x.0.

The Benchmark

The benchmark is explained here and is run with 1 client and 1 table. I repeated it with two workloads:
  • cached - the values for X, Y, Z are 30M, 40M, 10M
  • IO-bound - the values for X, Y, Z are 800M, 4M, 1M
The point query (qp100, qp500, qp1000) and range query (qr100, qr500, qr1000) steps are run for 1800 seconds each.

The benchmark steps are:

  • l.i0
    • insert X rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts Y rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and Z rows are inserted and deleted per table.
    • Wait for S seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of S is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested. This step is frequently not IO-bound for the IO-bound workload.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
Results: overview

The performance reports are here for the cached and IO-bound workloads.

The summary sections from the performance reports have 3 tables. The first shows absolute throughput by DBMS tested and benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from MySQL 5.6.51.

When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
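For example, in the cached results table below, the relative QPS for 8.4.7 on the qp100 step is 0.61, which means 8.4.7 sustained about 61% of the point queries/s that 5.6.51 did on that step.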
Below I use colors to highlight the relative QPS values with yellow for regressions and blue for improvements.

Results: cached

Performance summaries are here for all versions and latest versions. I focus on the latest versions.

Below I use colors to highlight the relative QPS values with yellow for regressions and blue for improvements. There are large regressions from new CPU overheads.
  • the load step (l.i0) is almost 2X faster for 5.6.51 vs 8.4.7 (relative QPS is 0.59)
  • the create index step (l.x) is more than 2X faster for 8.4.7 vs 5.6.51
  • the first write-only step (l.i1) has similar throughput for 5.6.51 and 8.4.7
  • the second write-only step (l.i2) is 14% slower in 8.4.7 vs 5.6.51
  • the range-query steps (qr*) are ~30% slower in 8.4.7 vs 5.6.51
  • the point-query steps (qp*) are 38% slower in 8.4.7 vs 5.6.51

dbms     l.i0   l.x    l.i1   l.i2   qr100  qp100  qr500  qp500  qr1000  qp1000
5.6.51   1.00   1.00   1.00   1.00   1.00   1.00   1.00   1.00   1.00    1.00
5.7.44   0.91   1.53   1.16   1.09   0.83   0.83   0.83   0.84   0.83    0.83
8.0.44   0.60   2.42   1.05   0.87   0.69   0.62   0.70   0.62   0.70    0.62
8.4.7    0.59   2.54   1.04   0.86   0.68   0.61   0.68   0.61   0.67    0.60
9.4.0    0.59   2.57   1.03   0.86   0.69   0.62   0.69   0.62   0.70    0.61
9.5.0    0.59   2.61   1.05   0.85   0.69   0.62   0.69   0.62   0.69    0.62

Results: IO-bound

Performance summaries are here for all versions and latest versions. I focus on the latest versions.

Below I use colors to highlight the relative QPS values with yellow for regressions and blue for improvements. There are large regressions from new CPU overheads.
  • the load step (l.i0) is almost 2X faster for 5.6.51 vs 8.4.7 (relative QPS is 0.60)
  • the create index step (l.x) is more than 2X faster for 8.4.7 vs 5.6.51
  • the first write-only step (l.i1) is 1.54X faster for 8.4.7 vs 5.6.51
  • the second write-only step (l.i2) is 1.82X faster for 8.4.7 vs 5.6.51
  • the range-query steps (qr*) are ~20% slower in 8.4.7 vs 5.6.51
  • the point-query steps (qp*) are 13% slower, 3% slower and 17% faster in 8.4.7 vs 5.6.51
dbms     l.i0   l.x    l.i1   l.i2   qr100  qp100  qr500  qp500  qr1000  qp1000
5.6.51   1.00   1.00   1.00   1.00   1.00   1.00   1.00   1.00   1.00    1.00
5.7.44   0.91   1.42   1.52   1.78   0.84   0.92   0.87   0.97   0.93    1.17
8.0.44   0.62   2.58   1.56   1.81   0.76   0.88   0.79   0.99   0.85    1.18
8.4.7    0.60   2.65   1.54   1.82   0.74   0.87   0.77   0.98   0.82    1.17
9.4.0    0.61   2.68   1.52   1.76   0.75   0.86   0.80   0.97   0.85    1.16
9.5.0    0.60   2.75   1.53   1.73   0.75   0.87   0.79   0.97   0.84    1.17

The insert benchmark on a small server : Postgres 12.22 through 18.1

This has results for Postgres versions 12.22 through 18.1 with the Insert Benchmark on a small server.

Postgres continues to be boring in a good way. It is hard to find performance regressions.

 tl;dr for a cached workload

  • performance has been stable from Postgres 12 through 18
tl;dr for an IO-bound workload
  • performance has mostly been stable
  • create index has been ~10% faster since Postgres 15
  • throughput for the write-only steps has been ~10% less since Postgres 15
  • throughput for the point-query steps (qp*) has been ~20% better since Postgres 13
Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions 12.22, 13.22, 13.23, 14.19, 14.20, 15.14, 15.15, 16.10, 16.11, 17.6, 17.7, 18.0 and 18.1.

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM. Storage is one NVMe device for the database using ext-4 with discard enabled. The OS is Ubuntu 24.04. More details on it are here.

For versions prior to 18, the config file is named conf.diff.cx10a_c8r32. The files are as similar as possible across versions and are here for versions 12, 13, 14, 15, 16 and 17.

For Postgres 18 I used 3 variations (the relevant settings are sketched after this list), which are here:
  • conf.diff.cx10b_c8r32
    • uses io_method='sync' to match Postgres 17 behavior
  • conf.diff.cx10c_c8r32
    • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
  • conf.diff.cx10d_c8r32
    • uses io_method='io_uring' to do async IO via io_uring
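
To make the variations concrete, this is a minimal sketch of only the io_method-related settings; the actual conf.diff files linked above set many other options that are not shown here.

    # conf.diff.cx10b_c8r32 - match Postgres 17 behavior
    io_method = 'sync'

    # conf.diff.cx10c_c8r32 - async IO via a worker (thread) pool
    io_method = 'worker'
    io_workers = 16

    # conf.diff.cx10d_c8r32 - async IO via io_uring
    io_method = 'io_uring'
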
The Benchmark

The benchmark is explained here and is run with 1 client and 1 table. I repeated it with two workloads:
  • cached - the values for X, Y, Z are 30M, 40M, 10M
  • IO-bound - the values for X, Y, Z are 800M, 4M, 1M
The point query (qp100, qp500, qp1000) and range query (qr100, qr500, qr1000) steps are run for 1800 seconds each.

The benchmark steps are:

  • l.i0
    • insert X rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts Y rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and Z rows are inserted and deleted per table.
    • Wait for S seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of S is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested. This step is frequently not IO-bound for the IO-bound workload.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
Results: overview

The performance reports are here for the cached and IO-bound workloads.

The summary sections from the performance reports have 3 tables. The first shows absolute throughput by DBMS tested and benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from Postgres 12.22.

When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
This statement doesn't apply to this blog post, but I keep it here for copy/paste into future posts. Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.

Results: cached

The performance summaries are here for all versions and latest versions.

I focus on the latest versions. Throughput for 18.1 is within 2% of 12.22, with the exception of the l.i2 benchmark step. This is great news because it means that Postgres has avoided introducing new CPU overhead while improving the DBMS. There is some noise from the l.i2 benchmark step, and that doesn't surprise me because it is likely variance from two issues -- vacuum and get_actual_variable_range.

Results: IO-bound

The performance summaries are here for all versions and latest versions.

I focus on the latest versions.
  • throughput for the load step (l.i0) is 1% less in 18.1 vs 12.22
  • throughput for the index step (l.x) is 13% better in 18.1 vs 12.22
  • throughput for the write-only steps (l.i1, l.i2) is 11% and 12% less in 18.1 vs 12.22
  • throughput for the range-query steps (qr*) is 2%, 3% and 3% less in 18.1 vs 12.22
  • throughput for the point-query steps (qp*) is 22%, 23% and 23% better in 18.1 vs 12.22
The improvements for the index step arrived in Postgres 15.

The regressions for the write-only steps arrived in Postgres 15 and are likely from two issues -- vacuum and get_actual_variable_range.

The improvements for the point-query steps arrived in Postgres 13.













    Rotate SSL/TLS Certificates in Valkey/Redis Without Downtime

    If your Valkey/Redis deployments use SSL/TLS, you will eventually need to rotate the TLS certificates. Perhaps it is because the certificates are expiring, or you made mistakes when creating them, or it could be that the private key has been leaked. This article explains the process of rotating the TLS/SSL certificates used by Valkey/Redis deployments […]

    December 09, 2025

    How to Turn a MySQL Unique Key Into a Primary Key

    A unique constraint identifies one or more columns as unique: at its core, it is satisfied only when no two rows store the same non-null values in those columns. A primary key constraint is a unique constraint that is declared with PRIMARY KEY. It is satisfied only when […]
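
    The excerpt above is truncated; as a rough sketch of the general idea (with a hypothetical table, column, and index name, and assuming the column is already NOT NULL and free of duplicates), the promotion can be done with a single ALTER TABLE:

    ALTER TABLE t
      DROP INDEX uk_col,        -- the existing unique key
      ADD PRIMARY KEY (col);    -- promote the unique column to the primary key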

    December 08, 2025

    RocksDB performance over time on a small Arm server

    This post has results for RocksDB on an Arm server. I previously shared results for RocksDB performance using gcc and clang. Here I share results using clang with LTO.

    RocksDB is boring, there are few performance regressions.

    tl;dr

    • for cached workloads throughput with RocksDB 10.8 is as good or better than with 6.29
    • for not-cached workloads throughput with RocksDB 10.8 is similar to 6.29 except for the overwrite test where it is 7% less, probably from correctness checks added in 7.x and 8.x.

    Software

    I used RocksDB versions 6.29, 7.0, 7.10, 8.0, 8.4, 8.8, 8.11, 9.0, 9.4, 9.8, 9.11 and 10.0 through 10.8.

    I compiled each version with clang version 18.3.1 with link-time optimization (LTO) enabled. The build command line was:

    flags=( DISABLE_WARNING_AS_ERROR=1 DEBUG_LEVEL=0 V=1 VERBOSE=1 )

    # for clang+LTO
    AR=llvm-ar-18 RANLIB=llvm-ranlib-18 CC=clang CXX=clang++ \
        make USE_LTO=1 "${flags[@]}" static_lib db_bench

    Hardware

    I used a small Arm server from the Google cloud running Ubuntu 22.04. The server type was c4a-standard-8-lssd with 8 cores and 32G of RAM. Storage was 2 local SSDs with RAID 0 and ext-4.

    Benchmark

    Overviews on how I use db_bench are here and here.

    The benchmark was run with 1 thread and used the LRU block cache.

    Tests were run for three workloads:

    • byrx - database cached by RocksDB
    • iobuf - database is larger than RAM and RocksDB used buffered IO
    • iodir - database is larger than RAM and RocksDB used O_DIRECT

    The benchmark steps that I focus on are listed below; a sketch of a db_bench invocation follows the list:
    • fillseq
      • load RocksDB in key order with 1 thread
    • revrangeww, fwdrangeww
      • do reverse or forward range queries with a rate-limited writer. Report performance for the range queries
    • readww
      • do point queries with a rate-limited writer. Report performance for the point queries.
    • overwrite
      • overwrite (via Put) random keys
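
    Below is a hypothetical sketch of the kind of db_bench invocations behind these steps; it is not the author's actual scripts (linked above) and the flag values are only illustrative.

    # load in key order (the fillseq step)
    ./db_bench --benchmarks=fillseq --num=40000000 --threads=1 \
        --key_size=20 --value_size=400

    # point queries with a rate-limited writer; the readww step is built from a
    # benchmark like readwhilewriting (the write rate limit is in bytes/s)
    ./db_bench --benchmarks=readwhilewriting --use_existing_db=1 --num=40000000 \
        --threads=1 --duration=1800 --benchmark_write_rate_limit=2097152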

    Relative QPS

    Many of the tables below (inlined and via URL) show the relative QPS which is:
        (QPS for my version / QPS for RocksDB 6.29)

    The base version varies and is listed below. When the relative QPS is > 1.0 then my version is faster than RocksDB 6.29. When it is < 1.0 then there might be a performance regression or there might just be noise.

    The spreadsheet with numbers and charts is here. Performance summaries are here.

    Results: byrx

    This has results for the byrx workload where the database is cached by RocksDB.

    RocksDB 10.x is faster than 6.29 for all tests.

    Results: iobuf

    This has results for the iobuf workload where the database is larger than RAM and RocksDB used buffered IO.

    Performance in RocksDB 10.x is about the same as 6.29 except for overwrite. I think the performance decreases in overwrite that arrived in versions 7.x and 8.x are from new correctness checks, and throughput in 10.8 is 7% less than in 6.29. The big drop for fillseq in 10.6.2 was from bug 13996.

    Results: iodir

    This has results for the iodir workload where the database is larger than RAM and RocksDB used O_DIRECT.

    Performance in RocksDB 10.x is about the same as 6.29 except for overwrite. I think the performance decreases in overwrite that arrived in versions 7.x and 8.x are from new correctness checks, and throughput in 10.8 is 7% less than in 6.29. The big drop for fillseq in 10.6.2 was from bug 13996.

    Brainrot

    I drive my daughter to school as part of a car pool. Along the way, I am learning a new language, Brainrot.

    So what is brainrot? It is what you get when you marinate your brain with silly TikTok, YouTube Shorts, and Reddit memes. It is slang for "my attention span is fried and I like it". Brainrot is a self-deprecating language. Teens are basically saying: I know this is dumb, but I am choosing to speak it anyway.

    What makes brainrot different from old-school slang is its speed and scale. When we were teenagers, slang spread by word of mouth. It mostly stayed local in our school hallways or neighborhood. Now memes go global in hours. A meme is born in Seoul at breakfast and widespread in Ohio by six seven pm. The language mutates at escape velocity and gets weird fast. 

    Someone even built a brainrot programming language. The joke runs deep, and is getting some infrastructure.


    Here are a few basic brainrot terms you will hear right away.

    • He is cooked: It means he is finished, doomed, beyond saving.
    • He is cooking: The opposite. It means he is doing something impressive. Let him cook.
    • Mewing: Jawline exercises that started half as fitness advice and half as a meme. Now it mostly means trying too hard to look sharp.
    • Aura: Your invisible social vibe. You either have it or you do not. Your aura-farming does not impress my teens.
    • NPC: Someone who acts on autopilot, like a background character in a game.
    • Unc: An out-of-touch older guy.  That would be me?


    I can anticipate the reactions to this post. Teenagers will shrug: "Obviously. How is this news?" Parents of teens will laugh in recognition. Everyone else will be lost and move on.

    I have seen things. I am not 50 yet, but I am getting there. I usually write about distributed systems and databases. I did not plan to write this post. This post insisted on being written through me. We will return to our regularly scheduled programming.

    But here is my real point. I think the kids are alright.

    It is not uncommon for a generation to get declared doomed by the one before it, yet Gen Z and Gen Alpha may have taken the heaviest hit, written off as lost causes. But what I see is a generation with sharp self-mocking humor. They have short attention spans for things they do not care about. I think they do this out of sincerity. They don't see the purpose in mundane things, and for many things they feel they lack enough agency. But what is new is how hard they can lock in (focus) on what they care about, and how fast they can form real bonds around shared interests. They are more open with each other. They are more inclusive by default. They are a feeling bunch. They waste no patience on things they find pointless. But when it matters, they show up fully.

    From the outside, their culture may look absurd and chaotic. But, under the memes, I see a group that feels deeply, adapts quickly, and learns in public. They are improvising in real time. And despite all predictions to the contrary, they might actually know what they are doing.

    Unlocking Secure Connections: SSL/TLS Support in Percona Toolkit

    In today’s interconnected world, data security is paramount. Protecting sensitive information transmitted between applications and databases is crucial, and SSL/TLS (Secure Sockets Layer/Transport Layer Security) plays a vital role in achieving this. Percona Toolkit, a collection of command-line tools for MySQL, MongoDB, and other databases, has long been a go-to resource for database administrators. In […]

    December 05, 2025

    Community Erosion Post License Change: Quantifying the Power of Open Source

    Summary This article is a detailed analysis of the impact of the Redis license change to a non-open-source one on its community. To summarize the findings:  37.5% of contributors (9 of 24) stopped contributing to Redis after the fork Valkey grew from 18 to 49 contributors in 18 months Valkey averages 80 PRs/month in 2025 […]

    December 04, 2025

    "Horses" Feels Tame

    From a letter to Valve Corporation’s CEO Gabe Newell, lightly edited for formatting and links.

    Dear Mr. Newell,

    Steam has been my main source for games for over twenty years. I am disheartened that you chose not to publish Santa Ragione’s recently released game, Horses.

    I’ve read some substantive critique; Polygon and Rock Paper Shotgun’s among them. I have also read Santa Ragione’s discussion of the ban. I bought Horses on Itch and played it through; I found it enjoyable and thought provoking, though not transformative. I was surprised to find a much tamer experience than I had been led to believe. I am particularly concerned that Steam found this title dangerous enough to ban it.

    Is Horses unsettling? Yes, though you would see far worse in popular horror films. I find Hostel or Saw gut-wrenching to near-unwatchable; Horses felt almost cute by comparison. It is nowhere near Argento’s classic Suspiria or Aster’s harrowing Midsommar, which deals with similar themes. The game’s pixelated censorship, silly animations, and cutting away from its worst moments come across as coy, even camp. I suspect this is intentional: the game is in large part concerned with authoritarianism and the reproductive dynamics (in all senses) of cinema and games themselves. It is also concerned with complicity: the character’s choices to voice disgust or approval have apparently no impact on the story. Its four explicit themes—laid out in the embedded narrative of a VHS tape you must watch and decode to progress—are the repression of patriarchy, religion, chastity, and silence.

    Steam has long been willing to publish works engaging with brutal dehumanization and authoritarian violence. Games like Amnesia: A Machine for Pigs or SOMA depict humans involuntarily warped, both physically and mentally, beyond recognition. Like Horses, Amnesia uses horror as a lens for the industrial revolution and the power of the wealthy. Valve’s own work has not shied away from horror; Half Life 2 is entirely about the visceral subjugation of political and bodily autonomy. Valve gave us the headcrab and the stalker—both instances of forcible objectification—and the game’s camera shies away from neither.

    What, then, makes Horses unpublishable? Surely not violence, or you’d have pulled half the Steam catalogue by now. It can’t be body horror: I flinched far more at Dead Space 2’s eyeball scene. Could it be nudity? Half Life 2’s stalkers are fully nude and completely uncensored; I find the image of their mutilated bodies far more visually disturbing than the titular horses. Is it sex? Steam publishes the wildly popular Cyberpunk 2077, which has no shortage of breasts, vaginas, penises, and first-person sex scenes. It also depicts rape, torture, murder, and young boys hooded, restrained, and tortured as livestock on a farm. Could Horses be at fault for flagellation? Surely not; Steam published Robert Yang’s charming Hurt Me Plenty, where players directly spank a simulated consensual partner. Is it complicity in authoritarian abuse? Lucas Pope’s highly-regarded Papers Please, also on Steam, puts players in the increasingly disturbing role of enforcing a fascist state’s border. It too contains pixelated nudity and violence.

    I love cute, safe, and friendly games; Astroneer is my jam! And as an adult, I also enjoy challenging, even harrowing narratives. For me, Horses falls somewhere in the middle—one might call it the Animal Farm of fascist horror parables. I think Steam ought to publish it, and more transgressive works as well.

    Yours truly,

    Kyle Kingsbury

    Cloud-Native MySQL High Availability: Understanding Virtually SYNC and ASYNC Replication

    When we run databases in Kubernetes, we quickly learn one important truth: things will fail, and we need to be prepared for this. Pods are ephemeral; nodes can come and go, storage is abstracted behind PersistentVolumes and can be either local to a node or backed by network storage, and Kubernetes moves workloads as needed […]

    Build "Sign in with Your App" using Supabase Auth

    Supabase Auth now supports OAuth 2.1 and OpenID Connect server capabilities, turning your project into a full-fledged identity provider for AI agents, third-party developers, and enterprise SSO.

    December 03, 2025

    New Relic - CPU usage (%) and Load Average

    At AWS re:Invent 2025, I had the opportunity to give a brief demonstration at the New Relic booth that opened the stage to some AWS Heroes. In Linux environments, administrators often rely on CPU usage and load average metrics to determine whether an instance is appropriately sized. An oversized instance that sits idle wastes resources and drives up the cloud bill, whereas an undersized instance pushed to its limits can degrade application performance—ultimately impacting a company’s revenue.

    To set the stage for my demo, I began with the Summary tab’s first graph, which displays CPU usage as a percentage. I started a simple CPU-bound task using the yes command, which continuously outputs the character "y". Then, I moved on to a more complex workload: fio with 32 threads performing synchronous, direct I/O disk writes:

    fio --rw=write --ioengine=psync --direct=1 --bs=1M --numjobs=32 --name=test --filename=/tmp/x --size=10G --thread
    

    One could expect the yes command to saturate the CPU, and this fio workload to be I/O-bound, but surprisingly, both tasks show 100% CPU usage:

    The load average graph doesn't help and shows the 32 threads on the CPU:

    We don't have more details on the load average. The Process tab shows yes, but with only low CPU usage, and not fio:

    What could help detail the load average is the process state, not gathered here. The R state is real CPU usage, for running tasks, like the 12.5% of yes. The fio processes are mostly doing disk I/O in the D state, which doesn't utilize CPU, despite what is displayed here.

    Back to the CPU usage graph, I can get more details. I open the query and instead of cpuPercent I display the details:

    SELECT --average(cpuPercent),
     average(cpuUserPercent), average(cpuSystemPercent),  -- running
     average(cpuStealPercent),                            -- hypervisor
     average(cpuIdlePercent), average(cpuIOWaitPercent)   -- idle (incl. wait I/O)
    FROM SystemSample WHERE (entityGuid = 'NzM2NzA3MXxJTkZSQXxOQXw0MDAyNTY2MTYyODgwNDkyMzM0') 
    TIMESERIES AUTO SINCE 5 minutes ago UNTIL now
    

    The 100% CPU usage was in "Steal" for the running yes command because I've run that on an overprovisioned virtual machine where the hypervisor gives only 1/4th of the CPU cycles, and was in "IO Wait" for fio when it was waiting for IO completion rather than running in CPU:

    To explain this "IO Wait" and that it is just like "Idle", I've started yes again while fio was running and the "IO Wait" disappeared:

    The reason is that "IO Wait" is accounted when a CPU is idle, because the process that was running waits on an IO call, and no other process had to run on CPU. Then, the process stays scheduled on the CPU with this state. But if another process comes to run in CPU then the "IO Wait" is not accounted anymore. The CPU usage (%) is not the state of the processes, but the state of the processor:

    • when a process is running on a processor's thread, it is in the R state and counts in cpuUserPercent or cpuSystemPercent, depending on whether it is running in userspace (the application) or the kernel (system call). If the hypervisor preempted the CPU cycles, it is reported as "Steal"
    • when a process is on an uninterruptible call and not scheduled out of the processor's thread, it is in the D state and counts as "IO Wait"
    • when the process in the processor's thread is waiting on something else, it is accounted as "Idle"

    Back to the load average, the reason why the D state ("IO Wait") is accounted in addition to the R state ("User", "Sys" and "Steal") is visible in the loadavg.c code:

     * The global load average is an exponentially decaying average of nr_running +
     * nr_uninterruptible.
     *
     * Once every LOAD_FREQ:
     *
     *   nr_active = 0;
     *   for_each_possible_cpu(cpu)
     *  nr_active += cpu_of(cpu)->nr_running + cpu_of(cpu)->nr_uninterruptible;
     *
     *   avenrun[n] = avenrun[0] * exp_n + nr_active * (1 - exp_n)
    

    However, the most important comment is the one that explains that it is a silly number:

    /*
     * kernel/sched/loadavg.c
     *
     * This file contains the magic bits required to compute the global loadavg
     * figure. Its a silly number but people think its important. We go through
     * great pains to make it work on big machines and tickless kernels.
     */
    

    The load average metric traces back to early Unix systems (1970s), where it reflected how busy a system was by counting processes in the run queue. At that time, computers used periodic scheduling ticks—even while idle—to keep track of time and processes. Sysadmins would see a load average of 1 on a single CPU as one process running or waiting, thus equating it to CPU usage. Today, with multi-core processors, sophisticated scheduling, and tickless kernels, the load average is a far less reliable indicator of real‑time CPU usage and is often misunderstood without considering the real state of the processes.

    The uninterruptible state D is not necessarily linked to disk activity. For example, asynchronous disk I/O operations that collect completed IO do not enter the D state. I demonstrated this by changing the fio job from psync to async, observing identical I/O throughput and rate but less "IO Wait" and a lower load average. Furthermore, some system calls can appear as "IO Wait" and increase the load average, even when they are idle and harmless. I also showed this by launching a thousand processes with {vfork();sleep(300);}.
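
    As a sketch of that async variant (assuming libaio as the asynchronous engine; the exact engine used in the demo is not specified), only --ioengine and --iodepth would change from the psync command shown earlier:

    fio --rw=write --ioengine=libaio --iodepth=32 --direct=1 --bs=1M --numjobs=32 --name=test --filename=/tmp/x --size=10G --thread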

    The first step when using the New Relic dashboard for Linux is to replace 'cpuPercent' in the CPU usage charts with more detailed metrics:

    • cpuUserPercent and cpuSystemPercent for tasks running in CPU, respectively, for user (application) and system (kernel) code.
    • cpuStealPercent, cpuIOWaitPercent, cpuIdlePercent for idle CPU, because the hypervisor didn't allow tasks to run (and steals CPU cycles), or no task has anything to run (with or without a task waiting on an uninterruptible call)

    Remember, the load average is not a reliable metric because it does not exclusively reflect actively running processes. It may also include various types of waits, but not all. The 'IO wait' percentage does not indicate high IO activity. Instead, it shows the CPU is idle while many processes are waiting for IO operations. In cloud environments where minimizing costs by reducing idle CPU time is crucial, you should focus on CPU usage for user and system processes, excluding waiting tasks, when sizing an instance. An inexperienced user might misinterpret the 100% usage shown in my fio demo as a sign the instance is too small, while in fact, it's the opposite.

    Expanding Our Reach: Percona Server for MongoDB Now Officially Supports Rocky Linux 8 and 9!

    Your stack, Your rules. That’s our belief, and it’s non-negotiable.  We see the landscape changing. With the massive community migration from CentOS and CentOS Stream to Rocky Linux, we heard your requests loud and clear. You need a trusted, enterprise-ready database on your preferred platform. Our telemetry data, which we receive from you, also confirms […]

    Best of metadata in 2025

    It is that time of year again to look back on a year of posts. I average about sixty posts annually. I don't explicitly plan for the number, and I sometimes skip weeks for travel or work, yet I somehow hit the number by December. Looking back, I always feel a bit proud. The posts make past Murat look sharp and sensible, and I will not argue with that. Here are some of the more interesting pieces from the roughly sixty posts of 2025.


    Advice

    Looks like I wrote several advice posts this year. I must be getting old.

    The Invisible Curriculum of Research

    Academic chat: On PhD

    What I'd do as a College Freshman in 2025

    My Time at MIT

    What makes entrepreneurs entrepreneurial?

    Publish and Perish: Why Ponder Stibbons Left the Ivory Tower


    Databases

    Concurrency Control book reading was fun. Also the series on use of time in distributed databases. And it seems like I got hyperfocused on transaction isolation this year. 

    Use of Time in Distributed Databases (part 5): Lessons learned (Link to the index)

    Morty: Scaling Concurrency Control with Re-Execution

    Serializable Isolation for Snapshot Databases

    Analysing Snapshot Isolation


    Distributed systems

    We started a live paper reading series with Aleksey Charapko. It has been a lot of fun and provably a better way to read papers. My summaries/reviews of the papers we read together are more insightful than the papers I read solo. The first seven paper reviews below are from our live reading sessions. 

    Asymmetric Linearizable Local Reads

    Cabinet: Dynamically Weighted Consensus Made Fast

    Can a Client–Server Cache Tango Accelerate Disaggregated Storage?

    Real Life Is Uncertain. Consensus Should Be Too!

    Vive la Difference: Practical Diff Testing of Stateful Applications

    Mitigating Application Resource Overload with Targeted Task Cancellation

    Tiga: Accelerating Geo-Distributed Transactions with Synchronized Clocks

    Analyzing Metastable Failures in Distributed Systems

    Disaggregation: A New Architecture for Cloud Databases

    Disaggregated Database Management Systems

    Taurus Database: How to be Fast, Available, and Frugal in the Cloud

    ATC/OSDI’25 Technical Sessions


    AI

    Of course AI!

    Barbarians at the Gate: How AI is Upending Systems Research

    Supporting our AI overlords: Redesigning data systems to be Agent-first

    Neurosymbolic AI: Why, What, and How


    Formal methods

    And of course formal methods! Well mostly TLA+ in my case.

    Modular verification of MongoDB Transactions using TLA+

    Multi-Grained Specifications for Distributed System Model Checking and Verification

    Notes from the TLA+ Community Event

    Smart Casual Verification of the Confidential Consortium Framework

    TLA+ Modeling of AWS outage DNS race condition



    Best of previous years


    Best of metadata in 2021

    Best of metadata in 2020

    Best of metadata in 2019

    Best of metadata in 2018

    Research, writing, and career advice

    The new Supabase power for Kiro

    Build full-stack applications faster with the Kiro IDE using deep knowledge of your Supabase project, best practices for database migrations, edge functions, and security policies.

    December 02, 2025

    Optimize for momentum

    Progress comes from motion. Momentum is the invisible engine of any significant work. A project feels daunting when you face it as a blank page. It feels easier when you have built some momentum with some next steps. So, momentum makes the difference between blocked and flowing.

    Think of a stalled truck on a desert road. You can't lift it with superhuman strength. But by rocking it with small periodic forces at the right rhythm (matching its natural frequency) you can get it rolling. Each tiny push adds to the previous one because the timing aligns with the system's response. The truck starts to move, and then the engine catches.

    Projects behave the same way. A big project has its own rhythm. If you revisit it daily, even briefly, your pushes line up. Your brain stays warm. Context stays loaded. Ideas from yesterday are still alive today. Each session amplifies the last because you are operating in phase with your own momentum. When you produce something every day, you never feel stuck. You end each session with a clear record of progress. A researcher who touches their project daily is always a day away from a breakthrough. Skip too many days and you fall out of resonance. Then the project feels heavy again and needs a large effort to budge.

    So the trick is to design your workflow around staying in motion. Don't wait for the perfect idea or the right mood. Act first. Clarity comes after. If a task feels intimidating, cut it down until it becomes trivial. Open the file. Draft one paragraph. Try freewriting. Run one experiment. Or sketch one figure. You want the smallest possible task that gets the wheel turning. Completion, even on tiny tasks, builds momentum and creates the energy to do the next thing. The goal is to get traction and stop getting blocked on an empty page.

    A messy page is better than an empty page to get started. In an interesting Macintosh folklore story, Burrell Smith deliberately made a mess in the classic video game Defender. He shot his own humans and let all mutants loose, just so he could figure out how to clean up the chaos wholesale. Fire and maneuver!


    Practical tips

    This is where I find LLMs help tremendously. (Haha. AI butts its head even in an advice column.) When you face a large messy problem, ask the model to break it into a sequence of concrete subtasks: "List the next ten actions for the experiment" or "Suggest a structure for this section". Then ask the LLM to do one of the easiest tasks in this list. The mediocrity will annoy you just enough to fix it. And now you are moving. We are getting somewhere.

    A ten-minute timer is one of the simplest ways to get things going. Ten minutes is short enough that you can do almost anything for ten minutes. Pick a tiny task, and start. Most of the time you keep going after the timer ends because starting was the hard part. The timer lowers the activation energy and creates the first push on the flywheel.

    Another way to build momentum is to work on the part of the project that feels most attractive at the moment. If you are not in the mood to write the introduction but feel curious about running a side experiment, do the experiment. If you feel more like drawing a diagram, draw it. Interest/love/curiosity is your fuel. Progress is progress. Nobody hands out medals for following a linear plan. The only requirement is that you keep adding small meaningful pieces to the project. 


    Momentum is not a glamorous, sexy idea. But it is reliable. Think of your work as a flywheel. Each nudge adds speed, and over the weeks, the wheel becomes powerful and unstoppable. People often admire the end state but they don't see the messy daily pushes that built the momentum.

    We Believe in Freedom

    Percona is built on the belief that Freedom matters, especially in how organizations run and evolve their database infrastructure. Our mission is to help customers and the broader community maximize that Freedom by giving them control, transparency, and choice at every layer. Here is what that means in practice: Open Source Software At Percona, we […]

    Introducing Analytics Buckets

    Use Analytics Buckets to store huge datasets in Supabase Storage with Apache Iceberg and columnar Parquet format, optimized for analytical workloads.

    Introducing Supabase ETL

    A change-data-capture pipeline that replicates your Postgres tables to analytical destinations like Analytics Buckets and BigQuery in near real time.

    Using db_bench to measure RocksDB performance with gcc and clang

    This has results for db_bench, a benchmark for RocksDB, when compiling it with gcc and clang. On one of my servers I saw a regression on one of the tests (fillseq) when compiling with gcc. The result on that server didn't match what I measured on two other servers. So I repeated tests after compiling with clang to see if I could reproduce it.

    tl;dr

    • a common outcome is
      • ~10% more QPS with clang+LTO than with gcc
      • ~5% more QPS with clang than with gcc
    • the performance gap between clang and gcc is larger in RocksDB 10.x than in earlier versions

    Variance

    I always worry about variance when I search for performance bugs. Variance can be misinterpreted as a performance regression and I strive to avoid that because I don't want to file bogus performance bugs.

    Possible sources of variance are:

    • the compiler toolchain
      • a bad code layout might hurt performance by increasing cache and TLB misses
    • RocksDB
      • the overhead from compaction is intermittent and the LSM tree layout can help or hurt CPU overhead during reads
    • hardware
      • sources include noisy neighbors on public cloud servers, insufficient CPU cooling and CPU frequency management that is too clever
    • benchmark client
      • the way in which I run tests can create more or less variance and more information on that is here and here

    Software

    I used RocksDB versions 6.29.5, 7.10.2, 8.0, 8.4, 8.8, 8.11, 9.0, 9.4, 9.8, 9.11 and 10.0 through 10.8.

    I compiled each version three times:

    • gcc using version 13.3.0
    • clang - using version 18.3.1
    • clang+LTO - using version 18.3.1, where LTO is link-time optimization
    The build command lines are below:

    flags=( DISABLE_WARNING_AS_ERROR=1 DEBUG_LEVEL=0 V=1 VERBOSE=1 )

    # for gcc
    make "${flags[@]}" static_lib db_bench

    # for clang
    AR=llvm-ar-18 RANLIB=llvm-ranlib-18 CC=clang CXX=clang++ \
        make "${flags[@]}" static_lib db_bench

    # for clang+LTO
    AR=llvm-ar-18 RANLIB=llvm-ranlib-18 CC=clang CXX=clang++ \
        make USE_LTO=1 "${flags[@]}" static_lib db_bench

    On the small servers I used the LRU block cache. On the large server I used hyper clock when possible:
    • lru_cache was used for versions 7.6 and earlier
    • hyper_clock_cache was used for versions 7.7 through 8.5
    • auto_hyper_clock_cache was used for versions 8.5+

    Hardware

    I used two small servers and one large server, all running Ubuntu 22.04:

    • pn-53
      • Ryzen 7 (AMD) CPU with 8 cores and 32G of RAM. It is v5 in the blog post
      • benchmarks are run with 1 client (thread)
    • arm
      • an ARM server from the Google cloud -- c4a-standard-8-lssd with 8 cores and 32G of RAM, 2 local SSDs using RAID 0 and ext-4
      • benchmarks are run with 1 client (thread)
    • hetzner
      • an ax162s from Hetzner with an AMD EPYC 9454P 48-Core Processor with SMT disabled, 128G of RAM, 2 SSDs with RAID 1 (3.8T each) using ext4
      • benchmarks are run with 36 clients (threads)

    Benchmark

    Overviews on how I use db_bench are here and here.

    Tests were run for a workload with the database cached by RocksDB that I call byrx in my scripts.

    The benchmark steps that I focus on are:
    • fillseq
      • load RocksDB in key order with 1 thread
    • revrangeww, fwdrangeww
      • do reverse or forward range queries with a rate-limited writer. Report performance for the range queries
    • readww
      • do point queries with a rate-limited writer. Report performance for the point queries.
    • overwrite
      • overwrite (via Put) random keys

    Relative QPS

    Many of the tables below (inlined and via URL) show the relative QPS which is:
        (QPS for my version / QPS for RocksDB 6.29 compiled with gcc)

    The base version varies and is listed below. When the relative QPS is > 1.0 then my version is faster than the base version. When it is < 1.0 then there might be a performance regression or there might just be noise.

    The spreadsheet with numbers and charts is here.

    Results: fillseq

    Results for the pn53 server

    • clang+LTO provides ~15% more QPS than gcc in RocksDB 10.8
    • clang provides ~11% more QPS than gcc in RocksDB 10.8

    Results for the Arm server

    • I am fascinated by how stable the QPS is here for clang and clang+LTO
    • clang+LTO and clang provide ~3% more QPS than gcc in RocksDB 10.8

    Results for the Hetzner server

    • I don't show results for 6.29 or 7.x to improve readability
    • the performance for RocksDB 10.8.3 with gcc is what motivated me to repeat tests with clang
    • clang+LTO and clang provide ~20% more QPS than gcc in RocksDB 10.8

    Results: revrangeww

    Results for the pn53 server

    • clang+LTO provides ~9% more QPS than gcc in RocksDB 10.8
    • clang provides ~6% more QPS than gcc in RocksDB 10.8

    Results for the Arm server

    • clang+LTO provides ~11% more QPS than gcc in RocksDB 10.8
    • clang provides ~6% more QPS than gcc in RocksDB 10.8

    Results for the Hetzner server

    • I don't show results for 6.29 or 7.x to improve readability
    • clang+LTO provides ~8% more QPS than gcc in RocksDB 10.8
    • clang provides ~3% more QPS than gcc in RocksDB 10.8

    Results: fwdrangeww

    Results for the pn53 server

    • clang+LTO provides ~9% more QPS than gcc in RocksDB 10.8
    • clang provides ~4% more QPS than gcc in RocksDB 10.8

    Results for the Arm server

    • clang+LTO provides ~13% more QPS than gcc in RocksDB 10.8
    • clang provides ~7% more QPS than gcc in RocksDB 10.8

    Results for the Hetzner server

    • I don't show results for 6.29 or 7.x to improve readability
    • clang+LTO provides ~4% more QPS than gcc in RocksDB 10.8
    • clang provides ~1% more QPS than gcc in RocksDB 10.8

    Results: readww

    Results for the pn53 server

    • clang+LTO provides ~6% more QPS than gcc in RocksDB 10.8
    • clang provides ~5% less QPS than gcc in RocksDB 10.8

    Results for the Arm server

    • clang+LTO provides ~14% more QPS than gcc in RocksDB 10.8
    • clang provides ~2% more QPS than gcc in RocksDB 10.8

    Results for the Hetzner server

    • I don't show results for 6.29 or 7.x to improve readability
    • clang+LTO provides ~4% more QPS than gcc in RocksDB 10.8
    • clang provides ~1% more QPS than gcc in RocksDB 10.8

    Results: overwrite

    Results for the pn53 server

    • clang+LTO provides ~6% less QPS than gcc in RocksDB 10.8
    • clang provides ~8% less QPS than gcc in RocksDB 10.8
    • but for most versions there is similar QPS for gcc, clang and clang+LTO

    Results for the Arm server

    • QPS is similar for gcc, clang and clang+LTO

    Results for the Hetzner server

    • I don't show results for 6.29 or 7.x to improve readability
    • clang+LTO provides ~2% more QPS than gcc in RocksDB 10.8
    • clang provides ~1% more QPS than gcc in RocksDB 10.8