a curated list of database news from authoritative sources

December 12, 2025

No, MongoDB Does Not Mean Skipping Design

With MongoDB, domain-driven design empowers developers to build robust systems by aligning the data model with business logic and access patterns.

Too often, developers are unfairly accused of being careless about data integrity. The logic goes: Without the rigid structure of an SQL database, developers will code impulsively, skipping formal design and viewing it as an obstacle rather than a vital step in building reliable systems.

Because of this misperception, many database administrators (DBAs) believe that the only way to guarantee data quality is to use relational databases. They think that using a document database like MongoDB means they can’t be sure data modeling will be done correctly.

Therefore, DBAs are compelled to predefine and deploy schemas in their database of choice before any application can persist or share data. This also implies that any evolution in the application requires DBAs to validate and run a migration script before the new release reaches users.

However, developers care just as much about data integrity as DBAs do. They put significant effort into the application’s domain model and avoid weakening it by mapping it to a normalized data structure that does not reflect application use cases.

Different Database Models, Different Data Models

Relational and document databases take different approaches to data modeling.

In a document database, you still design your data model. What changes is where and how the design happens, aligning closely with the domain model and the application’s access patterns. This is especially true in teams practicing domain‑driven design (DDD), where developers invest time in understanding domain objects, relationships and usage patterns.

The data model evolves alongside the development process — brainstorming ideas, prototyping, releasing a minimum viable product (MVP) for early feedback and iterating toward a stable, production-ready application.

Relational modeling often starts with a normalized design created before the application is fully understood. This model must then serve diverse future workloads and unpredictable data distributions. For example, a database schema designed for academic software could be used by both primary schools and large universities. This illustrates the strength of relational databases: the logical model exposed to applications is the same, even when the workloads differ greatly.

Document modeling, by contrast, is tailored to specific application usage. Instead of translating the domain model into normalized tables, which adds abstraction and hides performance optimizations, MongoDB stores aggregates directly in the way they appear in your code and business logic. Documents reflect the business transactions and are stored as contiguous blocks on disk, keeping the physical model aligned with the domain schema and optimized for access patterns.

Here are some other ways these two models compare.

Document Modeling Handles Relationships

Relational databases are often thought to excel at “strong relationships” between data, but this is partly because of a misunderstanding of the name — relations refers to mathematical sets of tuples (rows), not to the connections between them, which are relationships. Normalization actually loosens strong relationships, decoupling entities that are later matched at query time via joins.

In entity-relationship diagrams (ERDs), relationships are shown as simple one-to-one or one-to-many links, implemented via primary and foreign keys. ERDs don’t capture characteristics such as the direction of navigation or ownership between entities. Many-to-many relationships are modeled through join tables, which split them into two one-to-many relationships. The only property of a relationship in an ERD is to distinguish one-to-one (direct line) from one-to-many (crow’s foot), and the data model is the same whether the “many” is a few or billions.

Unified Modeling Language (UML) class diagrams in object-oriented design, by comparison, are richer: They have a navigation direction and distinguish between association, aggregation, composition and inheritance. In MongoDB, these concepts map naturally (a sketch follows the list):

  • Composition (for instance, an order and its order lines) often appears as embedded documents, sharing a life cycle and preventing partial deletion.
  • Aggregation (a customer and their orders) uses references when life cycles differ or when the parent ownership is shared.
  • Inheritance can be represented via polymorphism, a concept ERDs don’t directly capture and typically work around with nullable columns.
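
To make the mapping concrete, here is a minimal sketch (Python with pymongo; the collection, field names and values are hypothetical, not taken from the article) of an order document: embedded lines for composition, a customer reference for aggregation, and a discriminator field standing in for inheritance.

    # Hypothetical document shape; names and values are illustrative only.
    from pymongo import MongoClient

    db = MongoClient("mongodb://localhost:27017")["shop"]   # assumed local instance

    order = {
        "_id": "order-1001",
        "customerId": "cust-42",   # aggregation: a reference, independent life cycle
        "status": "OPEN",
        "lines": [                 # composition: embedded, shares the order's life cycle
            {"kind": "physical", "sku": "SKU-1", "qty": 2, "price": 19.90},
            {"kind": "digital", "sku": "SKU-9", "qty": 1, "downloadUrl": "https://example.test/f"},
        ],                         # "kind" acts as a polymorphism discriminator (inheritance)
    }
    db.orders.insert_one(order)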

Domain models in object-oriented applications and MongoDB documents better mirror real-world relationships. In relational databases, schemas are rigid for entities, while relationships are resolved at runtime with joins — more like a data scientist discovering correlations during analysis. SQL’s foreign keys prevent orphaned rows, but they aren’t explicitly referenced when writing SQL queries. Each query can define a different relationship.

Schema Validation Protects Data Integrity

MongoDB is schema-flexible, not schema-less. This feature is especially valuable for early-stage projects — such as brainstorming, prototyping, or building an MVP — because you don’t need to execute Data Definition Language (DDL) statements before writing data. The schema resides within the application code, and documents are stored as-is, without additional validation at first, as consistency is ensured by the same application that writes and reads them.

As the model matures, you can define schema validation rules directly in the database — field requirements, data types, and accepted ranges. You don’t need to declare every field immediately. You add validation as the schema matures, becomes stable, and is shared. This ensures consistent structure when multiple components depend on the same fields, or when indexing, since only the fields used by the application are helpful in the index.

Schema flexibility boosts development speed at every stage of your application. Early in prototyping, you can add fields freely without worrying about immediate validation. Later, with schema validation in place, you can rely on the database to enforce data integrity, reducing the need to write and maintain code that checks incoming data.

Schema validation can also enforce physical bounds. If you embed order items in the order document, you might validate that the array does not exceed a certain threshold. Instead of failing outright — like SQL’s check constraints (which often cause unhandled application errors) — MongoDB can log a warning, alerting the team without disrupting user operations. This enables the application to stay available while still flagging potential anomalies or necessary evolutions.
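
As a rough sketch of how such rules might be declared (assuming pymongo and an existing orders collection; the field names and the 200-item bound are illustrative, not from the article), a $jsonSchema validator with validationAction set to "warn" logs violations instead of rejecting writes:

    # Minimal sketch: add validation to an existing collection as the model matures.
    # Collection and field names and the 200-item bound are hypothetical.
    from pymongo import MongoClient

    db = MongoClient("mongodb://localhost:27017")["shop"]

    db.command({
        "collMod": "orders",
        "validator": {
            "$jsonSchema": {
                "required": ["customerId", "lines"],
                "properties": {
                    "customerId": {"bsonType": "string"},
                    "lines": {"bsonType": "array", "maxItems": 200},  # physical bound on embedding
                },
            }
        },
        "validationLevel": "moderate",  # don't reject updates to documents that already violate the rules
        "validationAction": "warn",     # log a warning instead of failing the write
    })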

Application Logic vs. Foreign Keys

In SQL databases, foreign keys are constraints, not definitions of relationships; relationships are evaluated at query time. SQL joins define relationships by listing columns as filter predicates, and foreign keys are not used in the JOIN clause. Foreign keys help prevent certain anomalies that arise from normalization, such as orphaned children, or resolve them through cascading deletes.

MongoDB takes a different approach: By embedding tightly coupled entities, you solve major integrity concerns upfront. For example, embedding order lines inside their order document means orphaned line items are impossible by design. Referential relationships are handled by application logic, often reading from stable collections (lists of values) before embedding their values into a document.
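
A minimal sketch of that pattern, assuming a stable products collection whose values are snapshotted into the order (all names are illustrative, not from the article):

    # Sketch of application-level referencing: read from a stable "products"
    # collection (a list of values), then embed a snapshot into the order.
    from pymongo import MongoClient

    db = MongoClient("mongodb://localhost:27017")["shop"]

    product = db.products.find_one({"_id": "SKU-1"})        # stable reference data
    db.orders.insert_one({
        "_id": "order-1002",
        "customerId": "cust-42",
        "lines": [{                                          # embedded: orphaned lines are impossible
            "sku": product["_id"],
            "name": product["name"],                         # snapshot of the referenced values
            "unitPrice": product["price"],
            "qty": 3,
        }],
    })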

Because MongoDB models are built for known access patterns and life cycles, referential integrity is maintained through business rules rather than enforced generically. In practice, this better reflects real-world processes, where updates or deletions must follow specific conditions (for example, a price drop might apply to ongoing orders, but a price increase might not).

In relational databases, the schema is application-agnostic, so you must protect against any possible Data Manipulation Language (DML) modifications, not just those that result from valid business transactions. Doing so in the application would require extra locks or higher isolation levels, so it’s often more efficient to declare foreign keys for the database to enforce.

However, when domain use cases are well understood, protections are required for only a few cases and can be integrated into the business logic itself. For example, a product will never be deleted while ongoing transactions are using it. The business workflow often marks the product as unavailable long before it is physically deleted, and transactions are short-lived enough that there’s no overlap, preventing orphans without additional checks.

In domain‑driven models, where the schema is designed around specific application use cases, integrity can be fully managed by the application team alongside the business rules. While additional database verification may serve as a safeguard, it could limit scalability, particularly with sharding, and limit flexibility. An alternative is to run a periodic aggregation pipeline that asynchronously detects anomalies.
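
One way such an asynchronous check could look, assuming orders reference customers through a customerId field (the collection and field names are assumptions for illustration):

    # Periodic anomaly check: find orders whose referenced customer no longer exists.
    from pymongo import MongoClient

    db = MongoClient("mongodb://localhost:27017")["shop"]

    pipeline = [
        {"$lookup": {
            "from": "customers",
            "localField": "customerId",
            "foreignField": "_id",
            "as": "customer",
        }},
        {"$match": {"customer": {"$size": 0}}},   # no matching customer: a possible anomaly
        {"$project": {"_id": 1, "customerId": 1}},
    ]

    for orphan in db.orders.aggregate(pipeline):
        print("dangling reference:", orphan)      # in practice: alert or queue for repair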

Next Time You Hear That Myth

MongoDB does not mean “no design.” It means integrating database design with application design — embedding, referencing, schema validation and application‑level integrity checks to reflect actual domain semantics.

This approach keeps data modeling a first‑class concern for developers, aligning directly with the way domain objects are represented in code. The database structure evolves alongside the application, and integrity is enforced in the same language and pipelines that deliver the application itself.

In environments where DBAs only see the database model and SQL operations, foreign keys may appear indispensable. But in a DevOps workflow where the same team handles both the database and the application, schema rules can be implemented first in code and refined in the database as specifications stabilize. This avoids maintaining two separate models and the associated migration overhead, enabling faster, iterative releases while preserving integrity.

mlrd: DynamoDB-Compatible API on MySQL

Introducing mlrd (“mallard”) to the world: a DynamoDB-compatible API on MySQL. Crazy, but it works really well and I’m confident it will help a lot of businesses save a lot of money. Here’s why.

Support for Crunchy Hardened PostgreSQL Ends Soon: Don’t Get Caught Off Guard.

Support shifts for hardened builds draw quick attention in regulated sectors, so when discussions surface about the future of a distribution, teams responsible for compliance and continuity take notice. Recent community discussions and rumors suggest that Crunchy Hardened PostgreSQL may reach end of support sometime around April 2026, and, while this has not been formally […]

December 11, 2025

Introducing Amazon Aurora powers for Kiro

In this post, we show how you can turn your ideas into full-stack applications with Kiro powers for Aurora. We explore how a new innovation, Kiro powers, can help you build Amazon Aurora best practices into your development workflow, automatically implementing configurations and optimizations that make sure your database layer is production-ready from day one.

Sysbench for MySQL 5.6 through 9.5 on a 2-socket, 24-core server

This has results for the sysbench benchmark on a 2-socket, 24-core server. A post with results from 8-core and 32-core servers is here.

tl;dr

  • old bad news - there were many large regressions from 5.6 to 5.7 to 8.0
  • new bad news - there are some new regressions after MySQL 8.0
Normally I claim that there are few regressions after MySQL 8.0 but that isn't the case here. I also see regressions after MySQL 8.0 on the other larger servers that I use, but that topic will be explained in another post.

Builds, configuration and hardware

I compiled MySQL from source for versions 5.6.51, 5.7.44, 8.0.43, 8.0.44, 8.4.6, 8.4.7, 9.4.0 and 9.5.0.

The server is a SuperMicro SuperWorkstation 7049A-T with 2 sockets, 12 cores/socket, 64G RAM, one m.2 SSD (2TB, ext4 with discard enabled). The OS is Ubuntu 24.04. The CPUs are Intel Xeon Silver 4214R CPU @ 2.40GHz.

The config files are here for 5.6, 5.7, 8.0, 8.4 and 9.x.

Benchmark

I used sysbench and my usage is explained here. I now run 32 of the 42 microbenchmarks listed in that blog post. Most test only one type of SQL statement. Benchmarks are run with the database cached by InnoDB.

The read-heavy microbenchmarks are run for 600 seconds and the write-heavy for 900 seconds. The benchmark is run with 16 clients and 8 tables with 10M rows per table. 

The purpose is to search for regressions from new CPU overhead and mutex contention. The workload is cached -- there should be no read IO but will be some write IO.

Results

The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

I provide charts below with relative QPS. The relative QPS is the following:
(QPS for some version) / (QPS for base version)
When the relative QPS is > 1 then some version is faster than the base version.  When it is < 1 then there might be a regression. When the relative QPS is 1.2 then some version is about 20% faster than the base version.
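
Spelled out as code, the calculation is just a ratio; the QPS values below are invented for illustration:

    # Relative QPS = QPS of the tested version / QPS of the base version.
    def relative_qps(qps_version: float, qps_base: float) -> float:
        return qps_version / qps_base

    print(relative_qps(9600.0, 8000.0))   # 1.2 -> about 20% faster than the base
    print(relative_qps(6800.0, 8000.0))   # 0.85 -> a possible regression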

I present two sets of charts. One set uses MySQL 5.6.51 as the base version, which is my standard practice. The other uses MySQL 8.0.44 as the base version to show how performance changes after 8.0.

Values from iostat and vmstat divided by QPS are here. These can help explain why something is faster or slower because they show how much HW is used per request, including CPU overhead per operation (cpu/o) and context switches per operation (cs/o), which are often a proxy for mutex contention.

The spreadsheet and charts are here and in some cases are easier to read than the charts below. Converting the Google Sheets charts to PNG files does the wrong thing for some of the test names listed at the bottom of the charts below.

Results: point queries

Summary
  • from 5.6 to 5.7 there are big improvements for 5 tests, no changes for 2 tests and small regressions for 2 tests
  • from 5.7 to 8.0 there are big regressions for all tests
  • from 8.0 to 9.5 performance is stable
  • for 9.5 the common result is ~20% less throughput vs 5.6
Using vmstat from the hot-points test to understand the performance changes (see here)
  • context switch rate (cs/o) is stable, mutex contention hasn't changed
  • CPU per query (cpu/o) drops by 35% from 5.6 to 5.7
  • CPU per query (cpu/o) grows by 23% from 5.7 to 8.0
  • CPU per query (cpu/o) is stable from 8.0 through 9.5
Results: range queries without aggregation

Summary
  • from 5.6 to 5.7 throughput drops by 10% to 15%
  • from 5.7 to 8.0 throughput drops by about 15%
  • from 8.0 to 9.5 throughput is stable
  • for 9.5 the common result is ~30% less throughput vs 5.6
Using vmstat from the scan test to understand the performance changes (see here)
  • context switch rates are low and can be ignored
  • CPU per query (cpu/o) grows by 11% from 5.6 to 5.7
  • CPU per query (cpu/o) grows by 15% from 5.7 to 8.0
  • CPU per query (cpu/o) is stable from 8.0 through 9.5
Results: range queries with aggregation

Summary
  • from 5.6 to 5.7 there are big improvements for 2 tests, no change for 1 test and regressions for 5 tests
  • from 5.7 to 8.0 there are regressions for all tests
  • from 8.0 through 9.5 performance is stable
  • for 9.5 the common result is ~25% less throughput vs 5.6
Using vmstat from the read-only-count test to understand the performance changes (see here)
  • context switch rates are similar
  • CPU per query (cpu/o) grows by 16% from 5.6 to 5.7
  • CPU per query (cpu/o) grows by 15% from 5.7 to 8.0
  • CPU per query (cpu/o) is stable from 8.0 through 9.5
Results: writes

Summary
  • from 5.6 to 5.7 there are big improvements for 9 tests and no changes for 1 test
  • from 5.7 to 8.0 there are regressions for all tests
  • from 8.4 to 9.x there are regressions for 8 tests and no change for 2 tests
  • for 9.5 vs 5.6: 5 are slower in 9.5, 3 are similar and 2 are faster in 9.5
Using vmstat from the insert test to understand the performance changes (see here)
  • in 5.7, CPU per insert drops by 30% while context switch rates are stable vs 5.6
  • in 8.0, CPU per insert grows by 36% while context switch rates are stable vs 5.7
  • in 9.5, CPU per insert grows by 3% while context switch rates grow by 23% vs 8.4
The first chart doesn't truncate the y-axis to show the big improvement for update-index but that makes it hard to see the smaller changes on the other tests.
This chart truncates the y-axis to make it easier to see changes on tests other than update-index.


A Christmas Carol of Two Databases

Being a Tale of Databases, Binary Logs, WAL Files, and the Redemption of Ebenezer Scrooge, DBA Part the First — In Which We Meet Ebenezer Scrooge, Database Administrator Extraordinary It was a cold, dark, and CPU-bound night. The wind blew fierce across the datacenter racks, and the disks did rattle in their trays like bones. […]

December 10, 2025

The insert benchmark on a small server : MySQL 5.6 through 9.5

This has results for MySQL versions 5.6 through 9.5 with the Insert Benchmark on a small server. Results for Postgres on the same hardware are here.

tl;dr

  • good news - there are no large regressions after MySQL 8.0
  • bad news - there are many large regressions from 5.6 to 5.7 to 8.0

Builds, configuration and hardware

I compiled MySQL from source for versions 5.6.51, 5.7.44, 8.0.43, 8.0.44, 8.4.6, 8.4.7, 9.4.0 and 9.5.0.

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM. Storage is one NVMe device for the database using ext-4 with discard enabled. The OS is Ubuntu 24.04. More details on it are here.

The config files are here: 5.6.51, 5.7.44, 8.0.4x, 8.4.x and 9.x.0.

The Benchmark

The benchmark is explained here and is run with 1 client and 1 table. I repeated it with two workloads:
  • cached - the values for X, Y, Z are 30M, 40M, 10M
  • IO-bound - the values for X, Y, Z are 800M, 4M, 1M
The point query (qp100, qp500, qp1000) and range query (qr100, qr500, qr1000) steps are run for 1800 seconds each.

The benchmark steps are:

  • l.i0
    • insert X rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts Y rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and Z rows are inserted and deleted per table.
    • Wait for S seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of S is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested. This step is frequently not IO-bound for the IO-bound workload.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
Results: overview

The performance reports are here for:
The summary sections from the performance reports have 3 tables. The first shows absolute throughput by DBMS tested and benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from MySQL 5.6.51.

When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with yellow for regressions and blue for improvements.

Results: cached

Performance summaries are here for all versions and latest versions. I focus on the latest versions.

Below I use colors to highlight the relative QPS values with yellow for regressions and blue for improvements. There are large regressions from new CPU overheads.
  • the load step (l.i0) is almost 2X faster for 5.6.51 vs 8.4.7 (relative QPS is 0.59)
  • the create index step (l.x) is more than 2X faster for 8.4.7 vs 5.6.51
  • the first write-only step (l.i1) has similar throughput for 5.6.51 and 8.4.7
  • the second write-only step (l.i2) is 14% slower in 8.4.7 vs 5.6.51
  • the range-query steps (qr*) are ~30% slower in 8.4.7 vs 5.6.51
  • the point-query steps (qp*) are 38% slower in 8.4.7 vs 5.6.51

dbms    l.i0  l.x   l.i1  l.i2  qr100  qp100  qr500  qp500  qr1000  qp1000
5.6.51  1.00  1.00  1.00  1.00  1.00   1.00   1.00   1.00   1.00    1.00
5.7.44  0.91  1.53  1.16  1.09  0.83   0.83   0.83   0.84   0.83    0.83
8.0.44  0.60  2.42  1.05  0.87  0.69   0.62   0.70   0.62   0.70    0.62
8.4.7   0.59  2.54  1.04  0.86  0.68   0.61   0.68   0.61   0.67    0.60
9.4.0   0.59  2.57  1.03  0.86  0.69   0.62   0.69   0.62   0.70    0.61
9.5.0   0.59  2.61  1.05  0.85  0.69   0.62   0.69   0.62   0.69    0.62

Results: IO-bound

Performance summaries are here for all versions and latest versions. I focus on the latest versions.

Below I use colors to highlight the relative QPS values with yellow for regressions and blue for improvements. There are large regressions from new CPU overheads.
  • the load step (l.i0) is almost 2X faster for 5.6.51 vs 8.4.7 (relative QPS is 0.60)
  • the create index step (l.x) is more than 2X faster for 8.4.7 vs 5.6.51
  • the first write-only step (l.i1) is 1.54X faster for 8.4.7 vs 5.6.51
  • the second write-only step (l.i2) is 1.82X faster for 8.4.7 vs 5.6.51
  • the range-query steps (qr*) are ~20% slower in 8.4.7 vs 5.6.51
  • the point-query steps (qp*) are 13% slower, 3% slower and 17% faster in 8.4.7 vs 5.6.51

dbms    l.i0  l.x   l.i1  l.i2  qr100  qp100  qr500  qp500  qr1000  qp1000
5.6.51  1.00  1.00  1.00  1.00  1.00   1.00   1.00   1.00   1.00    1.00
5.7.44  0.91  1.42  1.52  1.78  0.84   0.92   0.87   0.97   0.93    1.17
8.0.44  0.62  2.58  1.56  1.81  0.76   0.88   0.79   0.99   0.85    1.18
8.4.7   0.60  2.65  1.54  1.82  0.74   0.87   0.77   0.98   0.82    1.17
9.4.0   0.61  2.68  1.52  1.76  0.75   0.86   0.80   0.97   0.85    1.16
9.5.0   0.60  2.75  1.53  1.73  0.75   0.87   0.79   0.97   0.84    1.17

The insert benchmark on a small server : Postgres 12.22 through 18.1

This has results for Postgres versions 12.22 through 18.1 with the Insert Benchmark on a small server.

Postgres continues to be boring in a good way. It is hard to find performance regressions.

 tl;dr for a cached workload

  • performance has been stable from Postgres 12 through 18
tl;dr for an IO-bound workload
  • performance has mostly been stable
  • create index has been ~10% faster since Postgres 15
  • throughput for the write-only steps has been ~10% less since Postgres 15
  • throughput for the point-query steps (qp*) has been ~20% better since Postgres 13
Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions 12.22, 13.22, 13.23, 14.19, 14.20, 15.14, 15.15, 16.10, 16.11, 17.6, 17.7, 18.0 and 18.1.

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM. Storage is one NVMe device for the database using ext-4 with discard enabled. The OS is Ubuntu 24.04. More details on it are here.

For versions prior to 18, the config files are named conf.diff.cx10a_c8r32; they are as similar as possible and are here for versions 12, 13, 14, 15, 16 and 17.

For Postgres 18 I used 3 variations, which are here:
  • conf.diff.cx10b_c8r32
    • uses io_method='sync' to match Postgres 17 behavior
  • conf.diff.cx10c_c8r32
    • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
  • conf.diff.cx10d_c8r32
    • uses io_method='io_uring' to do async IO via io_uring
The Benchmark

The benchmark is explained here and is run with 1 client and 1 table. I repeated it with two workloads:
  • cached - the values for X, Y, Z are 30M, 40M, 10M
  • IO-bound - the values for X, Y, Z are 800M, 4M, 1M
The point query (qp100, qp500, qp1000) and range query (qr100, qr500, qr1000) steps are run for 1800 seconds each.

The benchmark steps are:

  • l.i0
    • insert X rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts Y rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and Z rows are inserted and deleted per table.
    • Wait for S seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of S is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested. This step is frequently not IO-bound for the IO-bound workload.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
Results: overview

The performance reports are here for:
The summary sections from the performance reports have 3 tables. The first shows absolute throughput by DBMS tested and benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from Postgres 12.22.

When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
This statement doesn't apply to this blog post, but I keep it here for copy/paste into future posts. Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.

Results: cached

The performance summaries are here for all versions and latest versions.

I focus on the latest versions. Throughput for 18.1 is within 2% of 12.22, with the exception of the l.i2 benchmark step. This is great news because it means that Postgres has avoided introducing new CPU overhead as they improve the DBMS. There is some noise from the l.i2 benchmark step and that doesn't surprise me because it is likely variance from two issues -- vacuum and get_actual_variable_range.

Results: IO-bound

The performance summaries are here for all versions and latest versions.

I focus on the latest versions.
  • throughput for the load step (l.i0) is 1% less in 18.1 vs 12.22
  • throughput for the index step (l.x) is 13% better in 18.1 vs 12.22
  • throughput for the write-only steps (l.i1, l.i2) is 11% and 12% less in 18.1 vs 12.22
  • throughput for the range-query steps (qr*) is 2%, 3% and 3% less in 18.1 vs 12.22
  • throughput for the point-query steps (qp*) is 22%, 23% and 23% better in 18.1 vs 12.22
The improvements for the index step arrived in Postgres 15.

The regressions for the write-only steps arrived in Postgres 15 and are likely from two issues -- vacuum and get_actual_variable_range

The improvements for the point-query steps arrived in Postgres 13.


    Rotate SSL/TLS Certificates in Valkey/Redis Without Downtime

    If your Valkey/Redis deployments use SSL/TLS, you will eventually need to rotate the TLS certificates. Perhaps it is because the certificates are expiring, or you made mistakes when creating them, or it could be that the private key has been leaked. This article explains the process of rotating the TLS/SSL certificates used by Valkey/Redis deployments […]

    December 09, 2025

    How to Turn a MySQL Unique Key Into a Primary Key

    A unique constraint specifies one or more columns as unique. At its core, it is satisfied only when no two rows store the same non-null values in those columns. A primary key constraint is a unique constraint declared with PRIMARY KEY. It is satisfied only when […]

    December 08, 2025

    RocksDB performance over time on a small Arm server

    This post has results for RocksDB on an Arm server. I previously shared results for RocksDB performance using gcc and clang. Here I share results using clang with LTO.

    RocksDB is boring, there are few performance regressions.

    tl;dr

    • for cached workloads throughput with RocksDB 10.8 is as good or better than with 6.29
    • for not-cached workloads throughput with RocksDB 10.8 is similar to 6.29 except for the overwrite test where it is 7% less, probably from correctness checks added in 7.x and 8.x.

    Software

    I used RocksDB versions 6.29, 7.0, 7.10, 8.0, 8.4, 8.8, 8.11, 9.0, 9.4, 9.8, 9.11 and 10.0 through 10.8.

    I compiled each version with clang version 18.3.1 and link-time optimization (LTO) enabled. The build command line was:

    flags=( DISABLE_WARNING_AS_ERROR=1 DEBUG_LEVEL=0 V=1 VERBOSE=1 )

    # for clang+LTO
    AR=llvm-ar-18 RANLIB=llvm-ranlib-18 CC=clang CXX=clang++ \
        make USE_LTO=1 "${flags[@]}" static_lib db_bench

    Hardware

    I used a small Arm server from the Google cloud running Ubuntu 22.04. The server type was c4a-standard-8-lssd with 8 cores and 32G of RAM. Storage was 2 local SSDs with RAID 0 and ext-4.

    Benchmark

    Overviews on how I use db_bench are here and here.

    The benchmark was run with 1 thread and used the LRU block cache.

    Tests were run for three workloads:

    • byrx - database cached by RocksDB
    • iobuf - database is larger than RAM and RocksDB used buffered IO
    • iodir - database is larger than RAM and RocksDB used O_DIRECT

    The benchmark steps that I focus on are:
    • fillseq
      • load RocksDB in key order with 1 thread
    • revrangeww, fwdrangeww
      • do reverse or forward range queries with a rate-limited writer. Report performance for the range queries
    • readww
      • do point queries with a rate-limited writer. Report performance for the point queries.
    • overwrite
      • overwrite (via Put) random keys

    Relative QPS

    Many of the tables below (inlined and via URL) show the relative QPS which is:
        (QPS for my version / QPS for RocksDB 6.29)

    The base version varies and is listed below. When the relative QPS is > 1.0 then my version is faster than RocksDB 6.29. When it is < 1.0 then there might be a performance regression or there might just be noise.

    The spreadsheet with numbers and charts is here. Performance summaries are here.

    Results: byrx

    This has results for the byrx workload where the database is cached by RocksDB.

    RocksDB 10.x is faster than 6.29 for all tests.

    Results: iobuf

    This has results for the iobuf workload where the database is larger than RAM and RocksDB used buffered IO.

    Performance in RocksDB 10.x is about the same as 6.29 except for overwrite. I think the performance decreases in overwrite that arrived in versions 7.x and 8.x are from new correctness checks and throughput in 10.8 is 7% less than in 6.29. The big drop for fillseq in 10.6.2 was from bug 13996.

    Results: iodir

    This has results for the iodir workload where the database is larger than RAM and RocksDB used O_DIRECT.

    Performance in RocksDB 10.x is about the same as 6.29 except for overwrite. I think the performance decreases in overwrite that arrived in versions 7.x and 8.x are from new correctness checks and throughput in 10.8 is 7% less than in 6.29. The big drop for fillseq in 10.6.2 was from bug 13996.

    Brainrot

    I drive my daughter to school as part of a car pool. Along the way, I am learning a new language, Brainrot.

    So what is brainrot? It is what you get when you marinate your brain with silly TikTok, YouTube Shorts, and Reddit memes. It is slang for "my attention span is fried and I like it". Brainrot is a self-deprecating language. Teens are basically saying: I know this is dumb, but I am choosing to speak it anyway.

    What makes brainrot different from old-school slang is its speed and scale. When we were teenagers, slang spread by word of mouth. It mostly stayed local in our school hallways or neighborhood. Now memes go global in hours. A meme is born in Seoul at breakfast and widespread in Ohio by six seven pm. The language mutates at escape velocity and gets weird fast. 

    Someone even built a brainrot programming language. The joke runs deep, and is getting some infrastructure.


    Here are a few basic brainrot terms you will hear right away.

    • He is cooked: It means he is finished, doomed, beyond saving.
    • He is cooking: The opposite. It means he is doing something impressive. Let him cook.
    • Mewing: Jawline exercises that started half as fitness advice and half as a meme. Now it mostly means trying too hard to look sharp.
    • Aura: Your invisible social vibe. You either have it or you do not. Your aura-farming does not impress my teens.
    • NPC: Someone who acts on autopilot, like a background character in a game.
    • Unc: An out-of-touch older guy.  That would be me?


    I can anticipate the reactions to this post. Teenagers will shrug: "Obviously. How is this news?" Parents of teens will laugh in recognition. Everyone else will be lost and move away.

    I have seen things. I am not 50 yet, but I am getting there. I usually write about distributed systems and databases. I did not plan to write this post. This post insisted on being written through me. We will return to our regularly scheduled programming.

    But here is my real point. I think the kids are alright.

    It is not uncommon for a generation to get declared doomed by the one before it, yet Gen Z and Gen Alpha may have taken the heaviest hit and been written off as lost causes. But what I see is a generation with sharp self-mocking humor. They have short attention spans for things they do not care about. I think they do this out of sincerity. They don't see the purpose in mundane things, or, for many things, they feel they lack enough agency. But what is new is how hard they can lock in (focus) on what they care about, and how fast they can form real bonds around shared interests. They are more open with each other. They are more inclusive by default. They are a feeling bunch. They waste no patience on things they find pointless. But when it matters, they show up fully.

    From the outside, their culture may look absurd and chaotic. But, under the memes, I see a group that feels deeply, adapts quickly, and learns in public. They are improvising in real time. And despite all predictions to the contrary, they might actually know what they are doing.

    Unlocking Secure Connections: SSL/TLS Support in Percona Toolkit

    In today’s interconnected world, data security is paramount. Protecting sensitive information transmitted between applications and databases is crucial, and SSL/TLS (Secure Sockets Layer/Transport Layer Security) plays a vital role in achieving this. Percona Toolkit, a collection of command-line tools for MySQL, MongoDB, and other databases, has long been a go-to resource for database administrators. In […]

    December 05, 2025

    Community Erosion Post License Change: Quantifying the Power of Open Source

    Summary This article is a detailed analysis of the impact of the Redis license change to a non-open-source one on its community. To summarize the findings:  37.5% of contributors (9 of 24) stopped contributing to Redis after the fork Valkey grew from 18 to 49 contributors in 18 months Valkey averages 80 PRs/month in 2025 […]

    December 04, 2025

    "Horses" Feels Tame

    From a letter to Valve Corporation’s CEO Gabe Newell, lightly edited for formatting and links.

    Dear Mr. Newell,

    Steam has been my main source for games for over twenty years. I am disheartened that you chose not to publish Santa Ragione’s recently released game, Horses.

    I’ve read some substantive critique; Polygon and Rock Paper Shotgun’s among them. I have also read Santa Ragione’s discussion of the ban. I bought Horses on Itch and played it through; I found it enjoyable and thought provoking, though not transformative. I was surprised to find a much tamer experience than I had been led to believe. I am particularly concerned that Steam found this title dangerous enough to ban it.

    Is Horses unsettling? Yes, though you would see far worse in popular horror films. I find Hostel or Saw gut-wrenching to near-unwatchable; Horses felt almost cute by comparison. It is nowhere near Argento’s classic Suspiria or Aster’s harrowing Midsommar, which deals with similar themes. The game’s pixelated censorship, silly animations, and cutting away from its worst moments come across as coy, even camp. I suspect this is intentional: the game is in large part concerned with authoritarianism and the reproductive dynamics (in all senses) of cinema and games themselves. It is also concerned with complicity: the character’s choices to voice disgust or approval have apparently no impact on the story. Its four explicit themes—laid out in the embedded narrative of a VHS tape you must watch and decode to progress—are the repression of patriarchy, religion, chastity, and silence.

    Steam has long been willing to publish works engaging with brutal dehumanization and authoritarian violence. Games like Amnesia: A Machine for Pigs or SOMA depict humans involuntarily warped, both physically and mentally, beyond recognition. Like Horses, Amnesia uses horror as a lens for the industrial revolution and the power of the wealthy. Valve’s own work has not shied away from horror; Half Life 2 is entirely about the visceral subjugation of political and bodily autonomy. Valve gave us the headcrab and the stalker—both instances of forcible objectification—and the game’s camera shies away from neither.

    What, then, makes Horses unpublishable? Surely not violence, or you’d have pulled half the Steam catalogue by now. It can’t be body horror: I flinched far more at Dead Space 2’s eyeball scene. Could it be nudity? Half Life 2’s stalkers are fully nude and completely uncensored; I find the image of their mutilated bodies far more visually disturbing than the titular horses. Is it sex? Steam publishes the wildly popular Cyberpunk 2077, which has no shortage of breasts, vaginas, penises, and first-person sex scenes. It also depicts rape, torture, murder, and young boys hooded, restrained, and tortured as livestock on a farm. Could Horses be at fault for flagellation? Surely not; Steam published Robert Yang’s charming Hurt Me Plenty, where players directly spank a simulated consensual partner. Is it complicity in authoritarian abuse? Lucas Pope’s highly-regarded Papers Please, also on Steam, puts players in the increasingly disturbing role of enforcing a fascist state’s border. It too contains pixelated nudity and violence.

    I love cute, safe, and friendly games; Astroneer is my jam! And as an adult, I also enjoy challenging, even harrowing narratives. For me, Horses falls somewhere in the middle—one might call it the Animal Farm of fascist horror parables. I think Steam ought to publish it, and more transgressive works as well.

    Yours truly,

    Kyle Kingsbury

    Cloud-Native MySQL High Availability: Understanding Virtually SYNC and ASYNC Replication

    When we run databases in Kubernetes, we quickly learn one important truth: things will fail, and we need to be prepared for this. Pods are ephemeral; nodes can come and go, storage is abstracted behind PersistentVolumes and can be either local to a node or backed by network storage, and Kubernetes moves workloads as needed […]

    Build "Sign in with Your App" using Supabase Auth

    Supabase Auth now supports OAuth 2.1 and OpenID Connect server capabilities, turning your project into a full-fledged identity provider for AI agents, third-party developers, and enterprise SSO.