A curated list of database news from authoritative sources

October 03, 2025

How to URL-encode query parameters in ClickHouse

Learn how to safely URL-encode query parameters in ClickHouse using encodeURLFormComponent, including syntax, examples, and performance tips for web applications.

October 02, 2025

How to extract the protocol of a URL in ClickHouse

Learn how to extract URL protocols in ClickHouse using the protocol() function with practical examples, performance tips, and real-time API implementation.

How to round dates in ClickHouse

Master ClickHouse date rounding with toStartOfYear, toStartOfMonth, toStartOfWeek and more - complete guide with syntax, examples, and API integration.

Measuring scaleup for MariaDB with sysbench

This post has results to measure scaleup for MariaDB 11.8.3 on a 48-core server.

tl;dr

  • Scaleup is better for range queries than for point queries
  • For tests where results were less than great, the problem appears to be mutex contention within InnoDB

Builds, Configuration & Hardware

The server has an AMD EPYC 9454P 48-Core Processor with AMD SMT disabled, 128G of RAM and SW RAID 0 with 2 NVMe devices. The OS is Ubuntu 22.04.

I compiled MariaDB 11.8.3 from source and the my.cnf file is here.

Benchmark

I used sysbench and my usage is explained here. To save time I only run 32 of the 42 microbenchmarks, and most of them test only 1 type of SQL statement. Benchmarks are run with the database cached by MariaDB. Each microbenchmark is run for 300 seconds.

The benchmark is run with 1, 2, 4, 8, 12, 16, 20, 24, 32, 40 and 48 clients. The purpose is to determine how well MariaDB scales up.

Results

The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

I still use relative QPS here, but in a different way. The relative QPS here is:
(QPS at X clients) / (QPS at 1 client)

The goal is to determine scaleup efficiency for MariaDB. When the relative QPS at X clients is a value near X, then things are great. But sometimes things aren't great and the relative QPS is much less than X. One issue is data contention for some of the write-heavy microbenchmarks. Another issue is mutex and rw-lock contention.
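
To make that concrete, here is a minimal Python sketch of the scaleup computation, using invented QPS numbers:

# Relative QPS = (QPS at X clients) / (QPS at 1 client); a value near X
# means near-linear scaleup. The QPS numbers below are made up.
qps = {1: 10_000, 2: 19_500, 8: 71_000, 16: 120_000, 48: 260_000}
for clients in sorted(qps):
    rel = qps[clients] / qps[1]
    print(f"{clients:>2} clients: relative QPS = {rel:5.1f} (ideal = {clients})")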

Perf debugging via vmstat and iostat

I use normalized results from vmstat and iostat to help explain why things aren't as fast as expected. By normalized I mean I divide the average values from vmstat and iostat by QPS to see things like how much CPU is used per query or how many context switches occur per write. And note that a high context switch rate is often a sign of mutex contention.
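
As an illustration of that arithmetic, a small Python sketch with invented vmstat averages (the cs/o and cpu/o names match the columns in the spreadsheets):

# Divide average vmstat values by QPS to get per-query cost.
# All numbers here are hypothetical.
avg = {"cs": 450_000.0, "us": 60.0, "sy": 25.0}  # context switches/s, CPU%
qps = 120_000.0
cs_per_query = avg["cs"] / qps                   # the cs/o column
cpu_per_query = (avg["us"] + avg["sy"]) / qps    # the cpu/o column
print(f"cs/o = {cs_per_query:.2f}  cpu/o = {cpu_per_query:.6f}")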

Charts: point queries

The spreadsheet with all of the results is here.

For point queries:

  • tests for which the relative QPS at 48 clients is greater than 40
    • point-query
  • tests for which the relative QPS at 48 clients is between 30 and 40
    • none
  • tests for which the relative QPS at 48 clients is between 20 and 30
    • hot-points, points-covered-si, random-points_range=10
  • tests for which the relative QPS at 48 clients is between 10 and 20
    • points-covered-pk, points-notcovered-pk, points-notcovered-si, random-points_range=100
  • tests for which the relative QPS at 48 clients is less than 10
    • random-points_range=1000

For 5 of the 9 point query tests, QPS stops improving beyond 16 clients. And I assume that mutex contention is the problem.

Results for the random-points_range=Z tests are interesting. They use oltp_inlist_select.lua, which does a SELECT with a large IN-list where the IN-list entries can find rows by exact match on the PK. The value of Z is the number of entries in the IN-list. And here MariaDB scales worse with a larger Z (1000) than with a smaller Z (10 or 100), which means that the thing that limits scaleup is more likely in InnoDB than in the parser or optimizer.
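
For illustration, a rough Python sketch of the query shape that oltp_inlist_select.lua sends; the table and column names are the usual sysbench ones, but the exact SQL text is an assumption:

# Build a SELECT whose IN-list has Z entries, each matching a row by PK.
import random

def inlist_select(table: str, z: int, max_id: int) -> str:
    ids = random.sample(range(1, max_id + 1), z)
    return f"SELECT id, k FROM {table} WHERE id IN ({','.join(map(str, ids))})"

print(inlist_select("sbtest1", 10, 10_000_000))  # Z=10; scaleup is worse at Z=1000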

In the normalized vmstat metrics (see here), the number of context switches per query (the cs/o column) grows much more from 1 to 48 clients for random-points_range=1000 than for random-points_range=10. The ratio (cs/o at 48 clients / cs/o at 1 client) is 1.46 for random-points_range=10 but increases to 19.96 for random-points_range=1000. The problem appears to be mutex contention.

Charts: range queries without aggregation

The spreadsheet with all of the results is here.

For range queries without aggregation:

  • tests for which the relative QPS at 48 clients is greater than 40
    • range-covered-pk, range-covered-si, range-notcovered-pk
  • tests for which the relative QPS at 48 clients is between 30 and 40
    • scan
  • tests for which the relative QPS at 48 clients is between 20 and 30
    • none
  • tests for which the relative QPS at 48 clients is between 10 and 20
    • none
  • tests for which the relative QPS at 48 clients is less than 10
    • range-notcovered-si

Only one test has less than great results for scaleup -- range-notcovered-si. QPS for it stops growing beyond 12 clients. The root cause appears to be mutex contention, based on the large value for cs/o in the normalized vmstat metrics (see here). Of all the range-*covered-* tests, range-notcovered-si has the most InnoDB activity per query -- the query isn't covering, so it must do a PK index access for each entry it finds in the secondary index.

Charts: range queries with aggregation

The spreadsheet with all of the results is here.

For range queries with aggregation:

  • tests for which the relative QPS at 48 clients is greater than 40
    • read-only-distinct, read-only-order, read-only-range=Y, read-only-sum
  • tests for which the relative QPS at 48 clients is between 30 and 40
    • read-only-count, read-only-simple
  • tests for which the relative QPS at 48 clients is between 20 and 30
    • none
  • tests for which the relative QPS at 48 clients is between 10 and 20
    • none
  • tests for which the relative QPS at 48 clients is less than 10
    • none

Results here are excellent, and better than the results above for range queries without aggregation. The difference might mean that there is less concurrent activity within InnoDB because aggregation code is run after each row is fetched from InnoDB.

Charts: writes

The spreadsheet with all of the results is here.

For writes:

  • tests for which the relative QPS at 48 clients is greater than 40
    • none
  • tests for which the relative QPS at 48 clients is between 30 and 40
    • read-write_range=Y
  • tests for which the relative QPS at 48 clients is between 20 and 30
    • update-index, write-only
  • tests for which the relative QPS at 48 clients is between 10 and 20
    • delete, insert, update-inlist, update-nonindex, update-zipf
  • tests for which the relative QPS at 48 clients is less than 10
    • update-one

The best result is for the read-write_range=Y tests, which run the classic sysbench transaction that does a mix of writes, point queries, and range queries.

The worst result is from update-one which suffers from data contention as all updates are to the same row. A poor result is expected here.



October 01, 2025

The Redis License Has Changed: What You Need to Know

Redis has always been the go-to when you need fast, in-memory data storage. You’ll find it everywhere. Big ecommerce sites. Mobile apps. Maybe your own projects, too. But if you’re relying on Redis today, you’re facing a new reality: the licensing terms have changed, and that shift could affect the way you use Redis going […]

September 30, 2025

How to get the hostname from a URL in ClickHouse

Learn how to extract hostnames from URLs in ClickHouse using the domain() function, plus performance tips and real-world examples for web analytics.

How to decode URL-encoded strings in ClickHouse

Learn how to decode URL-encoded strings in ClickHouse using decodeURLComponent, with performance tips, edge cases, and production deployment strategies.

How to parse numeric date formats in ClickHouse

Learn how to convert numeric date formats to ClickHouse Date/DateTime types using YYYYMMDDToDate functions for better performance and built-in date operations.

Tackling the Cache Invalidation and Cache Stampede Problem in Valkey with Debezium Platform

There are two hard problems in computer science: cache invalidation, naming things, and off-by-1 errors. This classic joke, often attributed to Phil Karlton, highlights a very real and persistent challenge for software developers. We’re constantly striving to build faster, more responsive systems, and caching is a fundamental strategy for achieving that. But while caching offers […]

September 29, 2025

Postgres 18.0 vs sysbench on a 24-core, 2-socket server

This post has results from sysbench run at higher concurrency for Postgres versions 12 through 18 on a server with 24 cores and 2 sockets. My previous post had results for sysbench run with low concurrency. The goal is to search for regressions from new CPU overhead and mutex contention.

tl;dr, from Postgres 17.6 to 18.0

  • For most microbenchmarks Postgres 18.0 is between 1% and 3% slower than 17.6
  • The root cause might be new CPU overhead. It will take more time to gain confidence in results like this. On other servers with sysbench run at low concurrency I only see regressions for some of the range-query microbenchmarks. Here I see them for point-query and writes.

tl;dr, from Postgres 12.22 through 18.0

  • For point queries Postgres 18.0 is usually about 5% faster than 12.22
  • For range queries Postgres 18.0 is usually as fast as 12.22
  • For writes Postgres 18.0 is much faster than 12.22

Builds, configuration and hardware

I compiled Postgres from source for versions 12.22, 13.22, 14.19, 15.14, 16.10, 17.6, and 18.0.

The server is a SuperMicro SuperWorkstation 7049A-T with 2 sockets, 12 cores/socket, and 64G of RAM. The CPU is an Intel Xeon Silver 4214R @ 2.40GHz. It runs Ubuntu 24.04. Storage is a 1TB m.2 NVMe device with ext-4 and discard enabled.

Prior to 18.0, the configuration file was named conf.diff.cx10a_c24r64 and is here for 12.22, 13.22, 14.19, 15.14, 16.10 and 17.6.

For 18.0 I tried 3 configuration files: x10b, x10c and x10d.

Benchmark

I used sysbench and my usage is explained here. To save time I only run 32 of the 42 microbenchmarks, and most of them test only 1 type of SQL statement. Benchmarks are run with the database cached by Postgres.

The read-heavy microbenchmarks run for 600 seconds and the write-heavy for 900 seconds.

The benchmark is run with 16 clients and 8 tables with 10M rows per table. The purpose is to search for regressions from new CPU overhead and mutex contention.

Results

The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

I provide charts below with relative QPS. The relative QPS is the following:
(QPS for some version) / (QPS for base version)
When the relative QPS is > 1 then some version is faster than the base version. When it is < 1 then there might be a regression. Values from iostat and vmstat divided by QPS are also provided here. These can help to explain why something is faster or slower because they show how much HW is used per request.

I present results for:
  • versions 12 through 18 using 12.22 as the base version
  • versions 17.6 and 18.0 using 17.6 as the base version

Results: Postgres 17.6 and 18.0

Results per microbenchmark from vmstat and iostat are here.

For point queries, 18.0 often gets between 1% and 3% less QPS than 17.6 and the root cause might be new CPU overhead. See the cpu/o column (CPU per query) in the vmstat metrics here for the random-points microbenchmarks.

For range queries, 18.0 often gets between 1% and 3% less QPS than 17.6 and the root cause might be new CPU overhead. See the cpu/o column (CPU per query) in the vmstat metrics here for the read-only_range=X microbenchmarks.

For writes, 18.0 often gets between 1% and 2% less QPS than 17.6 and the root cause might be new CPU overhead. I ignore the write-heavy microbenchmarks that also do queries, as the regressions for them might be from the queries. See the cpu/o column (CPU per query) in the vmstat metrics here for the update-index microbenchmark.

Relative to: 17.6
col-1 : 18.0 with the x10b config
col-2 : 18.0 with the x10c config
col-3 : 18.0 with the x10d config

col-1   col-2   col-3   point queries
1.00    0.99    1.00    hot-points_range=100
0.99    0.98    1.00    point-query_range=100
0.98    0.99    0.99    points-covered-pk_range=100
0.99    0.99    0.98    points-covered-si_range=100
0.97    0.99    0.98    points-notcovered-pk_range=100
0.98    0.99    0.97    points-notcovered-si_range=100
0.98    0.99    0.98    random-points_range=1000
0.97    0.99    0.98    random-points_range=100
0.99    0.99    0.98    random-points_range=10

col-1   col-2   col-3   range queries without aggregation
0.98    0.98    0.99    range-covered-pk_range=100
0.98    0.98    0.98    range-covered-si_range=100
0.98    0.99    0.98    range-notcovered-pk_range=100
1.00    1.02    0.99    range-notcovered-si_range=100
1.01    1.01    1.01    scan_range=100

col-1   col-2   col-3   range queries with aggregation
0.99    1.00    0.98    read-only-count_range=1000
0.98    0.98    0.98    read-only-distinct_range=1000
0.97    0.97    0.96    read-only-order_range=1000
0.97    0.98    0.97    read-only_range=10000
0.98    0.99    0.98    read-only_range=100
0.99    0.99    0.99    read-only_range=10
0.98    0.99    0.99    read-only-simple_range=1000
0.98    1.00    0.98    read-only-sum_range=1000

col-1   col-2   col-3   writes
0.99    0.99    0.99    delete_range=100
0.99    0.99    0.99    insert_range=100
0.98    0.98    0.98    read-write_range=100
0.99    1.00    0.99    read-write_range=10
0.99    0.98    0.97    update-index_range=100
0.99    0.99    1.00    update-inlist_range=100
1.00    0.97    0.99    update-nonindex_range=100
0.97    1.00    0.98    update-one_range=100
1.00    0.99    1.01    update-zipf_range=100
0.98    0.98    0.97    write-only_range=10000

Results: Postgres 12 to 18

For the Postgres 18.0 results in col-6, the result is in green when relative QPS is >= 1.05 and in yellow when relative QPS is <= 0.98. Yellow indicates a possible regression.

Results per microbenchmark from vmstat and iostat are here.

Relative to: 12.22
col-1 : 13.22
col-2 : 14.19
col-3 : 15.14
col-4 : 16.10
col-5 : 17.6
col-6 : 18.0 with the x10b config

col-1   col-2   col-3   col-4   col-5   col-6   point queries
0.98    0.96    0.99    0.98    2.13    2.13    hot-points_range=100
1.00    1.02    1.01    1.02    1.03    1.01    point-query_range=100
0.99    1.05    1.05    1.08    1.07    1.05    points-covered-pk_range=100
0.99    1.08    1.05    1.07    1.07    1.05    points-covered-si_range=100
0.99    1.04    1.05    1.06    1.07    1.05    points-notcovered-pk_range=100
0.99    1.05    1.04    1.05    1.06    1.04    points-notcovered-si_range=100
0.98    1.03    1.04    1.06    1.06    1.04    random-points_range=1000
0.98    1.04    1.05    1.07    1.07    1.05    random-points_range=100
0.99    1.02    1.03    1.05    1.05    1.04    random-points_range=10

col-1   col-2   col-3   col-4   col-5   col-6   range queries without aggregation
1.02    1.04    1.03    1.04    1.03    1.01    range-covered-pk_range=100
1.05    1.07    1.06    1.06    1.06    1.05    range-covered-si_range=100
0.99    1.00    1.00    1.00    1.01    0.98    range-notcovered-pk_range=100
0.97    0.99    1.00    1.01    1.01    1.01    range-notcovered-si_range=100
0.86    1.06    1.08    1.17    1.18    1.20    scan_range=100

col-1   col-2   col-3   col-4   col-5   col-6   range queries with aggregation
0.98    0.97    0.97    1.00    0.98    0.97    read-only-count_range=1000
0.99    0.99    1.02    1.02    1.01    0.99    read-only-distinct_range=1000
1.00    0.99    1.02    1.05    1.05    1.02    read-only-order_range=1000
0.99    0.99    1.04    1.07    1.09    1.06    read-only_range=10000
0.99    1.00    1.00    1.01    1.02    0.99    read-only_range=100
1.00    1.00    1.00    1.01    1.01    1.00    read-only_range=10
0.99    0.99    1.00    1.01    1.01    0.99    read-only-simple_range=1000
0.98    0.99    0.99    1.00    1.00    0.98    read-only-sum_range=1000

col-1   col-2   col-3   col-4   col-5   col-6   writes
0.98    1.09    1.09    1.04    1.29    1.27    delete_range=100
0.99    1.03    1.02    1.03    1.08    1.07    insert_range=100
1.00    1.03    1.04    1.05    1.07    1.05    read-write_range=100
1.01    1.09    1.09    1.09    1.15    1.14    read-write_range=10
1.00    1.04    1.03    0.86    1.44    1.42    update-index_range=100
1.01    1.11    1.11    1.12    1.13    1.12    update-inlist_range=100
0.99    1.04    1.06    1.05    1.25    1.25    update-nonindex_range=100
1.05    0.92    0.90    0.84    1.18    1.15    update-one_range=100
0.98    1.04    1.03    1.01    1.26    1.26    update-zipf_range=100
1.02    1.05    1.10    1.09    1.21    1.18    write-only_range=10000

New File Copy-Based Initial Sync Overwhelms the Logical Initial Sync in Percona Server for MongoDB

In a previous article, Scalability for the Large-Scale: File Copy-Based Initial Sync for Percona Server for MongoDB, we presented some early benchmarks of the new File Copy-Based Initial Sync (FCBIS) available in Percona Server for MongoDB. Those first results already suggested significant improvements compared to the default Logical Initial Sync. In this post, we extend our […]

September 28, 2025

WiredTigerHS.wt: MongoDB MVCC Durable History Store

MongoDB uses the WiredTiger storage engine, which implements Multi‑Version Concurrency Control (MVCC) to provide lock‑free read consistency, similar to many RDBMS. Unlike many RDBMS, it follows a No‑Force/No‑Steal policy: uncommitted changes stay only in memory. They are never written to disk, and committed changes are written later — at checkpoint or when cache eviction needs space — into the WiredTiger table files we have explored in the previous post, persisting only the latest committed version.
MongoDB also maintains recent committed MVCC versions for a specified period in a separate, durable history store (WiredTigerHS.wt). This enables the system to reconstruct snapshots from earlier points in time. In the previous article in this series, I described all WiredTiger files except WiredTigerHS.wt, because it was empty:

ls -l /data/db/WiredTigerHS.wt

-rw-------. 1 root root 4096 Sep 27 11:01 /data/db/WiredTigerHS.wt

This 4KB file holds no records:

wt -h /data/db dump -j file:WiredTigerHS.wt

{
    "WiredTiger Dump Version" : "1 (12.0.0)",
    "file:WiredTigerHS.wt" : [
        {
            "config" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=,assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,block_manager=default,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,disaggregated=(page_log=),encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(compare_timestamp=oldest_timestamp,enabled=false,file_metadata=,metadata_file=,panic_corrupt=true,repair=false),in_memory=false,ingest=,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=16KB,key_format=IuQQ,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=,bloom=,bloom_bit_count=,bloom_config=,bloom_hash_count=,bloom_oldest=,chunk_count_limit=,chunk_max=,chunk_size=,merge_max=,merge_min=),memory_page_image_max=0,memory_page_max=5MB,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,stable=,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=0),type=file,value_format=QQQu,verbose=[],write_timestamp_usage=none",
            "colgroups" : [],
            "indices" : []
        },
        {
            "data" : [            ]
        }
    ]
}

The file contains only a header block with the configuration metadata. It defines the key and value format:

key_format=IuQQ    
value_format=QQQu  

Those are WiredTiger types: I is a 4-byte integer, Q is an 8-byte integer, and u is a variable-length array of bytes.

The history store key (IuQQ) includes the table identifier (collection), the key in this table (recordID), the MVCC start timestamp (indicating when this version was current), and a counter (for multiple updates at the same timestamp). Its value (QQQu) contains the MVCC stop timestamp (when the version became obsolete), the durable timestamp (reflecting when the record reached a persistence point, such as a checkpoint), an update type, and a byte array with the BSON representation of the document version. Start and stop timestamps track version visibility for this document version. The durable timestamp shows when a version is safe to remove, supporting features such as rollback-to-stable, replication catch-up, and crash recovery.
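
To summarize that layout, here is a conceptual Python sketch of the logical fields. WiredTiger packs these with its own variable-length encoding, so this describes the record structure rather than being a working decoder:

from dataclasses import dataclass

@dataclass
class HSKey:          # key_format=IuQQ
    btree_id: int     # I: 4-byte id of the table (collection) file
    record_key: bytes # u: key bytes within that table (the recordID)
    start_ts: int     # Q: MVCC start timestamp of this version
    counter: int      # Q: disambiguates updates at the same timestamp

@dataclass
class HSValue:        # value_format=QQQu
    stop_ts: int      # Q: MVCC stop timestamp (version became obsolete)
    durable_ts: int   # Q: when the version reached a persistence point
    update_type: int  # Q: type of the update
    document: bytes   # u: BSON image of this document version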

To get some records in it, I start MongoDB as a one-member replicaset:

mongod --dbpath /data/db --replSet rs0 --wiredTigerCacheSizeGB 0.25 &  

mongosh --eval '
  rs.initiate( { _id: "rs0", members: [
  {_id: 0, priority: 1, host: "localhost:27017"},
  ]});
'

I insert five documents and update them, to have two versions of the documents, the current one with { val: "newvalue" } and the previous one with { val: "oldvalue" }:

db.test.drop();  
for (let i = 0; i < 5; i++) {  
    db.test.insertOne({  
        _id: i,  
        val: "oldvalue",  
        filler: "X".repeat(1024)  
    });  
}   
for (let i = 0; i < 5; i++) {  
    db.test.updateOne(  
        { _id: i },  
        { $set: { val: "newvalue" } } // change to whatever new value you want  
    );  
}  

Until a checkpoint or cache eviction occurs, all changes remain in memory (the WiredTiger cache), protected by write-ahead logging (WAL). To get something in the files, I watch the mongod log and wait for a checkpoint:

{"t":{"$date":"2025-09-27T20:33:18.140+00:00"},"s":"I",  "c":"WTCHKPT",  "id":22430,   "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":{"ts_sec":1759005198,"ts_usec":140184,"thread":"12233:0x7f908e1f76c0","session_name":"WT_SESSION.checkpoint","category":"WT_VERB_CHECKPOINT_PROGRESS","category_id":7,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 196, snapshot max: 196 snapshot count: 0, oldest timestamp: (1759005138, 1) , meta checkpoint timestamp: (1759005188, 1) base write gen: 1"}}}
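
As an aside, here is a small Python sketch that filters a mongod JSON log (read from stdin) for these checkpoint messages; the field names are taken from the record above:

# Print the timestamp and message of WiredTiger checkpoint log records.
import json, sys

for line in sys.stdin:
    try:
        rec = json.loads(line)
    except ValueError:
        continue
    if rec.get("c") == "WTCHKPT":
        print(rec["t"]["$date"], rec["attr"]["message"]["msg"])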

The durable history storage file size has increased:

ls -alrt WiredTigerHS.wt
-rw-------. 1 root root 20480 Sep 27 20:33 WiredTigerHS.wt

I stopped mongod to be able to read the files with wt (which I compiled in a Docker container, as in the earlier post of this series):

pkill mongod

There are 18 records in the durable history file, and the ones from my collection are visible as I filled a field with a thousand 'X' characters (0x58), so they are easy to spot in a hex/BSON dump:

wt -h /data/db dump file:WiredTigerHS.wt 

WiredTiger Dump (WiredTiger Version 12.0.0)
Format=print
Header
file:WiredTigerHS.wt
access_pattern_hint=none,allocation_size=4KB,app_metadata=,assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,block_manager=default,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,disaggregated=(page_log=),encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(compare_timestamp=oldest_timestamp,enabled=false,file_metadata=,metadata_file=,panic_corrupt=true,repair=false),in_memory=false,ingest=,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=16KB,key_format=IuQQ,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=,bloom=,bloom_bit_count=,bloom_config=,bloom_hash_count=,bloom_oldest=,chunk_count_limit=,chunk_max=,chunk_size=,merge_max=,merge_min=),memory_page_image_max=0,memory_page_max=5MB,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,stable=,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=0),type=file,value_format=QQQu,verbose=[],write_timestamp_usage=none
Data
\83\81\8b\e8h\d8I\d1\ff\ff\df\c3\80
\e8h\d8I\d1\ff\ff\df\c4\e8h\d8I\d1\ff\ff\df\c3\83t\01\00\00\03md\00\ea\00\00\00\02ns\00\14\00\00\00config.transactions\00\03options\00 \00\00\00\05uuid\00\10\00\00\00\044\bb\80\11\e7\b3J\9b\a3^\ef\15\f0\d0\ee\ef\00\04indexes\00\97\00\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00-\00\00\00\02_id_\00\1e\00\00\00index-20-12692322139498239785\00\00\02ns\00\14\00\00\00config.transactions\00\02ident\00#\00\00\00collection-19-12692322139498239785\00\00
\83\81\90\e8h\d8I\d1\ff\ff\df\ca\80
\e8h\d8I\d1\ff\ff\df\cd\e8h\d8I\d1\ff\ff\df\ca\83\90\01\00\00\03md\00\f8\00\00\00\02ns\00"\00\00\00config.analyzeShardKeySplitPoints\00\03options\00 \00\00\00\05uuid\00\10\00\00\00\04\e2\92\b2"\03\d8A\c0\97\1e\df\f2\a7\9bp\02\00\04indexes\00\97\00\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00-\00\00\00\02_id_\00\1e\00\00\00index-30-12692322139498239785\00\00\02ns\00"\00\00\00config.analyzeShardKeySplitPoints\00\02ident\00#\00\00\00collection-28-12692322139498239785\00\00
\83\81\91\e8h\d8I\d1\ff\ff\df\ce\80
\e8h\d8I\d1\ff\ff\df\cf\e8h\d8I\d1\ff\ff\df\ce\83x\01\00\00\03md\00\ec\00\00\00\02ns\00\16\00\00\00config.sampledQueries\00\03options\00 \00\00\00\05uuid\00\10\00\00\00\04\e0\b8]\03\90\10Bp\80\d2\0d^\e5w\f1\c8\00\04indexes\00\97\00\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00-\00\00\00\02_id_\00\1e\00\00\00index-33-12692322139498239785\00\00\02ns\00\16\00\00\00config.sampledQueries\00\02ident\00#\00\00\00collection-32-12692322139498239785\00\00
\83\81\92\e8h\d8I\d1\ff\ff\df\d0\80
\e8h\d8I\d1\ff\ff\df\d1\e8h\d8I\d1\ff\ff\df\d0\83\80\01\00\00\03md\00\f0\00\00\00\02ns\00\1a\00\00\00config.sampledQueriesDiff\00\03options\00 \00\00\00\05uuid\00\10\00\00\00\04\95\f8\18\a6}\c3H\db\a2\8d\90\9f\a0R\d3\e4\00\04indexes\00\97\00\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00-\00\00\00\02_id_\00\1e\00\00\00index-36-12692322139498239785\00\00\02ns\00\1a\00\00\00config.sampledQueriesDiff\00\02ident\00#\00\00\00collection-35-12692322139498239785\00\00
\97\81\81\e8h\d8I\f8\ff\ff\df\c2\80
\e8h\d8I\f8\ff\ff\df\c3\e8h\d8I\f8\ff\ff\df\c2\83\a6\00\00\00\03_id\00H\00\00\00\05id\00\10\00\00\00\042\a8s\b1!<E\0a\88'\08\8d]\985\01\05uid\00 \00\00\00\00\e3\b0\c4B\98\fc\1c\14\9a\fb\f4\c8\99o\b9$'\aeA\e4d\9b\93L\a4\95\99\1bxR\b8U\00\12txnNum\00\01\00\00\00\00\00\00\00\03lastWriteOpTime\00\1c\00\00\00\11ts\00\02\00\00\00\f9I\d8h\12t\00\01\00\00\00\00\00\00\00\00\09lastWriteDate\00\17\f7\e0\8c\99\01\00\00\00
\97\81\81\e8h\d8I\f8\ff\ff\df\c3\80
\e8h\d8I\f8\ff\ff\df\c4\e8h\d8I\f8\ff\ff\df\c3\83\a6\00\00\00\03_id\00H\00\00\00\05id\00\10\00\00\00\042\a8s\b1!<E\0a\88'\08\8d]\985\01\05uid\00 \00\00\00\00\e3\b0\c4B\98\fc\1c\14\9a\fb\f4\c8\99o\b9$'\aeA\e4d\9b\93L\a4\95\99\1bxR\b8U\00\12txnNum\00\02\00\00\00\00\00\00\00\03lastWriteOpTime\00\1c\00\00\00\11ts\00\03\00\00\00\f9I\d8h\12t\00\01\00\00


September 26, 2025

Identifying and resolving performance issues caused by TOAST OID contention in Amazon Aurora PostgreSQL Compatible Edition and Amazon RDS for PostgreSQL

In this post, we explore the challenges of OID exhaustion in PostgreSQL, focusing on its impact on TOAST tables and how it leads to performance issues. We will cover how to identify the problem by reviewing wait events, session activity, and table usage. Additionally, we discuss practical solutions, from cleaning up data to more advanced strategies such as partitioning.

Postgres 18.0 vs sysbench on a small server

This post has benchmark results for Postgres 18.0 using sysbench on a small server. Previous results for 18 rc1 are here.

tl;dr

  • From 12.22 to 18.0
    • there are no regressions larger than 2% but many improvements larger than 5%. Postgres continues to do a great job at avoiding regressions over time.
  • From 17.6 to 18.0
    • I continue to see small CPU regressions (1% or 2%) in Postgres 18 for short range queries on low-concurrency workloads. I see it for shorter but not for longer range queries so my guess is that this is new overhead in query execution setup or optimization. I hope to explain this.

Builds, configuration and hardware

I compiled Postgres from source for versions 12.22, 13.22, 14.19, 15.14, 16.10, 17.6, and 18.0.

The HW is an ASUS ExpertCenter PN53 with AMD Ryzen 7735HS CPU, 32G of RAM, 8 cores with AMD SMT disabled, Ubuntu 24.04 and an NVMe device with ext4 and discard enabled.

Prior to 18.0, the configuration file was named conf.diff.cx10a_c8r32 and is here for 12.22, 13.22, 14.19, 15.14, 16.10 and 17.6.

For 18.0 I tried 3 configuration files: x10b (io_method=sync), x10c (io_method=worker) and x10d (io_method=io_uring).

Benchmark

I used sysbench and my usage is explained here. To save time I only run 32 of the 42 microbenchmarks, and most of them test only 1 type of SQL statement. Benchmarks are run with the database cached by Postgres.

The read-heavy microbenchmarks run for 600 seconds and the write-heavy for 900 seconds.

The benchmark is run with 1 client, 1 table and 50M rows. The purpose is to search for CPU regressions.

Results

The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

I provide charts below with relative QPS. The relative QPS is the following:
(QPS for some version) / (QPS for base version)
When the relative QPS is > 1 then some version is faster than the base version. When it is < 1 then there might be a regression. Values from iostat and vmstat divided by QPS are also provided here. These can help to explain why something is faster or slower because they show how much HW is used per request.

I present results for:
  • versions 12 through 18 using 12.22 as the base version
  • versions 17.6 and 18.0 using 17.6 as the base version

Results: Postgres 17.6 and 18.0

For the read-only_range=X benchmarks there might be small regressions (1% or 2%) when X is 10 or 100 but not 10000. The value of X is the length of the range scan. I have seen similar regressions in the beta and RC releases. Given that this occurs when the range scan is shorter, the problem might be new overhead in query execution setup or optimization. But I have yet to explain this.

Relative to: 17.6 with x10a
col-1 : 18.0 with x10b and io_method=sync
col-2 : 18.0 with x10c and io_method=worker
col-3 : 18.0 with x10d and io_method=io_uring

col-1   col-2   col-3  point queries
1.01    1.01    0.97    hot-points_range=100
1.01    1.00    0.99    point-query_range=100
1.01    1.01    1.00    points-covered-pk_range=100
1.01    1.02    1.01    points-covered-si_range=100
1.01    1.01    1.00    points-notcovered-pk_range=100
1.01    0.99    1.00    points-notcovered-si_range=100
1.02    1.02    1.03    random-points_range=1000
1.01    1.00    0.99    random-points_range=100
1.00    1.00    0.99    random-points_range=10

col-1   col-2   col-3  range queries without aggregation
0.99    0.99    0.98    range-covered-pk_range=100
1.00    0.99    1.00    range-covered-si_range=100
1.00    0.99    0.98    range-notcovered-pk_range=100
0.99    0.99    0.99    range-notcovered-si_range=100
1.04    1.04    1.04    scan_range=100

col-1   col-2   col-3  range queries with aggregation
1.01    1.00    1.01    read-only-count_range=1000
1.01    1.00    1.00    read-only-distinct_range=1000
0.99    1.00    0.98    read-only-order_range=1000
1.01    1.00    1.00    read-only_range=10000
0.99    0.99    0.98    read-only_range=100
0.98    0.99    0.98    read-only_range=10
1.01    1.00    0.99    read-only-simple_range=1000
1.00    1.00    0.99    read-only-sum_range=1000

col-1   col-2   col-3  writes
1.00    1.00    0.99    delete_range=100
0.99    0.99    0.98    insert_range=100
0.99    0.99    0.98    read-write_range=100
0.98    0.99    0.98    read-write_range=10
0.99    1.00    0.99    update-index_range=100
0.99    1.00    1.00    update-inlist_range=100
0.99    1.00    0.98    update-nonindex_range=100
0.99    0.99    0.98    update-one_range=100
0.99    1.00    0.99    update-zipf_range=100
1.00    1.00    0.99    write-only_range=10000

Results: Postgres 12 to 18

From 12.22 to 18.0 there are no regressions larger than 2% but many improvements larger than 5% (highlighted in green). Postgres continues to do a great job at avoiding regressions over time.

Relative to: 12.22
col-1 : 13.22
col-2 : 14.19
col-3 : 15.14
col-4 : 16.10
col-5 : 17.6
col-6 : 18.0 with the x10b config

col-1   col-2   col-3   col-4   col-5   col-6   point queries
1.06    1.05    1.05    1.09    2.04    2.05    hot-points_range=100
1.01    1.03    1.03    1.02    1.04    1.04    point-query_range=100
1.00    0.99    0.99    1.03    0.99    1.01    points-covered-pk_range=100
1.04    1.03    1.02    1.05    1.01    1.03    points-covered-si_range=100
1.01    1.00    1.01    1.04    1.01    1.02    points-notcovered-pk_range=100
1.01    1.02    1.03    1.05    1.02    1.04    points-notcovered-si_range=100
1.02    1.00    1.02    1.05    1.00    1.02    random-points_range=1000
1.01    1.01    1.01    1.03    1.01    1.02    random-points_range=100
1.01    1.01    1.01    1.02    1.01    1.01    random-points_range=10

col-1   col-2   col-3   col-4   col-5   col-6   range queries without aggregation
0.99    1.00    1.00    1.00    0.99    0.98    range-covered-pk_range=100
1.01    1.01    1.00    1.00    0.99    0.99    range-covered-si_range=100
1.00    1.00    1.01    1.01    1.00    1.00    range-notcovered-pk_range=100
1.00    1.00    1.00    1.01    1.02    1.01    range-notcovered-si_range=100
1.00    1.30    1.19    1.18    1.16    1.20    scan_range=100

col-1   col-2   col-3   col-4   col-5   col-6   range queries with aggregation
1.04    1.02    1.00    1.05    1.02    1.03    read-only-count_range=1000
1.00    1.00    1.03    1.04    1.03    1.04    read-only-distinct_range=1000
1.00    1.00    1.04    1.04    1.06    1.06    read-only-order_range=1000
1.01    1.01    1.04    1.07    1.06    1.07    read-only_range=10000
1.00    1.00    1.01    1.01    1.02    1.01    read-only_range=100
1.00    1.00    1.00    0.99    1.01    0.99    read-only_range=10
1.01    1.01    1.02    1.02    1.03    1.03    read-only-simple_range=1000
1.01    1.00    1.00    1.03    1.02    1.02    read-only-sum_range=1000

col-1   col-2   col-3   col-4   col-5   col-6   writes
1.01    1.02    1.01    1.03    1.13    1.12    delete_range=100
0.99    0.98    0.97    0.98    1.06    1.05    insert_range=100
0.99    1.00    1.00    1.01    1.02    1.02    read-write_range=100
0.99    1.01    1.01    1.01    1.03    1.01    read-write_range=10
1.00    1.00    1.01    1.00    1.09    1.08    update-index_range=100
1.00    1.10    1.09    1.09    1.10    1.09    update-inlist_range=100
1.03    1.05    1.06    1.05    1.15    1.14    update-nonindex_range=100
0.99    0.98    0.99    0.98    1.07    1.06    update-one_range=100
1.01    1.04    1.06    1.05    1.18    1.17    update-zipf_range=100
0.98    1.01    1.01    0.99    1.07    1.07    write-only_range=10000


MySQL 8.0 End of Life Support: What Are Your Options?

We’ve mentioned this a few times here on the blog already, but in case you missed it, MySQL 8.0’s end-of-life date is April 2026. This probably sounds forever away, but it’s going to sneak up before you know it. Maybe you’ve been putting off thinking about it, or maybe you’re already weighing your options but […]