October 03, 2025
How to get the current timestamp with sub-second precision in ClickHouse
How to truncate dates in ClickHouse with date_trunc
How to strip query parameters from URLs in ClickHouse
How to URL-encode query parameters in ClickHouse
October 02, 2025
How to build complete URL hierarchies with path truncation in ClickHouse
How to update specific date parts (year, month, day, hour) in ClickHouse
How to extract query parameter values from URLs in ClickHouse
How to extract ISO week numbers from dates in ClickHouse
How to extract URL fragments without hash symbols (#) in ClickHouse
How to extract the protocol of a URL in ClickHouse
How to round dates in ClickHouse
Measuring scaleup for MariaDB with sysbench
This post has results to measure scaleup for MariaDB 11.8.3 on a 48-core server.
tl;dr
- Scaleup is better for range queries than for point queries
- For tests where results were less than great, the problem appears to be mutex contention within InnoDB
Builds, Configuration & Hardware
The server has an AMD EPYC 9454P 48-Core Processor with AMD SMT disabled, 128G of RAM and SW RAID 0 with 2 NVMe devices. The OS is Ubuntu 22.04.
I compiled MariaDB 11.8.3 from source and the my.cnf file is here.
Benchmark
The relative QPS is:
(QPS at X clients) / (QPS at 1 client)
The goal is to determine scaleup efficiency for MariaDB. When the relative QPS at X clients is close to X, then things are great. But sometimes things aren't great and the relative QPS is much less than X. One issue is data contention for some of the write-heavy microbenchmarks. Another issue is mutex and rw-lock contention.
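As a concrete illustration, relative QPS and scaleup efficiency can be computed as in the sketch below; the client counts and QPS values are made up, not results from this benchmark:
# Hypothetical example of the scaleup calculation; the QPS numbers are invented.
qps_by_clients = {1: 1000, 8: 7600, 24: 21000, 48: 38000}
base = qps_by_clients[1]
for clients in sorted(qps_by_clients):
    relative_qps = qps_by_clients[clients] / base   # (QPS at X clients) / (QPS at 1 client)
    efficiency = relative_qps / clients             # 1.0 would be perfect linear scaleup
    print(f"{clients:>2} clients: relative QPS = {relative_qps:6.1f}, efficiency = {efficiency:.2f}")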
Perf debugging via vmstat and iostat
I use normalized results from vmstat and iostat to help explain why things aren't as fast as expected. By normalized I mean I divide the average values from vmstat and iostat by QPS to see things like how much CPU is used per query or how many context switches occur per write. And note that a high context switch rate is often a sign of mutex contention.
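A minimal sketch of that normalization follows; the counter names and values are hypothetical, not from this benchmark:
# Hypothetical per-second averages from vmstat/iostat over a microbenchmark interval.
avg_per_second = {
    "cpu_util": 55.0,              # vmstat us + sy
    "context_switches": 250000.0,  # vmstat cs
    "mb_written": 120.0,           # iostat wMB/s
}
qps = 50000.0  # average queries (or writes) per second for the microbenchmark
# Divide each per-second rate by QPS to get the cost per operation; a high
# context-switches-per-operation value is often a sign of mutex contention.
normalized = {name: value / qps for name, value in avg_per_second.items()}
print(normalized)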
Charts: point queries
The spreadsheet with all of the results is here.
For point queries:
- tests for which the relative QPS at 48 clients is greater than 40
- point-query
- tests for which the relative QPS at 48 clients is between 30 and 40
- none
- tests for which the relative QPS at 48 clients is between 20 and 30
- hot-points, points-covered-si, random-points_range=10
- tests for which the relative QPS at 48 clients is between 10 and 20
- points-covered-pk, points-notcovered-pk, points-notcovered-si, random-points_range=100
- tests for which the relative QPS at 48 clients is less than 10
- random-points_range=1000
Charts: range queries without aggregation
The spreadsheet with all of the results is here.
For range queries without aggregation:
- tests for which the relative QPS at 48 clients is greater than 40
- range-covered-pk, range-covered-si, range-notcovered-pk
- tests for which the relative QPS at 48 clients is between 30 and 40
- scan
- tests for which the relative QPS at 48 clients is between 20 and 30
- none
- tests for which the relative QPS at 48 clients is between 10 and 20
- none
- tests for which the relative QPS at 48 clients is less than 10
- range-notcovered-si
Charts: range queries with aggregation
The spreadsheet with all of the results is here.
For range queries with aggregation:
- tests for which the relative QPS at 48 clients is greater than 40
- read-only-distinct, read-only-order, read-only-range=Y, read-only-sum
- tests for which the relative QPS at 48 clients is between 30 and 40
- read-only-count, read-only-simple
- tests for which the relative QPS at 48 clients is between 20 and 30
- none
- tests for which the relative QPS at 48 clients is between 10 and 20
- none
- tests for which the relative QPS at 48 clients is less than 10
- none
Charts: writes
The spreadsheet with all of the results is here.
For writes:
- tests for which the relative QPS at 48 clients is greater than 40
- none
- tests for which the relative QPS at 48 clients is between 30 and 40
- read-write_range=Y
- tests for which the relative QPS at 48 clients is between 20 and 30
- update-index, write-only
- tests for which the relative QPS at 48 clients is between 10 and 20
- delete, insert, update-inlist, update-nonindex, update-zipf
- tests for which the relative QPS at 48 clients is less than 10
- update-one
October 01, 2025
The Redis License Has Changed: What You Need to Know
Larger than RAM Vector Indexes for Relational Databases
September 30, 2025
How to extract ISO year in ClickHouse: toISOYear vs toYear
How to extract the first significant subdomain from URLs in ClickHouse
How to get the current query timestamp with timezone in ClickHouse
How to convert dates to compact numeric formats in ClickHouse
How to convert DateTimes to a different timezone in ClickHouse
How to extract URL paths without query strings in ClickHouse
How to handle DST-aware UTC offset calculations in ClickHouse
How to extract port numbers from URLs in ClickHouse
How to get the hostname from a URL in ClickHouse
How to decode URL-encoded strings in ClickHouse
How to convert dates to Unix timestamps in ClickHouse®
How to parse numeric date formats in ClickHouse
How to retrieve the server timezone in ClickHouse®
How to extract top-level domains from URLs in ClickHouse®
How to extract timezone information from Datetime values in ClickHouse®
Tackling the Cache Invalidation and Cache Stampede Problem in Valkey with Debezium Platform
September 29, 2025
Postgres 18.0 vs sysbench on a 24-core, 2-socket server
This post has results from sysbench run at higher concurrency for Postgres versions 12 through 18 on a server with 24 cores and 2 sockets. My previous post had results for sysbench run with low concurrency. The goal is to search for regressions from new CPU overhead and mutex contention.
tl;dr, from Postgres 17.6 to 18.0
- For most microbenchmarks Postgres 18.0 is between 1% and 3% slower than 17.6
- The root cause might be new CPU overhead. It will take more time to gain confidence in results like this. On other servers with sysbench run at low concurrency I only see regressions for some of the range-query microbenchmarks. Here I see them for point-query and writes.
tl;dr, from Postgres 12.22 through 18.0
- For point queries Postgres 18.0 is usually about 5% faster than 12.22
- For range queries Postgres 18.0 is usually as fast as 12.22
- For writes Postgres 18.0 is much faster than 12.22
For 18.0 I tried 3 configuration files:
- conf.diff.cx10b_c8r32 (x10b) - uses io_method=sync
- conf.diff.cx10c_c8r32 (x10c) - uses io_method=worker
- conf.diff.cx10d_c8r32 (x10d) - uses io_method=io_uring
Benchmark
The read-heavy microbenchmarks run for 600 seconds and the write-heavy for 900 seconds.
The benchmark is run with 16 clients and 8 tables with 10M rows per table. The purpose is to search for regressions from new CPU overhead and mutex contention.
I provide charts below with relative QPS. The relative QPS is the following:
(QPS for some version) / (QPS for base version)
I present results for:
- versions 12 through 18 using 12.22 as the base version
- versions 17.6 and 18.0 using 17.6 as the base version
New File Copy-Based Initial Sync Overwhelms the Logical Initial Sync in Percona Server for MongoDB
The ACID Test: Why We Think Search Needs Transactions
September 28, 2025
WiredTigerHS.wt: MongoDB MVCC Durable History Store
MongoDB uses the WiredTiger storage engine, which implements Multi‑Version Concurrency Control (MVCC) to provide lock‑free read consistency, similar to many RDBMS. Unlike many RDBMS, it follows a No‑Force/No‑Steal policy: uncommitted changes stay only in memory. They are never written to disk, and committed changes are written later — at checkpoint or when cache eviction needs space — into the WiredTiger table files we have explored in the previous post, persisting only the latest committed version.
MongoDB also maintains recent committed MVCC versions for a specified period in a separate, durable history store (WiredTigerHS.wt). This enables the system to reconstruct snapshots from earlier points in time. In the previous article in this series, I described all WiredTiger files except WiredTigerHS.wt, because it was empty:
ls -l /data/db/WiredTigerHS.wt
-rw-------. 1 root root 4096 Sep 27 11:01 /data/db/WiredTigerHS.wt
This 4KB file holds no records:
wt -h /data/db dump -j file:WiredTigerHS.wt
{
"WiredTiger Dump Version" : "1 (12.0.0)",
"file:WiredTigerHS.wt" : [
{
"config" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=,assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,block_manager=default,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,disaggregated=(page_log=),encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(compare_timestamp=oldest_timestamp,enabled=false,file_metadata=,metadata_file=,panic_corrupt=true,repair=false),in_memory=false,ingest=,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=16KB,key_format=IuQQ,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=,bloom=,bloom_bit_count=,bloom_config=,bloom_hash_count=,bloom_oldest=,chunk_count_limit=,chunk_max=,chunk_size=,merge_max=,merge_min=),memory_page_image_max=0,memory_page_max=5MB,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,stable=,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=0),type=file,value_format=QQQu,verbose=[],write_timestamp_usage=none",
"colgroups" : [],
"indices" : []
},
{
"data" : [ ]
}
]
}
The file contains only a header block with the configuration metadata. It defines the key and value format:
key_format=IuQQ
value_format=QQQu
Those are WiredTiger types: I is a 4-byte integer, Q is an 8-byte integer, and u is a variable-length item (an array of bytes).
The history store key (IuQQ) includes the table identifier (the collection), the key within that table (the recordID), the MVCC start timestamp (when this version became current), and a counter (to disambiguate multiple updates at the same timestamp). Its value (QQQu) contains the MVCC stop timestamp (when the version became obsolete), the durable timestamp (when the record reached a persistence point, such as a checkpoint), an update type, and a byte array holding the BSON representation of the document version. The start and stop timestamps track the visibility of this document version, and the durable timestamp shows when a version is safe to remove, supporting features such as rollback-to-stable, replication catch-up, and crash recovery.
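To make the record layout easier to follow, here is a rough sketch of a history-store entry as a Python structure. The field names are mine, and WiredTiger stores these fields in its own packed format, so this is only a conceptual model, not a decoder:
from dataclasses import dataclass

@dataclass
class HistoryStoreKey:      # key_format=IuQQ
    btree_id: int           # I: 4-byte id of the table (collection) file
    record_key: bytes       # u: key within that table (the recordID)
    start_ts: int           # Q: MVCC start timestamp of this version
    counter: int            # Q: disambiguates multiple updates at the same timestamp

@dataclass
class HistoryStoreValue:    # value_format=QQQu
    stop_ts: int            # Q: MVCC stop timestamp (version became obsolete)
    durable_ts: int         # Q: when this version reached a persistence point
    update_type: int        # Q: type of update
    bson_document: bytes    # u: BSON image of the document version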
To get some records into it, I start MongoDB as a one-member replica set:
mongod --dbpath /data/db --replSet rs0 --wiredTigerCacheSizeGB 0.25 &
mongosh --eval '
rs.initiate( { _id: "rs0", members: [
{_id: 0, priority: 1, host: "localhost:27017"},
]});
'
I insert five documents and then update them, so that each document has two versions: the current one with { val: "newvalue" } and the previous one with { val: "oldvalue" }:
db.test.drop();
for (let i = 0; i < 5; i++) {
  db.test.insertOne({
    _id: i,
    val: "oldvalue",
    filler: "X".repeat(1024)
  });
}
for (let i = 0; i < 5; i++) {
  db.test.updateOne(
    { _id: i },
    { $set: { val: "newvalue" } } // creates a second version of each document
  );
}
Until a checkpoint or cache eviction occurs, all changes remain in memory (the WiredTiger cache), protected by write-ahead logging (WAL). To get something in the files, I watch the mongod log and wait for a checkpoint:
{"t":{"$date":"2025-09-27T20:33:18.140+00:00"},"s":"I", "c":"WTCHKPT", "id":22430, "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":{"ts_sec":1759005198,"ts_usec":140184,"thread":"12233:0x7f908e1f76c0","session_name":"WT_SESSION.checkpoint","category":"WT_VERB_CHECKPOINT_PROGRESS","category_id":7,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 196, snapshot max: 196 snapshot count: 0, oldest timestamp: (1759005138, 1) , meta checkpoint timestamp: (1759005188, 1) base write gen: 1"}}}
The durable history store file size has increased:
ls -alrt WiredTigerHS.wt
-rw-------. 1 root root 20480 Sep 27 20:33 WiredTigerHS.wt
I stopped mongod so that I could read the files with wt (which I compiled in a Docker container, as in the earlier post of this series):
pkill mongod
There are 18 records in the durable history file. The ones from my collection are easy to spot in a hex/BSON dump because I filled a field with 1,024 'X' characters (0x58):
wt -h /data/db dump file:WiredTigerHS.wt
WiredTiger Dump (WiredTiger Version 12.0.0)
Format=print
Header
file:WiredTigerHS.wt
access_pattern_hint=none,allocation_size=4KB,app_metadata=,assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,block_manager=default,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,disaggregated=(page_log=),encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(compare_timestamp=oldest_timestamp,enabled=false,file_metadata=,metadata_file=,panic_corrupt=true,repair=false),in_memory=false,ingest=,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=16KB,key_format=IuQQ,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=,bloom=,bloom_bit_count=,bloom_config=,bloom_hash_count=,bloom_oldest=,chunk_count_limit=,chunk_max=,chunk_size=,merge_max=,merge_min=),memory_page_image_max=0,memory_page_max=5MB,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,stable=,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=0),type=file,value_format=QQQu,verbose=[],write_timestamp_usage=none
Data
\83\81\8b\e8h\d8I\d1\ff\ff\df\c3\80
\e8h\d8I\d1\ff\ff\df\c4\e8h\d8I\d1\ff\ff\df\c3\83t\01\00\00\03md\00\ea\00\00\00\02ns\00\14\00\00\00config.transactions\00\03options\00 \00\00\00\05uuid\00\10\00\00\00\044\bb\80\11\e7\b3J\9b\a3^\ef\15\f0\d0\ee\ef\00\04indexes\00\97\00\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00-\00\00\00\02_id_\00\1e\00\00\00index-20-12692322139498239785\00\00\02ns\00\14\00\00\00config.transactions\00\02ident\00#\00\00\00collection-19-12692322139498239785\00\00
\83\81\90\e8h\d8I\d1\ff\ff\df\ca\80
\e8h\d8I\d1\ff\ff\df\cd\e8h\d8I\d1\ff\ff\df\ca\83\90\01\00\00\03md\00\f8\00\00\00\02ns\00"\00\00\00config.analyzeShardKeySplitPoints\00\03options\00 \00\00\00\05uuid\00\10\00\00\00\04\e2\92\b2"\03\d8A\c0\97\1e\df\f2\a7\9bp\02\00\04indexes\00\97\00\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00-\00\00\00\02_id_\00\1e\00\00\00index-30-12692322139498239785\00\00\02ns\00"\00\00\00config.analyzeShardKeySplitPoints\00\02ident\00#\00\00\00collection-28-12692322139498239785\00\00
\83\81\91\e8h\d8I\d1\ff\ff\df\ce\80
\e8h\d8I\d1\ff\ff\df\cf\e8h\d8I\d1\ff\ff\df\ce\83x\01\00\00\03md\00\ec\00\00\00\02ns\00\16\00\00\00config.sampledQueries\00\03options\00 \00\00\00\05uuid\00\10\00\00\00\04\e0\b8]\03\90\10Bp\80\d2\0d^\e5w\f1\c8\00\04indexes\00\97\00\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00-\00\00\00\02_id_\00\1e\00\00\00index-33-12692322139498239785\00\00\02ns\00\16\00\00\00config.sampledQueries\00\02ident\00#\00\00\00collection-32-12692322139498239785\00\00
\83\81\92\e8h\d8I\d1\ff\ff\df\d0\80
\e8h\d8I\d1\ff\ff\df\d1\e8h\d8I\d1\ff\ff\df\d0\83\80\01\00\00\03md\00\f0\00\00\00\02ns\00\1a\00\00\00config.sampledQueriesDiff\00\03options\00 \00\00\00\05uuid\00\10\00\00\00\04\95\f8\18\a6}\c3H\db\a2\8d\90\9f\a0R\d3\e4\00\04indexes\00\97\00\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00-\00\00\00\02_id_\00\1e\00\00\00index-36-12692322139498239785\00\00\02ns\00\1a\00\00\00config.sampledQueriesDiff\00\02ident\00#\00\00\00collection-35-12692322139498239785\00\00
\97\81\81\e8h\d8I\f8\ff\ff\df\c2\80
\e8h\d8I\f8\ff\ff\df\c3\e8h\d8I\f8\ff\ff\df\c2\83\a6\00\00\00\03_id\00H\00\00\00\05id\00\10\00\00\00\042\a8s\b1!<E\0a\88'\08\8d]\985\01\05uid\00 \00\00\00\00\e3\b0\c4B\98\fc\1c\14\9a\fb\f4\c8\99o\b9$'\aeA\e4d\9b\93L\a4\95\99\1bxR\b8U\00\12txnNum\00\01\00\00\00\00\00\00\00\03lastWriteOpTime\00\1c\00\00\00\11ts\00\02\00\00\00\f9I\d8h\12t\00\01\00\00\00\00\00\00\00\00\09lastWriteDate\00\17\f7\e0\8c\99\01\00\00\00
\97\81\81\e8h\d8I\f8\ff\ff\df\c3\80
\e8h\d8I\f8\ff\ff\df\c4\e8h\d8I\f8\ff\ff\df\c3\83\a6\00\00\00\03_id\00H\00\00\00\05id\00\10\00\00\00\042\a8s\b1!<E\0a\88'\08\8d]\985\01\05uid\00 \00\00\00\00\e3\b0\c4B\98\fc\1c\14\9a\fb\f4\c8\99o\b9$'\aeA\e4d\9b\93L\a4\95\99\1bxR\b8U\00\12txnNum\00\02\00\00\00\00\00\00\00\03lastWriteOpTime\00\1c\00\00\00\11ts\00\03\00\00\00\f9I\d8h\12t\00\01\00\00
September 26, 2025
Identifying and resolving performance issues caused by TOAST OID contention in Amazon Aurora PostgreSQL Compatible Edition and Amazon RDS for PostgreSQL
In this post, we explore the challenges of OID exhaustion in PostgreSQL, focusing on its impact on TOAST tables and how it leads to performance issues. We cover how to identify the problem by reviewing wait events, session activity, and table usage. Additionally, we discuss practical solutions, from cleaning up data to more advanced strategies such as partitioning.
Postgres 18.0 vs sysbench on a small server
This post has benchmark results for Postgres 18.0 using sysbench on a small server. Previous results for 18 rc1 are here.
tl;dr
- From 12.22 to 18.0
- there are no regressions larger than 2% but many improvements larger than 5%. Postgres continues to do a great job at avoiding regressions over time.
- From 17.6 to 18.0
- I continue to see small CPU regressions (1% or 2%) in Postgres 18 for short range queries on low-concurrency workloads. I see them for shorter but not for longer range queries, so my guess is that this is new overhead in query execution setup or optimization. I hope to explain this.
Builds, configuration and hardware
I compiled Postgres from source for versions 12.22, 13.22, 14.19, 15.14, 16.10, 17.6, and 18.0.
The HW is an ASUS ExpertCenter PN53 with AMD Ryzen 7735HS CPU, 32G of RAM, 8 cores with AMD SMT disabled, Ubuntu 24.04 and an NVMe device with ext4 and discard enabled.
Prior to 18.0, the configuration file was named conf.diff.cx10a_c8r32 and is here for 12.22, 13.22, 14.19, 15.14, 16.10 and 17.6.
For 18.0 I tried 3 configuration files:
- conf.diff.cx10b_c8r32 (x10b) - uses io_method=sync
- conf.diff.cx10c_c8r32 (x10c) - uses io_method=worker
- conf.diff.cx10d_c8r32 (x10d) - uses io_method=io_uring
Benchmark
I used sysbench and my usage is explained here. To save time I only run 32 of the 42 microbenchmarks and most test only 1 type of SQL statement. Benchmarks are run with the database cached by Postgres.
The read-heavy microbenchmarks run for 600 seconds and the write-heavy for 900 seconds.
The benchmark is run with 1 client, 1 table and 50M rows. The purpose is to search for CPU regressions.
Results
The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation.
I provide charts below with relative QPS. The relative QPS is the following:
(QPS for some version) / (QPS for base version)
When the relative QPS is > 1 then some version is faster than base version. When it is < 1 then there might be a regression. Values from iostat and vmstat divided by QPS are also provided here. These can help to explain why something is faster or slower because it shows how much HW is used per request.
I present results for:
- versions 12 through 18 using 12.22 as the base version
- versions 17.6 and 18.0 using 17.6 as the base version
Results: Postgres 17.6 and 18.0
For the read-only_range=X benchmarks there might be small regressions (1% or 2%) when X is 10 or 100 but not 10000. The value of X is the length of the range scan. I have seen similar regressions in the beta and RC releases. Given that this occurs when the range scan is shorter, the problem might be new overhead in query execution setup or optimization. But I have yet to explain this.
Relative to: 17.6 with x10a
col-1 : 18.0 with x10b and io_method=sync
col-2 : 18.0 with x10c and io_method=worker
col-3 : 18.0 with x10d and io_method=io_uring
col-1 col-2 col-3 point queries
1.01  1.01  0.97  hot-points_range=100
1.01  1.00  0.99  point-query_range=100
1.01  1.01  1.00  points-covered-pk_range=100
1.01  1.02  1.01  points-covered-si_range=100
1.01  1.01  1.00  points-notcovered-pk_range=100
1.01  0.99  1.00  points-notcovered-si_range=100
1.02  1.02  1.03  random-points_range=1000
1.01  1.00  0.99  random-points_range=100
1.00  1.00  0.99  random-points_range=10
col-1 col-2 col-3 range queries without aggregation
0.99  0.99  0.98  range-covered-pk_range=100
1.00  0.99  1.00  range-covered-si_range=100
1.00  0.99  0.98  range-notcovered-pk_range=100
0.99  0.99  0.99  range-notcovered-si_range=100
1.04  1.04  1.04  scan_range=100
col-1 col-2 col-3 range queries with aggregation
1.01  1.00  1.01  read-only-count_range=1000
1.01  1.00  1.00  read-only-distinct_range=1000
0.99  1.00  0.98  read-only-order_range=1000
1.01  1.00  1.00  read-only_range=10000
0.99  0.99  0.98  read-only_range=100
0.98  0.99  0.98  read-only_range=10
1.01  1.00  0.99  read-only-simple_range=1000
1.00  1.00  0.99  read-only-sum_range=1000
col-1 col-2 col-3 writes
1.00  1.00  0.99  delete_range=100
0.99  0.99  0.98  insert_range=100
0.99  0.99  0.98  read-write_range=100
0.98  0.99  0.98  read-write_range=10
0.99  1.00  0.99  update-index_range=100
0.99  1.00  1.00  update-inlist_range=100
0.99  1.00  0.98  update-nonindex_range=100
0.99  0.99  0.98  update-one_range=100
0.99  1.00  0.99  update-zipf_range=100
1.00  1.00  0.99  write-only_range=10000
Results: Postgres 12 to 18
From 12.22 to 18.0 there are no regressions larger than 2% but many improvements larger than 5% (highlighted in green). Postgres continues to do a great job at avoiding regressions over time.
Relative to: 12.22
col-1 : 13.22
col-2 : 14.19
col-3 : 15.14
col-4 : 16.10
col-5 : 17.6
col-6 : 18.0 with the x10b config
col-1 col-2 col-3 col-4 col-5 col-6 point queries
1.06  1.05  1.05  1.09  2.04  2.05  hot-points_range=100
1.01  1.03  1.03  1.02  1.04  1.04  point-query_range=100
1.00  0.99  0.99  1.03  0.99  1.01  points-covered-pk_range=100
1.04  1.03  1.02  1.05  1.01  1.03  points-covered-si_range=100
1.01  1.00  1.01  1.04  1.01  1.02  points-notcovered-pk_range=100
1.01  1.02  1.03  1.05  1.02  1.04  points-notcovered-si_range=100
1.02  1.00  1.02  1.05  1.00  1.02  random-points_range=1000
1.01  1.01  1.01  1.03  1.01  1.02  random-points_range=100
1.01  1.01  1.01  1.02  1.01  1.01  random-points_range=10
col-1 col-2 col-3 col-4 col-5 col-6 range queries without aggregation
0.99  1.00  1.00  1.00  0.99  0.98  range-covered-pk_range=100
1.01  1.01  1.00  1.00  0.99  0.99  range-covered-si_range=100
1.00  1.00  1.01  1.01  1.00  1.00  range-notcovered-pk_range=100
1.00  1.00  1.00  1.01  1.02  1.01  range-notcovered-si_range=100
1.00  1.30  1.19  1.18  1.16  1.20  scan_range=100
col-1 col-2 col-3 col-4 col-5 col-6 range queries with aggregation
1.04  1.02  1.00  1.05  1.02  1.03  read-only-count_range=1000
1.00  1.00  1.03  1.04  1.03  1.04  read-only-distinct_range=1000
1.00  1.00  1.04  1.04  1.06  1.06  read-only-order_range=1000
1.01  1.01  1.04  1.07  1.06  1.07  read-only_range=10000
1.00  1.00  1.01  1.01  1.02  1.01  read-only_range=100
1.00  1.00  1.00  0.99  1.01  0.99  read-only_range=10
1.01  1.01  1.02  1.02  1.03  1.03  read-only-simple_range=1000
1.01  1.00  1.00  1.03  1.02  1.02  read-only-sum_range=1000
col-1 col-2 col-3 col-4 col-5 col-6 writes
1.01  1.02  1.01  1.03  1.13  1.12  delete_range=100
0.99  0.98  0.97  0.98  1.06  1.05  insert_range=100
0.99  1.00  1.00  1.01  1.02  1.02  read-write_range=100
0.99  1.01  1.01  1.01  1.03  1.01  read-write_range=10
1.00  1.00  1.01  1.00  1.09  1.08  update-index_range=100
1.00  1.10  1.09  1.09  1.10  1.09  update-inlist_range=100
1.03  1.05  1.06  1.05  1.15  1.14  update-nonindex_range=100
0.99  0.98  0.99  0.98  1.07  1.06  update-one_range=100
1.01  1.04  1.06  1.05  1.18  1.17  update-zipf_range=100
0.98  1.01  1.01  0.99  1.07  1.07  write-only_range=10000
MySQL 8.0 End of Life Support: What Are Your Options?
We’ve mentioned this a few times here on the blog already, but in case you missed it, MySQL 8.0’s end-of-life date is April 2026. This probably sounds forever away, but it’s going to sneak up before you know it. Maybe you’ve been putting off thinking about it, or maybe you’re already weighing your options but […]
Elasticsearch, Postgres, and the ACID Test
A developer’s look at how Elasticsearch and Postgres stack up against the ACID test