WiredTigerHS.wt: MongoDB MVCC Durable History Store
MongoDB uses the WiredTiger storage engine, which implements Multi‑Version Concurrency Control (MVCC) to provide lock‑free read consistency, similar to many RDBMS. Unlike many RDBMS, it follows a No‑Force/No‑Steal policy: uncommitted changes stay only in memory and are never written to disk (No‑Steal), while committed changes are not forced to disk at commit time (No‑Force). They are written later, at checkpoint or when cache eviction needs space, into the WiredTiger table files we explored in the previous post, persisting only the latest committed version.
MongoDB also maintains recent committed MVCC versions for a specified period in a separate, durable history store (WiredTigerHS.wt). This enables the system to reconstruct snapshots from earlier points in time. In the previous article in this series, I described all WiredTiger files except WiredTigerHS.wt, because it was empty:
ls -l /data/db/WiredTigerHS.wt
-rw-------. 1 root root 4096 Sep 27 11:01 /data/db/WiredTigerHS.wt
This 4KB file holds no records:
wt -h /data/db dump -j file:WiredTigerHS.wt
{
"WiredTiger Dump Version" : "1 (12.0.0)",
"file:WiredTigerHS.wt" : [
{
"config" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=,assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,block_manager=default,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,disaggregated=(page_log=),encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(compare_timestamp=oldest_timestamp,enabled=false,file_metadata=,metadata_file=,panic_corrupt=true,repair=false),in_memory=false,ingest=,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=16KB,key_format=IuQQ,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=,bloom=,bloom_bit_count=,bloom_config=,bloom_hash_count=,bloom_oldest=,chunk_count_limit=,chunk_max=,chunk_size=,merge_max=,merge_min=),memory_page_image_max=0,memory_page_max=5MB,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,stable=,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=0),type=file,value_format=QQQu,verbose=[],write_timestamp_usage=none",
"colgroups" : [],
"indices" : []
},
{
"data" : [ ]
}
]
}
The file contains only a header block with the configuration metadata. It defines the key and value format:
key_format=IuQQ
value_format=QQQu
Those are WiredTiger types: I is a 4-byte unsigned integer, Q is an 8-byte unsigned integer, and u is a variable-length byte array.
The history store key (IuQQ) includes the table identifier (which collection file the version belongs to), the record key in that table (the RecordId), the MVCC start timestamp (when this version became current), and a counter (to order multiple updates at the same timestamp). The value (QQQu) contains the MVCC stop timestamp (when the version became obsolete), the durable timestamp (when the record reached a persistence point, such as a checkpoint), an update type, and a byte array holding the BSON representation of the document version. The start and stop timestamps track the visibility window of this document version, and the durable timestamp indicates when a version is safe to remove, supporting features such as rollback-to-stable, replication catch-up, and crash recovery.
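To make this layout concrete, here is an illustrative sketch in mongosh-style JavaScript. The field names are mine, for explanation only; WiredTiger stores these fields in its own packed binary encoding:
// Hypothetical representation of one decoded history store record
const hsRecord = {
  key: {
    btreeId:   11,               // I: identifier of the collection's table (B-tree)
    recordKey: "<RecordId>",     // u: key of the document in that table
    startTs:   "(1759005137,1)", // Q: MVCC start timestamp of this version
    counter:   0                 // Q: orders multiple updates at the same timestamp
  },
  value: {
    stopTs:     "(1759005138,1)", // Q: MVCC stop timestamp (version became obsolete)
    durableTs:  "(1759005138,1)", // Q: durable timestamp (persistence point)
    updateType: 3,                // Q: update type
    bson:       "<BSON bytes of this document version>" // u
  }
};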
To get some records in it, I start MongoDB as a single-member replica set:
mongod --dbpath /data/db --replSet rs0 --wiredTigerCacheSizeGB 0.25 &
mongosh --eval '
rs.initiate( { _id: "rs0", members: [
{_id: 0, priority: 1, host: "localhost:27017"},
]});
'
I insert five documents and then update them, so that each document has two versions: the current one with { val: "newvalue" } and the previous one with { val: "oldvalue" }:
db.test.drop();
for (let i = 0; i < 5; i++) {
db.test.insertOne({
_id: i,
val: "oldvalue",
filler: "X".repeat(1024)
});
}
for (let i = 0; i < 5; i++) {
db.test.updateOne(
{ _id: i },
{ $set: { val: "newvalue" } }
);
}
Until a checkpoint or cache eviction occurs, all changes remain in memory (the WiredTiger cache), protected by write-ahead logging (WAL). To get something in the files, I watch the mongod log and wait for a checkpoint:
{"t":{"$date":"2025-09-27T20:33:18.140+00:00"},"s":"I", "c":"WTCHKPT", "id":22430, "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":{"ts_sec":1759005198,"ts_usec":140184,"thread":"12233:0x7f908e1f76c0","session_name":"WT_SESSION.checkpoint","category":"WT_VERB_CHECKPOINT_PROGRESS","category_id":7,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 196, snapshot max: 196 snapshot count: 0, oldest timestamp: (1759005138, 1) , meta checkpoint timestamp: (1759005188, 1) base write gen: 1"}}}
The durable history store file size has increased:
ls -alrt WiredTigerHS.wt
-rw-------. 1 root root 20480 Sep 27 20:33 WiredTigerHS.wt
I stopped mongod to be able to read the files with wt (which I compiled in a Docker container, as in the earlier post of this series):
pkill mongod
There are 18 records in the durable history file, and the ones from my collection are visible as I filled a field with a thousand 'X' characters (0x58), so they are easy to spot in a hex/BSON dump:
wt -h /data/db dump file:WiredTigerHS.wt
WiredTiger Dump (WiredTiger Version 12.0.0)
Format=print
Header
file:WiredTigerHS.wt
access_pattern_hint=none,allocation_size=4KB,app_metadata=,assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,block_manager=default,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,disaggregated=(page_log=),encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(compare_timestamp=oldest_timestamp,enabled=false,file_metadata=,metadata_file=,panic_corrupt=true,repair=false),in_memory=false,ingest=,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=16KB,key_format=IuQQ,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=,bloom=,bloom_bit_count=,bloom_config=,bloom_hash_count=,bloom_oldest=,chunk_count_limit=,chunk_max=,chunk_size=,merge_max=,merge_min=),memory_page_image_max=0,memory_page_max=5MB,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,stable=,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=0),type=file,value_format=QQQu,verbose=[],write_timestamp_usage=none
Data
\83\81\8b\e8h\d8I\d1\ff\ff\df\c3\80
\e8h\d8I\d1\ff\ff\df\c4\e8h\d8I\d1\ff\ff\df\c3\83t\01\00\00\03md\00\ea\00\00\00\02ns\00\14\00\00\00config.transactions\00\03options\00 \00\00\00\05uuid\00\10\00\00\00\044\bb\80\11\e7\b3J\9b\a3^\ef\15\f0\d0\ee\ef\00\04indexes\00\97\00\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00-\00\00\00\02_id_\00\1e\00\00\00index-20-12692322139498239785\00\00\02ns\00\14\00\00\00config.transactions\00\02ident\00#\00\00\00collection-19-12692322139498239785\00\00
\83\81\90\e8h\d8I\d1\ff\ff\df\ca\80
\e8h\d8I\d1\ff\ff\df\cd\e8h\d8I\d1\ff\ff\df\ca\83\90\01\00\00\03md\00\f8\00\00\00\02ns\00"\00\00\00config.analyzeShardKeySplitPoints\00\03options\00 \00\00\00\05uuid\00\10\00\00\00\04\e2\92\b2"\03\d8A\c0\97\1e\df\f2\a7\9bp\02\00\04indexes\00\97\00\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00-\00\00\00\02_id_\00\1e\00\00\00index-30-12692322139498239785\00\00\02ns\00"\00\00\00config.analyzeShardKeySplitPoints\00\02ident\00#\00\00\00collection-28-12692322139498239785\00\00
\83\81\91\e8h\d8I\d1\ff\ff\df\ce\80
\e8h\d8I\d1\ff\ff\df\cf\e8h\d8I\d1\ff\ff\df\ce\83x\01\00\00\03md\00\ec\00\00\00\02ns\00\16\00\00\00config.sampledQueries\00\03options\00 \00\00\00\05uuid\00\10\00\00\00\04\e0\b8]\03\90\10Bp\80\d2\0d^\e5w\f1\c8\00\04indexes\00\97\00\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00-\00\00\00\02_id_\00\1e\00\00\00index-33-12692322139498239785\00\00\02ns\00\16\00\00\00config.sampledQueries\00\02ident\00#\00\00\00collection-32-12692322139498239785\00\00
\83\81\92\e8h\d8I\d1\ff\ff\df\d0\80
\e8h\d8I\d1\ff\ff\df\d1\e8h\d8I\d1\ff\ff\df\d0\83\80\01\00\00\03md\00\f0\00\00\00\02ns\00\1a\00\00\00config.sampledQueriesDiff\00\03options\00 \00\00\00\05uuid\00\10\00\00\00\04\95\f8\18\a6}\c3H\db\a2\8d\90\9f\a0R\d3\e4\00\04indexes\00\97\00\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00-\00\00\00\02_id_\00\1e\00\00\00index-36-12692322139498239785\00\00\02ns\00\1a\00\00\00config.sampledQueriesDiff\00\02ident\00#\00\00\00collection-35-12692322139498239785\00\00
\97\81\81\e8h\d8I\f8\ff\ff\df\c2\80
\e8h\d8I\f8\ff\ff\df\c3\e8h\d8I\f8\ff\ff\df\c2\83\a6\00\00\00\03_id\00H\00\00\00\05id\00\10\00\00\00\042\a8s\b1!<E\0a\88'\08\8d]\985\01\05uid\00 \00\00\00\00\e3\b0\c4B\98\fc\1c\14\9a\fb\f4\c8\99o\b9$'\aeA\e4d\9b\93L\a4\95\99\1bxR\b8U\00\12txnNum\00\01\00\00\00\00\00\00\00\03lastWriteOpTime\00\1c\00\00\00\11ts\00\02\00\00\00\f9I\d8h\12t\00\01\00\00\00\00\00\00\00\00\09lastWriteDate\00\17\f7\e0\8c\99\01\00\00\00
\97\81\81\e8h\d8I\f8\ff\ff\df\c3\80
\e8h\d8I\f8\ff\ff\df\c4\e8h\d8I\f8\ff\ff\df\c3\83\a6\00\00\00\03_id\00H\00\00\00\05id\00\10\00\00\00\042\a8s\b1!<E\0a\88'\08\8d]\985\01\05uid\00 \00\00\00\00\e3\b0\c4B\98\fc\1c\14\9a\fb\f4\c8\99o\b9$'\aeA\e4d\9b\93L\a4\95\99\1bxR\b8U\00\12txnNum\00\02\00\00\00\00\00\00\00\03lastWriteOpTime\00\1c\00\00\00\11ts\00\03\00\00\00\f9I\d8h\12t\00\01\00\00
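As a sanity check, MongoDB timestamps are (seconds, increment) pairs, with the high 32 bits holding Unix seconds. Assuming the recurring byte run h\d8I\d1 in this dump (h prints as byte 0x68) is the hex value 0x68d849d1, it decodes to the wall-clock time of this test run:
// Assumption: "h\d8I\d1" is the big-endian seconds part 0x68d849d1
new Date(parseInt("68d849d1", 16) * 1000).toISOString()
// 2025-09-27T20:32:17.000Z, consistent with the oldest timestamp
// (1759005138, 1) in the checkpoint log line above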
September 26, 2025
Identifying and resolving performance issues caused by TOAST OID contention in Amazon Aurora PostgreSQL Compatible Edition and Amazon RDS for PostgreSQL
In this post, we explore the challenges of OID exhaustion in PostgreSQL, focusing on its impact on TOAST tables and how it leads to performance issues. We will cover how to identify the problem by reviewing wait events, session activity, and table usage. Additionally, we discuss practical solutions, from cleaning up data to more advanced strategies such as partitioning.
Postgres 18.0 vs sysbench on a small server
This has benchmark results for Postgres 18.0 using sysbench on a small server. Previous results for 18 rc1 are here.
tl;dr
- From 12.22 to 18.0
  - there are no regressions larger than 2% but many improvements larger than 5%. Postgres continues to do a great job at avoiding regressions over time.
- From 17.6 to 18.0
  - I continue to see small CPU regressions (1% or 2%) in Postgres 18 for short range queries on low-concurrency workloads. I see it for shorter but not for longer range queries so my guess is that this is new overhead in query execution setup or optimization. I hope to explain this.
Builds, configuration and hardware
I compiled Postgres from source for versions 12.22, 13.22, 14.19, 15.14, 16.10, 17.6, and 18.0.
The HW is an ASUS ExpertCenter PN53 with AMD Ryzen 7735HS CPU, 32G of RAM, 8 cores with AMD SMT disabled, Ubuntu 24.04 and an NVMe device with ext4 and discard enabled.
Prior to 18.0, the configuration file was named conf.diff.cx10a_c8r32 and is here for 12.22, 13.22, 14.19, 15.14, 16.10 and 17.6.
For 18.0 I tried 3 configuration files:
- conf.diff.cx10b_c8r32 (x10b) - uses io_method=sync
- conf.diff.cx10c_c8r32 (x10c) - uses io_method=worker
- conf.diff.cx10d_c8r32 (x10d) - uses io_method=io_uring
Benchmark
I used sysbench and my usage is explained here. To save time I only run 32 of the 42 microbenchmarks and most test only 1 type of SQL statement. Benchmarks are run with the database cached by Postgres.
The read-heavy microbenchmarks run for 600 seconds and the write-heavy for 900 seconds.
The benchmark is run with 1 client, 1 table and 50M rows. The purpose is to search for CPU regressions.
Results
The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation.
I provide charts below with relative QPS, which is: (QPS for some version) / (QPS for base version)
When the relative QPS is > 1 then some version is faster than the base version; for example, 1.05 means about 5% faster. When it is < 1 then there might be a regression. Values from iostat and vmstat divided by QPS are also provided here. These can help to explain why something is faster or slower because they show how much HW is used per request.
I present results for:
- versions 12 through 18 using 12.22 as the base version
- versions 17.6 and 18.0 using 17.6 as the base version
Results: Postgres 17.6 and 18.0
For the read-only_range=X benchmarks there might be small regressions (1% or 2%) when X is 10 or 100 but not 10000. The value of X is the length of the range scan. I have seen similar regressions in the beta and RC releases. Given that this occurs when the range scan is shorter, the problem might be new overhead in query execution setup or optimization. But I have yet to explain this.
Relative to: 17.6 with x10a
col-1 : 18.0 with x10b and io_method=sync
col-2 : 18.0 with x10c and io_method=worker
col-3 : 18.0 with x10d and io_method=io_uring

col-1 col-2 col-3 point queries
1.01 1.01 0.97 hot-points_range=100
1.01 1.00 0.99 point-query_range=100
1.01 1.01 1.00 points-covered-pk_range=100
1.01 1.02 1.01 points-covered-si_range=100
1.01 1.01 1.00 points-notcovered-pk_range=100
1.01 0.99 1.00 points-notcovered-si_range=100
1.02 1.02 1.03 random-points_range=1000
1.01 1.00 0.99 random-points_range=100
1.00 1.00 0.99 random-points_range=10

col-1 col-2 col-3 range queries without aggregation
0.99 0.99 0.98 range-covered-pk_range=100
1.00 0.99 1.00 range-covered-si_range=100
1.00 0.99 0.98 range-notcovered-pk_range=100
0.99 0.99 0.99 range-notcovered-si_range=100
1.04 1.04 1.04 scan_range=100

col-1 col-2 col-3 range queries with aggregation
1.01 1.00 1.01 read-only-count_range=1000
1.01 1.00 1.00 read-only-distinct_range=1000
0.99 1.00 0.98 read-only-order_range=1000
1.01 1.00 1.00 read-only_range=10000
0.99 0.99 0.98 read-only_range=100
0.98 0.99 0.98 read-only_range=10
1.01 1.00 0.99 read-only-simple_range=1000
1.00 1.00 0.99 read-only-sum_range=1000

col-1 col-2 col-3 writes
1.00 1.00 0.99 delete_range=100
0.99 0.99 0.98 insert_range=100
0.99 0.99 0.98 read-write_range=100
0.98 0.99 0.98 read-write_range=10
0.99 1.00 0.99 update-index_range=100
0.99 1.00 1.00 update-inlist_range=100
0.99 1.00 0.98 update-nonindex_range=100
0.99 0.99 0.98 update-one_range=100
0.99 1.00 0.99 update-zipf_range=100
1.00 1.00 0.99 write-only_range=10000
Results: Postgres 12 to 18
From 12.22 to 18.0 there are no regressions larger than 2% but many improvements larger than 5% (highlighted in green). Postgres continues to do a great job at avoiding regressions over time.
Relative to: 12.22
col-1 : 13.22
col-2 : 14.19
col-3 : 15.14
col-4 : 16.10
col-5 : 17.6
col-6 : 18.0 with the x10b config

col-1 col-2 col-3 col-4 col-5 col-6 point queries
1.06 1.05 1.05 1.09 2.04 2.05 hot-points_range=100
1.01 1.03 1.03 1.02 1.04 1.04 point-query_range=100
1.00 0.99 0.99 1.03 0.99 1.01 points-covered-pk_range=100
1.04 1.03 1.02 1.05 1.01 1.03 points-covered-si_range=100
1.01 1.00 1.01 1.04 1.01 1.02 points-notcovered-pk_range=100
1.01 1.02 1.03 1.05 1.02 1.04 points-notcovered-si_range=100
1.02 1.00 1.02 1.05 1.00 1.02 random-points_range=1000
1.01 1.01 1.01 1.03 1.01 1.02 random-points_range=100
1.01 1.01 1.01 1.02 1.01 1.01 random-points_range=10

col-1 col-2 col-3 col-4 col-5 col-6 range queries without aggregation
0.99 1.00 1.00 1.00 0.99 0.98 range-covered-pk_range=100
1.01 1.01 1.00 1.00 0.99 0.99 range-covered-si_range=100
1.00 1.00 1.01 1.01 1.00 1.00 range-notcovered-pk_range=100
1.00 1.00 1.00 1.01 1.02 1.01 range-notcovered-si_range=100
1.00 1.30 1.19 1.18 1.16 1.20 scan_range=100

col-1 col-2 col-3 col-4 col-5 col-6 range queries with aggregation
1.04 1.02 1.00 1.05 1.02 1.03 read-only-count_range=1000
1.00 1.00 1.03 1.04 1.03 1.04 read-only-distinct_range=1000
1.00 1.00 1.04 1.04 1.06 1.06 read-only-order_range=1000
1.01 1.01 1.04 1.07 1.06 1.07 read-only_range=10000
1.00 1.00 1.01 1.01 1.02 1.01 read-only_range=100
1.00 1.00 1.00 0.99 1.01 0.99 read-only_range=10
1.01 1.01 1.02 1.02 1.03 1.03 read-only-simple_range=1000
1.01 1.00 1.00 1.03 1.02 1.02 read-only-sum_range=1000

col-1 col-2 col-3 col-4 col-5 col-6 writes
1.01 1.02 1.01 1.03 1.13 1.12 delete_range=100
0.99 0.98 0.97 0.98 1.06 1.05 insert_range=100
0.99 1.00 1.00 1.01 1.02 1.02 read-write_range=100
0.99 1.01 1.01 1.01 1.03 1.01 read-write_range=10
1.00 1.00 1.01 1.00 1.09 1.08 update-index_range=100
1.00 1.10 1.09 1.09 1.10 1.09 update-inlist_range=100
1.03 1.05 1.06 1.05 1.15 1.14 update-nonindex_range=100
0.99 0.98 0.99 0.98 1.07 1.06 update-one_range=100
1.01 1.04 1.06 1.05 1.18 1.17 update-zipf_range=100
0.98 1.01 1.01 0.99 1.07 1.07 write-only_range=10000
MySQL 8.0 End of Life Support: What Are Your Options?
We’ve mentioned this a few times here on the blog already, but in case you missed it, MySQL 8.0’s end-of-life date is April 2026. This probably sounds forever away, but it’s going to sneak up before you know it. Maybe you’ve been putting off thinking about it, or maybe you’re already weighing your options but […]
Elasticsearch, Postgres, and the ACID Test
A developer’s look at how Elasticsearch and Postgres stack up against the ACID test
September 24, 2025
Four Ivies. Two days.
This is my long-overdue trip report from last summer: July 10–11, 2024. We toured Ivy League campuses to help our rising senior son weigh his options, with our two daughters (our kids are each four years apart) tagging along for an early preview. Day one was Yale and Brown, followed by a night in New Jersey. Day two took us to Princeton and UPenn, then the long drive back to Buffalo. Of course we drove, that's how we roll.
Prelude
Lining up campus tours is its own sport. They are booked months in advance. Pro-tip: when your kid is born, call the colleges to reserve their campus visit. We lucked into two open slots, then hacked together a Python script to snipe cancellations and grabbed the other two. Not proud of this, but that's what it takes if you don't book months in advance.
The U.S. college admissions process is Byzantine. It is a weird mix of ritual and performance. There are entire books about how to write the college essay. I have plenty to say about the so-called holistic review process, but that's for another post. Back in Turkey, I just had to take a National University Entrance exam, and score very high to get placed into a top university. That was also a broken system and was stressful, but at least there were no essays, no extracurriculars, no culture fit, no campus visits.
Here, though, the campus visit is part of the show. It is especially important if you are considering signing a binding early decision agreement. Early decision boosts acceptance odds 3–4x. But it also locks you in. Our son didn't end up doing ED. His top choices didn't offer it, and he didn't want to burn his chances elsewhere.
Yale: Cathedrals and Low Energy
After six hours on the road we rolled into New Haven, paid for street parking, and joined the tour. Yale sits right in the city center, and the architecture hits you: gothic cathedrals and stone facades older than the country itself.
The name still carries weight. Even in Turkey, Yale was known through Yale locks, whose founder was a distant relative of the university's founder, Elihu Yale. The Yale programs are ranked high, the libraries are priceless, and the faculty-to-student ratio is great.
One stop in our campus tour was the Beinecke Library. Its marble-and-granite exterior filters light to protect fragile manuscripts. Our guide told us that in a fire, oxygen would be sucked out to save the books, even at the expense of people inside. Dying for the books is romantic, but the fact-check says this is a myth.
Yale also revealed the Ivy pattern we would see at other stops on our tour: two years of mandatory dorms, no AP credit (just placement), and an abundance of pride in being an Ivy.
We noted some downsides at Yale. There is no strong pitch for undergraduate research. Some buildings are beautiful, others are just tired: 1960s concrete, no AC, worn interiors. On a hot day, it felt even worse. The CS building in particular was old, dark, and smelly. It looked like it was designed by someone who hated students.
Brown: The underrated Ivy
Ninety minutes later we were in Providence, attending the Brown campus tour at 3 pm.
Brown impressed us. The open curriculum gives students great freedom, for example, CS mixed with theater, neuroscience, and entrepreneurship. Research opportunities are emphasized from the start of the tour. Every student writes a senior thesis. Brown supports student research financially, and third- and fourth-year students can TA undergrad classes. Stay for a fifth year and you can leave with a combined MS. The culture is collaborative, not competitive. If you fail a class, it doesn't show up on your transcript. This way students are encouraged to take risks... or quit and be lazy, I don't know.
The campus sits on a hill overlooking Providence, close enough to the city. The faculty are strong, the vibe is progressive, and the students approachable. No Ivy airs here. Did you know that Emma Watson studied here? Our tour guide was excellent. Under his spell, my youngest daughter declared she would apply early decision to Brown when her time comes. Our son liked it too. He pointed out that Brown CS graduates earn the most one year out of college. The CS building is cramped and outdated, but still much better than Yale's. We all walked away charmed.
We drove to Jersey for a hotel. Our dinner was Dave's Hot Chicken.
Princeton: The Old Country Club
Next morning: Princeton. It has a huge campus. We parked at the stadium and took a shuttle to the welcome center.
Princeton is historic and prestigious. Einstein once taught here. Princeton still has very strong faculty and a lot of resources. Undergraduates do research for their senior thesis. But our tour guide spent more time talking about dining clubs and traditions than academics. It felt hollow. Too polished, too self-satisfied. Brown had been about people. Princeton was about tradition and Ivy airs. Unlike Brown, Princeton does not offer an open curriculum or fifth-year MS.
One odd scene was a busload of Chinese families arriving with luggage in tow, apparently straight from the airport. They dragged their luggage across campus, right into the tour. Princeton seems to have strong prestige in China.
UPenn: Philly Hustle
From Princeton, it was a short drive to Philadelphia. But what a change of scenery: UPenn sits right under the downtown Philly skyscrapers.
UPenn struck us as hands-on and pragmatic. In your first year you get a writing course. In your 3rd and 4th years you write research reports and a senior thesis. Double majors are allowed, minors too. The Wharton School of Business looms large: alumni include both Donald Trump and Jho Low, the billion-dollar corruption guy. What are they teaching there?
Food trucks lined the campus streets, serving better meals than many college dining halls. The lamb shawarma was awesome! While UPenn is dead in the center of downtown, it still has the compulsory two-year dorm stay like the other Ivies. Our younger daughter adored the tour guide, adopted her as an older sister, and by the end of the tour declared UPenn her new top choice.
Closing
So our ranking is:
- Brown
- UPenn
- Princeton
- Yale
Brown feels underrated within the Ivies. The Ivies as a whole, though, are overrated. Colleges in general are overrated. These schools still coast on prestige built centuries ago. But the world has changed drastically. If they want to matter in the age of the internet and AI, they will need to adapt.
Choosing the Right Key-Value Store: Redis vs Valkey
Not long ago, picking an in-memory key-value store was easy. Redis was the default. Fast, simple, everywhere. Then the rules changed. Redis moved to a much more restrictive license. Suddenly, many companies had to rethink their plans, especially if they cared about staying open source or needed flexibility for the cloud. That’s when Valkey arrived. […]
Partnering with Cloudflare to bring you the fastest globally distributed applications
You can now easily set up PlanetScale databases with Cloudflare Workers using this native integration.
Processes and Threads
Processes and threads are fundamental abstractions for operating systems. Learn how they work and how they impact database performance in this interactive article.
September 23, 2025
Long-term storage and analysis of Amazon RDS events with Amazon S3 and Amazon Athena
In this post, we show you how to implement an automated solution for archiving Amazon RDS events to Amazon Simple Storage Service (Amazon S3). We also discuss how to analyze the events with Amazon Athena, which enables proactive database management, helps maintain security and compliance, and provides valuable insights for capacity planning and troubleshooting.
Announcing OpenBao Support in Percona Server for MongoDB
At Percona, we believe that an open world is a better world. Our mission has always been to empower organizations with secure, scalable, and reliable open source database solutions without locking them into expensive proprietary ecosystems. Today, we’re excited to share another step forward in this journey: Percona Server for MongoDB now supports OpenBao for […]
September 22, 2025
Migrate full-text search from SQL Server to Amazon Aurora PostgreSQL-compatible edition or Amazon RDS for PostgreSQL
In this post, we show you how to migrate full-text search in Microsoft SQL Server to Amazon Aurora PostgreSQL using the text search data types tsvector and tsquery. We also show you how to implement FTS using the pg_trgm and pg_bigm extensions.
Keep PostgreSQL Secure with TDE and the Latest Updates
This fall feels like a good moment to stop and look at what’s changed in PostgreSQL security over the last months and also what you can use right now to make your PostgreSQL deployments safer. PostgreSQL Transparent Data Encryption (TDE) from Percona For many years, Transparent Data Encryption (TDE) was a missing piece for security […]
PlanetScale for Postgres is now GA
PlanetScale for Postgres is now generally available.
September 21, 2025
MongoDB Search Index Internals With Luke (Lucene Toolbox GUI Tool)
Previously, I demonstrated MongoDB text search scoring with a simple example, creating a dynamic index without specifying fields explicitly. You might be curious about what data is actually stored in such an index. Let's delve into the specifics. Unlike regular MongoDB collections and indexes, which use WiredTiger for storage, search indexes leverage Lucene technology. We can inspect these indexes using Luke, the Lucene Toolbox GUI tool.
Set up a lab
I started an Atlas local deployment to get a container for my lab:
# download Atlas CLI if you don't have it. Here it is for my Mac:
wget https://www.mongodb.com/try/download/atlascli
unzip mongodb-atlas-cli_1.43.0_macos_arm64.zip
# start a container
bin/atlas deployments setup atlas --type local --port 27017 --force
Sample data
I connected with mongosh and created a collection, and a search index, like in the previous post:
mongosh --eval '
db.articles.deleteMany({});
db.articles.insertMany([
{ description : "🍏 🍌 🍊" }, // short, 1 🍏
{ description : "🍎 🍌 🍊" }, // short, 1 🍎
{ description : "🍎 🍌 🍊 🍎" }, // larger, 2 🍎
{ description : "🍎 🍌 🍊 🍊 🍊" }, // larger, 1 🍎
{ description : "🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰" }, // large, 1 🍎
{ description : "🍎 🍎 🍎 🍎 🍎 🍎" }, // large, 6 🍎
{ description : "🍎 🍌" }, // very short, 1 🍎
{ description : "🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎" }, // large, 1 🍎
{ description : "🍎 🍎 🍌 🍌 🍌" }, // shorter, 2 🍎
]);
db.articles.createSearchIndex("default",
{ mappings: { dynamic: true } }
);
'
Get Lucene indexes
While MongoDB collections and secondary indexes are stored by the WiredTiger storage engine (by default, in the /data/db directory), the text search indexes use Lucene in a mongot process (with files stored by default in /data/mongot). I copied it to my laptop:
docker cp atlas:/data/mongot ./mongot_copy
cd mongot_copy
One file is easy to read, as it is in JSON format, and it is the metadata listing the search indexes, with their MongoDB configuration:
cat configJournal.json | jq
{
"version": 1,
"stagedIndexes": [],
"indexes": [
{
"index": {
"indexID": "68d0588abf7ab96dd26277b1",
"name": "default",
"database": "test",
"lastObservedCollectionName": "articles",
"collectionUUID": "a18b587d-a380-4067-95aa-d0e9d4871b64",
"numPartitions": 1,
"mappings": {
"dynamic": true,
"fields": {}
},
"indexFeatureVersion": 4
},
"analyzers": [],
"generation": {
"userVersion": 0,
"formatVersion": 6
}
}
],
"deletedIndexes": [],
"stagedVectorIndexes": [],
"vectorIndexes": [],
"deletedVectorIndexes": []
}
The directory where Lucene files are stored has the IndexID in their names:
ls 68d0588abf7ab96dd26277b1*
_0.cfe _0.cfs _0.si
_1.cfe _1.cfs _1.si
_2.cfe _2.cfs _2.si
_3.cfe _3.cfs _3.si
_4.cfe _4.cfs _4.si
_5.cfe _5.cfs _5.si
_6.cfe _6.cfs _6.si
segments_2
write.lock
In a Lucene index, each .cfs/.cfe/.si set represents one immutable segment containing a snapshot of indexed data: .cfs holds the actual data, .cfe its table of contents, and .si the segment’s metadata. The segments_2 file is the global manifest that tracks all active segments, so Lucene can search across them as one index.
Install and use Luke
I installed the Lucene binaries and started Luke:
wget https://dlcdn.apache.org/lucene/java/9.12.2/lucene-9.12.2.tgz
tar -zxvf lucene-9.12.2.tgz
lucene-9.12.2/bin/luke.sh
This starts the GUI asking for the index directory:
The "Overview" tab shows lots of information:
The field names are prefixed with the type. My description field was indexed as a string and named $type:string/description. There are nine documents and nine different terms:
The Lucene index keeps the overall frequency in order to apply inverse document frequency (IDF). Here, 🍎 is present in eight documents and 🍏 in one.
The "Document" tab lets us browse the documents and see what is indexed. For example, 🍏 is present in one document with { description: "🍏 🍌 🍊" }:
The flags IdfpoN-S mean that it is a fully indexed text field with docs, frequencies, positions, and offsets, with norms and stored values.
The "Search" tab allows us to run queries. For example, { $search: { text: { query: ["🍎", "🍏"], path: "description" }, index: "default" } } from my previous post is:
This is exactly what I got from MongoDB:
db.articles.aggregate([
{ $search: { text: { query: ["🍎", "🍏"], path: "description" }, index: "default" } },
{ $project: { _id: 0, score: { $meta: "searchScore" }, description: 1 } },
{ $sort: { score: -1 } }
]).forEach( i=> print(i.score.toFixed(3).padStart(5, " "),i.description) )
1.024 🍏 🍌 🍊
0.132 🍎 🍎 🍎 🍎 🍎 🍎
0.107 🍎 🍌 🍊 🍎
0.101 🍎 🍎 🍌 🍌 🍌
0.097 🍎 🍌
0.088 🍎 🍌 🍊
0.073 🍎 🍌 🍊 🍊 🍊
0.059 🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰
0.059 🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎
If you double-click on a document in the result, you can get the explanation of the score:
For example, the score of 🍏 🍌 🍊 is explained by:
1.0242119 sum of:
1.0242119 weight($type:string/description:🍏 in 0) [BM25Similarity], result of:
1.0242119 score(freq=1.0), computed as boost * idf * tf from:
1.89712 idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:
1 n, number of documents containing term
9 N, total number of documents with field
0.5398773 tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:
1.0 freq, occurrences of term within document
1.2 k1, term saturation parameter
0.75 b, length normalization parameter
3.0 dl, length of field
4.888889 avgdl, average length of field
The BM25 core formula is score = boost × idf × tf.
IDF is idf = log(1 + (N - n + 0.5) / (n + 0.5)) where n is the number of documents containing 🍏 , so 1, and N is the total documents in the index for this field, so 9.
TF is tf = freq / ( freq + k1 × (1 - b + b × (dl / avgdl)) ) where freq is the term occurrences in this doc’s field, so 1, k1 is the term saturation parameter, which defaults to 1.2 in Lucene, b is the length normalization, which defaults to 0.75 in Lucene, dl is the document length, which is three tokens here, and avgdl is the average document length for this field in the segment—here, 4.888889.
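The reported numbers can be reproduced with a few lines of arithmetic (boost is 1 here, so score = idf × tf):
// Recomputing Lucene's explanation for 🍏 in "🍏 🍌 🍊" (boost = 1)
const N = 9, n = 1, freq = 1, k1 = 1.2, b = 0.75, dl = 3, avgdl = 4.888889;
const idf = Math.log(1 + (N - n + 0.5) / (n + 0.5));         // 1.89712
const tf  = freq / (freq + k1 * (1 - b + b * (dl / avgdl))); // 0.5398773
print(idf * tf);                                             // 1.0242119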
In everyday usage, MongoDB search also allows boosting via the query, which multiplies into the BM25 scoring formula as the boost factor.
The "Analysis" tab helps explain how strings are tokenized and processed. For example, the standard analyzer explicitly recognized emojis:
Finally, I inserted 500 documents with other fruits, like in the previous post, and the collection-wide term frequency has been updated:
The scores reflect the change:
The explanation of the new rank for 🍏 🍌 🍊 is:
3.2850468 sum of:
3.2850468 weight($type:string/description:🍏 in 205) [BM25Similarity], result of:
3.2850468 score(freq=1.0), computed as boost * idf * tf from:
5.8289456 idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:
1 n, number of documents containing term
509 N, total number of documents with field
0.5635748 tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:
1.0 freq, occurrences of term within document
1.2 k1, term saturation parameter
0.75 b, length normalization parameter
3.0 dl, length of field
5.691552 avgdl, average length of field
After adding 500 documents without 🍏, BM25 recalculates IDF for 🍏 with a much larger N, making it appear far rarer in the corpus, so its score contribution more than triples.
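The same arithmetic reproduces the new IDF:
// IDF for 🍏 with N = 509 documents holding the field, still n = 1
print(Math.log(1 + (509 - 1 + 0.5) / (1 + 0.5))); // 5.8289456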
Notice that when I queried for 🍎🍏, no documents contained both terms, so the scoring explanation included only one weight. If I modify the query to include 🍎🍏🍊, the document 🍏 🍌 🍊 scores highest, as it combines the weights for both matching terms, 🍏 and 🍊:
4.3254924 sum of:
3.2850468 weight($type:string/description:🍏 in 205) [BM25Similarity], result of:
3.2850468 score(freq=1.0), computed as boost * idf * tf from:
5.8289456 idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:
1 n, number of documents containing term
509 N, total number of documents with field
0.5635748 tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:
1.0 freq, occurrences of term within document
1.2 k1, term saturation parameter
0.75 b, length normalization parameter
3.0 dl, length of field
5.691552 avgdl, average length of field
1.0404456 weight($type:string/description:🍊 in 205) [BM25Similarity], result of:
1.0404456 score(freq=1.0), computed as boost * idf * tf from:
1.8461535 idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:
80 n, number of documents containing term
509 N, total number of documents with field
0.5635748 tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:
1.0 freq, occurrences of term within document
1.2 k1, term saturation parameter
0.75 b, length normalization parameter
3.0 dl, length of field
5.691552 avgdl, average length of field
Here is the same query in MongoDB:
db.articles.aggregate([
{ $search: { text: { query: ["🍏", "🍊"], path: "description" }, index: "default" } },
{ $project: { _id: 0, score: { $meta: "searchScore" }, description: 1 } },
{ $sort: { score: -1 } },
{ $limit: 15 }
]).forEach( i=> print(i.score.toFixed(3).padStart(5, " "),i.description) )
4.325 🍏 🍌 🍊
1.354 🍎 🍌 🍊 🍊 🍊
1.259 🫐 🍊 🥑 🍊
1.137 🥥 🍊 🍊 🍅 🍈 🍈
1.137 🍍 🍓 🍊 🍊 🥑 🍉
1.137 🥥 🍆 🍊 🍊 🍍 🍉
1.084 🍊 🍑 🍊 🥥 🍌 🍍 🫐
1.084 🍊 🫐 🥝 🍋 🥑 🍇 🍊
1.084 🥭 🍍 🥑 🍋 🍈 🍊 🍊
1.040 🍊 🫐 🥭
1.040 🍊 🍉 🍍
1.040 🍎 🍌 🍊
1.040 🍊 🍋 🍋
1.040 🍐 🍌 🍊
1.036 🍐 🥥 🍍 🍈 🍐 🍊 🍆 🍊
While the scores may feel intuitively correct when you look at the data, it's important to remember there's no magic—everything is based on well‑known mathematics and formulas. Lucene’s scoring algorithms are used in many systems, including Elasticsearch, Apache Solr, and the search indexes built into MongoDB.
Conclusion
MongoDB search indexes are designed to work optimally out of the box. In my earlier post, I relied entirely on default settings, dynamic mapping, and even replaced words with emojis—yet still got relevant, well-ranked results without extra tuning. If you want to go deeper and fine-tune your search behavior, or simply learn more about how it works, inspecting the underlying Lucene index can provide great insights. Since MongoDB Atlas Search indexes are Lucene-compatible, tools like Luke allow you to see exactly how your text is tokenized, stored, and scored—giving you full transparency into how queries match your documents.
September 19, 2025
Text Search With MongoDB (BM25 TF-IDF) and PostgreSQL
MongoDB search indexes provide full‑text search capabilities directly within MongoDB, allowing complex queries to be run without copying data to a separate search system. Initially deployed in Atlas, MongoDB’s managed service, search indexes are now also part of the community edition. This post compares the default full‑text search behaviour between MongoDB and PostgreSQL, using a simple example to illustrate the ranking algorithm.
Setup: a small dataset
I’ve inserted nine small documents, each consisting of a few fruits, using emojis to make it more visual. The 🍎 and 🍏 emojis represent our primary search terms. They appear at varying frequencies in documents of different lengths.
db.articles.deleteMany({});
db.articles.insertMany([
{ description : "🍏 🍌 🍊" }, // short, 1 🍏
{ description : "🍎 🍌 🍊" }, // short, 1 🍎
{ description : "🍎 🍌 🍊 🍎" }, // larger, 2 🍎
{ description : "🍎 🍌 🍊 🍊 🍊" }, // larger, 1 🍎
{ description : "🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰" }, // large, 1 🍎
{ description : "🍎 🍎 🍎 🍎 🍎 🍎" }, // large, 6 🍎
{ description : "🍎 🍌" }, // very short, 1 🍎
{ description : "🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎" }, // large, 1 🍎
{ description : "🍎 🍎 🍌 🍌 🍌" }, // shorter, 2 🍎
]);
To enable dynamic indexing, I created a MongoDB search index without specifying any particular field names:
db.articles.createSearchIndex("default",
{ mappings: { dynamic: true } }
);
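For comparison, a static mapping that indexes only the description field would look something like this (a hypothetical alternative, not used in this post; the index name "static" is mine):
db.articles.createSearchIndex("static",
  { mappings: { dynamic: false,
      fields: { description: { type: "string" } } } }
);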
I created the equivalent on PostgreSQL:
DROP TABLE IF EXISTS articles;
CREATE TABLE articles (
id BIGSERIAL PRIMARY KEY,
description TEXT
);
INSERT INTO articles(description) VALUES
('🍏 🍌 🍊'),
('🍎 🍌 🍊'),
('🍎 🍌 🍊 🍎'),
('🍎 🍌 🍊 🍊 🍊'),
('🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰'),
('🍎 🍎 🍎 🍎 🍎 🍎'),
('🍎 🍌'),
('🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎'),
('🍎 🍎 🍌 🍌 🍌');
Since text search needs multiple index entries for each row, I set up a Generalized Inverted Index (GIN) and use tsvector to extract and index the relevant tokens.
CREATE INDEX articles_fts_idx
ON articles USING GIN (to_tsvector('simple', description))
;
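To see what actually gets indexed, you can inspect the tsvector for one row. With the 'simple' configuration, each emoji becomes a lexeme with its positions; the output should look something like this (assuming emojis tokenize as words, which the matches below confirm):
SELECT to_tsvector('simple', '🍎 🍌 🍊 🍎');
--       to_tsvector
-- -------------------------
--  '🍊':3 '🍌':2 '🍎':1,4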
MongoDB text search (Lucene BM25):
I use my custom search index to find articles containing either 🍎 or 🍏 in their descriptions. The results are sorted by relevance score and displayed as follows:
db.articles.aggregate([
{ $search: { text: { query: ["🍎", "🍏"], path: "description" }, index: "default" } },
{ $project: { _id: 0, score: { $meta: "searchScore" }, description: 1 } },
{ $sort: { score: -1 } }
]).forEach( i=> print(i.score.toFixed(3).padStart(5, " "),i.description) )
Here are the results, presented in order of best to worst match:
1.024 🍏 🍌 🍊
0.132 🍎 🍎 🍎 🍎 🍎 🍎
0.107 🍎 🍌 🍊 🍎
0.101 🍎 🍎 🍌 🍌 🍌
0.097 🍎 🍌
0.088 🍎 🍌 🍊
0.073 🍎 🍌 🍊 🍊 🍊
0.059 🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰
0.059 🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎
All documents were retrieved by this search since each contains a red or green apple. However, they are assigned different scores:
- Multiple appearances boost the score: When a document contains the search term more than once, its ranking increases compared to those with only a single appearance. That's why documents featuring several 🍎 are ranked higher than those containing only one.
- Rarity outweighs quantity: When a term like 🍎 appears in every document, it has less impact than a rare term, such as 🍏. Therefore, even if 🍏 only appears once, the document containing it ranks higher than others with multiple 🍎. In this model, rarity carries more weight than mere frequency.
- Diminishing returns on term frequency: Each extra occurrence of a term adds less to the relevance score. For instance, increasing 🍎 from one to six times (from 🍎 🍌 to 🍎 🍎 🍎 🍎 🍎 🍎) boosts the score, but not by a factor of six. The effect of term repetition diminishes as the count rises.
- Document length matters: A term that appears only once is scored higher in a short document than in a long one. That's why 🍎 🍌 ranks higher than 🍎 🍌 🍊, which itself ranks higher than 🍎 🍌 🍊 🍊 🍊.
MongoDB Atlas Search indexes are powered by Lucene’s BM25 algorithm, a refinement of the classic TF‑IDF model:
- Term frequency (TF): More occurrences of a term in a document increase its relevance score, but with diminishing returns.
- Inverse document frequency (IDF): Terms that appear in fewer documents receive higher weighting.
- Length normalization: Matches in shorter documents contribute more to relevance than the same matches in longer documents.
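Putting these together, Lucene's BM25 weight for a term in a document is boost * IDF * TF, with:

\[
\mathrm{IDF} = \ln\left(1 + \frac{N - n + 0.5}{n + 0.5}\right)
\qquad
\mathrm{TF} = \frac{f}{f + k_1\left(1 - b + b \cdot \frac{dl}{avgdl}\right)}
\]

where N is the number of documents with the field, n the number of documents containing the term, f the term's frequency within the document, dl the field length, avgdl its average over the corpus, and the defaults are k1 = 1.2 and b = 0.75.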
To demonstrate the impact of IDF, I added 500 random documents that do not contain any of the apples I'm searching for.
const fruits = [ "🍐","🍊","🍋","🍌","🍉","🍇","🍓","🫐",
"🥝","🥭","🍍","🥥","🍈","🍅","🥑","🍆",
"🍋","🍐","🍓","🍇","🍈","🥭","🍍","🍑",
"🥝","🫐","🍌","🍉","🥥","🥑","🥥","🍍" ];
function randomFruitSentence(min=3, max=8) {
const len = Math.floor(Math.random() * (max - min + 1)) + min;
return Array.from({length: len}, () => fruits[Math.floor(Math.random()*fruits.length)]).join(" ");
}
db.articles.insertMany(
Array.from({length: 500}, () => ({ description: randomFruitSentence() }))
);
db.articles.aggregate([
{ $search: { text: { query: ["🍎", "🍏"], path: "description" }, index: "default" } },
{ $project: { _id: 0, score: { $meta: "searchScore" }, description: 1 } },
{ $sort: { score: -1 } }
]).forEach( i=> print(i.score.toFixed(3).padStart(5, " "),i.description) )
3.365 🍎 🍎 🍎 🍎 🍎 🍎
3.238 🍏 🍌 🍊
2.760 🍎 🍌 🍊 🍎
2.613 🍎 🍎 🍌 🍌 🍌
2.506 🍎 🍌
2.274 🍎 🍌 🍊
1.919 🍎 🍌 🍊 🍊 🍊
1.554 🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰
1.554 🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎
Although the result set is unchanged, the scores are higher and the weight gap between 🍎 and 🍏 has narrowed. As a result, 🍎 🍎 🍎 🍎 🍎 🍎 now ranks higher than 🍏 🍌 🍊, since the inverse document frequency (IDF) of 🍏 no longer offsets the term frequency (TF) of six 🍎 within a single document. Crucially, changes made in other documents can influence the score of any given document, unlike in traditional indexes, where changes in one document do not impact others' index entries.
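A quick way to see the narrowing is to compare IDF values before and after the inserts. A minimal sketch in mongosh, using Lucene's IDF formula (🍏 appears in 1 document, 🍎 in 8, and the corpus grows from 9 to 509 documents; the actual scores also include TF and length normalization):
const idf = (N, n) => Math.log(1 + (N - n + 0.5) / (n + 0.5));
// Before: 9 documents in the corpus
print(idf(9, 1).toFixed(4), idf(9, 8).toFixed(4));     // 1.8971 0.1625 (ratio ~11.7)
// After adding 500 apple-free documents: 509 documents
print(idf(509, 1).toFixed(4), idf(509, 8).toFixed(4)); // 5.8289 4.0943 (ratio ~1.4)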
PostgreSQL text search (TF only):
Here is the result in PostgreSQL:
SELECT ts_rank_cd(
         to_tsvector('simple', description),
         to_tsquery('simple', '🍎 | 🍏')
       ) AS score,
       description
FROM articles
WHERE to_tsvector('simple', description)
      @@ to_tsquery('simple', '🍎 | 🍏')
ORDER BY score DESC;
It retrieves the same documents, but many of them share the same score despite different patterns:
score | description
-------+-------------------------
0.6 | 🍎 🍎 🍎 🍎 🍎 🍎
0.2 | 🍎 🍌 🍊 🍎
0.2 | 🍎 🍎 🍌 🍌 🍌
0.1 | 🍏 🍌 🍊
0.1 | 🍎 🍌
0.1 | 🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎
0.1 | 🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰
0.1 | 🍎 🍌 🍊
0.1 | 🍎 🍌 🍊 🍊 🍊
(9 rows)
With PostgreSQL text search, only the term frequency (TF) matters, and it is a direct multiplier of the score: six apples (0.6) rank three times higher than two (0.2), and six times higher than one (0.1).
Some normalization is available through additional flags:
SELECT ts_rank_cd(
to_tsvector('simple', description),
to_tsquery('simple', '🍎 | 🍏') ,
0 -- (the default) ignores the document length
| 1 -- divides the rank by 1 + the logarithm of the document length
-- | 2 -- divides the rank by the document length
-- | 4 -- divides the rank by the mean harmonic distance between extents (this is implemented only by ts_rank_cd)
| 8 -- divides the rank by the number of unique words in document
-- | 16 -- divides the rank by 1 + the logarithm of the number of unique words in document
-- | 32 -- divides the rank by itself + 1
) AS score,
description
FROM articles
WHERE to_tsvector('simple', description) @@ to_tsquery('simple', '🍎 | 🍏')
ORDER BY score DESC
;
score | description
-------------+-------------------------
0.308339 | 🍎 🍎 🍎 🍎 🍎 🍎
0.055811062 | 🍎 🍎 🍌 🍌 🍌
0.04551196 | 🍎 🍌
0.04142233 | 🍎 🍌 🍊 🍎
0.024044918 | 🍏 🍌 🍊
0.024044918 | 🍎 🍌 🍊
0.018603688 | 🍎 🍌 🍊 🍊 🍊
0.005688995 | 🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰
0.005688995 | 🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎
(9 rows)
This penalizes longer documents and those with more unique terms. Still, it does not take other documents into account, as IDF does.
PostgreSQL full-text search scoring with ts_rank_cd is based on term frequency and proximity. It does not compute inverse document frequency, so scores do not change as the corpus changes. Normalization flags can penalize long documents or those with many unique terms, but these are length-based adjustments, not the true IDF found in TF‑IDF or BM25‑style search engines.
ParadeDB with pg_search (Tantivy BM25)
PostgreSQL's popularity is due not only to its features but also to its extensibility and ecosystem. The pg_search extension adds functions and operators backed by BM25 indexes, built on Tantivy, a Rust-based search library inspired by Lucene. It is easy to test with ParadeDB:
docker run --rm -it paradedb/paradedb bash
POSTGRES_PASSWORD=x docker-entrypoint.sh postgres &
psql -U postgres
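The ParadeDB image ships with the extension already created. On a plain PostgreSQL build, assuming the pg_search package is available, you would enable it yourself:
CREATE EXTENSION IF NOT EXISTS pg_search;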
The extension is already installed, at version 0.18.4:
postgres=# \dx
List of installed extensions
Name | Version | Schema | Description
------------------------+---------+------------+------------------------------------------------------------
fuzzystrmatch | 1.2 | public | determine similarities and distance between strings
pg_cron | 1.6 | pg_catalog | Job scheduler for PostgreSQL
pg_ivm | 1.9 | pg_catalog | incremental view maintenance on PostgreSQL
pg_search | 0.18.4 | paradedb | pg_search: Full text search for PostgreSQL using BM25
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
postgis | 3.6.0 | public | PostGIS geometry and geography spatial types and functions
postgis_tiger_geocoder | 3.6.0 | tiger | PostGIS tiger geocoder and reverse geocoder
postgis_topology | 3.6.0 | topology | PostGIS topology spatial types and functions
vector | 0.8.0 | public | vector data type and ivfflat and hnsw access methods
(9 rows)
I created the same table and inserted the same rows as in the PostgreSQL setup above, then created the BM25 index:
CREATE INDEX search_idx ON articles
USING bm25 (id, description)
WITH (key_field='id')
;
We can query using the @@@ operator and rank with paradedb.score(id). Unlike PostgreSQL’s built‑in @@, which ranks using only the current row's tsvector, @@@ computes scores with corpus‑wide IDF and BM25 length normalization (Tantivy's implementation, modeled on Lucene's), so adding unrelated documents can still change the scores.
SELECT description, paradedb.score(id) AS score
FROM articles
WHERE description @@@ '🍎' OR description @@@ '🍏'
ORDER BY score DESC, description;
description | score
-------------+-------
(0 rows)
The result is empty. Using emoji as terms can lead to inconsistent tokenization results, so I replaced them with text labels instead:
UPDATE articles SET description
= replace(description, '🍎', 'Gala');
UPDATE articles SET description
= replace(description, '🍏', 'Granny Smith');
UPDATE articles SET description
= replace(description, '🍊', 'Orange');
This time, the scoring is more precise: it takes into account the term frequency within the document (TF) and the term’s rarity across the entire indexed corpus (IDF), along with a length normalization factor that prevents longer documents from having an unfair advantage:
SELECT description, paradedb.score(id) AS score
FROM articles
WHERE description @@@ 'Gala' OR description @@@ 'Granny Smith'
ORDER BY score DESC, description;
description | score
-------------------------------+------------
Granny Smith 🍌 Orange | 3.1043208
Gala Gala Gala Gala Gala Gala | 0.79529095
Gala Gala 🍌 🍌 🍌 | 0.7512194
Gala 🍌 | 0.69356775
Gala 🍌 Orange Gala | 0.63589364
Gala 🍌 Orange | 0.5195716
Gala 🍌 Orange 🌴 🫐 🍈 🍇 | 0.5195716
🍌 Orange 🌴 🫐 🍈 🍇 Gala | 0.5195716
Gala 🍌 Orange Orange Orange | 0.34597924
(9 rows)
It looks very similar to the MongoDB result, with small differences in ordering. Lucene gave a slight edge to the document where the term appears twice (Gala 🍌 Orange Gala, formerly 🍎 🍌 🍊 🍎), despite its higher length penalty, while Tantivy appears to apply length normalization slightly differently, so the shorter Gala 🍌 gets a bigger boost. Note also that "Granny Smith" tokenizes into two terms, which likely explains its high score: each rare token contributes its own IDF.
Here is the execution plan in ParadeDB:
EXPLAIN(ANALYZE, BUFFERS, VERBOSE)
SELECT description, paradedb.score(id) AS score
FROM articles
WHERE description @@@ 'Gala' OR description @@@ 'Granny Smith'
ORDER BY score DESC, description
;
Gather Merge (cost=1010.06..1010.68 rows=5 width=31) (actual time=5.893..8.237 rows=8 loops=1)
Output: description, (score(id))
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=333
-> Sort (cost=10.04..10.05 rows=3 width=31) (actual time=0.529..0.540 rows=3 loops=3)
Output: description, (score(id))
Sort Key: (score(articles.id)) DESC, articles.description
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=306
Worker 0: actual time=0.548..0.558 rows=0 loops=1
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=64
Worker 1: actual time=0.596..0.607 rows=0 loops=1
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=64
-> Parallel Custom Scan (ParadeDB Scan) on public.articles (cost=10.00..10.02 rows=3 width=31) (actual time=0.367..0.444 rows=3 loops=3)
Output: description, score(id)
Table: articles
Index: search_idx
Segment Count: 5
Heap Fetches: 8
Virtual Tuples: 0
Invisible Tuples: 0
Parallel Workers: {"-1":{"query_count":0,"claimed_segments":[{"id":"a17b19a2","deleted_docs":0,"max_doc":9},{"id":"3fa71653","deleted_docs":6,"max_doc":6},{"id":"3c243f8e","deleted_docs":1,"max_doc":1},{"id":"badbcd7e","deleted_docs":8,"max_doc":8},{"id":"add79d5d","deleted_docs":9,"max_doc":9}]}}
Exec Method: NormalScanExecState
Scores: true
Tantivy Query: {"boolean":{"should":[{"with_index":{"query":{"parse_with_field":{"field":"description","query_string":"Gala","lenient":null,"conjunction_mode":null}}}},{"with_index":{"query":{"parse_with_field":{"field":"description","query_string":"Granny Smith","lenient":null,"conjunction_mode":null}}}}]}}
Buffers: shared hit=216
Worker 0: actual time=0.431..0.441 rows=0 loops=1
Buffers: shared hit=19
Worker 1: actual time=0.447..0.457 rows=0 loops=1
Buffers: shared hit=19
This PostgreSQL plan shows ParadeDB executing a parallel full-text search with Tantivy. The Parallel Custom Scan node issues a BM25 query (Gala OR "Granny Smith") against the segmented Tantivy index. Each worker searches its segments, scores and fetches the matching descriptions, and sorts them locally. The Gather Merge node then combines the per-worker results into a single ranked list. Since search and scoring happen inside Tantivy across CPU cores, and all reads were shared-buffer hits, the query is quick and efficient.
In the execution plan, the Tantivy query closely resembles a MongoDB search query: "boolean" in Tantivy is equivalent to "compound" in MongoDB, "should" matches "should", and "parse_with_field.field" plays the same role as "path".
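To illustrate that mapping, here is a hand-written MongoDB equivalent of the Tantivy query above (a sketch for illustration only, not generated from the plan):
db.articles.aggregate([
  { $search: {
      index: "default",
      compound: { should: [
        { text: { query: "Gala",         path: "description" } },
        { text: { query: "Granny Smith", path: "description" } }
      ] }
  } },
  { $project: { _id: 0, description: 1, score: { $meta: "searchScore" } } },
  { $sort: { score: -1 } }
])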
PostgreSQL’s built-in search only provides basic, local term frequency scoring. To get a full-featured text search that can be used in an applica... (truncated)