A curated list of database news from authoritative sources

September 24, 2025

Choosing the Right Key-Value Store: Redis vs Valkey

Not long ago, picking an in-memory key-value store was easy. Redis was the default. Fast, simple, everywhere. Then the rules changed. Redis moved to a much more restrictive license. Suddenly, many companies had to rethink their plans, especially if they cared about staying open source or needed flexibility for the cloud. That’s when Valkey arrived. […]

Processes and Threads

Processes and threads are fundamental abstractions for operating systems. Learn how they work and how they impact database performance in this interactive article.

September 23, 2025

Long-term storage and analysis of Amazon RDS events with Amazon S3 and Amazon Athena

In this post, we show you how to implement an automated solution for archiving Amazon RDS events to Amazon Simple Storage Service (Amazon S3). We also discuss how to analyze the events with Amazon Athena, which enables proactive database management, helps maintain security and compliance, and provides valuable insights for capacity planning and troubleshooting.

Announcing OpenBao Support in Percona Server for MongoDB

At Percona, we believe that an open world is a better world. Our mission has always been to empower organizations with secure, scalable, and reliable open source database solutions without locking them into expensive proprietary ecosystems. Today, we’re excited to share another step forward in this journey: Percona Server for MongoDB now supports OpenBao for […]

September 22, 2025

Keep PostgreSQL Secure with TDE and the Latest Updates

This fall feels like a good moment to stop and look at what’s changed in PostgreSQL security over the last months and also what you can use right now to make your PostgreSQL deployments safer. PostgreSQL Transparent Data Encryption (TDE) from Percona For many years, Transparent Data Encryption (TDE) was a missing piece for security […]

September 21, 2025

MongoDB Search Index Internals With Luke (Lucene Toolbox GUI Tool)

Previously, I demonstrated MongoDB text search scoring with a simple example, creating a dynamic index without specifying fields explicitly. You might be curious about what data is actually stored in such an index. Let's delve into the specifics. Unlike regular MongoDB collections and indexes, which use WiredTiger for storage, search indexes leverage Lucene technology. We can inspect these indexes using Luke, the Lucene Toolbox GUI tool.

Set up a lab

I started an Atlas local deployment to get a container for my lab:

# download Atlas CLI if you don't have it. Here it is for my Mac:
wget https://www.mongodb.com/try/download/atlascli
unzip mongodb-atlas-cli_1.43.0_macos_arm64.zip

# start a container
bin/atlas deployments setup  atlas --type local --port 27017 --force

Sample data

I connected with mongosh and created a collection, and a search index, like in the previous post:

mongosh --eval '

db.articles.deleteMany({});

db.articles.insertMany([
 { description : "🍏 🍌 🍊" },                // short, 1 🍏
 { description : "🍎 🍌 🍊" },                // short, 1 🍎
 { description : "🍎 🍌 🍊 🍎" },             // larger, 2 🍎
 { description : "🍎 🍌 🍊 🍊 🍊" },          // larger, 1 🍎
 { description : "🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰" },  // large, 1 🍎
 { description : "🍎 🍎 🍎 🍎 🍎 🍎" },       // large, 6 🍎
 { description : "🍎 🍌" },                 // very short, 1 🍎
 { description : "🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎" },  // large, 1 🍎
 { description : "🍎 🍎 🍌 🍌 🍌" },          // shorter, 2 🍎
]);

db.articles.createSearchIndex("default",
  { mappings: { dynamic: true } }
);

'

Get Lucene indexes

While MongoDB collections and secondary indexes are stored by the WiredTiger storage engine (by default, in the /data/db directory), the text search indexes use Lucene in a mongot process (with files stored by default in /data/mongot). I copied it to my laptop:

docker cp atlas:/data/mongot ./mongot_copy
cd mongot_copy

One file is easy to read because it is in JSON format: the metadata listing the search indexes with their MongoDB configuration:

cat configJournal.json | jq

{
  "version": 1,
  "stagedIndexes": [],
  "indexes": [
    {
      "index": {
        "indexID": "68d0588abf7ab96dd26277b1",
        "name": "default",
        "database": "test",
        "lastObservedCollectionName": "articles",
        "collectionUUID": "a18b587d-a380-4067-95aa-d0e9d4871b64",
        "numPartitions": 1,
        "mappings": {
          "dynamic": true,
          "fields": {}
        },
        "indexFeatureVersion": 4
      },
      "analyzers": [],
      "generation": {
        "userVersion": 0,
        "formatVersion": 6
      }
    }
  ],
  "deletedIndexes": [],
  "stagedVectorIndexes": [],
  "vectorIndexes": [],
  "deletedVectorIndexes": []
}

The directory where the Lucene files are stored has the indexID in its name:

ls 68d0588abf7ab96dd26277b1*

_0.cfe _0.cfs _0.si
_1.cfe _1.cfs _1.si
_2.cfe _2.cfs _2.si
_3.cfe _3.cfs _3.si
_4.cfe _4.cfs _4.si
_5.cfe _5.cfs _5.si
_6.cfe _6.cfs _6.si
segments_2
write.lock

In a Lucene index, each .cfs/.cfe/.si set represents one immutable segment containing a snapshot of indexed data (with .cfs holding the actual data, .cfe its table of contents, and .si the segment’s metadata), and the segments_2 file is the global manifest that tracks all active segments so Lucene can search across them as one index.

Install and use Luke

I installed the Lucene binaries and started Luke:

wget https://dlcdn.apache.org/lucene/java/9.12.2/lucene-9.12.2.tgz
tar -zxvf lucene-9.12.2.tgz
lucene-9.12.2/bin/luke.sh

This starts the GUI asking for the index directory:

The "Overview" tab shows lots of information:

The field names are prefixed with the type. My description field was indexed as a string and named $type:string/description. There are nine documents and nine different terms:

The Lucene index keeps the overall frequency in order to apply inverse document frequency (IDF). Here, 🍎 is present in eight documents and 🍏 in one.

The "Document" tab lets us browse the documents and see what is indexed. For example, 🍏 is present in one document with { description: "🍏 🍌 🍊" }:

The flags IdfpoN-S mean that it is a fully indexed text field with docs, frequencies, positions, and offsets, with norms and stored values.

The "Search" tab allows us to run queries. For example, { $search: { text: { query: ["🍎", "🍏"], path: "description" }, index: "default" } } from my previous post is:

This is exactly what I got from MongoDB:

db.articles.aggregate([
  { $search: { text: { query: ["🍎", "🍏"], path: "description" }, index: "default" } },
  { $project: { _id: 0, score: { $meta: "searchScore" }, description: 1 } },
  { $sort: { score: -1 } }
]).forEach( i=> print(i.score.toFixed(3).padStart(5, " "),i.description) )

1.024 🍏 🍌 🍊
0.132 🍎 🍎 🍎 🍎 🍎 🍎
0.107 🍎 🍌 🍊 🍎
0.101 🍎 🍎 🍌 🍌 🍌
0.097 🍎 🍌
0.088 🍎 🍌 🍊
0.073 🍎 🍌 🍊 🍊 🍊
0.059 🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰
0.059 🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎

If you double-click on a document in the result, you can get the explanation of the score:

For example, the score of 🍏 🍌 🍊 is explained by:

1.0242119 sum of:
  1.0242119 weight($type:string/description:🍏 in 0) [BM25Similarity], result of:
    1.0242119 score(freq=1.0), computed as boost * idf * tf from:
      1.89712 idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:
        1 n, number of documents containing term
        9 N, total number of documents with field
      0.5398773 tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:
        1.0 freq, occurrences of term within document
        1.2 k1, term saturation parameter
        0.75 b, length normalization parameter
        3.0 dl, length of field
        4.888889 avgdl, average length of field

The BM25 core formula is score = boost × idf × tf.

IDF is idf = log(1 + (N - n + 0.5) / (n + 0.5)) where n is the number of documents containing 🍏 , so 1, and N is the total documents in the index for this field, so 9.

TF is tf = freq / ( freq + k1 × (1 - b + b × (dl / avgdl)) ) where freq is the term occurrences in this doc’s field, so 1, k1 is the term saturation parameter, which defaults to 1.2 in Lucene, b is the length normalization, which defaults to 0.75 in Lucene, dl is the document length, which is three tokens here, and avgdl is the average document length for this field in the segment—here, 4.888889.

In day-to-day usage, MongoDB search also allows boosting via the query, which multiplies into the BM25 scoring formula as the boost factor.
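As a sanity check, the numbers from this explanation can be reproduced outside of Lucene. Here is a minimal mongosh/JavaScript sketch of the same formula, plugging in the values reported above (N=9, n=1, freq=1, dl=3, avgdl=4.888889, and Lucene's defaults k1=1.2 and b=0.75); the computeBM25 helper is purely illustrative, not a MongoDB or Lucene API:

// BM25 as printed in Luke's explanation: score = boost * idf * tf
function computeBM25({ freq, n, N, dl, avgdl, k1 = 1.2, b = 0.75, boost = 1 }) {
  const idf = Math.log(1 + (N - n + 0.5) / (n + 0.5));          // rarity across the corpus
  const tf  = freq / (freq + k1 * (1 - b + b * dl / avgdl));    // saturation + length normalization
  return boost * idf * tf;
}

// 🍏 in "🍏 🍌 🍊": one occurrence, in the only matching document out of nine
print(computeBM25({ freq: 1, n: 1, N: 9, dl: 3, avgdl: 4.888889 }));
// ~1.0242, matching the 1.0242119 reported by Luke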

The "Analysis" tab helps explain how strings are tokenized and processed. For example, the standard analyzer explicitly recognized emojis:

Finally, I inserted 500 documents with other fruits, like in the previous post, and the collection-wide term frequency has been updated:

The scores reflect the change:

The explanation of the new rank for 🍏 🍌 🍊 is:

3.2850468 sum of:
  3.2850468 weight($type:string/description:🍏 in 205) [BM25Similarity], result of:
    3.2850468 score(freq=1.0), computed as boost * idf * tf from:
      5.8289456 idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:
        1 n, number of documents containing term
        509 N, total number of documents with field
      0.5635748 tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:
        1.0 freq, occurrences of term within document
        1.2 k1, term saturation parameter
        0.75 b, length normalization parameter
        3.0 dl, length of field
        5.691552 avgdl, average length of field

After adding 500 documents without 🍏, BM25 recalculates IDF for 🍏 with a much larger N, making it appear far rarer in the corpus, so its score contribution more than triples.
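Using the same illustrative helper as above with the new values from this explanation (N=509 and avgdl=5.691552, while n stays at 1), the jump is easy to verify:

print(computeBM25({ freq: 1, n: 1, N: 509, dl: 3, avgdl: 5.691552 }));
// ~3.285, matching the 3.2850468 above: the larger N drives the IDF from 1.897 up to 5.829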

Notice that when I queried for 🍎🍏, no documents contained both terms, so the scoring explanation included only one weight. If I modify the query to include 🍎🍏🍊, the document 🍏 🍌 🍊 scores highest, as it combines the weights for both matching terms, 🍏 and 🍊:

4.3254924 sum of:
  3.2850468 weight($type:string/description:🍏 in 205) [BM25Similarity], result of:
    3.2850468 score(freq=1.0), computed as boost * idf * tf from:
      5.8289456 idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:
        1 n, number of documents containing term
        509 N, total number of documents with field
      0.5635748 tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:
        1.0 freq, occurrences of term within document
        1.2 k1, term saturation parameter
        0.75 b, length normalization parameter
        3.0 dl, length of field
        5.691552 avgdl, average length of field
  1.0404456 weight($type:string/description:🍊 in 205) [BM25Similarity], result of:
    1.0404456 score(freq=1.0), computed as boost * idf * tf from:
      1.8461535 idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:
        80 n, number of documents containing term
        509 N, total number of documents with field
      0.5635748 tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:
        1.0 freq, occurrences of term within document
        1.2 k1, term saturation parameter
        0.75 b, length normalization parameter
        3.0 dl, length of field
        5.691552 avgdl, average length of field

Here is the same query in MongoDB:

db.articles.aggregate([
  { $search: { text: { query: ["🍏", "🍊"], path: "description" }, index: "default" } },
  { $project: { _id: 0, score: { $meta: "searchScore" }, description: 1 } },
  { $sort: { score: -1 } },
  { $limit: 15  }
]).forEach( i=> print(i.score.toFixed(3).padStart(5, " "),i.description) )

4.325 🍏 🍌 🍊
1.354 🍎 🍌 🍊 🍊 🍊
1.259 🫐 🍊 🥑 🍊
1.137 🥥 🍊 🍊 🍅 🍈 🍈
1.137 🍍 🍓 🍊 🍊 🥑 🍉
1.137 🥥 🍆 🍊 🍊 🍍 🍉
1.084 🍊 🍑 🍊 🥥 🍌 🍍 🫐
1.084 🍊 🫐 🥝 🍋 🥑 🍇 🍊
1.084 🥭 🍍 🥑 🍋 🍈 🍊 🍊
1.040 🍊 🫐 🥭
1.040 🍊 🍉 🍍
1.040 🍎 🍌 🍊
1.040 🍊 🍋 🍋
1.040 🍐 🍌 🍊
1.036 🍐 🥥 🍍 🍈 🍐 🍊 🍆 🍊

While the scores may feel intuitively correct when you look at the data, it's important to remember there's no magic—everything is based on well‑known mathematics and formulas. Lucene’s scoring algorithms are used in many systems, including Elasticsearch, Apache Solr, and the search indexes built into MongoDB.

Conclusion

MongoDB search indexes are designed to work optimally out of the box. In my earlier post, I relied entirely on default settings, dynamic mapping, and even replaced words with emojis—yet still got relevant, well-ranked results without extra tuning. If you want to go deeper and fine-tune your search behavior, or simply learn more about how it works, inspecting the underlying Lucene index can provide great insights. Since MongoDB Atlas Search indexes are Lucene-compatible, tools like Luke allow you to see exactly how your text is tokenized, stored, and scored—giving you full transparency into how queries match your documents.

September 19, 2025

Text Search with MongoDB and PostgreSQL

MongoDB Search Indexes provide full‑text search capabilities directly within MongoDB, allowing complex queries to be run without copying data to a separate search system. Initially deployed in Atlas, MongoDB’s managed service, Search Indexes are now also part of the community edition. This post compares the default full‑text search behaviour between MongoDB and PostgreSQL, using a simple example to illustrate the ranking algorithm.

Setup: a small dataset

I’ve inserted nine small documents, each consisting of different fruits, using emojis to make it more visual. The 🍎 and 🍏 emojis represent our primary search terms. They appear at varying frequencies in documents of different lengths.

db.articles.deleteMany({});

db.articles.insertMany([
 { description : "🍏 🍌 🍊" },                // short, 1 🍏
 { description : "🍎 🍌 🍊" },                // short, 1 🍎
 { description : "🍎 🍌 🍊 🍎" },             // larger, 2 🍎
 { description : "🍎 🍌 🍊 🍊 🍊" },          // larger, 1 🍎
 { description : "🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰" },  // large, 1 🍎
 { description : "🍎 🍎 🍎 🍎 🍎 🍎" },       // large, 6 🍎
 { description : "🍎 🍌" },                 // very short, 1 🍎
 { description : "🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎" },  // large, 1 🍎
 { description : "🍎 🍎 🍌 🍌 🍌" },          // shorter, 2 🍎
]);

To enable dynamic indexing, I created a MongoDB Search Index without specifying any particular field names:

db.articles.createSearchIndex("default",
  { mappings: { dynamic: true } }
);

I created the equivalent on PostgreSQL:

DROP TABLE IF EXISTS articles;
CREATE TABLE articles (
    id BIGSERIAL PRIMARY KEY,
    description TEXT
);

INSERT INTO articles(description) VALUES
('🍏 🍌 🍊'),
('🍎 🍌 🍊'),
('🍎 🍌 🍊 🍎'),
('🍎 🍌 🍊 🍊 🍊'),
('🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰'),
('🍎 🍎 🍎 🍎 🍎 🍎'),
('🍎 🍌'),
('🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎'),
('🍎 🍎 🍌 🍌 🍌');

Since text search needs multiple index entries for each row, I set up a GIN (Generalized Inverted Index) and use tsvector to extract and index the relevant tokens.

CREATE INDEX articles_fts_idx
  ON articles USING GIN (to_tsvector('simple', description))
;

MongoDB Text Search (Lucene BM25):

I use my custom search index to find articles containing either 🍎 or 🍏 in their descriptions. The results are sorted by relevance score and displayed as follows:

db.articles.aggregate([
  { $search: { text: { query: ["🍎", "🍏"], path: "description" }, index: "default" } },
  { $project: { _id: 0, score: { $meta: "searchScore" }, description: 1 } },
  { $sort: { score: -1 } }
]).forEach( i=> print(i.score.toFixed(3).padStart(5, " "),i.description) )

Here are the results, presented in order of best to worst match:

1.024 🍏 🍌 🍊
0.132 🍎 🍎 🍎 🍎 🍎 🍎
0.107 🍎 🍌 🍊 🍎
0.101 🍎 🍎 🍌 🍌 🍌
0.097 🍎 🍌
0.088 🍎 🍌 🍊
0.073 🍎 🍌 🍊 🍊 🍊
0.059 🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰
0.059 🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎

All documents were retrieved by this search since each contains a red or green apple. However, they are assigned different scores:

  • Multiple appearances boost the score: When a document contains the search term more than once, its ranking increases compared to those with only a single appearance. That's why documents featuring several 🍎 are ranked higher than those containing only one.
  • Rarity outweighs quantity: When a term like 🍎 appears in every document, it has less impact than a rare term, such as 🍏. Therefore, even if 🍏 only appears once, the document containing it ranks higher than others with multiple 🍎. In this model, rarity carries more weight than mere frequency.
  • Diminishing returns on term frequency: Each extra occurrence of a term adds less to the relevance score. For instance, increasing 🍎 from one to six times (from 🍎 🍌 to 🍎 🍎 🍎 🍎 🍎 🍎) boosts the score, but not by a factor of six. The effect of term repetition diminishes as the count rises.
  • Document length matters: A term that appears only once is scored higher in a short document than in a long one. That's why 🍎 🍌 ranks higher than 🍎 🍌 🍊, which itself ranks higher than 🍎 🍌 🍊 🍊 🍊.

MongoDB Atlas Search indexes are powered by Lucene’s BM25 algorithm, a refinement of the classic TF‑IDF model:

  • Term Frequency (TF): More occurrences of a term in a document increase its relevance score, but with diminishing returns.
  • Inverse Document Frequency (IDF): Terms that appear in fewer documents receive higher weighting.
  • Length Normalization: Matches in shorter documents contribute more to relevance than the same matches in longer documents.
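To make these three factors concrete, here is a small mongosh/JavaScript sketch of the BM25 formula that Lucene prints in its score explanations, using Lucene's default parameters (k1=1.2, b=0.75) and the statistics of this nine-document corpus (🍎 appears in 8 of the 9 documents; the average description length is 44/9 ≈ 4.889 tokens). The idf and tf helpers are illustrative only, not a MongoDB API; they reproduce two of the scores listed above:

// score = idf * tf (boost = 1), with Lucene defaults k1 = 1.2 and b = 0.75
const k1 = 1.2, b = 0.75;
const idf = (n, N)            => Math.log(1 + (N - n + 0.5) / (n + 0.5));
const tf  = (freq, dl, avgdl) => freq / (freq + k1 * (1 - b + b * dl / avgdl));

const idfApple = idf(8, 9);      // 🍎 is common: low weight (~0.163)

// "🍎 🍌": a single 🍎 in a 2-token document
print((idfApple * tf(1, 2, 4.888889)).toFixed(3));   // 0.097

// "🍎 🍎 🍎 🍎 🍎 🍎": six 🍎 in a 6-token document -> higher score, but far from 6x
print((idfApple * tf(6, 6, 4.888889)).toFixed(3));   // 0.132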

To demonstrate the impact of IDF, I added several documents that do not contain any of the apples I'm searching for.

const fruits = [ "🍐","🍊","🍋","🍌","🍉","🍇","🍓","🫐",         
                 "🥝","🥭","🍍","🥥","🍈","🍅","🥑","🍆",  
                 "🍋","🍐","🍓","🍇","🍈","🥭","🍍","🍑",  
                 "🥝","🫐","🍌","🍉","🥥","🥑","🥥","🍍" ];
function randomFruitSentence(min=3, max=8) {
  const len = Math.floor(Math.random() * (max - min + 1)) + min;
  return Array.from({length: len}, () => fruits[Math.floor(Math.random()*fruits.length)]).join(" ");
}
db.articles.insertMany(
  Array.from({length: 500}, () => ({ description: randomFruitSentence() }))
);

db.articles.aggregate([
  { $search: { text: { query: ["🍎", "🍏"], path: "description" }, index: "default" } },
  { $project: { _id: 0, score: { $meta: "searchScore" }, description: 1 } },
  { $sort: { score: -1 } }
]).forEach( i=> print(i.score.toFixed(3).padStart(5, " "),i.description) )

3.365 🍎 🍎 🍎 🍎 🍎 🍎
3.238 🍏 🍌 🍊
2.760 🍎 🍌 🍊 🍎
2.613 🍎 🍎 🍌 🍌 🍌
2.506 🍎 🍌
2.274 🍎 🍌 🍊
1.919 🍎 🍌 🍊 🍊 🍊
1.554 🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰
1.554 🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎

Although the result set is unchanged, the score has increased and the frequency gap between 🍎 and 🍏 has narrowed. As a result, 🍎 🍎 🍎 🍎 🍎 🍎 now ranks higher than 🍏 🍌 🍊, since the inverse document frequency (IDF) of 🍏 does not fully offset its term frequency (TF) within a single document. Crucially, changes made in other documents can influence the score of any given document, unlike in traditional indexes where changes in one document do not impact others' index entries.
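The narrowing gap can be checked with the idf helper from the sketch above, since 🍎 still appears in 8 documents and 🍏 in 1, but N has grown from 9 to 509 (a rough back-of-the-envelope check that ignores per-segment statistics):

print(idf(1, 9),   idf(8, 9));    // ~1.897 vs ~0.163 : 🍏 weighted roughly 12x more than 🍎
print(idf(1, 509), idf(8, 509));  // ~5.829 vs ~4.094 : the advantage shrinks to roughly 1.4x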

PostgreSQL Text Search (TF only):

Here is the result in PostgreSQL:

SELECT ts_rank_cd(  

        to_tsvector('simple', description)
     ,  
        to_tsquery('simple', '🍎 | 🍏')  

       ) AS score, description  
FROM articles  
WHERE
       to_tsvector('simple', description) 
    @@ 
       to_tsquery('simple', '🍎 | 🍏')  

ORDER BY score DESC;  

It retrieves the same documents, but with many having the same score, even with different patterns:

 score |       description
-------+-------------------------
   0.6 | 🍎 🍎 🍎 🍎 🍎 🍎
   0.2 | 🍎 🍌 🍊 🍎
   0.2 | 🍎 🍎 🍌 🍌 🍌
   0.1 | 🍏 🍌 🍊
   0.1 | 🍎 🍌
   0.1 | 🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎
   0.1 | 🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰
   0.1 | 🍎 🍌 🍊
   0.1 | 🍎 🍌 🍊 🍊 🍊
(9 rows)

With PostgreSQL text search, only the term frequency (TF) matters, and it is a direct multiplier of the score: six apples rank three times higher than two, and six times higher than one.

Some normalization is available with additional flags:

SELECT ts_rank_cd(
         to_tsvector('simple', description),
         to_tsquery('simple', '🍎 | 🍏')  ,
            0 -- (the default) ignores the document length
         |  1 -- divides the rank by 1 + the logarithm of the document length
    --   |  2 -- divides the rank by the document length
    --   |  4 -- divides the rank by the mean harmonic distance between extents (this is implemented only by ts_rank_cd)
         |  8 -- divides the rank by the number of unique words in document
    --   | 16 -- divides the rank by 1 + the logarithm of the number of unique words in document
    --   | 32 -- divides the rank by itself + 1
       ) AS score,
       description
FROM articles
WHERE to_tsvector('simple', description) @@ to_tsquery('simple', '🍎 | 🍏')
ORDER BY score DESC
;
    score    |       description
-------------+-------------------------
    0.308339 | 🍎 🍎 🍎 🍎 🍎 🍎
 0.055811062 | 🍎 🍎 🍌 🍌 🍌
  0.04551196 | 🍎 🍌
  0.04142233 | 🍎 🍌 🍊 🍎
 0.024044918 | 🍏 🍌 🍊
 0.024044918 | 🍎 🍌 🍊
 0.018603688 | 🍎 🍌 🍊 🍊 🍊
 0.005688995 | 🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰
 0.005688995 | 🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎
(9 rows)

This penalizes longer documents and those with more unique terms. Still, it does not take other documents into account the way IDF does.

PostgreSQL Full Text Search scoring with ts_rank_cd is based on term frequency and proximity. It does not compute inverse document frequency, so scores do not change as the corpus changes. Normalization flags can penalize long documents or those with many unique terms, but they are length-based adjustments, not the true IDF found in TF‑IDF or BM25‑style search engines.

ParadeDB with pg_search (Tantivy BM25)

PostgreSQL's popularity is due not only to its features but also to its extensibility and ecosystem. The pg_search extension adds functions and operators that use BM25 indexes (powered by Tantivy, a Rust-based search library inspired by Lucene). It is easy to test with ParadeDB:

docker run --rm -it paradedb/paradedb bash

POSTGRES_PASSWORD=x docker-entrypoint.sh postgres &

psql -U postgres

The extension is installed in version 0.18.4:

postgres=# \dx
                                        List of installed extensions
          Name          | Version |   Schema   |                        Description
------------------------+---------+------------+------------------------------------------------------------
 fuzzystrmatch          | 1.2     | public     | determine similarities and distance between strings
 pg_cron                | 1.6     | pg_catalog | Job scheduler for PostgreSQL
 pg_ivm                 | 1.9     | pg_catalog | incremental view maintenance on PostgreSQL
 pg_search              | 0.18.4  | paradedb   | pg_search: Full text search for PostgreSQL using BM25
 plpgsql                | 1.0     | pg_catalog | PL/pgSQL procedural language
 postgis                | 3.6.0   | public     | PostGIS geometry and geography spatial types and functions
 postgis_tiger_geocoder | 3.6.0   | tiger      | PostGIS tiger geocoder and reverse geocoder
 postgis_topology       | 3.6.0   | topology   | PostGIS topology spatial types and functions
 vector                 | 0.8.0   | public     | vector data type and ivfflat and hnsw access methods
(9 rows)

I created and inserted the same as I did above on PostgreSQL and created the BM25 index:

CREATE INDEX search_idx ON articles
       USING bm25 (id, description)
       WITH (key_field='id')
;

We can query using the @@@ operator and rank with paradedb.score(id). Unlike PostgreSQL’s built‑in @@, which uses query‑local statistics, @@@ computes scores using global IDF and Lucene’s BM25 length normalization — so adding unrelated documents can still change the scores.

SELECT description, paradedb.score(id) AS score
FROM articles
WHERE description @@@ '🍎' OR description @@@ '🍏'
ORDER BY score DESC, description;

 description | score
-------------+-------
(0 rows)

The result is empty. Using emoji as terms can lead to inconsistent tokenization results, so I replaced them with text labels instead:

UPDATE articles SET description 
 = replace(description, '🍎', 'Gala');
UPDATE articles SET description 
 = replace(description, '🍏', 'Granny Smith');
UPDATE articles SET description 
 = replace(description, '🍊', 'Orange');

This time, the scoring is more precise and takes into account the term frequency within the document (TF), the term’s rarity across the entire indexed corpus (IDF), along with a length normalization factor to prevent longer documents from having an unfair advantage:

SELECT description, paradedb.score(id) AS score
FROM articles
WHERE description @@@ 'Gala' OR description @@@ 'Granny Smith'
ORDER BY score DESC, description;

          description          |   score
-------------------------------+------------
 Granny Smith 🍌 Orange        |  3.1043208
 Gala Gala Gala Gala Gala Gala | 0.79529095
 Gala Gala 🍌 🍌 🍌            |  0.7512194
 Gala 🍌                       | 0.69356775
 Gala 🍌 Orange Gala           | 0.63589364
 Gala 🍌 Orange                |  0.5195716
 Gala 🍌 Orange 🌴 🫐 🍈 🍇   |  0.5195716
 🍌 Orange 🌴 🫐 🍈 🍇   Gala |  0.5195716
 Gala 🍌 Orange Orange Orange  | 0.34597924
(9 rows)

PostgreSQL’s built-in search only provides basic, local term frequency scoring. To get full-featured text search that can be used in an application's search box, it can be extended with third-party tools like ParadeDB's pg_search.

Conclusion

Relevance scoring in text search can differ widely between systems because each uses its own ranking algorithms and analyzers. To better visualize my results in these tests, I used emojis and opted for the simplest definitions. I selected PostgreSQL's to_tsvector('simple') configuration to prevent language-specific processing, while for MongoDB Atlas Search, I used the default dynamic mapping.

MongoDB Atlas Search (and now in MongoDB Community Edition) uses Lucene’s BM25 algorithm, combining:

  • Term Frequency (TF): Frequent terms in a document boost scores, but with diminishing returns
  • Inverse Document Frequency (IDF): Rare terms across the corpus get higher weight
  • Length normalization: Matches in shorter documents are weighted more than the same matches in longer ones

PostgreSQL’s full-text search (ts_rank_cd()) evaluates only term frequency and position, overlooking other metrics like IDF. For more advanced features such as BM25, extensions like ParadeDB’s pg_search are needed, which require extra configuration and are not always available on managed platforms. PostgreSQL offers a modular approach, where extensions can add advanced ranking algorithms like BM25. MongoDB provides built‑in BM25‑based full‑text search in both Atlas and the Community Edition.

Text Search With MongoDB (BM25 TF-IDF) and PostgreSQL

MongoDB search indexes provide full‑text search capabilities directly within MongoDB, allowing complex queries to be run without copying data to a separate search system. Initially deployed in Atlas, MongoDB’s managed service, search indexes are now also part of the community edition. This post compares the default full‑text search behaviour between MongoDB and PostgreSQL, using a simple example to illustrate the ranking algorithm.

Setup: a small dataset

I’ve inserted nine small documents, each consisting of different fruits, using emojis to make it more visual. The 🍎 and 🍏 emojis represent our primary search terms. They appear at varying frequencies in documents of different lengths.

db.articles.deleteMany({});

db.articles.insertMany([
 { description : "🍏 🍌 🍊" },                // short, 1 🍏
 { description : "🍎 🍌 🍊" },                // short, 1 🍎
 { description : "🍎 🍌 🍊 🍎" },             // larger, 2 🍎
 { description : "🍎 🍌 🍊 🍊 🍊" },          // larger, 1 🍎
 { description : "🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰" },  // large, 1 🍎
 { description : "🍎 🍎 🍎 🍎 🍎 🍎" },       // large, 6 🍎
 { description : "🍎 🍌" },                 // very short, 1 🍎
 { description : "🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎" },  // large, 1 🍎
 { description : "🍎 🍎 🍌 🍌 🍌" },          // shorter, 2 🍎
]);

To enable dynamic indexing, I created a MongoDB search index without specifying any particular field names:

db.articles.createSearchIndex("default",
  { mappings: { dynamic: true } }
);

I created the equivalent on PostgreSQL:

DROP TABLE IF EXISTS articles;
CREATE TABLE articles (
    id BIGSERIAL PRIMARY KEY,
    description TEXT
);

INSERT INTO articles(description) VALUES
('🍏 🍌 🍊'),
('🍎 🍌 🍊'),
('🍎 🍌 🍊 🍎'),
('🍎 🍌 🍊 🍊 🍊'),
('🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰'),
('🍎 🍎 🍎 🍎 🍎 🍎'),
('🍎 🍌'),
('🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎'),
('🍎 🍎 🍌 🍌 🍌');

Since text search needs multiple index entries for each row, I set up a Generalized Inverted Index (GIN) and use tsvector to extract and index the relevant tokens.

CREATE INDEX articles_fts_idx
  ON articles USING GIN (to_tsvector('simple', description))
;

MongoDB text search (Lucene BM25):

I use my custom search index to find articles containing either 🍎 or 🍏 in their descriptions. The results are sorted by relevance score and displayed as follows:

db.articles.aggregate([
  { $search: { text: { query: ["🍎", "🍏"], path: "description" }, index: "default" } },
  { $project: { _id: 0, score: { $meta: "searchScore" }, description: 1 } },
  { $sort: { score: -1 } }
]).forEach( i=> print(i.score.toFixed(3).padStart(5, " "),i.description) )

Here are the results, presented in order of best to worst match:

1.024 🍏 🍌 🍊
0.132 🍎 🍎 🍎 🍎 🍎 🍎
0.107 🍎 🍌 🍊 🍎
0.101 🍎 🍎 🍌 🍌 🍌
0.097 🍎 🍌
0.088 🍎 🍌 🍊
0.073 🍎 🍌 🍊 🍊 🍊
0.059 🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰
0.059 🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎

All documents were retrieved by this search since each contains a red or green apple. However, they are assigned different scores:

  • Multiple appearances boost the score: When a document contains the search term more than once, its ranking increases compared to those with only a single appearance. That's why documents featuring several 🍎 are ranked higher than those containing only one.
  • Rarity outweighs quantity: When a term like 🍎 appears in every document, it has less impact than a rare term, such as 🍏. Therefore, even if 🍏 only appears once, the document containing it ranks higher than others with multiple 🍎. In this model, rarity carries more weight than mere frequency.
  • Diminishing returns on term frequency: Each extra occurrence of a term adds less to the relevance score. For instance, increasing 🍎 from one to six times (from 🍎 🍌 to 🍎 🍎 🍎 🍎 🍎 🍎) boosts the score, but not by a factor of six. The effect of term repetition diminishes as the count rises.
  • Document length matters: A term that appears only once is scored higher in a short document than in a long one. That's why 🍎 🍌 ranks higher than 🍎 🍌 🍊, which itself ranks higher than 🍎 🍌 🍊 🍊 🍊.

MongoDB Atlas Search indexes are powered by Lucene’s BM25 algorithm, a refinement of the classic TF‑IDF model:

  • Term frequency (TF): More occurrences of a term in a document increase its relevance score, but with diminishing returns.
  • Inverse document frequency (IDF): Terms that appear in fewer documents receive higher weighting.
  • Length normalization: Matches in shorter documents contribute more to relevance than the same matches in longer documents.

To demonstrate the impact of IDF, I added several documents that do not contain any of the apples I'm searching for.

const fruits = [ "🍐","🍊","🍋","🍌","🍉","🍇","🍓","🫐",         
                 "🥝","🥭","🍍","🥥","🍈","🍅","🥑","🍆",  
                 "🍋","🍐","🍓","🍇","🍈","🥭","🍍","🍑",  
                 "🥝","🫐","🍌","🍉","🥥","🥑","🥥","🍍" ];
function randomFruitSentence(min=3, max=8) {
  const len = Math.floor(Math.random() * (max - min + 1)) + min;
  return Array.from({length: len}, () => fruits[Math.floor(Math.random()*fruits.length)]).join(" ");
}
db.articles.insertMany(
  Array.from({length: 500}, () => ({ description: randomFruitSentence() }))
);

db.articles.aggregate([
  { $search: { text: { query: ["🍎", "🍏"], path: "description" }, index: "default" } },
  { $project: { _id: 0, score: { $meta: "searchScore" }, description: 1 } },
  { $sort: { score: -1 } }
]).forEach( i=> print(i.score.toFixed(3).padStart(5, " "),i.description) )

3.365 🍎 🍎 🍎 🍎 🍎 🍎
3.238 🍏 🍌 🍊
2.760 🍎 🍌 🍊 🍎
2.613 🍎 🍎 🍌 🍌 🍌
2.506 🍎 🍌
2.274 🍎 🍌 🍊
1.919 🍎 🍌 🍊 🍊 🍊
1.554 🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰
1.554 🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎

Although the result set is unchanged, the score has increased and the frequency gap between 🍎 and 🍏 has narrowed. As a result, 🍎 🍎 🍎 🍎 🍎 🍎 now ranks higher than 🍏 🍌 🍊, since the inverse document frequency (IDF) of 🍏 does not fully offset its term frequency (TF) within a single document. Crucially, changes made in other documents can influence the score of any given document, unlike in traditional indexes, where changes in one document do not impact others' index entries.

PostgreSQL text search (TF only):

Here is the result in PostgreSQL:

SELECT ts_rank_cd(  

        to_tsvector('simple', description)
     ,  
        to_tsquery('simple', '🍎 | 🍏')  

       ) AS score, description  
FROM articles  
WHERE
       to_tsvector('simple', description) 
    @@ 
       to_tsquery('simple', '🍎 | 🍏')  

ORDER BY score DESC;  

It retrieves the same documents, but with many having the same score, even with different patterns:

 score |       description
-------+-------------------------
   0.6 | 🍎 🍎 🍎 🍎 🍎 🍎
   0.2 | 🍎 🍌 🍊 🍎
   0.2 | 🍎 🍎 🍌 🍌 🍌
   0.1 | 🍏 🍌 🍊
   0.1 | 🍎 🍌
   0.1 | 🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎
   0.1 | 🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰
   0.1 | 🍎 🍌 🍊
   0.1 | 🍎 🍌 🍊 🍊 🍊
(9 rows)

With PostgreSQL text search, only the term frequency (TF) matters, and is a direct multiplicator of the score: six apples rank three times higher than two, and six times higher than one.

Some normalization is available with additional flags:

SELECT ts_rank_cd(
         to_tsvector('simple', description),
         to_tsquery('simple', '🍎 | 🍏')  ,
            0 -- (the default) ignores the document length
         |  1 -- divides the rank by 1 + the logarithm of the document length
    --   |  2 -- divides the rank by the document length
    --   |  4 -- divides the rank by the mean harmonic distance between extents (this is implemented only by ts_rank_cd)
         |  8 -- divides the rank by the number of unique words in document
    --   | 16 -- divides the rank by 1 + the logarithm of the number of unique words in document
    --   | 32 -- divides the rank by itself + 1
       ) AS score,
       description
FROM articles
WHERE to_tsvector('simple', description) @@ to_tsquery('simple', '🍎 | 🍏')
ORDER BY score DESC
;
    score    |       description
-------------+-------------------------
    0.308339 | 🍎 🍎 🍎 🍎 🍎 🍎
 0.055811062 | 🍎 🍎 🍌 🍌 🍌
  0.04551196 | 🍎 🍌
  0.04142233 | 🍎 🍌 🍊 🍎
 0.024044918 | 🍏 🍌 🍊
 0.024044918 | 🍎 🍌 🍊
 0.018603688 | 🍎 🍌 🍊 🍊 🍊
 0.005688995 | 🍎 🍌 🍊 🌴 🫐 🍈 🍇 🌰
 0.005688995 | 🍌 🍊 🌴 🫐 🍈 🍇 🌰 🍎
(9 rows)

This penalizes longer documents and those with more unique terms. Still, it does not take other documents into account the way IDF does.

PostgreSQL full-text search scoring with ts_rank_cd is based on term frequency and proximity. It does not compute inverse document frequency, so scores do not change as the corpus changes. Normalization flags can penalize long documents or those with many unique terms, but they are length-based adjustments, not the true IDF found in TF‑IDF or BM25‑style search engines.

ParadeDB with pg_search (Tantivy BM25)

PostgreSQL's popularity is due not only to its features but also to its extensibility and ecosystem. The pg_search extension adds functions and operators that use BM25 indexes (powered by Tantivy, a Rust-based search library inspired by Lucene). It is easy to test with ParadeDB:

docker run --rm -it paradedb/paradedb bash

POSTGRES_PASSWORD=x docker-entrypoint.sh postgres &

psql -U postgres

The extension is installed in version 0.18.4:

postgres=# \dx
                                        List of installed extensions
          Name          | Version |   Schema   |                        Description
------------------------+---------+------------+------------------------------------------------------------
 fuzzystrmatch          | 1.2     | public     | determine similarities and distance between strings
 pg_cron                | 1.6     | pg_catalog | Job scheduler for PostgreSQL
 pg_ivm                 | 1.9     | pg_catalog | incremental view maintenance on PostgreSQL
 pg_search              | 0.18.4  | paradedb   | pg_search: Full text search for PostgreSQL using BM25
 plpgsql                | 1.0     | pg_catalog | PL/pgSQL procedural language
 postgis                | 3.6.0   | public     | PostGIS geometry and geography spatial types and functions
 postgis_tiger_geocoder | 3.6.0   | tiger      | PostGIS tiger geocoder and reverse geocoder
 postgis_topology       | 3.6.0   | topology   | PostGIS topology spatial types and functions
 vector                 | 0.8.0   | public     | vector data type and ivfflat and hnsw access methods
(9 rows)

I created and inserted the same as I did above on PostgreSQL and created the BM25 index:

CREATE INDEX search_idx ON articles
       USING bm25 (id, description)
       WITH (key_field='id')
;

We can query using the @@@ operator and rank with paradedb.score(id). Unlike PostgreSQL’s built‑in @@, which uses query‑local statistics, @@@ computes scores using global IDF and Lucene’s BM25 length normalization—so adding unrelated documents can still change the scores.

SELECT description, paradedb.score(id) AS score
FROM articles
WHERE description @@@ '🍎' OR description @@@ '🍏'
ORDER BY score DESC, description;

 description | score
-------------+-------
(0 rows)

The result is empty. Using emoji as terms can lead to inconsistent tokenization results, so I replaced them with text labels instead:

UPDATE articles SET description 
 = replace(description, '🍎', 'Gala');
UPDATE articles SET description 
 = replace(description, '🍏', 'Granny Smith');
UPDATE articles SET description 
 = replace(description, '🍊', 'Orange');

This time, the scoring is more precise and takes into account the term frequency within the document (TF), the term’s rarity across the entire indexed corpus (IDF), along with a length normalization factor to prevent longer documents from having an unfair advantage:

SELECT description, paradedb.score(id) AS score
FROM articles
WHERE description @@@ 'Gala' OR description @@@ 'Granny Smith'
ORDER BY score DESC, description;

          description          |   score
-------------------------------+------------
 Granny Smith 🍌 Orange        |  3.1043208
 Gala Gala Gala Gala Gala Gala | 0.79529095
 Gala Gala 🍌 🍌 🍌            |  0.7512194
 Gala 🍌                       | 0.69356775
 Gala 🍌 Orange Gala           | 0.63589364
 Gala 🍌 Orange                |  0.5195716
 Gala 🍌 Orange 🌴 🫐 🍈 🍇   |  0.5195716
 🍌 Orange 🌴 🫐 🍈 🍇   Gala |  0.5195716
 Gala 🍌 Orange Orange Orange  | 0.34597924
(9 rows)

It looks very similar to the MongoDB result. Lucene may give a slight edge to terms that appear more frequently (🍎 🍌 🍊 🍎), even if the document length penalty is higher. Tantivy might apply length normalization in a slightly different way, so the shorter (🍎 🍌) gets a bigger boost.

Here is the execution plan in ParadeDB:

EXPLAIN(ANALYZE, BUFFERS, VERBOSE)
SELECT description, paradedb.score(id) AS score
FROM articles
WHERE description @@@ 'Gala' OR description @@@ 'Granny Smith'
ORDER BY score DESC, description
;

 Gather Merge  (cost=1010.06..1010.68 rows=5 width=31) (actual time=5.893..8.237 rows=8 loops=1)
   Output: description, (score(id))
   Workers Planned: 2
   Workers Launched: 2
   Buffers: shared hit=333
   ->  Sort  (cost=10.04..10.05 rows=3 width=31) (actual time=0.529..0.540 rows=3 loops=3)
         Output: description, (score(id))
         Sort Key: (score(articles.id)) DESC, articles.description
         Sort Method: quicksort  Memory: 25kB
         Buffers: shared hit=306
         Worker 0:  actual time=0.548..0.558 rows=0 loops=1
           Sort Method: quicksort  Memory: 25kB
           Buffers: shared hit=64
         Worker 1:  actual time=0.596..0.607 rows=0 loops=1
           Sort Method: quicksort  Memory: 25kB
           Buffers: shared hit=64
         ->  Parallel Custom Scan (ParadeDB Scan) on public.articles  (cost=10.00..10.02 rows=3 width=31) (actual time=0.367..0.444 rows=3 loops=3)
               Output: description, score(id)
               Table: articles
               Index: search_idx
               Segment Count: 5
               Heap Fetches: 8
               Virtual Tuples: 0
               Invisible Tuples: 0
               Parallel Workers: {"-1":{"query_count":0,"claimed_segments":[{"id":"a17b19a2","deleted_docs":0,"max_doc":9},{"id":"3fa71653","deleted_docs":6,"max_doc":6},{"id":"3c243f8e","deleted_docs":1,"max_doc":1},{"id":"badbcd7e","deleted_docs":8,"max_doc":8},{"id":"add79d5d","deleted_docs":9,"max_doc":9}]}}
               Exec Method: NormalScanExecState
               Scores: true
               Tantivy Query: {"boolean":{"should":[{"with_index":{"query":{"parse_with_field":{"field":"description","query_string":"Gala","lenient":null,"conjunction_mode":null}}}},{"with_index":{"query":{"parse_with_field":{"field":"description","query_string":"Granny Smith","lenient":null,"conjunction_mode":null}}}}]}}
               Buffers: shared hit=216
               Worker 0:  actual time=0.431..0.441 rows=0 loops=1
                 Buffers: shared hit=19
               Worker 1:  actual time=0.447..0.457 rows=0 loops=1
                 Buffers: shared hit=19

This PostgreSQL plan shows ParadeDB executing a parallel full-text search with Tantivy. The Parallel Custom Scan node issues a BM25 query (Gala OR "Granny Smith") to the segmented Tantivy index. Each worker searches its segments, scores, fetches matching descriptions, and sorts locally. The Gather Merge then combines these into a single ranked list. Since search and scoring are done within Tantivy across CPU cores and results are fetched from shared memory, the query is quick and efficient.

In the execution plan, the Tantivy query closely resembles a MongoDB search query. Specifically, "boolean" in Tantivy is equivalent to "compound" in MongoDB, "should" matches "should", and "parse_with_field.field" is similar to "path".
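For comparison, here is roughly what the same boolean/should structure looks like on the MongoDB side, expressed with the compound operator of $search. This is a sketch against the original emoji collection, equivalent to the earlier text query that passed an array of terms:

db.articles.aggregate([
  { $search: {
      index: "default",
      compound: {                                            // Tantivy: "boolean"
        should: [                                            // Tantivy: "should"
          { text: { query: "🍎", path: "description" } },    // "parse_with_field.field" ~ "path"
          { text: { query: "🍏", path: "description" } }
        ]
      }
  } },
  { $project: { _id: 0, score: { $meta: "searchScore" }, description: 1 } },
  { $sort: { score: -1 } }
])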

PostgreSQL’s built-in search only provides basic, local term frequency scoring. To get a full-featured text search that can be used in an applica... (truncated)

September 18, 2025

Dynamic view-based data masking in Amazon RDS and Amazon Aurora MySQL

Data masking is an important technique in cybersecurity, allowing organizations to safeguard personally identifiable information (PII) and other confidential data, while maintaining its utility for development, testing, and analytics purposes. Data masking involves replacing original sensitive data with false, yet realistic information. This process helps ensure that the masked version preserves the format and characteristics […]

Help Shape the Future of Vector Search in MySQL

AI and machine learning are seemingly everywhere, and that’s forcing every database company to think about vector search. Companies want to build things like smart search that actually understands what you mean, recommendation systems that know what you’ll like, and tools that can spot when something’s off. To make all of this work at the […]

Combine Two JSON Collections with Nested Arrays: MongoDB and PostgreSQL Aggregations

Suppose you need to merge two sources of data—both JSON documents containing nested arrays. This was a question on StackOverflow, with a simple example that is easy to reproduce. Let's examine how to accomplish this in PostgreSQL and MongoDB, and compare the approaches.

Description of the problem

I have two tables. One is stored on one server, and the other on another. And I need to combine their data on daily statistics once in a while. The tables are identical in fields and structure. But I don't know how to combine the jsonb fields into one array by grouping them by some fields and calculating the total number.

So, we have sales transactions stored in two sources, each containing an array of cash registers, each cash register containing an array of products sold that day.
We want to merge both sources, and aggregate the counts by product and register in nested arrays.

They provided an example on db<>fiddle. To make it simpler, I've put the sample data in a table, with the two sources ("server_table" and "my_temp") and the expected result (the total for each group) in the last column:

date        cash register  product name  count  source        expected total
2025-09-01  2              name1         2      server_table  2
2025-09-01  2              name2         4      server_table  4
2025-09-01  3              name1         2      my_temp       2
2025-09-01  3              name2         4      my_temp       4
2025-09-01  4              name2         4      my_temp
2025-09-01  4              name2         8      server_table  12
2025-09-01  4              name8         12     my_temp
2025-09-01  4              name8         6      server_table  18
2025-09-02  1              name1         2      my_temp
2025-09-02  1              name1         2      server_table  4
2025-09-02  1              name2         4      my_temp
2025-09-02  1              name2         4      server_table  8
2025-09-02  3              name2         4      my_temp       4
2025-09-02  3              name8         12     my_temp       12
2025-09-02  4              name2         4      server_table  4
2025-09-02  4              name4         5      server_table  5
2025-09-03  2              name1         2      my_temp
2025-09-03  2              name1         2      server_table  4
2025-09-03  2              name2         4      my_temp
2025-09-03  2              name2         4      server_table  8
2025-09-03  4              name2         4      my_temp
2025-09-03  4              name2         4      server_table  8
2025-09-03  4              name8         12     my_temp
2025-09-03  4              name8         12     server_table  24
2025-09-04  1              name1         2      my_temp
2025-09-04  1              name1         2      server_table  4
2025-09-04  1              name2         4      my_temp
2025-09-04  1              name2         4      server_table  8
2025-09-04  4              name2         4      my_temp
2025-09-04  4              name2         4      server_table  8
2025-09-04  4              name8         12     my_temp
2025-09-04  4              name8         12     server_table  24

Sample data in PostgreSQL

Here is the example provided in the post (shared as a db<>fiddle):


-- Create first table  
CREATE TABLE my_temp (  
    employee_id TEXT,  
    date DATE,  
    info JSONB  
);  

-- Insert sample data into my_temp  
INSERT INTO my_temp (employee_id, date, info)  
VALUES  
(  
    '3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',  
    '2025-09-01',  
    '[  
        { "cash_register": 3,  
          "products": [  
            { "productName": "name1", "count": 2 },  
            { "productName": "name2", "count": 4 }  
          ]  
        },  
        { "cash_register": 4,  
          "products": [  
            { "productName": "name8", "count": 12 },  
            { "productName": "name2", "count": 4 }  
          ]  
        }  
     ]'  
),  
(  
    '3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',  
    '2025-09-02',  
    '[  
        { "cash_register": 1,  
          "products": [  
            { "productName": "name1", "count": 2 },  
            { "productName": "name2", "count": 4 }  
          ]  
        },  
        { "cash_register": 3,  
          "products": [  
            { "productName": "name8", "count": 12 },  
            { "productName": "name2", "count": 4 }  
          ]  
        }  
     ]'  
),  
(  
    '3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',  
    '2025-09-03',  
    '[  
        { "cash_register": 2,  
          "products": [  
            { "productName": "name1", "count": 2 },  
            { "productName": "name2", "count": 4 }  
          ]  
        },  
        { "cash_register": 4,  
          "products": [  
            { "productName": "name8", "count": 12 },  
            { "productName": "name2", "count": 4 }  
          ]  
        }  
     ]'  
),  
(  
    '3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',  
    '2025-09-04',  
    '[  
        { "cash_register": 1,  
          "products": [  
            { "productName": "name1", "count": 2 },  
            { "productName": "name2", "count": 4 }  
          ]  
        },  
        { "cash_register": 4,  
          "products": [  
            { "productName": "name8", "count": 12 },  
            { "productName": "name2", "count": 4 }  
          ]  
        }  
     ]'  
);  

-- Create second table  
CREATE TABLE server_table (  
    employee_id TEXT,  
    date DATE,  
    info JSONB  
);  

-- Insert sample data into server_table  
INSERT INTO server_table (employee_id, date, info)  
VALUES  
(  
    '3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',  
    '2025-09-01',  
    '[  
        { "cash_register": 2,  
          "products": [  
            { "productName": "name1", "count": 2 },  
            { "productName": "name2", "count": 4 }  
          ]  
        },  
        { "cash_register": 4,  
          "products": [  
            { "productName": "name8", "count": 6 },  
            { "productName": "name2", "count": 8 }  
          ]  
        }  
     ]'  
),  
(  
    '3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',  
    '2025-09-02',  
    '[  
        { "cash_register": 1,  
          "products": [  
            { "productName": "name1", "count": 2 },  
            { "productName": "name2", "count": 4 }  
          ]  
        },  
        { "cash_register": 4,  
          "products": [  
            { "productName": "name4", "count": 5 },  
            { "productName": "name2", "count": 4 }  
          ]  
        }  
     ]'  
),  
(  
    '3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',  
    '2025-09-03',  
    '[  
        { "cash_register": 2,  
          "products": [  
            { "productName": "name1", "count": 2 },  
            { "productName": "name2", "count": 4 }  
          ]  
        },  
        { "cash_register": 4,  
          "products": [  
            { "productName": "name8", "count": 12 },  
            { "productName": "name2", "count": 4 }  
          ]  
        }  
     ]'  
),  
(  
    '3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',  
    '2025-09-04',  
    '[  
        { "cash_register": 1,  
          "products": [  
            { "productName": "name1", "count": 2 },  
            { "productName": "name2", "count": 4 }  
          ]  
        },  
        { "cash_register": 4,  
          "products": [  
            { "productName": "name8", "count": 12 },  
            { "productName": "name2", "count": 4 }  
          ]  
        }  
     ]'  
);  

Our goal is to aggregate data from two tables and calculate their total counts. Although I have 30 years of experience working with relational databases and am generally stronger in SQL, I find MongoDB to be more intuitive when working with JSON documents. Let's begin there.

Sample data in MongoDB

I create two collections with the same data as the PostgreSQL example:

db.my_temp.insertMany([  
  {  
    employee_id: "3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9",  
    date: ISODate("2025-09-01"),  
    info: [  
      {  
        cash_register: 3,  
        products: [  
          { productName: "name1", count: 2 },  
          { productName: "name2", count: 4 }  
        ]  
      },  
      {  
        cash_register: 4,  
        products: [  
          { productName: "name8", count: 12 },  
          { productName: "name2", count: 4 }  
        ]  
      }  
    ]  
  },  
  {  
    employee_id: "3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9",  
    date: ISODate("2025-09-02"),  
    info: [  
      {  
        cash_register: 1,  
        products: [  
          { productName: "name1", count: 2 },  
          { productName: "name2", count: 4 }  
        ]  
      },  
      {  
        cash_register: 3,  
        products: [  
          { productName: "name8", count: 12 },  
          { productName: "name2", count: 4 }  
        ]  
      }  
    ]  
  },  
  {  
    employee_id: "3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9",  
    date: ISODate("2025-09-03"),  
    info: [  
      {  
        cash_register: 2,  
        products: [  
          { productName: "name1", count: 2 },  
          { productName: "name2", count: 4 }  
        ]  
      },  
      {  
        cash_register: 4,  
        products: [  
          { productName: "name8", count: 12 },  
          { productName: "name2", count: 4 }  
        ]  
      }  
    ]  
  },  
  {  
    employee_id: "3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9",  
    date: ISODate("2025-09-04"),  
    info: [  
      {  
        cash_register: 1,  
        products: [  
          { productName: "name1", count: 2 },  
          { productName: "name2", count: 4 }  
        ]  
      },  
      {  
        cash_register: 4,  
        products: [  
          { productName: "name8", count: 12 },  
          { productName: "name2", count: 4 }  
        ]  
      }  
    ]  
  }  
]);  


db.server_table.insertMany([  
  {  
    employee_id: "3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9",  
    date: ISODate("2025-09-01"),  
    info: [  
      {  
        cash_register: 2,  
        products: [  
          { productName: "name1", count: 2 },  
          { productName: "name2", count: 4 }  
        ]  
      },  
      {  
        cash_register: 4,  
        products: [  
          { productName: "name8", count: 6 },  
          { productName: "name2", count: 8 }  
        ]  
      }  
    ]  
  },  
  {  
    employee_id: "3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9",  
    date: ISODate("2025-09-02"),  
    info: [  
      {  
        cash_register: 1,  
        products: [  
          { productName: "name1", count: 2 },  
          { productName: "name2", count: 4 }  
        ]  
      },  
      {  
        cash_register: 4,  
        products: [  
          { productName: "name4", count: 5 },  
          { productName: "name2", count: 4 }  
        ]  
      }  
    ]  
  },  
  {  
    employee_id: "3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9",  
    date: ISODate("2025-09-03"),  
    info: [  
      {  
        cash_register: 2,  
        products: [  
          { productName: "name1", count: 2 },  
          { productName: "name2", count: 4 }  
        ]  
      },  
      {  
        cash_register: 4,  
        products: [  
          { productName: "name8", count: 12 },  
          { productName: "name2", count: 4 }  
        ]  
      }  
    ]  
  },  
  {  
    employee_id: "3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9",  
    date: ISODate("2025-09-04"),  
    info: [  
      {  
        cash_register: 1,  
        products: [  
          { productName: "name1", count: 2 },  
          { productName: "name2", count: 4 }  
        ]  
      },  
      {  
        cash_register: 4,  
        products: [  
          { productName: "name8", count: 12 },  
          { productName: "name2", count: 4 }  
        ]  
      }  
    ]  
  }  
]);  

PostgreSQL stores the employee ID and date in separate columns, partly because JSONB doesn’t support every BSON data type (dates, for example), whereas a document database keeps all related data within a single document. Despite this structural difference, the JSON representation looks much the same whether it is stored as JSONB in PostgreSQL or as BSON in MongoDB.
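For instance, a date stored inside JSONB is just a string and has to be cast back explicitly. A tiny illustration with a hypothetical inline value:

SELECT ('{"sold_on": "2025-09-01"}'::jsonb ->> 'sold_on')::date AS sold_on;
-- JSONB has no native date type, so the value round-trips as text and needs a cast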

Solution in MongoDB

The aggregation framework decomposes a problem into successive stages of a pipeline, making it easier to code, read, and debug. I'll need the following stages:

  1. $unionWith to concatenate the documents read from "server_table" with those read from "my_temp"
  2. $unwind to flatten the array items into multiple documents
  3. $group with $sum to aggregate the counts
  4. $group to gather the documents back into arrays (the inverse of $unwind)

Here is my query:

db.my_temp.aggregate([
  // concatenate with the other source
  { $unionWith: { coll: "server_table" } },
  // flatten the info to apply aggregation
  { $unwind: "$info" },
  { $unwind: "$info.products" },
  { // sum and group by employee/date/register/product
    $group: {
      _id: {
        employee_id: "$employee_id",
        date: "$date",
        cash_register: "$info.cash_register",
        productName: "$info.products.productName"
      },
      total_count: { $sum: "$info.products.count" }
    }
  },
  { // Regroup by register (inverse of unwind)
    $group: {
      _id: {
        employee_id: "$_id.employee_id",
        date: "$_id.date",
        cash_register: "$_id.cash_register"
      },
      products: {
        $push: {
          productName: "$_id.productName",
          count: "$total_count"
        }
      }
    }
  },
  { // Regroup by employee/date  (inverse of first unwind)
    $group: {
      _id: {
        employee_id: "$_id.employee_id",
        date: "$_id.date"
      },
      info: {
        $push: {
          cash_register: "$_id.cash_register",
          products: "$products"
        }
      }
    }
  },
  { $project: { _id: 0, employee_id: "$_id.employee_id", date: "$_id.date", info: 1 } },
  { $sort: { date: 1 } }
]);

Here is the result:

[
  {
    info: [
      { cash_register: 2, products: [ { productName: 'name1', count: 2 }, { productName: 'name2', count: 4 } ] },
      { cash_register: 4, products: [ { productName: 'name8', count: 18 }, { productName: 'name2', count: 12 } ] },
      { cash_register: 3, products: [ { productName: 'name2', count: 4 }, { productName: 'name1', count: 2 } ] }
    ],
    employee_id: '3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',
    date: ISODate('2025-09-01T00:00:00.000Z')
  },
  {
    info: [
      { cash_register: 1, products: [ { productName: 'name2', count: 8 }, { productName: 'name1', count: 4 } ] },
      { cash_register: 4, products: [ { productName: 'name4', count: 5 }, { productName: 'name2', count: 4 } ] },
      { cash_register: 3, products: [ { productName: 'name8', count: 12 }, { productName: 'name2', count: 4 } ] }
    ],
    employee_id: '3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',
    date: ISODate('2025-09-02T00:00:00.000Z')
  },
  {
    info: [
      { cash_register: 2, products: [ { productName: 'name2', count: 8 }, { productName: 'name1', count: 4 } ] },
      { cash_register: 4, products: [ { productName: 'name8', count: 24 }, { productName: 'name2', count: 8 } ] }
    ],
    employee_id: '3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',
    date: ISODate('2025-09-03T00:00:00.000Z')
  },
  {
    info: [
      { cash_register: 4, products: [ { productName: 'name8', count: 24 }, { productName: 'name2', count: 8 } ] },
      { cash_register: 1, products: [ { productName: 'name1', count: 4 }, { productName: 'name2', count: 8 } ] }
    ],
    employee_id: '3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',
    date: ISODate('2025-09-04T00:00:00.000Z')
  }
]

Solution in PostgreSQL

In SQL, you can emulate an aggregation pipeline by using the WITH clause, where each stage corresponds to a separate common table expression:

WITH
all_data AS ( -- Union to concatenate the two tables
    SELECT employee_id, "date", info FROM my_temp
    UNION ALL
    SELECT employee_id, "date", info FROM server_table
),
unwound AS ( -- Unwind cash registers and products
    SELECT
        ad.employee_id,
        ad.date,
        (reg_elem->>'cash_register')::int AS cash_register,
        prod_elem->>'productName' AS product_name,
        (prod_elem->>'count')::int AS product_count
    FROM all_data ad
    CROSS JOIN LATERAL jsonb_array_elements(ad.info) AS reg_elem
    CROSS JOIN LATERAL jsonb_array_elements(reg_elem->'products') AS prod_elem
),
product_totals AS ( -- Sum and group by employee, date, register, product
    SELECT
        employee_id,
        date,
        cash_register,
        product_name,
        SUM(product_count) AS total_count
    FROM unwound
    GROUP BY employee_id, date, cash_register, product_name
),
register_group AS ( -- Regroup by register
    SELECT
        employee_id,
        date,
        cash_register,
        jsonb_agg(
            jsonb_build_object(
                'productName', product_name,
                'count', total_count
            )
            ORDER BY product_name
        ) AS products
    FROM product_totals
    GROUP BY employee_id, date, cash_register
),
employee_group AS ( -- Regroup by employee, date
    SELECT
        employee_id,
        date,
        jsonb_agg(
            jsonb_build_object(
                'cash_register', cash_register,
                'products', products
            )
            ORDER BY cash_register
        ) AS info
    FROM register_group
    GROUP BY employee_id, date
)
SELECT *
FROM employee_group
ORDER BY date;

Beyond the classic SQL operations (UNION, JOIN, GROUP BY), we had to use JSON functions and operators such as jsonb_array_elements(), ->>, jsonb_build_object(), and jsonb_agg() to unwind and re-aggregate the documents.
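As a minimal, standalone illustration (with a hypothetical inline value), jsonb_array_elements() is what plays the role of $unwind here, expanding one JSONB array into one row per element:

SELECT elem->>'cash_register' AS cash_register
FROM jsonb_array_elements(
       '[{"cash_register": 1}, {"cash_register": 4}]'::jsonb
     ) AS elem;
-- returns two rows: 1 and 4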

Since version 17, PostgreSQL follows the SQL/JSON standard, so the query can also be written with JSON_TABLE(), JSON_OBJECT(), and JSON_ARRAYAGG():

WITH all_data AS (  
    SELECT employee_id, date, info FROM my_temp  
    UNION ALL  
    SELECT employee_id, date, info FROM server_table  
),  
-- Flatten registers and products in one pass  
unwound AS (  
    SELECT  
        t.employee_id,  
        t.date,  
        jt.
                                
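A minimal sketch of the full standard-SQL/JSON rewrite, assuming PostgreSQL 17's JSON_TABLE() with a NESTED PATH clause to flatten both levels in one pass (the JSON paths and the column names declared in the COLUMNS clause are illustrative choices):

WITH all_data AS (  -- Union to concatenate the two tables
    SELECT employee_id, date, info FROM my_temp
    UNION ALL
    SELECT employee_id, date, info FROM server_table
),
unwound AS (  -- Flatten registers and products in one pass with JSON_TABLE
    SELECT
        t.employee_id,
        t.date,
        jt.cash_register,
        jt.product_name,
        jt.product_count
    FROM all_data t,
         JSON_TABLE(
             t.info, '$[*]'
             COLUMNS (
                 cash_register int PATH '$.cash_register',
                 NESTED PATH '$.products[*]' COLUMNS (
                     product_name  text PATH '$.productName',
                     product_count int  PATH '$.count'
                 )
             )
         ) AS jt
),
product_totals AS (  -- Sum and group by employee, date, register, product
    SELECT employee_id, date, cash_register, product_name,
           SUM(product_count) AS total_count
    FROM unwound
    GROUP BY employee_id, date, cash_register, product_name
),
register_group AS (  -- Regroup products per register
    SELECT employee_id, date, cash_register,
           JSON_ARRAYAGG(
               JSON_OBJECT('productName': product_name, 'count': total_count)
               ORDER BY product_name
           ) AS products
    FROM product_totals
    GROUP BY employee_id, date, cash_register
),
employee_group AS (  -- Regroup registers per employee and date
    SELECT employee_id, date,
           JSON_ARRAYAGG(
               JSON_OBJECT('cash_register': cash_register, 'products': products)
               ORDER BY cash_register
           ) AS info
    FROM register_group
    GROUP BY employee_id, date
)
SELECT * FROM employee_group ORDER BY date;

Only the flattening step and the JSON constructors change; the shape of the pipeline mirrors the jsonb_agg()/jsonb_build_object() version above.
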

Elasticsearch Was Never a Database

Elasticsearch is a search engine, not a database. Here’s why it falls short as a system of record.

September 17, 2025

Supporting our AI overlords: Redesigning data systems to be Agent-first

This Berkeley systems group paper opens with the thesis that LLM agents will soon dominate data system workloads. These agents, acting on behalf of users, do not query like human analysts or even like the applications humans write. Instead, LLM agents bombard databases with a storm of exploratory requests: schema inspections, partial aggregates, speculative joins, rollback-heavy what-if updates. The authors call this behavior agentic speculation.

Agentic speculation is positioned as both the problem and the opportunity. The problem is that traditional DBMSs are built for exact, intermittent workloads and cannot handle the high-throughput, redundant, and inefficient querying of LLM agents. The opportunity lies in the same place: agentic speculation has recognizable properties and features that invite new designs. Databases should adapt by offering approximate answers, sharing computation across repeated subplans, caching grounding information in an agentic memory store, and even steering agents with cost estimates or semantic hints.

The paper argues the database must bend to the agent's style. But why not also consider the other way around? Why shouldn't agents be trained to issue smarter, fewer, more schema-aware queries? The authors take agent inefficiency as a given, I think, in order to preserve the black-box nature of general LLM agents. After a walkthrough of the paper, I'll revisit this question as well as other directions that occur to me.


Case studies

The authors provide experiments to ground their claims about agentic speculation. The first study uses the BIRD benchmark (BIg Bench for LaRge-scale Database Grounded Text-to-SQL Evaluation) with DuckDB as the backend. The goal here is to evaluate how well LLMs can convert natural language questions into SQL queries. I, for one, welcome our SQL wielding AI overlords! Here, agents are instantiated as GPT-4o-mini and Qwen2.5-Coder-7B.

The central finding from Figure 1 is that accuracy improves with more attempts, for both sequential setups (one agent issuing multiple turns) and parallel setups (many agents working at once). The success rate climbs by 14–70% as the number of queries increases. Brute forcing helps, but it also means flooding the database with redundant queries.

Figure 2 drives that point home. Across 50 independent attempts at a single task, fewer than 10–20% of query subplans are unique. Most work is repeated, often in a trivial manner. Result caching looks like an obvious win here.

The second case study moves beyond single queries into multi-database integration tasks that combine Postgres, MongoDB, DuckDB, and SQLite. Figure 3 plots how OpenAI's o3 model proceeds. Agents begin with metadata exploration (tables, columns), move to partial queries, and eventually to full attempts. But the phases overlap in a messy and uncertain way. The paper then explains that injecting grounding hints into the prompts (such as which column contains information pertinent to the task) reduced the number of queries by more than 20%, which shows how steerability helps. So, the agent is like a loudmouth politician who spews less bullshit when his handlers give him some direction.

The case studies illustrate the four features of agentic speculation: Scale (more attempts do improve success), Redundancy (most attempts repeat prior work), Heterogeneity (workloads mix metadata exploration with partial and complete solutions), and Steerability (agents can be nudged toward efficiency). 


Architecture

The proposed architecture aims to redesign the database stack for agent workloads. The key idea is that LLM agents send probes instead of bare SQL. These probes include not just queries but also "briefs" (natural language descriptions of intent, tolerance for approximation, and hints about the phase of work). Communication is key, folks! The database, in turn, parses these probes through an agentic interpreter and optimizes them via a probe optimizer that satisfices rather than guaranteeing exact results. It then executes them against a storage layer augmented with an agentic memory store and a shared transaction manager designed for speculative branching and rollback. Alongside answers, the system may return proactive feedback (hints, cost estimates, schema nudges) to steer agents.

The architecture is maybe too tidy. It shows a single agent swarm funneling probes into a single database engine, which responds in kind. So, this looks very much like a single-client, single-node system. There is no real discussion of multi-tenancy: what happens when two clients, with different goals and different access privileges, hit the same backend? Does one client's agentic memory contaminate another's? Are cached probes and approximations shared across tenants, and if so, who arbitrates correctness and privacy? These questions are briefly mentioned under privacy concerns in Section 6, but the architecture itself is silent on them. Whether this single-client abstraction can scale to the real, distributed, multi-tenant world remains the important open question.


Query Interfaces

Section 4 focuses on the interface between agents and databases. Probes extend SQL into a dialogue by bundling multiple queries together with a natural-language "brief" describing goals, priorities, and tolerance for error. This allows the system to understand not just what is being asked, but why. For example, is the agent in metadata exploration or in solution formulation mode? The database can then prioritize accordingly, providing a rough sample for schema discovery and a more exact computation for validation.

Two directions stand out for me. First, on the agent-to-system side, probes may request things SQL cannot express, like "find tables semantically related to electronics". This would require embedding-based similarity operators built into the DBMS.

Second, on the system-to-agent side, the database is now encouraged to become proactive, returning not just answers but feedback. These "sleeper agents" inside the DB can explain why a query returned empty results, suggest alternative tables, or give cost estimates so the agent can rethink a probe before execution. 


Processing and Optimizing Probes

Section 5 focuses on how to process probes at scale and what it means to optimize them. The key shift is that the database no longer aims for exact answers to each query. Instead, it seeks to satisfice: provide results that are good enough for the agent to decide its next step.

The paper calls this the approximation technique and presents it as twofold. First, there is exploratory scaffolding: quick samples, coarse aggregates, and partial results that help the agent discover which tables, filters, and joins matter. Second, there is decision-making approximation: estimates with bounded error that may themselves be the final answer, because the human behind the agent cares more about trends than exact counts.

Let's consider the task of finding out why profits from coffee bean sales in Berkeley were low this year relative to last. A human analyst would cut to the chase: join sales with stores, compare 2024 vs. 2025, then check returns or closures. A schema-blind LLM agent would issue a flood of redundant queries, many of them dead ends. The proposed system splits the difference: it prunes irrelevant exploration, offers approximate aggregates up front (coffee sales down ~15%), and caches this in memory so later probes can build on it.

To achieve this, the probe optimizer adapts familiar techniques. Multi-query optimization collapses redundant subplans, approximate query processing provides fast sketches instead of full scans, and incremental evaluation streams partial results with early stopping when the trend is clear. The optimizer works both within a batch of probes (intra-probe) and across turns (inter-probe). It caches results and materializes common joins so that the agent's next attempts don't repeat the same work. The optimization goal is not minimizing per-query latency but minimizing the total interaction time between agent and database, a subtle but important shift.


Indexing, Storage, and Transactions

Section 6 addresses the lower layers of the stack: indexing, storage, and transactions. To deal with the dynamic, overlapping, and branch-heavy nature of agentic speculation, it proposes an agentic memory store for semantic grounding and a new transactional model for branched updates.

The agentic memory store is essentially a semantic cache. It stores results of prior probes, metadata, column encodings, and even embeddings to support similarity search. This way, when the agent inevitably repeats itself, the system can serve cached or related results. The open problem is staleness: if schemas or data evolve, cached grounding may mislead future probes. Traditional DBs handle this through strict invalidation (drop and recompute indexes, refresh materialized views). The paper hints that agentic memory may simply be good enough until corrected, a looser consistency model that may suit LLMs' temperament.

For branched updates, the paper proposes "a new transactions framework that is centered on state sharing across probes, each of which may be independently attempting to complete a user-defined sequence of updates". The paper argues for multi-world isolation: each branch must be logically isolated, but may physically overlap to exploit shared state. Supporting thousands of concurrent speculative branches requires something beyond Postgres-style MVCC or Aurora's copy-on-write snapshots. 


Discussion

The paper offers an ambitious rethinking of how databases should respond to the arrival of LLM agents. This naturally leaves several open questions for discussion.

In my view, the paper frames the problem asymmetrically: agents are messy, exploratory, redundant, so databases must bend to accommodate them. But is that the only path forward? Alternatively, agents could be fine-tuned to issue smarter probes that are more schema-aware, less redundant, more considerate of cost. A protocol of mutual compromise seems more sustainable than a one-sided redesign. Otherwise we risk ossifying the data systems around today's inefficient LLM habits.

Multi-client operation remains an open issue. The architecture is sketched as though one user's army of agents owns the system. Real deployments will have many clients, with different goals and different access rights, colliding on the same backend. What does agentic memory mean in this context? Similarly, how does load management work? How do we allocate resources fairly among tenants when each may field thousands of speculative queries per second? Traditional databases long ago developed notions of connection pooling, admission control, and multi-tenant isolation; agent-first systems will need new equivalents attuned to speculation.

Finally, there is the question of distribution. The architecture as presented looks like a single-node system: one interpreter, one optimizer, one agentic memory, one transaction manager. Yet the workloads described are precisely the heavy workloads that drove databases toward distributed execution. How should agentic memory be partitioned or replicated across nodes? How would speculative branching work here? How can bandwidth limits be managed when repeated scans, approximate sampling, and multi-query optimization saturate storage I/O? How can cross-shard communication be kept from overwhelming the system when speculative branches and rollbacks trigger network communication at scale?


Future Directions: A Neurosymbolic Angle

If we squint, there is a neurosymbolic flavor to this entire setup. LLM agents represent the neural side: fuzzy reasoning, associative/semantic search, and speculative exploration. Databases constitute the symbolic side with schemas, relational algebra, logical operators, and transactional semantics. The paper is then all about creating an interface where the neural can collaborate with the symbolic by combining the flexibility of learned models with the structure and rigor of symbolic systems.

Probes are already halfway to symbolic logic queries: part SQL fragments, part logical forms, and part neural briefs encoding intent and constraints. If databases learn to proactively steer agents with rules and constraints, and if agents learn to ask more structured probes, the result would look even more like a neurosymbolic reasoning system, where neural components generate hypotheses and symbolic databases test, prune, and ground them. If that happens, we can talk about building a new kind of reasoning stack where the two halves ground and reinforce each other.


MongoDB as an AI-First Platform

Document databases offer an interesting angle on the AI–database melding problem. Their schema flexibility and tolerance for semistructured data make them well-suited for the exploratory phase of agent workloads, when LLMs are still feeling out what fields and joins matter. The looseness of document stores may align naturally with the fuzziness of LLM probes, especially when embeddings are brought into play for semantic search.

MongoDB's acquisition of Voyage AI points at this convergence. With built-in embeddings and vector search, MongoDB aims to support probes that ask for semantically similar documents and provide approximate retrieval early in the exploration phase.


How the 2018 Berkeley AI-Systems Vision Fared

Back in 2018, the Berkeley systems group presented a broad vision of the systems challenges for AI. Continuing our tradition of checking in on influential Berkeley AI-systems papers, let's give it a brief evaluation. Many of its predictions were directionally correct: specialized hardware, privacy and federated learning, and explainability. Others remain underdeveloped, like cloud–edge integration and continual learning in dynamic environments. What it clearly missed was the rise and dominance of LLMs as the interface to data and applications. As I said back then, plans are useless, but planning is indispensable.

Compared with that blue sky agenda, this new Agent-First Data Systems paper is more technical, grounded, and focused. It does not try to map a decade of AI systems research, but rather focuses on a single pressing problem and proposes mechanisms to cope.

MySQL with Diagrams Part Three: The Life Story of the Writing Process

When you run a simple write, it may look simple, but under the hood, MySQL’s InnoDB engine kicks off a pretty complex sequence to ensure your data stays safe, consistent, and crash-recoverable. In the top-left corner of the diagram, we see exactly where this begins — the moment the query is executed. The log […]