September 18, 2025
How to retrieve current session timezone using timeZone() in ClickHouse®
How to calculate date differences using unit boundaries with dateDiff in ClickHouse®
Help Shape the Future of Vector Search in MySQL
Combine Two JSON Collections with Nested Arrays: MongoDB and PostgreSQL Aggregations
Suppose you need to merge two sources of data—both JSON documents containing nested arrays. This was a question on StackOverflow, with a simple example, easy to reproduce. Let's examine how to accomplish this in PostgreSQL and MongoDB, and compare the approaches.
Description of the problem
I have two tables. One is stored on one server, and the other on another. And I need to combine their data on daily statistics once in a while. The tables are identical in fields and structure. But I don't know how to combine the jsonb fields into one array by grouping them by some fields and calculating the total number.
So, we have sales transactions stored in two sources, each containing an array of cash registers, each cash register containing an array of products sold that day.
We want to merge both sources, and aggregate the counts by product and register in nested arrays.
They provided an example on db<>fiddle. To make it simpler, I've put the sample data in a table, with the two sources ("server_table" and "my_temp") and the expected result in bold:
| date | cash register | product name | count | source |
|---|---|---|---|---|
| 2025-09-01 | 2 | name1 | 2 | server_table |
| | | | **2** | |
| 2025-09-01 | 2 | name2 | 4 | server_table |
| | | | **4** | |
| 2025-09-01 | 3 | name1 | 2 | my_temp |
| | | | **2** | |
| 2025-09-01 | 3 | name2 | 4 | my_temp |
| | | | **4** | |
| 2025-09-01 | 4 | name2 | 4 | my_temp |
| 2025-09-01 | 4 | name2 | 8 | server_table |
| | | | **12** | |
| 2025-09-01 | 4 | name8 | 12 | my_temp |
| 2025-09-01 | 4 | name8 | 6 | server_table |
| | | | **18** | |
| 2025-09-02 | 1 | name1 | 2 | my_temp |
| 2025-09-02 | 1 | name1 | 2 | server_table |
| | | | **4** | |
| 2025-09-02 | 1 | name2 | 4 | my_temp |
| 2025-09-02 | 1 | name2 | 4 | server_table |
| | | | **8** | |
| 2025-09-02 | 3 | name2 | 4 | my_temp |
| | | | **4** | |
| 2025-09-02 | 3 | name8 | 12 | my_temp |
| | | | **12** | |
| 2025-09-02 | 4 | name2 | 4 | server_table |
| | | | **4** | |
| 2025-09-02 | 4 | name4 | 5 | server_table |
| | | | **5** | |
| 2025-09-03 | 2 | name1 | 2 | my_temp |
| 2025-09-03 | 2 | name1 | 2 | server_table |
| | | | **4** | |
| 2025-09-03 | 2 | name2 | 4 | my_temp |
| 2025-09-03 | 2 | name2 | 4 | server_table |
| | | | **8** | |
| 2025-09-03 | 4 | name2 | 4 | my_temp |
| 2025-09-03 | 4 | name2 | 4 | server_table |
| | | | **8** | |
| 2025-09-03 | 4 | name8 | 12 | my_temp |
| 2025-09-03 | 4 | name8 | 12 | server_table |
| | | | **24** | |
| 2025-09-04 | 1 | name1 | 2 | my_temp |
| 2025-09-04 | 1 | name1 | 2 | server_table |
| | | | **4** | |
| 2025-09-04 | 1 | name2 | 4 | my_temp |
| 2025-09-04 | 1 | name2 | 4 | server_table |
| | | | **8** | |
| 2025-09-04 | 4 | name2 | 4 | my_temp |
| 2025-09-04 | 4 | name2 | 4 | server_table |
| | | | **8** | |
| 2025-09-04 | 4 | name8 | 12 | my_temp |
| 2025-09-04 | 4 | name8 | 12 | server_table |
| | | | **24** | |
Sample data in PostgreSQL
Here is the example provided in the post, as a db<>fiddle link:
-- Create first table
CREATE TABLE my_temp (
employee_id TEXT,
date DATE,
info JSONB
);
-- Insert sample data into my_temp
INSERT INTO my_temp (employee_id, date, info)
VALUES
(
'3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',
'2025-09-01',
'[
{ "cash_register": 3,
"products": [
{ "productName": "name1", "count": 2 },
{ "productName": "name2", "count": 4 }
]
},
{ "cash_register": 4,
"products": [
{ "productName": "name8", "count": 12 },
{ "productName": "name2", "count": 4 }
]
}
]'
),
(
'3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',
'2025-09-02',
'[
{ "cash_register": 1,
"products": [
{ "productName": "name1", "count": 2 },
{ "productName": "name2", "count": 4 }
]
},
{ "cash_register": 3,
"products": [
{ "productName": "name8", "count": 12 },
{ "productName": "name2", "count": 4 }
]
}
]'
),
(
'3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',
'2025-09-03',
'[
{ "cash_register": 2,
"products": [
{ "productName": "name1", "count": 2 },
{ "productName": "name2", "count": 4 }
]
},
{ "cash_register": 4,
"products": [
{ "productName": "name8", "count": 12 },
{ "productName": "name2", "count": 4 }
]
}
]'
),
(
'3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',
'2025-09-04',
'[
{ "cash_register": 1,
"products": [
{ "productName": "name1", "count": 2 },
{ "productName": "name2", "count": 4 }
]
},
{ "cash_register": 4,
"products": [
{ "productName": "name8", "count": 12 },
{ "productName": "name2", "count": 4 }
]
}
]'
);
-- Create second table
CREATE TABLE server_table (
employee_id TEXT,
date DATE,
info JSONB
);
-- Insert sample data into server_table
INSERT INTO server_table (employee_id, date, info)
VALUES
(
'3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',
'2025-09-01',
'[
{ "cash_register": 2,
"products": [
{ "productName": "name1", "count": 2 },
{ "productName": "name2", "count": 4 }
]
},
{ "cash_register": 4,
"products": [
{ "productName": "name8", "count": 6 },
{ "productName": "name2", "count": 8 }
]
}
]'
),
(
'3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',
'2025-09-02',
'[
{ "cash_register": 1,
"products": [
{ "productName": "name1", "count": 2 },
{ "productName": "name2", "count": 4 }
]
},
{ "cash_register": 4,
"products": [
{ "productName": "name4", "count": 5 },
{ "productName": "name2", "count": 4 }
]
}
]'
),
(
'3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',
'2025-09-03',
'[
{ "cash_register": 2,
"products": [
{ "productName": "name1", "count": 2 },
{ "productName": "name2", "count": 4 }
]
},
{ "cash_register": 4,
"products": [
{ "productName": "name8", "count": 12 },
{ "productName": "name2", "count": 4 }
]
}
]'
),
(
'3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',
'2025-09-04',
'[
{ "cash_register": 1,
"products": [
{ "productName": "name1", "count": 2 },
{ "productName": "name2", "count": 4 }
]
},
{ "cash_register": 4,
"products": [
{ "productName": "name8", "count": 12 },
{ "productName": "name2", "count": 4 }
]
}
]'
);
Our goal is to aggregate data from two tables and calculate their total counts. Although I have 30 years of experience working with relational databases and am generally stronger in SQL, I find MongoDB to be more intuitive when working with JSON documents. Let's begin there.
Sample data in MongoDB
I create two collections with the same data as the PostgreSQL example:
db.my_temp.insertMany([
{
employee_id: "3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9",
date: ISODate("2025-09-01"),
info: [
{
cash_register: 3,
products: [
{ productName: "name1", count: 2 },
{ productName: "name2", count: 4 }
]
},
{
cash_register: 4,
products: [
{ productName: "name8", count: 12 },
{ productName: "name2", count: 4 }
]
}
]
},
{
employee_id: "3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9",
date: ISODate("2025-09-02"),
info: [
{
cash_register: 1,
products: [
{ productName: "name1", count: 2 },
{ productName: "name2", count: 4 }
]
},
{
cash_register: 3,
products: [
{ productName: "name8", count: 12 },
{ productName: "name2", count: 4 }
]
}
]
},
{
employee_id: "3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9",
date: ISODate("2025-09-03"),
info: [
{
cash_register: 2,
products: [
{ productName: "name1", count: 2 },
{ productName: "name2", count: 4 }
]
},
{
cash_register: 4,
products: [
{ productName: "name8", count: 12 },
{ productName: "name2", count: 4 }
]
}
]
},
{
employee_id: "3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9",
date: ISODate("2025-09-04"),
info: [
{
cash_register: 1,
products: [
{ productName: "name1", count: 2 },
{ productName: "name2", count: 4 }
]
},
{
cash_register: 4,
products: [
{ productName: "name8", count: 12 },
{ productName: "name2", count: 4 }
]
}
]
}
]);
db.server_table.insertMany([
{
employee_id: "3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9",
date: ISODate("2025-09-01"),
info: [
{
cash_register: 2,
products: [
{ productName: "name1", count: 2 },
{ productName: "name2", count: 4 }
]
},
{
cash_register: 4,
products: [
{ productName: "name8", count: 6 },
{ productName: "name2", count: 8 }
]
}
]
},
{
employee_id: "3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9",
date: ISODate("2025-09-02"),
info: [
{
cash_register: 1,
products: [
{ productName: "name1", count: 2 },
{ productName: "name2", count: 4 }
]
},
{
cash_register: 4,
products: [
{ productName: "name4", count: 5 },
{ productName: "name2", count: 4 }
]
}
]
},
{
employee_id: "3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9",
date: ISODate("2025-09-03"),
info: [
{
cash_register: 2,
products: [
{ productName: "name1", count: 2 },
{ productName: "name2", count: 4 }
]
},
{
cash_register: 4,
products: [
{ productName: "name8", count: 12 },
{ productName: "name2", count: 4 }
]
}
]
},
{
employee_id: "3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9",
date: ISODate("2025-09-04"),
info: [
{
cash_register: 1,
products: [
{ productName: "name1", count: 2 },
{ productName: "name2", count: 4 }
]
},
{
cash_register: 4,
products: [
{ productName: "name8", count: 12 },
{ productName: "name2", count: 4 }
]
}
]
}
]);
PostgreSQL stores the employee ID and date in separate columns (JSONB doesn't support every BSON data type), while a document database stores all related data within a single document. Despite these structural differences, the JSON representation looks similar, whether it is stored as JSONB in PostgreSQL or BSON in MongoDB.
Solution in MongoDB
The aggregation framework helps to decompose a problem into successive stages in a pipeline, making it easier to code, read, and debug. I'll need the following stages:
- $unionWith to concatenate the documents read from "server_table" with those read from "my_temp"
- $unwind to flatten the array items into multiple documents
- $group with $sum to aggregate
- $group to gather the multiple documents back into arrays
Here is my query:
db.my_temp.aggregate([
// concatenate with the other source
{ $unionWith: { coll: "server_table" } },
// flatten the info to apply aggregation
{ $unwind: "$info" },
{ $unwind: "$info.products" },
{ // sum and group by employee/date/register/product
$group: {
_id: {
employee_id: "$employee_id",
date: "$date",
cash_register: "$info.cash_register",
productName: "$info.products.productName"
},
total_count: { $sum: "$info.products.count" }
}
},
{ // Regroup by register (inverse of unwind)
$group: {
_id: {
employee_id: "$_id.employee_id",
date: "$_id.date",
cash_register: "$_id.cash_register"
},
products: {
$push: {
productName: "$_id.productName",
count: "$total_count"
}
}
}
},
{ // Regroup by employee/date (inverse of first unwind)
$group: {
_id: {
employee_id: "$_id.employee_id",
date: "$_id.date"
},
info: {
$push: {
cash_register: "$_id.cash_register",
products: "$products"
}
}
}
},
{ $project: { _id: 0, employee_id: "$_id.employee_id", date: "$_id.date", info: 1 } },
{ $sort: { date: 1 } }
]);
Here is the result:
[
{
info: [
{ cash_register: 2, products: [ { productName: 'name1', count: 2 }, { productName: 'name2', count: 4 } ] },
{ cash_register: 4, products: [ { productName: 'name8', count: 18 }, { productName: 'name2', count: 12 } ] },
{ cash_register: 3, products: [ { productName: 'name2', count: 4 }, { productName: 'name1', count: 2 } ] }
],
employee_id: '3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',
date: ISODate('2025-09-01T00:00:00.000Z')
},
{
info: [
{ cash_register: 1, products: [ { productName: 'name2', count: 8 }, { productName: 'name1', count: 4 } ] },
{ cash_register: 4, products: [ { productName: 'name4', count: 5 }, { productName: 'name2', count: 4 } ] },
{ cash_register: 3, products: [ { productName: 'name8', count: 12 }, { productName: 'name2', count: 4 } ] }
],
employee_id: '3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',
date: ISODate('2025-09-02T00:00:00.000Z')
},
{
info: [
{ cash_register: 2, products: [ { productName: 'name2', count: 8 }, { productName: 'name1', count: 4 } ] },
{ cash_register: 4, products: [ { productName: 'name8', count: 24 }, { productName: 'name2', count: 8 } ] }
],
employee_id: '3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',
date: ISODate('2025-09-03T00:00:00.000Z')
},
{
info: [
{ cash_register: 4, products: [ { productName: 'name8', count: 24 }, { productName: 'name2', count: 8 } ] },
{ cash_register: 1, products: [ { productName: 'name1', count: 4 }, { productName: 'name2', count: 8 } ] }
],
employee_id: '3dd280f2-e4d3-4568-9d97-1cc3a9dff1e9',
date: ISODate('2025-09-04T00:00:00.000Z')
}
]
Solution in PostgreSQL
In SQL, you can emulate an aggregation pipeline by using the WITH clause, where each stage corresponds to a separate common table expression:
WITH
all_data AS ( -- Union to concatenate the two tables
SELECT employee_id, "date", info FROM my_temp
UNION ALL
SELECT employee_id, "date", info FROM server_table
),
unwound AS ( -- Unwind cash registers and products
SELECT
ad.employee_id,
ad.date,
(reg_elem->>'cash_register')::int AS cash_register,
prod_elem->>'productName' AS product_name,
(prod_elem->>'count')::int AS product_count
FROM all_data ad
CROSS JOIN LATERAL jsonb_array_elements(ad.info) AS reg_elem
CROSS JOIN LATERAL jsonb_array_elements(reg_elem->'products') AS prod_elem
),
product_totals AS ( -- Sum and group by employee, date, register, product
SELECT
employee_id,
date,
cash_register,
product_name,
SUM(product_count) AS total_count
FROM unwound
GROUP BY employee_id, date, cash_register, product_name
),
register_group AS ( -- Regroup by register
SELECT
employee_id,
date,
cash_register,
jsonb_agg(
jsonb_build_object(
'productName', product_name,
'count', total_count
)
ORDER BY product_name
) AS products
FROM product_totals
GROUP BY employee_id, date, cash_register
),
employee_group AS ( -- Regroup by employee, date
SELECT
employee_id,
date,
jsonb_agg(
jsonb_build_object(
'cash_register', cash_register,
'products', products
)
ORDER BY cash_register
) AS info
FROM register_group
GROUP BY employee_id, date
)
SELECT *
FROM employee_group
ORDER BY date;
Beyond the classic SQL operations, like UNION, JOIN, GROUP BY, we had to use JSON operators such as jsonb_array_elements(), ->>, jsonb_build_object(), jsonb_agg() to unwind and aggregate.
PostgreSQL follows the SQL/JSON standard (JSON_TABLE() arrived in PostgreSQL 17), so the query can also be written with JSON_TABLE(), JSON_OBJECT(), and JSON_ARRAYAGG():
WITH all_data AS (
  SELECT employee_id, date, info FROM my_temp
  UNION ALL
  SELECT employee_id, date, info FROM server_table
),
-- Flatten registers and products in one pass
-- (a sketch of this step; column names and paths are assumed to mirror the jsonb version)
unwound AS (
  SELECT t.employee_id, t.date, jt.cash_register, jt.product_name, jt.product_count
  FROM all_data t,
       JSON_TABLE(t.info, '$[*]' COLUMNS (
         cash_register int PATH '$.cash_register',
         NESTED PATH '$.products[*]' COLUMNS (
           product_name text PATH '$.productName',
           product_count int PATH '$.count'
         )
       )) AS jt
)
-- the grouping steps then use JSON_OBJECT() and JSON_ARRAYAGG()
-- in place of jsonb_build_object() and jsonb_agg() above
SELECT * FROM unwound;
Elasticsearch Was Never a Database
Elasticsearch is a search engine, not a database. Here’s why it falls short as a system of record.
September 17, 2025
Supporting our AI overlords: Redesigning data systems to be Agent-first
This Berkeley systems group paper opens with the thesis that LLM agents will soon dominate data system workloads. These agents, acting on behalf of users, do not query like human analysts or even like the applications written by them. Instead, the LLM agents bombard databases with a storm of exploratory requests: schema inspections, partial aggregates, speculative joins, rollback-heavy what-if updates. The authors call this behavior agentic speculation.
Agentic speculation is positioned as both the problem and the opportunity. The problem is that traditional DBMSs are built for exact intermittent workloads and cannot handle the high-throughput redundant and inefficient querying of LLM agents. The opportunity also lies here. Agentic speculation has recognizable properties and features that invite new designs. Databases should adapt by offering approximate answers, sharing computation across repeated subplans, caching grounding information in an agentic memory store, and even steering agents with cost estimates or semantic hints.
The paper argues the database must bend to the agent's style. But why don't we also consider the other way around? Why shouldn't agents be trained to issue smarter, fewer, more schema-aware queries? The authors take agent inefficiency as a given, I think, in order to preserve the blackbox nature of general LLM agents. After a walkthrough of the paper, I'll revisit this question as well as other directions that occur to me.
Case studies
The authors provide experiments to ground their claims about agentic speculation. The first study uses the BIRD benchmark (BIg Bench for LaRge-scale Database Grounded Text-to-SQL Evaluation) with DuckDB as the backend. The goal here is to evaluate how well LLMs can convert natural language questions into SQL queries. I, for one, welcome our SQL wielding AI overlords! Here, agents are instantiated as GPT-4o-mini and Qwen2.5-Coder-7B.
The central finding from Figure 1 is that accuracy improves with more attempts for both sequential (one agent issuing multiple turns) and parallel setups (many agents working at once). The success rate climbs by 14–70% as the number of queries increases. Brute forcing helps, but it also means flooding the database with redundant queries.
Figure 2 drives that point home. Across 50 independent attempts at a single task, fewer than 10–20% of query subplans are unique. Most work is repeated, often in a trivial manner. Result caching looks like an obvious win here.
The second case study moves beyond single queries into multi-database integration tasks that combine Postgres, MongoDB, DuckDB, and SQLite. Figure 3 plots how OpenAI's o3 model proceeds. Agents begin with metadata exploration (tables, columns), move to partial queries, and eventually to full attempts. But the phases overlap in a messy and uncertain way. The paper then explains that injecting grounding hints into the prompts (such as which column contains information pertinent to the task) reduced the number of queries by more than 20%, which shows how steerability helps. So, the agent is like a loudmouth politician who spews less bullshit when his handlers give him some direction.
The case studies illustrate the four features of agentic speculation: Scale (more attempts do improve success), Redundancy (most attempts repeat prior work), Heterogeneity (workloads mix metadata exploration with partial and complete solutions), and Steerability (agents can be nudged toward efficiency).
Architecture
The proposed architecture aims to redesign the database stack for agent workloads. The key idea is that LLM agents send probes instead of bare SQL. These probes include not just queries but also "briefs" (natural language descriptions of intent, tolerance for approximation, and hints about the phase of work). Communication is key, folks! The database, in turn, parses these probes through an agentic interpreter, optimizes them via a probe optimizer that satisfices rather than guarantees exact results. It then executes these queries against a storage layer augmented with an agentic memory store and a shared transaction manager designed for speculative branching and rollback. Alongside answers, the system may return proactive feedback (hints, cost estimates, schema nudges) to steer agents.
The architecture is maybe too tidy. It shows a single agent swarm funneling probes into a single database engine, which responds in kind. So, this looks very much like a single-client, single-node system. There is no real discussion of multi-tenancy: what happens when two clients, with different goals and different access privileges, hit the same backend? Does one client's agentic memory contaminate another's? Are cached probes and approximations shared across tenants, and if so, who arbitrates correctness and privacy? These questions are briefly mentioned among the privacy concerns in Section 6, but the architecture itself is silent. Whether this single-client abstraction can scale to the real, distributed, multi-tenant world remains the important question.
Query Interfaces
Section 4 focuses on the interface between agents and databases. Probes extend SQL into a dialogue by bundling multiple queries together with a natural-language "brief" describing goals, priorities, and tolerance for error. This allows the system to understand not just what is being asked, but why. For example, is the agent in metadata exploration or solution formulation mode? The database can then prioritize accordingly, providing a rough sample for schema discovery and a more exact computation for validation.
Two directions stand out for me. First, on the agent-to-system side, probes may request things SQL cannot express, like "find tables semantically related to electronics". This would require embedding-based similarity operators built into the DBMS.
Second, on the system-to-agent side, the database is now encouraged to become proactive, returning not just answers but feedback. These "sleeper agents" inside the DB can explain why a query returned empty results, suggest alternative tables, or give cost estimates so the agent can rethink a probe before execution.
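The paper stays at the level of concepts, but to make the interface concrete, here is a rough sketch of what a probe and its proactive response might look like (the field names are my own, not the paper's):
// A hypothetical probe: a batch of queries bundled with a natural-language brief,
// a tolerance for approximation, and the current phase of work.
const probe = {
  brief: "Exploring which tables describe electronics sales before writing the real query",
  phase: "exploration",                       // vs. "solution formulation" or "validation"
  approximation: { maxRelativeError: 0.1 },   // "good enough" answers are fine at this stage
  queries: [
    "SELECT column_name FROM information_schema.columns WHERE table_name = 'sales'",
    "SELECT category, SUM(amount) FROM sales GROUP BY category"
  ],
  semanticHints: ["find tables semantically related to electronics"]  // beyond plain SQL
};

// A hypothetical proactive response: an approximate answer plus feedback to steer the agent.
const response = {
  result: { approximate: true, rows: [{ category: "electronics", amount: 1.2e6 }] },
  feedback: {
    suggestedTables: ["sales", "product_catalog"],
    costEstimate: { fullScanSeconds: 42 },
    note: "category has 3% NULLs; consider filtering"
  }
};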
Processing and Optimizing Probes
Section 5 focuses on how to process probes at scale and what it means to optimize them. The key shift is that the database no longer aims for exact answers to each query. Instead, it seeks to satisfice: provide results that are good enough for the agent to decide its next step.
The paper calls this the approximation technique and presents it as twofold. First, it provides exploratory scaffolding: quick samples, coarse aggregates, and partial results that help the agent discover which tables, filters, and joins matter. Second, it can be decision-making approximation: estimates with bounded error that may themselves be the final answer, because the human behind the agent cares more about trends than exact counts.
Let's consider the task of finding out why profits from coffee bean sales in Berkeley were low this year relative to last. A human analyst would cut to the chase: they would join sales with stores, compare 2024 vs. 2025, then check returns or closures. A schema-blind LLM agent would issue a flood of redundant queries, many of them dead ends. The proposed system here splits the difference: it prunes irrelevant exploration, offers approximate aggregates up front (coffee sales down ~15%), and caches this in memory so later probes can build from it.
To achieve this, the probe optimizer adapts familiar techniques. Multi-query optimization collapses redundant subplans, approximate query processing provides fast sketches instead of full scans, and incremental evaluation streams partial results with early stopping when the trend is clear. The optimizer works both within a batch of probes (intra-probe) and across turns (inter-probe). It caches results and materializes common joins so that the agent's next attempts don't repeat the same work. The optimization goal is not minimizing per-query latency but minimizing the total interaction time between agent and database, a subtle but important shift.
Indexing, Storage, and Transactions
Section 6 addresses the lower layers of the stack: indexing, storage, and transactions. In order to deal with the dynamic, overlapping, and branch-heavy nature of agentic speculation, it proposes an agentic memory store for semantic grounding, and a new transactional model for branched updates.
The agentic memory store is essentially a semantic cache. It stores results of prior probes, metadata, column encodings, and even embeddings to support similarity search. This way, when the agent inevitably repeats itself, the system can serve cached or related results. The open problem is staleness: if schemas or data evolve, cached grounding may mislead future probes. Traditional DBs handle this through strict invalidation (drop and recompute indexes, refresh materialized views). The paper hints that agentic memory may simply be good enough until corrected, a looser consistency model that may suit LLM's temperament.
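The paper does not prescribe an implementation, but as a rough sketch of the idea (all names here are mine), an agentic memory store could be little more than an exact-match cache with an embedding-similarity fallback and a "good enough until corrected" policy:
// Sketch only: cache prior probe results keyed by a normalized query,
// with embedding similarity as a fallback lookup for related probes.
const cosine = (a, b) => {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]; }
  return dot / Math.sqrt(na * nb);
};

class AgenticMemory {
  constructor(similarityThreshold = 0.9) {
    this.entries = [];                 // { key, embedding, result, storedAt }
    this.threshold = similarityThreshold;
  }
  put(key, embedding, result) {
    this.entries.push({ key, embedding, result, storedAt: Date.now() });
  }
  get(key, embedding) {
    // exact hit on the normalized query, else the nearest semantic neighbor above the threshold
    const exact = this.entries.find(e => e.key === key);
    if (exact) return exact.result;
    let best = null, bestScore = -1;
    for (const e of this.entries) {
      const score = cosine(embedding, e.embedding);
      if (score > bestScore) { best = e; bestScore = score; }
    }
    return bestScore >= this.threshold ? best.result : null;  // possibly stale: "good enough until corrected"
  }
}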
For branched updates, the paper proposes "a new transactions framework that is centered on state sharing across probes, each of which may be independently attempting to complete a user-defined sequence of updates". The paper argues for multi-world isolation: each branch must be logically isolated, but may physically overlap to exploit shared state. Supporting thousands of concurrent speculative branches requires something beyond Postgres-style MVCC or Aurora's copy-on-write snapshots.
Discussion
The paper offers an ambitious rethinking of how databases should respond to the arrival of LLM agents. This naturally leaves several open questions for discussion.
In my view, the paper frames the problem asymmetrically: agents are messy, exploratory, redundant, so databases must bend to accommodate them. But is that the only path forward? Alternatively, agents could be fine-tuned to issue smarter probes that are more schema-aware, less redundant, more considerate of cost. A protocol of mutual compromise seems more sustainable than a one-sided redesign. Otherwise we risk ossifying the data systems around today's inefficient LLM habits.
Multi-client operation remains an open issue. The architecture is sketched as though one user's army of agents owns the system. Real deployments will have many clients, with different goals and different access rights, colliding on the same backend. What does agentic memory mean in this context? Similarly, how does load management work? How do we allocate resources fairly among tenants when each may field thousands of speculative queries per second? Traditional databases long ago developed notions of connection pooling, admission control, and multi-tenant isolation; agent-first systems will need new equivalents attuned to speculation.
Finally, there is the question of distribution. The architecture as presented looks like a single-node system: one interpreter, one optimizer, one agentic memory, one transaction manager. Yet the workloads described are precisely the heavy workloads that drove databases toward distributed execution. How should agentic memory be partitioned or replicated across nodes? How would speculative branching work here? How can bandwidth limits be managed when repeated scans, approximate sampling, and multi-query optimization saturate storage I/O? How can cross-shard communication be kept from overwhelming the system when speculative branches and rollbacks trigger network communication at scale?
Future Directions: A Neurosymbolic Angle
If we squint, there is a neurosymbolic flavor to this entire setup. LLM agents represent the neural side: fuzzy reasoning, associative/semantic search, and speculative exploration. Databases constitute the symbolic side with schemas, relational algebra, logical operators, and transactional semantics. The paper is then all about creating an interface where the neural can collaborate with the symbolic by combining the flexibility of learned models with the structure and rigor of symbolic systems.
Probes are already halfway to symbolic logic queries: part SQL fragments, part logical forms, and part neural briefs encoding intent and constraints. If databases learn to proactively steer agents with rules and constraints, and if agents learn to ask more structured probes, the result would look even more like a neurosymbolic reasoning system, where neural components generate hypotheses and symbolic databases test, prune, and ground them. If that happens, we can talk about building a new kind of reasoning stack where the two halves ground and reinforce each other.
MongoDB as an AI-First Platform
Document databases offer an interesting angle on the AI–database melding problem. Their schema flexibility and tolerance for semistructured data make them well-suited for the exploratory phase of agent workloads, when LLMs are still feeling out what fields and joins matter. The looseness of document stores may align naturally with the fuzziness of LLM probes, especially when embeddings are brought into play for semantic search.
MongoDB's acquisition of Voyage AI points at this convergence. With built-in embeddings and vector search, MongoDB aims to support probes that ask for semantically similar documents and provide approximate retrieval early in the exploration phase.
How the 2018 Berkeley AI-Systems Vision Fared
Back in 2018, the Berkeley systems group presented a broad vision of the systems challenges for AI. Continuing our tradition of checking in on influential Berkeley AI-systems papers, let's give a brief evaluation. Many of its predictions were directionally correct: specialized hardware, privacy and federated learning, and explainability. Others remain underdeveloped, like cloud–edge integration and continual learning in dynamic environments. What it clearly missed was the rise and dominance of LLMs as the interface to data and applications. As I said back then, plans are useless, but planning is indispensable.
Compared with that blue sky agenda, this new Agent-First Data Systems paper is more technical, grounded, and focused. It does not try to map a decade of AI systems research, but rather focuses on a single pressing problem and proposes mechanisms to cope.
MySQL with Diagrams Part Three: The Life Story of the Writing Process
September 16, 2025
What is Percona’s Transparent Data Encryption Extension for PostgreSQL (pg_tde)?
MongoDB Multikey Indexes and Index Bound Optimization
Previously, I discussed how MongoDB keeps track of whether indexed fields contain arrays. This matters because if the database knows a filter is operating on scalar values, it can optimize index range scans.
As an example, let's begin with a collection that has an index on two fields, both containing only scalar values:
db.demo.createIndex( { field1:1, field2:1 } );
db.demo.insertOne(
{ _id: 0, field1: 2 , field2: "y" },
);
With arrays, each combination is stored as a separate entry in the index. Instead of diving into the internals (as in the previous post), you can visualize this with an aggregation pipeline by unwinding every field:
db.demo.aggregate([
{ $unwind: "$field1" },
{ $unwind: "$field2" },
{ $project: { field1: 1, field2: 1, "document _id": "$_id", _id: 0 } },
{ $sort: { "document _id": 1 } }
]);
[ { field1: 2, field2: 'y', 'document _id': 0 } ]
Note that this is simply an example to illustrate how index entries are created, ordered, and how they belong to a document. In a real index, the internal key is used. However, since showRecordId() can be used only on cursors and not in aggregation pipelines, I displayed "_id" instead.
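For reference, the actual record ids can be displayed on a find() cursor (a quick side check; showRecordId() is a cursor method):
db.demo.find().showRecordId();
// each document comes back with an internal '$recordId' field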
Single-key index with bound intersection
As my documents contain only scalar values, there's a single index entry per document and the index is not multikey. This is visible in the execution plan as isMultiKey: false and multiKeyPaths: { field1: [], field2: [] } with empty markers:
db.demo.find(
{ field1: { $gt: 1, $lt: 3 } }
).explain("executionStats").executionStats;
{
executionSuccess: true,
nReturned: 1,
executionTimeMillis: 0,
totalKeysExamined: 1,
totalDocsExamined: 1,
executionStages: {
isCached: false,
stage: 'FETCH',
nReturned: 1,
executionTimeMillisEstimate: 0,
works: 2,
advanced: 1,
needTime: 0,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 1,
docsExamined: 1,
alreadyHasObj: 0,
inputStage: {
stage: 'IXSCAN',
nReturned: 1,
executionTimeMillisEstimate: 0,
works: 2,
advanced: 1,
needTime: 0,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 1,
keyPattern: { field1: 1, field2: 1 },
indexName: 'field1_1_field2_1',
isMultiKey: false,
multiKeyPaths: { field1: [], field2: [] },
isUnique: false,
isSparse: false,
isPartial: false,
indexVersion: 2,
direction: 'forward',
indexBounds: { field1: [ '(1, 3)' ], field2: [ '[MinKey, MaxKey]' ] },
keysExamined: 1,
seeks: 1,
dupsTested: 0,
dupsDropped: 0
}
}
}
As the query planner knows that there's only one index entry per document, the filter { field1: {$gt: 1, $lt: 3} } can be applied as one index range indexBounds: { field1: [ '(1, 3)' ], field2: [ '[MinKey, MaxKey]' ] }, reading only the index entry (keysExamined: 1) required to get the result (nReturned: 1).
Multikey index with bound intersection
I add a document with an array in field2:
db.demo.insertOne(
{ _id:1, field1: 2 , field2: [ "x", "y", "z" ] },
);
My visualization of the index entries shows multiple keys per document:
db.demo.aggregate([
{ $unwind: "$field1" },
{ $unwind: "$field2" },
{ $project: { field1: 1, field2: 1, "document _id": "$_id", _id: 0 } } ,
{ $sort: { "document _id": 1 } }
]);
[
{ field1: 2, field2: 'y', 'document _id': 0 },
{ field1: 2, field2: 'x', 'document _id': 1 },
{ field1: 2, field2: 'y', 'document _id': 1 },
{ field1: 2, field2: 'z', 'document _id': 1 }
]
The index is marked as multikey (isMultiKey: true), but only for field2 (multiKeyPaths: { field1: [], field2: [ 'field2' ] }):
db.demo.find(
{ field1: { $gt: 1, $lt: 3 } }
).explain("executionStats").executionStats;
{
executionSuccess: true,
nReturned: 2,
executionTimeMillis: 0,
totalKeysExamined: 4,
totalDocsExamined: 2,
executionStages: {
isCached: false,
stage: 'FETCH',
nReturned: 2,
executionTimeMillisEstimate: 0,
works: 5,
advanced: 2,
needTime: 2,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 1,
docsExamined: 2,
alreadyHasObj: 0,
inputStage: {
stage: 'IXSCAN',
nReturned: 2,
executionTimeMillisEstimate: 0,
works: 5,
advanced: 2,
needTime: 2,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 1,
keyPattern: { field1: 1, field2: 1 },
indexName: 'field1_1_field2_1',
isMultiKey: true,
multiKeyPaths: { field1: [], field2: [ 'field2' ] },
isUnique: false,
isSparse: false,
isPartial: false,
indexVersion: 2,
direction: 'forward',
indexBounds: { field1: [ '(1, 3)' ], field2: [ '[MinKey, MaxKey]' ] },
keysExamined: 4,
seeks: 1,
dupsTested: 4,
dupsDropped: 2
}
}
}
As the query planner knows that there's only one value per document for field1, the multiple keys all have the same value for this field. The filter can be applied to it with tight bounds ((1, 3)), and the duplicate keys pointing to the same document are deduplicated (dupsTested: 4, dupsDropped: 2).
Multikey index with larger bound
I add a document with an array in field1:
db.demo.insertOne(
{ _id:2, field1: [ 0,5 ] , field2: "x" },
);
My visualization of the index entries shows the combinations:
db.demo.aggregate([
{ $unwind: "$field1" },
{ $unwind: "$field2" },
{ $project: { field1: 1, field2: 1, "document _id": "$_id", _id: 0 } } ,
{ $sort: { "document _id": 1 } }
]);
[
{ field1: 2, field2: 'y', 'document _id': 0 },
{ field1: 2, field2: 'x', 'document _id': 1 },
{ field1: 2, field2: 'y', 'document _id': 1 },
{ field1: 2, field2: 'z', 'document _id': 1 },
{ field1: 0, field2: 'x', 'document _id': 2 },
{ field1: 5, field2: 'x', 'document _id': 2 }
]
If I apply the filter to the collection, the document with {_id: 2} matches because it has values of field1 greater than 1 and values of field1 lower than 3 (I didn't use $elemMatch to apply to the same element):
db.demo.aggregate([
{ $match: { field1: { $gt: 1, $lt: 3 } } },
{ $unwind: "$field1" },
{ $unwind: "$field2" },
{ $project: { field1: 1, field2: 1, "document _id": "$_id", _id: 0 } } ,
{ $sort: { "document _id": 1 } }
]);
[
{ field1: 2, field2: 'y', 'document _id': 0 },
{ field1: 2, field2: 'x', 'document _id': 1 },
{ field1: 2, field2: 'y', 'document _id': 1 },
{ field1: 2, field2: 'z', 'document _id': 1 },
{ field1: 0, field2: 'x', 'document _id': 2 },
{ field1: 5, field2: 'x', 'document _id': 2 }
]
However, if I apply the same filter after the $unwind that simulates the index entries, there's no entry for {_id: 2} that satisfies { field1: { $gt: 1, $lt: 3 } }:
db.demo.aggregate([
{ $unwind: "$field1" },
{ $match: { field1: { $gt: 1, $lt: 3 } } },
{ $unwind: "$field2" },
{ $project: { field1: 1, field2: 1, "document _id": "$_id", _id: 0 } } ,
{ $sort: { "document _id": 1 } }
]);
[
{ field1: 2, field2: 'y', 'document _id': 0 },
{ field1: 2, field2: 'x', 'document _id': 1 },
{ field1: 2, field2: 'y', 'document _id': 1 },
{ field1: 2, field2: 'z', 'document _id': 1 }
]
This shows that the query planner cannot use the tight range field1: [ '(1, 3)' ] anymore. Because field1 is multikey (multiKeyPaths: { field1: [ 'field1' ] }), the planner cannot intersect the two bounds on field1 during the index scan and applies only one of them ($lt: 3). The other predicate must be evaluated after fetching the document that contains the whole array (filter: { field1: { '$gt': 1 } }):
db.demo.find(
{ field1: { $gt: 1, $lt: 3 } }
).explain("executionStats").executionStats;
{
executionSuccess: true,
nReturned: 3,
executionTimeMillis: 0,
totalKeysExamined: 5,
totalDocsExamined: 3,
executionStages: {
isCached: false,
stage: 'FETCH',
filter: { field1: { '$gt': 1 } },
nReturned: 3,
executionTimeMillisEstimate: 0,
works: 7,
advanced: 3,
needTime: 2,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 1,
docsExamined: 3,
alreadyHasObj: 0,
inputStage: {
stage: 'IXSCAN',
nReturned: 3,
executionTimeMillisEstimate: 0,
works: 6,
advanced: 3,
needTime: 2,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 1,
keyPattern: { field1: 1, field2: 1 },
indexName: 'field1_1_field2_1',
isMultiKey: true,
multiKeyPaths: { field1: [ 'field1' ], field2: [ 'field2' ] },
isUnique: false,
isSparse: false,
isPartial: false,
indexVersion: 2,
direction: 'forward',
indexBounds: { field1: [ '[-inf.0, 3)' ], field2: [ '[MinKey, MaxKey]' ] },
keysExamined: 5,
seeks: 1,
dupsTested: 5,
dupsDropped: 2
}
}
}
This execution plan is logically equivalent to the following:
db.demo.aggregate([
// simulate index entries as combinations
{ $unwind: "$field1" },
{ $unwind: "$field2" },
// simulate index range scan bounds
{ $match: { field1: { $lt: 3 } } },
// simulate deduplication
{ $group: { _id: "$_id" } },
// simulate fetching the full document
{ $lookup: { from: "demo", localField: "_id", foreignField: "_id", as: "fetch" } },
{ $unwind: "$fetch" },
{ $replaceRoot: { newRoot: "$fetch" } },
// apply the remaining filter
{ $match: { field1: { $gt: 1 } } },
]).explain();
I show this to make it easier to understand why MongoDB cannot intersect the index bounds in this case, resulting in a wider scan range on field1 ('[-inf.0, 3)' instead of '(1, 3)') when the filter must apply to multiple keys (this behavior would be different if the query used $elemMatch). MongoDB allows a flexible schema where a field can be an array, but it keeps track of this to optimize index range scans when it knows that a field contains only scalars.
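To illustrate the parenthetical remark above, here is a sketch of the same query with $elemMatch, which requires both predicates to match the same array element and therefore lets the planner keep the intersected bounds (note that $elemMatch matches only documents where field1 is an array, so the result set is not the same):
db.demo.find(
 { field1: { $elemMatch: { $gt: 1, $lt: 3 } } }
).explain("executionStats").executionStats;
// the IXSCAN stage should now show intersected bounds: field1: [ '(1, 3)' ]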
Final notes
MongoDB’s flexible schema lets you embed many‑to‑many relationships within a document, improving data locality and often eliminating the need for joins. Despite this, most queries traverse one-to-many relationships, building hierarchical views for particular use cases. When optimizing access through secondary indexes, it’s important not to combine too many multikey fields. If you attempt to do so, MongoDB will block both the creation of such indexes and insertion into collections containing them:
db.demo.insertOne(
{ _id:3, field1: [ 0,5 ] , field2: [ "x", "y", "z" ] },
);
MongoServerError: cannot index parallel arrays [field2] [field1]
MongoDB doesn't allow compound indexes on two array fields from the same document, known as "parallel arrays." Indexing such fields would generate a massive number of index entries due to all possible element combinations, making maintenance and semantics unmanageable. If you attempt this, MongoDB rejects it with a "cannot index parallel arrays" error. This rule ensures index entries remain well-defined. The check is per-document, so the compound index can still be created as long as no single document forces the same index to store entries for multiple arrays.
While the execution plan may display isMultiKey and multiKeyPaths, the multiKey status is managed on a per-index and per-field path basis. This state is updated automatically as documents are inserted, updated, or deleted, and stored in the index metadata, as illustrated in the previous post, rather than recalculated dynamically for each query.
I used top-level fields in this demonstration but arrays can be nested — which is why the list is called "path". multiKeyPaths is the list of dot-separated paths within a field that cause the index to be multikey (i.e., where MongoDB encountered arrays).
MongoDB determines precise index bounds on compound indexes by tracking which fields are multikey using multiKeyPaths metadata. This allows optimized range scans on scalar fields, even if other indexed fields contain arrays. What the query planner can do depends on the path and the presence of an array in a field within the path. There are examples documented in Compound Bounds of Multiple Fields from the Same Array. To show a quick example, I add a field with a nested array and an index on its fields:
db.demo.createIndex( { "obj.sub1":1, "obj.sub2":1 } )
;
db.demo.insertOne(
{ _id:4, obj: { sub1: [ 1, 2, 3 ] , sub2: "x" } },
);
db.demo.find(
{ "obj.sub1": { $gt:1 } , "obj.sub2": { $lt: 3 } }
).explain("executionStats").executionStats
;
The plan shows that the multikey status comes from only one array field (obj.sub1) and all filters are pushed down to the index bounds:
keyPattern: { 'obj.sub1': 1, 'obj.sub2': 1 },
isMultiKey: true,
multiKeyPaths: { 'obj.sub1': [ 'obj.sub1' ], 'obj.sub2': [] },
indexBounds: {
'obj.sub1': [ '(1, inf.0]' ],
'obj.sub2': [ '[-inf.0, 3)' ]
}
I insert another document with an array at a higher level:
db.demo.insertOne(
{ _id:5, obj: [ { sub1: [ 1, 2,
Processing large jobs with Edge Functions, Cron, and Queues
Learn how to build scalable data processing pipelines using Supabase Edge Functions, cron jobs, and database queues and handle large workloads without timeouts or crashes.
Defense in Depth for MCP Servers
Learn about the security risks of connecting AI agents to databases and how to implement defense in depth strategies to protect your data from prompt injection attacks.
September 15, 2025
Deploying Percona Operator for MongoDB Across GKE Clusters with MCS
In response to a developer asking about systems
Sometimes I get asked questions that would be more fun to answer in public. All letters are treated as anonymous unless permission is otherwise granted.
Hey [Redacted]! It's great to hear from you. I'm very glad you joined the coffee club and met some good folks. :)
You asked how to learn about systems. A great question! I think I need to start first with what I mean when I say systems.
My definition of systems is all of the underlying software we developers use but are taught not to think about because they are so solid: our compilers and interpreters, our databases, our operating system, our browser, and so on. We think of them as basically not having bugs, we just count on them to be correct and fast enough so we can build the applications that really matter to users.
But 1) some developers do actually have to work on these fundamental blocks (compilers, databases, operating systems, browsers, etc.) and 2) it's not thaaaat hard to get into this development professionally and 3) even if you don't get into it professionally, having a better understanding of these fundamental blocks will make you a better application developer. At least I think so.
To get into systems I think it starts by you just questioning how each layer you build on works. Try building that layer yourself. For example you've probably used a web framework like Rails or Next.js. But you can just go and write that layer yourself too (for education).
And you've probably used Postgres or SQLite or DynamoDB. But you can also just go and write that layer yourself (for education). It's this habit of thinking and digging into the next lower layer that will get you into systems. Basically, not being satisfied with the black box.
I do not think there are many good books on programming in general, and very very few must-read ones, but one that I recommend to everybody is Designing Data Intensive Applications. I think it's best if you read it with a group of people. (My book club will read it in December when the 2nd edition comes out, you should join.) But this book is specific to data obviously and not interested in the fundamentals of other systems things like compilers or operating systems or browsers or so on.
Also, I see getting into this as a long-term thing. Throughout my whole career (almost 11 years now) I definitely always tried to dig into compilers and interpreters, I wrote and blogged about toy implementations a lot. And then 5 years ago I started digging into databases and saw that there was more career potential there. But it still took 4 years until I got my first job as a developer working on a database (the job I currently have).
Things take time to learn and that's ok! You have a long career to look forward to. And if you end up not wanting to dig into this stuff that's totally fine too. I think very few developers actually do. And they still have fine careers.
Anyway, I hope this is at least mildly useful. I hope you join nycsystems.xyz as well and look forward to seeing you at future coffee clubs!
Cheers,
Phil
September 14, 2025
MongoDB Internals: How Collections and Indexes Are Stored in WiredTiger
WiredTiger is MongoDB’s default storage engine, but what really occurs behind the scenes when collections and indexes are saved to disk? In this short deep dive, we’ll explore the internals of WiredTiger data files, covering everything from _mdb_catalog metadata and B-Tree page layouts to BSON storage, primary and secondary indexes, and multi-key array handling. The goal is to introduce useful low-level tools like wt and other utilities.
I ran this experiment in a Docker container, set up as described in a previous blog post:
docker run --rm -it --cap-add=SYS_PTRACE mongo bash
# install required packages
apt-get update && apt-get install -y git xxd strace curl jq python3 python3-dev python3-pip python3-venv python3-pymongo python3-bson build-essential cmake gcc g++ libstdc++-12-dev libtool autoconf automake swig liblz4-dev zlib1g-dev libmemkind-dev libsnappy-dev libsodium-dev libzstd-dev
# get WiredTiger main branch
curl -L $(curl -s https://api.github.com/repos/wiredtiger/wiredtiger/releases/latest | jq -r '.tarball_url') -o wiredtiger.tar.gz
git clone https://github.com/wiredtiger/wiredtiger.git
cd wiredtiger
# Compile
mkdir build && cmake -S /wiredtiger -B /wiredtiger/build \
-DCMAKE_C_FLAGS="-O0 -Wno-error -Wno-format-overflow -Wno-error=array-bounds -Wno-error=format-overflow -Wno-error=nonnull" \
-DHAVE_BUILTIN_EXTENSION_SNAPPY=1 \
-DCMAKE_BUILD_TYPE=Release
cmake --build /wiredtiger/build
# add `wt` binaries and other tools in the PATH
export PATH=$PATH:/wiredtiger/build:/wiredtiger/tools
# Start mongodb
mongod &
I use the mongo image, add the WiredTiger sources from the main branch, compile it to get wt, and start mongod.
I create a small collection with three documents, and an index, and stop mongod:
mongosh <<'JS'
db.franck.insertMany([
{_id:"aaa",val1:"xxx",val2:"yyy",val3:"zzz",msg:"hello world"},
{_id:"bbb",val1:"xxx",val2:"yyy",val3:"zzz",msg:["hello","world"]},
{_id:"ccc",val1:"xxx",val2:"yyy",val3:"zzz",msg:["hello","world","hello","again"]}
]);
db.franck.createIndex({_id:1,val1:1,val2:1,val3:1,msg:1});
db.franck.find().showRecordId();
use admin;
db.shutdownServer();
JS
I stop MongoDB so that I can access the WiredTiger files with wt without them being opened and locked by another program. Before stopping, I displayed the documents:
[
{
_id: 'aaa',
val1: 'xxx',
val2: 'yyy',
val3: 'zzz',
msg: 'hello world',
'$recordId': Long('1')
},
{
_id: 'bbb',
val1: 'xxx',
val2: 'yyy',
val3: 'zzz',
msg: [ 'hello', 'world' ],
'$recordId': Long('2')
},
{
_id: 'ccc',
val1: 'xxx',
val2: 'yyy',
val3: 'zzz',
msg: [ 'hello', 'world', 'hello', 'again' ],
'$recordId': Long('3')
}
]
The files are stored in the default WiredTiger directory, /data/db. The MongoDB catalog, which maps MongoDB collections to their storage attributes, is stored in the WiredTiger table _mdb_catalog:
root@72cf410c04cb:/wiredtiger# ls -altU /data/db
drwxr-xr-x. 4 root root 32 Sep 1 23:10 ..
-rw-------. 1 root root 0 Sep 13 20:33 mongod.lock
drwx------. 2 root root 74 Sep 13 20:29 journal
-rw-------. 1 root root 21 Sep 12 22:47 WiredTiger.lock
-rw-------. 1 root root 50 Sep 12 22:47 WiredTiger
-rw-------. 1 root root 73728 Sep 13 20:33 WiredTiger.wt
-rw-r--r--. 1 root root 1504 Sep 13 20:33 WiredTiger.turtle
-rw-------. 1 root root 4096 Sep 13 20:33 WiredTigerHS.wt
-rw-------. 1 root root 36864 Sep 13 20:33 sizeStorer.wt
-rw-------. 1 root root 36864 Sep 13 20:33 _mdb_catalog.wt
-rw-------. 1 root root 114 Sep 12 22:47 storage.bson
-rw-------. 1 root root 20480 Sep 13 20:33 collection-0-3767590060964183367.wt
-rw-------. 1 root root 20480 Sep 13 20:33 index-1-3767590060964183367.wt
-rw-------. 1 root root 36864 Sep 13 20:33 collection-2-3767590060964183367.wt
-rw-------. 1 root root 36864 Sep 13 20:33 index-3-3767590060964183367.wt
-rw-------. 1 root root 20480 Sep 13 20:20 collection-4-3767590060964183367.wt
-rw-------. 1 root root 20480 Sep 13 20:20 index-5-3767590060964183367.wt
-rw-------. 1 root root 20480 Sep 13 20:33 index-6-3767590060964183367.wt
drwx------. 2 root root 4096 Sep 13 20:33 diagnostic.data
drwx------. 3 root root 21 Sep 13 20:17 .mongodb
-rw-------. 1 root root 20480 Sep 13 20:33 collection-0-6917019827977430149.wt
-rw-------. 1 root root 20480 Sep 13 20:23 index-1-6917019827977430149.wt
-rw-------. 1 root root 20480 Sep 13 20:25 index-2-6917019827977430149.wt
Catalog
_mdb_catalog maps MongoDB names to WiredTiger table names. wt lists the key (recordId) and value (BSON):
root@72cf410c04cb:~# wt -h /data/db dump table:_mdb_catalog
WiredTiger Dump (WiredTiger Version 12.0.0)
Format=print
Header
table:_mdb_catalog
access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,block_manager=default,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,disaggregated=(page_log=),encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(compare_timestamp=oldest_timestamp,enabled=false,file_metadata=,metadata_file=,panic_corrupt=true,repair=false),in_memory=false,ingest=,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=,bloom=,bloom_bit_count=,bloom_config=,bloom_hash_count=,bloom_oldest=,chunk_count_limit=,chunk_max=,chunk_size=,merge_max=,merge_min=),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source="file:_mdb_catalog.wt",split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,stable=,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=0),type=file,value_format=u,verbose=[],write_timestamp_usage=none
Data
\81
r\01\00\00\03md\00\eb\00\00\00\02ns\00\15\00\00\00admin.system.version\00\03options\00 \00\00\00\05uuid\00\10\00\00\00\04\ba\fc\c2\a9;EC\94\9d\a1\df(\c9\87\eaW\00\04indexes\00\97\00\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00+\00\00\00\02_id_\00\1c\00\00\00index-1-3767590060964183367\00\00\02ns\00\15\00\00\00admin.system.version\00\02ident\00!\00\00\00collection-0-3767590060964183367\00\00
\82
\7f\01\00\00\03md\00\fb\00\00\00\02ns\00\12\00\00\00local.startup_log\00\03options\003\00\00\00\05uuid\00\10\00\00\00\042}_\a9\16,L\13\aa*\09\b5<\ea\aa\d6\08capped\00\01\10size\00\00\00\a0\00\00\04indexes\00\97\00\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00+\00\00\00\02_id_\00\1c\00\00\00index-3-3767590060964183367\00\00\02ns\00\12\00\00\00local.startup_log\00\02ident\00!\00\00\00collection-2-3767590060964183367\00\00
\83
^\02\00\00\03md\00\a7\01\00\00\02ns\00\17\00\00\00config.system.sessions\00\03options\00 \00\00\00\05uuid\00\10\00\00\00\04D\09],\c6\15FG\b6\e2m!\ba\c4j<\00\04indexes\00Q\01\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\031\00\b7\00\00\00\03spec\00R\00\00\00\10v\00\02\00\00\00\03key\00\12\00\00\00\10lastUse\00\01\00\00\00\00\02name\00\0d\00\00\00lsidTTLIndex\00\10expireAfterSeconds\00\08\07\00\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\14\00\00\00\05lastUse\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00Y\00\00\00\02_id_\00\1c\00\00\00index-5-3767590060964183367\00\02lsidTTLIndex\00\1c\00\00\00index-6-3767590060964183367\00\00\02ns\00\17\00\00\00config.system.sessions\00\02ident\00!\00\00\00collection-4-3767590060964183367\00\00
\84
\a6\02\00\00\03md\00\e6\01\00\00\02ns\00\0c\00\00\00test.franck\00\03options\00 \00\00\00\05uuid\00\10\00\00\00\04>\04\ec\e2SUK\ca\98\e8\bf\fe\0eu\81L\00\04indexes\00\9b\01\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\031\00\01\01\00\00\03spec\00q\00\00\00\10v\00\02\00\00\00\03key\005\00\00\00\10_id\00\01\00\00\00\10val1\00\01\00\00\00\10val2\00\01\00\00\00\10val3\00\01\00\00\00\10msg\00\01\00\00\00\00\02name\00!\00\00\00_id_1_val1_1_val2_1_val3_1_msg_1\00\00\08ready\00\01\08multikey\00\01\03multikeyPaths\00?\00\00\00\05_id\00\01\00\00\00\00\00\05val1\00\01\00\00\00\00\00\05val2\00\01\00\00\00\00\00\05val3\00\01\00\00\00\00\00\05msg\00\01\00\00\00\00\01\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00m\00\00\00\02_id_\00\1c\00\00\00index-1-6917019827977430149\00\02_id_1_val1_1_val2_1_val3_1_msg_1\00\1c\00\00\00index-2-6917019827977430149\00\00\02ns\00\0c\00\00\00test.franck\00\02ident\00!\00\00\00collection-0-6917019827977430149\00\00
I can decode the BSON value with wt_to_mdb_bson.py to display it as JSON, and use jq to filter the file information about the collection I've created:
wt -h /data/db dump -x table:_mdb_catalog |
wt_to_mdb_bson.py -m dump -j |
jq 'select(.value.ns == "test.franck") |
{ns: .value.ns, ident: .value.ident, idxIdent: .value.idxIdent}
'
{
"ns": "test.franck",
"ident": "collection-0-6917019827977430149",
"idxIdent": {
"_id_": "index-1-6917019827977430149",
"_id_1_val1_1_val2_1_val3_1_msg_1": "index-2-6917019827977430149"
}
}
ident is the WiredTiger table name (collection-...) for the collection documents. All collections have a primary key index on "_id" and additional secondary indexes, stored in WiredTiger tables (index-...). These indexes are stored as .wt files in the data directory.
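As a side note, while mongod is still running, the same ident is visible from mongosh via the collection statistics (a quick check, not part of the offline wt workflow):
db.franck.stats().wiredTiger.uri
// expected to return something like 'statistics:table:collection-0-6917019827977430149'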
Collection
Using the WiredTiger table name for the collection, I dump the content, keys, and values, and decode it as JSON:
wt -h /data/db dump -x table:collection-0-6917019827977430149 |
wt_to_mdb_bson.py -m dump -j
{"key": "81", "value": {"_id": "aaa", "val1": "xxx", "val2": "yyy", "val3": "zzz", "msg": "hello world"}}
{"key": "82", "value": {"_id": "bbb", "val1": "xxx", "val2": "yyy", "val3": "zzz", "msg": ["hello", "world"]}}
{"key": "83", "value": {"_id": "ccc", "val1": "xxx", "val2": "yyy", "val3": "zzz", "msg": ["hello", "world", "hello", "again"]}}
The "key" here is the recordId: an internal 64-bit integer MongoDB uses (when not using clustered collections) to order documents in the collection table. The 0x80 offset comes from WiredTiger's order-preserving variable-length integer encoding, where small positive values fit in a single byte (a 0x80 marker plus the value).
I can also use wt_binary_decode.py to look at the file blocks. Here is the leaf page (page type: 7 (WT_PAGE_ROW_LEAF)) that contains my three documents as six key and value cells (ncells (oflow len): 6):
wt_binary_decode.py --offset 4096 --page 1 --verbose --split --bson /data/db/collection-0-6917019827977430149.wt
/data/db/collection-0-6917019827977430149.wt, position 0x1000/0x5000, pagelimit 1
Decode at 4096 (0x1000)
0: 00 00 00 00 00 00 00 00 1f 0f 00 00 00 00 00 00 5f 01 00 00
06 00 00 00 07 04 00 01 00 10 00 00 64 0a ec 4b 01 00 00 00
Page Header:
recno: 0
writegen: 3871
memsize: 351
ncells (oflow len): 6
page type: 7 (WT_PAGE_ROW_LEAF)
page flags: 0x4
version: 1
Block Header:
disk_size: 4096
checksum: 0x4bec0a64
block flags: 0x1
0: 28: 05 81
desc: 0x5 short key 1 bytes:
<packed 1 (0x1)>
1: 2a: 80 91 51 00 00 00 02 5f 69 64 00 04 00 00 00 61 61 61 00 02
76 61 6c 31 00 04 00 00 00 78 78 78 00 02 76 61 6c 32 00 04
00 00 00 79 79 79 00 02 76 61 6c 33 00 04 00 00 00 7a 7a 7a
00 02 6d 73 67 00 0c 00 00 00 68 65 6c 6c 6f 20 77 6f 72 6c
64 00 00
cell is valid BSON
{ '_id': 'aaa',
'msg': 'hello world',
'val1': 'xxx',
'val2': 'yyy',
'val3': 'zzz'}
2: 7d: 05 82
desc: 0x5 short key 1 bytes:
<packed 2 (0x2)>
3: 7f: 80 a0 60 00 00 00 02 5f 69 64 00 04 00 00 00 62 62 62 00 02
76 61 6c 31 00 04 00 00 00 78 78 78 00 02 76 61 6c 32 00 04
00 00 00 79 79 79 00 02 76 61 6c 33 00 04 00 00 00 7a 7a 7a
00 04 6d 73 67 00 1f 00 00 00 02 30 00 06 00 00 00 68 65 6c
6c 6f 00 02 31 00 06 00 00 00 77 6f 72 6c 64 00 00 00
cell is valid BSON
{ '_id': 'bbb',
'msg': ['hello', 'world'],
'val1': 'xxx',
'val2': 'yyy',
'val3': 'zzz'}
4: e1: 05 83
desc: 0x5 short key 1 bytes:
<packed 3 (0x3)>
5: e3: 80 ba 7a 00 00 00 02 5f 69 64 00 04 00 00 00 63 63 63 00 02
76 61 6c 31 00 04 00 00 00 78 78 78 00 02 76 61 6c 32 00 04
00 00 00 79 79 79 00 02 76 61 6c 33 00 04 00 00 00 7a 7a 7a
00 04 6d 73 67 00 39 00 00 00 02 30 00 06 00 00 00 68 65 6c
6c 6f 00 02 31 00 06 00 00 00 77 6f 72 6c 64 00 02 32 00 06
00 00 00 68 65 6c 6c 6f 00 02 33 00 06 00 00 00 61 67 61 69
6e 00 00 00
cell is valid BSON
{ '_id': 'ccc',
'msg': ['hello', 'world', 'hello', 'again'],
'val1': 'xxx',
'val2': 'yyy',
'val3': 'zzz'}
The script shows the raw hexadecimal bytes for the key, a description of the cell type, and the decoded logical value using WiredTiger’s order‑preserving integer encoding (packed int encoding). In this example, the raw byte 0x81 decodes to record ID 1:
0: 28: 05 81
desc: 0x5 short key 1 bytes:
<packed 1 (0x1)>
Here is the branch page (page type: 6 (WT_PAGE_ROW_INT)) that references it:
wt_binary_decode.py --offset 8192 --page 1 --verbose --split --bson /data/db/collection-0-6917019827977430149.wt
/data/db/collection-0-6917019827977430149.wt, position 0x2000/0x5000, pagelimit 1
Decode at 8192 (0x2000)
0: 00 00 00 00 00 00 00 00 20 0f 00 00 00 00 00 00 34 00 00 00
02 00 00 00 06 00 00 01 00 10 00 00 21 df 20 d6 01 00 00 00
Page Header:
recno: 0
writegen: 3872
memsize: 52
ncells (oflow len): 2
page type: 6 (WT_PAGE_ROW_INT)
page flags: 0x0
version: 1
Block Header:
disk_size: 4096
checksum: 0xd620df21
block flags: 0x1
0: 28: 05 00
desc: 0x5 short key 1 bytes:
""
1: 2a: 38 00 87 80 81 e4 4b eb ea 24
desc: 0x38 addr (leaf no-overflow) 7 bytes:
<packed 0 (0x0)> <packed 1 (0x1)> <packed 1273760356 (0x4bec0a64)>
As we have seen in the previous blog post, the pointer includes the checksum of the page it references (0x4bec0a64) to detect disc corruption.
Another utility, bsondump, can be used to display the output of wt dump -x as JSON, like wt_to_mdb_bson.py, but requires some filtering to get the BSON content:
wt -h /data/db dump -x table:collection-0-6917019827977430149 | # dump in hexa
egrep '025f696400' | # all documents have an "_id " field
xxd -r -p | # gets the plain binary data
bsondump --type=json # display BSON it as JSON
{"_id":"aaa","val1":"xxx","val2":"yyy","val3":"zzz","msg":"hello world"}
{"_id":"bbb","val1":"xxx","val2":"yyy","val3":"zzz","msg":["hello","world"]}
{"_id":"ccc","val1":"xxx","val2":"yyy","val3":"zzz","msg":["hello","world","hello","again"]}
2025-09-14T08:57:36.182+0000 3 objects found
It also provides a debug type output that gives more insights into how it is stored internally, especially for documents with arrays:
wt -h /data/db dump -x table:collection-0-6917019827977430149 | # dump in hexa
egrep '025f696400' | # all documents have an "_id " field
xxd -r -p | # gets the plain binary data
bsondump --type=debug # display BSON as it is stored
--- new object ---
size : 81
_id
type: 2 size: 13
val1
type: 2 size: 14
val2
type: 2 size: 14
val3
type: 2 size: 14
msg
type: 2 size: 21
--- new object ---
size : 96
_id
type: 2 size: 13
val1
type: 2 size: 14
val2
type: 2 size: 14
val3
type: 2 size: 14
msg
type: 4 size: 36
--- new object ---
size : 31
0
type: 2 size: 13
1
type: 2 size: 13
--- new object ---
size : 122
_id
type: 2 size: 13
val1
type: 2 size: 14
val2
type: 2 size: 14
val3
type: 2 size: 14
msg
type: 4 size: 62
--- new object ---
size : 57
0
type: 2 size: 13
1
type: 2 size: 13
2
type: 2 size: 13
3
type: 2 size: 13
2025-09-14T08:59:15.268+0000 3 objects found
Arrays in BSON are just sub-objects with the array position as a field name.
Primary index
RecordId is an internal, logical key used in the BTree to store the collection. It allows documents to be physically moved without fragmentation when they're updated. All indexes reference documents by recordId, not their physical location. Access by "_id" requires a unique index created automatically with the collection and stored as another WiredTiger table. Here is the content:
wt -h /data/db dump -p table:index-1-6917019827977430149
WiredTiger Dump (WiredTiger Version 12.0.0)
Format=print
Header
table:index-1-6917019827977430149
access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=8),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=,block_manager=default,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,disaggregated=(page_log=),encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(compare_timestamp=oldest_timestamp,enabled=false,file_metadata=,metadata_file=,panic_corrupt=true,repair=false),in_memory=false,ingest=,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=16k,key_format=u,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=16k,leaf_value_max=0,log=(enabled=true),lsm=(auto_throttle=,bloom=,bloom_bit_count=,bloom_config=,bloom_hash_count=,bloom_oldest=,chunk_count_limit=,chunk_max=,chunk_size=,merge_max=,merge_min=),memory_page_image_max=0,memory_page_max=5MB,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=true,prefix_compression_min=4,source="file:index-1-6917019827977430149.wt",split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,stable=,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=0),type=file,value_format=u,verbose=[],write_timestamp_usage=none
Data
<aaa\00\04
\00\08
<bbb\00\04
\00\10
<ccc\00\04
\00\18
There are three entries, one for each document, with the "_id" value (aaa,bbb,ccc) as the key, and the recordId as the value. The values are packed (see documentation), for example < prefixes a little-endian value.
In MongoDB’s KeyString format, the recordId is stored in a special packed encoding where three bits are added to the right of the big-endian value, to be able to store the length at the end of the key. The same is used when it is in the value part of the index entry, in a unique index. To decode it, you need to shift the last byte right by three bits. Here, 0x08 >> 3 = 1, 0x10 >> 3 = 2, and 0x18 >> 3 = 3, which are the recordId of my documents.
I decode the page that contains those index entries:
wt_binary_decode.py --offset 4096 --page 1 --verbose --split /data/db/index-1-6917019827977430149.wt
/data/db/index-1-6917019827977430149.wt, position 0x1000/0x5000, pagelimit 1
Decode at 4096 (0x1000)
0: 00 00 00 00 00 00 00 00 1f 0f 00 00 00 00 00 00 46 00 00 00
06 00 00 00 07 04 00 01 00 10 00 00 7c d3 87 60 01 00 00 00
Page Header:
recno: 0
writegen: 3871
memsize: 70
ncells (oflow len): 6
page type: 7 (WT_PAGE_ROW_LEAF)
page flags: 0x4
version: 1
Block Header:
disk_size: 4096
checksum: 0x6087d37c
block flags: 0x1
0: 28: 19 3c 61 61 61 00 04
desc: 0x19 short key 6 bytes:
"<aaa"
1: 2f: 0b 00 08
desc: 0xb short val 2 bytes:
"
2: 32: 19 3c 62 62 62 00 04
desc: 0x19 short key 6 bytes:
"<bbb"
3: 39: 0b 00 10
desc: 0xb short val 2 bytes:
""
4: 3c: 19 3c 63 63 63 00 04
desc: 0x19 short key 6 bytes:
"<ccc"
5: 43: 0b 00 18
desc: 0xb short val 2 bytes:
""
This utility doesn't decode the recordId, we need to shift it. There's no BSON to decode in the indexes.
Secondary index
Secondary indexes are similar, except that they can be composed of multiple fields, and any indexed field can contain an array, which may result in multiple index entries for a single document, like an inverted index.
MongoDB tracks which indexed fields contain arrays to improve query planning. A multikey index creates an entry for each array element, and if multiple fields are multikey, it stores entries for all combinations of their values. By knowing exactly which fields are multikey, the query planner can apply tighter index bounds when only one field is involved. This information is stored in the catalog as a "multikey" flag along with the specific "multikeyPaths":
wt -h /data/db dump -x table:_mdb_catalog |
wt_to_mdb_bson.py -m dump -j |
jq 'select(.value.ns == "test.franck") |
.value.md.indexes[] |
{name: .spec.name, key: .spec.key, multikey: .multikey, multikeyPaths: .multikeyPaths | keys}
'
{
"name": "_id_",
"key": {
"_id": { "$numberInt": "1" },
},
"multikey": false,
"multikeyPaths": [
"_id"
]
}
{
"name": "_id_1_val1_1_val2_1_val3_1_msg_1",
"key": {
"_id": { "$numberInt": "1" },
"val1": { "$numberInt": "1" },
"val2": { "$numberInt": "1" },
"val3": { "$numberInt": "1" },
"msg": { "$numberInt": "1" },
},
"multikey": true,
"multikeyPaths": [
"_id",
"msg",
"val1",
"val2",
"val3"
]
}
Here is the dump of my index on {_id:1,val1:1,val2:1,val3:1,msg:1}:
wt -h /data/db dump -p table:index-2-6917019827977430149
WiredTiger Dump (WiredTiger Version 12.0.0)
Format=print
Header
table:index-2-6917019827977430149
access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=8),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=,block_manager=default,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,disaggregated=(page_log=),encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(compare_timestamp=oldest_timestamp,enabled=false,file_metadata=,metadata_file=,panic_corrupt=true,repair=false),in_memory=false,ingest=,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=16k,key_format=u,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=16k,leaf_value_max=0,log=(enabled=true),lsm=(auto_throttle=,bloom=,bloom_bit_count=,bloom_config=,bloom_hash_count=,bloom_oldest=,chunk_count_limit=,chunk_max=,chunk_size=,merge_max=,merge_min=),memory_page_image_max=0,memory_page_max=5MB,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=true,prefix_compression_min=4,source="file:index-2-6917019827977430149.wt",split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,stable=,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=0),type=file,value_format=u,verbose=[],write_timestamp_usage=none
Data
<aaa\00<xxx\00<yyy\00<zzz\00<hello world\00\04\00\08
(null)
<bbb\00<xxx\00<yyy\00<zzz\00<hello\00\04\00\10
(null)
<bbb\00<xxx\00<yyy\00<zzz
MongoDB Internals: How Collections and Indexes Are Stored in WiredTiger
WiredTiger is MongoDB’s default storage engine, but what really occurs behind the scenes when collections and indexes are saved to disk? In this short deep dive, we’ll explore the internals of WiredTiger data files, covering everything from _mdb_catalog metadata and B-tree page layouts to BSON storage, primary and secondary indexes, and multikey array handling. The goal is to introduce useful low-level tools such as wt and the helper scripts that ship with WiredTiger.
I ran this experiment in a Docker container, set up as described in a previous blog post:
docker run --rm -it --cap-add=SYS_PTRACE mongo bash
# install required packages
apt-get update && apt-get install -y git xxd strace curl jq python3 python3-dev python3-pip python3-venv python3-pymongo python3-bson build-essential cmake gcc g++ libstdc++-12-dev libtool autoconf automake swig liblz4-dev zlib1g-dev libmemkind-dev libsnappy-dev libsodium-dev libzstd-dev
# get WiredTiger main branch
curl -L $(curl -s https://api.github.com/repos/wiredtiger/wiredtiger/releases/latest | jq -r '.tarball_url') -o wiredtiger.tar.gz
git clone https://github.com/wiredtiger/wiredtiger.git
cd wiredtiger
# Compile
mkdir build && cmake -S /wiredtiger -B /wiredtiger/build \
-DCMAKE_C_FLAGS="-O0 -Wno-error -Wno-format-overflow -Wno-error=array-bounds -Wno-error=format-overflow -Wno-error=nonnull" \
-DHAVE_BUILTIN_EXTENSION_SNAPPY=1 \
-DCMAKE_BUILD_TYPE=Release
cmake --build /wiredtiger/build
# add `wt` binaries and other tools in the PATH
export PATH=$PATH:/wiredtiger/build:/wiredtiger/tools
# Start mongodb
mongod &
I use the mongo image, add the WiredTiger sources from the main branch, compile them to get the wt utility, and start mongod.
I create a small collection with three documents and an index, and stop mongod:
mongosh <<'JS'
db.franck.insertMany([
{_id:"aaa",val1:"xxx",val2:"yyy",val3:"zzz",msg:"hello world"},
{_id:"bbb",val1:"xxx",val2:"yyy",val3:"zzz",msg:["hello","world"]},
{_id:"ccc",val1:"xxx",val2:"yyy",val3:"zzz",msg:["hello","world","hello","again"]}
]);
db.franck.createIndex({_id:1,val1:1,val2:1,val3:1,msg:1});
db.franck.find().showRecordId();
use admin;
db.shutdownServer();
JS
I stop MongoDB so that I can access the WiredTiger files with wt without them being opened and locked by another program. Before stopping, I displayed the documents:
[
{
_id: 'aaa',
val1: 'xxx',
val2: 'yyy',
val3: 'zzz',
msg: 'hello world',
'$recordId': Long('1')
},
{
_id: 'bbb',
val1: 'xxx',
val2: 'yyy',
val3: 'zzz',
msg: [ 'hello', 'world' ],
'$recordId': Long('2')
},
{
_id: 'ccc',
val1: 'xxx',
val2: 'yyy',
val3: 'zzz',
msg: [ 'hello', 'world', 'hello', 'again' ],
'$recordId': Long('3')
}
]
The files are stored in the default WiredTiger directory, /data/db. The MongoDB catalog, which maps MongoDB collections to their storage attributes, is stored in the WiredTiger table _mdb_catalog (the _mdb_catalog.wt file listed below):
root@72cf410c04cb:/wiredtiger# ls -altU /data/db
drwxr-xr-x. 4 root root 32 Sep 1 23:10 ..
-rw-------. 1 root root 0 Sep 13 20:33 mongod.lock
drwx------. 2 root root 74 Sep 13 20:29 journal
-rw-------. 1 root root 21 Sep 12 22:47 WiredTiger.lock
-rw-------. 1 root root 50 Sep 12 22:47 WiredTiger
-rw-------. 1 root root 73728 Sep 13 20:33 WiredTiger.wt
-rw-r--r--. 1 root root 1504 Sep 13 20:33 WiredTiger.turtle
-rw-------. 1 root root 4096 Sep 13 20:33 WiredTigerHS.wt
-rw-------. 1 root root 36864 Sep 13 20:33 sizeStorer.wt
-rw-------. 1 root root 36864 Sep 13 20:33 _mdb_catalog.wt
-rw-------. 1 root root 114 Sep 12 22:47 storage.bson
-rw-------. 1 root root 20480 Sep 13 20:33 collection-0-3767590060964183367.wt
-rw-------. 1 root root 20480 Sep 13 20:33 index-1-3767590060964183367.wt
-rw-------. 1 root root 36864 Sep 13 20:33 collection-2-3767590060964183367.wt
-rw-------. 1 root root 36864 Sep 13 20:33 index-3-3767590060964183367.wt
-rw-------. 1 root root 20480 Sep 13 20:20 collection-4-3767590060964183367.wt
-rw-------. 1 root root 20480 Sep 13 20:20 index-5-3767590060964183367.wt
-rw-------. 1 root root 20480 Sep 13 20:33 index-6-3767590060964183367.wt
drwx------. 2 root root 4096 Sep 13 20:33 diagnostic.data
drwx------. 3 root root 21 Sep 13 20:17 .mongodb
-rw-------. 1 root root 20480 Sep 13 20:33 collection-0-6917019827977430149.wt
-rw-------. 1 root root 20480 Sep 13 20:23 index-1-6917019827977430149.wt
-rw-------. 1 root root 20480 Sep 13 20:25 index-2-6917019827977430149.wt
Catalog
_mdb_catalog maps MongoDB names to WiredTiger table names. wt lists the key (recordId) and value (BSON):
root@72cf410c04cb:~# wt -h /data/db dump table:_mdb_catalog
WiredTiger Dump (WiredTiger Version 12.0.0)
Format=print
Header
table:_mdb_catalog
access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,block_manager=default,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,disaggregated=(page_log=),encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(compare_timestamp=oldest_timestamp,enabled=false,file_metadata=,metadata_file=,panic_corrupt=true,repair=false),in_memory=false,ingest=,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=,bloom=,bloom_bit_count=,bloom_config=,bloom_hash_count=,bloom_oldest=,chunk_count_limit=,chunk_max=,chunk_size=,merge_max=,merge_min=),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source="file:_mdb_catalog.wt",split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,stable=,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=0),type=file,value_format=u,verbose=[],write_timestamp_usage=none
Data
\81
r\01\00\00\03md\00\eb\00\00\00\02ns\00\15\00\00\00admin.system.version\00\03options\00 \00\00\00\05uuid\00\10\00\00\00\04\ba\fc\c2\a9;EC\94\9d\a1\df(\c9\87\eaW\00\04indexes\00\97\00\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00+\00\00\00\02_id_\00\1c\00\00\00index-1-3767590060964183367\00\00\02ns\00\15\00\00\00admin.system.version\00\02ident\00!\00\00\00collection-0-3767590060964183367\00\00
\82
\7f\01\00\00\03md\00\fb\00\00\00\02ns\00\12\00\00\00local.startup_log\00\03options\003\00\00\00\05uuid\00\10\00\00\00\042}_\a9\16,L\13\aa*\09\b5<\ea\aa\d6\08capped\00\01\10size\00\00\00\a0\00\00\04indexes\00\97\00\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00+\00\00\00\02_id_\00\1c\00\00\00index-3-3767590060964183367\00\00\02ns\00\12\00\00\00local.startup_log\00\02ident\00!\00\00\00collection-2-3767590060964183367\00\00
\83
^\02\00\00\03md\00\a7\01\00\00\02ns\00\17\00\00\00config.system.sessions\00\03options\00 \00\00\00\05uuid\00\10\00\00\00\04D\09],\c6\15FG\b6\e2m!\ba\c4j<\00\04indexes\00Q\01\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\031\00\b7\00\00\00\03spec\00R\00\00\00\10v\00\02\00\00\00\03key\00\12\00\00\00\10lastUse\00\01\00\00\00\00\02name\00\0d\00\00\00lsidTTLIndex\00\10expireAfterSeconds\00\08\07\00\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\14\00\00\00\05lastUse\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00Y\00\00\00\02_id_\00\1c\00\00\00index-5-3767590060964183367\00\02lsidTTLIndex\00\1c\00\00\00index-6-3767590060964183367\00\00\02ns\00\17\00\00\00config.system.sessions\00\02ident\00!\00\00\00collection-4-3767590060964183367\00\00
\84
\a6\02\00\00\03md\00\e6\01\00\00\02ns\00\0c\00\00\00test.franck\00\03options\00 \00\00\00\05uuid\00\10\00\00\00\04>\04\ec\e2SUK\ca\98\e8\bf\fe\0eu\81L\00\04indexes\00\9b\01\00\00\030\00\8f\00\00\00\03spec\00.\00\00\00\10v\00\02\00\00\00\03key\00\0e\00\00\00\10_id\00\01\00\00\00\00\02name\00\05\00\00\00_id_\00\00\08ready\00\01\08multikey\00\00\03multikeyPaths\00\10\00\00\00\05_id\00\01\00\00\00\00\00\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\031\00\01\01\00\00\03spec\00q\00\00\00\10v\00\02\00\00\00\03key\005\00\00\00\10_id\00\01\00\00\00\10val1\00\01\00\00\00\10val2\00\01\00\00\00\10val3\00\01\00\00\00\10msg\00\01\00\00\00\00\02name\00!\00\00\00_id_1_val1_1_val2_1_val3_1_msg_1\00\00\08ready\00\01\08multikey\00\01\03multikeyPaths\00?\00\00\00\05_id\00\01\00\00\00\00\00\05val1\00\01\00\00\00\00\00\05val2\00\01\00\00\00\00\00\05val3\00\01\00\00\00\00\00\05msg\00\01\00\00\00\00\01\00\12head\00\00\00\00\00\00\00\00\00\08backgroundSecondary\00\00\00\00\00\03idxIdent\00m\00\00\00\02_id_\00\1c\00\00\00index-1-6917019827977430149\00\02_id_1_val1_1_val2_1_val3_1_msg_1\00\1c\00\00\00index-2-6917019827977430149\00\00\02ns\00\0c\00\00\00test.franck\00\02ident\00!\00\00\00collection-0-6917019827977430149\00\00
I can decode the BSON values with wt_to_mdb_bson.py to display them as JSON, and use jq to filter the catalog entry for the collection I've created:
wt -h /data/db dump -x table:_mdb_catalog |
wt_to_mdb_bson.py -m dump -j |
jq 'select(.value.ns == "test.franck") |
{ns: .value.ns, ident: .value.ident, idxIdent: .value.idxIdent}
'
{
"ns": "test.franck",
"ident": "collection-0-6917019827977430149",
"idxIdent": {
"_id_": "index-1-6917019827977430149",
"_id_1_val1_1_val2_1_val3_1_msg_1": "index-2-6917019827977430149"
}
}
ident is the WiredTiger table name (collection-...) that stores the collection's documents. Every collection has a unique index on "_id", and may have additional secondary indexes, each stored in its own WiredTiger table (index-...). All of them appear as .wt files in the data directory.
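For a cross-check, the same mapping is also available from a running mongod through the collStats command. Here is a minimal Python sketch, assuming a default local connection and the usual collStats layout (the uri fields carry the WiredTiger table names); it is an illustration, not one of the tools used in this post:
from pymongo import MongoClient
# Sketch: read the storage idents from collStats while mongod is still running.
stats = MongoClient().test.command("collStats", "franck", indexDetails=True)
print(stats["wiredTiger"]["uri"])            # statistics:table:collection-0-...
for name, details in stats["indexDetails"].items():
    print(name, details["uri"])              # statistics:table:index-...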
Collection
Using the WiredTiger table name for the collection, I dump its content (keys and values) and decode it as JSON:
wt -h /data/db dump -x table:collection-0-6917019827977430149 |
wt_to_mdb_bson.py -m dump -j
{"key": "81", "value": {"_id": "aaa", "val1": "xxx", "val2": "yyy", "val3": "zzz", "msg": "hello world"}}
{"key": "82", "value": {"_id": "bbb", "val1": "xxx", "val2": "yyy", "val3": "zzz", "msg": ["hello", "world"]}}
{"key": "83", "value": {"_id": "ccc", "val1": "xxx", "val2": "yyy", "val3": "zzz", "msg": ["hello", "world", "hello", "again"]}}
The "key" here is the recordId—an internal, unsigned 64-bit integer MongoDB uses (when not using clustered collections) to order documents in the collection table. The 0x80 offset is because the storage key is stored as a signed 8‑bit integer, but encoded in an order-preserving way.
I can also use wt_binary_decode.py to look at the file pages. Here is the leaf page (page type: 7 (WT_PAGE_ROW_LEAF)) that contains my three documents as six key and value cells (ncells (oflow len): 6):
wt_binary_decode.py --offset 4096 --page 1 --verbose --split --bson /data/db/collection-0-6917019827977430149.wt
/data/db/collection-0-6917019827977430149.wt, position 0x1000/0x5000, pagelimit 1
Decode at 4096 (0x1000)
0: 00 00 00 00 00 00 00 00 1f 0f 00 00 00 00 00 00 5f 01 00 00
06 00 00 00 07 04 00 01 00 10 00 00 64 0a ec 4b 01 00 00 00
Page Header:
recno: 0
writegen: 3871
memsize: 351
ncells (oflow len): 6
page type: 7 (WT_PAGE_ROW_LEAF)
page flags: 0x4
version: 1
Block Header:
disk_size: 4096
checksum: 0x4bec0a64
block flags: 0x1
0: 28: 05 81
desc: 0x5 short key 1 bytes:
<packed 1 (0x1)>
1: 2a: 80 91 51 00 00 00 02 5f 69 64 00 04 00 00 00 61 61 61 00 02
76 61 6c 31 00 04 00 00 00 78 78 78 00 02 76 61 6c 32 00 04
00 00 00 79 79 79 00 02 76 61 6c 33 00 04 00 00 00 7a 7a 7a
00 02 6d 73 67 00 0c 00 00 00 68 65 6c 6c 6f 20 77 6f 72 6c
64 00 00
cell is valid BSON
{ '_id': 'aaa',
'msg': 'hello world',
'val1': 'xxx',
'val2': 'yyy',
'val3': 'zzz'}
2: 7d: 05 82
desc: 0x5 short key 1 bytes:
<packed 2 (0x2)>
3: 7f: 80 a0 60 00 00 00 02 5f 69 64 00 04 00 00 00 62 62 62 00 02
76 61 6c 31 00 04 00 00 00 78 78 78 00 02 76 61 6c 32 00 04
00 00 00 79 79 79 00 02 76 61 6c 33 00 04 00 00 00 7a 7a 7a
00 04 6d 73 67 00 1f 00 00 00 02 30 00 06 00 00 00 68 65 6c
6c 6f 00 02 31 00 06 00 00 00 77 6f 72 6c 64 00 00 00
cell is valid BSON
{ '_id': 'bbb',
'msg': ['hello', 'world'],
'val1': 'xxx',
'val2': 'yyy',
'val3': 'zzz'}
4: e1: 05 83
desc: 0x5 short key 1 bytes:
<packed 3 (0x3)>
5: e3: 80 ba 7a 00 00 00 02 5f 69 64 00 04 00 00 00 63 63 63 00 02
76 61 6c 31 00 04 00 00 00 78 78 78 00 02 76 61 6c 32 00 04
00 00 00 79 79 79 00 02 76 61 6c 33 00 04 00 00 00 7a 7a 7a
00 04 6d 73 67 00 39 00 00 00 02 30 00 06 00 00 00 68 65 6c
6c 6f 00 02 31 00 06 00 00 00 77 6f 72 6c 64 00 02 32 00 06
00 00 00 68 65 6c 6c 6f 00 02 33 00 06 00 00 00 61 67 61 69
6e 00 00 00
cell is valid BSON
{ '_id': 'ccc',
'msg': ['hello', 'world', 'hello', 'again'],
'val1': 'xxx',
'val2': 'yyy',
'val3': 'zzz'}
The script shows the raw hexadecimal bytes for the key, a description of the cell type, and the decoded logical value using WiredTiger’s order‑preserving integer encoding (packed int encoding). In this example, the raw byte 0x81 decodes to record ID 1:
0: 28: 05 81
desc: 0x5 short key 1 bytes:
<packed 1 (0x1)>
Here is the branch page (page type: 6 (WT_PAGE_ROW_INT)) that references it:
wt_binary_decode.py --offset 8192 --page 1 --verbose --split --bson /data/db/collection-0-6917019827977430149.wt
/data/db/collection-0-6917019827977430149.wt, position 0x2000/0x5000, pagelimit 1
Decode at 8192 (0x2000)
0: 00 00 00 00 00 00 00 00 20 0f 00 00 00 00 00 00 34 00 00 00
02 00 00 00 06 00 00 01 00 10 00 00 21 df 20 d6 01 00 00 00
Page Header:
recno: 0
writegen: 3872
memsize: 52
ncells (oflow len): 2
page type: 6 (WT_PAGE_ROW_INT)
page flags: 0x0
version: 1
Block Header:
disk_size: 4096
checksum: 0xd620df21
block flags: 0x1
0: 28: 05 00
desc: 0x5 short key 1 bytes:
""
1: 2a: 38 00 87 80 81 e4 4b eb ea 24
desc: 0x38 addr (leaf no-overflow) 7 bytes:
<packed 0 (0x0)> <packed 1 (0x1)> <packed 1273760356 (0x4bec0a64)>
As we have seen in the previous blog post, the pointer includes the checksum of the page it references (0x4bec0a64) to detect disk corruption.
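To see where that checksum sits on disk, here is a small Python sketch that reads it straight from the leaf page at offset 4096. The layout is inferred from the header dump above (an assumption on my part): a 28-byte page header followed by a 12-byte block header whose second field is the checksum:
import struct

# Read the 40-byte header of the leaf page and pick out the block header fields.
with open("/data/db/collection-0-6917019827977430149.wt", "rb") as f:
    f.seek(4096)
    header = f.read(40)
disk_size, checksum = struct.unpack_from("<II", header, 28)
print(disk_size, hex(checksum))  # 4096 0x4bec0a64, the value stored in the parent's address cell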
Another utility, bsondump, can display the output of wt dump -x as JSON, like wt_to_mdb_bson.py does, but it requires some filtering to isolate the BSON content:
wt -h /data/db dump -x table:collection-0-6917019827977430149 | # dump in hexadecimal
egrep '025f696400' | # keep the document lines: they all contain an "_id" string field (02 5f 69 64 00)
xxd -r -p | # convert the hex dump back to binary
bsondump --type=json # display the BSON as JSON
{"_id":"aaa","val1":"xxx","val2":"yyy","val3":"zzz","msg":"hello world"}
{"_id":"bbb","val1":"xxx","val2":"yyy","val3":"zzz","msg":["hello","world"]}
{"_id":"ccc","val1":"xxx","val2":"yyy","val3":"zzz","msg":["hello","world","hello","again"]}
2025-09-14T08:57:36.182+0000 3 objects found
It also provides a debug output type that gives more insight into how documents are stored internally, especially those containing arrays:
wt -h /data/db dump -x table:collection-0-6917019827977430149 | # dump in hexadecimal
egrep '025f696400' | # keep the document lines: they all contain an "_id" string field (02 5f 69 64 00)
xxd -r -p | # convert the hex dump back to binary
bsondump --type=debug # display how the BSON is stored
--- new object ---
size : 81
_id
type: 2 size: 13
val1
type: 2 size: 14
val2
type: 2 size: 14
val3
type: 2 size: 14
msg
type: 2 size: 21
--- new object ---
size : 96
_id
type: 2 size: 13
val1
type: 2 size: 14
val2
type: 2 size: 14
val3
type: 2 size: 14
msg
type: 4 size: 36
--- new object ---
size : 31
0
type: 2 size: 13
1
type: 2 size: 13
--- new object ---
size : 122
_id
type: 2 size: 13
val1
type: 2 size: 14
val2
type: 2 size: 14
val3
type: 2 size: 14
msg
type: 4 size: 62
--- new object ---
size : 57
0
type: 2 size: 13
1
type: 2 size: 13
2
type: 2 size: 13
3
type: 2 size: 13
2025-09-14T08:59:15.268+0000 3 objects found
Arrays in BSON are just embedded documents (element type 4) whose field names are the array positions ("0", "1", ...).
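The python3-bson package installed earlier makes this easy to verify. A quick Python sketch (my own check, not part of the post's tooling):
import bson  # from the python3-bson package installed above

# An array is encoded as an embedded document (element type 4) whose keys are
# the array positions "0", "1", ...
raw = bson.encode({"msg": ["hello", "world"]})
print(raw.hex())
# The output contains 04 6d 73 67 00 ("msg" as an array), then the string
# elements keyed "0" and "1", matching the debug output above.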
Primary index
RecordId is an internal, logical key used in the B-tree that stores the collection. Because all indexes reference documents by recordId rather than by physical location, documents can be moved physically, without fragmentation, when they are updated. Access by "_id" goes through a unique index, created automatically with the collection and stored as another WiredTiger table. Here is its content:
wt -h /data/db dump -p table:index-1-6917019827977430149
WiredTiger Dump (WiredTiger Version 12.0.0)
Format=print
Header
table:index-1-6917019827977430149
access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=8),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=,block_manager=default,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,disaggregated=(page_log=),encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(compare_timestamp=oldest_timestamp,enabled=false,file_metadata=,metadata_file=,panic_corrupt=true,repair=false),in_memory=false,ingest=,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=16k,key_format=u,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=16k,leaf_value_max=0,log=(enabled=true),lsm=(auto_throttle=,bloom=,bloom_bit_count=,bloom_config=,bloom_hash_count=,bloom_oldest=,chunk_count_limit=,chunk_max=,chunk_size=,merge_max=,merge_min=),memory_page_image_max=0,memory_page_max=5MB,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=true,prefix_compression_min=4,source="file:index-1-6917019827977430149.wt",split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,stable=,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=0),type=file,value_format=u,verbose=[],write_timestamp_usage=none
Data
<aaa\00\04
\00\08
<bbb\00\04
\00\10
<ccc\00\04
\00\18
There are three entries, one for each document, with the "_id" value (aaa, bbb, ccc) as the key and the recordId as the value. The keys use MongoDB's KeyString encoding: each value is prefixed by a type byte (0x3c, printed as <, for strings) and terminated by \00, and \04 marks the end of the key.
In MongoDB’s KeyString format, the recordId is stored in a special packed encoding where three bits are appended to the right of the big-endian value, so that the length can be recovered from the end of the key. The same encoding is used when the recordId sits in the value part of the index entry, as in this unique index. To decode it, shift the last byte right by three bits: 0x08 >> 3 = 1, 0x10 >> 3 = 2, and 0x18 >> 3 = 3, which are the recordIds of my documents.
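Putting both parts together, here is a hypothetical decoder for these entries (an illustration only, not the real KeyString parser), handling just the single-byte recordId case shown above:
def decode_id_index_entry(key: bytes, value: bytes):
    # Key: type byte (0x3c for strings), the NUL-terminated "_id" value, then
    # the 0x04 end marker. Value: recordId with its low three bits reserved
    # for the length information described above.
    assert key[0] == 0x3C, "string type byte expected"
    _id = key[1:key.index(0, 1)].decode()
    record_id = value[-1] >> 3
    return _id, record_id

print(decode_id_index_entry(b"<aaa\x00\x04", b"\x00\x08"))  # ('aaa', 1)
print(decode_id_index_entry(b"<ccc\x00\x04", b"\x00\x18"))  # ('ccc', 3)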
I decode the page that contains those index entries:
wt_binary_decode.py --offset 4096 --page 1 --verbose --split /data/db/index-1-6917019827977430149.wt
/data/db/index-1-6917019827977430149.wt, position 0x1000/0x5000, pagelimit 1
Decode at 4096 (0x1000)
0: 00 00 00 00 00 00 00 00 1f 0f 00 00 00 00 00 00 46 00 00 00
06 00 00 00 07 04 00 01 00 10 00 00 7c d3 87 60 01 00 00 00
Page Header:
recno: 0
writegen: 3871
memsize: 70
ncells (oflow len): 6
page type: 7 (WT_PAGE_ROW_LEAF)
page flags: 0x4
version: 1
Block Header:
disk_size: 4096
checksum: 0x6087d37c
block flags: 0x1
0: 28: 19 3c 61 61 61 00 04
desc: 0x19 short key 6 bytes:
"<aaa"
1: 2f: 0b 00 08
desc: 0xb short val 2 bytes:
"
2: 32: 19 3c 62 62 62 00 04
desc: 0x19 short key 6 bytes:
"<bbb"
3: 39: 0b 00 10
desc: 0xb short val 2 bytes:
""
4: 3c: 19 3c 63 63 63 00 04
desc: 0x19 short key 6 bytes:
"<ccc"
5: 43: 0b 00 18
desc: 0xb short val 2 bytes:
""
This utility doesn't decode the recordId—we need to shift it. There's no BSON to decode in the indexes.
Secondary index
Secondary indexes are similar, except that they can be composed of multiple fields, and any indexed field can contain an array, which may result in multiple index entries for a single document, like an inverted index.
MongoDB tracks which indexed fields contain arrays to improve query planning. A multikey index creates an entry for each array element, and if multiple fields are multikey, it stores entries for all combinations of their values. By knowing exactly which fields are multikey, the query planner can apply tighter index bounds when only one field is involved. This information is stored in the catalog as a "multikey" flag along with the specific "multikeyPaths":
wt -h /data/db dump -x table:_mdb_catalog |
wt_to_mdb_bson.py -m dump -j |
jq 'select(.value.ns == "test.franck") |
.value.md.indexes[] |
{name: .spec.name, key: .spec.key, multikey: .multikey, multikeyPaths: .multikeyPaths | keys}
'
{
"name": "_id_",
"key": {
"_id": { "$numberInt": "1" },
},
"multikey": false,
"multikeyPaths": [
"_id"
]
}
{
"name": "_id_1_val1_1_val2_1_val3_1_msg_1",
"key": {
"_id": { "$numberInt": "1" },
"val1": { "$numberInt": "1" },
"val2": { "$numberInt": "1" },
"val3": { "$numberInt": "1" },
"msg": { "$numberInt": "1" },
},
"multikey": true,
"multikeyPaths": [
"_id",
"msg",
"val1",
"val2",
"val3"
]
}
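The same flags are visible from a running mongod in the query planner's explain output. Here is a small Python sketch, assuming a default local connection; it forces the compound index with a hint and walks down to the IXSCAN stage (field names follow the usual explain layout):
from pymongo import MongoClient

# Sketch: the IXSCAN stage of an explain plan reports isMultiKey and multiKeyPaths.
coll = MongoClient().test.franck
plan = coll.find({}).hint("_id_1_val1_1_val2_1_val3_1_msg_1").explain()
stage = plan["queryPlanner"]["winningPlan"]
stage = stage.get("queryPlan", stage)   # newer servers wrap the classic plan
while "inputStage" in stage:
    stage = stage["inputStage"]
print(stage.get("isMultiKey"), stage.get("multiKeyPaths"))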
Here is the dump of my index on {_id:1,val1:1,val2:1,val3:1,msg:1}:
wt -h /data/db dump -p table:index-2-6917019827977430149
WiredTiger Dump (WiredTiger Version 12.0.0)
Format=print
Header
table:index-2-6917019827977430149
access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=8),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=,block_manager=default,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,disaggregated=(page_log=),encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(compare_timestamp=oldest_timestamp,enabled=false,file_metadata=,metadata_file=,panic_corrupt=true,repair=false),in_memory=false,ingest=,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=16k,key_format=u,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=16k,leaf_value_max=0,log=(enabled=true),lsm=(auto_throttle=,bloom=,bloom_bit_count=,bloom_config=,bloom_hash_count=,bloom_oldest=,chunk_count_limit=,chunk_max=,chunk_size=,merge_max=,merge_min=),memory_page_image_max=0,memory_page_max=5MB,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=true,prefix_compression_min=4,source="file:index-2-6917019827977430149.wt",split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,stable=,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=0),type=file,value_format=u,verbose=[],write_timestamp_usage=none
Data
<aaa\00<xxx\00<yyy\00<zzz\00<hello world\00\04\00\08
(null)
<bbb\00<xxx\00<yyy\00<zzz\00<hello\00\04\00\10
(null)
<bbb\00<xxx\00<yyy\00<zzz
September 13, 2025
Setsum - order agnostic, additive, subtractive checksum
A brief introduction to Setsum - order agnostic, additive, subtractive checksum
September 12, 2025
Postgres High Availability with CDC
Why a lagging client can stall or break failover, and how MySQL’s GTID model avoids it.