MongoDB is used for its strength in managing online transaction processing (OLTP), with a document model that naturally aligns with domain-specific transactions and their access patterns. In addition to these capabilities, MongoDB supports advanced search techniques through its Atlas Search index, based on Apache Lucene. This can be used for near-real-time analytics and, combined with the aggregation pipeline, adds some online analytical processing (OLAP) capabilities. Thanks to the document model, this analytics capability doesn't require a different data structure, which lets MongoDB execute hybrid transactional and analytical processing (HTAP) workloads efficiently, as demonstrated in this article with an example from the healthcare domain.
Traditional relational databases employ a complex query optimization method known as "star transformation" and rely on multiple single-column indexes, along with bitmap operations, for efficient ad-hoc queries. This requires a dimensional schema, or star schema, which differs from the normalized operational schema updated by transactions. In contrast, MongoDB can be queried with a similar strategy using the same document schema it uses for operational use cases, simply by adding an Atlas Search index on the collection that stores the transaction facts.
To demonstrate how a single index on a fact collection enables efficient queries even when filters are applied to other dimension collections, I utilize the MedSynora DW dataset, which is similar to a star schema with dimensions and facts. This dataset, published by M. Ebrar Küçük on Kaggle, is a synthetic hospital data warehouse covering patient encounters, treatments, and lab tests, and is compliant with privacy standards for healthcare data science and machine learning.
Import the dataset
The dataset is accessible on Kaggle as a folder of comma-separated values (CSV) files for dimensions and facts compressed into a 730MB zip file. The largest fact table that I'll use holds 10 million records.
I download the CSV files and uncompress them:
curl -L -o medsynora-dw.zip "https://www.kaggle.com/api/v1/datasets/download/mebrar21/medsynora-dw"
unzip medsynora-dw.zip
I import each file into a collection, using mongoimport from the MongoDB Database Tools:
for i in "MedSynora DW"/*.csv
do
mongoimport -d "MedSynoraDW" --file="$i" --type=csv --headerline -c "$(basename "$i" .csv)" -j 8
done
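To sanity-check the import, the collection counts can be compared with the expected sizes (the largest fact table should hold about 10 million records). A minimal check in mongosh, assuming the collection names follow the CSV basenames:
use MedSynoraDW
// the largest fact collection should report roughly 10 million documents
db.FactLabTests.countDocuments()
db.FactEncounter.countDocuments()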
For this demo, I'm interested in two fact tables: FactEncounter and FactLabTests. Here are the fields described in the file headers:
# head -1 "MedSynora DW"/Fact{Encounter,LabTests}.csv
==> MedSynora DW/FactEncounter.csv <==
Encounter_ID,Patient_ID,Disease_ID,ResponsibleDoctorID,InsuranceKey,RoomKey,CheckinDate,CheckoutDate,CheckinDateKey,CheckoutDateKey,Patient_Severity_Score,RadiologyType,RadiologyProcedureCount,EndoscopyType,EndoscopyProcedureCount,CompanionPresent
==> MedSynora DW/FactLabTests.csv <==
Encounter_ID,Patient_ID,Phase,LabType,TestName,TestValue
The fact tables reference the following dimensions:
# head -1 "MedSynora DW"/Dim{Disease,Doctor,Insurance,Patient,Room}.csv
==> MedSynora DW/DimDisease.csv <==
Disease_ID,Admission Diagnosis,Disease Type,Disease Severity,Medical Unit
==> MedSynora DW/DimDoctor.csv <==
Doctor_ID,Doctor Name,Doctor Surname,Doctor Title,Doctor Nationality,Medical Unit,Max Patient Count
==> MedSynora DW/DimInsurance.csv <==
InsuranceKey,Insurance Plan Name,Coverage Limit,Deductible,Excluded Treatments,Partial Coverage Treatments
==> MedSynora DW/DimPatient.csv <==
Patient_ID,First Name,Last Name,Gender,Birth Date,Height,Weight,Marital Status,Nationality,Blood Type
==> MedSynora DW/DimRoom.csv <==
RoomKey,Care_Level,Room Type
Here is the dimensional model, often referred to as a "star schema" because the fact tables sit at the center, referencing the dimensions. Because of normalization, a one-to-many composition within the facts is split into two CSV files so that it fits into two SQL tables:
Star schema with facts and dimensions. The facts are stored in two tables in the CSV files or a SQL database, but in a single collection in MongoDB. It holds the fact measures and the dimension keys, which reference the keys of the dimension collections.
MongoDB allows the storage of one-to-many compositions, such as Encounters and LabTests, within a single collection. By embedding LabTests as an array in Encounter documents, this design pattern promotes data colocation to reduce disk access and increase cache locality, minimizes duplication to improve storage efficiency, maintains data integrity without requiring additional foreign key processing, and enables more indexing possibilities. The document model also circumvents a common issue in SQL analytic queries, where joining prior to aggregation may yield inaccurate results due to the repetition of parent values in a one-to-many relationship.
Since this is the right data model for an operational database holding such data, I use an aggregation pipeline to create a new collection that I'll use instead of the two imported from the normalized CSV files:
db.FactLabTests.createIndex({ Encounter_ID: 1, Patient_ID: 1 });
db.FactEncounter.aggregate([
{
$lookup: {
from: "FactLabTests",
localField: "Encounter_ID",
foreignField: "Encounter_ID",
as: "LabTests"
}
},
{
$addFields: {
LabTests: {
$map: {
input: "$LabTests",
as: "test",
in: {
Phase: "$$test.Phase",
LabType: "$$test.LabType",
TestName: "$$test.TestName",
TestValue: "$$test.TestValue"
}
}
}
}
},
{
$out: "FactEncounterLabTests"
}
]);
Here is how one document looks:
AtlasLocalDev atlas [direct: primary] MedSynoraDW>
db.FactEncounterLabTests.find().limit(1)
[
{
_id: ObjectId('67fc3d2f40d2b3c843949c97'),
Encounter_ID: 2158,
Patient_ID: 'TR479',
Disease_ID: 1632,
ResponsibleDoctorID: 905,
InsuranceKey: 82,
RoomKey: 203,
CheckinDate: '2024-01-23 11:09:00',
CheckoutDate: '2024-03-29 17:00:00',
CheckinDateKey: 20240123,
CheckoutDateKey: 20240329,
Patient_Severity_Score: 63.2,
RadiologyType: 'None',
RadiologyProcedureCount: 0,
EndoscopyType: 'None',
EndoscopyProcedureCount: 0,
CompanionPresent: 'True',
LabTests: [
{
Phase: 'Admission',
LabType: 'CBC',
TestName: 'Lymphocytes_abs (10^3/µl)',
TestValue: 1.34
},
{
Phase: 'Admission',
LabType: 'Chem',
TestName: 'ALT (U/l)',
TestValue: 20.5
},
{
Phase: 'Admission',
LabType: 'Lipids',
TestName: 'Triglycerides (mg/dl)',
TestValue: 129.1
},
{
Phase: 'Discharge',
LabType: 'CBC',
TestName: 'RBC (10^6/µl)',
TestValue: 4.08
},
...
In MongoDB, the document model utilizes embedding and reference design patterns, resembling a star schema with a primary fact collection and references to various dimension collections. It is crucial to ensure that the dimension references are properly indexed before querying these collections.
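For the stage where dimension details are fetched back by key (stage 3 of the star query described below), regular indexes on the dimension key fields keep those lookups cheap. A minimal sketch, assuming the key fields shown in the CSV headers:
db.DimPatient.createIndex({ Patient_ID: 1 })
db.DimDoctor.createIndex({ Doctor_ID: 1 })
db.DimDisease.createIndex({ Disease_ID: 1 })
db.DimInsurance.createIndex({ InsuranceKey: 1 })
db.DimRoom.createIndex({ RoomKey: 1 })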
Atlas Search index
Search indexes are distinct from regular indexes, which rely on a single composite key: they can index multiple fields without requiring a specific order to establish a key. This makes them well suited to ad-hoc queries, where the filtering dimensions are not known in advance.
I create a single Atlas Search index that encompasses all dimensions or measures that I might use in predicates, including those found in an embedded document:
db.FactEncounterLabTests.createSearchIndex(
"SearchFactEncounterLabTests", {
mappings: {
dynamic: false,
fields: {
"Encounter_ID": { "type": "number" },
"Patient_ID": { "type": "token" },
"Disease_ID": { "type": "number" },
"InsuranceKey": { "type": "number" },
"RoomKey": { "type": "number" },
"ResponsibleDoctorID": { "type": "number" },
"CheckinDate": { "type": "token" },
"CheckoutDate": { "type": "token" },
"LabTests": {
"type": "document" , fields: {
"Phase": { "type": "token" },
"LabType": { "type": "token" },
"TestName": { "type": "token" },
"TestValue": { "type": "number" }
}
}
}
}
}
);
Since I don't need extra text searching on the keys, I declare the character string ones as token and the integer keys as number. Generally, the keys are used for equality predicates. However, some can be used for ranges when the format permits, such as check-in and check-out dates formatted as YYYY-MM-DD.
In a relational database, the star schema approach emphasizes limiting the number of columns in the fact tables, as they typically contain numerous rows. Smaller dimension tables can hold more columns and are typically denormalized in SQL databases (favoring a star schema over a snowflake schema). Likewise, in document modeling, embedding all dimension fields would unnecessarily increase the size of the fact collection documents, so it is preferable to reference the dimension collections instead. The general principles of data modeling in MongoDB make it possible to query it as a star schema without extra consideration, since MongoDB databases are designed around the application access patterns.
Star query
A star schema allows queries that filter on fields of the dimension collections to be processed in several stages:
- In the first stage, filters are applied to the dimension collections to extract all dimension keys. These keys typically do not require additional indexes, as the dimensions are generally small in size.
- In the second stage, a search is conducted using all previously obtained dimension keys on the fact collection. This process utilizes the search index built on those keys, allowing for quick access to the required documents.
- A third stage may retrieve additional dimensions to gather the necessary fields for aggregation or projection. This multi-stage process ensures that the applied filter reduces the dataset from the large fact collection before any further operations are conducted.
For an example query, I aim to analyze lab test records for female patients who are over 170 cm tall, underwent lipid lab tests, have insurance coverage exceeding 80%, and were treated by Japanese doctors in deluxe rooms for hematological conditions.
Search aggregation pipeline
To apply all filters when reading the fact collection, I begin with a simple aggregation pipeline that starts with a search on the search index. Filters on the fact collection's own fields go directly into this search, while the dimension filters will be added in stage one of the star query. I build the search in a local variable with a compound operator, so that more filters can be appended for each dimension.
Before going through the star query stages to add filters on the dimensions, my query starts with a filter on the lab type, which is stored in the fact collection and indexed.
const search = {
"$search": {
"index": "SearchFactEncounterLabTests",
"compound": {
"must": [
{ "in": { "path": "LabTests.LabType" , "value": "Lipids" } },
]
},
"sort": { CheckoutDate: -1 }
}
}
I have added a "sort" operation to sort the result by check-out date in descending order. This illustrates the advantage of sorting during the index search rather than in later steps of the aggregation pipeline, particularly when a "limit" is applied.
I'll use this local variable to add more filters in stage 1 of the star query, execute the search in stage 2, and collect documents for stage 3.
Stage 1: Query the dimension collections
In the first stage of the star query, I obtain the dimension keys from the dimension collections. For every dimension with a filter, I get the dimension keys with a find() on the dimension collection and append a "must" condition to the "compound" clause of the fact index search.
The following adds the conditions on the Patient (female patients over 170 cm):
search["$search"]["compound"]["must"].push( { in: {
path: "Patient_ID", // Foreign Key in Fact
value: db.DimPatient.find( // Dimension collection
{Gender: "Female", Height: { "$gt": 170 }} // filter on Dimension
).map(doc => doc["Patient_ID"]).toArray() } // Primary Key in Dimension
})
The following adds the conditions on the Doctor (Japanese):
search["$search"]["compound"]["must"].push( { in: {
path: "ResponsibleDoctorID", // Foreign Key in Fact
value: db.DimDoctor.find( // Dimension collection
{"Doctor Nationality": "Japanese" } // filter on Dimension
).map(doc => doc["Doctor_ID"]).toArray() } // Primary Key in Dimension
})
The following adds the condition on the Room (Deluxe):
search["$search"]["compound"]["must"].push( { in: {
path: "RoomKey", // Foreign Key in Fact
value: db.DimRoom.find( // Dimension collection
{"Room Type": "Deluxe" } // filter on Dimension
).map(doc => doc["RoomKey"]).toArray() } // Primary Key in Dimension
})
The following adds the condition on the Disease (Hematology):
search["$search"]["compound"]["must"].push( { in: {
path: "Disease_ID", // Foreign Key in Fact
value: db.DimDisease.find( // Dimension collection
{"Disease Type": "Hematology" } // filter on Dimension
).map(doc => doc["Disease_ID"]).toArray() } // Primary Key in Dimension
})
Finally, the condition on the Insurance coverage (greater than 80%):
search["$search"]["compound"]["must"].push( { in: {
path: "InsuranceKey", // Foreign Key in Fact
value: db.DimInsurance.find( // Dimension collection
{"Coverage Limit": { "$gt": 0.8 } } // filter on Dimension
).map(doc => doc["InsuranceKey"]).toArray() } // Primary Key in Dimension
})
All these search criteria have the same shape: a find() on the dimension collection, with the filters from the query, resulting in an array of dimension keys (like a primary key in a dimension table) that are used to search in the fact documents using it as a reference (like a foreign key in a fact table).
Each of those steps has queried the dimension collection to obtain a simple array of dimension keys, which are added to the aggregation pipeline. Rather than joining tables like in a relational database, the criteria on the dimensions are pushed down to the query on the fact tables.
Stage 2: Query the fact search index
With short queries on the dimensions, I have built the following pipeline search step:
AtlasLocalDev atlas [direct: primary] MedSynoraDW> print(search)
{
'$search': {
index: 'SearchFactEncounterLabTests',
compound: {
must: [
{ in: { path: 'LabTests.LabType', value: 'Lipids' } },
{
in: {
path: 'Patient_ID',
value: [
'TR551', 'TR751', 'TR897', 'TRGT201', 'TRJB261',
'TRQG448', 'TRSQ510', 'TRTP535', 'TRUC548', 'TRVT591',
'TRABU748', 'TRADD783', 'TRAZG358', 'TRBCI438', 'TRBTY896',
'TRBUH905', 'TRBXU996', 'TRCAJ063', 'TRCIM274', 'TRCXU672',
'TRDAB731', 'TRDFZ885', 'TRDGE890', 'TRDJK974', 'TRDKN003',
'TRE004', 'TRMN351', 'TRRY492', 'TRTI528', 'TRAKA962',
'TRANM052', 'TRAOY090', 'TRARY168', 'TRASU190', 'TRBAG384',
'TRBYT021', 'TRBZO042', 'TRCAS072', 'TRCBF085', 'TRCOB419',
'TRDMD045', 'TRDPE124', 'TRDWV323', 'TREUA926', 'TREZX079',
'TR663', 'TR808', 'TR849', 'TRKA286', 'TRLC314',
'TRMG344', 'TRPT435', 'TRVZ597', 'TRXC626', 'TRACT773',
'TRAHG890', 'TRAKW984', 'TRAMX037', 'TRAQR135', 'TRARX167',
'TRARZ169', 'TRASW192', 'TRAZN365', 'TRBDW478', 'TRBFG514',
'TRBOU762', 'TRBSA846', 'TRBXR993', 'TRCRL507', 'TRDKA990',
'TRDKD993', 'TRDTO238', 'TRDSO212', 'TRDXA328', 'TRDYU374',
'TRDZS398', 'TREEB511', 'TREVT971', 'TREWZ003', 'TREXW026',
'TRFVL639', 'TRFWE658', 'TRGIZ991', 'TRGVK314', 'TRGWY354',
'TRHHV637', 'TRHNS790', 'TRIMV443', 'TRIQR543', 'TRISL589',
'TRIWQ698', 'TRIWL693', 'TRJDT883', 'TRJHH975', 'TRJHT987',
'TRJIM006', 'TRFVZ653', 'TRFYQ722', 'TRFZY756', 'TRGNZ121',
... 6184 more items
]
}
},
{
in: {
path: 'ResponsibleDoctorID',
value: [ 830, 844, 862, 921 ]
}
},
{ in: { path: 'RoomKey', value: [ 203 ] } },
{
in: {
path: 'Disease_ID',
value: [
1519, 1506, 1504, 1510,
1515, 1507, 1503, 1502,
1518, 1517, 1508, 1513,
1509, 1512, 1516, 1511,
1505, 1514
]
}
},
{ in: { path: 'InsuranceKey', value: [ 83, 84 ] } }
]
},
sort: { CheckoutDate: -1
}
}
MongoDB Atlas Search indexes, built on Apache Lucene, efficiently handle complex queries with multiple conditions and manage long arrays of values. In this example, a search operation integrates the compound operator with the "must" clause to apply filters across attributes. This capability simplifies query design after resolving complex filters into lists of dimension keys.
With the "search" operation created above, I can run an aggregation pipeline to get the document I'm interested in:
db.FactEncounterLabTests.aggregate([
search,
])
With my example, nine documents are returned in 50 milliseconds.
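Stage 3 of the star query can then shape the result. The following is a sketch, not the article's exact query: it assumes the collection and key names shown above, limits the output (which combines well with the sort already done by the search index), looks up the doctor dimension for display, and projects a few fields:
db.FactEncounterLabTests.aggregate([
  search,
  { $limit: 10 },                       // keep the top 10 by CheckoutDate (already sorted by the index)
  { $lookup: {                          // fetch dimension details for the projection
      from: "DimDoctor",
      localField: "ResponsibleDoctorID",
      foreignField: "Doctor_ID",
      as: "Doctor"
  } },
  { $project: {
      Encounter_ID: 1, Patient_ID: 1, CheckoutDate: 1,
      "Doctor.Doctor Name": 1, "Doctor.Doctor Surname": 1
  } }
])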
Estimate the count
This approach is ideal for queries with filters on many conditions, where none are very selective alone, but the combination is highly selective. Using queries on dimensions and a search index on facts avoids reading more documents than necessary. However, depending on the operations you will add to the aggregation pipeline, it is a good idea to estimate the number of records returned by the search index to avoid runaway queries.
Typically, an application that allows users to execute multi-criteria queries may define a threshold and return an error or warning when the estimated number of documents exceeds it, prompting the user to add more filters. For such cases, you can run a "$searchMeta" on the index before a "$search" operation. For example, the following checks that the number of documents returned by the filter is lower than 10,000:
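Here is a minimal sketch of such a check, reusing the compound filters built above (the 10,000 threshold is the example's value):
const searchMeta = {
  "$searchMeta": {
    "index": "SearchFactEncounterLabTests",
    "compound": search["$search"]["compound"],
    "count": { "type": "lowerBound", "threshold": 10000 }
  }
}
// returns a document like { count: { lowerBound: ... } }; if the lower bound reaches
// the threshold, ask the user for more filters before running the $search pipeline
db.FactEncounterLabTests.aggregate([ searchMeta ])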
by Franck Pachot
Murat Demirbas
I attended the TLA+ Community Event at Hamilton, Ontario on Sunday. Several talks pushed the boundaries of formal methods in the real world through incorporating testing, conformance, model translation, and performance estimation. The common thread was that TLA+ isn't just for specs anymore. It's being integrated into tooling: fuzzers, trace validators, and compilers. The community is building bridges from models to reality, and it's a good time to be in the loop.
Below is a summary of selected talks, followed by some miscellaneous observations. This is just a teaser; the recordings will be posted soon on the TLA+ webpage.
Model-Based Fuzzing for Distributed Systems — Srinidhi Nagendra
Traditional fuzzing relies on random inputs and coverage-guided mutation, and works well for single-process software. But it fails for distributed systems due to too many concurrent programs, too many interleavings, and no clear notion of global coverage.
Srinidhi's work brings model-based fuzzing to distributed protocols using TLA+ models for coverage feedback. The approach, ModelFuzz, samples test cases from the implementation (e.g., Raft), simulates them on the model, and uses coverage data to guide mutation. Test cases are not sequences of messages, but of scheduling choices and failure events. This avoids over-generating invalid traces (e.g., a non-leader sending an AppendEntries).
The model acts as a coverage oracle. But direct enumeration of model executions is infeasible because of too many traces, too much instrumentation, too much divergence from optimized implementations (e.g., snapshotting in etcd). Instead, ModelFuzz extracts traces with lightweight instrumentation as mentioned above, simulates them on the model, and mutates schedules in simple ways: swapping events, crashes, and message deliveries. This turns out to be surprisingly effective. They found 1 new bug in etcd, 2 known and 12 new bugs in RedisRaft. They also showed faster bug-finding compared to prior techniques.
TraceLink: Automating Trace Validation with PGo — Finn Hackett & Ivan Beschastnikh
Validating implementation traces against TLA+ specs is still hard. Distributed systems don't hand you a total order. Logs are huge. Instrumentation is brittle. This talk introduced TraceLink, a toolchain that builds on PGo (a compiler from PlusCal to Go) to automate trace validation.
There are three key ideas. First, compress traces by grouping symbolically and using the binary format. Second, track causality using vector clocks, and either explore all possible event orderings (breadth-first) or just one (depth-first). Third, generate diverse traces via controlled randomness (e.g., injecting exponential backoffs between high-level PlusCal steps).
TraceLink is currently tied to PGo-compiled code, but they plan to support plain TLA+ models. Markus asked: instead of instrumenting with vector clocks, why not just log with a high-resolution global clock? That might work too.
Finn is a MongoDB PhD fellow, and will be doing his summer internship with us at MongoDB Research in the Distributed Systems Research Group.
Translating C to PlusCal for Model Checking — Asterios Tech
Asterios Tech (a Safran defense group subsidiary) works on embedded systems with tiny kernels and tight constraints. They need to verify their scheduler, and manual testing doesn't cut it. So the idea they explore is to translate a simplified C subset to PlusCal automatically, then model check the algorithms for safety in the face of concurrency.
The translator, c2pluscal, is built as a Frama-C plugin. Due to the nature of the embedded programming domain, the C code is limited: no libc, no malloc, no dynamic structures. This simplicity helps in the translation process. Pointers are modeled using a TLA+ record with fields for memory location, frame pointer, and offset. Load/store/macros are mapped to PlusCal constructs. Arrays become sequences. Structs become records. Loops and pointer arithmetic are handled conservatively.
I am impressed that they model pointer arithmetic. This is a promising approach for analyzing legacy embedded C code formally, without rewriting it by hand.
More talks
The "TLA+ for All: Notebook-Based Teaching" talk introduced Jupyter-style TLA+ notebooks for pedagogy supporting inline explanations, executable specs, and immediate feedback.
I presented the talk "TLA+ Modeling of MongoDB Transactions" (joint work with Will Schultz). We will post a writeup soon.
Jesse J. Davis presented "Are We Serious About Using TLA+ For Statistical Properties?". He plans to blog about it.
Andrew Helwer presented "It’s never been easier to write TLA⁺ tooling!", and I defer to his upcoming blog post as well.
Markus Kuppe, who did the bulk of the work organizing the event, demonstrated that GitHub Copilot can solve the Diehard problem with TLA+ in 4 minutes of reasoning, with some human intervention. He said that the TLA+ Foundation and NVIDIA are funding the "TLAI" challenge, which explores novel AI augmentation of TLA+ modeling.
Miscellaneous
The 90-minute lunch breaks were very European. A U.S. conference would cap it at an hour, and DARPA or NSF would eliminate it entirely: brown-bag lunches through the talks. The long break was great for conversation.
In our workshop, audience questions were frequent and sharp. We are a curious bunch.
The venue was McMaster University in Hamilton, a 90-minute drive from home. Border crossings at the Lewiston-Queenston bridge were smooth, without delays. But questions from border officers still stressed my daughters (ages 9 and 13). I reminded them how much worse we had it when we traveled on visas instead of US citizenship.
My daughters also noticed how everything (roads, buildings, parks) is called Queen's this and Queen's that. My 9-year-old tried to argue that since Canada is so close to the US and looks so similar to it, it feels more like a U.S. state than a separate country. Strong Trump vibes with that one.
The USD to CAD exchange rate is 1.38. I remembered the two currencies being pretty much at par, so I was surprised. We hadn't visited Canada since 2020. A Canadian friend told me there's widespread discontent about the economy, rent, and housing prices.
Canadians are reputed to be very nice. But drivers were aggressive: cutting in, speeding in mall lots. I also had tense, passive-aggressive encounters with two cashiers and a McMaster staff member. Eh.
by Murat (noreply@blogger.com)
Percona Database Performance Blog
Recognizing a gap in the availability of straightforward tools for MongoDB benchmarking, particularly those that do not require complex compilation, configuration, and intricate setups, I developed MongoDB Workload Generator. The aim was to provide MongoDB users with an accessible solution for effortlessly generating realistic data and simulating application workloads on both sharded and non-sharded clusters. […]
by Daniel Almeida
Tinybird Engineering Blog
Introducing Explorations, a conversational UI to chat with your real-time data in Tinybird.
Tinybird Engineering Blog
We just launched a conversational AI feature. Here's how we built the feature that lets you chat with your data.
May 05, 2025
Franck Pachot
In previous posts, I discussed the limitations of MongoDB emulations on databases like Oracle and PostgreSQL. AWS offers Amazon DocumentDB, which provides compatibility with MongoDB 5.0. It may run on top of Aurora PostgreSQL, a guess based on some similarities that has never been confirmed officially, but the storage capabilities are those of Aurora.
MongoDB's strength is not just in document storage like a key-value store, but also in its efficient indexing for queries with equality, sorting, and range filtering on flexible schemas with embedded arrays and sub-documents.
None of the compatible options I've tested can execute a simple .find().sort().limit() effectively without excessive document scanning, because of the limitations of the underlying inverted indexes. Is Amazon DocumentDB better?
I create a single-node instance with the highest version available, which is compatible with MongoDB 5.0, an old version (I'm testing this in May 2025).
I created the same collection as in the previous post:
for (let i = 0; i < 10000; i++) {
db.demo.insertOne( {
a: 1 ,
b: Math.random(),
ts: new Date()
} )
}
db.demo.createIndex({ "a": 1 , ts: -1 }) ;
db.demo.countDocuments( { a: 1 } );
db.demo.find( { a: 1 } ).explain("executionStats")
The index I have created is not used; the whole collection is read (COLLSCAN returning 10000 documents):
rs0 [direct: primary] test> db.demo.countDocuments( { a: 1 } )
10000
rs0 [direct: primary] test> db.demo.find( { a: 1 }
).explain("executionStats")
{
queryPlanner: {
plannerVersion: 1,
namespace: 'test.demo',
winningPlan: { stage: 'COLLSCAN' }
},
executionStats: {
executionSuccess: true,
executionTimeMillis: '10.268',
planningTimeMillis: '0.124',
executionStages: {
stage: 'COLLSCAN',
nReturned: '10000',
executionTimeMillisEstimate: '9.803'
}
},
serverInfo: { host: 'docdb-2025-05-02-12-45-45', port: 27017, version: '5.0.0' },
ok: 1,
operationTime: Timestamp({ t: 1746190938, i: 1 })
}
All rows match my filter, so maybe Amazon DocumentDB has a different query planner and a full scan is valid here.
To validate that the index can be used, I can add a hint:
rs0 [direct: primary] test> db.demo.find( { a: 1 }
).hint({a:1,ts:-1}).explain("executionStats")
{
queryPlanner: {
plannerVersion: 1,
namespace: 'test.demo',
winningPlan: {
stage: 'SUBSCAN',
inputStage: { stage: 'IXSCAN', indexName: 'a_1_ts_-1', direction: 'forward' }
}
},
executionStats: {
executionSuccess: true,
executionTimeMillis: '73.495',
planningTimeMillis: '16.987',
executionStages: {
stage: 'SUBSCAN',
nReturned: '10000',
executionTimeMillisEstimate: '55.689',
inputStage: {
stage: 'IXSCAN',
nReturned: '10000',
executionTimeMillisEstimate: '42.151',
indexName: 'a_1_ts_-1',
direction: 'forward'
}
}
},
serverInfo: { host: 'docdb-2025-05-02-12-45-45', port: 27017, version: '5.0.0' },
ok: 1,
operationTime: Timestamp({ t: 1746190980, i: 1 })
}
The execution plan is not as verbose as MongoDB's, so it doesn't give much information about the seek and index keys, but at least I know that my index can be used. Note that SUBSCAN is not a MongoDB execution plan stage, and there's no information telling me whether { a: 1 } was filtered efficiently by the index. The next test will tell more.
Time to test what failed on CosmosDB - a simple compound index used for equality and sort:
rs0 [direct: primary] test> db.demo.find( { a: 1 }
).sort({ts:-1}).limit(10).explain("executionStats")
{
queryPlanner: {
plannerVersion: 1,
namespace: 'test.demo',
winningPlan: {
stage: 'SUBSCAN',
inputStage: {
stage: 'LIMIT_SKIP',
inputStage: {
stage: 'IXSCAN',
indexName: 'a_1_ts_-1',
direction: 'forward'
}
}
}
},
executionStats: {
executionSuccess: true,
executionTimeMillis: '0.398',
planningTimeMillis: '0.161',
executionStages: {
stage: 'SUBSCAN',
nReturned: '10',
executionTimeMillisEstimate: '0.200',
inputStage: {
stage: 'LIMIT_SKIP',
nReturned: '10',
executionTimeMillisEstimate: '0.195',
inputStage: {
stage: 'IXSCAN',
nReturned: '10',
executionTimeMillisEstimate: '0.193',
indexName: 'a_1_ts_-1',
direction: 'forward'
}
}
}
},
serverInfo: { host: 'docdb-2025-05-02-12-45-45', port: 27017, version: '5.0.0' },
ok: 1,
operationTime: Timestamp({ t: 1746191080, i: 1 })
}
This looks good. Finally, is there a MongoDB-compatible API that can use an index to optimize pagination queries?
This was simple, as the index values are all scalar. However, the flexible schema of MongoDB allows arrays when a document has a one-to-many relationship instead of a one-to-one.
In my opinion, this is the main advantage of a document database: when a business rule evolves, you don't have to change the complete data model, which would carry risk for the existing data not concerned by the change. I explained an example of this, a multi-city airport, in a previous article.
I added similar documents but with an array of values in "a":
for (let i = 0; i < 10000; i++) {
db.demo.insertOne( {
a: [0,1,2] ,
b: Math.random(),
ts: new Date()
} )
}
In parallel, I did the same on a MongoDB Atlas database, running the genuine document database, version 8.0, to illustrate the expected outcome. There, the index is used efficiently, reading just 10 index keys and fetching 10 documents without subsequent filtering or sorting.
But it is not the same on Amazon DocumentDB, where the full collection is scanned and sorted before picking the 10 documents of the result:
rs0 [direct: primary] test> db.demo.find( { a: 1 }
).sort({ts:-1}).limit(10).explain("executionStats")
{
queryPlanner: {
plannerVersion: 1,
namespace: 'test.demo',
winningPlan: {
stage: 'SUBSCAN',
inputStage: {
stage: 'LIMIT_SKIP',
inputStage: {
stage: 'SORT',
sortPattern: { ts: -1 },
inputStage: { stage: 'COLLSCAN' }
}
}
}
},
executionStats: {
executionSuccess: true,
executionTimeMillis: '53.092',
planningTimeMillis: '0.479',
executionStages: {
stage: 'SUBSCAN',
nReturned: '10',
executionTimeMillisEstimate: '52.572',
inputStage: {
stage: 'LIMIT_SKIP',
nReturned: '10',
executionTimeMillisEstimate: '52.564',
inputStage: {
stage: 'SORT',
nReturned: '10',
executionTimeMillisEstimate: '52.561',
sortPattern: { ts: -1 },
inputStage: {
stage: 'COLLSCAN',
nReturned: '20000',
executionTimeMillisEstimate: '37.111'
}
}
}
}
},
serverInfo: { host: 'docdb-2025-05-02-12-45-45', port: 27017, version: '5.0.0' },
ok: 1,
operationTime: Timestamp({ t: 1746193272, i: 1 })
}
Although I can force the index with a hint, the process results in a full scan of all index entries. This method fails to apply the equality filter on the key and does not utilize the ordering of entries effectively:
rs0 [direct: primary] test> db.demo.find( { a: 1 }
).sort({ts:-1}).limit(10).hint({a:1,ts:-1}).explain("executionStats")
{
queryPlanner: {
plannerVersion: 1,
namespace: 'test.demo',
winningPlan: {
stage: 'SUBSCAN',
inputStage: {
stage: 'LIMIT_SKIP',
inputStage: {
stage: 'SORT',
sortPattern: { ts: -1 },
inputStage: {
stage: 'FETCH',
inputStage: { stage: 'IXSCAN', indexName: 'a_1_ts_-1' }
}
}
}
}
},
executionStats: {
executionSuccess: true,
executionTimeMillis: '27.382',
planningTimeMillis: '0.241',
executionStages: {
stage: 'SUBSCAN',
nReturned: '10',
executionTimeMillisEstimate: '27.102',
inputStage: {
stage: 'LIMIT_SKIP',
nReturned: '10',
executionTimeMillisEstimate: '27.094',
inputStage: {
stage: 'SORT',
nReturned: '10',
executionTimeMillisEstimate: '27.091',
sortPattern: { ts: -1 },
inputStage: {
stage: 'FETCH',
nReturned: '20000',
executionTimeMillisEstimate: '15.592',
inputStage: {
stage: 'IXSCAN',
nReturned: '20000',
executionTimeMillisEstimate: '4.338',
indexName: 'a_1_ts_-1'
}
}
}
}
}
},
serverInfo: { host: 'docdb-2025-05-02-12-45-45', port: 27017, version: '5.0.0' },
ok: 1,
operationTime: Timestamp({ t: 1746193437, i: 1 })
}
Surprisingly, the result appears to be ordered when I use an index hint. I verified this on my result, although it may be a side effect of the implementation:
db.demo.find({ a: 1 }).hint({ a: 1, ts: -1 }).toArray().forEach((doc, index, docsArray) => {
if (index > 0) {
const previousDoc = docsArray[index - 1];
if (doc.ts > previousDoc.ts) {
console.log("The results are not ordered by 'ts' in descending order.");
}
}
});
But wait, I inserted a batch of scalars and then a batch of arrays. It is possible that each has a different partial index in the underlying PostgreSQL. I started with only scalar values, so there was one index, and the query was able to use its full filtering and ordering capabilities. In the second batch, with all arrays, a second index may have been created for them. The engine can use both indexes, as I've seen when forcing it with a hint, but with two branches in the execution plan, one for each index. With a concatenation, similar to a UNION ALL, PostgreSQL can preserve the order with a merge sort. However, if a bitmap scan is used, the ordering is lost. As my sort() is on the insertion timestamp, it is possible that, by chance, the indexes were scanned in the right order even if the ordering is not guaranteed.
To validate that the ordering can be lost, which is why the query planner adds a sort stage, I inserted one more document with a scalar and ran my ordering test again:
rs0 [direct: primary] test> db.demo.insertOne( { a: 1 , b: Math.random(), ts: new Date() } )
{
acknowledged: true,
insertedId: ObjectId('68151d8420044b5ec3d9aea2')
}
rs0 [direct: primary] test> db.demo.find({ a: 1 }).hint({ a: 1, ts: -1 }).toArray().forEach((doc, index, docsArray) => {
... if (index > 0) {
... const previousDoc = docsArray[index - 1];
... if (doc.ts > previousDoc.ts) {
... console.log("The results are not ordered by 'ts' in descending order.");
... }
... }
... });
The results are not ordered by 'ts' in descending order.
Now that scalar values and arrays are interleaved, reading from two distinct indexes does not preserve the order. That would explain why the query planner cannot optimize the query.
The document model's primary advantage lies in having a single index over a flexible schema, as opposed to two indexes on separate tables in a normalized relational database. If my hypothesis holds true, the existence of multiple indexes due to the flexible schema negates this key benefit of a document database.
I tried multiple combinations of index creation and insertions. If I delete all rows (db.demo.deleteMany({})), it still behaves like a multi-key index where the index cannot be used for equality and sort.
Dropping and re-creating the index resets this state. However, if I insert rows with scalars and arrays before creating the index, I get a case where I cannot even force the index with a hint:
rs0 [direct: primary] test> db.demo.dropIndex({ "a": 1 , ts: -1 }) ;
{
nIndexesWas: 2,
ok: 1,
operationTime: Timestamp({ t: 1746215608, i: 1 })
}
rs0 [direct: primary] test> db.demo.insertOne( { a: 1 , b: Math.random(), ts: new Date() } )
{
acknowledged: true,
insertedId: ObjectId('681522ba20044b5ec3d9fcc8')
}
rs0 [direct: primary] test> db.demo.insertOne( { a: [ 1,2,3 ] , b: Math.random(), ts: new Date() } )
{
acknowledged: true,
insertedId: ObjectId('681522bb20044b5ec3d9fcc9')
}
rs0 [direct: primary] test> db.demo.createIndex({ "a": 1 , ts: -1 }) ;
a_1_ts_-1
rs0 [direct: primary] test> db.demo.find( { a: 1 }
).hint({a:1,ts:-1}).explain("executionStats")
MongoServerError: Cannot use Hint for this Query.
Index is multi key index , partial index or sparse index and query is not optimized to use this index.
The available statistics and logs are insufficient to identify the type of index in use. It is neither a MongoDB index, as it doesn't have the same capabilities, nor does it behave like a single PostgreSQL index.
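For comparison, genuine MongoDB reports whether an index has become multikey directly in the explain output. A quick check, run on MongoDB rather than DocumentDB:
// on MongoDB, the IXSCAN stage of the winning plan includes isMultiKey: true
// once at least one array value has been indexed for that key
db.demo.find({ a: 1 }).sort({ ts: -1 }).explain().queryPlanner.winningPlan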
In PostgreSQL, an inverted index, such as GIN, utilizes bitmaps for scanning and does not maintain order, even with scalar values. While an expression index can facilitate equality, sorting, and range queries, it cannot be established on a flexible schema with arrays in the key.
I suspect that multiple partial expression indexes are created, and that the full features can only be used before a multi-key case is detected. Maybe the SUBSCAN is the operation that combines results from multiple partial indexes to emulate MongoDB's flexible schema, but it loses the optimization of pagination queries.
To effectively compare databases with a compatible API, focus on indexing options and execution plans for OLTP query patterns. Benchmarks must run over extended periods with increasing data, testing critical query patterns like find().sort().limit(). Always examine execution plans first to avoid wasting cloud resources on long tests, and compare them with plans scanning MongoDB compound indexes.
Note that Amazon DocumentDB may have other advantages, like integration with some Amazon services and Aurora-like separation of compute and storage, but when it comes to taking full advantage of a document database, MongoDB is more advanced, and is available on AWS, like on other clouds, with Atlas.
by Franck Pachot
May 04, 2025
Franck Pachot
My next post will be about Amazon DocumentDB and how it compares to MongoDB in terms of indexing a flexible schema with multiple keys. There's a lot of confusion today around the "DocumentDB" name because earlier this year Microsoft published "DocumentDB: Open-Source Announcement", which has nothing to do with Amazon DocumentDB.
Amazon DocumentDB is a managed NoSQL database service that supports document data structures and is compatible with MongoDB versions 3.6, 4.0, and 5.0. It allows users to store, query, and index JSON data. Its storage capabilities resemble those of Amazon Aurora, featuring compute and storage separation, a monolithic read-write node, up to 15 read-only replicas, and multi-AZ storage. There are guesses that the API is built on PostgreSQL, which, if true, brings another similarity with the Microsoft DocumentDB extension for PostgreSQL.
DocumentDB is the name of a PostgreSQL extension used by Microsoft in vCore-based Azure Cosmos DB for MongoDB to emulate some MongoDB capabilities with BSON and RUM indexes.
You might wonder how Microsoft, in 2025, adopted a name already in use by a similar product released six years earlier by its main competitor. Actually, Microsoft owned this name and built a DocumentDB service long before, back in 2013. Let's use the Web Archive to do some archeology.
- In 2011, the documentdb.com domain was for sale.
- In 2012, Microsoft registered the domain.
- In 2013, Microsoft was probably building a document data store product and prepared a web site for it.
- In 2014, they put up a page with generic content about NoSQL, JSON, and document databases. It doesn't mention Microsoft, but one video shows documents in Word and Access.
- In 2015, they released a preview of DocumentDB with an SQL API over JSON.
Later, www.documentdb.com redirected to http://azure.microsoft.com/en-us/services/documentdb and then to http://azure.microsoft.com/en-us/services/cosmos-db. The product went through a MongoDB-like protocol on top of DocumentDB.
Microsoft has a history of renaming services that never became popular in order to attract a broader audience. There's also another PostgreSQL offering, with another extension (Citus), in Cosmos DB, which has nothing to do with the PostgreSQL (with the DocumentDB extension) behind the vCore-based Azure Cosmos DB for MongoDB.
I hope that clears up some of the confusion. It's best to say "Amazon DocumentDB" when talking about the AWS service.
by Franck Pachot
May 03, 2025
Kyle Kingsbury (Aphyr / Jepsen)
Back in 2023 I went through an inter-state move with Flat Rate Movers,
LTD. (US DOT 488466, MC 254356). They bungled the job, damaged my stuff
and my house, and then refused to talk to me or their own insurance company. I
placed dozens of calls, wrote letters, emails, nothing worked. I finally got
some money for the damage by working with their insurer directly. I know a lot
of folks struggle with moving companies, and figured I’d write up a bit of this
story in case it helps. You can skip ahead if you
just want to know about the FMCSA insurer database.
In November 2022 I scheduled the move with Flat Rate. We took a detailed video
inventory of everything to be moved. I paid over $14,000 for an “all inclusive
move”, which, per the move plan, included full packing services, blankets for furniture, boxes for electronics, lamps, and bicycles, a custom crate for a tabletop, and “extended valuation coverage”—their insurance policy.
A few days before the move in March 2023, Flat Rate called to tell me a
five-person team would arrive. When the crew arrived, it wasn’t Flat Rate. It
was a team of two from Esquire Moving Inc (US DOT 2820253, MC 940499). They had
no idea they were supposed to provide packing services. The crate I paid for was nowhere to be found. They didn’t have enough boxes or blankets. I had to have everything out that day, so
there wasn’t any choice—I worked alongside the team for thirteen hours to
complete the packing and move-out.
I let the team handle furniture protection and loading the truck, which was a
mistake. As they unloaded a few days later, I realized much of the furniture had
been completely unprotected. My bicycle wasn’t boxed. Lamps were tossed in
loose. There was extensive damage to furniture, art, books, and my home: I
watched the crew rip a doorframe off the wall. The crew provided me with a
backdated bill of lading only on delivery.
I called Flat Rate a couple days later to discuss my concerns. Their
representative apologized and acknowledged that a crew of two was insufficient. I
sent Flat Rate a detailed email explaining their failure to provide promised
services, accompanied by photographs of the move process and the resulting
damage. I asked for a partial refund, and for Flat Rate to pay for the damage
they caused. I received only an out-of-office auto-response.
On March 21st, I received a generic form letter from Flat Rate’s claims
department. I filled out the claim forms and submitted them on March 25th, and
repeated that I needed more than an insurance claim: Flat Rate obviously failed
to provide the services I’d paid for.
Then things got weird. I placed call after call to the claims department. They
didn’t answer. I left voicemail after voicemail. On April 28th I reached an
actual human, but their computers were apparently down, and no supervisor was
in the building—the representative couldn’t do anything but take down a note
to call me back. They never did. I left another voicemail on May 5th. Again on
May 23rd. Again on August 4th. August 8th. I began to realize that Flat Rate’s
strategy was simply to never answer the phone.
On August 14th, a new hire in the claims department called me: she’d just been
hired and was taking over my case. She dispatched a local furniture company
to inspect the damages and help complete the claim. The inspectors agreed: the
movers had done extensive damage. I provided them with documentation and photos
as well.
On September 14 I called the furniture company and confirmed that yes, they had
submitted their report to Flat Rate. However, the furniture company had a
confidentiality agreement with Flat Rate, and could not release their report to
me. I called their contact at Flat Rate, who didn’t answer, and left a
voicemail. I called the claims representative’s direct line: her number was no
longer in service. On September 19th I got a call from the claims
representative, who promised she’d have a decision by the end of the day. Flat Rate never talked to me again.
I called again and again, but got voicemail every time. I tried a letter from
my lawyers, an arbitration request with the American Trucking
Association, complaints with the Federal Motor Carrier Safety
Administration,
the New York Department of Transportation’s
Motor Carrier complaint department, and the New York State Department of
Financial Services; none of this went anywhere.
Here’s the thing: movers offer coverage, they have claims departments, and they do many insurance-like things in house. But for complicated reasons, they are not actually insurance companies, and are not regulated as such. Their coverage is backstopped by separate, real insurance companies. I eventually discovered I could bypass Flat Rate and talk to their backing insurance company directly. The Federal Motor Carrier Safety
Administration maintains a public database of moving companies. Putting in their US DOT number
(488466) yields a page with some basic info. At the bottom are links with their
active and pending insurance, and the history of their insurers.
The insurance history told me their cargo insurer at the time of my move was
Hanover Insurance Company, with policy RHX H706231. The database also had
Hanover’s phone number. I was able to call Hanover and ask about the
policy—it took a few tries, but I got through to a claims adjuster there who
was able to confirm that yes, I could file a claim directly with Hanover. I
sent over a package of documentation, including the photos and the story so
far.
Hanover Insurance informed me they’d never heard anything about my claim, and
called Flat Rate to check. Over the next few days we discovered that Flat Rate
wasn’t just refusing to answer my calls—they wouldn’t answer calls from their insurance company either. Hanover wasn’t able to obtain the report from the furniture company, but they did agree, on the basis of the photos, to pay a reasonable amount for the damage to cargo. I had a check a few days later.
I’m still unhappy with Flat Rate: they charged me a good deal of money for services
they failed to deliver, and never paid for damage to the house. I was gearing up for small claims court—but during the tail end of this fiasco I was entering a fresh hell of home renovation, and got distracted. If you’re reading this, Flat Rate, you still owe me.
by Aphyr
AWS Database Blog - Amazon Aurora
On April 29, 2025, Jepsen published a report about transaction visibility behavior in Amazon RDS for PostgreSQL Multi-AZ clusters. We appreciate Jepsen’s thorough analysis and would like to provide additional context about this behavior, which exists both in Amazon RDS and community PostgreSQL. In this post, we dive into the specifics of the issue to provide further clarity, discuss what classes of architectures it might affect, share workarounds, and highlight our ongoing commitment to improving community PostgreSQL in all areas, including correctness.
by Sergey Melnik
May 02, 2025
Percona Database Performance Blog
If you’ve tried running MongoDB on Kubernetes, you’ve likely run into some unique challenges. Stateful workloads like replica sets and sharded clusters can be messy in containers, and things that should be easy, like backups, scaling, failovers, and upgrades, suddenly take more time and attention than they should. That’s exactly what good MongoDB Operators are […]
by David Quilty
Small Datum - Mark Callaghan
After reading a blog post from JFG on changes to innodb_doublewrite_pages and bug 111353, I wanted to understand the impact from that on the Insert Benchmark using a large server.
I test the impact from:
- using a larger (non-default) value for innodb_doublewrite_pages
- disabling the doublewrite buffer
tl;dr
- Using a larger value for innodb_doublewrite_pages improves QPS by up to 10%
- Disabling the InnoDB doublewrite buffer is great for performance, but bad for durability. I don't suggest you do this in production.
Builds, configuration and hardware
I compiled upstream MySQL 8.0.41 from source.
The server is an ax162-s from Hetzner with 48 cores, 128G RAM, and AMD SMT disabled. It uses Ubuntu 22.04, and storage is ext4 using SW RAID 1 over 2 locally attached NVMe devices. More details on it are here. At list prices, a similar server from Google Cloud costs 10X more than from Hetzner.
The MySQL configuration files are:
- cz11a_c32r128 - the base configuration file; it does not set innodb_doublewrite_pages and so uses the default of innodb_doublewrite_pages=8
- cz11e_c32r128 - adds innodb_doublewrite_pages=128 to the base config
- cz11f_c32r128 - adds innodb_doublewrite=0 to the base config (disables doublewrite)
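For reference, and as stated above, the deltas boil down to these settings in the two variant configs (the rest of the base config is unchanged):
# cz11e_c32r128: larger doublewrite batch
innodb_doublewrite_pages=128
# cz11f_c32r128: doublewrite buffer disabled (unsafe for durability, benchmarking only)
innodb_doublewrite=0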
The Benchmark
The benchmark is explained here and is run with 20 clients and a table per client with an IO-bound workload. The database is larger than memory, with 200M rows per table and 20 tables.
The benchmark steps are:
- l.i0 - insert 200 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
- l.x - create 3 secondary indexes per table. There is one connection per client.
- l.i1 - use 2 connections/client. One inserts 4M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
- l.i2 - like l.i1 but each transaction modifies 5 rows (small transactions) and 1M rows are inserted and deleted per table. Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
- qr100 - use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
- qp100 - like qr100 except uses point queries on the PK index
- qr500 - like qr100 but the insert and delete rates are increased from 100/s to 500/s
- qp500 - like qp100 but the insert and delete rates are increased from 100/s to 500/s
- qr1000 - like qr100 but the insert and delete rates are increased from 100/s to 1000/s
- qp1000 - like qp100 but the insert and delete rates are increased from 100/s to 1000/s
The summary section in the performance report has 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts and all systems sustained the target rates. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.
Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base), where $me is the result with the cz11e_c32r128 or cz11f_c32r128 configs and $base is the result from the cz11a_c32r128 config. The configs are explained above: cz11e_c32r128 increases innodb_doublewrite_pages and cz11f_c32r128 disables the doublewrite buffer.
When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures:
- insert/s for l.i0, l.i1, l.i2
- indexed rows/s for l.x
- range queries/s for qr100, qr500, qr1000
- point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.
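As a worked example with made-up numbers: if the base config (cz11a_c32r128) sustains 1000 inserts/s on l.i0 and cz11e_c32r128 sustains 1100 inserts/s, then relative QPS = 1100 / 1000 = 1.10, which would be highlighted in green.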
Results: more IO-bound
From the cz11e_c32r128 config that increases innodb_doublewrite_pages to 128:
- the impact on write-heavy steps is mixed: create index was ~7% slower and l.i2 was ~10% faster
- the impact on range query + write steps is positive but small. The improvements were 0%, 0% and 4%. Note that these steps are not as IO-bound as point query + write steps and the range queries do ~0.3 reads per query (see here).
- the impact on point query + write steps is positive and larger. The improvements were 3%, 8% and 9%. These benchmark steps are much more IO-bound than the steps that do range queries.
From the cz11f_c32r128 config that disables the InnoDB doublewrite buffer:
- the impact on write-heavy steps is large -- from 1% to 36% faster.
- the impact on range query + write steps is positive but small. The improvements were 0%, 2% and 15%. Note that these steps are not as IO-bound as point query + write steps and the range queries do ~0.3 reads per query (see here).
- the impact on point query + write steps is positive and larger. The improvements were 14%, 41% and 42%.
by Mark Callaghan (noreply@blogger.com)