a curated list of database news from authoritative sources

November 24, 2025

What does writeConcern: {w: 1} really mean? Isolation and Durability

In MongoDB, a write concern of w:1 indicates that a write operation is considered successful once the primary node acknowledges it, without waiting for the data to be replicated to secondary nodes. While this reduces latency, it also introduces the risk that if the primary fails before replication occurs, the written data could be lost. In replica sets with multiple voters, such writes can be rolled back if a failure happens before a majority acknowledges the change.

This is not the default setting. Most clusters (Primary-Secondary-Secondary) use an implicit w:majority write concern, which ensures durability in the event of a zone failure. The implicit default write concern is w:1 only when an arbiter is present (Primary-Secondary-Arbiter) or when the topology lowers the number of data-bearing voters.
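
If you are unsure which default applies to your cluster, you can query it; a minimal check, assuming a mongosh session connected to your replica set (available since MongoDB 4.4, field names may vary by version):

db.adminCommand({ getDefaultRWConcern: 1 })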

For performance reasons, you may sometimes write with w:1. However, it's important to understand the consequences this setting might have in certain failure scenarios. To clarify, here is an example.

I started a three-node replica set using Docker:

docker network create lab

docker run -d --network lab --name m1 --hostname m1 mongo --bind_ip_all --replSet rs

docker run -d --network lab --name m2 --hostname m2 mongo --bind_ip_all --replSet rs

docker run -d --network lab --name m3 --hostname m3 mongo --bind_ip_all --replSet rs

docker exec -it m1 mongosh --host m1 --eval '
 rs.initiate( {_id: "rs", members: [
  {_id: 0, priority: 3, host: "m1:27017"},
  {_id: 1, priority: 2, host: "m2:27017"},
  {_id: 2, priority: 1, host: "m3:27017"}]
 });
'

I created a collection with one "old" document:

docker exec -it m1 mongosh --host m1 --eval '
  db.myCollection.drop();
  db.myCollection.insertOne(
   {name: "old"},
   {writeConcern: {w: "majority", wtimeout: 15000}}
);
'

I checked that the document is there:

docker exec -it m1 mongosh --host m1 --eval 'db.myCollection.find()'

[ { _id: ObjectId('691df945727482ee30fa3350'), name: 'old' } ]

I disconnected two nodes, so I no longer had a majority. However, I quickly inserted a new document before the primary stepped down and became a secondary that would not accept new writes:

docker network disconnect lab m2

docker network disconnect lab m3

docker exec -it m1 mongosh --host m1 --eval '
  db.myCollection.insertOne(
   {name: "new"},
   {writeConcern: {w: "1", wtimeout: 15000}}
);
'

Note the use of writeConcern: {w: "1"} to explicitly relax the durability guarantee. Without this setting, the default is "majority". In that case, the write operation would have waited until a timeout, allowing the application to recognize that durability could not be guaranteed and that the write was unsuccessful.
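
For comparison, here is a sketch of the same insert with an explicit majority write concern and a timeout; in this partitioned state it is expected to return a write concern timeout error rather than a normal acknowledgment (the document name is illustrative and the exact error text varies by version):

docker exec -it m1 mongosh --host m1 --eval '
  db.myCollection.insertOne(
   {name: "would-time-out"},
   {writeConcern: {w: "majority", wtimeout: 15000}}
);
'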

With writeConcern: {w: 1}, the operation was acknowledged and the data became visible:

docker exec -it m1 mongosh --host m1 --eval 'db.myCollection.find()'

[
  { _id: ObjectId('691df945727482ee30fa3350'), name: 'old' },
  { _id: ObjectId('691dfa0ff09d463d36fa3350'), name: 'new' }
]

Keep in mind that this is visible when using the default 'local' read concern, but not when using 'majority':

docker exec -it m1 mongosh --host m1 --eval '
db.myCollection.find().readConcern("majority")
'

[
  { _id: ObjectId('691df945727482ee30fa3350'), name: 'old' }
]

I checked the Oplog to confirm that the idempotent version of my change was present:

docker exec -it m1 mongosh --host m1 local --eval '
  db.oplog.rs
    .find({ns:"test.myCollection"},{op:1, o:1, t:1})
    .sort({ ts: -1 });
'

[
  {
    op: 'i',
    o: { _id: ObjectId('691dfa0ff09d463d36fa3350'), name: 'new' },
    t: Long('1')
  },
  {
    op: 'i',
    o: { _id: ObjectId('691df945727482ee30fa3350'), name: 'old' },
    t: Long('1')
  }
]

The primary node accepted w:1 writes only briefly, during the interval between losing quorum and stepping down. Afterwards, it automatically switched to SECONDARY, and since no quorum was present, there was no PRIMARY. This state can persist for some time:

docker exec -it m1 mongosh --host m1 --eval '
 rs.status().members
'

[
  {
    _id: 0,
    name: 'm1:27017',
    health: 1,
    state: 2,
    stateStr: 'SECONDARY',
    uptime: 1172,
    optime: { ts: Timestamp({ t: 1763572239, i: 1 }), t: Long('1') },
    optimeDate: ISODate('2025-11-19T17:10:39.000Z'),
    optimeWritten: { ts: Timestamp({ t: 1763572239, i: 1 }), t: Long('1') },
    optimeWrittenDate: ISODate('2025-11-19T17:10:39.000Z'),
    lastAppliedWallTime: ISODate('2025-11-19T17:10:39.685Z'),
    lastDurableWallTime: ISODate('2025-11-19T17:10:39.685Z'),
    lastWrittenWallTime: ISODate('2025-11-19T17:10:39.685Z'),
    syncSourceHost: '',
    syncSourceId: -1,
    infoMessage: '',
    configVersion: 1,
    configTerm: 1,
    self: true,
    lastHeartbeatMessage: ''
  },
  {
    _id: 1,
    name: 'm2:27017',
    health: 0,
    state: 8,
    stateStr: '(not reachable/healthy)',
    uptime: 0,
    optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
    optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
    optimeWritten: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
    optimeDate: ISODate('1970-01-01T00:00:00.000Z'),
    optimeDurableDate: ISODate('1970-01-01T00:00:00.000Z'),
    optimeWrittenDate: ISODate('1970-01-01T00:00:00.000Z'),
    lastAppliedWallTime: ISODate('2025-11-19T17:10:34.194Z'),
    lastDurableWallTime: ISODate('2025-11-19T17:10:34.194Z'),
    lastWrittenWallTime: ISODate('2025-11-19T17:10:34.194Z'),
    lastHeartbeat: ISODate('2025-11-19T17:26:03.626Z'),
    lastHeartbeatRecv: ISODate('2025-11-19T17:10:37.153Z'),
    pingMs: Long('0'),
    lastHeartbeatMessage: 'Error connecting to m2:27017 :: caused by :: Could not find address for m2:27017: SocketException: onInvoke :: caused by :: Host not found (authoritative)',
    syncSourceHost: '',
    syncSourceId: -1,
    infoMessage: '',
    configVersion: 1,
    configTerm: 1
  },
  {
    _id: 2,
    name: 'm3:27017',
    health: 0,
    state: 8,
    stateStr: '(not reachable/healthy)',
    uptime: 0,
    optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
    optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
    optimeWritten: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
    optimeDate: ISODate('1970-01-01T00:00:00.000Z'),
    optimeDurableDate: ISODate('1970-01-01T00:00:00.000Z'),
    optimeWrittenDate: ISODate('1970-01-01T00:00:00.000Z'),
    lastAppliedWallTime: ISODate('2025-11-19T17:10:34.194Z'),
    lastDurableWallTime: ISODate('2025-11-19T17:10:34.194Z'),
    lastWrittenWallTime: ISODate('2025-11-19T17:10:34.194Z'),
    lastHeartbeat: ISODate('2025-11-19T17:26:03.202Z'),
    lastHeartbeatRecv: ISODate('2025-11-19T17:10:37.153Z'),
    pingMs: Long('0'),
    lastHeartbeatMessage: 'Error connecting to m3:27017 :: caused by :: Could not find address for m3:27017: SocketException: onInvoke :: caused by :: Host not found (authoritative)',
    syncSourceHost: '',
    syncSourceId: -1,
    infoMessage: '',
    configVersion: 1,
    configTerm: 1
  }
]

When there is no primary, no further writes are accepted—even if you set writeConcern: {w: "1"}:

docker exec -it m1 mongosh --host m1 --eval '
  db.myCollection.insertOne(
   {name: "new"},
   {writeConcern: {w: "1", wtimeout: 15000}}
);
'

MongoServerError: not primary

The system may remain in this state for some time. When at least one sync replica comes back online, it will pull the Oplog and synchronize the write to the quorum, making the acknowledged write durable.
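
One way to watch a returning secondary catch up is to print the replication lag from the current member; a minimal sketch using rs.printSecondaryReplicationInfo() (the output format varies by version):

docker exec -it m1 mongosh --host m1 --eval '
 rs.printSecondaryReplicationInfo()
'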

Using writeConcern: {w: "1"} boosts performance, as the primary doesn't wait for acknowledgments from other nodes. This write concern tolerates a single node failure since the quorum remains, and can even withstand another brief failure. However, if a failure persists, additional writes aren't accepted, which limits the number of writes acknowledged only by the primary. Usually, when a node recovers, it synchronizes via the Oplog, and the primary resumes accepting writes.

In the common scenario where brief, transient failures may occur, using writeConcern: {w: "1"} means the database remains available if the failure is just a momentary glitch. However, the point here is to illustrate the worst-case scenario. If one node accepts a write that is not acknowledged by any other node, and this node fails before any others recover, that write may be lost.

To illustrate this scenario, I first disconnected this node and then reconnected the remaining ones:

docker network disconnect lab m1
docker network    connect lab m2
docker network    connect lab m3

In this worst-case scenario, a new quorum is formed with a state that predates when the write could be synchronized to the replicas. However, progress continues because a new primary is established:

docker exec -it m2 mongosh --host m2 --eval '
 rs.status().members
'

[
  {
    _id: 0,
    name: 'm1:27017',
    health: 0,
    state: 8,
    stateStr: '(not reachable/healthy)',
    uptime: 0,
    optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
    optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
    optimeWritten: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
    optimeDate: ISODate('1970-01-01T00:00:00.000Z'),
    optimeDurableDate: ISODate('1970-01-01T00:00:00.000Z'),
    optimeWrittenDate: ISODate('1970-01-01T00:00:00.000Z'),
    lastAppliedWallTime: ISODate('2025-11-19T17:10:34.194Z'),
    lastDurableWallTime: ISODate('2025-11-19T17:10:34.194Z'),
    lastWrittenWallTime: ISODate('2025-11-19T17:10:34.194Z'),
    lastHeartbeat: ISODate('2025-11-19T17:39:02.913Z'),
    lastHeartbeatRecv: ISODate('2025-11-19T17:10:38.153Z'),
    pingMs: Long('0'),
    lastHeartbeatMessage: 'Error connecting to m1:27017 :: caused by :: Could not find address for m1:27017: SocketException: onInvoke :: caused by :: Host not found (authoritative)',
    syncSourceHost: '',
    syncSourceId: -1,
    infoMessage: '',
    configVersion: 1,
    configTerm: 1
  },
  {
    _id: 1,
    name: 'm2:27017',
    health: 1,
    state: 1,
    stateStr: 'PRIMARY',
    uptime: 1952,
    optime: { ts: Timestamp({ t: 1763573936, i: 1 }), t: Long('2') },
    optimeDate: ISODate('2025-11-19T17:38:56.000Z'),
    optimeWritten: { ts: Timestamp({ t: 1763573936, i: 1 }), t: Long('2') },
    optimeWrittenDate: ISODate('2025-11-19T17:38:56.000Z'),
    lastAppliedWallTime: ISODate('2025-11-19T17:38:56.678Z'),
    lastDurableWallTime: ISODate('2025-11-19T17:38:56.678Z'),
    lastWrittenWallTime: ISODate('2025-11-19T17:38:56.678Z'),
    syncSourceHost: '',
    syncSourceId: -1,
    infoMessage: 'Could not find member to sync from',
    electionTime: Timestamp({ t: 1763573886, i: 1 }),
    electionDate: ISODate('2025-11-19T17:38:06.000Z'),
    configVersion: 1,
    configTerm: 2,
    self: true,
    lastHeartbeatMessage: ''
  },
  {
    _id: 2,
    name: 'm3:27017',
    health: 1,
    state: 2,
    stateStr: 'SECONDARY',
    uptime: 58,
    optime: { ts: Timestamp({ t: 1763573936, i: 1 }), t: Long('2') },
    optimeDurable: { ts: Timestamp({ t: 1763573936, i: 1 }), t: Long('2') },
    optimeWritten: { ts: Timestamp({ t: 1763573936, i: 1 }), t: Long('2') },
    optimeDate: ISODate('2025-11-19T17:38:56.000Z'),
    optimeDurableDate: ISODate('2025-11-19T17:38:56.000Z'),
    optimeWrittenDate: ISODate('2025-11-19T17:38:56.000Z'),
    lastAppliedWallTime: ISODate('2025-11-19T17:38:56.678Z'),
    lastDurableWallTime: ISODate('2025-11-19T17:38:56.678Z'),
    lastWrittenWallTime: ISODate('2025-11-19T17:38:56.678Z'),
    lastHeartbeat: ISODate('2025-11-19T17:39:02.679Z'),
    lastHeartbeatRecv: ISODate('2025-11-19T17:39:01.178Z'),
    pingMs: Long('0'),
    lastHeartbeatMessage: '',
    syncSourceHost: 'm2:27017',
    syncSourceId: 1,
    infoMessage: '',
    configVersion: 1,
    configTerm: 2
  }
]

This replica set has a primary and is accepting new writes with a new Raft term (configTerm: 2). However, during recovery, it ignored a pending write from the previous term (configTerm: 1) that originated from an unreachable node.

The write acknowledged with w:1 after the quorum was lost, but before the primary stepped down, is missing on the new primary:

docker exec -it m2 mongosh --host m2 --eval '
  db.myCollection.find()
'

[ { _id: ObjectId('691df945727482ee30fa3350'), name: 'old' } ]

After reconnecting the first node, it enters recovery mode and synchronizes with the other nodes, all of which are on term 2:

docker network    connect lab m1

docker exec -it m1 mongosh --host m1 --eval '
 db.myCollection.find()
'

MongoServerError: Oplog collection reads are not allowed while in the rollback or startup state.

The rollback process employs the 'Recover To A Timestamp' algorithm to restore the node to the highest majority-committed point. While rolling back, the node transitions to the ROLLBACK state, suspends user operations, finds the common point with the sync source, and recovers to the stable timestamp.
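
By default, the documents removed during rollback are also exported as BSON files under a rollback directory in the dbPath, so they can be recovered manually if needed. A quick way to look, assuming the default /data/db path inside the container (the directory layout varies by version); the exported files can be inspected with bsondump from the MongoDB Database Tools:

docker exec -it m1 ls -R /data/db/rollback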

After recovery, changes made in term 1 that did not receive quorum acknowledgment are truncated from the Oplog. This behavior is an extension to the standard Raft algorithm:

docker exec -it m1 mongosh --host m1 local --eval '
  db.oplog.rs
    .find({ns:"test.myCollection"},{op:1, o:1, t:1})
    .sort({ ts: -1 });
'

[
  {
    op: 'i',
    o: { _id: ObjectId('691df945727482ee30fa3350'), name: 'old' },
    t: Long('1')
  }
]

A w:1 write that was visible at one point, and acknowledged to the client, but never actually committed to the quorum, has now disappeared:

docker exec -it m1 mongosh --host m1 --eval '
  db.myCollection.find()
'

[ { _id: ObjectId('691df945727482ee30fa3350'), name: 'old' } ]

With writeConcern: {w: 1}, the developer must be aware that such an issue can arise if a write occurs immediately after quorum is lost and the primary fails before the other nodes recover.

While SQL databases typically abstract physical concerns such as persistence and replication, MongoDB shifts more responsibility to developers. By default, acknowledged writes are considered durable only once a majority of nodes confirm they are synced to disk.

In some cases, strict write guarantees are unnecessary and can be relaxed for improved performance. Developers can adjust the write concern per operation or at the connection level to suit their application's needs (see the sketch after this list). When using writeConcern: {w: 1}, this affects two aspects of ACID:

  • Durability: If there is a failure impacting both the primary and replicas, and only replicas recover, writes not acknowledged by replicas may be rolled back—similar to PostgreSQL's synchronous_commit = local.
  • Isolation: Reads with the default 'local' read concern can see writes that were acknowledged to the client but not yet majority-committed, and that may later be rolled back; reads with 'majority' read concern only return majority-committed data. There is no PostgreSQL equivalent to MongoDB's 'majority' read concern (MVCC visibility that tracks what has been applied on the replicas).
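
A sketch of how these settings can be declared once at the connection level and overridden per operation: the first line is a connection string setting client-wide defaults (hosts and database names reuse the lab above; option names follow the standard MongoDB connection string format), and the second is a per-operation override on a hypothetical logs collection, trading durability for latency on a non-critical write:

mongodb://m1:27017,m2:27017,m3:27017/test?replicaSet=rs&w=majority&wtimeoutMS=5000&readConcernLevel=majority

db.logs.insertOne({event: "page_view"}, {writeConcern: {w: 1}})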

Although writeConcern: {w: 1} is sometimes described as permitting 'dirty reads', this term is misleading because it is also used as a synonym for 'read uncommitted' in relational databases. In SQL databases with a single read-write instance, 'read uncommitted' refers to single-server isolation (the I in ACID). However, with writeConcern: {w: 1} and a 'majority' read concern, uncommitted reads do not occur and only committed changes are visible to other sessions. The real challenge involves durability (the D in ACID) in the context of a replica set. With traditional SQL database replication, writes might be visible before all peers (replica, WAL, application) have fully acknowledged them, since there is no single atomic operation covering them all. MongoDB's w:1 is similar, and calling it a 'dirty read' is useful to highlight the implications for developers.

Analyzing the Heartbeat of the MySQL Server: A Look at Repository Statistics

The MySQL database server is a foundational component of the open-source world. While its impact is undeniable, looking at the raw statistics of its core source code repository reveals a dynamic and sometimes surprising development history. By analyzing the total lines of code inserted, the number of commits over the years, and unique contributors, we […]

November 23, 2025

Data Locality vs. Independence: Which Should Your Database Prioritize?

Understand how the principle of "store together what is accessed together" is a game-changer for database performance and scalability.

When your application needs several pieces of data at once, the fastest approach is to read them from a single location in a single call. In a document database, developers can decide what is stored together, both logically and physically.

Fragmentation has never been beneficial for performance. In databases, the proximity of data — on disk, in memory or across the network — is crucial for scalability. Keeping related data together allows a single operation to fetch everything needed, reducing disk I/O, memory cache misses and network round-trips, thereby making performance more predictable.

The principle “store together what is accessed together” is central to modeling in document databases. Yet its purpose is to allow developers to control the physical storage layout, even with flexible data structures.

In contrast, SQL databases were designed for data independence — allowing users to interact with a logical model separate from the physical implementation managed by a database administrator.

Today, the trend is not to separate development and operations, allowing faster development cycles without the complexity of coordinating multiple teams or shared schemas. Avoiding the separation into logical and physical models further simplifies the process.

Understanding the core principle of data locality is essential today, especially as many databases emulate document databases or offer similar syntax on top of SQL. To qualify as a document database, it’s not enough to accept JSON documents with a developer-friendly syntax.

The database must also preserve those documents intact in storage so that accessing them has predictable performance. Whether a database exposes a relational or a document API, it is essential to know whether your objective is data independence or data locality.

Why Locality Still Matters in Modern Infrastructure

Modern hardware still suffers from penalties for scattered access. Hard disk drives (HDDs) highlighted the importance of locality because seek and rotational latency are more impactful than transfer speed, especially for online transaction processing (OLTP) workloads.

While solid state drives (SSDs) remove mechanical delays, random writes remain expensive, and cloud storage adds latency due to network access to storage. Even in-memory access isn’t immune: on multisocket servers, non-uniform memory access (NUMA) causes varying access times depending on where the data was loaded into memory by the first access, relative to the CPU core that processes it later.

Scale-out architecture further increases complexity. Vertical scaling — keeping all reads and writes on a single instance with shared disks and memory — has capacity limits. Large instances are expensive, and scaling them down or up often requires downtime, which is risky for always-on applications.

For example, you might need your maximum instance size for Black Friday but would have to scale up progressively in the lead-up, incurring downtime as usage increases. Without horizontal scalability, you end up provisioning well above your average load “just in case,” as in on-premises infrastructures sized years in advance for occasional peaks — something that can be prohibitively costly in the cloud.

Horizontal scaling allows adding or removing nodes without downtime. However, more nodes increase the likelihood of distributed queries, in which operations that once hit local memory must now traverse the network, introducing unpredictable latency. Data locality becomes critical with scale-out databases.

To create scalable database applications, developers should understand storage organization and prioritize single-document operations for performance-critical transactions. CRUD functions (insert, find, update, delete) targeting a single document in MongoDB are always handled by a single node, even in a sharded deployment. If that document isn’t in memory, it can be read from disk in a single I/O operation. Modifications are applied to the in-memory copy and written back as a single document during asynchronous checkpoints, avoiding on-disk fragmentation.
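
As an illustration, a sketch of such a single-document read-modify-write on a hypothetical orders collection; the filter identifies one document, and both changes are applied atomically by the node holding it:

db.orders.updateOne(
  { _id: "order-1001" },
  {
    $push: { items: { sku: "A-7", qty: 2 } },
    $inc:  { total: 59.90 }
  }
)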

In MongoDB, the WiredTiger storage engine stores each document’s fields together in contiguous storage blocks, allowing developers to follow the principle “store together what is accessed together.” By avoiding cross-document joins, such as the $lookup operation in queries, this design helps prevent scatter-gather operations internally, which promotes consistent performance. This supports predictable performance regardless of document size, update frequency or cluster scale.

The Relational Promise: Physical Data Independence

For developers working with NoSQL databases, what I described above seems obvious: There is one single data model — the domain model — defined in the application, and the database stores exactly that model.

The MongoDB data modeling workshop defines a database schema as the physical model that describes how the data is organized in the database. In relational databases, the logical model is typically independent of the physical storage model, regardless of the data type used, because they serve different purposes.

SQL developers work with a relational model that is mapped to their object model via object relational mapping (ORM) tooling or hand-coded SQL joins. The models and schemas are normalized for generality, not necessarily optimized for specific application access patterns.

The goal of the relational model was to serve online interactive use by non-programmers and casual users by providing an abstraction that hides physical concerns. This includes avoiding data anomalies through normalization and enabling declarative query access without procedural code. Physical optimizations, like indexes, are considered implementation details. You will not find CREATE INDEX in the SQL standard.

In practice, a SQL query planner chooses access paths based on statistics. When writing JOIN clauses, the order of tables in the FROM clause should not matter. The SQL query planner reorders based on cost estimates. The database guarantees logical consistency, at least in theory, even with concurrent users and internal replication. The SQL approach is database-centric: rules, constraints and transactional guarantees are defined in the relational database, independent of specific use cases or table sizes.

Today, most relational databases sit behind applications. End users rarely interact with them directly, except in analytical or data science contexts. Applications can enforce data integrity and handle anomalies in code, and developers understand data structures and algorithms. Nonetheless, relational database experts still advise keeping constraints, stored procedures, transactions, and joins within the database.

The physical storage remains abstracted — indexes, clustering, and partitions are administrator-level, not application-level, concepts, as if the application developers were like the non-programmer casual users described in the early papers about relational databases.

How Codd’s Rules Apply to SQL/JSON Documents

Because data locality matters, some relational databases have mechanisms to enforce it internally. For example, Oracle has long supported “clustered tables” for co-locating related rows from multiple tables, and more recently offers a choice for JSON storage as either binary JSON (OSON, Oracle’s native binary JSON) or decomposed relational rows (JSON-relational duality views). However, those physical attributes are declared and deployed in the database using a specific data definition language (DDL) and are not exposed to the application developers. This reflects Codd’s “independence” rules:

  • Rule 8: Physical data independence
  • Rule 9: Logical data independence
  • Rule 10: Integrity independence
  • Rule 11: Distribution independence

Rules 8 and 11 relate directly to data locality: The user is not supposed to care whether data is physically together or distributed. The database is open to users who are unaware of the physical data model, access paths and algorithms. Developers do not know what is replicated, sharded or distributed across multiple data centers.

Where the SQL Abstraction Begins to Weaken

In practice, no relational database perfectly achieves these rules. Performance tuning often requires looking at execution plans and physical data layouts. Serializable isolation is rarely used due to scalability limitations of two-phase locking, leading developers to fall back to weaker isolation levels or to explicit locking (SELECT ... FOR UPDATE). Physical co-location mechanisms — hash clusters, attribute clustering — exist, but are difficult to size and maintain optimally without precise knowledge of access patterns. They often require regular data reorganization as updates can fragment it again.

The normalized model is inherently application-agnostic, so optimizing for locality often means breaking data independence (denormalizing, maintaining materialized views, accepting stale reads from replicas, disabling referential integrity). With sharding, constraints like foreign keys and unique indexes generally cannot be enforced across shards. Transactions must be carefully ordered to avoid long waits and deadlocks. Even with an abstraction layer, applications must be aware of the physical distribution for some operations.

The NoSQL Approach: Modeling for Access Patterns

As data volumes and latency expectations grow, a different paradigm has emerged: give developers complete control rather than an abstraction with some exceptions.

NoSQL databases adopt an application-first approach: The physical model matches the access patterns, and the responsibility for maintaining integrity and transactional scope is pushed to the application. Initially, many NoSQL stores delegated all responsibility, including consistency, to developers, acting as “dumb” key-value or document stores. Most lacked ACID (atomicity, consistency, isolation and durability) transactions or query planners. If secondary indexes were present, they needed to be queried explicitly.

This NoSQL approach was the opposite of the relational database world: Instead of one shared, normalized database, there were many purpose-built data stores per application. It reduces the performance and scalability surprises, but at the price of more complexity.

MongoDB’s Middle Road for Flexible Schemas

MongoDB evolved by adding essential relational database capabilities — indexes, query planning, multidocument ACID transactions — while keeping the application-first document model. When you insert a document, it is stored as a single unit.

In WiredTiger, the MongoDB storage engine, BSON documents (binary JSON with additional datatypes and indexing capabilities) are stored in B-trees with variable-sized leaf pages, allowing large documents to remain contiguous, which differs from the fixed-size page structures used by many relational databases. This avoids splitting a business object across multiple blocks and ensures consistent latency for operations that appear as a single operation to developers.

Updates in MongoDB are applied in memory. Committing them as in-place changes on disk would fragment pages. Instead, WiredTiger uses reconciliation to write a complete new version at checkpoints — similar to copy-on-write filesystems, but with a flexible block size. This may cause write amplification, but preserves document locality. With appropriately sized instances, these writes occur in the background and do not affect in-memory write latency.

Locality defined at the application’s document schema flows all the way down to the storage layer, something that relational database engines typically cannot match with their goal of physical data independence.

How Data Locality Improves Application Performance

Designing for locality simplifies development and operations in several ways:

  • Transactions: A business change affecting a single aggregate (in the domain-driven design sense) becomes a single atomic read–modify–write on one document — no multiple roundtrips like BEGIN, SELECT ... FOR UPDATE, multiple updates and COMMIT.
  • Queries and indexing: Related data in one document avoids SQL joins and ORM lazy/eager mapping. A single compound index can cover filters and projections across fields that would otherwise be in separate tables, ensuring predictable plans without join-order uncertainty.
  • Development: The same domain model in the application is used directly as the database schema. Developers can reason about access patterns without mapping to a separate model, making latency and plan stability predictable.
  • Scalability: Most operations targeting a single aggregate, with shard keys chosen accordingly, can be routed to one node, avoiding scatter–gather fan-out for critical use cases.

MongoDB’s optimistic concurrency control avoids locks, though it requires retry logic on write conflict errors. For single-document calls, retries are handled transparently by the database, which has a complete view of the transaction intent, making them simpler and faster.
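
When a change does span multiple documents, the shell and drivers expose a callback API that retries the transaction on transient write conflicts. A minimal sketch in mongosh, assuming hypothetical orders and stock collections in the test database:

const session = db.getMongo().startSession();
session.withTransaction(() => {
  const orders = session.getDatabase("test").orders;
  const stock  = session.getDatabase("test").stock;
  orders.insertOne({ _id: "order-1002", items: [{ sku: "A-7", qty: 1 }] });
  stock.updateOne({ sku: "A-7" }, { $inc: { qty: -1 } });
});
session.endSession();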

Embedding vs. Referencing in Document Data Modeling

Locality doesn’t mean “embed everything.” It means: Embed what you consistently access together. Bounded one-to-many relationships (such as an order and its line items) are candidates for embedding. Rarely updated references and dimensions can also be duplicated and embedded. High-cardinality or unbounded-growth relationships, or independently updated entities, are better represented as separate documents and can be co-located via shard keys.
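
For example, a sketch of a hypothetical order aggregate: the bounded line items are embedded because they are always read with the order, while the customer, which changes independently and is shared across orders, is referenced by id:

{
  _id: "order-1001",
  customerId: "cust-42",
  status: "shipped",
  items: [
    { sku: "A-7", qty: 2, price: 29.95 },
    { sku: "B-3", qty: 1, price: 9.50 }
  ],
  total: 69.40
}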

MongoDB’s compound and multikey indexes support embedded fields, maintaining predictable, selective access without joins. Embedding within the same document is the only way to guarantee co-location at the block level. Multiple documents in a single collection are not stored close together, except for small documents inserted at the same time, as they might share the same block. In sharding, the shard key ensures co-location on the same node but not within the same block.
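
Continuing the hypothetical orders example, a minimal sketch of indexing and sharding choices aligned with these access patterns (the shard key and field names are illustrative):

db.orders.createIndex({ customerId: 1, "items.sku": 1 })
sh.shardCollection("test.orders", { customerId: 1 })

The compound index is multikey on the embedded items.sku field, and the shard key keeps all of a customer's orders on the same shard.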

In MongoDB, locality is an explicit design choice in domain-driven design:

  • Identify aggregates that change and are read together.
  • Store them in one document when appropriate.
  • Use indexes aligned with access paths.
  • Choose shard keys so related operations route to one node.

What MongoDB Emulations Miss About Locality

Given the popularity of the document model, some cloud services offer MongoDB-like APIs on top of SQL databases. These systems may expose a MongoDB-like API while retaining a relational storage model, which typically does not maintain the same level of physical locality.

Relational databases store rows in fixed-size blocks (often 8 KB). Large documents must be split across multiple blocks. Here are some examples in popular SQL databases:

  • PostgreSQL JSONB: Stores JSON in heap tables and large documents in many chunks, using TOAST, the oversized attribute storage technique. The document is compressed and split into chunks stored in another table, accessed via an index. Reading a large document is like a nested loop join between the row and its TOAST table.
  • Oracle JSON-Relational Duality Views: Map JSON documents to relational tables, preserving data independence rather than physical locality. Elements accessed together may be scattered across blocks, requiring internal joins, multiple I/Os and possibly network calls in distributed setups.

In both scenarios, the documents are divided into either binary chunks or normalized tables. Although the API resembles MongoDB, it remains a SQL database that lacks data locality. Instead, it provides an abstraction that keeps the developer unaware of internal processes until they inspect the execution plan and understand the database internals.

Conclusion

“Store together what is accessed together” reflects realities across sharding, I/O patterns, transactions, and memory cache efficiency. Relational database engines abstract away physical layout, which works well for centralized, normalized databases serving multiple applications in a single monolithic server. At a larger scale, especially in elastic cloud environments, horizontal sharding is essential — and often incompatible with pure data independence. Developers must account for locality.

In SQL databases, this means denormalizing, duplicating reference data, and avoiding cross-shard constraints. The document model, when the database truly enforces locality down to the storage layer, offers an alternative to this abstraction and its exceptions.

In MongoDB, locality can be explicitly defined at the application level while still providing indexing, query planning and transactional features. When assessing “MongoDB-compatible” systems on relational engines, it is helpful to determine whether the engine stores aggregates contiguously on disk and routes them to a single node by design. If not, the performance characteristics may differ from those of a document database that maintains physical locality.

Both approaches are valid. In database-first deployment, developers depend on in-database declarations to ensure performance, working alongside the database administrator and using tools like execution plans for troubleshooting. In contrast, application-first deployment shifts more responsibility to developers, who must validate both the application’s functionality and its performance.

November 22, 2025

Challenges compiling old C++ code on modern Linux

I often compile old versions of MySQL, MariaDB, Postgres and RocksDB in my search for performance regressions. Compiling is easy with Postgres as they do a great job at avoiding compilation warnings and I never encounter broken builds. Certainly the community gets the credit for this, but I suspect their task is easier because they use C.  This started as a LinkedIn post.

I expect people to disagree, and I am far from a C++ expert, but here goes ...

tl;dr - if you maintain widely used header files (widely used by C++ projects), consider not removing an include that you don't really need (like <cstdint>), because such a removal is likely to break builds for older releases of projects that use your header.

I have more trouble compiling older releases of C++ projects. For MySQL I have a directory on GitHub that includes patches that must be applied. And for MySQL I have to patch all 5.6 versions, 5.7 versions up to 5.7.33, and 8.0 versions up to 8.0.23. The most common reason for the patch is missing C++ includes (like <cstdint>).

For RocksDB with gcc I don't have to patch files but I need to use gcc-11 for RocksDB 6.x and gcc-12 for RocksDB 7.x.

For RocksDB with clang I don't have to patch files for RocksDB 8.x, 9.x and 10.x while I do have to patch 6.x and 7.x. For RocksDB 7.10 I need to edit two files to add <cstdint>. The files are:

  • table/block_based/data_block_hash_index.h
  • util/string_util.h

All of this is true for Ubuntu 24.04 with clang 18.1.3 and gcc 13.3.0.

November 21, 2025

INNER JOIN and LEFT OUTER JOIN in MongoDB (with $lookup and $unwind)

We often explain $lookup as similar to a relational JOIN, but there’s a key difference in the returned output format. In SQL, a JOIN returns a flat, tabular result set, which means values from the “one” side of a one‑to‑many relationship are repeated for each matching row on the “many” side. In MongoDB, $lookup outputs related data as arrays embedded within each document, so the “one” side appears only once and related items are grouped together.

Here’s the funny part: relational databases were invented to avoid duplication through relational normalization, yet a JOIN immediately re‑introduces duplication in the query results, consumed as one stream of records. That’s why application code or ORMs often have to break those repeated values back into a nested structure. In MongoDB, when data is stored using references, $lookup can resolve those references and embed the related documents directly in the output, avoiding duplication and producing a structure that’s naturally more application‑friendly. MongoDB is a document database: either you embed on write, storing related data in a single document, or you embed on read, resolving references across multiple documents with $lookup. The result is the same either way: an object aggregate in JSON, or rather BSON, that is fetched as-is by the drivers.

Still, MongoDB can provide the equivalent of an INNER JOIN or a LEFT OUTER JOIN by adding an $unwind stage after the $lookup.

I'll demonstrate this with the traditional department and employee schema, using the names from the SCOTT example schema of the Oracle Database, but as MongoDB collections:

db.createCollection("dept");
db.dept.insertMany([
  { deptno: 10, dname: "ACCOUNTING", loc: "NEW YORK" },
  { deptno: 20, dname: "RESEARCH", loc: "DALLAS" },
  { deptno: 30, dname: "SALES", loc: "CHICAGO" },
  { deptno: 40, dname: "OPERATIONS", loc: "BOSTON" }
]);
db.createCollection("emp");
db.emp.insertMany([
  { empno: 7839, ename: "KING",  deptno: 10 },
  { empno: 7698, ename: "BLAKE", deptno: 30 },
  { empno: 7782, ename: "CLARK", deptno: 10 },
  { empno: 7566, ename: "JONES", deptno: 20 },
  { empno: 7788, ename: "SCOTT", deptno: 20 },
  { empno: 7902, ename: "FORD",  deptno: 20 },
  { empno: 7369, ename: "SMITH", deptno: 20 },
  { empno: 7499, ename: "ALLEN", deptno: 30 },
  { empno: 7521, ename: "WARD",  deptno: 30 },
  { empno: 7654, ename: "MARTIN",deptno: 30 },
  { empno: 7844, ename: "TURNER",deptno: 30 },
  { empno: 7876, ename: "ADAMS", deptno: 20 },
  { empno: 7900, ename: "JAMES", deptno: 30 },
  { empno: 7934, ename: "MILLER",deptno: 10 }
]);

Note that the "OPERATIONS" department has no employees at this point in time.

$lookup: join with no data duplication

In a SQL database, joining tables duplicates data to ensure each record has all necessary information for independent processing. When loading this data as objects rather than records, the application must detect where objects begin and end and eliminate duplicates—or rely on an ORM to do this automatically. Here is an example of a join to show the departments with their employees:

SQL> select dept.*,empno,ename 
     from dept left outer join emp on dept.deptno = emp.deptno
;

   DEPTNO DNAME         LOC            EMPNO ENAME
_________ _____________ ___________ ________ _________
       10 ACCOUNTING    NEW YORK       7,782 CLARK
       10 ACCOUNTING    NEW YORK       7,839 KING
       10 ACCOUNTING    NEW YORK       7,934 MILLER
       20 RESEARCH      DALLAS         7,369 SMITH
       20 RESEARCH      DALLAS         7,566 JONES
       20 RESEARCH      DALLAS         7,788 SCOTT
       20 RESEARCH      DALLAS         7,876 ADAMS
       20 RESEARCH      DALLAS         7,902 FORD
       30 SALES         CHICAGO        7,499 ALLEN
       30 SALES         CHICAGO        7,521 WARD
       30 SALES         CHICAGO        7,654 MARTIN
       30 SALES         CHICAGO        7,698 BLAKE
       30 SALES         CHICAGO        7,844 TURNER
       30 SALES         CHICAGO        7,900 JAMES

14 rows selected.

MongoDB provides a consistent document model across both application code and database storage, and returns the same join result without duplication:

db.dept.aggregate([
  { $lookup: {
      from: "emp",
      localField: "deptno",
      foreignField: "deptno",
      as: "employees"
    }
  }
])

[
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: [
      { _id: ObjectId('691f1623f4a22d6613d4b0f1'), empno: 7839, ename: 'KING', deptno: 10 },
      { _id: ObjectId('691f1623f4a22d6613d4b0f3'), empno: 7782, ename: 'CLARK', deptno: 10 },
      { _id: ObjectId('691f1623f4a22d6613d4b0fe'), empno: 7934, ename: 'MILLER', deptno: 10 }
    ]
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'),deptno: 20,dname: 'RESEARCH',loc: 'DALLAS',
    employees: [
      { _id: ObjectId('691f1623f4a22d6613d4b0f4'), empno: 7566, ename: 'JONES', deptno: 20 },
      { _id: ObjectId('691f1623f4a22d6613d4b0f5'), empno: 7788, ename: 'SCOTT', deptno: 20 },
      { _id: ObjectId('691f1623f4a22d6613d4b0f6'), empno: 7902, ename: 'FORD', deptno: 20 },
      { _id: ObjectId('691f1623f4a22d6613d4b0f7'), empno: 7369, ename: 'SMITH', deptno: 20 },
      { _id: ObjectId('691f1623f4a22d6613d4b0fc'), empno: 7876, ename: 'ADAMS', deptno: 20 }
    ]
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'),deptno: 30,dname: 'SALES',loc: 'CHICAGO',
    employees: [
      { _id: ObjectId('691f1623f4a22d6613d4b0f2'), empno: 7698, ename: 'BLAKE', deptno: 30 },
      { _id: ObjectId('691f1623f4a22d6613d4b0f8'), empno: 7499, ename: 'ALLEN', deptno: 30 },
      { _id: ObjectId('691f1623f4a22d6613d4b0f9'), empno: 7521, ename: 'WARD', deptno: 30 },
      { _id: ObjectId('691f1623f4a22d6613d4b0fa'), empno: 7654, ename: 'MARTIN', deptno: 30 },
      { _id: ObjectId('691f1623f4a22d6613d4b0fb'), empno: 7844, ename: 'TURNER', deptno: 30 },
      { _id: ObjectId('691f1623f4a22d6613d4b0fd'), empno: 7900, ename: 'JAMES', deptno: 30 }
    ]
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dd'), deptno: 40,dname: 'OPERATIONS', loc: 'BOSTON',
    employees: []
  }
]

The $lookup operation in MongoDB is similar to a LEFT OUTER JOIN except that it keeps the data in a structured schema, to be consumed by the application.

$lookup + $unwind : inner join (with duplication)

The result of $lookup is the easiest form for the application to consume, but if you need record-like output for further processing, $unwind can unnest the array into one output document per array item:

db.dept.aggregate([
  { $lookup: {
      from: "emp",
      localField: "deptno",
      foreignField: "deptno",
      as: "employees"
    }
  },
  { $unwind: 
       "$employees" 
  }
])

[
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f1'), empno: 7839, ename: 'KING', deptno: 10 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f3'), empno: 7782, ename: 'CLARK', deptno: 10 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0fe'), empno: 7934, ename: 'MILLER', deptno: 10 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f4'), empno: 7566, ename: 'JONES', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f5'), empno: 7788, ename: 'SCOTT', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f6'), empno: 7902, ename: 'FORD', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f7'), empno: 7369, ename: 'SMITH', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0fc'), empno: 7876, ename: 'ADAMS', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'), deptno: 30, dname: 'SALES', loc: 'CHICAGO',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f2'), empno: 7698, ename: 'BLAKE', deptno: 30 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'), deptno: 30, dname: 'SALES', loc: 'CHICAGO',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f8'), empno: 7499, ename: 'ALLEN', deptno: 30 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'), deptno: 30, dname: 'SALES', loc: 'CHICAGO',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f9'), empno: 7521, ename: 'WARD', deptno: 30 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'), deptno: 30, dname: 'SALES', loc: 'CHICAGO',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0fa'), empno: 7654, ename: 'MARTIN', deptno: 30 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'), deptno: 30, dname: 'SALES', loc: 'CHICAGO',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0fb'), empno: 7844, ename: 'TURNER', deptno: 30 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'), deptno: 30, dname: 'SALES', loc: 'CHICAGO',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0fd'), empno: 7900, ename: 'JAMES', deptno: 30 }
  }
]

Internally, the implementation avoids materializing the array and immediately returns one document per match: the two stages, $lookup and $unwind, are optimized into a single lookup-unwind stage that acts like a SQL join.
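
You can check this with explain: in recent MongoDB versions, the reported pipeline typically shows the $unwind absorbed into the $lookup stage rather than appearing as a separate stage (the exact output fields vary by version):

db.dept.explain().aggregate([
  { $lookup: { from: "emp", localField: "deptno", foreignField: "deptno", as: "employees" } },
  { $unwind: "$employees" }
])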

Note that because the employees array for department 40 is empty, that department does not appear in the output. This is exactly like an INNER JOIN in SQL.

$lookup + $unwind with preserveNullAndEmptyArrays: left outer join

It is possible to preserve the LEFT OUTER JOIN output of $lookup through $unwind with preserveNullAndEmptyArrays: true:

db.dept.aggregate([
  { $lookup: {
      from: "emp",
      localField: "deptno",
      foreignField: "deptno",
      as: "employees"
    }
  },
  { $unwind: {
       path: "$employees" ,
       preserveNullAndEmptyArrays: true
     }
  }
])

[
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f1'), empno: 7839, ename: 'KING', deptno: 10 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f3'), empno: 7782, ename: 'CLARK', deptno: 10 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0fe'), empno: 7934, ename: 'MILLER', deptno: 10 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f4'), empno: 7566, ename: 'JONES', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f5'), empno: 7788, ename: 'SCOTT', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ... (truncated)
                                    

November 20, 2025

Performing Standby Datacentre Promotions of a Patroni Cluster

Unlike a standard multi-node Postgres replication cluster, one managed by Patroni has all failovers executed automatically. However, this is not the case for inter-datacentre failovers, when for instance a standby datacentre must take over from a failed primary. The following describes the mechanisms required to perform such a procedure when the case arises. […]

November 19, 2025

Data Retention Policy Implementation – How and Why

Is your PostgreSQL database feeling sluggish? Are SQL statements taking more time than in earlier days? Are you experiencing performance cliffs (unexpected, sudden drops in performance)? Are backups taking a long time to complete? Are you getting a bigger bill for storage? Are standby rebuilds and development refreshes becoming a herculean task? Are the auditors […]

November 18, 2025

Introducing the GA Release of the New Percona Operator for MySQL: More Replication Options on Kubernetes

The Percona Cloud Native team is happy to announce the general availability of the Percona Operator for MySQL, based on Percona Server for MySQL. This release introduces an additional Kubernetes-native approach to deploying and managing MySQL clusters with synchronous Group Replication, delivering the consistency required for organizations with business continuity needs. With this release, Percona […]