a curated list of database news from authoritative sources

November 24, 2025

Analyzing the Heartbeat of the MySQL Server: A Look at Repository Statistics

The MySQL database server is a foundational component of the open-source world. While its impact is undeniable, looking at the raw statistics of its core source code repository reveals a dynamic and sometimes surprising development history. By analyzing the total lines of code inserted, the number of commits over the years, and unique contributors, we […]

November 23, 2025

Data Locality vs. Independence: Which Should Your Database Prioritize?

Understand how the principle of "store together what is accessed together" is a game-changer for database performance and scalability.

When your application needs several pieces of data at once, the fastest approach is to read them from a single location in a single call. In a document database, developers can decide what is stored together, both logically and physically.

Fragmentation has never been beneficial for performance. In databases, the proximity of data — on disk, in memory or across the network — is crucial for scalability. Keeping related data together allows a single operation to fetch everything needed, reducing disk I/O, memory cache misses and network round-trips, thereby making performance more predictable.

The principle “store together what is accessed together” is central to modeling in document databases. Its purpose is to let developers control the physical storage layout, even with flexible data structures.

In contrast, SQL databases were designed for data independence — allowing users to interact with a logical model separate from the physical implementation managed by a database administrator.

Today, the trend is not to separate development and operations, allowing faster development cycles without the complexity of coordinating multiple teams or shared schemas. Avoiding the separation into logical and physical models further simplifies the process.

Understanding the core principle of data locality is essential today, especially as many databases emulate document databases or offer similar syntax on top of SQL. To qualify as a document database, it’s not enough to accept JSON documents with a developer-friendly syntax.

The database must also preserve those documents intact in storage so that accessing them has predictable performance. Whether they expose a relational or document API, it is essential to know if your objective is data independence or data locality.

Why Locality Still Matters in Modern Infrastructure

Modern hardware still suffers from penalties for scattered access. Hard disk drives (HDDs) highlighted the importance of locality because seek and rotational latency are more impactful than transfer speed, especially for online transaction processing (OLTP) workloads.

While solid state drives (SSDs) remove mechanical delays, random writes remain expensive, and cloud storage adds latency due to network access to storage. Even in-memory access isn’t immune: on multisocket servers, non-uniform memory access (NUMA) causes varying access times depending on where the data was loaded into memory by the first access, relative to the CPU core that processes it later.

Scale-out architecture further increases complexity. Vertical scaling — keeping all reads and writes on a single instance with shared disks and memory — has capacity limits. Large instances are expensive, and scaling them down or up often requires downtime, which is risky for always-on applications.

For example, you might need your maximum instance size for Black Friday but would have to scale up progressively in the lead-up, incurring downtime as usage increases. Without horizontal scalability, you end up provisioning well above your average load “just in case,” as in on-premises infrastructures sized years in advance for occasional peaks — something that can be prohibitively costly in the cloud.

Horizontal scaling allows adding or removing nodes without downtime. However, more nodes increase the likelihood of distributed queries, in which operations that once hit local memory must now traverse the network, introducing unpredictable latency. Data locality becomes critical with scale-out databases.

To create scalable database applications, developers should understand storage organization and prioritize single-document operations for performance-critical transactions. CRUD functions (insert, find, update, delete) targeting a single document in MongoDB are always handled by a single node, even in a sharded deployment. If that document isn’t in memory, it can be read from disk in a single I/O operation. Modifications are applied to the in-memory copy and written back as a single document during asynchronous checkpoints, avoiding on-disk fragmentation.

In MongoDB, the WiredTiger storage engine stores each document’s fields together in contiguous storage blocks, allowing developers to follow the principle “store together what is accessed together.” When queries avoid cross-document joins such as the $lookup operation, this design prevents internal scatter-gather operations, supporting predictable performance regardless of document size, update frequency or cluster scale.
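As a concrete sketch of a single-document operation, consider a hypothetical orders collection in which an order and its line items live in one document (the collection and field names below are illustrative, not from the article). Adding an item and adjusting the total is then one atomic update, routed to one node:

```javascript
// Hypothetical "orders" aggregate: the order and its line items are stored
// in a single document, so this change is one single-document update.
// These are the arguments you would pass to db.orders.updateOne(filter, update)
// in mongosh; here they are built as plain objects.
const filter = { _id: "order-1001" };
const update = {
  // append a line item to the embedded array...
  $push: { items: { sku: "A-42", qty: 2, unitPrice: 19.9 } },
  // ...and adjust the running total in the same atomic operation
  $inc: { total: 39.8 },
};
```

Because both operators apply to the same document, no multi-document transaction or cross-node coordination is needed.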

The Relational Promise: Physical Data Independence

For developers working with NoSQL databases, what I described above seems obvious: There is a single data model — the domain model — defined in the application, and the database stores exactly that model.

The MongoDB data modeling workshop defines a database schema as the physical model that describes how the data is organized in the database. In relational databases, the logical model is typically independent of the physical storage model, regardless of the data type used, because they serve different purposes.

SQL developers work with a relational model that is mapped to their object model via object relational mapping (ORM) tooling or hand-coded SQL joins. The models and schemas are normalized for generality, not necessarily optimized for specific application access patterns.

The goal of the relational model was to serve online interactive use by non-programmers and casual users by providing an abstraction that hides physical concerns. This includes avoiding data anomalies through normalization and enabling declarative query access without procedural code. Physical optimizations, like indexes, are considered implementation details. You will not find CREATE INDEX in the SQL standard.

In practice, a SQL query planner chooses access paths based on statistics. When writing JOIN clauses, the order of tables in the FROM clause should not matter. The SQL query planner reorders based on cost estimates. The database guarantees logical consistency, at least in theory, even with concurrent users and internal replication. The SQL approach is database-centric: rules, constraints and transactional guarantees are defined in the relational database, independent of specific use cases or table sizes.

Today, most relational databases sit behind applications. End users rarely interact with them directly, except in analytical or data science contexts. Applications can enforce data integrity and handle data anomalies, and developers understand data structures and algorithms. Nonetheless, relational database experts still advise keeping constraints, stored procedures, transactions, and joins within the database.

The physical storage remains abstracted — indexes, clustering, and partitions are administrator-level, not application-level, concepts, as if the application developers were like the non-programmer casual users described in the early papers about relational databases.

How Codd’s Rules Apply to SQL/JSON Documents

Because data locality matters, some relational databases have mechanisms to enforce it internally. For example, Oracle has long supported “clustered tables” for co-locating related rows from multiple tables, and more recently offers a choice for JSON storage as either Oracle’s native binary JSON format (OSON) or decomposed relational rows (JSON-relational duality views). However, those physical attributes are declared and deployed in the database using a specific data definition language (DDL) and are not exposed to the application developers. This reflects Codd’s “independence” rules:

  • Rule 8: Physical data independence
  • Rule 9: Logical data independence
  • Rule 10: Integrity independence
  • Rule 11: Distribution independence

Rules 8 and 11 relate directly to data locality: The user is not supposed to care whether data is physically together or distributed. The database is open to users who are unaware of the physical data model, access paths and algorithms. Developers do not know what is replicated, sharded or distributed across multiple data centers.

Where the SQL Abstraction Begins to Weaken

In practice, no relational database perfectly achieves these rules. Performance tuning often requires looking at execution plans and physical data layouts. Serializable isolation is rarely used due to scalability limitations of two-phase locking, leading developers to fall back to weaker isolation levels or to explicit locking (SELECT ... FOR UPDATE). Physical co-location mechanisms — hash clusters, attribute clustering — exist, but are difficult to size and maintain optimally without precise knowledge of access patterns. They often require regular data reorganization as updates can fragment it again.

The normalized model is inherently application-agnostic, so optimizing for locality often means breaking data independence (denormalizing, maintaining materialized views, accepting stale reads from replicas, disabling referential integrity). With sharding, constraints like foreign keys and unique indexes generally cannot be enforced across shards. Transactions must be carefully ordered to avoid long waits and deadlocks. Even with an abstraction layer, applications must be aware of the physical distribution for some operations.

The NoSQL Approach: Modeling for Access Patterns

As data volumes and latency expectations grow, a different paradigm has emerged: give developers complete control rather than an abstraction with some exceptions.

NoSQL databases adopt an application-first approach: The physical model matches the access patterns, and the responsibility for maintaining integrity and transactional scope is pushed to the application. Initially, many NoSQL stores delegated all responsibility, including consistency, to developers, acting as “dumb” key-value or document stores. Most lacked ACID (atomicity, consistency, isolation and durability) transactions or query planners. If secondary indexes were present, they needed to be queried explicitly.

This NoSQL approach was the opposite of the relational database world: Instead of one shared, normalized database, there were many purpose-built data stores per application. This reduces performance and scalability surprises, but at the price of more complexity.

MongoDB’s Middle Road for Flexible Schemas

MongoDB evolved by adding essential relational database capabilities — indexes, query planning, multidocument ACID transactions — while keeping the application-first document model. When you insert a document, it is stored as a single unit.

In WiredTiger, the MongoDB storage engine, BSON documents (binary JSON with additional datatypes and indexing capabilities) are stored in B-trees with variable-sized leaf pages, allowing large documents to remain contiguous, which differs from the fixed-size page structures used by many relational databases. This avoids splitting a business object across multiple blocks and ensures consistent latency for operations that appear to developers as a single operation.

Updates in MongoDB are applied in memory. Committing them as in-place changes on disk would fragment pages. Instead, WiredTiger uses reconciliation to write a complete new version at checkpoints — similar to copy-on-write filesystems, but with a flexible block size. This may cause write amplification, but preserves document locality. With appropriately sized instances, these writes occur in the background and do not affect in-memory write latency.

Locality defined in the application’s document schema flows all the way down to the storage layer, something that relational database engines typically cannot match given their goal of physical data independence.

How Data Locality Improves Application Performance

Designing for locality simplifies development and operations in several ways:

  • Transactions: A business change affecting a single aggregate (in the domain-driven design sense) becomes a single atomic read–modify–write on one document — no multiple roundtrips like BEGIN, SELECT ... FOR UPDATE, multiple updates and COMMIT.
  • Queries and indexing: Related data in one document avoids SQL joins and ORM lazy/eager mapping. A single compound index can cover filters and projections across fields that would otherwise be in separate tables, ensuring predictable plans without join-order uncertainty.
  • Development: The same domain model in the application is used directly as the database schema. Developers can reason about access patterns without mapping to a separate model, making latency and plan stability predictable.
  • Scalability: Most operations targeting a single aggregate, with shard keys chosen accordingly, can be routed to one node, avoiding scatter–gather fan-out for critical use cases.

MongoDB’s optimistic concurrency control avoids locks, though it requires retry logic on write-conflict errors. For single-document calls, retries are handled transparently by the database, which has a complete view of the transaction intent, making them simpler and faster.
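For multi-document transactions, that retry loop lives in application code. Here is a minimal plain-JavaScript sketch with a simulated write conflict; the error label matches MongoDB's "WriteConflict" code name, but the helper and workload are invented for illustration, not a driver API:

```javascript
// Retry a transaction callback when it fails with a write conflict.
// "WriteConflict" mirrors MongoDB's error code name; the rest is a
// hand-rolled simulation, not the official driver retry helper.
async function withRetry(txnFn, maxAttempts = 3) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await txnFn();
    } catch (err) {
      if (err.message !== "WriteConflict" || attempt >= maxAttempts) throw err;
      // brief backoff before retrying the whole transaction
      await new Promise((resolve) => setTimeout(resolve, 10 * attempt));
    }
  }
}

// Simulated workload: conflicts once, then commits.
let calls = 0;
async function txn() {
  calls++;
  if (calls === 1) throw new Error("WriteConflict");
  return "committed";
}
```

The point of the contrast: a single-document update needs none of this, because the server retries internally.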

Embedding vs. Referencing in Document Data Modeling

Locality doesn’t mean “embed everything.” It means: Embed what you consistently access together. Bounded one-to-many relationships (such as an order and its line items) are candidates for embedding. Rarely updated references and dimensions can also be duplicated and embedded. High-cardinality or unbounded-growth relationships, or independently updated entities, are better represented as separate documents and can be co-located via shard keys.
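To make the guideline concrete, here is a sketch of the two shapes with invented collection and field names (an assumed e-commerce domain, not from the article):

```javascript
// Embedded: a bounded one-to-many relationship (an order and its line items)
// that is read and written together, so it lives in one document.
const order = {
  _id: "order-1001",
  customerId: "cust-7",
  items: [
    { sku: "A-42", qty: 2 },
    { sku: "B-7", qty: 1 },
  ],
};

// Referenced: an unbounded, independently updated relationship (a product and
// its reviews). Each review is its own document, pointing at the product;
// a shard key on productId can co-locate a product's reviews on one node.
const review = {
  _id: "rev-1",
  productId: "prod-9",
  stars: 5,
  text: "Great",
};
```

Embedding keeps the order aggregate contiguous on disk; referencing keeps the unbounded review set from bloating the product document.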

MongoDB’s compound and multikey indexes support embedded fields, maintaining predictable, selective access without joins. Embedding within the same document is the only way to guarantee co-location at the block level. Multiple documents in a single collection are not stored close together, except for small documents inserted at the same time, as they might share the same block. In sharding, the shard key ensures co-location on the same node but not within the same block.

In MongoDB, locality is an explicit design choice in domain-driven design:

  • Identify aggregates that change and are read together.
  • Store them in one document when appropriate.
  • Use indexes aligned with access paths.
  • Choose shard keys so related operations route to one node.

What MongoDB Emulations Miss About Locality

Given the popularity of the document model, some cloud services offer MongoDB-like APIs on top of SQL databases. These systems may expose a MongoDB-like API while retaining a relational storage model, which typically does not maintain the same level of physical locality.

Relational databases store rows in fixed-size blocks (often 8 KB). Large documents must be split across multiple blocks. Here are some examples in popular SQL databases:

  • PostgreSQL JSONB: Stores JSON in heap tables and large documents in many chunks, using TOAST, the oversized attribute storage technique. The document is compressed and split into chunks stored in another table, accessed via an index. Reading a large document is like a nested loop join between the row and its TOAST table.
  • Oracle JSON-Relational Duality Views: Map JSON documents to relational tables, preserving data independence rather than physical locality. Elements accessed together may be scattered across blocks, requiring internal joins, multiple I/Os and possibly network calls in distributed setups.

In both scenarios, the documents are divided into either binary chunks or normalized tables. Although the API resembles MongoDB, it remains a SQL database that lacks data locality. Instead, it provides an abstraction that keeps the developer unaware of internal processes until they inspect the execution plan and understand the database internals.

Conclusion

“Store together what is accessed together” reflects realities across sharding, I/O patterns, transactions, and memory cache efficiency. Relational database engines abstract away physical layout, which works well for centralized, normalized databases serving multiple applications in a single monolithic server. At a larger scale, especially in elastic cloud environments, horizontal sharding is essential — and often incompatible with pure data independence. Developers must account for locality.

In SQL databases, this means denormalizing, duplicating reference data, and avoiding cross-shard constraints. The document model, when the database truly enforces locality down to storage, offers an alternative to this abstraction and its exceptions.

In MongoDB, locality can be explicitly defined at the application level while still providing indexing, query planning and transactional features. When assessing “MongoDB-compatible” systems on relational engines, it is helpful to determine whether the engine stores aggregates contiguously on disk and routes them to a single node by design. If not, the performance characteristics may differ from those of a document database that maintains physical locality.

Both approaches are valid. In database-first deployment, developers depend on in-database declarations to ensure performance, working alongside the database administrator and using tools like execution plans for troubleshooting. In contrast, application-first deployment shifts more responsibility to developers, who must validate both the application’s functionality and its performance.

November 22, 2025

Challenges compiling old C++ code on modern Linux

I often compile old versions of MySQL, MariaDB, Postgres and RocksDB in my search for performance regressions. Compiling is easy with Postgres as they do a great job at avoiding compilation warnings and I never encounter broken builds. Certainly the community gets the credit for this, but I suspect their task is easier because they use C.  This started as a LinkedIn post.

I expect people to disagree, and I am far from a C++ expert, but here goes ...

tl;dr - if you maintain widely used header files (widely used by C++ projects) consider not removing that include that you don't really need (like <cstdint>) because such removal is likely to break builds for older releases of projects that use your include.

I have more trouble compiling older releases of C++ projects. For MySQL I have a directory on GitHub that includes patches that must be applied. And for MySQL I have to patch all 5.6 versions, 5.7 versions up to 5.7.33 and 8.0 versions up to 8.0.23. The most common reason for the patch is missing C++ includes (like <cstdint>).

For RocksDB with gcc I don't have to patch files but I need to use gcc-11 for RocksDB 6.x and gcc-12 for RocksDB 7.x.

For RocksDB with clang I don't have to patch files for RocksDB 8.x, 9.x and 10.x while I do have to patch 6.x and 7.x. For RocksDB 7.10 I need to edit two files to add <cstdint>. The files are:

  • table/block_based/data_block_hash_index.h
  • util/string_util.h

All of this is true for Ubuntu 24.04 with clang 18.1.3 and gcc 13.3.0.

November 21, 2025

INNER JOIN and LEFT OUTER JOIN in MongoDB (with $lookup and $unwind)

We often explain $lookup as similar to a relational JOIN, but there’s a key difference in the returned output format. In SQL, a JOIN returns a flat, tabular result set, which means values from the “one” side of a one‑to‑many relationship are repeated for each matching row on the “many” side. In MongoDB, $lookup outputs related data as arrays embedded within each document, so the “one” side appears only once and related items are grouped together.

Here’s the funny part: relational databases were invented to avoid duplication through relational normalization — yet a JOIN immediately re‑introduces duplication in the query results, consumed as one stream of records. That’s why application code or ORMs often have to break those repeated values back into a nested structure. In MongoDB, when data is stored using references, $lookup can resolve those references and embed the related documents directly in the output, avoiding duplication and producing a structure that’s naturally more application‑friendly. MongoDB is a document database: either you embed on write to store related data in a single document, or you embed on read from multiple documents with references, using $lookup. The result is the same: an object aggregate in JSON, or rather BSON, that is fetched as-is by the drivers.

Still, MongoDB can provide the equivalent of an INNER JOIN or a LEFT OUTER JOIN, when adding an $unwind stage after the $lookup.

I'll demonstrate this with the traditional department and employee schema, using the names from the SCOTT example schema of Oracle database, but as MongoDB collections:

db.createCollection("dept");
db.dept.insertMany([
  { deptno: 10, dname: "ACCOUNTING", loc: "NEW YORK" },
  { deptno: 20, dname: "RESEARCH", loc: "DALLAS" },
  { deptno: 30, dname: "SALES", loc: "CHICAGO" },
  { deptno: 40, dname: "OPERATIONS", loc: "BOSTON" }
]);
db.createCollection("emp");
db.emp.insertMany([
  { empno: 7839, ename: "KING",  deptno: 10 },
  { empno: 7698, ename: "BLAKE", deptno: 30 },
  { empno: 7782, ename: "CLARK", deptno: 10 },
  { empno: 7566, ename: "JONES", deptno: 20 },
  { empno: 7788, ename: "SCOTT", deptno: 20 },
  { empno: 7902, ename: "FORD",  deptno: 20 },
  { empno: 7369, ename: "SMITH", deptno: 20 },
  { empno: 7499, ename: "ALLEN", deptno: 30 },
  { empno: 7521, ename: "WARD",  deptno: 30 },
  { empno: 7654, ename: "MARTIN",deptno: 30 },
  { empno: 7844, ename: "TURNER",deptno: 30 },
  { empno: 7876, ename: "ADAMS", deptno: 20 },
  { empno: 7900, ename: "JAMES", deptno: 30 },
  { empno: 7934, ename: "MILLER",deptno: 10 }
]);

Note that the "OPERATIONS" department has no employees at this point in time.

$lookup: join with no data duplication

In a SQL database, joining tables duplicates data to ensure each record has all necessary information for independent processing. When loading this data as objects rather than records, the application must detect where objects begin and end and eliminate duplicates—or rely on an ORM to do this automatically. Here is an example of a join to show the departments with their employees:

SQL> select dept.*,empno,ename 
     from dept left outer join emp on dept.deptno = emp.deptno
;

   DEPTNO DNAME         LOC            EMPNO ENAME
_________ _____________ ___________ ________ _________
       10 ACCOUNTING    NEW YORK       7,782 CLARK
       10 ACCOUNTING    NEW YORK       7,839 KING
       10 ACCOUNTING    NEW YORK       7,934 MILLER
       20 RESEARCH      DALLAS         7,369 SMITH
       20 RESEARCH      DALLAS         7,566 JONES
       20 RESEARCH      DALLAS         7,788 SCOTT
       20 RESEARCH      DALLAS         7,876 ADAMS
       20 RESEARCH      DALLAS         7,902 FORD
       30 SALES         CHICAGO        7,499 ALLEN
       30 SALES         CHICAGO        7,521 WARD
       30 SALES         CHICAGO        7,654 MARTIN
       30 SALES         CHICAGO        7,698 BLAKE
       30 SALES         CHICAGO        7,844 TURNER
       30 SALES         CHICAGO        7,900 JAMES

14 rows selected.

MongoDB provides a consistent document model across both application code and database storage, and returns the same join result without duplication:

db.dept.aggregate([
  { $lookup: {
      from: "emp",
      localField: "deptno",
      foreignField: "deptno",
      as: "employees"
    }
  }
])

[
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: [
      { _id: ObjectId('691f1623f4a22d6613d4b0f1'), empno: 7839, ename: 'KING', deptno: 10 },
      { _id: ObjectId('691f1623f4a22d6613d4b0f3'), empno: 7782, ename: 'CLARK', deptno: 10 },
      { _id: ObjectId('691f1623f4a22d6613d4b0fe'), empno: 7934, ename: 'MILLER', deptno: 10 }
    ]
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'),deptno: 20,dname: 'RESEARCH',loc: 'DALLAS',
    employees: [
      { _id: ObjectId('691f1623f4a22d6613d4b0f4'), empno: 7566, ename: 'JONES', deptno: 20 },
      { _id: ObjectId('691f1623f4a22d6613d4b0f5'), empno: 7788, ename: 'SCOTT', deptno: 20 },
      { _id: ObjectId('691f1623f4a22d6613d4b0f6'), empno: 7902, ename: 'FORD', deptno: 20 },
      { _id: ObjectId('691f1623f4a22d6613d4b0f7'), empno: 7369, ename: 'SMITH', deptno: 20 },
      { _id: ObjectId('691f1623f4a22d6613d4b0fc'), empno: 7876, ename: 'ADAMS', deptno: 20 }
    ]
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'),deptno: 30,dname: 'SALES',loc: 'CHICAGO',
    employees: [
      { _id: ObjectId('691f1623f4a22d6613d4b0f2'), empno: 7698, ename: 'BLAKE', deptno: 30 },
      { _id: ObjectId('691f1623f4a22d6613d4b0f8'), empno: 7499, ename: 'ALLEN', deptno: 30 },
      { _id: ObjectId('691f1623f4a22d6613d4b0f9'), empno: 7521, ename: 'WARD', deptno: 30 },
      { _id: ObjectId('691f1623f4a22d6613d4b0fa'), empno: 7654, ename: 'MARTIN', deptno: 30 },
      { _id: ObjectId('691f1623f4a22d6613d4b0fb'), empno: 7844, ename: 'TURNER', deptno: 30 },
      { _id: ObjectId('691f1623f4a22d6613d4b0fd'), empno: 7900, ename: 'JAMES', deptno: 30 }
    ]
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dd'), deptno: 40,dname: 'OPERATIONS', loc: 'BOSTON',
    employees: []
  }
]

The $lookup operation in MongoDB is similar to a LEFT OUTER JOIN except that it keeps the data in a structured schema, to be consumed by the application.

$lookup + $unwind : inner join (with duplication)

The result of $lookup is the easiest for the application, but if you need a record-like output to process further, $unwind can unnest the array into as many documents as there are array items:

db.dept.aggregate([
  { $lookup: {
      from: "emp",
      localField: "deptno",
      foreignField: "deptno",
      as: "employees"
    }
  },
  { $unwind: 
       "$employees" 
  }
])

[
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f1'), empno: 7839, ename: 'KING', deptno: 10 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f3'), empno: 7782, ename: 'CLARK', deptno: 10 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0fe'), empno: 7934, ename: 'MILLER', deptno: 10 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f4'), empno: 7566, ename: 'JONES', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f5'), empno: 7788, ename: 'SCOTT', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f6'), empno: 7902, ename: 'FORD', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f7'), empno: 7369, ename: 'SMITH', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0fc'), empno: 7876, ename: 'ADAMS', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'), deptno: 30, dname: 'SALES', loc: 'CHICAGO',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f2'), empno: 7698, ename: 'BLAKE', deptno: 30 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'), deptno: 30, dname: 'SALES', loc: 'CHICAGO',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f8'), empno: 7499, ename: 'ALLEN', deptno: 30 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'), deptno: 30, dname: 'SALES', loc: 'CHICAGO',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f9'), empno: 7521, ename: 'WARD', deptno: 30 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'), deptno: 30, dname: 'SALES', loc: 'CHICAGO',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0fa'), empno: 7654, ename: 'MARTIN', deptno: 30 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'), deptno: 30, dname: 'SALES', loc: 'CHICAGO',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0fb'), empno: 7844, ename: 'TURNER', deptno: 30 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'), deptno: 30, dname: 'SALES', loc: 'CHICAGO',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0fd'), empno: 7900, ename: 'JAMES', deptno: 30 }
  }
]

Internally, the implementation avoids materializing the array and immediately returns one document per match: the two stages, $lookup and $unwind, are optimized into a single lookup-unwind stage that acts like a SQL join.

Note that because the employees array of department 40 is empty, no document for it appears in the output. This is exactly like an INNER JOIN in SQL.
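The two stages can be simulated in a few lines of plain JavaScript — an in-memory sketch of their semantics on a cut-down version of the sample data, not the server implementation:

```javascript
// Minimal in-memory simulation of $lookup followed by $unwind.
const dept = [
  { deptno: 10, dname: "ACCOUNTING" },
  { deptno: 40, dname: "OPERATIONS" }, // no employees
];
const emp = [
  { empno: 7839, ename: "KING", deptno: 10 },
  { empno: 7782, ename: "CLARK", deptno: 10 },
];

// $lookup: embed matching employees as an array (left-outer-join shape) —
// OPERATIONS gets an empty array.
const lookedUp = dept.map((d) => ({
  ...d,
  employees: emp.filter((e) => e.deptno === d.deptno),
}));

// $unwind without preserveNullAndEmptyArrays: one output document per array
// item; parents with an empty array, like OPERATIONS, are dropped.
const unwound = lookedUp.flatMap((d) =>
  d.employees.map((e) => ({ ...d, employees: e }))
);
console.log(unwound.length); // 2 — department 40 is gone, like an INNER JOIN
```

Swapping the `flatMap` body to emit the parent with `employees: null` when the array is empty would model the preserveNullAndEmptyArrays variant instead.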

$lookup + $unwind with preserveNullAndEmptyArrays: left outer join

It is possible to preserve the LEFT OUTER JOIN output of $lookup through $unwind with preserveNullAndEmptyArrays: true:

db.dept.aggregate([
  { $lookup: {
      from: "emp",
      localField: "deptno",
      foreignField: "deptno",
      as: "employees"
    }
  },
  { $unwind: {
       path: "$employees" ,
       preserveNullAndEmptyArrays: true
     }
  }
])

[
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f1'), empno: 7839, ename: 'KING', deptno: 10 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f3'), empno: 7782, ename: 'CLARK', deptno: 10 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0fe'), empno: 7934, ename: 'MILLER', deptno: 10 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f4'), empno: 7566, ename: 'JONES', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f5'), empno: 7788, ename: 'SCOTT', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ... (truncated)
                                    

INNER JOIN and LEFT OUTER JOIN in MongoDB (with $lookup and $unwind)

We often explain $lookup as similar to a relational JOIN, but there’s a key difference in the returned output format. In SQL, a JOIN returns a flat, tabular result set, which means values from the “one” side of a one‑to‑many relationship are repeated for each matching row on the “many” side. In MongoDB, $lookup outputs related data as arrays embedded within each document, so the “one” side appears only once and related items are grouped together.

Here’s the funny part: Relational databases were invented to avoid duplication through relational normalization—yet a JOIN immediately reintroduces duplication in the query results, consumed as one stream of records. That’s why application code or ORMs often have to break those repeated values back into a nested structure. In MongoDB, when data is stored using references, $lookup can resolve those references and embed the related documents directly in the output, avoiding duplication and producing a structure that’s naturally more application‑friendly.

MongoDB is a document database. Either you embed at write time, storing related data in a single document, or you embed at read time, resolving references across multiple documents with $lookup. The result is the same: an object aggregate in JSON, or rather BSON, that is fetched as-is by the drivers.

Still, MongoDB can provide the equivalent of an INNER JOIN or a LEFT OUTER JOIN by adding an $unwind stage after the $lookup.

I'll demonstrate this with the traditional department and employee schema, using the names from the SCOTT example schema of the Oracle database, but as MongoDB collections:

db.createCollection("dept");
db.dept.insertMany([
  { deptno: 10, dname: "ACCOUNTING", loc: "NEW YORK" },
  { deptno: 20, dname: "RESEARCH", loc: "DALLAS" },
  { deptno: 30, dname: "SALES", loc: "CHICAGO" },
  { deptno: 40, dname: "OPERATIONS", loc: "BOSTON" }
]);
db.createCollection("emp");
db.emp.insertMany([
  { empno: 7839, ename: "KING",  deptno: 10 },
  { empno: 7698, ename: "BLAKE", deptno: 30 },
  { empno: 7782, ename: "CLARK", deptno: 10 },
  { empno: 7566, ename: "JONES", deptno: 20 },
  { empno: 7788, ename: "SCOTT", deptno: 20 },
  { empno: 7902, ename: "FORD",  deptno: 20 },
  { empno: 7369, ename: "SMITH", deptno: 20 },
  { empno: 7499, ename: "ALLEN", deptno: 30 },
  { empno: 7521, ename: "WARD",  deptno: 30 },
  { empno: 7654, ename: "MARTIN", deptno: 30 },
  { empno: 7844, ename: "TURNER", deptno: 30 },
  { empno: 7876, ename: "ADAMS", deptno: 20 },
  { empno: 7900, ename: "JAMES", deptno: 30 },
  { empno: 7934, ename: "MILLER", deptno: 10 }
]);

Note that the "OPERATIONS" department has no employees at this point in time.

$lookup: Join with no data duplication

In a SQL database, joining tables duplicates data to ensure each record has all necessary information for independent processing. When loading this data as objects rather than records, the application must detect where objects begin and end and eliminate duplicates—or rely on an ORM to do this automatically. Here is an example of a join to show the departments with their employees:

SQL> select dept.*,empno,ename 
     from dept left outer join emp on dept.deptno = emp.deptno
;

   DEPTNO DNAME         LOC            EMPNO ENAME
_________ _____________ ___________ ________ _________
       10 ACCOUNTING    NEW YORK       7,782 CLARK
       10 ACCOUNTING    NEW YORK       7,839 KING
       10 ACCOUNTING    NEW YORK       7,934 MILLER
       20 RESEARCH      DALLAS         7,369 SMITH
       20 RESEARCH      DALLAS         7,566 JONES
       20 RESEARCH      DALLAS         7,788 SCOTT
       20 RESEARCH      DALLAS         7,876 ADAMS
       20 RESEARCH      DALLAS         7,902 FORD
       30 SALES         CHICAGO        7,499 ALLEN
       30 SALES         CHICAGO        7,521 WARD
       30 SALES         CHICAGO        7,654 MARTIN
       30 SALES         CHICAGO        7,698 BLAKE
       30 SALES         CHICAGO        7,844 TURNER
       30 SALES         CHICAGO        7,900 JAMES
       40 OPERATIONS    BOSTON

15 rows selected.
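The regrouping that application code or an ORM must perform on such a flat result can be sketched in a few lines of plain JavaScript. This is a minimal sketch, with a hypothetical `rows` array standing in for the driver's result set:

```javascript
// Flat rows as a SQL driver might deliver the LEFT OUTER JOIN above
// (a hypothetical subset; NULLs become null for the unmatched department).
const rows = [
  { deptno: 10, dname: "ACCOUNTING", loc: "NEW YORK", empno: 7782, ename: "CLARK" },
  { deptno: 10, dname: "ACCOUNTING", loc: "NEW YORK", empno: 7839, ename: "KING" },
  { deptno: 40, dname: "OPERATIONS", loc: "BOSTON", empno: null, ename: null }
];

// Detect where each "one"-side object begins and ends, collapse the
// repeated department columns into one object per department, and
// collect the "many" side into an embedded array.
const byDept = new Map();
for (const { deptno, dname, loc, empno, ename } of rows) {
  if (!byDept.has(deptno)) {
    byDept.set(deptno, { deptno, dname, loc, employees: [] });
  }
  if (empno !== null) {            // skip the NULL row from the outer join
    byDept.get(deptno).employees.push({ empno, ename });
  }
}
const depts = [...byDept.values()];
```

This is exactly the nesting step that the MongoDB result below already provides for free.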

MongoDB provides a consistent document model across both application code and database storage, and returns the same join result without duplication:

db.dept.aggregate([
  { $lookup: {
      from: "emp",
      localField: "deptno",
      foreignField: "deptno",
      as: "employees"
    }
  }
])

[
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: [
      { _id: ObjectId('691f1623f4a22d6613d4b0f1'), empno: 7839, ename: 'KING', deptno: 10 },
      { _id: ObjectId('691f1623f4a22d6613d4b0f3'), empno: 7782, ename: 'CLARK', deptno: 10 },
      { _id: ObjectId('691f1623f4a22d6613d4b0fe'), empno: 7934, ename: 'MILLER', deptno: 10 }
    ]
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: [
      { _id: ObjectId('691f1623f4a22d6613d4b0f4'), empno: 7566, ename: 'JONES', deptno: 20 },
      { _id: ObjectId('691f1623f4a22d6613d4b0f5'), empno: 7788, ename: 'SCOTT', deptno: 20 },
      { _id: ObjectId('691f1623f4a22d6613d4b0f6'), empno: 7902, ename: 'FORD', deptno: 20 },
      { _id: ObjectId('691f1623f4a22d6613d4b0f7'), empno: 7369, ename: 'SMITH', deptno: 20 },
      { _id: ObjectId('691f1623f4a22d6613d4b0fc'), empno: 7876, ename: 'ADAMS', deptno: 20 }
    ]
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'), deptno: 30, dname: 'SALES', loc: 'CHICAGO',
    employees: [
      { _id: ObjectId('691f1623f4a22d6613d4b0f2'), empno: 7698, ename: 'BLAKE', deptno: 30 },
      { _id: ObjectId('691f1623f4a22d6613d4b0f8'), empno: 7499, ename: 'ALLEN', deptno: 30 },
      { _id: ObjectId('691f1623f4a22d6613d4b0f9'), empno: 7521, ename: 'WARD', deptno: 30 },
      { _id: ObjectId('691f1623f4a22d6613d4b0fa'), empno: 7654, ename: 'MARTIN', deptno: 30 },
      { _id: ObjectId('691f1623f4a22d6613d4b0fb'), empno: 7844, ename: 'TURNER', deptno: 30 },
      { _id: ObjectId('691f1623f4a22d6613d4b0fd'), empno: 7900, ename: 'JAMES', deptno: 30 }
    ]
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dd'), deptno: 40, dname: 'OPERATIONS', loc: 'BOSTON',
    employees: []
  }
]

The $lookup operation in MongoDB is similar to a LEFT OUTER JOIN, except that it returns the related data nested within each document, ready to be consumed as-is by the application.
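If INNER JOIN semantics are wanted while keeping the nested structure, one option (a sketch, not shown in the outputs of this post) is to follow the $lookup with a $match stage that filters out documents whose lookup array came back empty:

```javascript
db.dept.aggregate([
  { $lookup: {
      from: "emp",
      localField: "deptno",
      foreignField: "deptno",
      as: "employees"
    }
  },
  // keep only departments that matched at least one employee
  { $match: { employees: { $ne: [] } } }   // drops OPERATIONS (deptno 40)
])
```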

$lookup + $unwind: Inner join (with duplication)

The result of $lookup is the easiest for the application to consume, but if you need a record-like output for further processing, $unwind can unnest the array, emitting one document per array element:

db.dept.aggregate([
  { $lookup: {
      from: "emp",
      localField: "deptno",
      foreignField: "deptno",
      as: "employees"
    }
  },
  { $unwind: "$employees" }
])

[
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f1'), empno: 7839, ename: 'KING', deptno: 10 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f3'), empno: 7782, ename: 'CLARK', deptno: 10 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0fe'), empno: 7934, ename: 'MILLER', deptno: 10 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f4'), empno: 7566, ename: 'JONES', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f5'), empno: 7788, ename: 'SCOTT', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f6'), empno: 7902, ename: 'FORD', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f7'), empno: 7369, ename: 'SMITH', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0fc'), empno: 7876, ename: 'ADAMS', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'), deptno: 30, dname: 'SALES', loc: 'CHICAGO',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f2'), empno: 7698, ename: 'BLAKE', deptno: 30 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'), deptno: 30, dname: 'SALES', loc: 'CHICAGO',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f8'), empno: 7499, ename: 'ALLEN', deptno: 30 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'), deptno: 30, dname: 'SALES', loc: 'CHICAGO',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f9'), empno: 7521, ename: 'WARD', deptno: 30 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'), deptno: 30, dname: 'SALES', loc: 'CHICAGO',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0fa'), empno: 7654, ename: 'MARTIN', deptno: 30 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'), deptno: 30, dname: 'SALES', loc: 'CHICAGO',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0fb'), empno: 7844, ename: 'TURNER', deptno: 30 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0dc'), deptno: 30, dname: 'SALES', loc: 'CHICAGO',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0fd'), empno: 7900, ename: 'JAMES', deptno: 30 }
  }
]
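The unwound documents above still nest each employee under an employees sub-document. If a fully flat, SQL-like record is wanted instead, a $project stage can lift those fields to the top level (a sketch along the lines of the SQL output shown earlier):

```javascript
db.dept.aggregate([
  { $lookup: {
      from: "emp",
      localField: "deptno",
      foreignField: "deptno",
      as: "employees"
    }
  },
  { $unwind: "$employees" },
  { $project: {
      _id: 0,
      deptno: 1, dname: 1, loc: 1,
      // lift the employee fields out of the sub-document
      empno: "$employees.empno",
      ename: "$employees.ename"
    }
  }
])
```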

Internally, the implementation avoids materializing the array: the $lookup and $unwind stages are coalesced into a single lookup-unwind stage that immediately emits one document per match, acting like a SQL join.

Note that because the employees array for department 40 is empty, no document for it appears in the output. This is exactly like an INNER JOIN in SQL.
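To verify the coalescing on a given deployment, the pipeline can be explained from mongosh. The exact shape of the explain output varies by server version and execution engine, but the optimized plan typically shows the $unwind absorbed into the $lookup stage rather than listed as a separate stage:

```javascript
// Explain the two-stage pipeline; in the optimized plan the $unwind
// is folded into the $lookup stage instead of appearing on its own.
db.dept.explain().aggregate([
  { $lookup: {
      from: "emp",
      localField: "deptno",
      foreignField: "deptno",
      as: "employees"
    }
  },
  { $unwind: "$employees" }
])
```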

$lookup + $unwind preserving empty arrays: Left outer join

It is possible to preserve the LEFT OUTER JOIN semantics of $lookup through $unwind by setting preserveNullAndEmptyArrays: true:

db.dept.aggregate([
  { $lookup: {
      from: "emp",
      localField: "deptno",
      foreignField: "deptno",
      as: "employees"
    }
  },
  { $unwind: {
      path: "$employees",
      preserveNullAndEmptyArrays: true
    }
  }
])

[
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f1'), empno: 7839, ename: 'KING', deptno: 10 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f3'), empno: 7782, ename: 'CLARK', deptno: 10 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0da'), deptno: 10, dname: 'ACCOUNTING', loc: 'NEW YORK',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0fe'), empno: 7934, ename: 'MILLER', deptno: 10 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f4'), empno: 7566, ename: 'JONES', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: 'RESEARCH', loc: 'DALLAS',
    employees: { _id: ObjectId('691f1623f4a22d6613d4b0f5'), empno: 7788, ename: 'SCOTT', deptno: 20 }
  },
  {
    _id: ObjectId('691ef750f4a22d6613d4b0db'), deptno: 20, dname: ... (truncated)

November 20, 2025

Performing Standby Datacentre Promotions of a Patroni Cluster

In a standard multi-node Postgres replication cluster managed by Patroni, all failovers are executed automatically. However, this is not the case for inter-datacentre failovers, for instance when a standby datacentre must take over from a failed primary. The following describes the mechanisms required to perform such a procedure when the case arises. […]

November 19, 2025

Data Retention Policy Implementation – How and Why

Is your PostgreSQL database feeling sluggish? Are SQL statements taking more time than in earlier days? Are you experiencing performance cliffs (unexpected, sudden drops in performance)? Are backups taking a long time to complete? Are you getting a bigger bill for storage? Are standby rebuilds and development refreshes becoming a herculean task? Are the auditors […]

November 18, 2025

Introducing the GA Release of the New Percona Operator for MySQL: More Replication Options on Kubernetes

The Percona Cloud Native team is happy to announce the general availability of the Percona Operator for MySQL, based on Percona Server for MySQL. This release introduces an additional Kubernetes-native approach to deploying and managing MySQL clusters with synchronous Group Replication, delivering the consistency required for organizations with business continuity needs. With this release, Percona […]