a curated list of database news from authoritative sources

April 25, 2026

Anti-Join in MongoDB

SQL databases implement anti-joins—finding rows with no match—through NOT EXISTS. Although anti-join is not a distinct relational algebra operator, it can be derived from set difference and semi-join. A naive implementation that scans both sets to find the complement would not scale, so databases use short-circuit evaluation: the inner scan stops as soon as a match is found or ruled out, because only existence matters. That is why a subquery without any JOIN keyword can still appear as an Anti Join in the execution plan.
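To make the short-circuit idea concrete, here is a minimal in-memory sketch in JavaScript (an illustration of the evaluation strategy only, not how any database engine implements it). `Array.prototype.some()` stops at the first match, which is exactly the existence check an anti-join needs:

```javascript
// Anti-join sketch: users with no paid transaction.
// some() short-circuits at the first match, so the inner "scan"
// stops as soon as existence is established.
function antiJoin(users, transactions) {
  return users.filter(u =>
    !transactions.some(t => t.user_id === u.id && t.type === "paid")
  );
}

const users = [{ id: 1 }, { id: 2 }, { id: 3 }];
const transactions = [
  { user_id: 1, type: "paid" },
  { user_id: 2, type: "free" }
];
console.log(antiJoin(users, transactions).map(u => u.id)); // [2, 3]
```

User 1 is excluded at the first paid transaction found; users 2 and 3 survive because no match exists.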

Here is an example in PostgreSQL, where we find users who haven't made any paid transactions:


postgres=# EXPLAIN ANALYZE
SELECT u.*
FROM users u
WHERE NOT EXISTS (
    SELECT 1
    FROM transactions t
    WHERE t.user_id = u.id
      AND t.type = 'paid'
);
                                                        QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
 Nested Loop Anti Join (actual time=17.087..17.087 rows=0.00 loops=1)
   Buffers: shared hit=30095
   ->  Seq Scan on users u (actual time=0.007..0.628 rows=10000.00 loops=1)
         Buffers: shared hit=94
   ->  Index Only Scan using idx_transactions_user_type on transactions t (actual time=0.001..0.001 rows=1.00 loops=10000)
         Index Cond: ((user_id = u.id) AND (type = 'paid'::text))
         Heap Fetches: 0
         Index Searches: 10000
         Buffers: shared hit=30001
 Planning:
   Buffers: shared hit=14
 Planning Time: 0.200 ms
 Execution Time: 17.109 ms
(13 rows)

How does this translate to MongoDB, where there is no NOT EXISTS syntax?

In MongoDB, there are two equivalents to SQL joins:

  • The left outer join materialized at write time, where you embed the inner set as an array nested into the outer document. This is best when the cardinality is known and bounded, the entities are queried together as a single aggregate, and they share the same consistency boundaries within an atomic transaction. When you query the document, it is already joined, and you can $unwind the array to get the equivalent of a left outer join.
  • The $lookup operator, which finds matching documents, optionally through an aggregation pipeline, and adds them as an array that can be used as if it were embedded, or unwound with $unwind.
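As a minimal sketch of the embedded case (plain JavaScript with hypothetical field names, not the server itself), this is essentially what $unwind does to an embedded array: one output document per array element, with the rest of the document duplicated:

```javascript
// Simulate $unwind: emit one document per element of the named
// array field, copying the other fields of the parent document.
function unwind(doc, field) {
  return (doc[field] || []).map(item => ({ ...doc, [field]: item }));
}

const user = {
  _id: 1,
  name: "user_1",
  transactions: [
    { type: "paid", amount: 10 },
    { type: "free", amount: 0 }
  ]
};

console.log(unwind(user, "transactions").length); // 2
```

Each output row pairs the user fields with a single transaction, which is the shape a left outer join would produce.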

With reference (join on read)

With $lookup, an anti-join is possible and performant by defining the lookup pipeline with a $limit and a $project stage that checks for existence only.

Here is an example with a collection of ten thousand users and a collection of transactions:

db.users.drop();
db.transactions.drop();

// Insert 10000 users
const users = [];
for (let i = 0; i < 10000; i++) {
    users.push({
        _id: ObjectId(),
        name: `user_${i}`,
        email: `user_${i}@example.com`,
        created_at: new Date()
    });
}
db.users.insertMany(users);

// Insert between 0 and 100 transactions per user
// ~80% are "paid", ~20% are "free"
const transactions = [];
users.forEach(user => {
    const numTransactions = Math.floor(Math.random() * 101); // 0 to 100
    for (let j = 0; j < numTransactions; j++) {
        transactions.push({
            user_id: user._id,
            type: Math.random() < 0.8 ? "paid" : "free",
            amount: Math.round(Math.random() * 10000) / 100,
            date: new Date(
                Date.now() - Math.floor(Math.random() * 365 * 24 * 60 * 60 * 1000)
            )
        });
    }
});

// Insert in batches (insertMany has a limit)
const BATCH_SIZE = 10000;
for (let i = 0; i < transactions.length; i += BATCH_SIZE) {
    db.transactions.insertMany(transactions.slice(i, i + BATCH_SIZE));
}

print(`Users created: ${db.users.countDocuments()}`);
print(`Users with transactions: ${db.transactions.distinct("user_id").length}`);
print(`Users with paid transactions: ${db.transactions.distinct("user_id", { type: "paid" }).length}`);
print(`Transactions created: ${db.transactions.countDocuments()}`);
print(`Paid transactions: ${db.transactions.countDocuments({ type: "paid" })}`);
print(`Free transactions: ${db.transactions.countDocuments({ type: "free" })}`);

This generated random data for 10,000 users, of whom 105 have no transactions at all, and 24 have only free transactions.

test> print(`Users created: ${db.users.countDocuments()}`);
Users created: 10000

test> print(`Users with transactions: ${db.transactions.distinct("user_id").length}`);
Users with transactions: 9895

test> print(`Users with paid transactions: ${db.transactions.distinct("user_id", { type: "paid" }).length}`);
Users with paid transactions: 9871

test> print(`Transactions created: ${db.transactions.countDocuments()}`);
Transactions created: 498812

test> print(`Paid transactions: ${db.transactions.countDocuments({ type: "paid" })}`);
Paid transactions: 399612

test> print(`Free transactions: ${db.transactions.countDocuments({ type: "free" })}`);
Free transactions: 99200

In this sample dataset, there are 129 users with no paid transactions (either no transactions at all or only free transactions).

As the lookup will have to find transactions by user and type, I create the following index:


db.transactions.createIndex({ user_id: 1, type: 1 });

Here is the aggregation pipeline that returns the users with no paid transactions:


db.users.aggregate([
    {
        $lookup: {
            from: "transactions",
            localField: "_id",
            foreignField: "user_id",
            pipeline: [
                { $match: { type: "paid" } },
                { $limit: 1 },                  // ✅ Stop as soon as we find ONE
                { $project: { _id: 1 } }        // ✅ Return minimal data
            ],
            as: "paid_transactions"
        }
    },
    {
        $match: {
            paid_transactions: { $eq: [] }      // Keep only users with NO match
        }
    },
    {
        $project: {
            paid_transactions: 0                // Clean up the temporary field
        }
    }
]).explain("executionStats").stages

The $lookup stage examined only one document per user who has a paid transaction, 9,871 in total, thanks to the { '$limit': 1 }:

{
    '$lookup': {
      from: 'transactions',
      as: 'paid_transactions',
      localField: '_id',
      foreignField: 'user_id',
      let: {},
      pipeline: [
        { '$match': { type: 'paid' } },
        { '$limit': 1 },
        { '$project': { _id: 1 } }
      ]
    },
    totalDocsExamined: Long('9871'),
    totalKeysExamined: Long('9871'),
    collectionScans: Long('0'),
    indexesUsed: [ 'user_id_1_type_1' ],
    nReturned: Long('10000'),
    executionTimeMillisEstimate: Long('1096')
  },

It returned one document for each of the 10,000 users, with an empty array when there was no match, so the $match stage kept only 129 documents and discarded the other 9,871:

  {
    '$match': { paid_transactions: { '$eq': [] } },
    nReturned: Long('129'),
    executionTimeMillisEstimate: Long('1065')
  },

This (1065ms) is notably slower than the PostgreSQL anti-join (17ms). The explanation is that $lookup executes a separate subpipeline query per document, which has high per-document overhead compared to a native join operator. In MongoDB, when a query joins from thousands of documents, you should consider embedding.

With embedding (join on write)

If you prefer to store the transactions embedded within each user document, the query can be simpler and faster.

I update users to embed the transactions:

db.users.aggregate([  
    {  
        $lookup: {  
            from: "transactions",  
            localField: "_id",  
            foreignField: "user_id",  
            as: "transactions"  
        }  
    },  
    {  
        $merge: {  
            into: "users",  
            whenMatched: "merge"  
        }  
    }  
]);  

The query to find users with no paid transactions is simply:

db.users.find(
    { "transactions.type": { $ne: "paid" } },
    { "transactions": 0 }
).explain("executionStats");

This scans the users collection to return the 129 users without paid transactions:

    inputStage: {
      stage: 'COLLSCAN',
      filter: { 'transactions.type': { '$not': { '$eq': 'paid' } } },
      nReturned: 129,
      executionTimeMillisEstimate: 20,
      works: 10001,
      advanced: 129,
      needTime: 9871,
      needYield: 0,
      saveState: 1,
      restoreState: 1,
      isEOF: 1,
      direction: 'forward',
      docsExamined: 10000
    }
  }

This is not efficient because it has to read every large document, with all its embedded transactions, and discard most of them later.

A multikey index on "transactions.type" does not help much either. Because it indexes every element in the array, a user with both "paid" and "free" transactions has entries under both values. The $ne: "paid" scan retrieves all users who have any non-paid entry — which is nearly all of them — and then the FETCH stage must read each full document to verify that none of their transactions are paid:

      inputStage: {
        stage: 'IXSCAN',
        nReturned: 9587,
        executionTimeMillisEstimate: 10,
        works: 9589,
        advanced: 9587,
        needTime: 1,
        needYield: 0,
        saveState: 2,
        restoreState: 2,
        isEOF: 1,
        keyPattern: { 'transactions.type': 1 },
        indexName: 'transactions.type_1',
        isMultiKey: true,
        multiKeyPaths: { 'transactions.type': [ 'transactions' ] },
        isUnique: false,
        isSparse: false,
        isPartial: false,
        indexVersion: 2,
        direction: 'forward',
        indexBounds: {
          'transactions.type': [ '[MinKey, "paid")', '("paid", MaxKey]' ]
        },
        keysExamined: 9588,
        seeks: 2,
        dupsTested: 9587,
        dupsDropped: 0
      }

but it still fetches users who have some paid transactions in addition to free ones, and those must be discarded by the filter later:

      stage: 'FETCH',
      filter: { 'transactions.type': { '$not': { '$eq': 'paid' } } },
      nReturned: 129,
      executionTimeMillisEstimate: 36,

In general, indexes do not help with anti-joins because they index existing values, not the absence of values. In MongoDB with embedded documents, you can work around this by adding a computed flag — such as "has_paid": false — that is maintained when transactions are inserted or removed, and indexing that flag. Since the flag and the transactions live in the same document, this adds negligible write overhead while making the anti-join a simple indexed equality match.
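Here is a sketch of that write-path maintenance in plain JavaScript over an in-memory document (the has_paid field name and the helper functions are my own illustration; in MongoDB the same logic would live in the update that pushes or pulls a transaction):

```javascript
// Maintain a computed "has_paid" flag next to the embedded array,
// so the anti-join becomes an indexed equality match on the flag.
function addTransaction(user, tx) {
  user.transactions.push(tx);
  if (tx.type === "paid") user.has_paid = true;
  return user;
}

function removeTransaction(user, index) {
  user.transactions.splice(index, 1);
  // Recompute on removal: the flag may flip back to false.
  user.has_paid = user.transactions.some(t => t.type === "paid");
  return user;
}

const user = { _id: 1, transactions: [], has_paid: false };
addTransaction(user, { type: "free", amount: 0 });
console.log(user.has_paid); // false
addTransaction(user, { type: "paid", amount: 5 });
console.log(user.has_paid); // true
removeTransaction(user, 1); // remove the paid transaction
console.log(user.has_paid); // false
```

With the flag in place, the anti-join reduces to db.users.find({ has_paid: false }) backed by an index on { has_paid: 1 }.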

BugBash'26 Morning of Day 1

Continuing with notes from the BugBash talks. Yes, all of this goodness, including Will Wilson's keynote, happened before lunch on the first day.


Where all the ladders start

Peter Alvaro, Associate Professor of Computer Science @ UC Santa Cruz

In this talk, Peter reflects on his 20 years of distributed systems work. The cover image is Don Quixote (which is Peter) attacking the windmill (robust distributed systems) with a spear (some singular solution, often borrowed from databases).

The first attack was through the use of arcane algebras. This is a purist approach of getting it right the first time. This was during Peter's PhD at UC Berkeley, where Neil Conway was also a peer and collaborator.

By his own admission, this was incited by a naive framing of what makes distributed systems difficult. The target was uncertainty regarding order and timing, which causes distributed consistency problems and requires coordination. But distributed coordination comes at the cost of latency, performance variability, and decreased availability. Peter said they were influenced by James Hamilton (LADIS'08 talk) and Pat Helland's work on avoiding coordination.

Peter then talked about two similar-sounding problems, deadlock detection versus garbage collection, in a partitioned, replicated distributed system. While these have similar formulations around strongly connected components, the first is a monotonic problem and the second is not! When new information arrives, the first can only add conclusions, while the second may have to retract them.

They formulated the CALM theorem to capture how monotonicity composes up to properties of whole programs. This sounds grand and very promising, but it did not come close to reaching the target: solving the robust distributed systems problem.

The first problem is that disorder is not the whole story! The partial failure problem, rather than just binary failures, throws a wrench in the works. The second problem is that nobody wants to buy declarative languages.

The next attack was through lineage-driven fault injection. This goes with the premise that fault-tolerance is achieved through redundancy. It was a practical and useful approach, and it found real bugs in real systems at Netflix, eBay, and Uber. But it also relied on sufficient observability and whitebox instrumentation/testing. A second limitation was that redundancy is not always a good thing, as it comes with costs, both performance and monetary.

The other attack was through simulation, to address metastability problems. This was joint work with Rupak and Rebecca at AWS. Peter talked about CTMCs, attractors, and how these relate to metastability: the bad place, and the recovery to the good place. He said this is beautiful work, but it does not work! The limitation is that the simulation needs to be very finely calibrated, and this can be very hard.

Recently, he has been working on yet another attack, Descartes: deterministic testing with Rupak Majumdar. The idea here is to feel out safety margins and do stability testing.

Is this Sisyphus at work? No, this doesn't feel like punishment. This is fun. And we are making progress with each attack. Yes, there is no silver bullet, but that won't stop us from prodding and searching for unifying approaches.

Peter was also good at making people laugh, with inside jokes about the "postmortem party" and mentions of unhelpful, misguided suggestions offered at postmortems. We have a long way to go as a discipline.


From dams to data: how to think about infrastructure

Deb Chachra, Professor of Engineering @ Olin College

Here are some out-of-context quotes to summarize the talk. Piecing them together is an exercise left to the reader.

"Technology is the active human interface with the material world." "Infrastructure is relational, it is based on relationships." "Infrastructure has a trajectory." "The gifts of nature are for the public." "Niagara Falls: 1903 triscuit, baked by electricity, Sir Henry Pellet the financier." "Robert R. Moses, boo! Utilitarianist ethics." "We are going to build a system: a small number of people will be harmed, but a lot of people will benefit." "Unfortunately, we are spectacularly bad at that decision making." "Renewable energy is incredibly abundant: Earth is not a closed system for energy." "Renewables are inherently decentralized and distributed." "Infrastructure becomes a political right." "If you can't read in a world where 80% cannot, you are probably fine. But if you can't read in a world where 80% can... you are not fine." 

Yes, this was not a software talk. Yes, there are many technology and AI parallels here, but the speaker did not go into these.


Lightning Talks 

What 20 years of kernel bugs taught us about finding the next one

Jenny Qu, AI Researcher @ Pebblebed

30% of kernel bugs hide for 5+ years. These share the same pattern: implicit control and data paths. There is a shared channel, assumed sequencing, and no enforcement. In other words, mostly concurrency bugs, which are hard to reproduce. LLMs are going to make this a lot worse! (Maybe the understatement of the year.) Every data channel is also a channel that can reach the control path. There is no control path anymore. What does Jepsen look like for LLM-generated systems?


Formal verification in the web dev workflow

Fernanda Graciolli, Co-Founder @ Midspiral

This talk was about proofs. Thanks to AI, we can do it, and because of AI, we must do it! Claude can write a Dafny proof, now what? She talked about a Dafny-to-React-components workflow. Pure logic is proved before compiling. Logic lives in specs: iterate on specs, not code. spec.yaml: structured spec.


Old Tom Bombadil is a merry fuzzer!

Oskar Wickström, Senior Software Engineer @ Antithesis

This talk described property-based testing for web apps using TypeScript/JavaScript. It described the Bombadil project from Antithesis, and demoed it.

BugBash'26 Keynote

I attended BugBash 2026 over these last two days, and had a blast. Here are my notes from the first keynote. I will try to find time to publish my notes from the other talks in the coming days.


Keynote: We won, what now?

Will Wilson, Co-founder & CEO @ Antithesis

The Antithesis team opened with a great animation/teaser clip, then Will took the stage. Here is the summary of his talk. 

This is not a software testing conference. This is about building reliable software by any means: testing, observability, formal methods, people/culture, better languages. He showed a meme of the fantastic five or something, using their rings to invoke a giant warrior.

Time to acknowledge the elephant in the room.  A new contender emerges: AI!!

We are now using a fundamentally unreliable system (AI) to make the systems we develop reliable. And it is working, somehow?! There is a vibe quality to it. When the cost of software generation goes down drastically, you do a whole lot more of it, as per the Jevons paradox.

At this point Will starts talking about a hypothetical band, Quaternion Dysfunction, and its noncommutative album. This is a niche band; following them early on makes us feel very special, part of a small in-group.

Now imagine that this niche band suddenly becomes freakishly popular. You feel many things. First you feel a great validation. But more people start following the band, and you lose the in-group identity. You need to get a new personality. But who knows, maybe you can cash in on it, as an expert or talent lead.

Other copycat bands enter the scene, say the Helvetica scenario. And now there are also many fake fans: people who follow these bands just because they are popular. This happens a lot in the real world, and in the technology world as well. Jon Evans wrote a TechCrunch article on this in 2015: Beware the pretty people. Lawyers, financiers, business people. He overstates the effects; Silicon Valley getting popular was mostly good, bringing diversity with these other people arriving. But yes, scammers and bad actors also come along.

Here is a comparison of niche fields versus popular fields.

  • elite   vs.  energizing 
  • elitist  vs. lots of BS  
  • cozy  vs. innovative 
  • defiant vs. ridiculous

The transition between these two worlds can be traumatic. And when all is said and done, Michael Lewis will come and write a book about it. (Will's zinger, not mine!)

In case you still haven't caught on to the analogy: that band is "software correctness": formal methods, property-based testing, observability. I.e., the people/community, including BugBash.

As Will put it bluntly, you belong to a cult. The vast majority of engineers don't care about correctness much. They are not bad people; they are doing this because of other pressures/priorities. This is a fact of life: most people don't care about correctness much.

Well, that is until something strange happened, which made people care about correctness! Check the Google Trends for property-based testing. It shot up from zero to millions in 2025-26. Same for formal methods.



Everybody just started caring about this because of AI. But how has AI caused this to happen? The conventional story is that AI agents don't write correct software. But take this story at face value: you mean software is written by unreliable agents? Always has been (meme)! People have been writing bad software for decades, and nobody batted an eye about verification before. So why do they all of a sudden care about verification now?

Amdahl's law is behind it. Focus on the part that is slow, the bottleneck, for improvement.
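Amdahl's law can be stated as speedup = 1 / ((1 - p) + p / s), where p is the fraction of the work being accelerated and s is the factor by which it is accelerated. A quick sketch of why AI-speed coding makes testing the bottleneck:

```javascript
// Amdahl's law: overall speedup when a fraction p of the work
// is accelerated by a factor s.
const amdahl = (p, s) => 1 / ((1 - p) + p / s);

// If coding was 50% of the time and AI makes it nearly free,
// the best overall speedup is 2x: testing now dominates.
console.log(amdahl(0.5, 1e9).toFixed(2)); // "2.00"

// If testing is already 99% of the time, accelerating the
// remaining 1% of coding barely helps at all.
console.log(amdahl(0.01, 1e9).toFixed(2)); // "1.01"
```

That is the managers' 50%-to-99% shift in miniature: once code generation is cheap, the only bottleneck left to attack is correctness work.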

Previously, when Will told managers that 50% of their team's time was spent on testing, they didn't believe him. Now they correct him and tell him it is 99%. The implication of AI plus Amdahl's law is that correctness is now what matters. Needless to say, thanks to the AI wave, business is booming for Antithesis.

Amdahl's law is a nice angle to look at this. But I think there is another reason, which Steve Klabnik mentioned on day 2. Previously, no matter how buggy it was, you had written your software, understood it, and tried it. Using AI breaks all three: now you don't have a way to validate the software without formal methods, property-based testing, etc.

Then Will went on to set up the roadmap and expectations for the software reliability folks. 

This feels like the Eternal September (1993/1994), when the unwashed masses started onboarding to the internet. Forums got flooded and the norms changed. The in-crowd protested, but it was for the good; it was an overall positive. We should keep this retrospective view in mind.

What about the payoffs? Will showed the Rembrandt painting titled "Parable of the Workers in the Vineyard", which depicts the Bible story of vineyard workers getting paid at the end of the day, where the workers who joined in the last couple of hours were paid the same as the ones who toiled all day. The parable is interpreted to mean that even those who are converted late in life earn rewards equal to those converted early, and that people who convert early in life need not feel jealous of the later converts.

Will reiterated: Don't feel resentful. This is what winning looks like. Other people coming and co-opting your thing is actually what winning looks like. It is okay to win! Your position will get demolished/bastardized, but the world will have moved slightly towards your position. This is the transition from defiant to ridiculous in the above table.

(My aside: For one, I am tired of winning! Too much winning going on on all fronts recently. I feel like the word "winning" is getting devalued. Also, I personally do not agree with the parable's lesson. Even monkeys have this injustice instinct built in. Don't go philosophizing over me.)

Anyway, Will's takeaway message is this. The masses are coming. It is our community's time to shine. Software reliability tools had been for the elite, but that is changing. It is time to teach others.

Teach others?! On day two, Steve Klabnik also reiterated this message: it is time for others to learn from this community. But neither elaborated on how this teaching/learning will take place, and I remain skeptical. Yes, I do blog about this stuff, and enthusiasts and people in the know follow along and say they benefit and learn. But I am skeptical about how this would scale. Learning is an active process; it requires active participation and effort on the learner's side. Some educators even claim there is no teaching, there is only learning. I am worried people will follow easy non-solution trends, like, I don't know, HOPE: Heuristic Oversight of Probabilistically-correct Execution. Or, I don't know, AGILE: Assert Goodness, Iterate Later, Eventually. The braindead solutions always get more popular. Thinking is hard, and the human brain is optimized to be lazy.

Let me talk about the talk mechanics to wrap this up. Overall, this was a good show, in the best sense of the word. The delivery looked effortless, but it is clear Will put a lot of work into this presentation to make it this smooth. He had so many zingers and in-jokes. The band analogy is wonderful. The Rembrandt painting story is really memorable. These set the stage well and help people manage expectations for the roadmap. This is a technical talk, presented as a nontechnical talk.

It was very entertaining, as well as informative and thought-provoking. Will's liberal arts background comes through clearly. The clever use of memes was also a pattern shared among the best presenters at the conference. For a conference like this, the point is to score laughs, and to entertain as much as teach.

April 24, 2026

Achieving High Availability with Valkey Sentinel

In the previous guide, a robust Primary-Replica topology for Valkey was established. Read scaling is now active, and a hot copy of the data is securely stored on a second node. But there is a catch. If a primary node crashes, the replica will remain faithful and wait for instructions. It will not automatically take … Continued

The post Achieving High Availability with Valkey Sentinel appeared first on Percona.

April 23, 2026

Innovation From Every Corner: Inside Percona’s Build with AI Competition

At Percona, we’re passionate about open source database software, helping organizations of all sizes run, manage, and optimize their databases with the freedom and transparency that open source provides. That spirit of openness doesn’t stop at our products, it runs through everything we do, including how we encourage our own people to innovate. We recently … Continued

The post Innovation From Every Corner: Inside Percona’s Build with AI Competition appeared first on Percona.

Scaling Your Cache: A Step-by-Step Guide to Setting Up Valkey Replication

In the recent open-source data landscape, Valkey has emerged as a prominent player. Born as a Linux Foundation-backed, fully open-source fork of Redis (following Redis’s recent licensing changes), Valkey serves as a high-performance, in-memory key-value data store. Whether Valkey is deployed as a primary database, an ephemeral cache, or a rapid message broker, a single … Continued

The post Scaling Your Cache: A Step-by-Step Guide to Setting Up Valkey Replication appeared first on Percona.

April 22, 2026

Percona Live 2026 is Back in the Bay Area — Here’s Why You Don’t Want to Miss It

We’re thrilled to welcome the open source database community back in person for Percona Live 2026, taking place May 27–29 in the Bay Area. After the energy of past events, there’s nothing like being together again — swapping war stories over coffee, sketching architectures on napkins, and learning from the people building and running databases … Continued

The post Percona Live 2026 is Back in the Bay Area — Here’s Why You Don’t Want to Miss It appeared first on Percona.

Supabase is now ISO 27001 certified

Supabase is certified to ISO/IEC 27001:2022. The certificate covers our information security management system across the entire platform.

April 21, 2026

Impacts of updates in open-source databases

We recently looked at how various open-source database engines maintain their secondary indexes (in a previous analysis) and found significant differences.  The maintenance of indexes is not the only aspect where storage engines differ, another significant difference is how they handle simple row updates.  These updates highlight how these open-source databases organize data and manage … Continued

The post Impacts of updates in open-source databases appeared first on Percona.

Ring’s Billion-Scale Semantic Video Search with Amazon RDS for PostgreSQL and pgvector

In this post, we share Ring's billion-scale semantic video search on Amazon RDS for PostgreSQL with pgvector: architectural decisions versus alternatives, cost-performance-scale challenges, key lessons, and future directions. The Ring team designed their vector search architecture for global scale, supporting millions of customers with vector embeddings, the key technology for numerical representations of visual content generated by an AI model. By converting video frames into vectors (arrays of numbers that capture what's happening in each frame), Ring can store these representations in a database and search them using similarity search. When you type "package delivery," the system converts that text into a vector and finds the video frames whose vectors are most similar, delivering relevant results in under 2 seconds.

Percona Operator for MySQL 1.1.0: PITR, Incremental Backups, and Compression

The latest release of the Percona Operator for MySQL, 1.1.0, is here. It brings point-in-time recovery, incremental backups, zstd backup compression, configurable asynchronous replication retries, and a set of stability fixes. This post walks through the highlights and how they help your MySQL deployments on Kubernetes.   Percona Operator for MySQL 1.1.0 Running stateful databases … Continued

The post Percona Operator for MySQL 1.1.0: PITR, Incremental Backups, and Compression appeared first on Percona.

PostgreSQL Performance: Is Your Query Slow or Just Long-Running?

Introduction: Recently I was having a conversation with a DB Enthusiast, and he mentioned that when he was a fresher, he tuned an ETL/reporting query that was running for 8-10 hours via a nightly job by 1/3rd. He went to his manager, saying that he reduced the query execution time, thinking that the manager would … Continued

The post PostgreSQL Performance: Is Your Query Slow or Just Long-Running? appeared first on Percona.

Approaches to tenancy in Postgres

There are many ways to slice a Postgres database for multi-tenant applications. Let's look at the three most common approaches and the trade-offs.

April 20, 2026

Aurora Serverless: Faster performance, enhanced scaling, and still scales down to zero

Amazon Aurora Serverless is an on-demand, auto scaling configuration for Aurora that scales up to support your most demanding workloads and down to zero when you don’t need it. The latest improvements deliver up to 30% better performance and enhanced scaling that understands your workload. These enhancements are available at no additional cost for a better price-performance ratio. In this post, we’ll share recent performance and scaling improvements with benchmark results, showing how Aurora Serverless can now scale up to 45.0% faster with a 32.9% faster workload completion time.

Deploying Cross-Site Replication in Percona Operator for MySQL (PXC)

Having a separate DR cluster for production databases is a modern-day requirement or necessity for tech and other related businesses that rely heavily on their database systems. Setting up such a [DC -> DR] topology for Percona XtraDB Cluster (PXC), which is a virtually-synchronous cluster, can be a bit challenging in a complex … Continued

The post Deploying Cross-Site Replication in Percona Operator for MySQL (PXC) appeared first on Percona.

April 18, 2026

Mutable BSON and Oracle OSON

AskTom Live is a great source of information from Oracle developer advocates and product managers, but I recently came across a clickbait marketing title ("Not All Binary Protocols Are Created Equal: The Science Behind OSON's 529x Performance Advantage") which compares apples to oranges, and it's an opportunity to explain what BSON is, the binary JSON format used by MongoDB.

TL;DR: If you want to compare with OSON, the Oracle Database datatype for JSON, you should compare the Mutable BSON Document, which is the structure that MongoDB uses to access documents, reading and updating individual fields. Raw BSON is closer to protobuf: a compact serialization format for disk or network transfer, with access metadata removed and no blocks or headers.

I left the following comment on the YouTube video, but it seems it is not publicly visible, so here it is.

Let me explain how Oracle Database and MongoDB handle disk-based data access, and you will understand the different design purposes of OSON and BSON, and why you are not testing the right thing to compare them.

Oracle Database, like many traditional databases, uses the same format on disk (blocks) and in memory (buffers), and must persist all the transient metadata that helps access the data in memory. This applies to table blocks (which contain a table directory, a row directory, and even lock flags, ITLs, that need to be cleaned up later), and the same idea was used for OSON (header, dictionary, sorted field IDs, offset arrays). Think of it as a mini database with its own catalog, just as the Oracle database has its dictionary and segment headers, which map physical extents and blocks. Then accessing the on-disk OSON structure directly makes sense: it is designed to be used through buffers that match the disk blocks.

But MongoDB with WiredTiger uses a smarter cache where the in-memory structures are optimized for RAM: adding pointers instead of disk offsets, building an Elements Vector for O(1) field access, and adding skiplists to navigate fields, all when data is loaded into the database cache. So there are two formats: the mutable BSON that the database actually works on in memory for query processing and updates, and the on-disk raw BSON that, on purpose, strips any unnecessary metadata and compresses it, to maximize the OS filesystem cache usage, and fits to the major advantage of MongoDB for documents: read/write a document in a single I/O.

The raw BSON is a serialization format for disk and network, not meant to be accessed partially, because MongoDB has a powerful mutable BSON format in memory with O(1) access through its Elements Vector indexing. The O(n) sequential scan, the "no partial updates" limitation, and the field position penalties you describe are properties of the serialization format, not of how MongoDB actually processes queries. And by definition, a serialization format is read sequentially, even though BSON lets you jump between fields. Don't access it directly except when you need a full document. Use the MongoDB server and drivers to access BSON, and learn how to use it correctly.
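The difference between the two access patterns can be sketched abstractly in JavaScript (this is only an illustration of sequential-scan versus build-once-then-lookup; it does not depict MongoDB's actual internal structures):

```javascript
// Two access patterns over the same document:
// 1) sequential scan of a serialized field list (O(n) per lookup),
//    like reading a raw serialization format directly;
// 2) a field map built once on load (O(1) per lookup afterwards),
//    like deserializing into a mutable in-memory form.
const fields = [["_id", 1], ["name", "user_1"], ["status", "active"]];

function scanLookup(fields, key) {
  for (const [k, v] of fields) if (k === key) return v;
  return undefined;
}

function buildFieldMap(fields) {
  return new Map(fields); // paid once, when the document is loaded
}

const doc = buildFieldMap(fields);
console.log(scanLookup(fields, "status")); // "active"
console.log(doc.get("status"));            // "active"
```

Both return the same value; the point is where the cost is paid: per access in the first case, once at load time in the second.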

With this understanding, you can see that the "529x performance" clickbait title comes from a mistake: you used raw BSON to access individual fields, bypassing everything MongoDB does when serving a query. It would be like using BBED to query Oracle Datafiles without going through the instance — no buffer cache, no row directory navigation, no dictionary lookups — and then concluding that Oracle's storage format is slow.

Notably, the original OSON VLDB paper (Liu et al., 2020) by Zhen Hua Liu doesn't make the claims this video does. That paper honestly compares OSON against Oracle's own JSON text storage, not against MongoDB's query processing. It compares encoding sizes with BSON, which is legitimate for a serialization format comparison (though it overlooks that BSON in MongoDB is compressed on disk and over the network). The paper authors understood they were comparing serialization formats and storage approaches within Oracle, not benchmarking MongoDB's actual runtime performance. I believe OSON is the optimal format for Oracle because it was integrated into the existing instance, cache, and securefiles, which were created a long time ago. Conversely, BSON is ideal for MongoDB, as it capitalizes on the document database's purpose and the WiredTiger architecture.