a curated list of database news from authoritative sources

February 28, 2025

Our K-Drama Journey

Finding a show that suits everyone in our family has always been a challenge. Intense action/stress is too much for our youngest daughter, and we prefer to keep things PG-13.

We’ve finally found a great solution: Korean dramas. We’ve been watching them back to back recently, and they’ve been a hit across all ages.

Hometown Cha-Cha-Cha (2021)  This was our gateway into K-dramas. Since it’s dubbed in English, it was easy to start. Set in a small seaside town, it’s a wholesome, family-friendly show with both comedic moments and touching scenes. It also gave us a glimpse into everyday Korean life. It's a solid feel-good watch.

Queen of Tears (2024)  We picked this next because it was also dubbed. This one was a commitment (16 episodes, each 1.5 hours long), but it was worth it. It had more romance, more drama, better cinematography, and a stellar cast. The intense drama in some scenes had our youngest bawling. We got so hooked that we ended up binge-watching the last six episodes over a weekend.

Start-Up (2020)  This is our first subtitled K-drama, so now we’re listening to the original Korean audio. Another love triangle with a childhood connection, a decent plot, and comedic moments. Again, sixteen 1.5-hour episodes. One of the main actors was also in Hometown Cha-Cha-Cha, so we’re starting to recognize actors.


Observations on K-Dramas

Recurring Themes: Early love, childhood connections, and fated relationships. The main couple is always "meant to be."

Female Leads Have Erratic Behavior: Is it just me, or do Korean female leads suddenly become very clingy and irrational in romance? They expect grand gestures and act capriciously. Is this cultural, or just a K-drama trope?

Male Leads Are Idealized: The male protagonists are always near-perfect—handsome, successful, and incredibly capable. Doesn't this set unrealistic expectations for real-life relationships?

Drinking Culture: Drinking seems to be a big part of Korean life in these shows. Whenever characters are stressed or sad, they get completely wasted. And this is portrayed as funny, rather than problematic. 

Seoul National University: In two of the shows (maybe all three), the male lead graduated from Seoul National University. It must be a big thing.

Hard Work and Cow Dung: For some reason these get featured back to back in K-dramas, the latter as comic relief, I guess.

Quirky eating: The more dramatically the male lead slurps his noodles, the more the female lead seems drawn to him. Is this an actual thing?

The Korean Language & Culture: Korean sounds familiar at times, possibly due to its distant connection to Turkish. Korean also sounds like the ideal language for ranting. Finally, the honorifics system is fascinating.

Better Than Turkish Dramas: We never clicked with Turkish dramas: too many long stares, very strained plots, and huge plot holes. K-dramas have better pacing and more engaging storylines.


Coda

Watching K-dramas has been a great experience. They spark discussions, keep us engaged in predicting plot twists, and help the kids develop storytelling and observational skills. Watching these shows also makes us want to visit Korea.

Got any recommendations for what we should watch next? Of course, my son and I watched Squid Game together, but that’s definitely not family-friendly. These more traditional K-dramas have been a much better fit.

February 27, 2025

Order-preserving or-expansion in MongoDB and $in:[,,] treated as a range skip scan, or exploded to an equality for the ESR rule

I concluded the previous post with a question. We have explored how queries with range and equality filters behave differently based on the order of fields in the index key. Now, let’s examine queries that use the $in[] operator.
This operator can be viewed either as an equality filter that matches multiple values or as a range filter that skip-scans across multiple ranges. In the previous post, we saw that MongoDB can use an index scan for both.

I use the same collection as in the previous post:

mdb> db.demoesr.drop();

mdb> db.demoesr.insertMany([
 { e:42, s:"a", r:10 },
 { e:42, s:"b", r:20 },
 { e:42, s:"b", r:10 },
 { e:42, s:"d", r:30 },
 { e:99,        r:11 }
]);

I will query one value of "e" for various values of "r" and sort by "s". Rather than creating an Equality, Sort, Range (ESR) index, I consider the Range condition far more selective, and therefore more important, than the sort, so I will create an index in Equality, Range, Sort (ERS) order:

test> db.demoesr.createIndex(
      { e: 1 , r: 1, s : 1 }
);
e_1_r_1_s_1

This post aims to determine whether $in[] functions like an Equality filter with multiple values or like a Range filter that skips some ranges.
I run the following query, where e:{ $eq: 42 } and r:{ $in:[...] } with a list of more than 200 values:

test> db.demoesr.find(
      { e:{ $eq: 42 }, r:{ $in:[
201, 200, 199, 198, 197, 196, 195, 194, 193, 192, 191, 190, 189, 188, 187, 186, 185, 184, 183, 182, 181, 180, 179, 178, 177, 176, 175, 174, 173, 172, 171, 170, 169, 168, 167, 166, 165, 164, 163, 162, 161, 160, 159, 158, 157, 156, 155, 154, 153, 152, 151, 150, 149, 148, 147, 146, 145, 144, 143, 142, 141, 140, 139, 138, 137, 136, 135, 134, 133, 132, 131, 130, 129, 128, 127, 126, 125, 124, 123, 122, 121, 120, 119, 118, 117, 116, 115, 114, 113, 112, 111, 110, 109, 108, 107, 106, 105, 104, 103, 102, 101, 100, 99, 98, 97, 96, 95, 94, 93, 92, 91, 90, 89, 88, 87, 86, 85, 84, 83, 82, 81, 80, 79, 78, 77, 76, 75, 74, 73, 72, 71, 70, 69, 68, 67, 66, 65, 64, 63, 62, 61, 60, 59, 58, 57, 56, 55, 54, 53, 52, 51, 50, 49, 48, 47, 46, 45, 44, 43, 42, 41, 40, 39, 38, 37, 36, 35, 34, 33, 32, 31, 30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1
      ] } } , 
      { e:1,s:1,r:1,"_id":0 }
      ).sort({ s: 1 }).explain("executionStats").executionStats
;

{
  executionSuccess: true,
  nReturned: 4,
  totalKeysExamined: 5,
  totalDocsExamined: 0,
  executionStages: {
    stage: 'SORT',
    nReturned: 4,
    sortPattern: { s: 1 },
    inputStage: {
      stage: 'PROJECTION_COVERED',
      nReturned: 4,
      transformBy: { e: 1, s: 1, r: 1, _id: 0 },
      inputStage: {
        stage: 'IXSCAN',
        nReturned: 4,
        keyPattern: { e: 1, r: 1, s: 1 },
        indexName: 'e_1_r_1_s_1',
        isMultiKey: false,
        multiKeyPaths: { e: [], r: [], s: [] },
        direction: 'forward',
        indexBounds: {
          e: [ '[42, 42]' ],
          r: [
            '[1, 1]',   '[2, 2]',   '[3, 3]',   '[4, 4]',   '[5, 5]',
            '[6, 6]',   '[7, 7]',   '[8, 8]',   '[9, 9]',   '[10, 10]',
            '[11, 11]', '[12, 12]', '[13, 13]', '[14, 14]', '[15, 15]',
            '[16, 16]', '[17, 17]', '[18, 18]', '[19, 19]', '[20, 20]',
            '[21, 21]', '[22, 22]', '[23, 23]', '[24, 24]', '[25, 25]',
            '[26, 26]', '[27, 27]', '[28, 28]', '[29, 29]', '[30, 30]',
            '[31, 31]', '[32, 32]', '[33, 33]', '[34, 34]', '[35, 35]',
            '[36, 36]', '[37, 37]', '[38, 38]', '[39, 39]', '[40, 40]',
            '[41, 41]', '[42, 42]', '[43, 43]', '[44, 44]', '[45, 45]',
            '[46, 46]', '[47, 47]', '[48, 48]', '[49, 49]', '[50, 50]',
            '[51, 51]', '[52, 52]', '[53, 53]', '[54, 54]', '[55, 55]',
            '[56, 56]', '[57, 57]', '[58, 58]', '[59, 59]', '[60, 60]',
            '[61, 61]', '[62, 62]', '[63, 63]', '[64, 64]', '[65, 65]',
            '[66, 66]', '[67, 67]', '[68, 68]', '[69, 69]', '[70, 70]',
            '[71, 71]', '[72, 72]', '[73, 73]', '[74, 74]', '[75, 75]',
            '[76, 76]', '[77, 77]', '[78, 78]', '[79, 79]', '[80, 80]',
            '[81, 81]', '[82, 82]', '[83, 83]', '[84, 84]', '[85, 85]',
            '[86, 86]', '[87, 87]', '[88, 88]', '[89, 89]', '[90, 90]',
            '[91, 91]', '[92, 92]', '[93, 93]', '[94, 94]', '[95, 95]',
            '[96, 96]', '[97, 97]', '[98, 98]', '[99, 99]', '[100, 100]',
            ... 101 more items
          ],
          s: [ '[MinKey, MaxKey]' ]
        },
        keysExamined: 5,
        seeks: 1
      }
    }
  }
}

This is the best way to filter the documents for retrieval, utilizing multiple ranges with a skip scan as discussed in the previous post. However, because the index does not follow the ESR rule, it cannot return the entries in the expected order, necessitating an additional SORT stage.
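As a minimal illustration (hypothetical data and plain Python, not MongoDB internals), here is why a scan over an (e, r, s) compound key cannot return "s" in order:

```python
# Hypothetical index entries, stored in (e, r, s) compound-key order.
entries = sorted([
    (42, 10, "b"),
    (42, 20, "a"),
    (42, 30, "d"),
    (99, 11, "c"),
])

# Equality on e, $in on r: the skip scan visits the matching r ranges.
wanted_r = {10, 20, 30}
scanned = [k for k in entries if k[0] == 42 and k[1] in wanted_r]

# Keys come back ordered by (r, s), so "s" is not globally sorted:
s_values = [s for (_e, _r, s) in scanned]
print(s_values)          # ['b', 'a', 'd'] -- a blocking SORT stage is needed
print(sorted(s_values))  # ['a', 'b', 'd']
```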

Sort Merge over an OR Expansion

I used more than 200 values to demonstrate the general behavior of the $in[] operator within the IXSCAN bounds. MongoDB applies a further optimization when the list contains 200 values or fewer.

Here is the execution plan with a smaller list:

test> db.demoesr.find(
      { e:{ $eq: 42 }, r:{ $in:[10,30] } } , 
      { e:1,s:1,r:1,"_id":0 }
      ).sort({ s: 1 }).explain("executionStats").executionStats
;

{
  executionSuccess: true,
  nReturned: 3,
  totalKeysExamined: 3,
  totalDocsExamined: 0,
  executionStages: {
    stage: 'PROJECTION_DEFAULT',
    nReturned: 3,
    transformBy: { e: 1, s: 1, r: 1, _id: 0 },
    inputStage: {
      stage: 'SORT_MERGE',
      nReturned: 3,
      sortPattern: { s: 1 },
      inputStages: [
        {
          stage: 'IXSCAN',
          nReturned: 2,
          keyPattern: { e: 1, r: 1, s: 1 },
          indexName: 'e_1_r_1_s_1',
          isMultiKey: false,
          multiKeyPaths: { e: [], r: [], s: [] },
          direction: 'forward',
          indexBounds: {
            e: [ '[42, 42]' ],
            r: [ '[10, 10]' ],
            s: [ '[MinKey, MaxKey]' ]
          },
          keysExamined: 2,
          seeks: 1
        },
        {
          stage: 'IXSCAN',
          nReturned: 1,
          keyPattern: { e: 1, r: 1, s: 1 },
          indexName: 'e_1_r_1_s_1',
          isMultiKey: false,
          multiKeyPaths: { e: [], r: [], s: [] },
          indexBounds: {
            e: [ '[42, 42]' ],
            r: [ '[30, 30]' ],
            s: [ '[MinKey, MaxKey]' ]
          },
          keysExamined: 1,
          seeks: 1
        }
      ]
    }
  }
}

When processing 200 or fewer values, MongoDB treats each value as its own equality filter, with a separate IXSCAN per value. If the next field in the index key is used to sort the result, each index scan returns its keys in the correct sort order. MongoDB then reads from these sorted segments and merges them while preserving the order, performing a SORT_MERGE that bypasses a blocking sort operation.
Since each IXSCAN treats its $in[] value as an Equality instead of a Range, my index effectively becomes Equality, Equality, Sort rather than Equality, Range, Sort, and it follows the Equality, Sort, Range (ESR) rule.
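The merge step can be sketched in a few lines of Python (an illustration of the idea, not MongoDB code): each per-value scan yields keys already sorted by "s", and heapq.merge combines them without a blocking sort:

```python
import heapq

# Hypothetical output of the two IXSCANs above, as (s, r) pairs for e=42;
# each stream is already sorted by s because s follows r in the index key.
scan_r10 = [("a", 10), ("b", 10)]
scan_r30 = [("d", 30)]

# SORT_MERGE: consume the sorted streams while preserving the order on s.
merged = list(heapq.merge(scan_r10, scan_r30))
print(merged)  # [('a', 10), ('b', 10), ('d', 30)]
```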

I've rarely seen such intelligent optimization in a database before. Oracle Database can convert a list into a concatenation (UNION ALL) using a technique known as OR Expansion, yet it lacks a merge sort on top of it. Conversely, PostgreSQL can perform a merge sort over a concatenated UNION but does not support the OR Expansion transformation to obtain it transparently. YugabyteDB employs a similar method for batching nested loop joins, pushing down the join condition as an array.
The internal name of this feature in the MongoDB query planner is "Explode for Sort": a combination of index scan explosion and merge sort. It is especially advantageous for pagination, when the limit is pushed down to each index scan.
It can be triggered by $or:{} as well as $in:[]. For example, the following query reads one key from each index scan:

test> db.demoesr.find(
 {
  e: { $eq: 42 },
  $or: [ { r: 10 } , { r: 30 } ]
 }).sort( { s: 1 } ).limit(1);

{
  nReturned: 1,
  totalKeysExamined: 2,
  totalDocsExamined: 0,
  executionStages: {
    stage: 'LIMIT',
    nReturned: 1,
    limitAmount: 1,
    inputStage: {
      stage: 'PROJECTION_DEFAULT',
      nReturned: 1,
      transformBy: { e: 1, s: 1, r: 1, _id: 0 },
      inputStage: {
        stage: 'SORT_MERGE',
        nReturned: 1,
        sortPattern: { s: 1 },
        inputStages: [
          {
            stage: 'IXSCAN',
            nReturned: 1,
            keysExamined: 1,
            seeks: 1
          },
          {
            stage: 'IXSCAN',
            nReturned: 1,
            keysExamined: 1,
            seeks: 1
          }
        ]
      }
    }
  }
}

This feature is also documented in the ESR rule additional considerations: https://www.mongodb.com/docs/manual/tutorial/equality-sort-range-rule/#additional-considerations
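A rough Python model of the limit pushdown (assumed behavior inferred from the explain output, not MongoDB internals): the exploded scans are lazy, so the merge pulls only one key per scan when .limit(1) is applied:

```python
import heapq
from itertools import islice

def ixscan(keys):
    # Stands in for a lazy index scan yielding keys sorted by s.
    yield from keys

scan_r10 = ixscan([("a", 10), ("b", 10)])
scan_r30 = ixscan([("d", 30)])

# limit(1) pushed through the merge: heapq.merge pulls one key from each
# stream to seed its heap, then yields the smallest -- 2 keys examined,
# 1 returned, matching totalKeysExamined: 2 and nReturned: 1 above.
first = list(islice(heapq.merge(scan_r10, scan_r30), 1))
print(first)  # [('a', 10)]
```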

Skip Scan in MongoDB with Range, Equality (RE) index as an alternative to Equality, Sort, Range (ESR)

To achieve the ideal index for each query while adhering to the Equality, Sort, Range (ESR) rule, you may end up with too many indexes. Instead of creating the best index for every query, the most effective indexing strategy is a small set of indexes that addresses the performance and scalability requirements.
Indexes that start with a field not used for filtering can still be beneficial if that field has a limited number of values: the scan iterates through the leading field's values and, within each one, seeks to the range defined on the following field, skipping the rest. We will illustrate this using a similar collection as in the previous post:

mdb> db.demoesr.drop();

mdb> db.demoesr.insertMany([
 { e:42, s:"a", r:10 },
 { e:42, s:"b", r:20 },
 { e:42, s:"b", r:10 },
 { e:42, s:"d", r:30 },
 { e:99,        r:11 }  // add one to skip
]);

My query finds the document with e = 42 and r > 10:

test> db.demoesr.find(
      { r:{$gt:10} , e:{$eq: 42} } , 
      {e:1,s:1,r:1,"_id":0 }
      )
;
[
  { e: 42, s: 'b', r: 20 },
  { e: 42, s: 'd', r: 30 }
]

Without an index, this reads all documents with a COLLSCAN:

db.demoesr.find(
      { r:{$gt:10} , e:{$eq: 42} } , 
      {e:1,s:1,r:1,"_id":0 }
      ).explain("executionStats").executionStats
;

{
  executionSuccess: true,
  nReturned: 2,
  totalKeysExamined: 0,
  totalDocsExamined: 5,
  executionStages: {
    stage: 'PROJECTION_SIMPLE',
    nReturned: 2,
    transformBy: { e: 1, s: 1, r: 1, _id: 0 },
    inputStage: {
      stage: 'COLLSCAN',
      filter: {
        '$and': [ { e: { '$eq': 42 } }, { r: { '$gt': 10 } } ]
      },
      nReturned: 2,
      direction: 'forward',
      docsExamined: 5
    }
  }
}

Although my demo collection is tiny, the execution plan highlights a scalability challenge: it examines all documents but returns only a subset, which is inefficient.

ESR index (Equality, Sort, Range)

With the index created in the previous post, my query uses an index scan:

test> db.demoesr.createIndex(
      { e: 1 , s: 1, r : 1 }
);
e_1_s_1_r_1

test> db.demoesr.find(
      { r:{$gt:10} , e:{$eq: 42} } , 
      {e:1,s:1,r:1,"_id":0 }
      ).explain("executionStats").executionStats
;

{
  executionSuccess: true,
  nReturned: 2,
  totalKeysExamined: 5,
  totalDocsExamined: 0,
  executionStages: {
    stage: 'PROJECTION_COVERED',
    nReturned: 2,
    transformBy: { e: 1, s: 1, r: 1, _id: 0 },
    inputStage: {
      stage: 'IXSCAN',
      nReturned: 2,
      keyPattern: { e: 1, s: 1, r: 1 },
      indexName: 'e_1_s_1_r_1',
      isMultiKey: false,
      multiKeyPaths: { e: [], s: [], r: [] },
      direction: 'forward',
      indexBounds: {
        e: [ '[42, 42]' ],
        s: [ '[MinKey, MaxKey]' ],
        r: [ '(10, inf.0]' ]
      },
      keysExamined: 5,
      seeks: 3
    }
  }
}


This plan is acceptable if the equality part (e: [ '[42, 42]' ]) is highly selective. Since the subsequent bound on "s" is not filtering, all of its values were read. However, a skip scan was used: at the end of each range for the following bound (r: [ '(10, inf.0]' ]), the scan seeks to the next value of "s".
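A rough Python model of this skip scan (an illustration under simplifying assumptions, not MongoDB's algorithm): the scan seeks once per distinct (e, s) prefix and applies the r bound within each one:

```python
from itertools import groupby

# Index entries of e_1_s_1_r_1 for the demo documents with e=42,
# in compound-key order:
index = [
    (42, "a", 10),
    (42, "b", 10),
    (42, "b", 20),
    (42, "d", 30),
]

seeks, returned = 0, []
for (e, s), group in groupby(index, key=lambda k: (k[0], k[1])):
    seeks += 1  # jump to the start of the r > 10 bound in this prefix
    returned.extend(k for k in group if k[2] > 10)

print(seeks)     # 3 -- one per distinct "s" value, as in the explain output
print(returned)  # [(42, 'b', 20), (42, 'd', 30)]
```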

ER index (Equality, Range)

Since I don't do any sort and read all values for "s", having an index on the two fields used for filtering is preferable:

test> db.demoesr.createIndex(
      { e: 1 , r : 1 }
);
e_1_r_1

test> db.demoesr.find(
      { r:{$gt:10} , e:{$eq: 42} } , 
      {e:1,s:1,r:1,"_id":0 }
      ).explain("executionStats").executionStats
;
{
  executionSuccess: true,
  nReturned: 2,
  executionTimeMillis: 3,
  totalKeysExamined: 2,
  totalDocsExamined: 2,
  executionStages: {
    stage: 'PROJECTION_SIMPLE',
    nReturned: 2,
    executionTimeMillisEstimate: 0,
    works: 4,
    advanced: 2,
    needTime: 0,
    needYield: 0,
    saveState: 0,
    restoreState: 0,
    isEOF: 1,
    transformBy: { e: 1, s: 1, r: 1, _id: 0 },
    inputStage: {
      stage: 'FETCH',
      nReturned: 2,
      inputStage: {
        stage: 'IXSCAN',
        nReturned: 2,
        keyPattern: { e: 1, r: 1 },
        indexName: 'e_1_r_1',
        isMultiKey: false,
        multiKeyPaths: { e: [], r: [] },
        direction: 'forward',
        indexBounds: { e: [ '[42, 42]' ], r: [ '(10, inf.0]' ] },
        keysExamined: 2,
        seeks: 1
      }
    }
  }
}

This decreased the number of seeks, since no values were skipped. It reads a single range (e: [ '[42, 42]' ], r: [ '(10, inf.0]' ]) encompassing all the keys for the returned documents. You have the optimal index for your filter when seeks: 1 and keysExamined equals nReturned.

RE index (Range, Equality)

Now, imagine that a similar index was created for another critical query with the fields in a different order, because that use case has an equality predicate on "r" and reads many values of "e". I don't want to create another index for my query if I don't need to, so let's remove the indexes created above and check the execution plan with an index starting with "r":

test> db.demoesr.dropIndexes("*");
{
  nIndexesWas: 3,
  msg: 'non-_id indexes dropped for collection',
  ok: 1
}

test> db.demoesr.createIndex(
      { r: 1, e : 1 }
);

test> db.demoesr.find(
      { r:{$gt:10} , e:{$eq: 42} } , 
      {e:1,s:1,r:1,"_id":0 }
      ).explain("executionStats").executionStats
;

{
  executionSuccess: true,
  nReturned: 2,
  totalKeysExamined: 3,
  totalDocsExamined: 2,
  executionStages: {
    stage: 'PROJECTION_SIMPLE',
    nReturned: 2,
    transformBy: { e: 1, s: 1, r: 1, _id: 0 },
    inputStage: {
      stage: 'FETCH',
      nReturned: 2,
      docsExamined: 2,
      alreadyHasObj: 0,
      inputStage: {
        stage: 'IXSCAN',
        nReturned: 2,
        keyPattern: { r: 1, e: 1 },
        indexName: 'r_1_e_1',
        isMultiKey: false,
        multiKeyPaths: { r: [], e: [] },
        direction: 'forward',
        indexBounds: { r: [ '(10, inf.0]' ], e: [ '[42, 42]' ] },
        keysExamined: 3,
        seeks: 2
      }
    }
  }
}

In my case, there is only one value to skip: { e:99, r:11 } falls within the first range r: [ '(10, inf.0]' ] but not within the following bound e: [ '[42, 42]' ], which shows up as one additional seek. If there aren't too many of those, this index remains efficient, and there's no need to create an additional one with a better key field ordering for this query.

Keep in mind that if there's no range predicate in your query, the IXSCAN cannot be utilized, as we discussed in the previous post. Without an index that begins with "e", this results in a COLLSCAN:

db.demoesr.find( 
 { e:{$eq: 42} } , {e:1,s:1,r:1,"_id":0 } 
).explain("executionStats").executionStats;

The workaround remains unchanged from the previous post, which involves adding an infinite range filter to the first field of the index:

db.demoesr.find( 
 { r:{$gte:MinKey}, e:{$eq: 42} } , {e:1,s:1,r:1,"_id":0 } 
).sort({s:1}).explain("executionStats").executionStats;

We have seen equality predicates, where the index scan can seek directly to the desired point, and range predicates, where multiple values can be examined but skipped according to the following ranges. You may wonder if a filter on a list of values ($in) is processed like multiple equalities or a single range with skips. That will be the next topic.

February 26, 2025

Upgrade strategies for Amazon Aurora PostgreSQL and Amazon RDS for PostgreSQL 12

In this post, we explore the end-of-life (EOL) timeline for Aurora PostgreSQL and Amazon RDS for PostgreSQL. We discuss features in PostgreSQL major versions, Amazon RDS Extended Support, and various upgrade strategies, including in-place upgrades, Amazon RDS blue/green deployments, and out-of-place upgrades.

Smart Casual Verification of the Confidential Consortium Framework

This paper (NSDI'25) applies lightweight formal methods (hence the pun "smart casual" in contrast to formal attire) to the Confidential Consortium Framework (CCF). CCF is an open-source platform for trustworthy cloud applications, used in Microsoft's Azure Confidential Ledger service. The authors combine formal specification, model checking, and automated testing to validate CCF's distributed protocols, specifically its custom consensus protocol.


Background

CCF uses trusted execution environments (TEEs) and state machine replication. Its consensus protocol is based on Raft but has major modifications: signature transactions (a Merkle tree root over the whole log, signed by the leader), optimistic acknowledgments, and partition leader step-down. CCF also avoids RPCs, using a uni-directional messaging layer. These changes add complexity, requiring formal verification. At the time of writing, CCF's implementation spans 63 kLoC in C++.

The authors define five requirements for CCF verification: verifying safety properties, documenting system behavior, increasing confidence in implementation, integrating with the codebase, and evolving with the system. They use TLA+ for formal specification and model checking. TLA+ specifies system behavior using temporal logic, verifying safety and liveness properties via model checking and simulation. However, verifying the protocol does not guarantee the implementation conforms to it.

Bridging this gap requires conformance checking, which can be done in two ways: test-case generation from the spec, or trace validation of the implementation. MongoDB's VLDB'20 paper titled "eXtreme Modelling in Practice" explored both approaches and is worth a look. The last four years have brought significant progress in the applicability of both methods. I think deterministic simulation maturing and gaining popularity has helped both approaches.

The first approach, test-case generation, uses spec executions to create tests, which are then run against the implementation. Conformance is verified by comparing the implementation's results to the spec's. This approach requires a highly controllable and observable implementation, effectively eliminating all non-determinism, and is most applicable and effective for single-node implementation components. Recently, at MongoDB Research, we leveraged our distributed transactions specs to generate test cases for testing the WiredTiger storage component. This compositional spec work, led by Will Schultz, will be submitted for publication soon.

This current Microsoft paper focuses on the second approach, trace validation, which works in reverse. It begins with an implementation trace and verifies whether that trace is permitted by a spec execution. This method presents its own challenges, often requiring the spec to be sufficiently detailed to align with the implementation. Consequently, it typically necessitates a "medium-level" spec rather than a more abstract, high-level one.


Distributed Consensus Specification

The TLA+ consensus specification models CCF's protocol with 17 actions and 13 variables. (TLA+ specifications from the project are available on GitHub.) The authors extend the spec for simulation and model checking to explore a wide range of behaviors. Simulation weights actions to increase coverage of rare but critical scenarios.

Trace validation checks if implementation traces match the system’s specification. Since collecting traces from a live production distributed system is difficult, the authors use traces from existing CCF tests. CCF had extensive unit, functional, and end-to-end tests before this work. Consensus testing used a scenario driver that serialized execution deterministically and mocked unrelated components, allowing controlled network faults and observability.

Trace validation verifies whether the behaviors in a trace (T) intersect with those allowed by the spec (S), ensuring T ∩ S ≠ ∅. TLC (TLA+'s explicit-state model checker) constructs S but cannot generate T from a trace. To address this, the authors wrote a new TLA+ spec, "Trace," reusing the spec actions but constraining them to observed events: an action is enabled iff the current event in the trace matches it, and the actions are parameterized by the values taken from the trace.

The paper reports that aligning spec actions with trace events was still challenging and they had to address granularity mismatches. Invalid traces could still result from incorrect logging, faulty mappings, or genuine spec-implementation mismatches. Unlike model checking, failed validation provided no counterexample, but behaviors in T helped diagnose the issue.
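As a toy illustration of the idea (a hypothetical spec and events, far simpler than the CCF/TLA+ tooling), trace validation searches for at least one spec execution that matches the observed events:

```python
# Toy spec: a state is (term, committed); "timeout" is always enabled,
# "commit" is guarded by term > 0.
def actions(state):
    term, committed = state
    yield "timeout", (term + 1, committed)
    if term > 0:
        yield "commit", (term, True)

def validate(trace, state=(0, False)):
    # Depth-first search over spec behaviors constrained to the trace,
    # mirroring the paper's switch from BFS to DFS in TLC.
    if not trace:
        return True  # every observed event was matched: T intersects S
    event, rest = trace[0], trace[1:]
    return any(validate(rest, nxt)
               for name, nxt in actions(state) if name == event)

print(validate(["timeout", "commit"]))  # True: some spec execution matches
print(validate(["commit"]))             # False: commit is not enabled yet
```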

State-space explosion made validating traces computationally expensive. To address this, the authors used depth-first search (DFS) in TLC instead of breadth-first search (BFS), which significantly improved performance. For example, validating a consistency trace took under a second with DFS, compared to an hour with BFS.


Client Consistency Specification

The client consistency spec models interactions between clients and CCF, verifying guarantees like linearizability for read-write transactions and serializability for read-only transactions. TLA+ formalizes these guarantees, and trace validation ensures the implementation adheres to them. This process mirrored the distributed consensus verification effort, so details are omitted. After seeing the method applied to consensus, CCF engineers applied the method themselves with minimal involvement from formal methods specialists. The effort took about one engineer-week spread over two weeks.

Here's a bug they found: Linearizability requires transaction execution to respect real-time ordering. The OBSERVEDROINV property states that committed read-only transactions must reflect all prior committed read-write transactions. Model checking found a 12-step counterexample in four seconds: a read-only transaction was processed by an old but active leader that had not logged recent write transactions from the new leader. This scenario requires a falsely replaced leader still handling requests before retiring. They chose not to change the behavior, as serializability suffices for most applications, but the spec helps clarify guarantees for developers.

Here is an interactive exploration of this bug using the Spectacle (nee tla-web) tool. (For some reason, the trace exploration works only after you press "Reset" first.)


Results

The verification process found bugs in election quorum tallying, commit index advancement, and premature node retirement. These bugs were detected through model checking, simulation, and trace validation. Table 2 lists the most serious consensus protocol bugs found. These affected both safety and liveness and were detected at various verification stages.

One notable bug is the Commit advance on AE-NACK. The spec required a leader’s matchIndex to remain unchanged after receiving an AE-NACK, but the implementation allowed it to decrease. This came from aggressively applying an optimization from the Raft paper. A one-line code change aligned the spec with the implementation, but subsequent simulation found a 34-state counterexample violating a key correctness property. Manual translation of this counterexample into a 150 LoC functional test confirmed that the implementation could advance its commit index incorrectly. Further refinement of the spec revealed that matchIndex should never decrease except after leader elections. Adding this property allowed model checking to find a shorter counterexample. The authors report that the combination of functional testing and model checking allowed a quick and confident fix.

Many bugs were first identified during spec development and alignment with the implementation. Writing the consensus and consistency specs forced deeper thinking about protocol invariants. This process also enabled bug discoveries that would have been difficult to confirm otherwise, especially in complex reconfiguration and failure-handling scenarios.

Feedback from the Postgres community about the vector index benchmarks

This is a response to some of the feedback I received from the Postgres community about my recent benchmark results for vector indexes using MariaDB and Postgres (pgvector). The work here isn't sponsored and required ~2 weeks of server time and a few more hours of my time (yes, I contribute to the PG community).

tl;dr

  • index create is ~4X faster when using ~4 parallel workers. I hope that parallel DDL comes to more open source DBMS.
  • parallel query does not help pgvector
  • increasing work_mem does not help pgvector
  • pgvector gets less QPS than MariaDB because it uses more CPU to compute the distance metric
The feedback

The feedback I received includes
  • benchmarks are useless, my production workload is the only relevant workload
    • I disagree, but don't expect consensus. This approach also means you are unlikely to do comparative benchmarks, because it costs too much to port your workload to another DBMS just for the sake of a benchmark, and since only your team can run it, you will miss out on expertise from elsewhere.
  • this work was sponsored so I don't trust it
    • There isn't much I can do about that.
  • this is benchmarketing!
    • There is a marketing element to this work. Perhaps I was not the first to define benchmarketing, but by my definition a benchmark report is not benchmarketing when the results are explained. I have been transparent about how I ran the benchmark and shared some performance debugging results here. MariaDB gets more QPS than pgvector because pgvector uses more CPU to compute the distance metric. 
  • you should try another Postgres extension for vector indexes
    • I hope to do that eventually. But pgvector is a great choice here because it implements HNSW and MariaDB implements modified HNSW. Regardless, time is finite, the number of servers I have is finite, and my work here (both sponsored and volunteer) competes with my need to finish my taxes, play video games, sleep, etc.
  • this result is bogus because I didn't like some config setting you used
    • Claims like this are cheap to make and expensive to debunk. Sometimes the suggested changes make a big difference but that has been rare in my experience. Most of the time the changes at best make a small difference.
  • this result is bogus because you used Docker
    • I am not a Docker fan, but I only used Docker for Qdrant. I am not a fan because I suspect it is overused and I prefer benchmark setups that don't require it. While the ann-benchmarks page states that Docker is used for all algorithms, it is not used for MariaDB or Postgres in my fork. And it is trivial, but time consuming, to update most of ann-benchmarks to not use Docker.
Claims about the config that I used include:
  • huge pages were disabled
    • Yes they were. Enabling huge pages would have helped both MariaDB and Postgres. But I prefer to not use them because the feature is painful (unless you like OOM). Posts from me on this are here and here.
  • track_io_timing was enabled and is expensive
    • There wasn't IO when queries ran as the database was cached so this is irrelevant. There was some write IO when the index was created. While I won't repeat tests with track_io_timing disabled, I am skeptical that the latency of two calls to gettimeofday() per IO are significant on modern Linux.
  • autovacuum_vacuum_cost_limit is too high
    • I set it to 4000 because I often run write-intensive benchmarks on servers with large IOPS capacities. This is the first time anyone suggested it was too high, and experts have reviewed my config files. I wish that Postgres vacuum didn't require so much tuning -- and all of that tuning means that someone can always claim your tuning is wrong. Regardless, the benchmark workload is load, create index, query, and I only time the create index and query steps, so there is little impact from vacuum on this benchmark. I have also used hundreds of server hours to search for good Postgres configs.
  • parallel operations were disabled
    • Parallel query isn't needed for the workloads I normally run but parallel index create is nice to have. I disable parallel index create for Postgres because my primary focus is efficiency -- how much CPU and IO is consumed per operation. But I haven't been clear on that in my blog posts. Regardless, below I show there is a big impact from parallel index create and no impact from parallel query.
  • work_mem is too low
    • I have been using the default which is fine for the workloads I normally run (sysbench, insert benchmark). Below I show there is no impact from increasing it to 8M.
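For reference, all of the settings discussed above are postgresql.conf parameters. This fragment shows them side by side; the values here are illustrative, not my exact config files (those are linked in the Benchmark section):

```ini
# Illustrative postgresql.conf fragment -- not my exact config files
huge_pages = off                      # I avoid huge pages (OOM pain)
track_io_timing = on                  # two clock calls per IO
autovacuum_vacuum_cost_limit = 4000   # tuned for servers with large IOPS capacity
max_parallel_maintenance_workers = 0  # disables parallel index create (def config)
work_mem = 4MB                        # the default; the wm8 config raises it to 8MB
```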

Benchmark

This post has much more detail about my approach in general. I ran the benchmark for 1 session. I use ann-benchmarks via my fork of a fork of a fork at this commit. I used the dbpedia-openai dataset with 1M rows. It uses angular (cosine) for the distance metric. The ann-benchmarks config file for Postgres is here, and in this case I only have results for pgvector with halfvec (float16).
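For readers unfamiliar with the angular metric: it is the cosine distance between two vectors. A minimal sketch in Python (my own illustration, not the ann-benchmarks implementation):

```python
import math

def angular_distance(a, b):
    """Cosine distance: 1 minus the cosine of the angle between a and b."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Vectors pointing the same way have distance 0; orthogonal vectors have distance 1.
print(angular_distance([1.0, 0.0], [2.0, 0.0]))  # 0.0
print(angular_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0
```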

I used a large server (Hetzner ax162-s) with 48 cores, 128G of RAM, Ubuntu 22.04 and HW RAID 10 using 2 NVMe devices. I tested three configurations for Postgres and all of the settings are here:
  • def
    • def stands for default and is the config I used in all of my previous blog posts. Thus, it is the config for which I received feedback.
  • wm8
    • wm8 stands for work_mem increased to 8MB. The default (used by def) is 4MB.
  • pq4
    • pq4 stands for Parallel Query with ~4 workers. Here I changed a few settings from def to support that.
Output from the benchmark is here.

The command lines to run the benchmark using my helper scripts are:
    bash rall.batch.sh v1 dbpedia-openai-1000k-angular c32r128

Results: QPS vs recall

These charts show the best QPS for a given recall. The graphs appear to be the same, but the differences are hard to see as recall approaches 1.0, so the next section has a table with numbers.

But from these graphs, QPS doesn't improve with the wm8 or pq4 configs.


The chart for the def config which is what I used in previous blog posts.
The chart for the wm8 config with work_mem=8M
The chart for the pq4 config that uses ~4 parallel workers

Results: best QPS for a given recall

Many benchmark results are marketed via peak performance (max throughput or min response time), but these are usually constrained optimization problems -- determine the peak performance that satisfies some SLA. The SLA might be on response time or efficiency (cost).

With ann-benchmarks the constraint is recall. Below I share the best QPS for a given recall target along with the configuration parameters (M, ef_construction, ef_search) at which that occurs.
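The selection logic behind "best QPS for a given recall" is simple: keep only the runs that meet the recall target, then take the one with the highest QPS. A minimal sketch (my own helper, not ann-benchmarks code; the sample rows are taken from the tables below):

```python
def best_qps(runs, recall_target):
    """Return the run with the highest QPS among runs meeting the recall target.

    runs is a list of dicts like {"recall": 0.99, "qps": 317.9, "config": "..."}.
    Returns None when no run meets the target.
    """
    eligible = [r for r in runs if r["recall"] >= recall_target]
    if not eligible:
        return None
    return max(eligible, key=lambda r: r["qps"])

runs = [
    {"recall": 0.990, "qps": 317.9, "config": "m=48, ef_cons=256, ef_search=40"},
    {"recall": 0.951, "qps": 768.7, "config": "m=16, ef_cons=192, ef_search=30"},
]
print(best_qps(runs, 0.95)["qps"])  # 768.7
print(best_qps(runs, 1.0))          # None: no run achieved recall >= 1.0
```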

Summary
  • pgvector does not get more QPS with parallel query
  • pgvector does not get more QPS with a larger value for work_mem
Legend:
  • recall, QPS - best QPS at that recall
  • isecs - time to create the index in seconds
  • m= - value for M when creating the index
  • ef_cons= - value for ef_construction when creating the index
  • ef_search= - value for ef_search when running queries
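Recall itself is measured the usual way for ANN benchmarks: the fraction of the true k nearest neighbors that the approximate search returned. A minimal sketch with hypothetical neighbor IDs (my own code, not the ann-benchmarks implementation):

```python
def recall_at_k(approx_ids, true_ids):
    """Fraction of the exact k nearest neighbors found by the approximate search."""
    return len(set(approx_ids) & set(true_ids)) / len(true_ids)

# The approximate search found 9 of the 10 true neighbors -> recall 0.9.
print(recall_at_k(list(range(9)) + [99], list(range(10))))  # 0.9
```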
Best QPS with recall >= 1.000
no algorithms achieved this for the (M, ef_construction) settings I used

Best QPS with recall >= 0.99
recall  QPS     isecs   config
0.990   317.9   7302    PGVector_halfvec(m=48, ef_cons=256, ef_search=40)
0.990   320.4   7285    PGVector_halfvec(m=48, ef_cons=256, ef_search=40)
0.990   316.9   1565    PGVector_halfvec(m=48, ef_cons=256, ef_search=40)

Best QPS with recall >= 0.98
recall  QPS     isecs   config
0.983   412.0   4120    PGVector_halfvec(m=32, ef_cons=192, ef_search=40)
0.984   415.6   4168    PGVector_halfvec(m=32, ef_cons=192, ef_search=40)
0.984   411.4    903    PGVector_halfvec(m=32, ef_cons=192, ef_search=40)

Best QPS with recall >= 0.97
recall  QPS     isecs   config
0.978   487.3   5070    PGVector_halfvec(m=32, ef_cons=256, ef_search=30)
0.970   508.3   2495    PGVector_halfvec(m=32, ef_cons=96, ef_search=30)
0.970   508.4   2495    PGVector_halfvec(m=32, ef_cons=96, ef_search=30)

Best QPS with recall >= 0.96
recall  QPS     isecs   config
0.961   621.1   4120    PGVector_halfvec(m=32, ef_cons=192, ef_search=20)
0.962   632.3   4168    PGVector_halfvec(m=32, ef_cons=192, ef_search=20)
0.962   622.0    903    PGVector_halfvec(m=32, ef_cons=192, ef_search=20)

Best QPS with recall >= 0.95
recall  QPS     isecs   config
0.951   768.7   2436    PGVector_halfvec(m=16, ef_cons=192, ef_search=30)
0.952   770.2   2442    PGVector_halfvec(m=16, ef_cons=192, ef_search=30)
0.953   753.0    547    PGVector_halfvec(m=16, ef_cons=192, ef_search=30)

Results: create index

The database configs for Postgres are shared above and parallel index create is disabled by default because my focus has not been on DDL performance. Regardless, it works great for Postgres with pgvector. The summary is:
  • index create is ~4X faster when using ~4 parallel workers
  • index sizes are similar with and without parallel create index
Sizes: table is ~8G and index is ~4G

Legend
  • M - value for M when creating the index
  • cons - value for ef_construction when creating the index
  • secs - time in seconds to create the index
  • size(MB) - index size in MB
                def     wm8     pq4
M       cons    secs    secs    secs
 8       32      412     405     108
16       32      649     654     155
 8       64      624     627     154
16       64     1029    1029     237
32       64     1901    1895     412
 8       96      834     835     194
16       96     1387    1393     312
32       96     2497    2495     541
48       96     3731    3726     798
 8      192     1409    1410     316
16      192     2436    2442     547
32      192     4120    4168     903
48      192     6117    6119    1309
64      192     7838    7815    1662
 8      256     1767    1752     400
16      256     3146    3148     690
32      256     5070    5083    1102
48      256     7302    7285    1565
64      256     9959    9946    2117
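The ~4X claim is easy to check against the table above. A quick sketch using a few (M, ef_construction) rows, comparing the def and pq4 create-index times:

```python
# (M, ef_construction) -> (def secs, pq4 secs), copied from the table above
times = {
    (8, 32): (412, 108),
    (32, 96): (2497, 541),
    (64, 256): (9959, 2117),
}
for key, (def_secs, pq4_secs) in times.items():
    # Speedup from parallel index create with ~4 workers
    print(key, round(def_secs / pq4_secs, 1))
```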

Using VS Code and Docker to Debug MySQL Crashes

Typically, we receive customer tickets regarding crashes or bugs, where we request a core dump to analyze and identify the root cause or understand the unexpected behavior. To read the core dumps, we also request the linked libraries used by the server’s MySQL. However, there’s a more efficient way to achieve our goal: by using […]

Build a Datadog alternative in 5 minutes

I built an open source template that you can use to have a free, simple Datadog alternative in about 5 minutes. Here was my process for building it.

February 25, 2025

February 24, 2025

Understanding trx-consistency-only on MyDumper Before Removal

I have been working on MyDumper for over three years now, and I usually don’t use the trx-consistency-only feature during backups because it wasn’t an option I quite understood. So, when reviewing another issue, I stepped into a curious scenario, and I finally got it and decided to share with you what I learned and when it should […]

February 21, 2025

How Mindbody improved query latency and optimized costs using Amazon Aurora PostgreSQL Optimized Reads

In this post, we highlight the scaling and performance challenges Mindbody was facing due to an increase in their data growth. We also present the root cause analysis and recommendations for adopting Aurora Optimized Reads, outlining the steps taken to address these issues. Finally, we discuss the benefits Mindbody realized from implementing these changes, including enhanced query performance, significant cost savings, and improved price predictability.

Comments on Executive Order 14168

Submitted to the Department of State, which is requesting comments on a proposed change which would align US passport gender markers with executive order 14168.

Executive order 14168 is biologically incoherent and socially cruel. All passport applicants should be allowed to select whatever gender markers they feel best fit, including M, F, or X.

In humans, neither sex nor gender is binary at any level. There are several possible arrangements of sex chromosomes: X, XX, XY, XXY, XYY, XXX, tetrasomies, pentasomies, etc. A single person can contain a mosaic of cells with different genetics: some XX, some XYY. Chromosomes may not align with genitalia: people with XY chromosomes may have a vulva and internal testes. People with XY chromosomes and a small penis may be surgically and socially reassigned female at birth—and never told what happened. None of these biological dimensions necessarily align with one’s internal concept of gender, or one’s social presentation.

The executive order has no idea how biology works. It defines “female” as “a person belonging, at conception, to the sex that produces the large reproductive cell”. Zygotes do not produce reproductive cells at all: under this order none of us have a sex. Oogenesis doesn’t start until over a month into embryo development. Even if people were karyotyping their zygotes immediately after conception so they could tell what “legal” sex they were going to be, they could be wrong: which gametes we produce depends on the formation of the genital ridge.

All this is to say that if people fill out these forms using this definition of sex, they’re guessing at a question which is both impossible to answer and socially irrelevant. You might be one of the roughly two percent of humans born with an uncommon sexual development and not even know it. Moreover, the proposed change fundamentally asks the wrong question: gender markers on passports are used by border control agents, and are expected to align with how those agents read the passport holder’s gender. A mismatch will create needless intimidation and hardship for travelers.

Of course most of us will not have our identities challenged under this order. That animus is reserved for trans people, for gender-non-conforming people, for anyone whose genetics, body, dress, voice, or mannerisms don’t quite fit the mold. Those are the people who will suffer under this order. That cruelty should be resisted.

What makes entrepreneurs entrepreneurial?

Entrepreneurs think and act differently from managers and strategists.

This 2008 paper argues that entrepreneurs use effectual reasoning, the polar opposite of causal reasoning taught in business schools. Causal reasoning starts with a goal and finds the best way to achieve it. Effectual reasoning starts with available resources and lets goals emerge along the way. Entrepreneurs are explorers, not generals. Instead of following fixed plans, they experiment and adapt to seize whatever opportunities the world throws at them.

Consider this example from the paper. A causal thinker starts an Indian restaurant by following a fixed plan: researching the market, choosing a prime location, targeting the right customers, securing funding, and executing a well-designed strategy. The effectual entrepreneur doesn’t start with a set goal. She starts with what she has (her skills, knowledge, and network), and she experiments. She might begin by selling homemade lunches to friends’ coworkers. If that works, she expands. If not, she watches what excites her customers. Maybe they care less about the food and more about her cultural insights and life experiences. She doesn’t force the restaurant idea. She unasks the question and asks a new one: What do people really want from me? (This is what Airbnb did with its "Experiences". Instead of just renting out rooms, hosts began offering cooking classes, city tours, and adventures, things people didn’t know they wanted until they saw them.) 


Key principles of effectual reasoning

The author, Prof. Saras D. Sarasvathy, interviewed 30 seasoned entrepreneurs and identified key principles of effectual reasoning.

The first is affordable loss. Forget maximizing returns. Entrepreneurs focus on what they can afford to lose. Instead of wasting time on market research, they test ideas in the real world. They fail cheap, fail fast, and learn faster.

The second is partnerships over competition. Entrepreneurs don’t assume a market exists for their idea. They build networks of collaborators who help shape the business. This lowers risk, opens doors, and provides instant feedback. This contrasts with the corporate world, which focuses on outmaneuvering rivals.

The third is leveraging surprises. Managers hate surprises. Entrepreneurs love them. The unexpected isn’t an obstacle, it’s an opening. A pivot. A new market. A better business.

So, effectual reasoning flips traditional business thinking on its head. Most strategies assume that predicting the future gives control. Entrepreneurs believe the opposite: take action, and you shape the future. Rather than waiting for ideal conditions, they start with what they have and improvise.


Discussion

Here is what I make of this. Some people are born entrepreneurs. Maybe you can teach it. Maybe you can’t. Some minds thrive in uncertainty and make it up as they go. Others crave a plan, a map, a well-paved road. 

It’s the old fox-versus-hedgehog dilemma from Grand Strategy. The fox knows many things. The hedgehog knows one big thing.

A smart friend of mine (Mahesh Balakrishnan, who has a related Managing Skunks post here) put it perfectly. On his dream software team, he wants either Jeeps or Ferraris. Jeeps go anywhere. No roads, no directions—just point them at a problem, and they’ll tear through it. That’s effectual reasoning. Ferraris, on the other hand, need smooth roads and a clear destination. But once they have that, they move fast. He doesn’t want Toyota Corollas. Corollas are slow. Worse, they still need roads.

So here’s my corollary (admire my beautiful pun).

If you've got to be a causal thinker, be a Ferrari. Not a Corolla.


Postamble

You can tell Steve Jobs is the textbook "effectual thinker" just from this quote.

“The thing I would say is, when you grow up, you tend to get told that the world is the way it is, and your life is just to live your life inside the world. Try not to bash into the walls too much. Try to have a nice family life, have fun, save a little money. But life, that's a very limited life. Life can be much broader once you discover one simple fact, and that is: Everything around you that you call life was made up by people that were no smarter than you. And you can change it. You can influence it. You can build your own things that other people can use. And the minute that you understand that you can poke life, and actually something will, you know, if you push in, something will pop out the other side, that you can change it. You can mold it. That's maybe the most important thing is to shake off this erroneous notion that life is there and you're just going to live in it, versus embrace it. Change it. Improve it. Make your mark upon it. I think that's very important. And however you learn that, once you learn it, you'll want to change life and make it better. Because it's kind of messed up in a lot of ways. Once you learn that, you'll never be the same again.”