a curated list of database news from authoritative sources

December 18, 2025

Optimizing correlated subqueries in Amazon Aurora PostgreSQL

Correlated subqueries can create performance challenges in Amazon Aurora PostgreSQL, causing applications to slow down as data volumes grow. In this post, we explore the advanced optimization configurations available in Aurora PostgreSQL that can transform these performance challenges into efficient operations without requiring you to modify a single line of SQL code.

Introducing Percona Load Generator for MongoDB Clusters: The Benchmark Tool That Simulates Your Actual Application

If you have ever tuned a MongoDB cluster that passed every synthetic benchmark with flying colors, only to choke the moment real user traffic hit, you are not alone. For years, database administrators and developers have relied on a standard suite of tools to test MongoDB performance (YCSB, Sysbench, POCDriver and mgodatagen –  just to […]

December 17, 2025

Performance regressions in MySQL 8.4 and 9.x with sysbench

I have been claiming that I don't find significant performance regressions in MySQL 8.4 and 9.x when I use sysbench. I need to change that claim. There are regressions for write-heavy tests; they are larger for tests with more concurrency and larger when gtid support is enabled.

By "gtid support is enabled" I mean that these options are set to ON:
  • gtid_mode
  • enforce_gtid_consistency

Both of these are ON by default in MySQL 9.5.0 and were OFF by default in earlier releases. I just learned about the performance impact from these, and in future tests I will probably repeat tests with them set to ON and OFF.

This blog post has results from the write-heavy tests with sysbench for MySQL 8.0, 8.4, 9.4 and 9.5 to explain my claims above.

tl;dr

  • Regressions are most likely and larger on the insert test
  • There are regressions for write-heavy workloads in MySQL 8.4 and 9.x
    • Throughput is typically 15% less in MySQL 9.5 than in 8.0 for tests with 16 clients on the 24-core/2-socket server
    • Throughput is typically 5% less in MySQL 9.5 than in 8.0 for tests with 40 clients on the 48-core server
  • The regressions are larger when gtid_mode and enforce_gtid_consistency are set to ON
    • Throughput is typically 5% to 10% less with the -gtid configs vs the -nogtid configs with 40 clients on the 48-core server. But this is less of an issue on other servers.
    • There are significant increases in CPU, context switch rates and KB written to storage for the -gtid configs relative to the same MySQL version using the -nogtid configs
  • Regressions might be larger for the insert and update-inlist tests because they have larger transactions relative to other write-heavy tests. Performance regressions are correlated with increases in CPU, context switches and KB written to storage per transaction.
What changed?

I use diff to compare the output from SHOW GLOBAL VARIABLES when I build new releases, and from that it is obvious that the default values for gtid_mode and enforce_gtid_consistency changed in MySQL 9.5, but I didn't appreciate the impact of that change.

Builds, configuration and hardware

I compiled MySQL from source for versions 8.0.44, 8.4.6, 8.4.7, 9.4.0 and 9.5.0.

The versions that I tested are named:
  • 8.0.44-nogtid
    • MySQL 8.0.44 with gtid_mode and enforce_gtid_consistency =OFF
  • 8.0.44-gtid
    • MySQL 8.0.44 with gtid_mode and enforce_gtid_consistency =ON
  • 8.4.7-nogtid
    • MySQL 8.4.7 with gtid_mode and enforce_gtid_consistency =OFF
  • 8.4.7-gtid
    • MySQL 8.4.7 with gtid_mode and enforce_gtid_consistency =ON
  • 9.4.0-nogtid
    • MySQL 9.4.0 with gtid_mode and enforce_gtid_consistency =OFF
  • 9.4.0-gtid
    • MySQL 9.4.0 with gtid_mode and enforce_gtid_consistency =ON
  • 9.5.0-nogtid
    • MySQL 9.5.0 with gtid_mode and enforce_gtid_consistency =OFF
  • 9.5.0-gtid
    • MySQL 9.5.0 with gtid_mode and enforce_gtid_consistency =ON

The servers are:
  • 8-core
    • The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, and 32G of RAM. Storage is one NVMe device for the database using ext4 with discard enabled. The OS is Ubuntu 24.04.
    • my.cnf for the -nogtid configs are here for 8.0, 8.4, 9.4, 9.5
    • my.cnf for the -gtid configs are here for 8.0, 8.4, 9.4, 9.5
    • The benchmark is run with 1 thread, 1 table and 50M rows per table
  • 24-core
    • The server is a SuperMicro SuperWorkstation 7049A-T with 2 sockets, 12 cores/socket, 64G RAM, and one m.2 SSD (2TB, ext4 with discard enabled). The OS is Ubuntu 24.04. The CPUs are Intel Xeon Silver 4214R @ 2.40GHz.
    • my.cnf for the -nogtid configs are here for 8.0, 8.4, 9.4, 9.5
    • my.cnf for the -gtid configs are here for 8.0, 8.4, 9.4, 9.5
    • The benchmark is run with 16 threads, 8 tables and 10M rows per table
  • 48-core
    • The server is an ax162s from Hetzner with an AMD EPYC 9454P 48-Core Processor with SMT disabled and 128G of RAM. Storage is 2 Intel D7-P5520 NVMe devices (3.8T each) with RAID 1 using ext4. The OS is Ubuntu 22.04 running the non-HWE kernel (5.15.0-118-generic).
    • my.cnf for the -nogtid configs are here for 8.0, 8.4, 9.4, 9.5
    • my.cnf for the -gtid configs are here for 8.0, 8.4, 9.4, 9.5
    • The benchmark is run with 40 threads, 8 tables and 10M rows per table
Benchmark

I used sysbench and my usage is explained here. I now run 32 of the 42 microbenchmarks listed in that blog post. Most test only one type of SQL statement. Benchmarks are run with the database cached by InnoDB. While I ran all of the tests, I only share results from a subset of the write-heavy tests.

The read-heavy microbenchmarks are run for 600 seconds and the write-heavy for 900 seconds. 

The purpose is to search for regressions from new CPU overhead and mutex contention. The workload is cached -- there should be no read IO but there will be some write IO.

Results

The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. Here I only share results from a subset of the write-heavy tests.

I provide charts below with relative QPS. The relative QPS is the following:
(QPS for some version) / (QPS for MySQL 8.0.44)
When the relative QPS is > 1 then some version is faster than MySQL 8.0.44.  When it is < 1 then there might be a regression. When the relative QPS is 1.2 then some version is about 20% faster than MySQL 8.0.44.

Values from iostat and vmstat divided by QPS are here for the 8-core, 24-core and 48-core servers. These can help to explain why something is faster or slower because it shows how much HW is used per request, including CPU overhead per operation (cpu/o) and context switches per operation (cs/o) which are often a proxy for mutex contention.

The spreadsheet and charts are here and in some cases are easier to read than the charts below. The y-axis doesn't start at 0 to improve readability.

Results: 8-core

Summary
  • For many tests there are small regressions from 8.0 to 8.4 and 8.4 to 9.x
  • There are small improvements (~5%) for the -gtid configs vs the -nogtid result for update-index
  • There is a small regression (~5%) for the -gtid configs vs the -nogtid result for insert
  • There are small regressions (~1%) for the -gtid configs vs the -nogtid result for other tests
From vmstat metrics for the insert test where perf decreases with the 9.5.0-gtid result
  • CPU per operation (cpu/o) increases by 1.10X with the -gtid config
  • Context switches per operation (cs/o) increases by 1.45X with the -gtid config
  • KB written to storage per commit (wKB/o) increases by 1.16X with the -gtid config
From vmstat metrics for the update-index test where perf increases with the 9.5.0-gtid result
  • CPU per operation (cpu/o) decreases by ~3% with the -gtid config
  • Context switches per operation (cs/o) decrease by ~2% with the -gtid config
  • KB written to storage per commit (wKB/o) decreases by ~3% with the -gtid config
  • This result is odd. I might try to reproduce it in the future
Results: 24-core

Summary
  • For many tests there are regressions from 8.0 to 8.4 and 8.4 to 9.x and throughput is typically 15% less in 9.5.0 than 8.0.44
  • There are large regressions in 9.4 and 9.5 for update-inlist
  • There is usually a small regression (~5%) for the -gtid configs vs the -nogtid result
From vmstat metrics for the insert test comparing 9.5.0-gtid with 9.5.0-nogtid
  • Throughput is 1.15X larger in 9.5.0-nogtid
  • CPU per operation (cpu/o) is 1.15X larger in 9.5.0-gtid
  • Context switches per operation (cs/o) are 1.23X larger in 9.5.0-gtid
  • KB written to storage per commit (wKB/o) is 1.24X larger in 9.5.0-gtid
From vmstat metrics for the update-inlist test comparing both 9.5.0-gtid and 9.5.0-nogtid with 8.0.44-nogtid
  • The problems here look different than most other tests as the regressions in 9.4 and 9.5 are similar for the -gtid and -nogtid configs. If I have time I will get flamegraphs and PMP output. The server here has two sockets and can suffer more from false-sharing and real contention on cache lines.
  • Throughput is 1.43X larger in 8.0.44-nogtid
  • CPU per operation (cpu/o) is 1.05X larger in 8.0.44-nogtid
  • Context switches per operation (cs/o) are 1.18X larger in 8.0.44-nogtid
  • KB written to storage per commit (wKB/o) is ~1.12X larger in 9.5.0
Results: 48-core

Summary
  • For many tests there are regressions from 8.0 to 8.4
  • For some tests there are regressions from 8.4 to 9.x
  • There is usually a large regression for the -gtid configs vs the -nogtid result and the worst case occurs on the insert test
From vmstat metrics for the insert test comparing 9.5.0-gtid with 9.5.0-nogtid
  • Throughput is 1.17X larger in 9.5.0-nogtid
  • CPU per operation (cpu/o) is 1.13X larger in 9.5.0-gtid
  • Context switches per operation (cs/o) are 1.26X larger in 9.5.0-gtid
  • KB written to storage per commit (wKB/o) is 1.24X larger in 9.5.0-gtid

Bridging the Gap: Querying PostgreSQL Tables from an Oracle Database

Database interoperability is a common requirement in enterprise environments. For users seeking to access Oracle data from a PostgreSQL instance, the native PostgreSQL extension oracle_fdw (Foreign Data Wrapper) is an excellent and efficient solution. However, the question often arises: How do we perform the reverse? How can an Oracle SQL query execute a SELECT statement […]

The $sql aggregation stage in Oracle Database to replace explain("executionStats")

As application-driven data models and document databases—made popular by MongoDB—continue to gain traction, Oracle Database has added MongoDB emulation capabilities on top of its SQL query engine. Only the logical model and the exposed API resemble MongoDB: the physical documents are stored in relational tables and fixed-size blocks. This adds another abstraction on top of SQL, and when performance differs from what you expect, you need to look at the physical execution behind the logical query. The $sql stage of an aggregation pipeline can help troubleshoot. Let's take an example.

I create a one-million-document collection and index it with the ESR (Equality, Sort, Range) Guideline in mind:

// create the collection
db.oneMillion.drop();
db.oneMillion.createIndex({ e: 1, s: 1, r: 1 });
// insert documents
for (let i = 0; i < 1e2; i++) {  // 100 batches of 10,000 documents = 1 million total
  void db.oneMillion.insertMany( Array.from(  
    { length: 1e4 },  
    (_, i) => ({ e: i%3, s: new Date(), r: Math.random(), x: UUID() })  
  ) )
}
// check count
db.oneMillion.countDocuments();

I run a query that completes in 1 millisecond on MongoDB but takes seconds in the Oracle Database emulation. It’s a simple pagination query combining:

  • An equality filter: { e: 2 }, which returns one-third of the collection
  • A range filter: { r: { $gt: 0.5 } }, which returns half of those documents
  • A sort with pagination: sort({ s: 1, r: 1 }).limit(10), which returns the last ten documents by date and value

This query runs much faster on MongoDB than in the Oracle emulation. To get execution statistics, I add hint({"$native":"MONITOR"}) so that the underlying SQL query is run with the /*+ MONITOR */ hint:

db.oneMillion.find( 
 { e:2, r: { $gt:0.5 } },
 { s:1, r:1, _id:0 }
).sort({ s:1, r:1 }).limit(10).hint({"$native":"MONITOR"});

The query is executed and returns the result:

[
  { s: ISODate('2025-12-17T08:52:50.475Z'), r: 0.5222276191239983 },
  { s: ISODate('2025-12-17T08:52:50.475Z'), r: 0.7565247894880116 },
  { s: ISODate('2025-12-17T08:52:50.476Z'), r: 0.6099160713187135 },
  { s: ISODate('2025-12-17T08:52:50.476Z'), r: 0.765542699487576 },
  { s: ISODate('2025-12-17T08:52:50.476Z'), r: 0.8144790402364248 },
  { s: ISODate('2025-12-17T08:52:50.476Z'), r: 0.8328191789951023 },
  { s: ISODate('2025-12-17T08:52:50.476Z'), r: 0.8356551175440483 },
  { s: ISODate('2025-12-17T08:52:50.476Z'), r: 0.9779607167502489 },
  { s: ISODate('2025-12-17T08:52:50.477Z'), r: 0.5236033088481526 },
  { s: ISODate('2025-12-17T08:52:50.477Z'), r: 0.5290926931399482 }
]

After running it, I can get the SQL Monitor report for the last query in my session by calling dbms_sqltune.report_sql_monitor through the $sql aggregation stage:

db.aggregate([{ $sql : `select 
 dbms_sqltune.report_sql_monitor(report_level=>'all',type=>'text') as "text"
`}]).forEach(row => print(row.text))

The output shows the underlying SQL query generated by the emulation, with its execution plan and execution statistics:

SQL Monitoring Report

SQL Text
------------------------------
select /*+ FIRST_ROWS(10) MONITOR */ json_patch("DATA",:1 project ),rawtohex("RESID"),"ETAG" from "ORA"."oneMillion" where JSON_EXISTS("DATA",'$?( (@.e.numberOnly() == $B0) && (@.r.numberOnly() > $B1) )' passing :2 as "B0", :3 as "B1" type(strict)) order by JSON_QUERY("DATA", '$.s[*].min()') asc nulls first, JSON_QUERY("DATA", '$.r[*].min()') asc nulls first fetch next 10 rows only

Global Information
------------------------------
 Status              :  DONE (ALL ROWS)
 Instance ID         :  4
 Session             :  ORA (36619:4028)
 SQL ID              :  420n5y2ytx6zh
 SQL Execution ID    :  67108867
 Execution Started   :  12/17/2025 09:04:51
 First Refresh Time  :  12/17/2025 09:04:51
 Last Refresh Time   :  12/17/2025 09:04:52
 Duration            :  1s
 Module/Action       :  ORDS_ADBS_Managed/-
 Service             :  CQWRIAXKGYBKVNX_O23_low.adb.oraclecloud.com
 Program             :  ORDS_ADBS_Managed
 Fetch Calls         :  1

Binds
========================================================================================================================
| Name | Position |  Type  |                                           Value                                           |
========================================================================================================================
| :2   |        2 | NUMBER | 2                                                                                         |
| :3   |        3 | NUMBER | .5                                                                                        |
========================================================================================================================

Global Stats
=================================================
| Elapsed |   Cpu   |  Other   | Fetch | Buffer |
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  |
=================================================
|    1.18 |    1.18 |     0.00 |     1 |  98194 |
=================================================

SQL Plan Monitoring Details (Plan Hash Value=2462396944)
==============================================================================================================================================================================
| Id |                 Operation                 |            Name             |  Rows   | Cost |   Time    | Start  | Execs |   Rows   |  Mem  | Activity | Activity Detail |
|    |                                           |                             | (Estim) |      | Active(s) | Active |       | (Actual) | (Max) |   (%)    |   (# samples)   |
==============================================================================================================================================================================
|  0 | SELECT STATEMENT                          |                             |         |      |         1 |     +1 |     1 |       10 |     . |          |                 |
|  1 |   COUNT STOPKEY                           |                             |         |      |         1 |     +1 |     1 |       10 |     . |          |                 |
|  2 |    VIEW                                   |                             |      52 |  316 |         1 |     +1 |     1 |       10 |     . |          |                 |
|  3 |     SORT ORDER BY STOPKEY                 |                             |      52 |  316 |         2 |     +1 |     1 |       10 |  4096 |    50.00 | Cpu (1)         |
|  4 |      FILTER                               |                             |         |      |         1 |     +1 |     1 |     167K |     . |          |                 |
|  5 |       TABLE ACCESS BY INDEX ROWID BATCHED | oneMillion                  |      52 |   38 |         1 |     +1 |     1 |     167K |     . |    50.00 | Cpu (1)         |
|  6 |        HASH UNIQUE                        |                             |      52 |      |         1 |     +1 |     1 |     167K |     . |          |                 |
|  7 |         INDEX RANGE SCAN (MULTI VALUE)    | $ora:oneMillion.e_1_s_1_r_1 |      38 |   27 |         1 |     +1 |     1 |     167K |     . |          |                 |
==============================================================================================================================================================================

Returning those 10 rows used 1.18 seconds of CPU because the index was used only for the filters, not for the ordering: it returned 166,666 rows that had to be deduplicated (HASH UNIQUE) and sorted (SORT ORDER BY STOPKEY) before the result was returned.

Oracle has powerful hints, and you can pass them through MongoDB's hint({"$native": "..."}) (not to be confused with hint({"$natural":1})). For example, I can try to avoid this HASH UNIQUE that doesn't preserve the ordering from the index:

db.oneMillion.find( 
 { e:2, r: { $gt:0.5 } },
 { s:1, r:1, _id:0 }
).sort({ s:1, r:1 }).limit(10).hint(
 {"$native":'NO_USE_HASH_AGGREGATION MONITOR'}
);

It uses a SORT UNIQUE but still doesn't preserve the index ordering because the deduplication is on the ROWID, so finally it's just an additional sort:

|  3 |     SORT ORDER BY STOPKEY                 |                             |       1 |    3 |         1 |     +1 |     1 |       10 |  4096 |          |                 |
|  4 |      FILTER                               |                             |         |      |         1 |     +1 |     1 |     167K |     . |          |                 |
|  5 |       TABLE ACCESS BY INDEX ROWID BATCHED | oneMillion                  |       1 |    2 |         1 |     +1 |     1 |     167K |     . |   100.00 | Cpu (1)         |
|  6 |        SORT UNIQUE                        |                             |       1 |      |         1 |     +1 |     1 |     167K |  10MB |          |                 |
|  7 |         INDEX RANGE SCAN (MULTI VALUE)    | $ora:oneMillion.e_1_s_1_r_1 |       1 |    2 |         1 |     +1 |     1 |     167K |     . |          |                 |

If you don't have the license for all options (Enterprise Edition, Diagnostic Pack, and Tuning Pack), don't use SQL Monitor. You can still view the execution plan with DBMS_XPLAN. To obtain execution statistics, use the GATHER_PLAN_STATISTICS hint:

db.oneMillion.find( 
 { e:2, r: { $gt:0.5 } },
 { s:1, r:1, _id:0 }
).sort({ s:1, r:1 }).limit(10).hint({"$native":"GATHER_PLAN_STATISTICS"});

The query to get all execution plan sections is:

db.aggregate( [ { $sql : `
 select * 
 from dbms_xplan.display_cursor(null,null,'ADVANCED ALLSTATS LAST')
` } ] ).forEach(row => print(row.PLAN_TABLE_OUTPUT));
SQL_ID  66jyw5hxfx4zh, child number 0
-------------------------------------
select /*+ FIRST_ROWS(10) GATHER_PLAN_STATISTICS */
json_patch("DATA",:1 project ),rawtohex("RESID"),"ETAG" from
"ORA"."oneMillion" where JSON_EXISTS("DATA",'$?( (@.e.numberOnly() ==
$B0) && (@.r.numberOnly() > $B1) )' passing :2 as "B0", :3 as "B1"
type(strict)) order by JSON_QUERY("DATA", '$.s[*].min()') asc nulls
first, JSON_QUERY("DATA", '$.r[*].min()') asc nulls first fetch next 10
rows only

Plan hash value: 2462396944

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name                        | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                             |      1 |        |       |     3 (100)|          |     10 |00:00:00.98 |   98194 |       |       |          |
|*  1 |  COUNT STOPKEY                          |                             |      1 |        |       |            |          |     10 |00:00:00.98 |   98194 |       |       |          |
|   2 |   VIEW                                  |                             |      1 |      1 | 18314 |     3  (34)| 00:00:01 |     10 |00:00:00.98 |   98194 |       |       |          |
|*  3 |    SORT ORDER BY STOPKEY                |                             |      1 |      1 |   179 |     3  (34)| 00:00:01 |     10 |00:00:00.98 |   98194 |  4096 |  4096 | 4096  (0)|
|*  4 |     FILTER                              |                             |      1 |        |       |            |          |    166K|00:00:00.32 |   98194 |       |       |          |
|   5 |      TABLE ACCESS BY INDEX ROWID BATCHED| oneMillion                  |      1 |      1 |   179 |     2   (0)| 00:00:01 |    166K|00:00:00.30 |   98194 |       |       |          |
|   6 |       HASH UNIQUE                       |                             |      1 |      1 |   179 |            |          |    166K|00:00:00.11 |    2048 |   772K|   772K|          |
|*  7 |        INDEX RANGE SCAN (MULTI VALUE)   | $ora:oneMillion.e_1_s_1_r_1 |      1 |      1 |       |     2   (0)| 00:00:01 |    166K|00:00:00.08 |    2048 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$2
   2 - SEL$1 / "from$_subquery$_002"@"SEL$2"
   3 - SEL$1
   5 - SEL$1 / "oneMillion"@"SEL$1"
   7 - SEL$1 / "oneMillion"@"SEL$1"

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('23.1.0')
      DB_VERSION('23.1.0')
      OPT_PARAM('_fix_control' '20648883:0 26552730:1 27175987:0 29972495:0 22387320:0 30195773:0 31945701:1 32108311:1 33659818:3 34092979:1 35495824:1 33792497:1 36554842:1 36283175:1
              31720959:1 36004220:1 36635255:1 36675198:1 36868551:1 37400112:1 37346200:0 37626161:1')
      FIRST_ROWS(10)
      FORCE_XML_QUERY_REWRITE
      FORCE_JSON_TABLE_TRANSFORM
      XML_DML_RWT_STMT
      XMLINDEX_REWRITE
      XMLINDEX_REWRITE_IN_SELECT
      NO_COST_XML_QUERY_REWRITE
      OUTLINE_LEAF(@"SEL$1")
      OUTLINE_LEAF(@"SEL$2")
      NO_ACCESS(@"SEL$2" "from$_subquery$_002"@"SEL$2")
      INDEX_RS_ASC(@"SEL$1" "oneMillion"@"SEL$1" "$ora:oneMillion.e_1_s_1_r_1")
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$1" "oneMillion"@"SEL$1")
      USE_HASH_AGGREGATION(@"SEL$1" UNIQUE)
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=10)
   3 - filter(ROWNUM<=10)
   4 - filter(HEXTORAW('04')>SYS_CONS_ANY_SCALAR(:3, 3))
   7 - access("oneMillion"."SYS_NC00005$"=SYS_CONS_ANY_SCALAR(:2, 3) AND "oneMillion"."SYS_NC00007$">SYS_CONS_ANY_SCALAR(:3, 3) AND "oneMillion"."SYS_NC00007$"<HEXTORAW('04'))
       filter(("oneMillion"."SYS_NC00007$"<HEXTORAW('04') AND "oneMillion"."SYS_NC00007$">SYS_CONS_ANY_SCALAR(:3, 3)))

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "from$_subquery$_002"."JSON_PATCH("DATA",:1PROJECT)"[JSON,32600], "from$_subquery$_002"."RAWTOHEX("RESID")"[VARCHAR2,4000], "from$_subquery$_002"."ETAG"[RAW,16]
   2 - "from$_subquery$_002"."JSON_PATCH("DATA",:1PROJECT)"[JSON,32600], "from$_subquery$_002"."RAWTOHEX("RESID")"[VARCHAR2,4000], "from$_subquery$_002"."ETAG"[RAW,16]
   3 - (#keys=2) JSON_VALUE( /*+ QJSNMD5_TC_JCMP_JV */ JSON_QUERY("DATA" /*+ LOB_BY_VALUE */  FORMAT OSON , '$.s[*].min()' RETURNING JSON WITHOUT ARRAY WRAPPER NULL ON ERROR
       TYPE(LAX) ) FORMAT OSON , '$' RETURNING ANY ORA_RAWCOMPARE(32767) ERROR ON ERROR TYPE(LAX) )[32767], JSON_VALUE( /*+ QJSNMD5_TC_JCMP_JV */ JSON_QUERY("DATA" /*+ LOB_BY_VALUE */
       FORMAT OSON , '$.r[*].min()' RETURNING JSON WITHOUT ARRAY WRAPPER NULL ON ERROR TYPE(LAX

December 16, 2025

Managed Database vs. Kubernetes: Taking Back Control of Your Cloud Costs and Agility

Enterprises have spent years modernizing applications for the cloud, yet databases often remain a holdout. Many teams turn to managed database services for convenience, only to find that hidden markups, unpredictable scaling fees, and vendor lock-in erode the financial and operational benefits cloud adoption was meant to deliver. Pressure to control spend, meet evolving compliance […]

December 15, 2025

Many-to-One: Stronger Relationship Design with MongoDB

In a relational database, a one-to-many relationship is typically implemented with two tables. The "one" side is the parent and has a primary key that is indexed to guarantee uniqueness. The "many" side is the child and references the parent’s primary key with a foreign key. An index is often added to the foreign key as well, so that operations on the parent can efficiently locate its child rows.

While this design seems straightforward and ideal from a database administrator’s point of view—where data integrity is the primary concern—it can surprise developers. This normalized data model does not account for access patterns or cardinalities: the same structure is used whether the "many" side contains millions of rows and keeps growing or only a few items.

MongoDB takes a different approach: its data model is optimized for a specific application, based on known access patterns and cardinalities. In a document model, a one-to-many relationship can be implemented either as multiple documents linked by references or as an embedded array or subdocument within a single document. In both cases, you can choose whether to embed or reference from the parent (the "one" side) or from the child (the "many" side).

An example

I use an HR Dataset with employees that I load in an "employees" collection. It has two million documents:

cd /var/tmp
## Download HR_Dataset from Kaggle (https://www.kaggle.com/datasets/kadirduran/hr-dataset)
## Unzip and import
curl -L -o hr-data-mnc.zip https://www.kaggle.com/api/v1/datasets/download/rohitgrewal/hr-data-mnc &&
unzip -o hr-data-mnc.zip &&
mongoimport -d mdb -c employees --type=csv --headerline --drop 'HR_Data_MNC_Data Science Lovers.csv'

Once imported, I connect with mongosh and update the performance rating to add random decimal digits, to recognize them better when comparing the results:

use mdb;

db.employees.updateMany( {}, [
  {
    $set: {
      Performance_Rating: {
        $add: [ { $toDouble: "$Performance_Rating" }, { $rand: {} } ]
      }
    }
  }
]);

This collection contains employees associated with a department name. I’ll add additional details about each department, such as a description, and then explore alternative models for this one-to-many relationship, where one department has many employees and each employee belongs to one department.

For each model, I'll look at the performance of the following query:

Let's identify the top 10 most outstanding active employees in the IT department and list their names along with their performance ratings.

Embed in the "many" side

The many-to-one relationship here is employee-to-department, as each employee has a department name:

  {  
    _id: ObjectId('693f1d61e235ef0960ae2b53'),  
    'Unnamed: 0': 22,  
    Employee_ID: 'EMP0000023',  
    Full_Name: 'James Valdez',  
    Department: 'R&D',  
    Job_Title: 'Research Scientist',  
    Hire_Date: '2017-10-30',  
    Location: 'East Scott, Mauritius',  
    Performance_Rating: 5.882713091486423,  
    Experience_Years: 7,  
    Status: 'Retired',  
    Work_Mode: 'On-site',  
    Salary_INR: 789283  
  }  

As we generally embed more than one field, I add a description and structure it as a sub-object:

db.employees.aggregate([  
  {  
    $addFields: {  
      Department: {  
        $switch: {  
          branches: [  
            { case: { $eq: ["$Department", "Finance"] }, then: { Name: "Finance", Description: "Manages the company’s budgets, expenses, and financial planning to ensure fiscal health." } },  
            { case: { $eq: ["$Department", "HR"] }, then: { Name: "HR", Description: "Handles recruitment, employee relations, and organizational development initiatives." } },  
            { case: { $eq: ["$Department", "IT"] }, then: { Name: "IT", Description: "Maintains technology infrastructure, software systems, and cybersecurity protections." } },  
            { case: { $eq: ["$Department", "Marketing"] }, then: { Name: "Marketing", Description: "Promotes the company’s products and services through strategic campaigns and market research." } },  
            { case: { $eq: ["$Department", "Operations"] }, then: { Name: "Operations", Description: "Oversees daily business activities, logistics, and process optimization for efficiency." } },  
            { case: { $eq: ["$Department", "R&D"] }, then: { Name: "R&D", Description: "Researches and develops innovative products and services to support future growth." } },  
            { case: { $eq: ["$Department", "Sales"] }, then: { Name: "Sales", Description: "Builds customer relationships and drives revenue through product and service sales." } }  
          ],  
          default: { Name: "$Department", Description: "No description available" }  
        }  
      }  
    }  
  },  
  {  
    $merge: {  
      into: "employees",   // same collection  
      whenMatched: "merge", // update existing docs  
      whenNotMatched: "fail"  
    }  
  }  
])  

The result for the same employee is:

  {
    _id: ObjectId('693f1d61e235ef0960ae2b53'),
    'Unnamed: 0': 22,
    Employee_ID: 'EMP0000023',
    Full_Name: 'James Valdez',
    Department: {
      Name: 'R&D',
      Description: 'Researches and develops innovative products and services to support future growth.'
    },
    Job_Title: 'Research Scientist',
    Hire_Date: '2017-10-30',
    Location: 'East Scott, Mauritius',
    Performance_Rating: 5.882713091486423,
    Experience_Years: 7,
    Status: 'Retired',
    Work_Mode: 'On-site',
    Salary_INR: 789283
  }

I want to retrieve the top 10 best-performing active employees from the IT department and display their names and performance ratings:

db.employees.find( 
 { "Status": "Active", "Department.Name": "IT" },
 { _id: 0, "Full_Name": 1, "Performance_Rating": 1 }
).sort({ "Performance_Rating": -1 }).limit(10)

// result:

[
  { Full_Name: 'Stuart Lopez', Performance_Rating: 5.999973276392604 },
  { Full_Name: 'Mr. Ethan Morton', Performance_Rating: 5.9999561502903065 },
  { Full_Name: 'Lee White', Performance_Rating: 5.999935393136708 },
  { Full_Name: 'Amber Coleman', Performance_Rating: 5.999919949194189 },
  { Full_Name: 'Eugene Brown', Performance_Rating: 5.999917240114123 },
  { Full_Name: 'Nicole Edwards', Performance_Rating: 5.999914413630196 },
  { Full_Name: 'Erika Stewart', Performance_Rating: 5.999902351452448 },
  { Full_Name: 'Jenna King', Performance_Rating: 5.999896490219257 },
  { Full_Name: 'Douglas Hill', Performance_Rating: 5.999886177014563 },
  { Full_Name: 'Richard Gonzalez', Performance_Rating:  5.999879794558417 }
]

Since I have no index, it reads all documents, which takes 1.3 seconds:

x=db.employees.find( 
 { "Status": "Active", "Department.Name": "IT" },
 { _id: 0, "Full_Name": 1, "Performance_Rating": 1 }
).sort({ "Performance_Rating": -1 }).limit(10).explain("executionStats");

printjson({  
  nReturned:           x.executionStats.nReturned,  
  executionTimeMillis: x.executionStats.executionTimeMillis,
  totalKeysExamined:   x.executionStats.totalKeysExamined,
  totalDocsExamined:   x.executionStats.totalDocsExamined,
});

// Execution statistics:

{
  nReturned: 10,
  executionTimeMillis: 1367,
  totalKeysExamined: 0,
  totalDocsExamined: 2000000
}

One benefit of embedding on the "many" side is that you can use all fields to create a compound index. For instance, I can build an index that supports my filter, sort, and projection needs:

db.employees.createIndex({
 "Status":1,               // for equality predicate on employee
 "Department.Name":1,      // for equality predicate on department
 "Performance_Rating": 1,  // for sort and limit (pagination)
 "Full_Name": 1,           // for projection (covering index)
})

The query now instantly retrieves the top 10 documents from the index:

x=db.employees.find( 
 { "Status": "Active", "Department.Name": "IT" },
 { _id: 0, "Full_Name": 1, "Performance_Rating": 1 }
).sort({ "Performance_Rating": -1 }).limit(10).explain("executionStats")

printjson({  
  nReturned:           x.executionStats.nReturned,  
  executionTimeMillis: x.executionStats.executionTimeMillis,
  totalKeysExamined:   x.executionStats.totalKeysExamined,
  totalDocsExamined:   x.executionStats.totalDocsExamined,
});

// Execution statistics:

{
  nReturned: 10,
  executionTimeMillis: 0,
  totalKeysExamined: 10,
  totalDocsExamined: 0
}

Embedding data on the "many" side helps create optimized indexes and improves response times but involves duplicating data from the "one" side into the "many" side. In our example, each employee records the department name and description. This leads to two main effects:

  • Increased storage usage, which can be reduced through compression. You might also opt not to embed all fields—such as storing only the department name, which is often queried with the employee, and keeping the description in a separate "departments" collection.
  • Any update to the department information must be reflected across all associated employee records. This is generally manageable for infrequent changes, like name updates. Often, a department name change coincides with broader reorganizations, requiring employee record updates anyway (a sketch of such an update follows this list).
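
For example, here is a minimal sketch of such a propagation, reusing the field names from this post (the new description text is made up):

// Propagate an updated description to every employee document that embeds the IT department
db.employees.updateMany(
 { "Department.Name": "IT" },
 { $set: { "Department.Description": "Maintains technology infrastructure, software systems, cybersecurity protections, and internal tooling." } }
);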

Reference and lookup from the "many" side

To minimize duplication, I create a separate "departments" collection using the unique department names and descriptions I embedded, ensuring each department's information is stored only once:

db.employees.aggregate([  
  {  
    $group: {  
      _id: "$Department.Name",  
      Name: { $first: "$Department.Name" },  
      Description: { $first: "$Department.Description" }  
    }  
  },  
  {  
    $project: { _id: 0, Name: 1, Description: 1 }
  },  
  {  
    $merge: {  
      into: "departments",  
      whenMatched: "keepExisting",  
      whenNotMatched: "insert"  
    }  
  }  
]);  

You might be surprised by the speed of this aggregation pipeline. Instead of scanning all documents, MongoDB efficiently retrieved the unique departments by searching for distinct values in the index (a loose index scan).
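
To check this yourself, here is a sketch that reuses the explain() pattern from this post: it explains the same de-duplicating $group (without the $merge stage) and reports whether the winning plan contains a DISTINCT_SCAN stage, which is the loose index scan mentioned above.

x=db.employees.aggregate([
  { $group: {
      _id: "$Department.Name",
      Name: { $first: "$Department.Name" },
      Description: { $first: "$Department.Description" }
  } }
]).explain("executionStats");
// true if the loose index scan (DISTINCT_SCAN) was used, false if documents were scanned
print(JSON.stringify(x).includes("DISTINCT_SCAN"));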

Then, I can substitute the "Department" sub-object with a reference to the "_id" from the "departments" collection:

db.employees.aggregate([  
  {  
    $lookup: {  
      from: "departments",  
      localField: "Department.Name",  
      foreignField: "Name",  
      as: "deptInfo"  
    }  
  },  
  {  
    $addFields: {  
      Department: { $arrayElemAt: ["$deptInfo._id", 0] }
    }  
  },  
  { $project: { deptInfo: 0 } },  
  {  
    $merge: {  
      into: "employees",  
      whenMatched: "merge",  
      whenNotMatched: "fail"  
    }  
  }  
]);  

Here is the shape of an employee document with a single field for the department:

  {
    _id: ObjectId('693f1d61e235ef0960ae2b53'),
    'Unnamed: 0': 22,
    Employee_ID: 'EMP0000023',
    Full_Name: 'James Valdez',
    Department: ObjectId('693f2e38c2dd5ab4fbfd73b8'),
    Job_Title: 'Research Scientist',
    Hire_Date: '2017-10-30',
    Location: 'East Scott, Mauritius',
    Performance_Rating: 5.882713091486423,
    Experience_Years: 7,
    Status: 'Retired',
    Work_Mode: 'On-site',
    Salary_INR: 789283
  }

To find the top 10 highest-performing active employees in the IT department and display their names and ratings, I will join the departments collection using $lookup. Since $lookup incurs a cost, it's more efficient to filter the data beforehand. Therefore, I first apply $match to filter employees by status, then perform the $lookup with the "departments" collection from the reference, and filter further with the department name fetched from the foreign collection:

x=db.employees.aggregate([  
  { $match: { Status: "Active" } },  
  {  
    $lookup: {  
      from: "departments",  
      localField: "Department", 
      foreignField: "_id", 
      as: "deptInfo"  
    }  
  },  
  { $unwind: "$deptInfo" },  
  { $match: { "deptInfo.Name": "IT" } },  
  {  
    $project: {  
      _id: 0,  
      Full_Name: 1,  
      Performance_Rating: 1  
    }  
  },  
  { $sort: { Performance_Rating: -1 } },  
  { $limit: 10 }  
]).explain("executionStats")
print(x.stages[1])

// Execution plan for the lookup stage:

{
  '$lookup': {
    from: 'departments',
    as: 'deptInfo',
    localField: 'Department',
    foreignField: '_id',
    let: {},
    pipeline: [
      { '$match': { Name: { '$eq': 'IT' } } }
    ],
    unwinding: { preserveNullAndEmptyArrays: false }
  },
  totalDocsExamined: Long('1401558'),
  totalKeysExamined: Long('1401558'),
  collectionScans: Long('0'),
  indexesUsed: [ '_id_' ],
  nReturned: Long('421333'),
  executionTimeMillisEstimate: Long('94596')
}

I printed the statistics for the $lookup stage, which read 1,401,558 documents—one per active employee. Each access was quick, thanks to the index on "_id," but executing it a million times took over a minute. It returned only 421,333 documents because the department name filter was pushed down by the query planner from the subsequent $unwind and $match stages into the $lookup pipeline. The main issue remains reading a million identical documents.

If you have a many-to-one relationship and numerous documents on the many side, it's better to join from the application rather than using an aggregation pipeline lookup.

I retrieved the "_id" of the departments I am interested in:


var itDeptId = db.departments.findOne({ Name: "IT" })._id;

If the department names are unlikely to change, this can be run once and stored in the application's cache.
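
A minimal sketch of that idea in mongosh (the cache variable name is made up): load the department _ids once and reuse them for later queries.

// Build a small name-to-_id cache once instead of a per-query lookup
const deptIdByName = {};
db.departments.find({}, { _id: 1, Name: 1 }).forEach(d => { deptIdByName[d.Name] = d._id; });
// e.g. deptIdByName["IT"] can now be used to filter employees by reference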

Then I can access employees filtered by the reference '_id', so I will create an index for it:

db.employees.createIndex(  
  { Status: 1, Department: 1, Performance_Rating: -1, Full_Name: 1 }  
)  

Since I'm querying just a single collection, I don't require an aggregation pipeline:

x=db.employees.find(  
  {   
    Status: "Active",   
    Department: itDeptId   
  },  
  {   
    _id: 0,   
    Full_Name: 1,  
    Performance_Rating: 1  
  }  
).sort({ Performance_Rating: -1 }).limit(10).explain("executionStats");  

printjson({  
  nReturned:           x.executionStats.nReturned,  
  executionTimeMillis: x.executionStats.executionTimeMillis,
  totalKeysExamined:   x.executionStats.totalKeysExamined,
  totalDocsExamined:   x.executionStats.totalDocsExamined,
});

// Execution statistics:

{
  nReturned: 10,
  executionTimeMillis: 0,
  totalKeysExamined: 10,
  totalDocsExamined: 0
}

Finally, when the application accesses the lookup table first, I observe the same performance with a many-to-one reference as with embedding the single item into the "many" side.

Reference from the "many" side, but lookup from the "one" side

I can perform the same operation without that application-side step by using an aggregation pipeline that starts with the departments and joins the employees via a lookup. I use a lookup pipeline to add the filter for active employees:

x=db.departments.aggregate([  
  {  
    $match: { Name: "IT" } 
  },  
  {  
    $lookup: {  
      from: "employees",  
      let: { deptId: "$_id" },
      pipeline: [  
        {  
          $match: {  
            $expr: {  
              $and: [  
                { $eq: ["$Department", "$$deptId"] },  
                { $eq: ["$Status", "Active"] }
              ]  
            }  
          }  
        },  
        {  
          $sort: { Performance_Rating: -1 }  
        },  
        {  
          $limit: 10  
        },  
        {  
          $project: {  
            _id: 0,  
            Full_Name: 1,  
            Performance_Rating: 1  
          }  
        }  
      ],  
      as: "employees"  
    }  
  },  
  {  
    $project: {  
      _id: 0,  
      Name: 1,  
      employees: 1  
    }  
  }  
]).explain("executionStats")

print(x.stages[1])

// Execution plan for the lookup stage:

{
  '$lookup': {
    from: 'employees',
    as: 'employees',
    let: { deptId: '$_id' },
    pipeline: [
      {
        '$match': { '$expr': { '$and': [ [Object], [Object] ] } <... (truncated)

TLA+ modeling tips

Model minimalistically

Start from a tiny core, and always keep a working model as you extend. Your default should be omission. Add a component only when you can explain why leaving it out would not work. Most models are about a slice of behavior, not the whole system in full glory: e.g., leader election, repair, or reconfiguration. Cut entire layers and components if they do not affect that slice. Abstraction is the art of knowing what to cut. Deleting should spark joy.


Model specification, not implementation

Write declaratively. State what must hold, not how it is achieved. If your spec mirrors control flow, loops, or helper functions, you are simulating code. Cut it out. Every variable must earn its keep. Extra variables multiply the state space (model checking time) and hide bugs. Ask yourself repeatedly: can I derive this instead of storing it? For example, you do not need to maintain a WholeSet variable if you can define it as a state function of existing variables: WholeSet == provisionalItems \union nonProvisionalItems.


Review the model for illegal knowledge

Do a full read-through of your model and check what each process can really see. TLA+ makes it easy to read global state (or another process's state) that no real distributed process could ever observe atomically. This is one of the most common modeling errors. Make a dedicated pass to eliminate illegal global knowledge.


Check atomicity granularity

Push actions to be as fine-grained as correctness allows. Overly large atomic actions hide races and invalidate concurrency arguments. Fine-grained actions expose the real interleavings your protocol must tolerate. 


Think in guarded commands, not procedures 

Each action should express one logical step in guarded-command style. The guard should ideally define the meaning of the action. Put all enablement conditions in the guard. If the guard holds, the action may fire at any time in true event-driven style. This is why I now prefer writing TLA+ directly over PlusCal: TLA+ forces you to think in guarded-command actions, which is how distributed algorithms are meant to be designed. Yes, PlusCal is easier for developers to read, but it also nudges you toward sequential implementation-shaped thinking. And recently, with tools like Spectacle, sharing and visually exploring TLA+ specs got much easier.


Step back and ask what you forgot to model

There is no substitute for thinking hard about your system. TLA+ modeling is only there to help you think hard about your system, and cannot substitute thinking about it. Check that you incorporated all relevant aspects: failures, message reordering, repair, reconfiguration.


Write TypeOK invariants 

TLA+ is not typed, so you should state types explicitly and early by writing TypeOK invariants. A good TypeOK invariant provides executable documentation for your model. Writing one takes seconds and can save you many minutes of hunting runtime bugs through TLA+ counterexample logs.


Write as many invariants as you can

If a property matters, make it explicit as an invariant. Write them early. Expand them over time. Try to keep your invariants as tight as possible. Document your learnings about invariants and non-invariants. A TLA+ spec is a communication artifact. Write it for readers, not for the TLC model checker. Be explicit and boring for the sake of clarity.


Write progress properties

Safety invariants alone are not enough. Check that things eventually happen: requests complete, leaders emerge, and goals are accomplished. Many "correct" models may quietly do nothing forever. Checking progress properties catches paths that stall.


Be suspicious of success

A successful TLC run proves nothing unless the model explores meaningful behavior. Low coverage or tiny state spaces usually mean the model is over-constrained or wrong. Break the spec on purpose to check that it is actually doing real work, not passing in a vacuous or trivial way. Inject bugs deliberately: if your invariants do not fail, they are too weak. Test the spec by sabotaging it.


Optimize model checking efficiency last

Separate the model from the model checker. The spec should stand on its own. Using the cfg file, you can optimize for model checking by using appropriate configuration, constraints, bounds for counters, and symmetry terms.


You can find many examples and walkthroughs of TLA+ specifications on my blog.

There are many more in the TLA+ repo as well.

The Future of Radiation Safety is Lies, I Guess

Here’s a page from AEG Test (archive), a company which sells radiation detectors, talking about the safety of uranium glass. Right from the get-go it feels like LLM slop. “As a passionate collector of uranium glass,” the unattributed author begins, “I’ve often been asked: ‘Does handling these glowing antiques pose a health risk?’” It continues into SEO-friendly short paragraphs, each with a big header and bullet points. Here’s one:

Uranium glass emits low levels of alpha and beta radiation, detectable with a Geiger counter. However, most pieces register less than 10 microsieverts per hour (μSv/h), which is:

  • Far below the 1,000 μSv annual limit recommended for public exposure.
  • Comparable to natural background radiation from rocks, soil, or even bananas (which contain potassium-40, a mildly radioactive isotope).

First, uranium glass emits gamma rays too, not just alpha and beta particles. More importantly, these numbers are hot nonsense.

First, the Sievert is a measure of dose, not source intensity; saying a piece emits 10 µSv/hour is like saying a microwave oven emits sixty degrees of warming per minute. It depends on whether the food is in the microwave or outside it, how much food there is, whether it was shielded, and so on. The dose from a uranium glass cup depends on whether you’re chipping off bits and eating them, cuddling it every night, or keeping it in a display case.

10 μSv/hour is 87,600 μSv/year. How is that “far below” 1,000 μSv/year? If you’ve got a uranium glass candy dish on your desk that delivers 10 µSv/hour to your body, and you keep that up for eight hours a day, you’re looking at 29,200 µSv (29.2 mSv) per year. That’s over the DHS emergency guidelines for public relocation, and about half of the NRC dose limit for radiation workers.

The other comparisons are also bonkers: 10 μSv/hour is not comparable to typical background radiation: in the US, that’s roughly 3,100 μSv/year, or 0.35 μSv/hour. Nor is it on par with a banana: the banana equivalent dose is very roughly 0.1 μSv. Nobody is eating 100 bananas an hour.

The best source I know of for uranium glass safety (and which, now that we’re drowning in LLM slop, is surprisingly hard to actually track down) is the Nuclear Regulatory Commission’s NUREG-1717. The section on glassware begins on page 499 of the PDF, labeled 3-217. You should read their methods for estimating dosages, as exposure is highly dependent on uranium density, exposure vector, acidity, distance, etc. The NRC estimated a negligible 1.8 × 10⁻⁵ mSv/year (0.018 μSv/year) from drinking glass leachate, up to 0.02 mSv/year (20 μSv/year) from external exposure to drinking glasses (e.g. washing dishes, being in the same room, etc.), and 0.002 mSv/year (2 μSv/year) from occasional handling, admiring, and generally being around four pieces of decorative glassware scattered through a home. These exposures are almost certainly fine.

Please stop asking large language models to tell you or others about radiation safety. Ask a physicist or regulator instead.

Perform Point-In-Time-Recovery (PITR) in Valkey/Redis

This article explains how you can perform Point-in-Time Recovery (PITR) in Valkey/Redis. Requirements: To perform PITR, you need to have append-only logging enabled. By default, AOF in Valkey/Redis only records the operations that have been executed against the instance, not when they were executed. For that, we need to enable the aof-timestamp-enabled parameter. So your Valkey/Redis […]

December 12, 2025

No, MongoDB Does Not Mean Skipping Design

With MongoDB, domain-driven design empowers developers to build robust systems by aligning the data model with business logic and access patterns.

Too often, developers are unfairly accused of being careless about data integrity. The logic goes: Without the rigid structure of an SQL database, developers will code impulsively, skipping formal design and viewing it as an obstacle rather than a vital step in building reliable systems.

Because of this misperception, many database administrators (DBAs) believe that the only way to guarantee data quality is to use relational databases. They think that using a document database like MongoDB means they can’t be sure data modeling will be done correctly.

Therefore, DBAs are compelled to predefine and deploy schemas in their database of choice before any application can persist or share data. This also implies that any evolution in the application requires DBAs to validate and run a migration script before the new release reaches users.

However, developers care just as much about data integrity as DBAs do. They put significant effort into the application’s domain model and avoid weakening it by mapping it to a normalized data structure that does not reflect application use cases.

Different Database Models, Different Data Models

Relational and document databases take different approaches to data modeling.

In a document database, you still design your data model. What changes is where and how the design happens, aligning closely with the domain model and the application’s access patterns. This is especially true in teams practicing domain‑driven design (DDD), where developers invest time in understanding domain objects, relationships and usage patterns.

The data model evolves alongside the development process — brainstorming ideas, prototyping, releasing a minimum viable product (MVP) for early feedback and iterating toward a stable, production-ready application.

Relational modeling often starts with a normalized design created before the application is fully understood. This model must then serve diverse future workloads and unpredictable data distributions. For example, a database schema designed for academic software could be used by both primary schools and large universities. This illustrates the strength of relational databases: the logical model exposed to applications is the same, even when the workloads differ greatly.

Document modeling, by contrast, is tailored to specific application usage. Instead of translating the domain model into normalized tables, which adds abstraction and hides performance optimizations, MongoDB stores aggregates directly in the way they appear in your code and business logic. Documents reflect the business transactions and are stored as contiguous blocks on disk, keeping the physical model aligned with the domain schema and optimized for access patterns.

Here are some other ways these two models compare.

Document Modeling Handles Relationships

Relational databases are often thought to excel at “strong relationships” between data, but this is partly because of a misunderstanding of the name — relations refers to mathematical sets of tuples (rows), not to the connections between them, which are relationships. Normalization actually loosens strong relationships, decoupling entities that are later matched at query time via joins.

In entity-relationship diagrams (ERDs), relationships are shown as simple one-to-one or one-to-many links, implemented via primary and foreign keys. ERDs don’t capture characteristics such as the direction of navigation or ownership between entities. Many-to-many relationships are modeled through join tables, which split them into two one-to-many relationships. The only property of a relationship in an ERD is to distinguish one-to-one (direct line) from one-to-many (crow’s foot), and the data model is the same whether the “many” is a few or billions.

Unified Modeling Language (UML) class diagrams in object-oriented design, by comparison, are richer: They have a navigation direction and distinguish between association, aggregation, composition and inheritance. In MongoDB, these concepts map naturally:

  • Composition (for instance, an order and its order lines) often appears as embedded documents, sharing a life cycle and preventing partial deletion.
  • Aggregation (a customer and their orders) uses references when life cycles differ or when the parent ownership is shared.
  • Inheritance can be represented via polymorphism, a concept ERDs don’t directly capture and instead work around with nullable columns. (A sketch of the first two mappings follows this list.)
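
As a rough sketch of the first two mappings (collection and field names are made up for this illustration), order lines are embedded because they share the order's life cycle, while the customer is referenced because it lives independently:

// Composition: embedded order lines. Aggregation: a reference to the customer.
const customer = db.customers.insertOne({ name: "Acme Corp" });   // "one" side, own life cycle
db.orders.insertOne({
  customerId: customer.insertedId,   // reference: customer and order life cycles differ
  status: "placed",
  items: [                           // embedded: order lines are deleted with the order
    { sku: "A-100", qty: 2, price: 15 },
    { sku: "B-230", qty: 1, price: 40 }
  ]
});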

Domain models in object-oriented applications and MongoDB documents better mirror real-world relationships. In relational databases, schemas are rigid for entities, while relationships are resolved at runtime with joins — more like a data scientist discovering correlations during analysis. SQL’s foreign keys prevent orphaned rows, but they aren’t explicitly referenced when writing SQL queries. Each query can define a different relationship.

Schema Validation Protects Data Integrity

MongoDB is schema-flexible, not schema-less. This feature is especially valuable for early-stage projects — such as brainstorming, prototyping, or building an MVP — because you don’t need to execute Data Definition Language (DDL) statements before writing data. The schema resides within the application code, and documents are stored as-is, without additional validation at first, as consistency is ensured by the same application that writes and reads them.

As the model matures, you can define schema validation rules directly in the database — field requirements, data types, and accepted ranges. You don’t need to declare every field immediately. You add validation as the schema matures, becomes stable, and is shared. This ensures consistent structure when multiple components depend on the same fields, or when indexing, since only the fields used by the application are helpful in the index.

Schema flexibility boosts development speed at every stage of your application. Early in prototyping, you can add fields freely without worrying about immediate validation. Later, with schema validation in place, you can rely on the database to enforce data integrity, reducing the need to write and maintain code that checks incoming data.

Schema validation can also enforce physical bounds. If you embed order items in the order document, you might validate that the array does not exceed a certain threshold. Instead of failing outright — like SQL’s check constraints (which often cause unhandled application errors) — MongoDB can log a warning, alerting the team without disrupting user operations. This enables the application to stay available while still flagging potential anomalies or necessary evolutions.
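
As a minimal sketch of such a rule (the "orders" collection and its fields are assumptions for this example), a validator can require a few stable fields, bound the embedded array, and use validationAction: "warn" so violations are logged rather than rejected:

// Require stable fields, cap the embedded items array, and log violations instead of failing writes
db.runCommand({
  collMod: "orders",
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: [ "customerId", "status", "items" ],
      properties: {
        customerId: { bsonType: "objectId" },
        status: { enum: [ "draft", "placed", "shipped", "cancelled" ] },
        items: { bsonType: "array", maxItems: 1000 }
      }
    }
  },
  validationLevel: "moderate",   // do not validate updates to existing non-conforming documents
  validationAction: "warn"       // log a warning rather than rejecting the write
});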

Application Logic vs. Foreign Keys

In SQL databases, foreign keys are constraints, not definitions of relationships; the relationships themselves are expressed at query time. SQL joins define relationships by listing columns as filter predicates, and foreign keys are not referenced in the JOIN clause. Foreign keys prevent certain anomalies that arise from normalization, such as orphaned child rows, and can cascade deletes to dependent rows.

MongoDB takes a different approach: By embedding tightly coupled entities, you solve major integrity concerns upfront. For example, embedding order lines inside their order document means orphaned line items are impossible by design. Referential relationships are handled by application logic, often reading from stable collections (lists of values) before embedding their values into a document.

Because MongoDB models are built for known access patterns and life cycles, referential integrity is maintained through business rules rather than enforced generically. In practice, this better reflects real-world processes, where updates or deletions must follow specific conditions (for example, a price drop might apply to ongoing orders, but a price increase might not).
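
One hedged illustration of such application-level rules follows; the products collection, its price field and the "open" status check are hypothetical:

```python
# Hypothetical sketch: the application enforces the reference itself by reading
# the stable "products" collection and embedding a copy of the values it needs.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["shop"]

def add_line(order_id: str, sku: str, qty: int) -> None:
    product = db.products.find_one({"_id": sku})
    if product is None:
        # application-level referential check instead of a foreign key
        raise ValueError(f"unknown product {sku}")
    db.orders.update_one(
        {"_id": order_id, "status": "open"},  # business rule: only open orders change
        {"$push": {"lines": {
            "sku": sku,
            "qty": qty,
            "unitPrice": product["price"],    # embedded copy, unaffected by later price changes
        }}},
    )
```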

In relational databases, the schema is application-agnostic, so you must protect against any possible Data Manipulation Language (DML) modifications, not just those that result from valid business transactions. Doing so in the application would require extra locks or higher isolation levels, so it’s often more efficient to declare foreign keys for the database to enforce.

However, when domain use cases are well understood, protections are required for only a few cases and can be integrated into the business logic itself. For example, a product will never be deleted while ongoing transactions are using it. The business workflow often marks the product as unavailable long before it is physically deleted, and transactions are short-lived enough that there’s no overlap, preventing orphans without additional checks.

In domain‑driven models, where the schema is designed around specific application use cases, integrity can be fully managed by the application team alongside the business rules. While additional database verification may serve as a safeguard, it could limit scalability, particularly with sharding, and limit flexibility. An alternative is to run a periodic aggregation pipeline that asynchronously detects anomalies.
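
A minimal sketch of such a periodic check, assuming the hypothetical orders and customers collections from the earlier examples; $lookup joins the two collections and an empty result marks a reference that no longer resolves:

```python
# Hypothetical sketch: an asynchronous job that reports orders whose customerId
# no longer matches a customer document. Run it on a schedule, not per write.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["shop"]

orphans = db.orders.aggregate([
    {"$lookup": {
        "from": "customers",
        "localField": "customerId",
        "foreignField": "_id",
        "as": "customer",
    }},
    {"$match": {"customer": {"$size": 0}}},    # no matching customer found
    {"$project": {"_id": 1, "customerId": 1}},
])
for doc in orphans:
    print("orphaned order:", doc["_id"], "missing customer:", doc["customerId"])
```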

Next Time You Hear That Myth

MongoDB does not mean “no design.” It means integrating database design with application design — embedding, referencing, schema validation and application‑level integrity checks to reflect actual domain semantics.

This approach keeps data modeling a first‑class concern for developers, aligning directly with the way domain objects are represented in code. The database structure evolves alongside the application, and integrity is enforced in the same language and pipelines that deliver the application itself.

In environments where DBAs only see the database model and SQL operations, foreign keys may appear indispensable. But in a DevOps workflow where the same team handles both the database and the application, schema rules can be implemented first in code and refined in the database as specifications stabilize. This avoids maintaining two separate models and the associated migration overhead, enabling faster, iterative releases while preserving integrity.

mlrd: DynamoDB-Compatible API on MySQL

Introducing mlrd (“mallard”) to the world: a DynamoDB-compatible API on MySQL. Crazy, but it works really well and I’m confident it will help a lot of businesses save a lot of money. Here’s why.

Support for Crunchy Hardened PostgreSQL Ends Soon: Don’t Get Caught Off Guard.

Support shifts for hardened builds draw quick attention in regulated sectors, so when discussions surface about the future of a distribution, teams responsible for compliance and continuity take notice. Recent community discussions and rumors suggest that Crunchy Hardened PostgreSQL may reach end of support sometime around April 2026, and, while this has not been formally […]

December 11, 2025

Introducing Amazon Aurora powers for Kiro

In this post, we show how you can turn your ideas into full-stack applications with Kiro powers for Aurora. We explore how a new innovation, Kiro powers, can help you use Amazon Aurora best practices built into your development workflow, automatically implementing configurations and optimizations that make sure your database layer is production-ready from day one.

Sysbench for MySQL 5.6 through 9.5 on a 2-socket, 24-core server

This has results for the sysbench benchmark on a 2-socket, 24-core server. A post with results from 8-core and 32-core servers is here.

tl;dr

  • old bad news - there were many large regressions from 5.6 to 5.7 to 8.0
  • new bad news - there are some new regressions after MySQL 8.0
Normally I claim that there are few regressions after MySQL 8.0 but that isn't the case here. I also see regressions after MySQL 8.0 on the other larger servers that I use, but that topic will be explained in another post.

Builds, configuration and hardware

I compiled MySQL from source for versions 5.6.51, 5.7.44, 8.0.43, 8.0.44, 8.4.6, 8.4.7, 9.4.0 and 9.5.0.

The server is a SuperMicro SuperWorkstation 7049A-T with 2 sockets, 12 cores/socket, 64G RAM, one m.2 SSD (2TB,  ext4 with discard enabled). The OS is Ubuntu 24.04. The CPUs are Intel Xeon Silver 4214R CPU @ 2.40GHz.

The config files are here for 5.6, 5.7, 8.0, 8.4 and 9.x.

Benchmark

I used sysbench and my usage is explained here. I now run 32 of the 42 microbenchmarks listed in that blog post. Most test only one type of SQL statement. Benchmarks are run with the database cached by InnoDB.

The read-heavy microbenchmarks are run for 600 seconds and the write-heavy for 900 seconds. The benchmark is run with 16 clients and 8 tables with 10M rows per table. 

The purpose is to search for regressions from new CPU overhead and mutex contention. The workload is cached -- there should be no read IO but will be some write IO.

Results

The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

I provide charts below with relative QPS. The relative QPS is the following:
(QPS for some version) / (QPS for base version)
When the relative QPS is > 1 then some version is faster than the base version.  When it is < 1 then there might be a regression. When the relative QPS is 1.2 then some version is about 20% faster than the base version.
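
As a small worked example of that arithmetic (the QPS numbers below are made up, not measurements from this post):

```python
# Made-up numbers to illustrate the relative QPS arithmetic; not measurements.
def relative_qps(qps_version: float, qps_base: float) -> float:
    return qps_version / qps_base

print(relative_qps(12000, 10000))  # 1.20 -> about 20% faster than the base version
print(relative_qps(8500, 10000))   # 0.85 -> a possible 15% regression
```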

I present two sets of charts. One set uses MySQL 5.6.51 as the base version which is my standard practice. The other uses MySQL 8.0.44 as the base version to make it easier to see how performance has changed since MySQL 8.0.

Values from iostat and vmstat divided by QPS are here. These can help to explain why something is faster or slower because they show how much HW is used per request, including CPU overhead per operation (cpu/o) and context switches per operation (cs/o), which are often a proxy for mutex contention.
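
A rough sketch of that normalization (the counter names and numbers below are made up for illustration; the real reports come from the author's scripts):

```python
# Made-up example of dividing vmstat averages by QPS to get per-operation costs.
# The field names (us, sy, cs) mirror vmstat columns; the numbers are not real.
def per_operation(vmstat_avg: dict, qps: float) -> dict:
    return {
        "cpu/o": (vmstat_avg["us"] + vmstat_avg["sy"]) / qps,  # CPU per operation
        "cs/o": vmstat_avg["cs"] / qps,                        # context switches per operation
    }

print(per_operation({"us": 38.0, "sy": 7.0, "cs": 210000.0}, qps=52000.0))
```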

The spreadsheet and charts are here and in some cases are easier to read than the charts below. Converting the Google Sheets charts to PNG files does the wrong thing for some of the test names listed at the bottom of the charts below.

Results: point queries

Summary
  • from 5.6 to 5.7 there are big improvements for 5 tests, no changes for 2 tests and small regressions for 2 tests
  • from 5.7 to 8.0 there are big regressions for all tests
  • from 8.0 to 9.5 performance is stable
  • for 9.5 the common result is ~20% less throughput vs 5.6
Using vmstat from the hot-points test to understand the performance changes (see here)
  • context switch rate (cs/o) is stable, mutex contention hasn't changed
  • CPU per query (cpu/o) drops by 35% from 5.6 to 5.7
  • CPU per query (cpu/o) grows by 23% from 5.7 to 8.0
  • CPU per query (cpu/o) is stable from 8.0 through 9.5
Results: range queries without aggregation

Summary
  • from 5.6 to 5.7 throughput drops by 10% to 15%
  • from 5.7 to 8.0 throughput drops by about 15%
  • from 8.0 to 9.5 throughput is stable
  • for 9.5 the common result is ~30% less throughput vs 5.6
Using vmstat from the scan test to understand the performance changes (see here)
  • context switch rates are low and can be ignored
  • CPU per query (cpu/o) grows by 11% from 5.6 to 5.7
  • CPU per query (cpu/o) grows by 15% from 5.7 to 8.0
  • CPU per query (cpu/o) is stable from 8.0 through 9.5
Results: range queries with aggregation

Summary
  • from 5.6 to 5.7 there are big improvements for 2 tests, no change for 1 test and regressions for 5 tests
  • from 5.7 to 8.0 there are regressions for all tests
  • from 8.0 through 9.5 performance is stable
  • for 9.5 the common result is ~25% less throughput vs 5.6
Using vmstat from the read-only-count test to understand the performance changes (see here)
  • context switch rates are similar
  • CPU per query (cpu/o) grows by 16% from 5.6 to 5.7
  • CPU per query (cpu/o) grows by 15% from 5.7 to 8.0
  • CPU per query (cpu/o) is stable from 8.0 through 9.5
Results: writes

Summary
  • from 5.6 to 5.7 there are big improvements for 9 tests and no changes for 1 test
  • from 5.7 to 8.0 there are regressions for all tests
  • from 8.4 to 9.x there are regressions for 8 tests and no change for 2 tests
  • for 9.5 vs 5.6: 5 are slower in 9.5, 3 are similar and 2 are faster in 9.5
Using vmstat from the insert test to understand the performance changes (see here)
  • in 5.7, CPU per insert drops by 30% while context switch rates are stable vs 5.6
  • in 8.0, CPU per insert grows by 36% while context switch rates are stable vs 5.7
  • in 9.5, CPU per insert grows by 3% while context switch rates grow by 23% vs 8.4
The first chart doesn't truncate the y-axis, which shows the big improvement for update-index but makes it hard to see the smaller changes on the other tests.
The second chart truncates the y-axis to make it easier to see the changes on tests other than update-index.


A Christmas Carol of Two Databases

Being a Tale of Databases, Binary Logs, WAL Files, and the Redemption of Ebenezer Scrooge, DBA Part the First — In Which We Meet Ebenezer Scrooge, Database Administrator Extraordinary It was a cold, dark, and CPU-bound night. The wind blew fierce across the datacenter racks, and the disks did rattle in their trays like bones. […]

December 10, 2025

The insert benchmark on a small server : MySQL 5.6 through 9.5

This has results for MySQL versions 5.6 through 9.5 with the Insert Benchmark on a small server. Results for Postgres on the same hardware are here.

tl;dr

  • good news - there are no large regressions after MySQL 8.0
  • bad news - there are many large regressions from 5.6 to 5.7 to 8.0

Builds, configuration and hardware

I compiled MySQL from source for versions 5.6.51, 5.7.44, 8.0.43, 8.0.44, 8.4.6, 8.4.7, 9.4.0 and 9.5.0.

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM. Storage is one NVMe device for the database using ext-4 with discard enabled. The OS is Ubuntu 24.04. More details on it are here.

The config files are here: 5.6.51, 5.7.44, 8.0.4x, 8.4.x and 9.x.0.

The Benchmark

The benchmark is explained here and is run with 1 client and 1 table. I repeated it with two workloads:
  • cached - the values for X, Y, Z are 30M, 40M, 10M
  • IO-bound - the values for X, Y, Z are 800M, 4M, 1M
The point query (qp100, qp500, qp1000) and range query (qr100, qr500, qr1000) steps are run for 1800 seconds each.

The benchmark steps are:

  • l.i0
    • insert X rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts Y rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and Z rows are inserted and deleted per table.
    • Wait for S seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of S is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested. This step is frequently not IO-bound for the IO-bound workload.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
Results: overview

The performance reports are here for:
The summary sections from the performance reports have 3 tables. The first shows absolute throughput for each DBMS tested and benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from MySQL 5.6.51.

When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with yellow for regressions and blue for improvements.

Results: cached

Performance summaries are here for all versions and latest versions. I focus on the latest versions.

Below I use colors to highlight the relative QPS values with yellow for regressions and blue for improvements. There are large regressions from new CPU overheads.
  • the load step (l.i0) is almost 2X faster for 5.6.51 vs 8.4.7 (relative QPS is 0.59)
  • the create index step (l.x) is more than 2X faster for 8.4.7 vs 5.6.51
  • the first write-only step (l.i1) has similar throughput for 5.6.51 and 8.4.7
  • the second write-only step (l.i2) is 14% slower in 8.4.7 vs 5.6.51
  • the range-query steps (qr*) are ~30% slower in 8.4.7 vs 5.6.51
  • the point-query steps (qp*) are 38% slower in 8.4.7 vs 5.6.51

dbms    l.i0  l.x   l.i1  l.i2  qr100  qp100  qr500  qp500  qr1000  qp1000
5.6.51  1.00  1.00  1.00  1.00  1.00   1.00   1.00   1.00   1.00    1.00
5.7.44  0.91  1.53  1.16  1.09  0.83   0.83   0.83   0.84   0.83    0.83
8.0.44  0.60  2.42  1.05  0.87  0.69   0.62   0.70   0.62   0.70    0.62
8.4.7   0.59  2.54  1.04  0.86  0.68   0.61   0.68   0.61   0.67    0.60
9.4.0   0.59  2.57  1.03  0.86  0.69   0.62   0.69   0.62   0.70    0.61
9.5.0   0.59  2.61  1.05  0.85  0.69   0.62   0.69   0.62   0.69    0.62

Results: IO-bound

Performance summaries are here for all versions and latest versions. I focus on the latest versions.

Below I use colors to highlight the relative QPS values with yellow for regressions and blue for improvements. There are large regressions from new CPU overheads.
  • the load step (l.i0) is almost 2X faster for 5.6.51 vs 8.4.7 (relative QPS is 0.60)
  • the create index step (l.x) is more than 2X faster for 8.4.7 vs 5.6.51
  • the first write-only step (l.i1) is 1.54X faster for 8.4.7 vs 5.6.51
  • the second write-only step (l.i2) is 1.82X faster for 8.4.7 vs 5.6.51
  • the range-query steps (qr*) are ~20% slower in 8.4.7 vs 5.6.51
  • the point-query steps (qp*) are 13% slower, 3% slower and 17% faster in 8.4.7 vs 5.6.51
dbms    l.i0  l.x   l.i1  l.i2  qr100  qp100  qr500  qp500  qr1000  qp1000
5.6.51  1.00  1.00  1.00  1.00  1.00   1.00   1.00   1.00   1.00    1.00
5.7.44  0.91  1.42  1.52  1.78  0.84   0.92   0.87   0.97   0.93    1.17
8.0.44  0.62  2.58  1.56  1.81  0.76   0.88   0.79   0.99   0.85    1.18
8.4.7   0.60  2.65  1.54  1.82  0.74   0.87   0.77   0.98   0.82    1.17
9.4.0   0.61  2.68  1.52  1.76  0.75   0.86   0.80   0.97   0.85    1.16
9.5.0   0.60  2.75  1.53  1.73  0.75   0.87   0.79   0.97   0.84    1.17

The insert benchmark on a small server : Postgres 12.22 through 18.1

This has results for Postgres versions 12.22 through 18.1 with the Insert Benchmark on a small server.

Postgres continues to be boring in a good way. It is hard to find performance regressions.

tl;dr for a cached workload

  • performance has been stable from Postgres 12 through 18
tl;dr for an IO-bound workload
  • performance has mostly been stable
  • create index has been ~10% faster since Postgres 15
  • throughput for the write-only steps has been ~10% less since Postgres 15
  • throughput for the point-query steps (qp*) has been ~20% better since Postgres 13
Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions 12.22, 13.22, 13.23, 14.19, 14.20, 15.14, 15.15, 16.10, 16.11, 17.6, 17.7, 18.0 and 18.1.

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM. Storage is one NVMe device for the database using ext-4 with discard enabled. The OS is Ubuntu 24.04. More details on it are here.

For versions prior to 18, the config file is named conf.diff.cx10a_c8r32; the files are as similar as possible across versions and are here for versions 12, 13, 14, 15, 16 and 17.

For Postgres 18 I used 3 variations, which are here:
  • conf.diff.cx10b_c8r32
    • uses io_method='sync' to match Postgres 17 behavior
  • conf.diff.cx10c_c8r32
    • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
  • conf.diff.cx10d_c8r32
    • uses io_method='io_uring' to do async IO via io_uring
The Benchmark

The benchmark is explained here and is run with 1 client and 1 table. I repeated it with two workloads:
  • cached - the values for X, Y, Z are 30M, 40M, 10M
  • IO-bound - the values for X, Y, Z are 800M, 4M, 1M
The point query (qp100, qp500, qp1000) and range query (qr100, qr500, qr1000) steps are run for 1800 seconds each.

The benchmark steps are:

  • l.i0
    • insert X rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts Y rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and Z rows are inserted and deleted per table.
    • Wait for S seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of S is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested. This step is frequently not IO-bound for the IO-bound workload.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
Results: overview

The performance reports are here for:
The summary sections from the performance reports have 3 tables. The first shows absolute throughput for each DBMS tested and benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from Postgres 12.22.

When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
This statement doesn't apply to this blog post, but I keep it here for copy/paste into future posts. Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.

Results: cached

The performance summaries are here for all versions and latest versions.

I focus on the latest versions. Throughput for 18.1 is within 2% of 12.22, with the exception of the l.i2 benchmark step. This is great news because it means that Postgres has avoided introducing new CPU overhead as they improve the DBMS. There is some noise from the l.i2 benchmark step and that doesn't surprise me because it is likely variance from two issues -- vacuum and get_actual_variable_range.

Results: IO-bound

The performance summaries are here for all versions and latest versions.

I focus on the latest versions.
  • throughput for the load step (l.i0) is 1% less in 18.1 vs 12.22
  • throughput for the index step (l.x) is 13% better in 18.1 vs 12.22
  • throughput for the write-only steps (l.i1, l.i2) is 11% and 12% less in 18.1 vs 12.22
  • throughput for the range-query steps (qr*) is 2%, 3% and 3% less in 18.1 vs 12.22
  • throughput for the point-query steps (qp*) is 22%, 23% and 23% better in 18.1 vs 12.22
The improvements for the index step arrived in Postgres 15.

The regressions for the write-only steps arrived in Postgres 15 and are likely from two issues -- vacuum and get_actual_variable_range.

The improvements for the point-query steps arrived in Postgres 13.

Rotate SSL/TLS Certificates in Valkey/Redis Without Downtime

If your Valkey/Redis deployments use SSL/TLS, you will eventually need to rotate the TLS certificates. Perhaps it is because the certificates are expiring, or you made mistakes when creating them, or it could be that the private key has been leaked. This article explains the process of rotating the TLS/SSL certificates used by Valkey/Redis deployments […]

How to Monitor Kafka with ClickHouse® Kafka Engine

Learn how to monitor Kafka consumer lag, track throughput, and debug errors using ClickHouse Kafka engine system tables. Includes ready-to-use SQL queries for Kafka partition analysis and performance monitoring.