a curated list of database news from authoritative sources

October 28, 2025

Practical Data Masking in Percona Server for MySQL 8.4

Data masking lets you hide sensitive fields (emails, credit-card numbers, job titles, etc.) while keeping data realistic for reporting, support, or testing. It is particularly useful when you collaborate with external parties and need to share data for development purposes, while still protecting that data and keeping your customers’ privacy safe. Last […]

October 27, 2025

Troubleshooting PostgreSQL Logical Replication, Working with LSNs

PostgreSQL logical replication adoption is becoming more popular as significant advances continue to expand its range of capabilities.  While quite a few blogs have described features, there seems to be a lack of simple and straightforward advice on restoring stalled replication. This blog demonstrates an extremely powerful approach to resolving replication problems using the Log […]

October 23, 2025

How efficient is RocksDB for IO-bound, point-query workloads?

How efficient is RocksDB for workloads that are IO-bound and read-only? One way to answer this is to measure the CPU overhead from RocksDB as this is extra overhead beyond what libc and the kernel require to perform an IO. Here my focus is on KV pairs that are smaller than the typical RocksDB block size that I use -- 8kb.

By IO efficiency I mean: (storage read IOPs from RocksDB benchmark / storage read IOPs from fio)

And I measure this in a setup where RocksDB doesn't get much benefit from RocksDB block cache hits (database size > 400G, block cache size was 16G).

This value will be less than 1.0 in such a setup. But how much less? On my hardware the IO efficiency was ~0.84 at 1 client and ~0.88 at 6 clients. Were I to use storage with 2X higher read latency, the IO efficiency would be closer to 0.95.
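
To make the arithmetic concrete, here is a minimal sketch of the IO-efficiency calculation, using the IOPs figures reported later in this post:

```python
# IO efficiency = (storage read IOPs from RocksDB benchmark /
#                  storage read IOPs from fio), as defined above.

def io_efficiency(db_bench_iops: float, fio_iops: float) -> float:
    return db_bench_iops / fio_iops

# 1 client: db_bench readrandom did 8350 IOPs, fio did 9884
print(round(io_efficiency(8350, 9884), 2))    # 0.84
# 6 clients: db_bench readrandom did 38628 IOPs, fio did 43782
print(round(io_efficiency(38628, 43782), 2))  # 0.88
```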

Note that:

  • IO efficiency increases (decreases) when SSD read latency increases (decreases)
  • IO efficiency increases (decreases) when the RocksDB CPU overhead decreases (increases)
  • RocksDB QPS increases by ~8% for IO-bound workloads when --block_align is enabled

The overheads per 8kb block read on my test hardware were:

  • about 11 microseconds from libc + kernel
  • between 6 and 10 microseconds from RocksDB
  • between 100 and 150 usecs of IO latency from SSD per iostat

A simple performance model

A simple model to predict the wall-clock latency for reading a block is:
    userland CPU + libc/kernel CPU + device latency

For fio I assume that userland CPU is zero, I measured libc/kernel CPU at ~11 usecs, and I estimate that device latency is ~91 usecs. The device latency estimate comes from read-only benchmarks with fio, where fio reports the average latency as 102 usecs; that includes 11 usecs of CPU from libc+kernel, and 91 = 102 - 11.

This model isn't perfect, as I will show below when reporting results for RocksDB, but it might be sufficient.
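
As a sketch, the model and the fio numbers above predict fio's throughput for one synchronous client fairly well (the 91 usec device latency is the estimate derived above):

```python
# Simple model: wall-clock latency per block read is
# userland CPU + libc/kernel CPU + device latency, and one
# synchronous client sustains 1e6 / latency_usecs reads/s.

def predicted_latency_us(user_cpu_us: float, kernel_cpu_us: float,
                         device_us: float) -> float:
    return user_cpu_us + kernel_cpu_us + device_us

def predicted_iops(latency_us: float) -> float:
    return 1_000_000 / latency_us

# fio: userland ~0, libc/kernel ~11 usecs, device ~91 usecs
lat = predicted_latency_us(0, 11, 91)
print(lat, round(predicted_iops(lat)))  # 102 usecs, ~9804 IOPs
```

This is close to the 9884 IOPs that fio measured at 1 job with an average latency of 101.61 usecs.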

Q and A

Q: What is the CPU overhead from libc + kernel per 8kb read?
A: About 11 microseconds on this CPU.

Q: Can you write your own code that will be faster than RocksDB for such a workload?
A: Yes, you can.

Q: Should you write your own library for this?
A: It depends on how many features you need and the opportunity cost in spending time writing that code vs doing something else.

Q: Will RocksDB add features to make this faster?
A: That is for them to answer. But all projects have a complexity budget. Code can become too expensive to maintain when that budget is exceeded. There is also the opportunity cost to consider as working on this delays work on other features.

Q: Does this matter?
A: It matters more when storage is fast (read latency less than 100 usecs). As read response time grows the CPU overhead from RocksDB becomes much less of an issue.

Benchmark hardware

I ran tests on a Beelink SER7 with a Ryzen 7 7840HS CPU that has 8 cores and 32G of RAM. The storage device is a Crucial CT1000P3PSSD8 (Crucial P3, 1TB) using ext-4 with discard enabled. The OS is Ubuntu 24.04.

From fio, the average read latency for the SSD is 102 microseconds using O_DIRECT with io_depth=1 and the sync engine.

CPU frequency management makes it harder to claim that the CPU runs at X GHz, but the details are:

$ cpupower frequency-info

analyzing CPU 5:
  driver: acpi-cpufreq
  CPUs which run at the same hardware frequency: 5
  CPUs which need to have their frequency coordinated by software: 5
  maximum transition latency:  Cannot determine or is not supported.
  hardware limits: 1.60 GHz - 3.80 GHz
  available frequency steps:  3.80 GHz, 2.20 GHz, 1.60 GHz
  available cpufreq governors: conservative ... powersave performance schedutil
  current policy: frequency should be within 1.60 GHz and 3.80 GHz.
                  The governor "performance" may decide which speed to use
                  within this range.
  current CPU frequency: Unable to call hardware
  current CPU frequency: 3.79 GHz (asserted by call to kernel)
  boost state support:
    Supported: yes
    Active: no

Results from fio

I started with fio using a command-line like the following for NJ=1 and NJ=6 to measure average IOPs and the CPU overhead per IO.

fio --name=randread --rw=randread --ioengine=sync --numjobs=$NJ --iodepth=1 \
  --buffered=0 --direct=1 \
  --bs=8k \
  --size=400G \
  --randrepeat=0 \
  --runtime=600s --ramp_time=1s \
  --filename=G_1:G_2:G_3:G_4:G_5:G_6:G_7:G_8  \
  --group_reporting

Results are:

legend:
* iops - average reads/s reported by fio
* usPer, syPer - user, system CPU usecs per read
* cpuPer - usPer + syPer
* lat.us - average read latency in microseconds
* numjobs - the value for --numjobs with fio

iops    usPer   syPer   cpuPer  lat.us  numjobs
 9884   1.351    9.565  10.916  101.61  1
43782   1.379   10.642  12.022  136.35  6

Results from RocksDB

I used an edited version of my benchmark helper scripts that run db_bench. In this case the sequence of tests was:

  1. fillseq - loads the LSM tree in key order
  2. revrange - I ignore the results from this
  3. overwritesome - overwrites 10% of the KV pairs
  4. flush_mt_l0 - flushes the memtable, waits, compacts L0 to L1, waits
  5. readrandom - does random point queries when LSM tree has many levels
  6. compact - compacts LSM tree into one level
  7. readrandom2 - does random point queries when LSM tree has one level, bloom filters enabled
  8. readrandom3 - does random point queries when LSM tree has one level, bloom filters disabled

I use readrandom, readrandom2 and readrandom3 to vary the amount of work that RocksDB must do per query and measure the CPU overhead of that work. The most work happens with readrandom as the LSM tree has many levels and there are bloom filters to check. The least work happens with readrandom3 as the LSM tree only has one level and there are no bloom filters to check.

Initially I ran tests with --block_align not set as that reduces space-amplification (less padding) but 8kb reads are likely to cross file system page boundaries and become larger reads. But given the focus here is on IO efficiency, I used --block_align. 

A summary of the results for db_bench with 1 user (thread) and 6 users (threads) is:

--- 1 user
qps     iops    reqsz   usPer   syPer   cpuPer  rx.lat  io.lat  test
8282     8350   8.5     11.643   7.602  19.246  120.74  101     readrandom
8394     8327   8.7      9.997   8.525  18.523  119.13  105     readrandom2
8522     8400   8.2      8.732   8.718  17.450  117.34  100     readrandom3

--- 6 users
38391   38628   8.1     14.645   7.291  21.936  156.27  134     readrandom
39359   38623   8.3     10.449   9.346  19.795  152.43  144     readrandom2
39669   38874   8.0      9.459   9.850  19.309  151.24  140     readrandom3

From the tables that follow:
  • IO efficiency is approximately 0.84 at 1 client and 0.88 at 6 clients
  • With 1 user RocksDB adds between 6.534 and 8.330 usecs of CPU time per query compared to fio depending on the amount of work it has to do. 
  • With 6 users RocksDB adds between 7.287 to 9.914 usecs of CPU time per query
  • IO latency as reported by RocksDB is ~20 usecs larger than as reported by iostat. But I have to re-read the RocksDB source code to understand where and how it is measured.

legend:
* io.eff - IO efficiency as (db_bench storage read IOPs / fio storage read IOPs)
* us.inc - incremental user CPU usecs per read as (db_bench usPer - fio usPer)
* cpu.inc - incremental total CPU usecs per read as (db_bench cpuPer - fio cpuPer)

--- 1 user

        io.eff          us.inc          cpu.inc         test
        ------          ------          ------
        0.844           10.292           8.330          readrandom
        0.842            8.646           7.607          readrandom2
        0.849            7.381           6.534          readrandom3

--- 6 users

        io.eff          us.inc          cpu.inc         test
        ------          ------          ------
        0.882           13.266           9.914          readrandom
        0.882            9.070           7.773          readrandom2
        0.887            8.080           7.287          readrandom3
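
The derived columns can be reproduced directly from the raw fio and db_bench tables; a sketch for the 1-user case:

```python
# io.eff  = db_bench iops   / fio iops
# us.inc  = db_bench usPer  - fio usPer
# cpu.inc = db_bench cpuPer - fio cpuPer

fio = {"iops": 9884, "usPer": 1.351, "cpuPer": 10.916}
db_bench = {
    "readrandom":  {"iops": 8350, "usPer": 11.643, "cpuPer": 19.246},
    "readrandom2": {"iops": 8327, "usPer":  9.997, "cpuPer": 18.523},
    "readrandom3": {"iops": 8400, "usPer":  8.732, "cpuPer": 17.450},
}

for test, r in db_bench.items():
    io_eff = r["iops"] / fio["iops"]
    us_inc = r["usPer"] - fio["usPer"]
    cpu_inc = r["cpuPer"] - fio["cpuPer"]
    print(f"{io_eff:.3f}\t{us_inc:.3f}\t{cpu_inc:.3f}\t{test}")
```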

Evaluating the simple performance model

I described a simple performance model earlier in this blog post and now it is time to see how well it does for RocksDB. First I will use values from the 1 user/client/thread case:
  • IO latency is ~91 usecs per fio
  • libc+kernel CPU overhead is ~11 usecs per fio
  • RocksDB CPU overhead is 8.330, 7.607 and 6.534 usecs for readrandom, *2 and *3
The model is far from perfect as it predicts that RocksDB will sustain:
  • 9063 IOPs for readrandom, when it actually did 8350
  • 9124 IOPs for readrandom2, when it actually did 8327
  • 9214 IOPs for readrandom3, when it actually did 8400
Regardless, the model is a good way to think about the problem.
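
A sketch of the prediction arithmetic, using the per-query CPU and latency values listed above:

```python
# Predicted IOPs for one client = 1e6 / (device latency +
# libc/kernel CPU + RocksDB CPU), all in microseconds.

device_us = 91.0   # device latency estimate, per fio
kernel_us = 11.0   # libc + kernel CPU per 8kb read
rocksdb_cpu_us = {
    "readrandom": 8.330,
    "readrandom2": 7.607,
    "readrandom3": 6.534,
}

predicted = {
    test: 1_000_000 / (device_us + kernel_us + cpu_us)
    for test, cpu_us in rocksdb_cpu_us.items()
}
for test, iops in predicted.items():
    # within one IOP of the predictions quoted above
    print(test, round(iops))
```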

The impact from --block_align

RocksDB QPS increases by between 7% and 9% when --block_align is enabled. Enabling it reduces read-amp and increases space-amp. But given the focus here is on IO efficiency I prefer to enable it. RocksDB QPS increases with it enabled because fewer storage read requests cross file system page boundaries, thus the average read size from storage is reduced (see the reqsz column below).
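
To illustrate the page-boundary effect (a sketch; it assumes 4 KiB file system pages and an uncompressed 8 KiB block read at an arbitrary byte offset):

```python
PAGE = 4096   # file system page size in bytes (assumed)
BLOCK = 8192  # RocksDB block size used in this post

def pages_touched(offset: int) -> int:
    # number of file system pages covered by a BLOCK-sized
    # read that starts at the given byte offset
    first = offset // PAGE
    last = (offset + BLOCK - 1) // PAGE
    return last - first + 1

print(pages_touched(0))    # aligned read: 2 pages (8 KiB)
print(pages_touched(100))  # unaligned read: 3 pages (12 KiB)
```

An aligned 8 KiB block spans exactly two pages, while a block at an arbitrary offset usually straddles three, which is why the average request size drops when --block_align is enabled.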

legend:
* qps - RocksDB QPS
* iops - average storage reads/s per iostat
* reqsz - average read request size in KB per iostat
* usPer, syPer, cpuPer - user, system and (user+system) CPU usecs per read
* rx.lat - average read latency in microseconds, per RocksDB
* io.lat - average read latency in microseconds, per iostat
* test - the db_bench test name

- block_align disabled
qps     iops    reqsz   usPer   syPer   cpuPer  rx.lat  io.lat  test
7629     7740   8.9     12.133   8.718  20.852  137.92  111     readrandom
7866     7813   9.1     10.094   9.098  19.192  127.12  115     readrandom2
7972     7862   8.6      8.931   9.326  18.257  125.44  110     readrandom3

- block_align enabled
qps     iops    reqsz   usPer   syPer   cpuPer  rx.lat  io.lat  test
8282     8350   8.5     11.643   7.602  19.246  120.74  101     readrandom
8394     8327   8.7      9.997   8.525  18.523  119.13  105     readrandom2
8522     8400   8.2      8.732   8.718  17.450  117.34  100     readrandom3

Barbarians at the Gate: How AI is Upending Systems Research

This recent paper from the Berkeley Sky Computing Lab has been making waves in the systems community. Of course, Aleksey and I did our live blind read of it, which you can watch below. My annotated copy of the paper is also available here.

This is a fascinating and timely paper. It raises deep questions about how LLMs will shape the research process, and what that could look like. Below, I start with a short technical review, then move to the broader discussion topics.


Technical review

The paper introduces the AI-Driven Research for Systems (ADRS) framework. By leveraging the OpenEvolve framework, ADRS integrates LLMs directly into the systems research workflow to automate much of the solution-tweaking and evaluation process. As shown in Figure 3, ADRS operates as a closed feedback loop in which the LLM ensemble iteratively proposes, tests, and refines solutions to a given systems problem. This automation targets the two most labor-intensive stages of the research cycle, solution tweaking and evaluation, leaving the creative stages (problem formulation, interpreting results, and coming up with insights) untouched.

Within the inner loop, four key components work together. The Prompt Generator creates context-rich prompts that seed the LLM ensemble (Solution Generator), which outputs candidate designs or algorithms. These are then assessed by the Evaluator, a simulator or benchmark written by humans, for gathering quantitative feedback. The Solution Selector identifies the most promising variants, which are stored along with their scores in the Storage module to inform subsequent iterations. This automated loop runs rapidly and at scale, and enables exploration of large design spaces within hours rather than weeks! They applied ADRS to several systems problems, including cloud job scheduling, load balancing, transaction scheduling, and LLM inference optimization. In each case, the AI improved on prior human-designed algorithms, often within a few hours of automated search. Reported gains include up to 5x faster performance or 30–50% cost reductions compared to published baselines, which are achieved in a fraction of the time and cost of traditional research cycles.

Outside the optimization loop, the creative and difficult work happens. The scientist identifies the research problem, directs the search, and decides which hills are worth climbing. Machines handle the iterative grunt work of tweaking and testing solutions, while humans deal with abstraction, framing, and insight.

There are several other important limitations to the framework's effectiveness as well. The paper's examples mostly involve problems with trivial correctness requirements and no concurrency, security, or fault-tolerance concerns; those domains require reasoning beyond performance tuning. Another limitation is that the LLMs focus on and update only one component at a time, and can't yet handle system-wide interactions.

Simulator-based evaluation makes this approach feasible, but the systems field undervalues simulation work and this leads to limited infrastructure for automated testing. Similarly, evaluators also pose risks: poorly designed ones invite reward hacking, where LLMs exploit loopholes rather than learn real improvements. If AI-driven research is to scale, we need richer evaluators, stronger specifications, and broader respect for simulation as a first-class research tool.


Discussion topics

Here I wax philosophical on many interesting questions this work raises.


LLMs provide breadth, but research demands depth

LLMs excel at high-throughput mediocrity. By design, they replicate what has already been done, and optimize across the surface of knowledge. Research, however, advances through novelty, depth, and high-value insight.

"Research is to see what everybody else has seen, and to think what nobody else has thought."

-- Albert Szent-Györgyi (Nobel laureate)

In this sense, LLMs are not as dangerous as the "Barbarians" at the gates. They are more like "Barbies" at the gates, with gloss, confidence, and some hollowness. They may dazzle with presentation but they will lack the inner substance/insights/value that mastery, curiosity, and struggle bring.


LLMs address only the tip of the iceberg

LLMs operate on the visible tip of the research iceberg I described earlier. They cannot handle the deep layers that matter: Curiosity, Clarity, Craft, Community, Courage.

Worse, they may even erode those qualities. The danger in the short term is not invasion, but imitation: the replacement of thought with performance, and depth with polish. We risk mistaking synthetic polish for genuine understanding.

In the long term though, I am not worried. In the long term, we are all dead.

I'm kidding, ok. In the long term, we may be screwed as well. The 2006 movie "Idiocracy" rings more true every day. I am worried that, due to the inherent laziness of our nature, we may end up leaning so much on AI to navigate literature, frame questions, or spin hypotheses that we don't get enough chances to exercise our curiosity or improve our clarity of understanding.


LLMs are bad researchers, but can they still make good collaborators?

In our academic chat follow-up to the iceberg post, I wrote about what makes a bad researcher:

Bad research habits are easy to spot: over-competition, turf-guarding, incremental work, rigidity, and a lack of intellectual flexibility. Bad science follows bad incentives such as benchmarks over ideas, and performance over understanding. These days the pressure to run endless evaluations has distorted the research and publishing process. Too many papers now stage elaborate experiments to impress reviewers instead of illuminating them with insights. Historically, the best work always stood on its own, by its simplicity and clarity. 

LLMs are bad researchers. The shoe fits. 

But can they still be good collaborators? Is it still worth working with them? The hierarchy is simple:

Good collaborators  >  No  collaborators  >  Bad collaborators

Used wisely, LLMs can climb high enough to reach the lowest range of the good-collaborator category. If you give them bite-sized, well-defined work, they can reduce friction, preserve your momentum, and speed up parts of your work significantly. In a sense, they can make you technically fearless. I believe that when used for rapid prototyping, LLMs can help improve the design. And, through faster iteration, you may uncover some high-value insights.

But speed cuts both ways, because premature optimization is the root of all evil. If doing evaluations and optimizations becomes very cheap and effortless, we will more readily jump to this step, with nothing forcing us to think harder. Human brains are lazy by design. They don't want to think hard; they will take the quick superficial route out, and we don't get to go deep.

So, we need to tread carefully here as well.


Can we scale human oversight?

The worst time I ever had as an advisor was when I had to manage 6-7 (six-seveeeen!) PhD students at once. I would much rather work with 2 sharp creative students I support myself than 50 mediocre ones handed to me for free. The former way of working is more productive, and it results in deep work and valuable research. Focus is the key, and it does not scale.

The same holds for LLM-augmented research. Validation (via human focus) remains the bottleneck. LLMs can generate endless results, but without someone distilling those results into insight or wisdom, they remain AI slop in abundance.


Can clear insights be distilled without dust, sweat, and tears?

One may argue that with machines handling the grunt work, the researchers would finally get more time for thinking. Our brains are --what?-- yes, lazy. Left idle, they will scroll Reddit/Twitter rather than solve concurrency bugs.

I suspect we need some friction/irritation to nudge us to think in the background. And I suspect this is what happens when we are doing the boring work in the trenches. While writing a similar code snippet for the fifth time in our codebase, an optimization opportunity or an abstraction occurs to us. Very hard problems are impossible to tackle head-on. Doing the legwork, I suspect, lets us approach the problem sideways and gives us a chance to make some headway.

Yes, doing evaluation work sucks. But it is often necessary to generate the friction and space to get you to think about the performance, and more importantly the logic/point of your system. Through that suffering, you gradually get transformed and enlightened. Working in the trenches, you may even realize your entire setup is flawed and your measurements are garbage due to using closed-loop clients instead of open-loop ones.

What happens when we stop getting our hands dirty? We risk distilling nothing at all. Insights don't bubble up while we are sitting in comfort and scrolling cat videos. In an earlier post, Looming Liability Machines (LLMs), I argued that offloading root-cause analysis to AI misses the point. RCA isn't about assigning blame to a component. It is an opportunity to think about the system holistically, understand it better, and improve it. Outsourcing this to LLMs strikes me as a very stupid thing to do. We need to keep exercising those muscles; otherwise they will atrophy alongside our understanding of the system.


What will happen to the publication process?

In his insightful blog post on this paper, Brooker concludes:

Which leads systems to a tough spot. More bottlenecked than ever on the most difficult things to do. In some sense, this is a great problem to have, because it opens the doors for higher quality with less effort. But it also opens the doors for higher volumes of meaningless hill climbing and less insight (much of which we’re already seeing play out in more directly AI-related research). Conference organizers, program committees, funding bodies, and lab leaders will all be part of setting the tone for the next decade. If that goes well, we could be in for the best decade of systems research ever. If it goes badly, we could be in for 100x more papers and 10x less insight.

Given my firm belief in human laziness, I would bet on the latter. I have been predicting the collapse of the publishing system for a decade, and the flood of LLM-aided research may finally finish the job. That might not be a bad outcome either. We are due for a better model/process anyways.

October 22, 2025

Advanced Query Capabilities 👉🏻 aggregation pipelines

Although MongoDB has supported ACID transactions and sophisticated aggregation features for years, certain publications still promote outdated misconceptions, claiming that only SQL databases provide robust data consistency and powerful querying capabilities. The “Benefits of Migrating” section in a spreadsheet company’s article is a recent example. It's yet another chance to learn from—and correct—misleading claims.

The claims ignore MongoDB’s advanced querying and multi-document transaction support. Written to market migration tools, the article overlooks that MongoDB’s simple CRUD API is efficient for single-document tasks and that, as a general-purpose database, MongoDB also offers explicit transactions and powerful aggregation queries, like SQL.

Enhanced Data Consistency and Reliability

The migration tool company justifies migrating by stating:

PostgreSQL’s ACID compliance ensures that all transactions are processed reliably, maintaining data integrity even in the event of system failures. This is particularly important for applications that require strong consistency, such as financial systems or inventory management.

Yes, PostgreSQL does provide ACID transactions and strong consistency, but this is mainly true for single-node deployments. In high-availability and sharded settings, achieving strong consistency and ACID properties is more complicated (see an example, and another example).

Therefore, highlighting ACID compliance as a reason to migrate from another database—when that alternative also supports ACID transactions—is not correct. For instance, single-node MongoDB has offered ACID compliance for years, and since v4.2, it supports multi-document transactions across replica sets and sharded clusters. Let's provide some syntax examples for the domains they mentioned.

Example: Financial system

Transfer $100 from Alice’s account to Bob’s account

// Initialize data  
db.accounts.insertMany([  
  { account_id: "A123", name: "Alice", balance: 500 },  
  { account_id: "B456", name: "Bob", balance: 300 }  
]);  

// Start a transaction in a session
const session = db.getMongo().startSession();

try {
  const accounts = session.getDatabase(db.getName()).accounts;
  session.startTransaction();

  // Deduct $100 from Alice
  accounts.updateOne(
    { account_id: "A123" },
    { $inc: { balance: -100 } }
  );

  // Add $100 to Bob
  accounts.updateOne(
    { account_id: "B456" },
    { $inc: { balance: 100 } }
  );

  session.commitTransaction();
} catch (error) {
  session.abortTransaction();
  console.error("Transaction aborted due to error:", error);
} finally {
  session.endSession();
}

Why ACID matters in MongoDB here:

  • Atomicity: Deduct and credit, either both happen or neither happens.
  • Consistency: The total balance across accounts remains accurate.
  • Isolation: Other concurrent transfers won’t interfere mid-flight.
  • Durability: Once committed, changes survive crashes.

Example: Inventory management

Selling a product and recording that sale.


const session = db.getMongo().startSession();

try {
  const sessionDb = session.getDatabase(db.getName());
  const inventory = sessionDb.inventory;
  const sales = sessionDb.sales;
  session.startTransaction();

  // Reduce inventory count
  inventory.updateOne(
    { product_id: "P100" },
    { $inc: { quantity: -1 } }
  );

  // Add a record of the sale
  sales.insertOne(
    { product_id: "P100", sale_date: new Date(), quantity: 1 }
  );

  session.commitTransaction();
} catch (error) {
  session.abortTransaction();
  console.error("Transaction aborted due to error:", error);
} finally {
  session.endSession();
}

ACID guarantees in MongoDB:

  • No partial updates
  • Inventory stays synchronized with sales records
  • Safe for concurrent orders
  • Durable once committed

Advanced Query Capabilities

The migration tool vendor justifies migrating by stating:

PostgreSQL offers powerful querying capabilities, including:

  • Complex joins across multiple tables
  • Advanced aggregations and window functions
  • Full-text search with features like ranking and highlighting
  • Support for geospatial data and queries

These allow for more sophisticated data analysis and reporting compared to MongoDB’s more limited querying capabilities.

This completely overlooks MongoDB’s aggregation pipeline.

Complex joins

MongoDB’s $lookup stage joins collections, even multiple times if you want.

Example: Join orders with customers to get customer names.

db.orders.aggregate([
  {
    $lookup: {
      from: "customers",
      localField: "customer_id",
      foreignField: "_id",
      as: "customer_info"
    }
  },
  { $unwind: "$customer_info" },
  {
    $project: {
      order_id: 1,
      product: 1,
      "customer_info.name": 1
    }
  }
]);

Advanced aggregations

Operators like $group, $sum, $avg, $count handle numeric calculations with ease.

Example: Total sales amount per product.

db.sales.aggregate([
  {
    $group: {
      _id: "$product_id",
      totalRevenue: { $sum: "$amount" },
      avgRevenue: { $avg: "$amount" }
    }
  },
  { $sort: { totalRevenue: -1 } }
]);

Window-like functions

MongoDB has $setWindowFields for operations akin to SQL window functions.

Running total of sales, sorted by date:

db.sales.aggregate([
  { $sort: { sale_date: 1 } },
  {
    $setWindowFields: {
      sortBy: { sale_date: 1 },
      output: {
        runningTotal: {
          $sum: "$amount",
          window: { documents: ["unbounded", "current"] }
        }
      }
    }
  }
]);

Full-text search with ranking & highlighting

MongoDB supports both simple text indexes and Atlas Search (powered by Apache Lucene).

Example with Atlas Search: Search in articles and highlight matches.

db.articles.aggregate([
  {
    $search: {
      index: "default",
      text: {
        query: "machine learning",
        path: ["title", "body"]
      },
      highlight: { path: "body" }
    }
  },
  {
    $project: {
      title: 1,
      score: { $meta: "searchScore" },
      highlights: { $meta: "searchHighlights" }
    }
  }
]);

Geospatial queries

Native geospatial indexing with operators like $near.

Example: Find restaurants within 1 km of a point.

db.restaurants.createIndex({ location: "2dsphere" });

db.restaurants.find({
  location: {
    $near: {
      $geometry: { type: "Point", coordinates: [-73.97, 40.77] },
      $maxDistance: 1000
    }
  }
});

Conclusion

MongoDB and PostgreSQL have equivalent capabilities for ACID transactions and “advanced” queries — the difference lies in syntax and data model.

MongoDB transactions don’t rely on blocking locks. They detect conflicts and let the application wait and retry if necessary.

And instead of SQL in text strings sent to the database server to be interpreted at runtime, MongoDB uses a staged aggregation pipeline, fully integrated in your application language.

Migrating to PostgreSQL doesn’t magically grant you ACID or advanced analytics — if you’re already using MongoDB’s features, you already have them.

Customizing the New MongoDB Concurrency Algorithm

On some occasions, we realize the necessity of throttling the number of requests that MongoDB tries to execute per second, be it due to resource saturation remediation, machine change planning, or performance tests. The most direct way of doing this is by tuning the WiredTiger transaction ticket parameters. Applying this throttle provides more controlled and […]

October 20, 2025

Determine how much concurrency to use on a benchmark for small, medium and large servers

What I describe here works for me given my goal, which is to find performance regressions. A benchmark run at low concurrency is used to find regressions from CPU overhead. A benchmark run at high concurrency is used to find regressions from mutex contention. A benchmark run at medium concurrency might help find both.

My informal way for classifying servers by size is:

  • small - has less than 10 cores
  • medium - has between 10 and 20 cores
  • large - has more than 20 cores

How much concurrency?

I almost always co-locate benchmark clients and the DBMS on the same server. This comes at a cost (less CPU and RAM is available for the DBMS) and might have odd artifacts because clients in the real world are usually not co-located. But it has benefits that matter to me. First, I don't worry about variance from changes in network latency. Second, this is much easier to set up.

I try not to oversubscribe the CPU when I run a benchmark. For benchmarks where there are few waits for reads from or writes to storage, I limit the number of benchmark users so that the concurrent connection count is less than the number of CPU cores (cores, not vCPUs), and I almost always use servers with Intel Hyperthreading and AMD SMT disabled. I do this because DBMS performance suffers when the CPU is oversubscribed, and back when I was closer to production we did our best to avoid that state.

Even for benchmarks that have some benchmark steps where the workload will have IO waits, I will still limit the amount of concurrency unless all benchmark steps that I measure will have IO waits.

Assuming a benchmark is composed of a sequence of steps (at minimum: load, query), I consider the number of concurrent connections per benchmark user. For sysbench, the number of concurrent connections is the same as the number of users, although sysbench uses the --threads argument to set the number of users. I am just getting started with TPROC-C via HammerDB, and that appears to be like sysbench, with one concurrent connection per virtual user (VU).

For the Insert Benchmark the number of concurrent connections is 2X the number of users on the l.i1 and l.i2 steps and then 3X the number of users on the range-query read-write steps (qr*) and the point-query read-write steps (qp*). And whether or not there are IO-waits for these users is complicated, so I tend to configure the benchmark so that the number of users is no more than half the number of CPU cores.

Finally, I usually set the benchmark concurrency level to be less than the number of CPU cores because I want to leave some cores for the DBMS to do the important background work, which is mostly MVCC garbage collection -- MyRocks compaction, InnoDB purge and dirty page writeback, Postgres vacuum.
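
The sizing rules above can be sketched as a small helper; the connections-per-user factors come from this post, while the number of cores reserved for background work is my own illustrative assumption:

```python
def max_users(cores: int, conns_per_user: int,
              reserve_cores: int = 2) -> int:
    # leave some cores for DBMS background work (MVCC GC,
    # compaction, purge, vacuum), then keep the concurrent
    # connection count below the remaining core count
    usable = max(cores - reserve_cores, 1)
    return max(usable // conns_per_user, 1)

print(max_users(8, 1))    # sysbench on a small server
print(max_users(8, 3))    # Insert Benchmark qr*/qp* steps, 3 conns/user
print(max_users(32, 1))   # sysbench on a large server
```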

October 16, 2025

Why is RocksDB spending so much time handling page faults?

This week I was running benchmarks to understand how fast RocksDB could do IO, and then compared that to fio to understand the CPU overhead added by RocksDB. While looking at flamegraphs taken during the benchmark, I was confused to see that about 20% of the samples were from page fault handling.

The lesson here is to run your benchmark long enough to reach a steady state before you measure things, or there will be confusion. I was definitely confused when I first saw this. Perhaps my post saves time for the next person who spots it.

The workload is db_bench with a database size that is much larger than memory and read-only microbenchmarks for point lookups and range scans.

Then I wondered if this was a transient issue that occurs while RocksDB is warming up the block cache and growing process RSS until the block cache has been fully allocated.

While b-trees as used by Postgres and MySQL will do a large allocation at process start, RocksDB does an allocation per block read, and when the block is evicted then the allocation is free'd. This can be a stress test for a memory allocator which is why jemalloc and tcmalloc work better than glibc malloc for RocksDB. I revisit the mallocator topic every few years and my most recent post is here.

In this case I use RocksDB with jemalloc. Even though per-block allocations are transient, the memory used by jemalloc is mostly not transient. While there are cases where jemalloc can return memory to the OS, with my usage that is unlikely to happen.

Were I to let the benchmark run for a long enough time, then eventually jemalloc would finish getting memory from the OS. However, my tests were running for about 10 minutes and doing about 10,000 block reads per second while I had configured RocksDB to use a block cache that was at least 36G and the block size was 8kb. So my tests weren't running long enough for the block cache to fill, which means that during the measurement period:

  • jemalloc was still asking for memory
  • block cache eviction wasn't needed and after each block read a new entry was added to the block cache
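
A back-of-the-envelope check of those numbers (a rough sketch; it assumes every block read inserts a new cache entry and ignores allocator overhead):

```python
cache_bytes = 36 * 1024**3   # block cache size: at least 36G
block_bytes = 8 * 1024       # block size: 8kb
reads_per_sec = 10_000       # block reads per second

blocks = cache_bytes // block_bytes
seconds = blocks / reads_per_sec
print(blocks, round(seconds / 60, 1))  # ~4.7M blocks, ~7.9 minutes
```

So filling the cache takes at least ~8 minutes of steady reads, which is close to the ~10 minute test duration, so the cache was still filling during the measurement period.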

The result in this example is that 22.69% of the samples are from page fault handling. That is the second large stack from the left in the flamegraph. The RocksDB code where it happens is rocksdb::BlockFetcher::ReadBlockContents.

When I run the benchmark for longer, the CPU overhead from page fault handling goes away.