A curated list of database news from authoritative sources

October 06, 2025

My time at Oracle: functional and design specification reviews

I worked at Oracle from 1997 to 2005 -- the first 3 years on the app server team in Portland and the last 5 on DBMS query execution in Redwood Shores. I had a good time there, made many friends and learned a lot.

They had an excellent process for functional and design specification reviews. Like many, I am wary of (too much) process but this wasn't too much. It was just enough.

At a high level, you would write and then get a review for the functional spec. The review was an in-person meeting. Once that was resolved the process would repeat for the design spec. You were expected to write a good spec -- it was better for one person (the author) to spend a lot of time on it than to waste the time of the many readers. Many specs would be revisited long after the review because there is turnover and specs are easier to read than source code.

We used FrameMaker to write the specs on Solaris workstations. That was a long time ago. The functional spec I wrote for IEEE754 datatypes was more than 50 pages because I had to document every aspect of PL/SQL and SQL that would be impacted by it (there were so many functions to document). The design spec I wrote for a new sort algorithm was also quite long because I had already implemented the algorithm to collect performance results to justify the effort. The patent attorney copied much of that design doc into the patent, resulting in a patent that might be more readable than average.

For each specification you set up a meeting a few weeks out and shared the spec with people who might attend. In many cases feedback arrived via email or in person and could be resolved before the meeting, but in some cases it wouldn't get resolved until the meeting itself.

It is important to split the functional and design specs, and their reviews. It helps with efficiency, and the design might change a lot based on the outcome of the functional spec review.

There were a variety of responses to the feedback, and all of it (both the feedback and the response) was added to an appendix of the spec. Common responses include:

  • good point
    • I will change my spec as you suggest
  • no thank you
    • I disagree and will not change my spec as you suggest. Hopefully this isn't the response to all feedback, but some people like to bikeshed and/or get in the way of progress. When I rewrote the sort algorithm, I used something derived from quicksort, and quicksort implementations have worse than expected performance on some input sequences. The algorithm I used was far better than vanilla quicksort in that regard, but it didn't eliminate the risk. However, the performance improvement over the existing code was so large (the white paper claims 5X faster) that I said no thank you and the project got done. But I did spend some time doing the math to show how likely (or unlikely) the worst cases were. I needed a tool with arbitrary-precision math for that because the numbers are small, and might have ended up using a Scheme implementation.
  • good point, but
    • I won't change my spec, but I have a workaround for the problem you mention. For IEEE754 datatypes, a few people objected because a few infrequently used and fading platforms for the DBMS did not have hardware support for IEEE754. My solution was to use a function for each IEEE754 operation that was trivial on platforms with IEEE754 HW support -- something like double multiply_double(double x, double y) { return x*y; } -- but that could be implemented on the platforms lacking IEEE754 HW via a software implementation of IEEE754.

October 05, 2025

Measuring scaleup for Postgres 18.0 with sysbench

This post has results to measure scaleup for Postgres 18.0 on a 48-core server.

tl;dr

  • Postgres continues to be boring (in a good way)
  • Results are mostly excellent
  • A few of the range query tests have a scaleup that is less than great, but I need more time to debug that

Builds, Configuration & Hardware

The server has an AMD EPYC 9454P 48-Core Processor with AMD SMT disabled, 128G of RAM and SW RAID 0 with 2 NVMe devices. The OS is Ubuntu 22.04.

I compiled Postgres 18.0 from source and the configuration file is here.

Benchmark

I used sysbench and my usage is explained here. To save time I only run 32 of the 42 microbenchmarks, and most test only 1 type of SQL statement. Benchmarks are run with the database cached by Postgres. Each microbenchmark is run for 300 seconds.

The benchmark is run with 1, 2, 4, 8, 12, 16, 20, 24, 32, 40 and 48 clients. The purpose is to determine how well Postgres scales up.

Results

The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

I still use relative QPS here, but in a different way. The relative QPS here is:
(QPS at X clients) / (QPS at 1 client)

The goal is to determine scaleup efficiency for Postgres. When the relative QPS at X clients is a value near X, then things are great. But sometimes things aren't great and the relative QPS is much less than X. One issue is data contention for some of the write-heavy microbenchmarks. Another issue is mutex and rw-lock contention.
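For example (with hypothetical numbers), if a microbenchmark gets 10,000 QPS at 1 client and 400,000 QPS at 48 clients, then the relative QPS at 48 clients is 40 and the scaleup efficiency is 40/48, or about 83%.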

Perf debugging via vmstat and iostat

I use normalized results from vmstat and iostat to help explain why things aren't as fast as expected. By normalized I mean I divide the average values from vmstat and iostat by QPS to see things like how much CPU is used per query or how many context switches occur per write. And note that a high context switch rate is often a sign of mutex contention.
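For example (again with hypothetical numbers), if vmstat reports an average of 200,000 context switches per second while the benchmark sustains 50,000 QPS, then cs/o is 4 context switches per query.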

Those results are here but can be difficult to read.

Charts: point queries

The spreadsheet with all of the results is here.

While the results aren't perfect, they are excellent. A perfect result would be a scaleup of 48 at 48 clients, and here the result is between 40 and 42 in most tests. The worst case is hot-points, where the scaleup is 32.57 at 48 clients. Note that the hot-points test has the most data contention of the point-query tests, as all queries fetch the same rows.

From the vmstat metrics (see here) I don't see an increase in mutex contention (more context switches, see the cs/o column) but I do see an increase in CPU per query (cpu/o). A test with better scaleup, like points-covered-pk, also shows no increase in mutex contention and an increase in CPU overhead (see cpu/o), but there the CPU increase is smaller (see here).

Charts: range queries without aggregation

The spreadsheet with all of the results is here.

The results again are great, but not perfect. The worst case is range-notcovered-pk, where the scaleup is 32.92 at 48 clients. The best case is scan, where the scaleup is 46.56 at 48 clients.

From the vmstat metrics for range-notcovered-pk I don't see any obvious problems. The CPU overhead (cpu/o, CPU per query) increases by about 8% (a ratio of 1.08) from 1 to 48 clients while the context switches per query (cs/o) decrease (see here).

Charts: range queries with aggregation

The spreadsheet with all of the results is here.

Results for range queries with aggregation are worse than for range queries without aggregation, and I hope to explain that later. A perfect result is a scaleup of 48. Here, 3 of the 8 tests have a scaleup less than 30, 4 have a scaleup between 30 and 40, and the best case is read-only_range=10 with a scaleup of 43.35.

The worst case was read-only-count with a scaleup of 21.38. From the vmstat metrics I see that the CPU overhead (cpu/o, CPU per query) increases by a factor of 2.08 at 48 clients vs 1 client while context switches per query (cs/o) decrease (see here). I am curious about that CPU increase as it isn't as bad for the other range query tests -- for example, see here where it is no larger than 1.54. The query for read-only-count is here.

Later I hope to explain why read-only-count, read-only-simple and read-only-sum don't do better.

Charts: writes

The spreadsheet with all of the results is here.

The worst case is update-one, where the scaleup is 2.86 at 48 clients. The bad result is expected, as having many concurrent clients update the same row is an anti-pattern with Postgres. The scaleup for Postgres on that test is a lot worse than for MySQL, where it was ~8 with InnoDB. But I am not here for Postgres vs InnoDB arguments.

Excluding the tests that mix reads and writes (read-write-*), the scaleup is between 13 and 21. This is far from great but isn't horrible. I run with fsync-on-commit disabled, which highlights problems but is less realistic. So for now I am happy with these results.



October 03, 2025

First/Last per Group: PostgreSQL DISTINCT ON and MongoDB DISTINCT_SCAN Performance

On Stack Overflow, the most frequent question for PostgreSQL is: "Select first row in each GROUP BY group?" I've written about it previously, presenting multiple alternatives and execution plans: How to Select the First Row of Each Set of Grouped Rows Using GROUP BY.
Solving this problem in a way that's both straightforward and high-performing can be challenging with SQL databases. However, MongoDB's aggregation provides a simple syntax and an efficient execution plan. When you use $first or $last alongside $sort and $group, MongoDB can perform an efficient loose index scan, which is similar to an index skip scan, reading only what is necessary for the result.

PostgreSQL

With PostgreSQL, the DISTINCT ON ... ORDER BY syntax is the easiest for the developer, but not the best for performance.

create table demo (
 primary key (a, b, c),
  a int, b timestamptz,
  c float,
  d text
);
-- insert a hundred thousand rows
insert into demo
 select 
  a,
  now() as b,
  random() as c,
  repeat('x',5) as d
 from generate_series(1,5) a
    , generate_series(1,20000) c
 -- ignore bad luck random;
 on conflict do nothing
;
-- run 9 more times for 10 batches in total (now() will be different each time):
\watch count=9
vacuum analyze demo;

In PostgreSQL 18, all rows are read, but most are eliminated so that only one row remains per group:

explain (buffers, analyze, verbose, costs off)
select
 distinct on (b) a, b, c, d
from demo
where a=1
order by b, c
;

                                             QUERY PLAN
----------------------------------------------------------------------------------------------------
 Unique (actual time=0.025..94.601 rows=10.00 loops=1)
   Output: a, b, c, d
   Buffers: shared hit=199959
   ->  Index Scan using demo_pkey on public.demo (actual time=0.024..77.263 rows=200000.00 loops=1)
         Output: a, b, c, d
         Index Cond: (demo.a = 1)
         Index Searches: 1
         Buffers: shared hit=199959
 Planning Time: 0.077 ms
 Execution Time: 94.622 ms

Although the DISTINCT ON ... ORDER BY syntax is straightforward, it is not efficient here: the number of rows processed (rows=200,000.00) and buffers read (Buffers: shared hit=199,959) is excessive compared to the result size (rows=10.00).

If you want to avoid unnecessary reads, you have to write a complex recursive CTE:

with recursive skip_scan as (
 (
  -- get the first row
  select * from demo
  where a=1
  order by b,c limit 1
 ) union all (
  -- get the next row
  select demo.*
  from skip_scan , lateral(
   select * from demo
   where demo.a = skip_scan.a and demo.b > skip_scan.b
   order by b,c limit 1
  ) demo
 )
)
select * from skip_scan
;

This simulates an index loose scan with nested loops, iterating from a recursive WITH clause.
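With 10 groups, this approach should read only about one row (plus one index descent) per group instead of the 200,000 rows scanned by DISTINCT ON above; running explain (analyze, buffers) on the CTE can confirm that on a given system.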

MongoDB aggregation

In MongoDB, using an aggregation pipeline makes it easy and efficient to get either the first or last document from each group.

I create a collection and fill it with similar data:

// Create collection and compound unique index to mimic PRIMARY KEY(a, b, c)
db.demo.drop();
db.demo.createIndex(
  { a: 1, b: 1, c: 1 },
  { unique: true }
);

// Function to insert bulk records
function insertDemoBatch() {
  const batch = [];
  const now = new Date();
  for (let a = 1; a <= 5; a++) {
    for (let j = 1; j <= 20000; j++) {
      batch.push({
        a: a,
        b: now,                   // similar to now()
        c: Math.random(),         // random float [0,1)
        d: 'x'.repeat(5)          // repeat string
      });
    }
  }
  try {
    db.demo.insertMany(batch, { ordered: false }); // ignore duplicates
  } catch (e) {
    print(`Insert completed with some duplicates ignored: ${e.writeErrors?.length ?? 0} errors`);
  }
}

// Run 10 times — now() will be different each run
for (let i = 0; i < 10; i++) {
  insertDemoBatch();
}

Here is the aggregation that groups and keeps the first value of each group:

db.demo.aggregate([
  { $match: { a: 1 } },         // equivalent to WHERE a=1
  { $sort: { b: 1, c: 1 } },    // equivalent to ORDER BY b, c
  { $group: {
      _id: "$b",                 // equivalent to DISTINCT ON (b)
      a: { $first: "$a" },       
      b: { $first: "$b" },
      c: { $first: "$c" },
      d: { $first: "$d" }      
  }},
  { $project: {                  // equivalent to SELECT a, b, c, d
      _id: 0, a: 1, b: 1, c: 1, d: 1 
  }},
]).explain("executionStats");

The execution plan is efficient, reading only one document per group (totalDocsExamined: 10) and seeking to the end of each group (keysExamined: 11) in the index scan:

...
        executionStats: {                                                                                                                                                                            
          executionSuccess: true,
          nReturned: 10,
          executionTimeMillis: 0,
          totalKeysExamined: 11,
          totalDocsExamined: 10,
          executionStages: {
            isCached: false,
            stage: 'FETCH',
            nReturned: 10,
            executionTimeMillisEstimate: 0,
            works: 11,
            advanced: 10,
            needTime: 0,
            needYield: 0,
            saveState: 1,
            restoreState: 1,
            isEOF: 1,
            docsExamined: 10,
            alreadyHasObj: 0,
            inputStage: {
              stage: 'DISTINCT_SCAN',
              nReturned: 10,
              executionTimeMillisEstimate: 0,
              works: 11,
              advanced: 10,
              needTime: 0,
              needYield: 0,
              saveState: 1,
              restoreState: 1,
              isEOF: 1,
              keyPattern: { a: 1, b: 1, c: 1 },
              indexName: 'a_1_b_1_c_1',
              isMultiKey: false,
              multiKeyPaths: { a: [], b: [], c: [] },
              isUnique: true,
              isSparse: false,
              isPartial: false,
              indexVersion: 2,
              direction: 'forward',
              indexBounds: {
                a: [ '[1, 1]' ],
                b: [ '[MinKey, MaxKey]' ],
                c: [ '[MinKey, MaxKey]' ]
              },
              keysExamined: 11
            }
          }
        }
      },
      nReturned: Long('10'),
...

MongoDB uses DISTINCT_SCAN in aggregation when the pipeline starts with a $sort and $group using $first or $last. The planner checks for a matching index that has the correct sort order, adjusting the scan direction if needed. It also checks that the group fields are not multi-key. If the conditions are met, MongoDB rewrites the pipeline to use DISTINCT_SCAN and $groupByDistinct, optimizing by skipping to the relevant index entries and retrieving only needed documents.
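As a quick sketch (not something measured in this post), a variant of the same pipeline keeps the last document per group instead of the first; since the rewrite applies to both $first and $last, it should also be eligible for DISTINCT_SCAN on the demo collection above, and explain("executionStats") can confirm it:

// Hypothetical variant: keep the last document of each group in (b, c) order
db.demo.aggregate([
  { $match: { a: 1 } },          // equivalent to WHERE a=1
  { $sort: { b: 1, c: 1 } },     // same ascending sort on the indexed keys
  { $group: {
      _id: "$b",                 // one result per value of b
      c: { $last: "$c" },        // last value in sort order per group
      d: { $last: "$d" }
  }}
]).explain("executionStats");    // check inputStage.stage for DISTINCT_SCAN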

This pattern is common in real‑world queries such as:

  • Latest or earliest measure for each metric in a time‑series database
  • Last contract with each supplier
  • Last purchase from each client
  • Most recent transaction for each account
  • Earliest login event for each user
  • Lowest‑paid employee in each department

The Invisible Curriculum of Research

Courses, textbooks, and papers provide the formal curriculum of research. But there is also an invisible curriculum. Unwritten rules and skills separate the best researchers from the rest.

I did get an early education on this thanks to my advisor, Anish. He kept mentioning "taste", calling some of my observations and algorithms "cute", and encouraging me to be more curious and creative and to develop my "taste".

Slowly, I realized that what really shapes a research career isn't written in any textbook or taught in any course. You learn it by osmosis from mentors, and through missteps: working on the wrong problem, asking shallow questions, botching a project, giving up too soon. But if you can absorb these lessons faster, you will find research more fulfilling. The visible curriculum teaches you how to build a car. The invisible curriculum teaches you where to go, who to ride with, and how to keep going when the road turns uphill.

After 25 years of experience, I can name five big items on that curriculum. And with some sleight of hand, I can make these into the 5Cs of the invisible curriculum: curiosity/taste, clarity/questions, craft, community, and courage/endurance.


Curiosity/Taste

"Do only what only you can do"

-- Dijkstra's advice to a promising researcher, who asked how to select a topic for research

Most problems are not worth solving. They may be technically tricky but irrelevant, or they may be easy and uninteresting. Developing taste means knowing which questions combine depth, tractability, and importance.

I believe curiosity and taste have an innate part: you can't replicate the twinkle in Gouda's eye when he is onto an interesting research problem. But they can also be cultivated. You build them by reading broadly, revisiting classic papers, and asking senior researchers not just what was done, but why it mattered at the time. Over the years, I have seen researchers chase technically impressive but tasteless problems that led nowhere. The best researchers have a finely tuned compass that points toward ideas with lasting value.


Clarity/Questions

If I had an hour to solve a problem and my life depended on the solution, I would spend the first 55 minutes determining the proper question to ask. For once I know the proper question, I could solve the problem in less than five minutes. 

--A. Einstein

The best researchers are the best question-askers. Any good researcher can solve the problems handed to them. The real skill is asking sharper, deeper questions that reframe an area and make others stop and think, "Yes, that's the question we should be asking".

Good questions are uncomfortable: they expose blind spots, disrupt comfortable assumptions, and make traditionalists nervous. They are generative and open new directions. If you want to stand out, learn to ask better questions.


Craft

Details make perfection, and perfection is not a detail.

-- Leonardo da Vinci

Research ideas live or die by execution. I have seen brilliant insights fail because the paper was unreadable, the system was sloppy, or the evaluation was unconvincing. Craft is about how you write papers, present talks, code systems, or design experiments. Craft matters as much as the idea itself.

Craft looks boring from the outside: rewriting a paragraph five times, running experiments three different ways, making your figures clean and interpretable. But craft is what makes an idea visible, persuasive, and reproducible. Without it, your work never takes off.


Community

"None of us is as smart as all of us."

-- Ken Blanchard

Research happens in conversation, not isolation. Community is how you learn taste. Whom you share ideas with, who critiques your drafts, who cites you ... all of this shapes your trajectory. Invest in your community: mentor, review, collaborate, and give credit generously. Your reputation compounds faster, and lasts longer, than your h-index.

People skills are very important. There is nothing soft about these skills: they are the hardest to master and the most crucial for success. Learn to communicate well. Spend many times more effort than you think sufficient to improve your writing and presentation. Not a second of this work goes to waste. Really, just read through the Writing/Presenting section here.

Finally, maintain high standards. Your name is your currency. Trust is hard to gain and easy to lose.


Courage/Endurance

"Research is to see what everybody else has seen, and to think what nobody else has thought."

-- Albert Szent-Györgyi (Nobel laureate)

"Nothing in this world can take the place of persistence. Talent will not; genius will not; education will not; persistence and determination alone are omnipotent."

-- Calvin Coolidge

Community is important, but that doesn't mean you should flock like sheep. Incremental work is safe but forgettable. Transformative work requires courage to risk failure, and endurance to push through rejection. Every meaningful project will face resistance: reviewers who don't get it, experiments that collapse, colleagues who tell you it won't work.

Steven Pressfield calls it "turning pro": showing up day after day, even when enthusiasm wanes. The invisible curriculum here is that breakthroughs often come not from brilliance, but from stubborn persistence. The courage to start and the endurance to continue... That is what carries you across the long, dull middle of any project.


If you are looking for more to read, here is more advice:

https://muratbuffalo.blogspot.com/2024/07/advice-to-young.html

https://muratbuffalo.blogspot.com/2020/06/research-writing-and-career-advice.html

How to round timestamps in ClickHouse

Learn how to round timestamps in ClickHouse using toStartOfDay, toStartOfHour, and other built-in functions with syntax examples and performance tips.

How to URL-encode query parameters in ClickHouse

Learn how to safely URL-encode query parameters in ClickHouse using encodeURLFormComponent, including syntax, examples, and performance tips for web applications.

Supabase Series E

Raised funding from Accel, Peak XV, Figma, and existing investors.

October 02, 2025

How to extract the protocol of a URL in ClickHouse

Learn how to extract URL protocols in ClickHouse using the protocol() function with practical examples, performance tips, and real-time API implementation.

How to round dates in ClickHouse

Master ClickHouse date rounding with toStartOfYear, toStartOfMonth, toStartOfWeek and more - complete guide with syntax, examples, and API integration.

Measuring scaleup for MariaDB with sysbench

This post has results to measure scaleup for MariaDB 11.8.3 on a 48-core server.

tl;dr

  • Scaleup is better for range queries than for point queries
  • For tests where results were less than great, the problem appears to be mutex contention within InnoDB

Builds, Configuration & Hardware

The server has an AMD EPYC 9454P 48-Core Processor with AMD SMT disabled, 128G of RAM and SW RAID 0 with 2 NVMe devices. The OS is Ubuntu 22.04.

I compiled MariaDB 11.8.3 from source and the my.cnf file is here.

Benchmark

I used sysbench and my usage is explained here. To save time I only run 32 of the 42 microbenchmarks, and most test only 1 type of SQL statement. Benchmarks are run with the database cached by MariaDB. Each microbenchmark is run for 300 seconds.

The benchmark is run with 1, 2, 4, 8, 12, 16, 20, 24, 32, 40 and 48 clients. The purpose is to determine how well MariaDB scales up.

Results

The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

I still use relative QPS here, but in a different way. The relative QPS here is:
(QPS at X clients) / (QPS at 1 client)

The goal is to determine scaleup efficiency for MariaDB. When the relative QPS at X clients is a value near X, then things are great. But sometimes things aren't great and the relative QPS is much less than X. One issue is data contention for some of the write-heavy microbenchmarks. Another issue is mutex and rw-lock contention.

Perf debugging via vmstat and iostat

I use normalized results from vmstat and iostat to help explain why things aren't as fast as expected. By normalized I mean I divide the average values from vmstat and iostat by QPS to see things like how much CPU is used per query or how many context switches occur per write. And note that a high context switch rate is often a sign of mutex contention.

Charts: point queries

The spreadsheet with all of the results is here.

For point queries:

  • tests for which the relative QPS at 48 clients is greater than 40
    • point-query
  • tests for which the relative QPS at 48 clients is between 30 and 40
    • none
  • tests for which the relative QPS at 48 clients is between 20 and 30
    • hot-points, points-covered-si, random-points_range=10
  • tests for which the relative QPS at 48 clients is between 10 and 20
    • points-covered-pk, points-notcovered-pk, points-notcovered-si, random-points_range=100
  • tests for which the relative QPS at 48 clients is less than 10
    • random-points_range=1000
For 5 of the 9 point query tests, QPS stops improving beyond 16 clients. And I assume that mutex contention is the problem.

Results for the random-points_range=Z tests are interesting. They use oltp_inlist_select.lua which does a SELECT with a large IN-list where the IN-list entries can find rows by exact match on the PK. The value of Z is the number of entries in the IN-list. And here MariaDB scales worse with a larger Z (1000) than with a smaller Z (10 or 100), which means that the thing that limits scaleup is more likely in InnoDB than the parser or optimizer.

From the normalized vmstat metrics (see here), the number of context switches per query (the cs/o column) grows a lot more from 1 to 48 clients for random-points_range=1000 than for random-points_range=10. The ratio (cs/o at 48 clients / cs/o at 1 client) is 1.46 for random-points_range=10 and increases to 19.96 for random-points_range=1000. The problem appears to be mutex contention.

Charts: range queries without aggregation

The spreadsheet with all of the results is here.

For range queries without aggregation:

  • tests for which the relative QPS at 48 clients is greater than 40
    • range-covered-pk, range-covered-si, range-notcovered-pk
  • tests for which the relative QPS at 48 clients is between 30 and 40
    • scan
  • tests for which the relative QPS at 48 clients is between 20 and 30
    • none
  • tests for which the relative QPS at 48 clients is between 10 and 20
    • none
  • tests for which the relative QPS at 48 clients is less than 10
    • range-notcovered-si
Only one test has less than great results for scaleup -- range-notcovered-si. QPS for it stops growing beyond 12 clients. The root cause appears to be mutex contention based on the large value for cs/o in the normalized vmstat metrics (see here). Of all the range-*covered-* tests, range-notcovered-si has the most InnoDB activity per query -- the query isn't covering, so it must do a PK index access for each index entry it finds in the secondary index.

Charts: range queries with aggregation

The spreadsheet with all of the results is here.

For range queries with aggregation:

  • tests for which the relative QPS at 48 clients is greater than 40
    • read-only-distinct, read-only-order, read-only-range=Y, read-only-sum
  • tests for which the relative QPS at 48 clients is between 30 and 40
    • read-only-count, read-only-simple
  • tests for which the relative QPS at 48 clients is between 20 and 30
    • none
  • tests for which the relative QPS at 48 clients is between 10 and 20
    • none
  • tests for which the relative QPS at 48 clients is less than 10
    • none
Results here are excellent, and better than the results above for range queries without aggregation. The difference might mean that there is less concurrent activity within InnoDB because aggregation code is run after each row is fetched from InnoDB.

Charts: writes

The spreadsheet with all of the results is here.

For writes:

  • tests for which the relative QPS at 48 clients is greater than 40
    • none
  • tests for which the relative QPS at 48 clients is between 30 and 40
    • read-write_range=Y
  • tests for which the relative QPS at 48 clients is between 20 and 30
    • update-index, write-only
  • tests for which the relative QPS at 48 clients is between 10 and 20
    • delete, insert, update-inlist, update-nonindex, update-zipf
  • tests for which the relative QPS at 48 clients is less than 10
    • update-one
The best result is for the read-write_range=Y tests, which are the classic sysbench transaction that does a mix of writes, point queries and range queries.

The worst result is from update-one which suffers from data contention as all updates are to the same row. A poor result is expected here.



October 01, 2025

The Redis License Has Changed: What You Need to Know

Redis has always been the go-to when you need fast, in-memory data storage. You’ll find it everywhere. Big ecommerce sites. Mobile apps. Maybe your own projects, too. But if you’re relying on Redis today, you’re facing a new reality: the licensing terms have changed, and that shift could affect the way you use Redis going […]

September 30, 2025

How to get the hostname from a URL in ClickHouse

Learn how to extract hostnames from URLs in ClickHouse using the domain() function, plus performance tips and real-world examples for web analytics.

How to decode URL-encoded strings in ClickHouse

Learn how to decode URL-encoded strings in ClickHouse using decodeURLComponent, with performance tips, edge cases, and production deployment strategies.

How to parse numeric date formats in ClickHouse

Learn how to convert numeric date formats to ClickHouse Date/DateTime types using YYYYMMDDToDate functions for better performance and built-in date operations.

Tackling the Cache Invalidation and Cache Stampede Problem in Valkey with Debezium Platform

There are two hard problems in computer science: cache invalidation, naming things, and off-by-1 errors. This classic joke, often attributed to Phil Karlton, highlights a very real and persistent challenge for software developers. We’re constantly striving to build faster, more responsive systems, and caching is a fundamental strategy for achieving that. But while caching offers […]

PostgREST 13

New features and changes in PostgREST version 13.

September 29, 2025

Postgres 18.0 vs sysbench on a 24-core, 2-socket server

This post has results from sysbench run at higher concurrency for Postgres versions 12 through 18 on a server with 24 cores and 2 sockets. My previous post had results for sysbench run with low concurrency. The goal is to search for regressions from new CPU overhead and mutex contention.

tl;dr, from Postgres 17.6 to 18.0

  • For most microbenchmarks Postgres 18.0 is between 1% and 3% slower than 17.6
  • The root cause might be new CPU overhead. It will take more time to gain confidence in results like this. On other servers with sysbench run at low concurrency I only see regressions for some of the range-query microbenchmarks. Here I see them for point-query and writes.

tl;dr, from Postgres 12.22 through 18.0

  • For point queries Postgres 18.0 is usually about 5% faster than 12.22
  • For range queries Postgres 18.0 is usually as fast as 12.22
  • For writes Postgres 18.0 is much faster than 12.22

Builds, configuration and hardware

I compiled Postgres from source for versions 12.22, 13.22, 14.19, 15.14, 16.10, 17.6, and 18.0.

The server is a SuperMicro SuperWorkstation 7049A-T with 2 sockets, 12 cores/socket and 64G of RAM. The CPU is an Intel Xeon Silver 4214R @ 2.40GHz. It runs Ubuntu 24.04. Storage is a 1TB m.2 NVMe device with ext-4 and discard enabled.

Prior to 18.0, the configuration file was named conf.diff.cx10a_c24r64 and is here for 12.22, 13.22, 14.19, 15.14, 16.10 and 17.6.

For 18.0 I tried 3 configuration files: x10b, x10c and x10d (used in the results tables below).

Benchmark

I used sysbench and my usage is explained here. To save time I only run 32 of the 42 microbenchmarks, and most test only 1 type of SQL statement. Benchmarks are run with the database cached by Postgres.

The read-heavy microbenchmarks run for 600 seconds and the write-heavy for 900 seconds.

The benchmark is run with 16 clients and 8 tables with 10M rows per table. The purpose is to search for regressions from new CPU overhead and mutex contention.

Results

The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

I provide charts below with relative QPS. The relative QPS is the following:
(QPS for some version) / (QPS for base version)
When the relative QPS is > 1 then some version is faster than the base version. When it is < 1 then there might be a regression. Values from iostat and vmstat divided by QPS are also provided here. These can help explain why something is faster or slower because they show how much HW is used per request.
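For example (with hypothetical numbers), if the base version gets 10,000 QPS on a microbenchmark and some version gets 9,800 QPS, then the relative QPS is 0.98 and there might be a 2% regression.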

I present results for:
  • versions 12 through 18 using 12.22 as the base version
  • versions 17.6 and 18.0 using 17.6 as the base version
Results: Postgres 17.6 and 18.0

Results per microbenchmark from vmstat and iostat are here.

For point queries, 18.0 often gets between 1% and 3% less QPS than 17.6 and the root cause might be new CPU overhead. See the cpu/o column (CPU per query) in the vmstat metrics here for the random-points microbenchmarks.

For range queries, 18.0 often gets between 1% and 3% less QPS than 17.6 and the root cause might be new CPU overhead. See the cpu/o column (CPU per query) in the vmstat metrics here for the read-only_range=X microbenchmarks.

For writes, 18.0 often gets between 1% and 2% less QPS than 17.6 and the root cause might be new CPU overhead. I ignore the write-heavy microbenchmarks that also do queries, as the regressions for them might be from the queries. See the cpu/o column (CPU per query) in the vmstat metrics here for the update-index microbenchmark.

Relative to: 17.6
col-1 : 18.0 with the x10b config
col-2 : 18.0 with the x10c config
col-3 : 18.0 with the x10d config

col-1   col-2   col-3   point queries
1.00    0.99    1.00    hot-points_range=100
0.99    0.98    1.00    point-query_range=100
0.98    0.99    0.99    points-covered-pk_range=100
0.99    0.99    0.98    points-covered-si_range=100
0.97    0.99    0.98    points-notcovered-pk_range=100
0.98    0.99    0.97    points-notcovered-si_range=100
0.98    0.99    0.98    random-points_range=1000
0.97    0.99    0.98    random-points_range=100
0.99    0.99    0.98    random-points_range=10

col-1   col-2   col-3   range queries without aggregation
0.98    0.98    0.99    range-covered-pk_range=100
0.98    0.98    0.98    range-covered-si_range=100
0.98    0.99    0.98    range-notcovered-pk_range=100
1.00    1.02    0.99    range-notcovered-si_range=100
1.01    1.01    1.01    scan_range=100

col-1   col-2   col-3   range queries with aggregation
0.99    1.00    0.98    read-only-count_range=1000
0.98    0.98    0.98    read-only-distinct_range=1000
0.97    0.97    0.96    read-only-order_range=1000
0.97    0.98    0.97    read-only_range=10000
0.98    0.99    0.98    read-only_range=100
0.99    0.99    0.99    read-only_range=10
0.98    0.99    0.99    read-only-simple_range=1000
0.98    1.00    0.98    read-only-sum_range=1000

col-1   col-2   col-3   writes
0.99    0.99    0.99    delete_range=100
0.99    0.99    0.99    insert_range=100
0.98    0.98    0.98    read-write_range=100
0.99    1.00    0.99    read-write_range=10
0.99    0.98    0.97    update-index_range=100
0.99    0.99    1.00    update-inlist_range=100
1.00    0.97    0.99    update-nonindex_range=100
0.97    1.00    0.98    update-one_range=100
1.00    0.99    1.01    update-zipf_range=100
0.98    0.98    0.97    write-only_range=10000

Results: Postgres 12 to 18

For the Postgres 18.0 results in col-6, the result is in green when relative QPS is >= 1.05 and in yellow when relative QPS is <= 0.98. Yellow indicates a possible regression.

Results per microbenchmark from vmstat and iostat are here.

Relative to: 12.22
col-1 : 13.22
col-2 : 14.19
col-3 : 15.14
col-4 : 16.10
col-5 : 17.6
col-6 : 18.0 with the x10b config

col-1   col-2   col-3   col-4   col-5   col-6   point queries
0.98    0.96    0.99    0.98    2.13    2.13    hot-points_range=100
1.00    1.02    1.01    1.02    1.03    1.01    point-query_range=100
0.99    1.05    1.05    1.08    1.07    1.05    points-covered-pk_range=100
0.99    1.08    1.05    1.07    1.07    1.05    points-covered-si_range=100
0.99    1.04    1.05    1.06    1.07    1.05    points-notcovered-pk_range=100
0.99    1.05    1.04    1.05    1.06    1.04    points-notcovered-si_range=100
0.98    1.03    1.04    1.06    1.06    1.04    random-points_range=1000
0.98    1.04    1.05    1.07    1.07    1.05    random-points_range=100
0.99    1.02    1.03    1.05    1.05    1.04    random-points_range=10

col-1   col-2   col-3   col-4   col-5   col-6   range queries without aggregation
1.02    1.04    1.03    1.04    1.03    1.01    range-covered-pk_range=100
1.05    1.07    1.06    1.06    1.06    1.05    range-covered-si_range=100
0.99    1.00    1.00    1.00    1.01    0.98    range-notcovered-pk_range=100
0.97    0.99    1.00    1.01    1.01    1.01    range-notcovered-si_range=100
0.86    1.06    1.08    1.17    1.18    1.20    scan_range=100

col-1   col-2   col-3   col-4   col-5   col-6   range queries with aggregation
0.98    0.97    0.97    1.00    0.98    0.97    read-only-count_range=1000
0.99    0.99    1.02    1.02    1.01    0.99    read-only-distinct_range=1000
1.00    0.99    1.02    1.05    1.05    1.02    read-only-order_range=1000
0.99    0.99    1.04    1.07    1.09    1.06    read-only_range=10000
0.99    1.00    1.00    1.01    1.02    0.99    read-only_range=100
1.00    1.00    1.00    1.01    1.01    1.00    read-only_range=10
0.99    0.99    1.00    1.01    1.01    0.99    read-only-simple_range=1000
0.98    0.99    0.99    1.00    1.00    0.98    read-only-sum_range=1000

col-1   col-2   col-3   col-4   col-5   col-6   writes
0.98    1.09    1.09    1.04    1.29    1.27    delete_range=100
0.99    1.03    1.02    1.03    1.08    1.07    insert_range=100
1.00    1.03    1.04    1.05    1.07    1.05    read-write_range=100
1.01    1.09    1.09    1.09    1.15    1.14    read-write_range=10
1.00    1.04    1.03    0.86    1.44    1.42    update-index_range=100
1.01    1.11    1.11    1.12    1.13    1.12    update-inlist_range=100
0.99    1.04    1.06    1.05    1.25    1.25    update-nonindex_range=100
1.05    0.92    0.90    0.84    1.18    1.15    update-one_range=100
0.98    1.04    1.03    1.01    1.26    1.26    update-zipf_range=100
1.02    1.05    1.10    1.09    1.21    1.18    write-only_range=10000

New File Copy-Based Initial Sync Overwhelms the Logical Initial Sync in Percona Server for MongoDB

In a previous article, Scalability for the Large-Scale: File Copy-Based Initial Sync for Percona Server for MongoDB, we presented some early benchmarks of the new File Copy-Based Initial Sync (FCBIS) available in Percona Server for MongoDB. Those first results already suggested significant improvements compared to the default Logical Initial Sync. In this post, we extend our […]