a curated list of database news from authoritative sources

October 16, 2025

Why is RocksDB spending so much time handling page faults?

This week I was running benchmarks to understand how fast RocksDB can do IO, and then compared the results to fio to understand the CPU overhead added by RocksDB. While looking at flamegraphs taken during the benchmark, I was surprised to see that about 20% of the samples were from page fault handling.

The lesson here is to run your benchmark long enough to reach a steady state before you measure anything, or you will end up confused, as I definitely was when I first saw this. Perhaps this post saves time for the next person who spots it.

The workload is db_bench with a database size that is much larger than memory and read-only microbenchmarks for point lookups and range scans.

Then I wondered if this was a transient issue that occurs while RocksDB is warming up the block cache and growing process RSS until the block cache has been fully allocated.

While b-trees as used by Postgres and MySQL do a large allocation at process start, RocksDB does an allocation per block read, and when the block is evicted that allocation is freed. This can be a stress test for a memory allocator, which is why jemalloc and tcmalloc work better than glibc malloc for RocksDB. I revisit the mallocator topic every few years and my most recent post is here.

In this case I use RocksDB with jemalloc. Even though per-block allocations are transient, the memory used by jemalloc is mostly not transient. While there are cases where jemalloc can return memory to the OS, with my usage that is unlikely to happen.

Were I to let the benchmark run long enough, eventually jemalloc would stop needing to get more memory from the OS. However, my tests ran for about 10 minutes and did about 10,000 block reads per second, while I had configured RocksDB to use a block cache of at least 36G with an 8kb block size. So my tests weren't running long enough for the block cache to fill, which means that during the measurement period:

  • jemalloc was still asking for memory
  • block cache eviction wasn't needed and after each block read a new entry was added to the block cache
The result in this example is that 22.69% of the samples are from page fault handling. That is the second large stack from the left in the flamegraph. The RocksDB code where it happens is rocksdb::BlockFetcher::ReadBlockContents.

When I run the benchmark for longer, the CPU overhead from page fault handling goes away.
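For anyone who wants to confirm steady state before measuring, here is a minimal sketch (my own, not from the original post) that polls a process's RSS and page-fault counters from /proc. Once RSS stops growing and the fault rate drops to near zero, the allocator has finished getting memory from the OS and it is safer to start collecting flamegraphs. It assumes Linux, 4 KiB pages, and a process name without spaces.

#!/usr/bin/env python3
# Hypothetical helper (not from the original post): poll a process's RSS and
# page-fault counters so you can tell when a benchmark has reached steady state
# (RSS flat, fault rate near zero) before taking flamegraphs.
import sys
import time

def read_stats(pid):
    with open(f"/proc/{pid}/stat") as f:
        fields = f.read().split()
    # Per proc(5), 1-based fields: 10 = minflt, 12 = majflt, 24 = rss (in pages).
    # Note: this simple split() assumes the process name contains no spaces.
    return int(fields[9]), int(fields[11]), int(fields[23])

def main():
    pid, interval = int(sys.argv[1]), 10
    prev = read_stats(pid)
    while True:
        time.sleep(interval)
        cur = read_stats(pid)
        rss_gb = cur[2] * 4096 / 2**30   # assumes 4 KiB pages
        print(f"rss={rss_gb:.2f}G "
              f"minflt/s={(cur[0] - prev[0]) / interval:.0f} "
              f"majflt/s={(cur[1] - prev[1]) / interval:.0f}")
        prev = cur

if __name__ == "__main__":
    main()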

October 14, 2025

Security Advisory: CVE Affecting Percona Monitoring and Management (PMM)

A critical security vulnerability has been identified in the following software that Percona has made available and that you may be using: PMM 3.x installations (that is, 3.0 and forward). The Common Vulnerabilities and Exposures (CVE) identifier for this issue has been requested from mitre.org. Vulnerability details: We were notified via an external report that […]

Amazon Aurora MySQL zero-ETL integration with Amazon SageMaker Lakehouse

In this post, we explore how zero-ETL integration works, the key benefits it delivers for data-driven teams, and how it aligns with the broader zero-ETL strategy in AWS services. You'll learn how this integration can enhance your data workflows, whether you're building predictive models, entering interactive SQL queries, or visualizing business trends. By eliminating traditional extract, transform, and load (ETL) processes, this solution enables real-time intelligence securely and at scale to help you make faster, data-driven decisions.

Is it time for TPC-BLOB?

If you want to store vectors in your database, then what you store as a row, KV pair, or document is likely to be larger than the fixed page size (when your DBMS uses fixed-size pages), and you will soon care about efficient and performant support for large objects. I assume this support hasn't been the top priority for many DBMS implementations, so there will be some performance bugs.

In a SQL DBMS, support for large objects will use the plumbing created to handle LOB (Large OBject) datatypes. We should define what the L in LOB means here; I will wave my hands and claim larger than a fixed page in your favorite DBMS but smaller than 512kb, because I limit my focus to online workloads.

Perhaps now is the time for industry standard benchmarks for workloads with large objects. Should it be TPC-LOB or TPC-BLOB?

Most popular DBMS use fixed-size pages, whether storage is index-organized via an update-in-place b-tree (InnoDB) or heap-organized (Postgres, Oracle). For rows that are larger than the page size, which is usually between 4kb and 16kb, the entire row or the largest columns will be stored out of line and likely split across several pages in the out-of-line storage. When the row is read, additional reads are done to gather all of the too-large parts from their out-of-line locations.
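To make the mechanics concrete, here is a toy sketch (my own illustration, not any particular engine's code) of how a value larger than the page size gets split into out-of-line chunks, and why reading it back costs one extra page read per chunk:

PAGE_SIZE = 8 * 1024  # assume an 8 KiB fixed page, typical for OLTP engines

def store_row(pages, row_id, value):
    # Toy illustration: values larger than a page are split into out-of-line
    # chunks, and the in-row slot keeps only pointers to those chunks.
    if len(value) <= PAGE_SIZE:
        return {"row_id": row_id, "inline": value}
    chunk_ids = []
    for off in range(0, len(value), PAGE_SIZE):
        chunk_id = len(pages)
        pages[chunk_id] = value[off:off + PAGE_SIZE]   # one out-of-line page per chunk
        chunk_ids.append(chunk_id)
    return {"row_id": row_id, "overflow": chunk_ids}

def read_row(pages, slot):
    # Reading a large row requires one extra page read per overflow chunk.
    if "inline" in slot:
        return slot["inline"]
    return b"".join(pages[cid] for cid in slot["overflow"])

pages = {}
slot = store_row(pages, 1, b"x" * (40 * 1024))   # a 40 KiB value spills into 5 overflow pages
assert read_row(pages, slot) == b"x" * (40 * 1024)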

This approach is far from optimal as there will be more CPU overhead, more random IO, and possibly more wasted space. But it was good enough because support for LOBs wasn't a priority for these DBMS; their focus was on OLTP, where rows are likely to be smaller than a fixed-size page.

Perhaps by luck, perhaps by fate, WiredTiger is a great fit for MongoDB because it is more flexible about page sizes. And it is more flexible because it isn't an update-in-place b-tree; instead it is a copy-on-write random (CoW-R) b-tree that doesn't need or use out-of-line storage, although for extra-large documents there might be a benefit from out-of-line storage.

MyRocks, and other LSM-based DBMS, also don't require out-of-line storage but they can benefit from it as shown by WiscKey and other engines that do key-value separation. Even the mighty RocksDB has an implementation of key-value separation via BlobDB.

Benchmarking Postgres 17 vs 18

Postgres 18 brings a significant improvement to read performance via async I/O and I/O worker threads. Here we compare its performance to Postgres 17.

Here are some ClickHouse® Cloud alternatives to consider

Explore alternatives to ClickHouse Cloud including managed ClickHouse providers, cloud data warehouses, and real-time OLAP engines. Compare performance, cost, developer experience, and learn when to choose each platform.

ClickHouse® vs BigQuery for real-time analytics

Compare ClickHouse and BigQuery for real-time analytics across architecture, query latency, cost models, and streaming ingestion. Learn when to choose each platform and how managed services simplify deployment.

ClickHouse® vs Firebolt for real-time data warehousing

Compare ClickHouse and Firebolt across architecture, real-time ingestion, query performance, and operational complexity. Learn when to choose each platform for your analytics workloads.

ClickHouse® vs PostgreSQL (with extensions)

Compare ClickHouse and PostgreSQL across storage models, performance benchmarks, scalability approaches, and popular extensions. Learn when to choose each database and how to migrate from PostgreSQL to ClickHouse for analytics.

October 13, 2025

Postgres 18.0 vs sysbench on a 32-core server

This is yet another great result for Postgres 18.0 vs sysbench. This time I used a 32-core server. Results for a 24-core server are here. The goal for this benchmark is to check for regressions from new CPU overhead and mutex contention.

I repeated the benchmark twice because I had some uncertainty about platform variance (HW and SW) on the first run.

tl;dr, from Postgres 17.6 to 18.0

  • There might be regressions from 17.6 to 18.0 but they are small (usually <= 3%)

tl;dr, from Postgres 12.22 through 18.0

  • the hot-points test is almost 2X faster starting in 17.6
  • scan is ~1.2X faster starting in 14.19
  • all write tests are much faster starting in 17.6

Builds, configuration and hardware

I compiled Postgres from source for versions 12.22, 13.22, 14.19, 15.14, 16.10, 17.6, and 18.0.

The server is a Dell Precision 7865 Tower Workstation with 1 socket, 128G RAM and an AMD Ryzen Threadripper PRO 5975WX with 32 cores. The OS is Ubuntu 24.04 and storage is a 2TB M.2 SSD with ext4 and discard enabled.

Prior to 18.0, the configuration file was named conf.diff.cx10a_c32r128 and is here for 12.22, 13.22, 14.19, 15.14, 16.10 and 17.6.

For 18.0 I tried 3 configuration files: x10b, which uses io_method=sync; x10c, which uses io_method=worker; and x10d, which uses io_method=io_uring.

Benchmark

I used sysbench and my usage is explained here. To save time I only run 32 of the 42 microbenchmarks, and most test only 1 type of SQL statement. Benchmarks are run with the database cached by Postgres.

The read-heavy microbenchmarks run for 600 seconds and the write-heavy for 900 seconds.

The benchmark is run with 24 clients and 8 tables with 10M rows per table. The purpose is to search for regressions from new CPU overhead and mutex contention.

I ran the benchmark twice. In the first run, there were several weeks between getting results for the older Postgres releases and Postgres 18.0, so I am less certain about variance from the hardware and software. One concern is changes in daily temperature, because I don't have a climate-controlled server room. Another concern is changes from updating my OS install.

In the second run, all results were collected within 7 days and I am less concerned about variance there.

Results

The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

I provide charts below with relative QPS. The relative QPS is the following:
(QPS for some version) / (QPS for base version)
When the relative QPS is > 1 then some version is faster than the base version. When it is < 1 then there might be a regression. Values from iostat and vmstat divided by QPS are also provided here. These can help explain why something is faster or slower because they show how much HW is used per request.
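As a tiny illustration of the metric (with made-up QPS numbers, not results from this post):

# relative_qps = (QPS for some version) / (QPS for base version)
# > 1 means the version is faster than the base; < 1 suggests a possible regression.
base  = {"hot-points_range=100": 1000.0, "scan_range=100": 250.0}   # hypothetical base QPS
newer = {"hot-points_range=100": 1940.0, "scan_range=100": 310.0}   # hypothetical QPS under test

for test, base_qps in base.items():
    rel = newer[test] / base_qps
    verdict = "faster" if rel > 1 else "possible regression"
    print(f"{test}: relative QPS = {rel:.2f} ({verdict})")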

I present results for:
  • versions 12 through 18 using 12.22 as the base version
  • versions 17.6 and 18.0 using 17.6 as the base version
Results: Postgres 17.6 and 18.0

All files are here.

Results per microbenchmark from vmstat and iostat are here for the first and second run.

Some comments:
  • 18.0 looks better relative to 17.6 in the second run and I explain my uncertainty about the first run above
  • But I am skeptical about the great result for 18.0 on the full scan test (scan_range=100) in the second run. That might be variance induced by vacuum.
  • There might be regressions from 17.6 to 18.0 but they are small (usually <= 3%)
  • The small regression in read-only_range=10 might be from new optimizer overhead, because it doesn't reproduce when the length of the range query is increased -- see read-only_range=100 and read-only_range=10000.
Relative to: 17.6
col-1 : 18.0 with the x10b config that uses io_method=sync
col-2 : 18.0 with the x10c config that uses io_method=worker
col-3 : 18.0 with the x10d config that uses io_method=io_uring

col-1   col-2   col-3   point queries, first run
0.97    0.99    0.94    hot-points_range=100
0.97    0.98    0.96    point-query_range=100
1.00    0.99    0.99    points-covered-pk_range=100
0.99    1.00    1.00    points-covered-si_range=100
0.98    0.99    0.98    points-notcovered-pk_range=100
0.99    0.99    0.99    points-notcovered-si_range=100
1.00    1.00    0.99    random-points_range=1000
0.98    0.98    0.98    random-points_range=100
0.99    0.98    0.99    random-points_range=10

col-1   col-2   col-3   point queries, second run
0.98    1.00    0.99    hot-points_range=100
1.00    1.00    0.99    point-query_range=100
1.01    1.01    1.01    points-covered-pk_range=100
1.00    1.01    1.00    points-covered-si_range=100
1.00    0.98    1.00    points-notcovered-pk_range=100
1.00    1.00    1.01    points-notcovered-si_range=100
1.00    1.01    1.01    random-points_range=1000
1.00    0.99    1.01    random-points_range=100
0.99    0.99    1.00    random-points_range=10

col-1   col-2   col-3   range queries without aggregation, first run
0.97    0.98    0.95    range-covered-pk_range=100
0.97    0.97    0.94    range-covered-si_range=100
0.98    0.98    0.97    range-notcovered-pk_range=100
0.99    0.99    0.98    range-notcovered-si_range=100
0.97    0.99    0.96    scan_range=100

col-1   col-2   col-3   range queries without aggregation, second run
0.99    0.99    0.98    range-covered-pk_range=100
0.99    0.99    0.99    range-covered-si_range=100
0.98    0.99    0.98    range-notcovered-pk_range=100
0.99    1.00    1.00    range-notcovered-si_range=100
1.24    1.24    1.22    scan_range=100

col-1   col-2   col-3   range queries with aggregation, first run
0.99    1.00    1.00    read-only-count_range=1000
1.01    1.01    1.01    read-only-distinct_range=1000
1.01    1.01    1.00    read-only-order_range=1000
1.04    1.04    1.04    read-only_range=10000
0.99    0.99    0.98    read-only_range=100
0.97    0.98    0.97    read-only_range=10
0.99    0.98    0.98    read-only-simple_range=1000
0.99    0.99    0.99    read-only-sum_range=1000

col-1   col-2   col-3   range queries with aggregation, second run
0.99    1.00    1.00    read-only-count_range=1000
1.01    1.01    1.00    read-only-distinct_range=1000
0.99    0.99    1.00    read-only-order_range=1000
1.02    1.03    1.03    read-only_range=10000
0.99    0.99    0.99    read-only_range=100
0.99    0.99    0.98    read-only_range=10
0.99    1.00    1.01    read-only-simple_range=1000
1.00    1.00    1.00    read-only-sum_range=1000

col-1   col-2   col-3   writes, first run
0.99    0.98    0.96    delete_range=100
0.99    0.96    0.98    insert_range=100
1.00    0.99    0.98    read-write_range=100
0.99    0.98    0.98    read-write_range=10
1.00    0.99    1.00    update-index_range=100
1.03    0.95    1.01    update-inlist_range=100
0.99    0.99    1.00    update-nonindex_range=100
1.00    1.00    1.01    update-one_range=100
0.98    0.99    1.00    update-zipf_range=100
0.97    0.97    0.99    write-only_range=10000

col-1   col-2   col-3   writes, second run
0.97    0.97    0.98    delete_range=100
0.99    0.99    1.00    insert_range=100
0.99    0.99    0.98    read-write_range=100
0.98    0.98    0.98    read-write_range=10
0.97    0.98    0.97    update-index_range=100
0.98    0.99    1.04    update-inlist_range=100
0.98    0.99    0.98    update-nonindex_range=100
0.99    0.99    0.98    update-one_range=100
0.98    0.99    0.98    update-zipf_range=100
0.99    0.97    0.95    write-only_range=10000

Results: Postgres 12 to 18

All files are here.

Results per microbenchmark from vmstat and iostat are here for the first and second run.
The data below with a larger font is here.

Some comments:
  • the hot-points test is almost 2X faster starting in 17.6
  • scan is ~1.2X faster starting in 14.19
  • all write tests are much faster starting in 17.6

Relative to: 12.22
col-1 : 13.22
col-2 : 14.19
col-3 : 15.14
col-4 : 16.10
col-5 : 17.6
col-6 : 18.0 with the x10b config
col-7 : 18.0 with the x10c config
col-8 : 18.0 with the x10d config

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   point queries, first run
1.02    1.00    1.01    1.00    1.94    1.87    1.91    1.82    hot-points_range=100
1.01    1.02    1.02    1.00    1.02    0.99    1.00    0.98    point-query_range=100
1.02    1.02    1.01    1.03    1.01    1.01    1.00    1.00    points-covered-pk_range=100
1.01    1.04    1.03    1.05    1.03    1.02    1.03    1.03    points-covered-si_range=100
1.01    1.01    1.01    1.02    1.02    1.00    1.00    1.00    points-notcovered-pk_range=100
1.00    1.03    1.02    1.03    1.02    1.01    1.01    1.02    points-notcovered-si_range=100
1.01    1.02    1.02    1.03    1.00    1.00    1.00    0.99    random-points_range=1000
1.01    1.02    1.02    1.02    1.02    1.00    1.00    1.00    random-points_range=100
1.02    1.03    1.02    1.02    1.01    1.00    1.00    1.00    random-points_range=10

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   point queries, second run
1.00    0.98    0.99    1.00    1.94    1.90    1.93    1.92    hot-points_range=100
1.00    1.01    1.02    1.03    1.03    1.02    1.02    1.02    point-query_range=100
1.02    1.01    1.00    1.04    0.99    1.00    1.00    0.99    points-covered-pk_range=100
1.01    1.04    1.03    1.07    1.03    1.03    1.05    1.04    points-covered-si_range=100
1.01    1.02    1.03    1.04    1.01    1.00    0.99    1.01    points-notcovered-pk_range=100
1.02    1.05    1.05    1.05    1.03    1.03    1.03    1.04    points-notcovered-si_range=100
1.01    1.02    1.03    1.03    0.99    0.99    1.00    1.00    random-points_range=1000
1.02    1.02    1.03    1.04    1.01    1.01    1.00    1.01    random-points_range=100
1.02    1.02    1.02    1.03    1.02    1.01    1.01    1.02    random-points_range=10

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   range queries without aggregation, first run
1.00    1.02    1.02    1.01    1.00    0.97    0.98    0.95    range-covered-pk_range=100
1.00    1.02    1.02    1.01    1.00    0.97    0.97    0.94    range-covered-si_range=100
1.01    1.00    1.00    1.00    0.99    0.97    0.97    0.97    range-notcovered-pk_range=100
0.99    1.00    1.00    0.99    1.01    1.00    1.00    0.99    range-notcovered-si_range=100
0.98    1.24    1.11    1.13    1.16    1.12    1.14    1.11    scan_range=100

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   range queries without aggregation, second run
1.01    1.02    1.02    1.02    1.01    1.00    1.00    0.99    range-covered-pk_range=100
1.01    1.03    1.02    1.02    1.01    1.00    1.01    1.00    range-covered-si_range=100
1.00    0.99    1.00    1.00    0.99    0.97    0.98    0.98    range-notcovered-pk_range=100
1.00    1.00    1.00    0.98    1.01    1.00    1.01    1.01    range-notcovered-si_range=100
1.00    1.27    1.15    1.15    0.97    1.20    1.20    1.18    scan_range=100

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   range queries with aggregation, first run
1.02    1.00    1.00    1.01    0.97    0.96    0.97    0.97    read-only-count_range=1000
1.00    1.00    1.02    1.02    0.98    0.99    0.99    0.99    read-only-distinct_range=1000
1.01    1.00    1.03    1.03    1.00    1.01    1.01    1.01    read-only-order_range=1000
1.00    0.98    1.00    1.06    0.95    0.99    0.99    0.99    read-only_range=10000
1.00    1.00    1.00    1.00    1.00    0.98    0.98    0.98    read-only_range=100
1.00    1.01    1.01    1.00    1.01    0.98    0.99    0.98    read-only_range=10
1.01    1.00    1.02    1.01    1.00    0.99    0.98    0.98    read-only-simple_range=1000
1.00    1.00    1.01    1.00    0.99    0.98    0.98    0.98    read-only-sum_range=1000

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   range queries with aggregation, second run
1.03    1.02    1.02    1.03    0.97    0.97    0.97    0.98    read-only-count_range=1000
1.00    0.99    1.02    1.02    0.98    0.99    0.99    0.99    read-only-distinct_range=1000
1.00    0.99    1.02    1.04    1.02    1.01    1.01    1.02    read-only-order_range=1000
1.01    1.03    1.03    1.06    0.97    0.99    0.99    0.99    read-only_range=10000
0.99    1.00    1.00    1.01    1.00    0.99    0.99    0.99    read-only_range=100
0.99    1.00    1.00    1.00    1.01    0.99    1.00    0.99    read-only_range=10
1.00    0.99    1.01    1.00    0.99    0.98    0.98    0.99    read-only-simple_range=1000
1.00    1.00    1.01    1.01    0.99    0.98    0.98    0.98    read-only-sum_range=1000

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   writes, first run
1.00    1.08    1.08    1.05    1.25    1.24    1.23    1.20    delete_range=100
1.01    1.05    1.04    1.03    1.07    1.06    1.02    1.05    insert_range=100
1.00    1.06    1.07    1.07    1.10    1.09    1.08    1.07    read-write_range=100
1.00    1.07    1.08    1.07    1.13    1.13    1.11    1.11    read-write_range=10
0.99    1.04    1.04    0.90    1.43    1.43    1.41    1.43    update-index_range=100
1.00    1.09    1.08    1.08    1.11    1.15    1.06    1.12    update-inlist_range=100
1.00    1.05    1.05    1.04    1.35    1.34    1.34    1.35    update-nonindex_range=100
1.02    0.95    0.96    0.93    1.19    1.19    1.19    1.20    update-one_range=100
1.00    1.05    1.08    1.07    1.23    1.21    1.22    1.23    update-zipf_range=100
1.01    1.06    1.05    1.01    1.25    1.22    1.20    1.24    write-only_range=10000

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   writes, second run
1.00    1.06    1.07    1.07    1.26    1.23    1.23    1.24    delete_range=100
1.03    1.07    1.05    1.05    1.09    1.07    1.08    1.09    insert_range=100
1.01    1.07    1.08    1.07    1.11    1.10    1.10    1.09    read-write_range=100
0.99    1.04    1.06    1.07    1.13    1.11    1.11    1.12    read-write_range=10
0.99    1.02    1.04    0.87    1.44    1.40    1.41    1.40    update-index_range=100
1.00    1.11    1.12    1.09    1.17    1.14    1.16    1.22    update-inlist_range=100
1.01    1.04    1.06    1.03    1.36    1.33    1.35    1.34    update-nonindex_range=100
1.01    0.95    0.98    0.94    1.22    1.21    1.21    1.20    update-one_range=100
0.99    1.05    1.07    1.07    1.24    1.21    1.22    1.21    update-zipf_range=100
1.02    1.06    1.06    1.02    1.27    1.25    1.23    1.21    write-only_range=10000

October 11, 2025

Geoblocking Multiple Localities With Nginx

A few months back I wound up concluding, based on conversations with Ofcom, that aphyr.com might be illegal in the UK due to the UK Online Safety Act. I wrote a short tutorial on geoblocking a single country using Nginx on Debian.

Now Mississippi’s 2024 HB 1126 has made it illegal for essentially any web site to know a user’s e-mail address, or other “personal identifying information”, unless that site also takes steps to "verify the age of the person creating an account”. Bluesky wound up geoblocking Mississippi. Over on a small forum I help run, we paid our lawyers to look into HB 1126, and the conclusion was that we were likely in the same boat. Collecting email addresses put us in scope of the bill, and it wasn’t clear whether the LLC would shield officers (hi) from personal liability.

This blog has the same problem: people use email addresses to post and confirm their comments. I think my personal blog is probably at low risk, but a.) I’d like to draw attention to this legislation, and b.) my risk is elevated by being gay online, and having written and called a whole bunch of Mississippi legislators about HB 1126. Long story short, I’d like to block both a country and an individual state. Here’s how:

First, set up geoipupdate as before. Then, in /etc/nginx/conf.d/geoblock.conf, pull in the country and city databases, and map the countries and states you'd like to block to short strings explaining the applicable law. This creates the variables $geoblock_country_law and $geoblock_state_law.

geoip2 /var/lib/GeoIP/GeoLite2-Country.mmdb {
  $geoip2_data_country_iso_code country iso_code;
}

geoip2 /var/lib/GeoIP/GeoLite2-City.mmdb {
  $geoip2_data_state_name subdivisions 0 names en;
}

map $geoip2_data_country_iso_code $geoblock_country_law {
  GB      "the UK Online Safety Act";
  default "";
}

map $geoip2_data_state_name $geoblock_state_law {
  Mississippi "Mississippi HB 1126";
  default     "";
}

Create an HTML page to show to geoblocked IPs. I’ve put mine in /var/www/custom_errors/451.html. The special comments here are Server-Side Include (SSI) directives; they’ll insert the contents of the $geoblock_law variable from nginx, which we’ll set shortly.

<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Unavailable Due to
      <!--# echo var="geoblock_law" default=""-->
    </title>
  </head>
  <body>
    <h1>Unavailable Due to
      <!--# echo var="geoblock_law" default=""-->
    </h1>
</body>
</html>

Then, in /etc/nginx/sites-enabled/whatever.conf, add an error page for status code 451 (unavailable for legal reasons). In the main location block, check the $geoblock_country_law and $geoblock_state_law variables, and use them to return status 451, and set the $geoblock_law variable for the SSI template:

server {
  ...
  # Status 451 renders this page
  error_page 451 /451.html;
  location /451.html {
    ssi on;
    internal;
    root /var/www/custom_errors/;
  }

  location / {
    # If either geoblock variable is set, return status 451
    if ($geoblock_state_law != "") {
      set $geoblock_law $geoblock_state_law;
      return 451;
    }
    if ($geoblock_country_law != "") {
      set $geoblock_law $geoblock_country_law;
      return 451;
    }
  }
}

Test with nginx -t, and reload with service nginx reload, as usual.

Geoblocking is a bad experience in general. In Amsterdam and Frankfurt, I’ve seen my cell phone’s 5G connection and hotel WiFi improperly identified as being in the UK. I’m certain this is going to block people who aren’t in Mississippi either. If you don’t want to live in this world either, start calling your representatives to demand better legislation.

October 10, 2025

Academic chat: On PhD

This week, Aleksey and I met not to dissect a research paper, but to chat about "the process of PhD". I had recently written a post titled "The Invisible Curriculum of Research", where I framed research as an iceberg, with the small visible parts (papers, conferences) resting on the hidden 5 Cs:

  • Curiosity/Taste: what problems are worth solving.
  • Clarity: how to ask precise and abstracting questions.
  • Craft: writing, experimentation, presentation.
  • Community: collaboration and contribution.
  • Courage: resilience through setbacks.

Above is the video of our chat, with a lot of personal anecdotes and a few rants. But if you want to cut to the chase, the highlight reel is below.


What a PhD Really Produces

The real product of a PhD is not the thesis, but you, the researcher! The thesis is just the residue of this long internal transformation. Like martial arts, the training breaks you and rebuilds you into someone who sees and thinks differently. This transformation cannot be faked and you should take your time to grow your wings. But you can be effective about it.


Curiosity and taste

Taste blends curiosity, creativity, and judgment of what's important. Curiosity alone can lure you into many bottomless technical rabbit holes. Taste filters what matters and channels curiosity into focus. And you definitely need passion to sustain yourself through the ups and downs of the arduous PhD journey.


The serendipitous path to research

Many researchers stumble into research through chance encounters, unexpected opportunities, and detours. So it is worth keeping an open mind, noticing what sparks your curiosity and suits you best, and following it. Aleksey shares his own unlikely path to a PhD, which is well worth watching. I have written before about how I started, but here I go deeper into where those interests first took root.


Growing through friction and mentorship

Taste, curiosity, and confidence grow through friction. The best labs are loud, with debates spilling into hallways. When Aleksey, Aili, and I worked together, neighboring faculty sometimes complained about the noise, wondering why we were always arguing. But intellectual sparring sharpens your ideas. Research maturity comes from questioning, defending, and refining. In this type of hands-on, messy mentorship, taste, passion, and craft all rub off.


Asking good questions and abstracting well

Abstraction is the art of asking the right questions. The best questions cut away accidental complexity and get to the essence of the problem. Leslie Lamport's genius was exactly knowing what to ignore/abstract-away. "Craft is knowing how to work, and art is knowing when to stop."

By finding the right question/framing/abstraction, you can pivot a project that is not getting any lift into an impactful hit! (This I believe.)


The Craft of Research

Most research is unglamorous: debugging, writing, revising, rejections. But you gotta do the craft, and do it well, for your ideas to lift off. You need routines and rituals to keep you steady and improving. Aleksey's productivity routine involves daily 90-minute walks as his "thinking time". Thinking, for him, is a physical process. We used to walk a lot when we worked together, but somehow I have fallen off that wagon. My thinking time now comes through freewriting in Emacs or on my tablet, and arguing with myself on the page. We both agree, though, that talking/arguing with collaborators forces clarity and generates ideas.


On Courage and Resilience

Every researcher fails as much as (if not more than) they succeed. The researcher needs to endure failures and rejections. You need to keep showing up to write the next draft, rerun the next experiment, and submit again. Passion helps; without it, survival in research is unlikely. But you also need to make a habit of endurance. Courage also means questioning norms and pursuing ideas that may not yet be fashionable but feel true.

But sometimes (ah, Retroscope) you have to take the loss, cut your losses, and move on. Maybe you can return later at a more opportune time.


Top skills/qualities for a PhD

We discussed our picks for the top three skills needed for a successful PhD. For me, it is writing/communication, asking the right questions, and metacognition (knowing when to stop, reframe, or abstract; seeing the essence rather than surface detail). Reading skills came up very high in our discussion too. You can't outsource that to ChatGPT. People skills also matter: work well with your collaborators. Conferences and brutal rankings in academia can feel like Squid Game at times, but what truly matters is people, mentorship, and the craft itself.

What makes a bad researcher

Bad research habits are easy to spot: over-competition, turf-guarding, incremental work, rigidity, and a lack of intellectual flexibility. Bad science follows bad incentives, such as benchmarks over ideas and performance over understanding. These days the pressure to run endless evaluations has distorted the research and publishing process. Too many papers now stage elaborate experiments to impress reviewers instead of illuminating them with insights. Historically, the best work always stood on its own, through its simplicity and clarity.


Onboarding and Departmental Support

Advisor fit is crucial, and students should be free to explore before committing. Early rotations and cohort boot camps, which Aleksey mentioned are common in biomedical programs, help build both skills and faculty connections. Unfortunately, computer science still lacks this scaffolding. Industry treats onboarding as an investment, with structured mentorship, regular check-ins, and clear expectations. Academia, by contrast, seems to treat the absence of onboarding as a filtering mechanism. New PhD students are frequently left on their own for months, without direction, feedback, or a sense of belonging. Even small rituals (weekly meetings, mentorship pairings, consistent feedback) could change this and catch struggling or blocked students early rather than years later.

Open Source Is Not Just Code: It’s Integrity

The following blog is my personal opinion and view on the world and our company.  Open source is more than just code; it’s a philosophy. It’s about openness, honesty, integrity, and sharing in how we work and communicate, even when no one is watching. The saying “knowledge is power,” often credited to Francis Bacon, captures […]

A guide to ClickHouse® deployment options

Compare ClickHouse deployment models from self-hosted VMs to managed services. Learn about costs, performance, and operational requirements to choose the right deployment for your team.

How to stream Kafka topics to ClickHouse® in real-time

Learn how to stream Kafka topics to ClickHouse in real-time. Compare native Kafka engine, Kafka Connect, and managed services. Includes setup, optimization, and production best practices.

October 09, 2025

Advanced observability and troubleshooting with Amazon RDS event monitoring pipelines

AWS provides a wide range of monitoring solutions for your Amazon RDS and Amazon Aurora instances, such as Amazon CloudWatch, Amazon CloudWatch Database Insights, and AWS CloudTrail. Amazon RDS event monitoring pipelines make troubleshooting operational events like reboots, errors, and failovers more efficient. In this post, we present a solution to get a head start on troubleshooting by sending an email after a reboot or failover with the last 10 minutes of important CloudWatch metrics, top queries, and related API calls performed on the instance.

A Guide to Redis Performance Best Practices

Getting started with Redis? Read this first. Redis makes it very easy to get started, but if you want Redis to keep up when things get busy, there’s a bit more to take care of. Anyone can spin up a test server, but production workloads demand reliability and performance planning. That’s where the details matter. […]

October 08, 2025

OLAP databases: what's new and what's best in 2025

Compare the leading OLAP databases including ClickHouse, Apache Druid, Pinot, and StarRocks. Learn what changed in 2025 with vector search, lakehouse formats, and serverless ingestion becoming standard features.

A quick review of different ClickHouse® MCP servers

There's more than one ClickHouse MCP Server. We compare popular ClickHouse MCP server implementations including the official clickhouse-mcp, Tinybird MCP Server, and Altinity MCP.

OLTP vs OLAP: when to use each (and when to use both)

Learn the key differences between OLTP and OLAP databases, when to use each type, and how modern architectures combine them to serve both operational and analytical workloads.

Tiga: Accelerating Geo-Distributed Transactions with Synchronized Clocks

This paper (to appear at SOSP'25) is one of the latest efforts exploring the dream of a one-round commit for geo-replicated databases. TAPIR tried to fuse concurrency control and consensus into one layer. Tempo and Detock went further using dependency graphs. 

Aleksey and I did our usual thing. We recorded our first blind read of the paper. I also annotated a copy while reading, which you can access here.

We liked the paper overall. This is a thoughtful piece of engineering, not a conceptual breakthrough. It uses future timestamps to align replicas in a slightly new way, and the results are solid. But the presentation needs refinement and stronger formalization. (See our livereading video about how these problems manifested themselves.) Another study to add to my survey, showing how, with modern clocks, time itself is becoming a coordination primitive.


The Big Idea

Tiga claims to do strictly serializable, fault-tolerant transactions in one wide-area round trip (1-WRTT) most of the time by predicting/tracking the future commit times of the transactions. Instead of waiting for messages to arrive and then ordering them, Tiga assigns each transaction a future timestamp at submission.

If all goes well, the transaction arrives before that timestamp at all replicas, waits until the local clock catches up, and then executes in order.

There is no dependency graph to track. Synchronized clocks and prediction of message flight times promise to still give us strict serializability at 1-WRTT for most cases. Well, at least for more cases than the competition. You don't need to outrun the bear, just the other campers.

This is essentially the Deadline-Ordered Multicast (DOM) idea from the Nezha paper. Figures 1–2 in the paper show the contrast with Tapir. Tapir commits optimistically and fails when transactions arrive in different orders at different regions. Tiga fixes this by giving both transactions predetermined timestamps: all servers delay execution until their clocks reach those timestamps, ensuring consistent order.

Tiga also merges consensus and concurrency control into a single timestamp-based protocol. I think "Unanimous 2PC: Fault-tolerant Distributed Transactions Can be Fast and Simple" describes a very relevant protocol to compare with here, but unfortunately Tiga fails to cite U2PC.


Algorithm in a Nutshell

In the best case (steps 1-3), Tiga commits a transaction in 1-WRTT, essentially by predicting the correct global order instead of discovering it. If the prediction falters, steps 4-6 reconcile timestamps and logs, recovering correctness at the cost of another half to one full round trip.

1. Timestamp Initialization: The coordinator uses the measured one-way delays (OWDs) to each replica to predict when the transaction should arrive everywhere. It assigns the transaction a future timestamp t = send_time + max_OWD + Δ, where Δ is a small safety headroom (≈10 ms). This t represents the intended global serialization time. The coordinator then multicasts the transaction T and its timestamp to all shards.
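Here is a rough sketch of that timestamp assignment (my own illustration of the paper's description, not code from Tiga), assuming the coordinator keeps a recent one-way delay estimate per replica:

import time

DELTA_MS = 10  # safety headroom, roughly 10 ms as described in the paper

def assign_future_timestamp(owd_ms_by_replica):
    # Predict when the transaction should have arrived everywhere and use
    # that point in the future as its intended global serialization time.
    send_time_ms = time.time() * 1000
    return send_time_ms + max(owd_ms_by_replica.values()) + DELTA_MS

# illustrative one-way delays (ms) to three regions
t = assign_future_timestamp({"us-east": 2.0, "eu-west": 38.0, "asia-east": 95.0})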

2. Optimistic Execution: Upon receipt, each server buffers T in a priority queue sorted by timestamp. When the local clock reaches t, followers simply release T (they do not execute yet) while leaders execute T optimistically, assuming their local timestamp ordering will hold. The green bars in Figure 3 mark this optimistic execution phase.
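And a matching sketch of the server-side buffer (again my own illustration): transactions wait in a timestamp-ordered queue until the local clock reaches their assigned time, at which point followers release them and leaders execute them optimistically.

import heapq
import itertools
import time

class TimestampOrderedQueue:
    # Hypothetical per-server buffer: hold transactions until the local clock
    # reaches their assigned timestamp, then release them in timestamp order.
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # tie-breaker for equal timestamps

    def add(self, ts_ms, txn):
        heapq.heappush(self._heap, (ts_ms, next(self._seq), txn))

    def release_ready(self):
        now_ms = time.time() * 1000
        ready = []
        while self._heap and self._heap[0][0] <= now_ms:
            ready.append(heapq.heappop(self._heap)[2])
        return ready   # leaders execute these optimistically; followers just release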

3. Quorum Check of Fast Path: The coordinator collects fast-replies from a super quorum on each shard (the leader + f + ⌈f / 2⌉ followers). If the replies agree on the same log hash and timestamp, T is fast-committed. This completes the ideal 1-WRTT commit: half a round from coordinator to replicas, half back. (The other leader-inclusive paper I remember is Nezha, prior work to this one.)
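For reference, the fast-path super quorum size per shard works out as below (a direct reading of the leader + f + ⌈f/2⌉ description above, sketched by me):

import math

def fast_quorum_size(f):
    # The leader plus f + ceil(f/2) followers must send matching fast-replies.
    return 1 + f + math.ceil(f / 2)

# With f=1 (3 replicas) the fast path needs all 3; with f=2 (5 replicas) it needs 4.
sizes = {f: fast_quorum_size(f) for f in (1, 2, 3)}   # {1: 3, 2: 4, 3: 6}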

4. Timestamp Agreement: Sometimes leaders execute with slightly different timestamps due to delays or clock drift. They then exchange their local timestamps to compute a common agreed value (the maximum). If all timestamps already match, the process costs 0.5 WRTT. If some leaders lag, another half round (total 1-WRTT) ensures alignment. If any executed with an older timestamp, that execution is revoked and T is re-executed at the new agreed time (slow path). This phase corresponds to the curved inter-leader arrows in the figure.

5. Log Synchronization: After leaders finalize timestamps, they propagate the consistent log to their followers. Followers update their logs to match the leader’s view and advance their sync-point. This ensures replicas are consistent before commit acknowledgment. The figure shows this as another 0.5 WRTT of leader-to-follower synchronization.

6. Quorum Check of Slow Path: Finally, the coordinator verifies that enough followers (≥ f) have acknowledged the synchronized logs. Once that quorum is reached, T is committed via the slow path. Even in this fallback case, the total latency stays within 1.5–2 WRTTs.

I am skipping details and optimizations. Locating the leaders of many shards in the same datacenter/AZ is an optimization that improves the latency of timestamp agreement (an idea this paper seems to have borrowed from the recent OSDI'25 Mako paper). This then opens the door for a preventive flavor of the Tiga workflow, as shown in Figure 6.

Evaluation Highlights

Running on Google Cloud across three regions, Tiga outperforms Janus, Tapir, and Calvin+ by 1.3–7x in throughput, with 1.4–4x lower latency. In low-contention microbenchmarks, it easily sustains 1-WRTT commits. Under high contention, Calvin+ catches up somewhat, but with 30% higher latency. Calvin+ replaces Calvin's Paxos-based consensus layer with Nezha, saving at least 1 WRTT when committing transactions. A lot of work must have gone into these evaluation results.