Postgres 18 beta1: small server, cached Insert Benchmark
I recently published results for Postgres 18 beta1 on a small server using sysbench with cached and IO-bound workloads. This post has results for the Insert Benchmark on a small server with a cached workload and low concurrency.
tl;dr - for 17.5 vs 18 beta1
- the l.i1 benchmark step (write-only with inserts and deletes) was ...
- 5% slower in 18 beta1 with io_method=sync
- ~10% slower in 18 beta1 with io_method=worker or io_method=io_uring
- the point query benchmark steps (qp100, qp500, qp1000) were ...
- 1% or 2% slower in 18 beta1 when using io_method=sync or io_method=worker
- ~6% slower in 18 beta1 when using io_method=io_uring
tl;dr - for 14.0 vs 17.5
- l.x (create index) is ~1.2X faster in 17.5 vs 14.0
- l.i1, l.i2 (write-only) are ~5% slower in 17.5 vs 14.0
- qp100, qp500, qp1000 (point query) are 1% to 3% slower in 17.5 vs 14.0
The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM and one NVMe device for the database. The OS has been updated to Ubuntu 24.04 -- I used 22.04 prior to that. More details on it are here. The benchmark used one of three configuration files:
- conf.diff.cx10b_c8r32
- uses io_method='sync' to match Postgres 17 behavior
- conf.diff.cx10c_c8r32
- uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
- conf.diff.cx10d_c8r32
- uses io_method='io_uring' to do async IO via io_uring
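As an illustration, the three configurations differ only in the new async IO settings (the file names above are mine; the parameter values match the descriptions in this post):

```
# conf.diff.cx10b_c8r32: synchronous IO, matches Postgres 17 behavior
io_method = 'sync'

# conf.diff.cx10c_c8r32: async IO via a thread pool
# (io_workers = 16 was used here, although 16 turned out to be too large)
io_method = 'worker'
io_workers = 16

# conf.diff.cx10d_c8r32: async IO via io_uring (requires io_uring support)
io_method = 'io_uring'
```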
The Insert Benchmark was run in these steps:
- l.i0
- insert 20 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
- l.x
- create 3 secondary indexes per table. There is one connection per client.
- l.i1
- use 2 connections/client. One inserts 40M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
- l.i2
- like l.i1 but each transaction modifies 5 rows (small transactions) and 10M rows are inserted and deleted per table.
- after l.i2 finishes, wait for X seconds to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
- qr100
- use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
- qp100
- like qr100 except uses point queries on the PK index
- qr500
- like qr100 but the insert and delete rates are increased from 100/s to 500/s
- qp500
- like qp100 but the insert and delete rates are increased from 100/s to 500/s
- qr1000
- like qr100 but the insert and delete rates are increased from 100/s to 1000/s
- qp1000
- like qp100 but the insert and delete rates are increased from 100/s to 1000/s
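The insert and delete clients in the qr* and qp* steps above must sustain a fixed target rate, which is why every system that meets the SLA does the same amount of write work. A minimal sketch of how such a client can be paced (a hypothetical helper, not the actual benchmark code):

```python
def paced_schedule(target_rate, duration_s, start=0.0):
    """Return the timestamps (seconds from start) at which a client paced
    at target_rate operations/second should issue each operation."""
    interval = 1.0 / target_rate
    n_ops = int(target_rate * duration_s)
    return [start + i * interval for i in range(n_ops)]

# For qr100 the insert client is paced at 100 inserts/s for 1800 seconds,
# so every system that sustains the rate does the same 180,000 inserts.
schedule = paced_schedule(target_rate=100, duration_s=1800)
print(len(schedule))              # 180000
print(schedule[1] - schedule[0])  # 0.01
```

A real client would sleep until each scheduled timestamp and flag an SLA failure when it falls behind.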
Relative QPS is the throughput for the version tested divided by the throughput for the base version. When relative QPS is > 1.0 then performance improved. When it is < 1.0 then there is a regression. The Q in relative QPS measures:
- insert/s for l.i0, l.i1, l.i2
- indexed rows/s for l.x
- range queries/s for qr100, qr500, qr1000
- point queries/s for qp100, qp500, qp1000
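As a worked example (the numbers are invented for illustration), relative QPS is just a ratio:

```python
def relative_qps(qps_test, qps_base):
    """Relative QPS > 1.0 means the tested version improved;
    < 1.0 means a regression versus the base version."""
    return qps_test / qps_base

# Invented example: if 18 beta1 did 9,400 point queries/s where
# 17.5 did 10,000/s, that is a ~6% regression.
r = relative_qps(9400.0, 10000.0)
print(round(r, 2))  # 0.94
```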
Results: 17.5 vs 18 beta1
- the l.i0 (initial load) step was ...
- 1% or 2% faster in 18 beta1 vs 17.5
- the create index step (l.x) was ...
- as fast with 18 beta1 as with 17.5 when using io_method=sync
- 2% slower in 18 beta1 when using the new io_method=worker or io_method=io_uring
- the l.i1 step was ...
- 5% slower in 18 beta1 with io_method=sync
- ~10% slower in 18 beta1 with io_method=worker or io_method=io_uring
- the range query steps (qr100, qr500, qr1000) were ...
- 1% to 3% slower in 18 beta1
- the point query steps (qp100, qp500, qp1000) were ...
- 1% or 2% slower in 18 beta1 when using io_method=sync or io_method=worker
- ~6% slower in 18 beta1 when using io_method=io_uring
- the l.i1 step does inserts and deletes as fast as possible with 50 rows per transaction. The regressions were smaller for the l.i2 step that only changes 5 rows per transaction.
- From vmstat and iostat metrics 18 beta1 uses more CPU per operation (see cpupq here)
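My reading of the cpupq metric (an assumption based on the name, not a reproduction of the author's scripts) is CPU time per operation derived from vmstat: user plus system CPU seconds over the step, divided by operations completed. A sketch under that assumption:

```python
def cpu_per_op_us(user_cpu_s, sys_cpu_s, operations):
    """Approximate CPU cost per operation in microseconds, from
    cumulative user+system CPU seconds measured over a benchmark step.
    The formula is an assumption about how cpupq is defined."""
    return (user_cpu_s + sys_cpu_s) * 1_000_000 / operations

# Invented numbers: a version that burns more CPU for the same number
# of operations shows up as a larger cpupq value.
print(cpu_per_op_us(user_cpu_s=500.0, sys_cpu_s=100.0, operations=10_000_000))  # 60.0
```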
Results: 14.0 vs 17.5
- l.i0 (the initial load) is stable
- l.x (create index) is ~1.2X faster in 17.5 vs 14.0
- l.i1, l.i2 (write-only) are ~5% slower in 17.5 vs 14.0
- qr100, qr500, qr1000 (range query) are similar between 17.5 and 14.0
- qp100, qp500, qp1000 (point query) are 1% to 3% slower in 17.5 vs 14.0