# Queue Drain Fixed Backlog
`queue.drain_fixed_backlog` is the first curated queue benchmark scenario for pgqrs.
It measures how fast a pre-populated queue can be drained under a sweep of:
- consumer count
- dequeue batch size
## Question
For a fixed pre-populated backlog, how do throughput, completion time, and latency vary with consumer count and dequeue batch size?
## Why This Matters
This scenario isolates the consumer side of the queue.
That makes it useful for:
- understanding how well a backend scales under drain pressure
- separating batch-size effects from concurrency effects
- identifying whether higher concurrency improves useful work or only adds contention
## Setup

The current published baselines use:

- Rust executor
- release mode
- `prefill_jobs = 50000`
- compatibility profile
- variables:
    - `consumers = [1, 2, 4]`
    - `dequeue_batch_size = [1, 10, 50]`
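The two swept variables produce a 3 × 3 grid of runs per backend. A minimal sketch of enumerating that grid (the dict layout here is illustrative, not the pgqrs config format; only the variable names and values come from this page):

```python
from itertools import product

# Sweep variables from the scenario setup above.
consumers = [1, 2, 4]
dequeue_batch_size = [1, 10, 50]
prefill_jobs = 50000

# Each (consumers, batch_size) pair is one benchmark run per backend.
runs = [
    {"consumers": c, "dequeue_batch_size": b, "prefill_jobs": prefill_jobs}
    for c, b in product(consumers, dequeue_batch_size)
]

print(len(runs))  # 9 runs per backend
```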
## Primary Findings

### PostgreSQL

PostgreSQL scales with both consumers and batch size.

- At `batch_size = 1`, throughput improves from 149.5 msg/s to 575.0 msg/s as consumers rise from 1 to 4 (3.85x).
- At `batch_size = 10`, throughput improves from 1603.1 msg/s to 5249.6 msg/s (3.27x).
- At `batch_size = 50`, throughput improves from 6817.1 msg/s to 20175.8 msg/s (2.96x).
- At 1 consumer, increasing batch size from 1 to 50 improves throughput from 149.5 msg/s to 6817.1 msg/s (45.59x).
- At 4 consumers, increasing batch size from 1 to 50 improves throughput from 575.0 msg/s to 20175.8 msg/s (35.09x).
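The consumer-scaling factors can be recomputed directly from the throughput pairs (a sanity-check sketch; small differences in the last digit come from the published figures being rounded to one decimal):

```python
# PostgreSQL throughput (msg/s) at 1 and 4 consumers, per batch size,
# copied from the findings above.
pg_throughput = {
    1:  (149.5, 575.0),
    10: (1603.1, 5249.6),
    50: (6817.1, 20175.8),
}

for batch, (one_consumer, four_consumers) in pg_throughput.items():
    speedup = four_consumers / one_consumer
    print(f"batch_size={batch}: {speedup:.2f}x from 1 -> 4 consumers")
```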
### SQLite

SQLite benefits strongly from larger batch sizes, but does not scale with more consumers in this scenario.

- At `batch_size = 1`, throughput changes from 261.0 msg/s to 247.0 msg/s as consumers rise from 1 to 4 (0.95x).
- At `batch_size = 10`, throughput changes from 2709.0 msg/s to 2422.7 msg/s (0.89x).
- At `batch_size = 50`, throughput changes from 12630.3 msg/s to 11232.9 msg/s (0.89x).
- At 1 consumer, increasing batch size from 1 to 50 improves throughput from 261.0 msg/s to 12630.3 msg/s (48.40x).
- At 4 consumers, increasing batch size from 1 to 50 improves throughput from 247.0 msg/s to 11232.9 msg/s (45.47x).
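Because the backlog is fixed at 50000 jobs, throughput translates directly into expected drain time. A back-of-envelope calculation using the single-consumer SQLite figures above (illustrative only; real runs include setup and polling overhead):

```python
prefill_jobs = 50_000

# SQLite throughput (msg/s) at 1 consumer, per batch size, from the findings above.
sqlite_single_consumer = {1: 261.0, 10: 2709.0, 50: 12630.3}

for batch, rate in sqlite_single_consumer.items():
    # Idealized drain time: backlog size divided by sustained throughput.
    print(f"batch_size={batch}: ~{prefill_jobs / rate:.1f} s to drain")
```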
## Latency Behavior

### PostgreSQL

PostgreSQL latency stays comparatively flat as consumer count rises.

- p95 dequeue latency rises from 8.78 ms to 11.16 ms at `batch_size = 1` (1.27x).
- p95 dequeue latency rises from 10.09 ms to 13.48 ms at `batch_size = 50` (1.34x).
- p95 archive latency rises from 1.13 ms to 2.64 ms at `batch_size = 1` (2.33x).
- p95 archive latency rises from 1.55 ms to 4.23 ms at `batch_size = 50` (2.73x).
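The latency multipliers follow from the endpoint values (a quick recomputation sketch; tiny discrepancies in the last digit, such as 2.34 vs the published 2.33, come from the ms figures being rounded before publication):

```python
# PostgreSQL p95 latency (ms) at 1 vs 4 consumers, from the figures above.
pg_p95_ms = {
    ("dequeue", 1):  (8.78, 11.16),
    ("dequeue", 50): (10.09, 13.48),
    ("archive", 1):  (1.13, 2.64),
    ("archive", 50): (1.55, 4.23),
}

for (op, batch), (at_1, at_4) in pg_p95_ms.items():
    print(f"p95 {op} @ batch_size={batch}: {at_4 / at_1:.2f}x")
```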
### SQLite

SQLite latency rises sharply as more consumers are added, even though throughput does not improve.

- p95 dequeue latency rises from 7.13 ms to 21.06 ms at `batch_size = 1` (2.96x).
- p95 dequeue latency rises from 6.88 ms to 22.02 ms at `batch_size = 50` (3.20x).
- p95 archive latency rises from 0.15 ms to 14.30 ms at `batch_size = 1` (98.64x).
- p95 archive latency rises from 0.24 ms to 15.20 ms at `batch_size = 50` (64.15x).
## How To Interpret This

The current baselines suggest:
- PostgreSQL is the backend that scales with concurrency for this queue scenario.
- SQLite is functional and predictable, but extra consumers mostly add contention rather than throughput.
- Batch size is an important lever for both backends.
This should be read as scenario behavior, not as a universal backend ranking.
SQLite remains useful for embedded, test, and low-operational-overhead use cases even when it is not the scalable choice for multi-consumer drain workloads.
## Artifacts
Curated baselines used for this page:
To explore runs interactively:
## Turso Status
Turso support for this scenario is still a work in progress.
The current docs intentionally do not publish Turso baseline guidance yet.