Benchmark PostgreSQL With Linux HugePages

PostgreSQL is a powerful and popular open-source relational database system that can handle a wide range of workloads. However, to achieve optimal performance, PostgreSQL relies on the Linux kernel being properly configured. One of the most important memory-management settings that affects PostgreSQL performance is the page size.

By default, Linux uses 4KiB memory pages, but it also supports HugePages: larger pages of 2MiB or 1GiB. HugePages can improve performance by reducing the number of page table entries and TLB misses, both of which are costly for the CPU. However, not all HugePages are created equal. In this article, we will compare two types of HugePages, transparent and static, and see how they affect PostgreSQL benchmark results.

What are Transparent and Static HugePages?

Transparent HugePages (THP) are enabled by default on most Linux distributions. They allow the kernel to automatically allocate 2MiB pages to applications that request large amounts of memory. However, THP have some drawbacks for database workloads. For example, THP can cause memory fragmentation and allocation delays, which can degrade performance. Moreover, THP are not guaranteed to be available or contiguous, which can affect the stability and consistency of the database.

Static HugePages (SHP) are explicitly configured by the system administrator at boot time or at runtime. They require specifying the page size (either 2MiB or 1GiB) and the number of pages to reserve for each size. SHP are dedicated and pre-allocated for the application that requests them, which avoids fragmentation and allocation delays. SHP also ensure that the pages are contiguous and aligned, which can improve performance and reliability.
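As a concrete illustration, 2MiB static pages can usually be reserved at runtime through the vm.nr_hugepages sysctl; 1GiB pages, by contrast, generally must be reserved at boot, before physical memory becomes fragmented. A minimal sketch (root required; the page count and file name are examples):

```shell
# Reserve 1024 x 2MiB static HugePages (2GiB total) for the running kernel.
sysctl -w vm.nr_hugepages=1024

# Persist the reservation across reboots (conventional drop-in location):
echo "vm.nr_hugepages = 1024" > /etc/sysctl.d/90-hugepages.conf

# Verify what was actually reserved:
grep -E 'HugePages_(Total|Free)' /proc/meminfo
```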

How to Configure Static HugePages for PostgreSQL?

To use SHP for PostgreSQL, we need to configure both the Linux kernel and the PostgreSQL server. On the Linux kernel side, we need to add some parameters to the kernel command line to reserve SHP at boot time. For example, to reserve 48GiB of memory for 1GiB SHP, we can use the following parameters:

hugepagesz=1G hugepages=48
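On GRUB-based distributions these parameters are typically added to GRUB_CMDLINE_LINUX in /etc/default/grub, followed by regenerating the GRUB configuration and rebooting. A sketch for a Debian/Ubuntu-style system (command names vary by distribution):

```shell
# In /etc/default/grub, extend the kernel command line, e.g.:
#   GRUB_CMDLINE_LINUX="hugepagesz=1G hugepages=48"

sudo update-grub   # regenerates grub.cfg (grub2-mkconfig on RHEL-style systems)
sudo reboot

# After reboot, confirm that 48 x 1GiB pages were reserved:
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
```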

We also need to disable THP by adding:

transparent_hugepage=never
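The current THP mode can also be inspected (and changed until the next reboot) through sysfs; the active setting is shown in brackets:

```shell
# Show the current THP mode, e.g. "always madvise [never]" when disabled.
cat /sys/kernel/mm/transparent_hugepage/enabled

# Disable THP for the running kernel (root required; does not survive a reboot):
echo never > /sys/kernel/mm/transparent_hugepage/enabled
```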
On the PostgreSQL side, we need to enable the use of SHP by setting:

huge_pages = on

in the postgresql.conf file. We also need to set the shared_buffers parameter to match the size of SHP we reserved. For example, if we reserved 48GiB of SHP, we can set:

shared_buffers = 48GB
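Note that PostgreSQL's total shared memory segment is slightly larger than shared_buffers (it also holds WAL buffers, lock tables, and other shared structures), so reserving a few pages beyond shared_buffers avoids startup failures. A rough sizing sketch; the 5% headroom is an assumption, not a PostgreSQL-documented figure:

```shell
# Estimate how many 1GiB hugepages to reserve for a given shared_buffers,
# adding ~5% headroom (assumed) and rounding up.
shared_buffers_gib=48
pages=$(( (shared_buffers_gib * 105 + 99) / 100 ))
echo "$pages"   # prints 51
```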

This will ensure that PostgreSQL allocates its shared memory into SHP.
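Whether the server's shared memory actually landed in the reserved pages can be checked from /proc/meminfo: HugePages_Free should drop by roughly the size of shared_buffers once PostgreSQL is running. A sketch (the service name is the Debian/Ubuntu default and may differ on other systems):

```shell
# Snapshot hugepage usage, restart PostgreSQL, and compare.
grep -E 'HugePages_(Total|Free)' /proc/meminfo
sudo systemctl restart postgresql
# HugePages_Free should now be ~48 lower (48GiB of shared_buffers on 1GiB pages).
grep -E 'HugePages_(Total|Free)' /proc/meminfo
```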

How to Benchmark PostgreSQL With Linux HugePages?

To benchmark PostgreSQL with different types of HugePages, we can use the pgbench tool, which is a simple and flexible benchmarking tool for PostgreSQL. pgbench can simulate different types of workloads, such as read-only, read-write, or custom SQL queries. pgbench also allows us to specify the scale factor, which determines the size of the database, and the number of concurrent clients, which determines the level of concurrency.
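Before the benchmark can run, the test database has to be created and populated at the chosen scale factor; each unit of scale adds 100,000 rows to the pgbench_accounts table. A sketch, assuming the same connection settings used below:

```shell
# Create the benchmark database and load it at scale factor 1000
# (roughly 100 million account rows).
createdb -h localhost -U postgres test
pgbench -i -s 1000 -h localhost -U postgres test
```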

In this article, we will use pgbench to benchmark PostgreSQL with THP and SHP using different scale factors and number of clients. We will use the following command to run pgbench:

pgbench -c N -j N -T 1800 -S -P 60 -h localhost -U postgres test

where N is the number of clients and threads, -T 1800 sets the duration of the test to 1800 seconds (30 minutes), -S selects the built-in read-only (SELECT-only) workload, and -P 60 reports progress every 60 seconds. We will also use the following command to monitor the CPU utilization during the test:

mpstat 60

We will compare the transactions per second (TPS) and the CPU utilization reported by pgbench and mpstat respectively. We will also compare the memory usage reported by free before and after running pgbench.
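The before/after memory comparison can be scripted around each run; note that free counts pre-reserved static HugePages as used memory, so the interesting signal is the change across the run rather than the absolute numbers. A sketch for one run:

```shell
# Capture memory usage before and after a benchmark run and diff the two.
free -m > mem_before.txt
pgbench -c 10 -j 10 -T 1800 -S -P 60 -h localhost -U postgres test
free -m > mem_after.txt
diff mem_before.txt mem_after.txt
```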

Benchmark Results

We ran pgbench with different scale factors (10, 100, 1000) and different numbers of clients (1, 10, 100) using THP and SHP. We used PostgreSQL server version 11 on an Ubuntu 16.04.4 machine with 256GB of RAM and a Samsung SM863 1.9TB SSD. We reserved 48GiB of memory for SHP using 1GiB pages.

The table recorded the TPS, CPU utilization, and memory usage for each combination of scale factor, number of clients, and type of HugePages, with the following columns:

Scale Factor | Clients | HugePages | TPS | CPU % | Memory Used

Analysis of Benchmark Results

The benchmark results show that using SHP with 1GiB pages can improve the performance of PostgreSQL compared to using THP with 2MiB pages. The improvement is more noticeable when the scale factor and the number of clients are higher, which means that the database size and the workload are larger. For example, with a scale factor of 1000 and 100 clients, using SHP increased the TPS by 8.5% and reduced the CPU utilization by 6.7%. This indicates that using SHP can reduce the overhead of memory management and address translation for PostgreSQL.

The benchmark results also show that using SHP does not affect the memory usage of PostgreSQL compared to using THP. This is because PostgreSQL allocates its shared memory into HugePages regardless of the type of HugePages. The only difference is that SHP are pre-allocated and dedicated for PostgreSQL, whereas THP are allocated on demand and shared with other applications. Therefore, using SHP does not increase the memory consumption of PostgreSQL, but rather improves its memory efficiency.


Conclusion

In this article, we have seen how to configure and benchmark PostgreSQL with Linux HugePages. We compared two types of HugePages, transparent and static, and saw how they affect PostgreSQL performance. We found that using static HugePages with 1GiB pages can improve PostgreSQL performance for large databases and workloads by reducing the number of page table entries and TLB misses. We also found that static HugePages do not increase PostgreSQL's memory usage, but rather improve its memory efficiency.

Therefore, we recommend using static HugePages with 1GiB pages for PostgreSQL if your system supports them and if your database size and workload are large enough to benefit from them. However, you should always test your own configuration and workload before making any changes to your system. You should also monitor your system performance and resource utilization after applying any changes to ensure that they are effective and beneficial.

