The intention of this post is to help you meet your go-live production deadline by configuring huge pages for MySQL on Red Hat Enterprise Linux / CentOS.
Please remember that huge pages are only supported by InnoDB, so only the InnoDB buffer pool will be allocated in that memory segment. The MyISAM storage engine uses normal memory allocation.
Advantages of using huge pages:
- In general, working with huge pages gives better memory management.
- The page table will be smaller, so the kernel has fewer entries to deal with (a quick way to check this is shown after this list).
- Translation Lookaside Buffer (TLB) misses will be reduced.
- As huge pages cannot be swapped out, page table lookups for them can be skipped.
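As a quick sanity check of the page table point, you can watch the kernel's page-table footprint before and after moving the buffer pool to huge pages; the PageTables field in /proc/meminfo reports the memory currently spent on page tables:
$ grep PageTables /proc/meminfo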
Step 1: Check your huge page size
$ grep Hugepagesize /proc/meminfo
Hugepagesize: 2048 kB
By default, the huge page size is 2MB.
Step 2: Calculate MySQL Maximum Memory Usage
Calculate the maximum GLOBAL BUFFER memory usage in bytes, which is the sum of the global buffers and Performance Schema memory:
S = (query_cache_size + table_open_cache + innodb_buffer_pool_size + innodb_log_file_size + performance_schema.memory) + 10%
Get the values of these variables with the following commands in your mysql shell:
show variables like 'query_cache_size';
show variables like 'table_open_cache';
show variables like 'innodb_buffer_pool_size';
show variables like 'innodb_log_file_size';
Convert the values to bytes and add 10% in case MySQL needs to allocate more memory from huge pages. If InnoDB exhausts the configured huge pages, the engine falls back to the conventional memory pool.
Please note that if total memory usage exceeds 85%–90% of your physical RAM, you could experience bottlenecks and resource contention, and your system could start swapping pages to disk, which degrades overall performance.
In this example we assume innodb_buffer_pool_size is 8 GB and the sum of the other variables is 2 GB, so S = 10 GB + 10% = 11 GB = 11811160064 bytes.
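If you prefer to let the server do the arithmetic, the small shell sketch below queries the same variables and adds the 10% margin. It assumes the mysql client can connect with your credentials, and, like the formula above, it leaves the Performance Schema share inside the 10% margin (note that query_cache_size no longer exists in MySQL 8.0; drop it there):
S=$(mysql -NBe "SELECT CEIL((@@query_cache_size + @@table_open_cache + @@innodb_buffer_pool_size + @@innodb_log_file_size) * 1.10)")
echo "S = ${S} bytes"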
Step 3: Get the total number of huge pages needed
Dividing S (in bytes) by 1024 gives kilobytes, and dividing again by the 2048 kB page size gives the number of 2 MB huge pages:
n = S / 1024 / 2048
n = 11811160064 / 1024 / 2048 = 5632
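Continuing the shell sketch from Step 2 (it reuses the S variable computed there), you can derive the page count from the actual Hugepagesize instead of hard-coding 2048 kB, rounding up so you never reserve too few pages:
HPS_KB=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)   # huge page size in kB
N=$(( (S + HPS_KB * 1024 - 1) / (HPS_KB * 1024) ))        # round up to whole pages
echo "vm.nr_hugepages = ${N}"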
Step 4: Get the total number of normal pages
p = S / 4096
p = 11811160064 / 4096 = 2883584
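The normal page size is almost always 4096 bytes on x86_64, but you can confirm it with getconf and compute p the same way (again reusing the S variable from the Step 2 sketch):
PAGE_SIZE=$(getconf PAGESIZE)             # normally 4096 bytes
P=$(( (S + PAGE_SIZE - 1) / PAGE_SIZE ))  # round up to whole pages
echo "kernel.shmall = ${P}"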
Step 5: Get the largest continuous block of memory InnoDB can use in bytes
Get the innodb_buffer_pool_instances variable value.
b = innodb_buffer_pool_size / innodb_buffer_pool_instances
We assume innodb_buffer_pool_instances = 8, so b = 8 GB / 8 = 1 GB = 1 * 1024 * 1024 * 1024 = 1073741824 bytes.
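The same division can be done in one query; the sketch below assumes the mysql client credentials from the Step 2 sketch and uses MySQL's integer division operator (DIV):
B=$(mysql -NBe "SELECT @@innodb_buffer_pool_size DIV @@innodb_buffer_pool_instances")
echo "kernel.shmmax = ${B}"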
Step 6: Configure HugePages
Get your mysql group ID:
# id mysql
uid=495(mysql) gid=491(mysql) groups=491(mysql)
Edit your /etc/sysctl.conf with these values:
vm.hugetlb_shm_group = gid
kernel.shmmax = b
kernel.shmall = p
vm.nr_hugepages = n
In this example:
vm.hugetlb_shm_group = 491
kernel.shmmax = 1073741824
kernel.shmall = 2883584
vm.nr_hugepages = 5632
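If you have kept the shell variables from the previous sketches (B, P, N), the lines below append the computed values to /etc/sysctl.conf and load them immediately (run as root). Bear in mind that vm.nr_hugepages may not be fully honoured on a long-running system because physical memory can be too fragmented, which is one reason Step 8 recommends a reboot:
GID=$(id -g mysql)                # mysql group id from the id command above
cat >> /etc/sysctl.conf <<EOF
vm.hugetlb_shm_group = ${GID}
kernel.shmmax = ${B}
kernel.shmall = ${P}
vm.nr_hugepages = ${N}
EOF
sysctl -p                         # reload; verify with: sysctl vm.nr_hugepages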
Step 7: Edit my.cnf
[mysqld]
large-pages
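To double-check that the option will actually be read from my.cnf, you can print the effective [mysqld] defaults with my_print_defaults, which ships with MySQL:
$ my_print_defaults mysqld | grep -i large
--large-pages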
Step 8: Reboot your Linux box
Step 9: Verify correct implementation
# grep -i huge /proc/meminfo
HugePages_Total: 5632
Check your mysqld.log to confirm that InnoDB allocated the buffer pool from huge pages. If it failed, you will notice the following message: Warning: Using conventional memory pool.
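You can also grep the error log directly; on RHEL/CentOS the default location is usually /var/log/mysqld.log (adjust the path if your log-error setting differs). No output means InnoDB did not fall back to the conventional memory pool:
# grep -i "conventional memory pool" /var/log/mysqld.log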
Step 10: Monitor Swapping
Keep an eye on your system's memory usage! If your OS is swapping, your performance will be degraded and you will have to reduce the values in your configuration.
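Two quick ways to watch for swapping: the si/so columns of vmstat should stay at or near zero, and on recent kernels the VmSwap field of the mysqld process should not grow (pidof is used here to find the process):
$ vmstat 5
$ grep VmSwap /proc/$(pidof mysqld)/status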
credit: https://www.linkedin.com/pulse/configuring-huge-pages-mysql-server-red-hat-linux-juan-soto/