18.4. Managing Kernel Resources
PostgreSQL can sometimes exhaust various operating system resource limits, especially when multiple copies of the server are running on the same system, or in very large installations. This section explains the kernel resources used by PostgreSQL and the steps you can take to resolve problems related to kernel resource consumption.
18.4.1. Shared Memory and Semaphores
PostgreSQL requires the operating system to provide inter-process communication (IPC) features, specifically shared memory and semaphores. Unix-derived systems typically provide "System V" IPC, "POSIX" IPC, or both. Windows has its own implementation of these features and is not discussed here.

The complete lack of these facilities is usually manifested by an "Illegal system call" error upon server start. In that case there is no alternative but to reconfigure your kernel. PostgreSQL won't work without them. This situation is rare, however, among modern operating systems.
By default, PostgreSQL allocates a very small amount of System V shared memory, as well as a much larger amount of anonymous mmap shared memory. Alternatively, a single large System V shared memory region can be used (see shared_memory_type). In addition a significant number of semaphores, which can be either System V or POSIX style, are created at server startup. Currently, POSIX semaphores are used on Linux and FreeBSD systems while other platforms use System V semaphores.
Note
Prior to PostgreSQL 9.3, only System V shared memory was used, so the amount of System V shared memory required to start the server was much larger. If you are running an older version of the server, please consult the documentation for your server version.
System V IPC features are typically constrained by system-wide allocation limits. When PostgreSQL exceeds one of these limits, the server will refuse to start and should leave an instructive error message describing the problem and what to do about it. (See also Section 18.3.1.) The relevant kernel parameters are named consistently across different systems; Table 18.1 gives an overview. The methods to set them, however, vary. Suggestions for some platforms are given below.
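As a quick sanity check before changing anything, the current limits can usually be inspected without modifying them; for example, on a typical Linux system (an illustrative assumption; other platforms use the parameter names shown in the platform notes below):

$ ipcs -l
$ sysctl kernel.shmmax kernel.shmall kernel.sem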
Table 18.1. System V IPC Parameters
| Name | Description | Values needed to run one PostgreSQL instance |
|---|---|---|
| SHMMAX | Maximum size of shared memory segment (bytes) | at least 1kB, but the default is usually much higher |
| SHMMIN | Minimum size of shared memory segment (bytes) | 1 |
| SHMALL | Total amount of shared memory available (bytes or pages) | same as SHMMAX if bytes, or ceil(SHMMAX/PAGE_SIZE) if pages, plus room for other applications |
| SHMSEG | Maximum number of shared memory segments per process | only 1 segment is needed, but the default is much higher |
| SHMMNI | Maximum number of shared memory segments system-wide | like SHMSEG plus room for other applications |
| SEMMNI | Maximum number of semaphore identifiers (i.e., sets) | at least ceil((max_connections + autovacuum_max_workers + max_wal_senders + max_worker_processes + 5) / 16) plus room for other applications |
| SEMMNS | Maximum number of semaphores system-wide | ceil((max_connections + autovacuum_max_workers + max_wal_senders + max_worker_processes + 5) / 16) * 17 plus room for other applications |
| SEMMSL | Maximum number of semaphores per set | at least 17 |
| SEMMAP | Number of entries in semaphore map | see text |
| SEMVMX | Maximum value of semaphore | at least 1000 (The default is often 32767; do not change unless necessary) |
PostgreSQL requires a few bytes of System V shared memory (typically 48 bytes, on 64-bit platforms) for each copy of the server. On most modern operating systems, this amount can easily be allocated. However, if you are running many copies of the server or you explicitly configure the server to use large amounts of System V shared memory (see shared_memory_type and dynamic_shared_memory_type), it may be necessary to increase SHMALL, which is the total amount of System V shared memory system-wide. Note that SHMALL is measured in pages rather than bytes on many systems.
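For example, on a typical Linux system (an illustrative assumption) you can check the page size and the current SHMALL setting with:

$ getconf PAGE_SIZE
$ sysctl kernel.shmall

Multiplying the two gives the system-wide ceiling in bytes; for instance, a kernel.shmall of 4194304 pages with 4096-byte pages corresponds to 4194304 * 4096 bytes = 16 GB.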
Less likely to cause problems is the minimum size for shared memory segments (SHMMIN), which should be at most approximately 32 bytes for PostgreSQL (it is usually just 1). The maximum number of segments system-wide (SHMMNI) or per-process (SHMSEG) are unlikely to cause a problem unless your system has them set to zero.
When using System V semaphores, PostgreSQL uses one semaphore per allowed connection (max_connections), allowed autovacuum worker process (autovacuum_max_workers), allowed WAL sender process (max_wal_senders), and allowed background process (max_worker_processes), in sets of 16. Each such set will also contain a 17th semaphore which contains a "magic number", to detect collision with semaphore sets used by other applications. The maximum number of semaphores in the system is set by SEMMNS, which consequently must be at least as high as max_connections plus autovacuum_max_workers plus max_wal_senders plus max_worker_processes, plus one extra for each 16 allowed connections plus workers (see the formula in Table 18.1). The parameter SEMMNI determines the limit on the number of semaphore sets that can exist on the system at one time. Hence this parameter must be at least ceil((max_connections + autovacuum_max_workers + max_wal_senders + max_worker_processes + 5) / 16). Lowering the number of allowed connections is a temporary workaround for failures, which are usually confusingly worded "No space left on device", from the function semget.
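As a worked example with illustrative settings: if max_connections = 100, autovacuum_max_workers = 3, max_wal_senders = 10, and max_worker_processes = 8, then ceil((100 + 3 + 10 + 8 + 5) / 16) = ceil(126 / 16) = 8, so this instance needs SEMMNI to allow at least 8 semaphore sets and SEMMNS to allow at least 8 * 17 = 136 semaphores, in addition to whatever other applications on the machine consume.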
In some cases it might also be necessary to increase SEMMAP to be at least on the order of SEMMNS. If the system has this parameter (many do not), it defines the size of the semaphore resource map, in which each contiguous block of available semaphores needs an entry. When a semaphore set is freed it is either added to an existing entry that is adjacent to the freed block or it is registered under a new map entry. If the map is full, the freed semaphores get lost (until reboot). Fragmentation of the semaphore space could over time lead to fewer available semaphores than there should be.
Various other settings related to "semaphore undo", such as SEMMNU and SEMUME, do not affect PostgreSQL.
When using POSIX semaphores, the number of semaphores needed is the same as for System V, that is one semaphore per allowed connection (max_connections), allowed autovacuum worker process (autovacuum_max_workers), allowed WAL sender process (max_wal_senders), and allowed background process (max_worker_processes). On the platforms where this option is preferred, there is no specific kernel limit on the number of POSIX semaphores.
- AIX

  At least as of version 5.1, it should not be necessary to do any special configuration for such parameters as SHMMAX, as it appears this is configured to allow all memory to be used as shared memory. That is the sort of configuration commonly used for other databases such as DB/2.

  It might, however, be necessary to modify the global ulimit information in /etc/security/limits, as the default hard limits for file sizes (fsize) and numbers of files (nofiles) might be too low.

- FreeBSD

  The default IPC settings can be changed using the sysctl or loader interfaces. The following parameters can be set using sysctl:

  # sysctl kern.ipc.shmall=32768
  # sysctl kern.ipc.shmmax=134217728

  To make these settings persist over reboots, modify /etc/sysctl.conf.

  These semaphore-related settings are read-only as far as sysctl is concerned, but can be set in /boot/loader.conf:

  kern.ipc.semmni=256
  kern.ipc.semmns=512

  After modifying that file, a reboot is required for the new settings to take effect.

  You might also want to configure your kernel to lock System V shared memory into RAM and prevent it from being paged out to swap. This can be accomplished using the sysctl setting kern.ipc.shm_use_phys.

  If running in FreeBSD jails by enabling sysctl's security.jail.sysvipc_allowed, postmasters running in different jails should be run by different operating system users. This improves security because it prevents non-root users from interfering with shared memory or semaphores in different jails, and it allows the PostgreSQL IPC cleanup code to function properly. (In FreeBSD 6.0 and later the IPC cleanup code does not properly detect processes in other jails, preventing the running of postmasters on the same port in different jails.)

  FreeBSD versions before 4.0 work like old OpenBSD (see below).
- NetBSD

  In NetBSD 5.0 and later, IPC parameters can be adjusted using sysctl, for example:

  # sysctl -w kern.ipc.semmni=100

  To make these settings persist over reboots, modify /etc/sysctl.conf.

  You will usually want to increase kern.ipc.semmni and kern.ipc.semmns, as NetBSD's default settings for these are uncomfortably small.

  You might also want to configure your kernel to lock System V shared memory into RAM and prevent it from being paged out to swap. This can be accomplished using the sysctl setting kern.ipc.shm_use_phys.

  NetBSD versions before 5.0 work like old OpenBSD (see below), except that kernel parameters should be set with the keyword options not option.

- OpenBSD

  In OpenBSD 3.3 and later, IPC parameters can be adjusted using sysctl, for example:

  # sysctl kern.seminfo.semmni=100

  To make these settings persist over reboots, modify /etc/sysctl.conf.

  You will usually want to increase kern.seminfo.semmni and kern.seminfo.semmns, as OpenBSD's default settings for these are uncomfortably small.

  In older OpenBSD versions, you will need to build a custom kernel to change the IPC parameters. Make sure that the options SYSVSHM and SYSVSEM are enabled, too. (They are by default.) The following shows an example of how to set the various parameters in the kernel configuration file:

  option SYSVSHM
  option SHMMAXPGS=4096
  option SHMSEG=256
  option SYSVSEM
  option SEMMNI=256
  option SEMMNS=512
  option SEMMNU=256
- HP-UX

  The default settings tend to suffice for normal installations. On HP-UX 10, the factory default for SEMMNS is 128, which might be too low for larger database sites.

  IPC parameters can be set in the System Administration Manager (SAM) under Kernel Configuration → Configurable Parameters. Choose Create A New Kernel when you're done.
- Linux

  The default maximum segment size is 32 MB, and the default maximum total size is 2097152 pages. A page is almost always 4096 bytes except in unusual kernel configurations with "huge pages" (use getconf PAGE_SIZE to verify).

  The shared memory size settings can be changed via the sysctl interface. For example, to allow 16 GB:

  $ sysctl -w kernel.shmmax=17179869184
  $ sysctl -w kernel.shmall=4194304

  In addition these settings can be preserved between reboots in the file /etc/sysctl.conf. Doing that is highly recommended.

  Ancient distributions might not have the sysctl program, but equivalent changes can be made by manipulating the /proc file system:

  $ echo 17179869184 >/proc/sys/kernel/shmmax
  $ echo 4194304 >/proc/sys/kernel/shmall

  The remaining defaults are quite generously sized, and usually do not require changes.
- macOS

  The recommended method for configuring shared memory in macOS is to create a file named /etc/sysctl.conf, containing variable assignments such as:

  kern.sysv.shmmax=4194304
  kern.sysv.shmmin=1
  kern.sysv.shmmni=32
  kern.sysv.shmseg=8
  kern.sysv.shmall=1024

  Note that in some macOS versions, all five shared-memory parameters must be set in /etc/sysctl.conf, else the values will be ignored.

  Beware that recent releases of macOS ignore attempts to set SHMMAX to a value that isn't an exact multiple of 4096.

  SHMALL is measured in 4 kB pages on this platform.

  In older macOS versions, you will need to reboot to have changes in the shared memory parameters take effect. As of 10.5 it is possible to change all but SHMMNI on the fly, using sysctl. But it's still best to set up your preferred values via /etc/sysctl.conf, so that the values will be kept across reboots.

  The file /etc/sysctl.conf is only honored in macOS 10.3.9 and later. If you are running a previous 10.3.x release, you must edit the file /etc/rc and change the values in the following commands:

  sysctl -w kern.sysv.shmmax
  sysctl -w kern.sysv.shmmin
  sysctl -w kern.sysv.shmmni
  sysctl -w kern.sysv.shmseg
  sysctl -w kern.sysv.shmall

  Note that /etc/rc is usually overwritten by macOS system updates, so you should expect to have to redo these edits after each update.

  In macOS 10.2 and earlier, instead edit these commands in the file /System/Library/StartupItems/SystemTuning/SystemTuning.

- Solaris 2.6 to 2.9 (Solaris 6 to Solaris 9)

  The relevant settings can be changed in /etc/system, for example:

  set shmsys:shminfo_shmmax=0x2000000
  set shmsys:shminfo_shmmin=1
  set shmsys:shminfo_shmmni=256
  set shmsys:shminfo_shmseg=256
  set semsys:seminfo_semmap=256
  set semsys:seminfo_semmni=512
  set semsys:seminfo_semmns=512
  set semsys:seminfo_semmsl=32

  You need to reboot for the changes to take effect. See also http://sunsite.uakom.sk/sunworldonline/swol-09-1997/swol-09-insidesolaris.html for information on shared memory under older versions of Solaris.
- Solaris 2.10 (Solaris 10) and later, OpenSolaris

  In Solaris 10 and later, and OpenSolaris, the default shared memory and semaphore settings are good enough for most PostgreSQL applications. Solaris now defaults to a SHMMAX of one-quarter of system RAM. To further adjust this setting, use a project setting associated with the postgres user. For example, run the following as root:

  projadd -c "PostgreSQL DB User" -K "project.max-shm-memory=(privileged,8GB,deny)" -U postgres -G postgres user.postgres

  This command adds the user.postgres project and sets the shared memory maximum for the postgres user to 8GB, and takes effect the next time that user logs in, or when you restart PostgreSQL (not reload). The above assumes that PostgreSQL is run by the postgres user in the postgres group. No server reboot is required.

  Other recommended kernel setting changes for database servers which will have a large number of connections are:

  project.max-shm-ids=(priv,32768,deny)
  project.max-sem-ids=(priv,4096,deny)
  project.max-msg-ids=(priv,4096,deny)

  Additionally, if you are running PostgreSQL inside a zone, you may need to raise the zone resource usage limits as well. See "Chapter 2: Projects and Tasks" in the System Administrator's Guide for more information on projects and prctl.
18.4.2. systemd RemoveIPC
If systemd is in use, some care must be taken that IPC resources (including shared memory) are not prematurely removed by the operating system. This is especially of concern when installing PostgreSQL from source. Users of distribution packages of PostgreSQL are less likely to be affected, as the postgres user is then normally created as a system user.
The setting RemoveIPC in logind.conf controls whether IPC objects are removed when a user fully logs out. System users are exempt. This setting defaults to on in stock systemd, but some operating system distributions default it to off.
A typical observed effect when this setting is on is that shared memory objects used for parallel query execution are removed at apparently random times, leading to errors and warnings while attempting to open and remove them, like
WARNING: could not remove shared memory segment "/PostgreSQL.1450751626": No such file or directory
Different types of IPC objects (shared memory vs. semaphores, System V vs. POSIX) are treated slightly differently by systemd, so one might observe that some IPC resources are not removed in the same way as others. But it is not advisable to rely on these subtle differences.
A "user logging out" might happen as part of a maintenance job or manually when an administrator logs in as the postgres user or something similar, so it is hard to prevent in general.
What is a "system user" is determined at systemd compile time from the SYS_UID_MAX setting in /etc/login.defs.
Packaging and deployment scripts should be careful to create the postgres user as a system user by using useradd -r, adduser --system, or equivalent.
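For example, a deployment script might create the account like this (the home directory and shell shown are purely illustrative assumptions, not requirements):

# useradd --system --home-dir /var/lib/pgsql --shell /bin/bash postgres

You can then verify the result by comparing the account's numeric UID (id -u postgres) with the SYS_UID_MAX value in /etc/login.defs.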
Alternatively, if the user account was created incorrectly or cannot be changed, it is recommended to set RemoveIPC=no in /etc/systemd/logind.conf or another appropriate configuration file.
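If you go that route, the relevant fragment of /etc/systemd/logind.conf would look like the following; the change only takes effect after systemd-logind is restarted (or the host is rebooted):

[Login]
RemoveIPC=no

# systemctl restart systemd-logind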
Caution
At least one of these two things has to be ensured, or the PostgreSQL server will be very unreliable.
18.4.3. Resource Limits
Unix-like operating systems enforce various kinds of resource limits that might interfere with the operation of your PostgreSQL server. Of particular importance are limits on the number of processes per user, the number of open files per process, and the amount of memory available to each process. Each of these has a "hard" and a "soft" limit. The soft limit is what actually counts, but it can be changed by the user up to the hard limit. The hard limit can only be changed by the root user. The system call setrlimit is responsible for setting these parameters. The shell's built-in command ulimit (Bourne shells) or limit (csh) is used to control the resource limits from the command line. On BSD-derived systems the file /etc/login.conf controls the various resource limits set during login. See the operating system documentation for details. The relevant parameters are maxproc, openfiles, and datasize. For example:
default:\
...
        :datasize-cur=256M:\
        :maxproc-cur=256:\
        :openfiles-cur=256:\
...
(-cur is the soft limit. Append -max to set the hard limit.)
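On Linux, per-user limits are usually applied through PAM (pam_limits) rather than /etc/login.conf. A minimal sketch, with purely illustrative values, might combine a shell-level check with entries in /etc/security/limits.conf:

$ ulimit -a
$ ulimit -n 4096

postgres   soft   nofile   4096
postgres   hard   nofile   8192
postgres   soft   nproc    4096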
Kernels can also have system-wide limits on some resources.
- On Linux the kernel parameter fs.file-max determines the maximum number of open files that the kernel will support. It can be changed with sysctl -w fs.file-max=N. To make the setting persist across reboots, add an assignment in /etc/sysctl.conf. The maximum limit of files per process is fixed at the time the kernel is compiled; see /usr/src/linux/Documentation/proc.txt for more information.
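For instance, to raise the limit now and have it survive reboots (the value shown is only illustrative):

# sysctl -w fs.file-max=500000
# echo "fs.file-max = 500000" >> /etc/sysctl.conf
# sysctl -p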
The PostgreSQL server uses one process per connection so you should provide for at least as many processes as allowed connections, in addition to what you need for the rest of your system. This is usually not a problem but if you run several servers on one machine things might get tight.
The factory default limit on open files is often set to "socially friendly" values that allow many users to coexist on a machine without using an inappropriate fraction of the system resources. If you run many servers on a machine this is perhaps what you want, but on dedicated servers you might want to raise this limit.

On the other side of the coin, some systems allow individual processes to open large numbers of files; if more than a few processes do so then the system-wide limit can easily be exceeded. If you find this happening, and you do not want to alter the system-wide limit, you can set PostgreSQL's max_files_per_process configuration parameter to limit the consumption of open files.
Another kernel limit that may be of concern when supporting large numbers of client connections is the maximum socket connection queue length. If more than that many connection requests arrive within a very short period, some may get rejected before the postmaster can service the requests, with those clients receiving unhelpful connection failure errors such as "Resource temporarily unavailable" or "Connection refused". The default queue length limit is 128 on many platforms. To raise it, adjust the appropriate kernel parameter via sysctl, then restart the postmaster. The parameter is variously named net.core.somaxconn on Linux, kern.ipc.soacceptqueue on newer FreeBSD, and kern.ipc.somaxconn on macOS and other BSD variants.
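For example, on Linux (the value shown is only illustrative, and the postmaster must be restarted afterwards):

# sysctl -w net.core.somaxconn=1024
# echo "net.core.somaxconn = 1024" >> /etc/sysctl.conf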
18.4.4. Linux Memory Overcommit
In Linux 2.4 and later, the default virtual memory behavior is not optimal for PostgreSQL. Because of the way that the kernel implements memory overcommit, the kernel might terminate the PostgreSQL postmaster (the master server process) if the memory demands of either PostgreSQL or another process cause the system to run out of virtual memory.
If this happens, you will see a kernel message that looks like this (consult your system documentation and configuration on where to look for such a message):
Out of Memory: Killed process 12345 (postgres).
This indicates that the postgres process has been terminated due to memory pressure. Although existing database connections will continue to function normally, no new connections will be accepted. To recover, PostgreSQL will need to be restarted.
One way to avoid this problem is to run PostgreSQL on a machine where you can be sure that other processes will not run the machine out of memory. If memory is tight, increasing the swap space of the operating system can help avoid the problem, because the out-of-memory (OOM) killer is invoked only when physical memory and swap space are exhausted.
If PostgreSQL itself is the cause of the system running out of memory, you can avoid the problem by changing your configuration. In some cases, it may help to lower memory-related configuration parameters, particularly shared_buffers and work_mem. In other cases, the problem may be caused by allowing too many connections to the database server itself. In many cases, it may be better to reduce max_connections and instead make use of external connection-pooling software.
On Linux 2.6 and later, it is possible to modify the kernel's behavior so that it will not "overcommit" memory. Although this setting will not prevent the OOM killer from being invoked altogether, it will lower the chances significantly and will therefore lead to more robust system behavior. This is done by selecting strict overcommit mode via sysctl:
sysctl -w vm.overcommit_memory=2
or placing an equivalent entry in /etc/sysctl.conf. You might also wish to modify the related setting vm.overcommit_ratio. For details see the kernel documentation file https://www.kernel.org/doc/Documentation/vm/overcommit-accounting.
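A persistent configuration might therefore contain entries like the following in /etc/sysctl.conf; the overcommit_ratio value shown is only an illustration and should be chosen based on your RAM and swap sizes:

vm.overcommit_memory = 2
vm.overcommit_ratio = 80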
Another approach, which can be used with or without altering vm.overcommit_memory, is to set the process-specific OOM score adjustment value for the postmaster process to -1000, thereby guaranteeing it will not be targeted by the OOM killer. The simplest way to do this is to execute echo -1000 > /proc/self/oom_score_adj in the postmaster's startup script just before invoking the postmaster. Note that this action must be done as root, or it will have no effect; so a root-owned startup script is the easiest place to do it. If you do this, you should also set these environment variables in the startup script before invoking the postmaster:
export PG_OOM_ADJUST_FILE=/proc/self/oom_score_adj
export PG_OOM_ADJUST_VALUE=0
These settings will cause postmaster child processes to run with the normal OOM score adjustment of zero, so that the OOM killer can still target them at need. You could use some other value for PG_OOM_ADJUST_VALUE if you want the child processes to run with some other OOM score adjustment. (PG_OOM_ADJUST_VALUE can also be omitted, in which case it defaults to zero.) If you do not set PG_OOM_ADJUST_FILE, the child processes will run with the same OOM score adjustment as the postmaster, which is unwise since the whole point is to ensure that the postmaster has a preferential setting.
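Putting the pieces together, a root-owned startup script might contain a fragment like this sketch; the installation paths and the use of su are illustrative assumptions, and note that a login shell (su -) would typically reset the exported variables:

echo -1000 > /proc/self/oom_score_adj
export PG_OOM_ADJUST_FILE=/proc/self/oom_score_adj
export PG_OOM_ADJUST_VALUE=0
su postgres -c "/usr/local/pgsql/bin/pg_ctl start -D /usr/local/pgsql/data"

Here the echo protects the root shell, the postmaster started via su inherits its score, and the child backends are reset to the normal score of zero via the exported variables.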
Older Linux kernels do not offer /proc/self/oom_score_adj, but may have a previous version of the same functionality called /proc/self/oom_adj. This works the same except the disable value is -17 not -1000.
Note
Some vendors' Linux 2.4 kernels are reported to have early versions of the 2.6 overcommit sysctl parameter. However, setting vm.overcommit_memory to 2 on a 2.4 kernel that does not have the relevant code will make things worse, not better. It is recommended that you inspect the actual kernel source code (see the function vm_enough_memory in the file mm/mmap.c) to verify what is supported in your kernel before you try this in a 2.4 installation. The presence of the overcommit-accounting documentation file should not be taken as evidence that the feature is there. If in any doubt, consult a kernel expert or your kernel vendor.
18.4.5. Linux Huge Pages
Using huge pages reduces overhead when using large contiguous chunks of memory, as PostgreSQL does, particularly when using large values of shared_buffers. To use this feature in PostgreSQL you need a kernel with CONFIG_HUGETLBFS=y and CONFIG_HUGETLB_PAGE=y. You will also have to adjust the kernel setting vm.nr_hugepages. To estimate the number of huge pages needed, start PostgreSQL without huge pages enabled and check the postmaster's anonymous shared memory segment size, as well as the system's huge page size, using the /proc file system. This might look like:
$ head -1 $PGDATA/postmaster.pid
4170
$ pmap 4170 | awk '/rw-s/ && /zero/ {print $2}'
6490428K
$ grep ^Hugepagesize /proc/meminfo
Hugepagesize:       2048 kB
6490428 / 2048 gives approximately 3169.154, so in this example we need at least 3170 huge pages, which we can set with:
$ sysctl -w vm.nr_hugepages=3170
A larger setting would be appropriate if other programs on the machine also need huge pages. Don't forget to add this setting to /etc/sysctl.conf so that it will be reapplied after reboots.
Sometimes the kernel is not able to allocate the desired number of huge pages immediately, so it might be necessary to repeat the command or to reboot. (Immediately after a reboot, most of the machine's memory should be available to convert into huge pages.) To verify the huge page allocation situation, use:
$ grep Huge /proc/meminfo
It may also be necessary to give the database server's operating system user permission to use huge pages by setting vm.hugetlb_shm_group via sysctl, and/or give permission to lock memory with ulimit -l.
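A sketch of those two adjustments, assuming the server runs as the postgres user:

# sysctl -w vm.hugetlb_shm_group=$(id -g postgres)

and, to allow that account to lock memory, entries in /etc/security/limits.conf (the usual location on Linux) such as:

postgres   soft   memlock   unlimited
postgres   hard   memlock   unlimited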
The default behavior for huge pages in PostgreSQL is to use them when possible and to fall back to normal pages when failing. To enforce the use of huge pages, you can set huge_pages to on in postgresql.conf. Note that with this setting PostgreSQL will fail to start if not enough huge pages are available.
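That is, to make the server insist on huge pages, set this in postgresql.conf (the default is try, which falls back to normal pages):

huge_pages = on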
For a detailed description of the Linux huge pages feature have a look at https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt.