A large Postgres-XC installation can quickly exhaust various operating system resource limits. (On some systems, the factory defaults are so low that you don't even need a really "large" installation.) If you have encountered this kind of problem, keep reading.
Shared memory and semaphores are collectively referred to as "System V IPC" (together with message queues, which are not relevant for Postgres-XC). Almost all modern operating systems provide these features, but many of them don't have them turned on or sufficiently sized by default, especially as available RAM and the demands of database applications grow. (On Windows, Postgres-XC provides its own replacement implementation of these facilities, so most of this section can be disregarded.)
The complete lack of these facilities is usually manifested by an "Illegal system call" error upon server start. In that case there is no alternative but to reconfigure your kernel. Postgres-XC won't work without them. This situation is rare, however, among modern operating systems.
When Postgres-XC exceeds one of the various hard IPC limits, the server will refuse to start and should leave an instructive error message describing the problem and what to do about it. (See also Section 16.3.8.) The relevant kernel parameters are named consistently across different systems; Table 16-1 gives an overview. The methods to set them, however, vary. Suggestions for some platforms are given below.
Table 16-1. System V IPC Parameters
Name | Description | Reasonable values |
---|---|---|
SHMMAX | Maximum size of shared memory segment (bytes) | at least several megabytes (see text) |
SHMMIN | Minimum size of shared memory segment (bytes) | 1 |
SHMALL | Total amount of shared memory available (bytes or pages) | if bytes, same as SHMMAX; if pages, ceil(SHMMAX/PAGE_SIZE) |
SHMSEG | Maximum number of shared memory segments per process | only 1 segment is needed, but the default is much higher |
SHMMNI | Maximum number of shared memory segments system-wide | like SHMSEG plus room for other applications |
SEMMNI | Maximum number of semaphore identifiers (i.e., sets) | at least ceil((max_connections + autovacuum_max_workers + 4) / 16) |
SEMMNS | Maximum number of semaphores system-wide | ceil((max_connections + autovacuum_max_workers + 4) / 16) * 17 plus room for other applications |
SEMMSL | Maximum number of semaphores per set | at least 17 |
SEMMAP | Number of entries in semaphore map | see text |
SEMVMX | Maximum value of semaphore | at least 1000 (The default is often 32767; do not change unless necessary) |
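On Linux, you can inspect the current System V IPC limits before changing anything. A quick sketch (the output format varies by distribution; the sysctl keys below are the Linux counterparts of the names in Table 16-1):

$ ipcs -l                                      # summary of shared memory and semaphore limits
$ sysctl kernel.shmmax kernel.shmall kernel.shmmni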
The most important shared memory parameter is SHMMAX, the maximum size, in bytes, of a shared memory segment. If you get an error message from shmget like "Invalid argument", it is likely that this limit has been exceeded. The size of the required shared memory segment varies depending on several Postgres-XC configuration parameters, as shown in Table 16-2. (Any error message you might get will include the exact size of the failed allocation request.) You can, as a temporary solution, lower some of those settings to avoid the failure. While it is possible to get Postgres-XC to run with SHMMAX as small as 2 MB, you need considerably more for acceptable performance. Desirable settings are in the hundreds of megabytes to a few gigabytes.
Some systems also have a limit on the total amount of shared memory in the system (SHMALL). Make sure this is large enough for Postgres-XC plus any other applications that are using shared memory segments. Note that SHMALL is measured in pages rather than bytes on many systems.
Less likely to cause problems is the minimum size for shared memory segments (SHMMIN), which should be at most approximately 500 kB for each Postgres-XC node (it is usually just 1). The maximum number of segments system-wide (SHMMNI) or per-process (SHMSEG) is unlikely to cause a problem unless your system has them set to zero.
Each Postgres-XC coordinator and datanode uses one semaphore per allowed connection (max_connections) and allowed autovacuum worker process (autovacuum_max_workers), in sets of 16. Each such set will also contain a 17th semaphore which contains a "magic number", to detect collision with semaphore sets used by other applications. The maximum number of semaphores in the system is set by SEMMNS, which consequently must be at least as high as max_connections plus autovacuum_max_workers, plus one extra for each 16 allowed connections plus workers (see the formula in Table 16-1). The parameter SEMMNI determines the limit on the number of semaphore sets that can exist on the system at one time. Hence this parameter must be at least ceil((max_connections + autovacuum_max_workers + 4) / 16).
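As a worked example under assumed settings (illustrative values, not recommendations): for a single node with max_connections = 100 and autovacuum_max_workers = 3, the formulas in Table 16-1 give

SEMMNI: at least ceil((100 + 3 + 4) / 16) = ceil(107 / 16) = 7 semaphore sets
SEMMNS: at least 7 * 17 = 119 semaphores, plus room for other applications

Keep in mind that each coordinator and datanode running on the machine needs its own semaphores, so these totals scale with the number of Postgres-XC nodes you run there.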
Lowering the number of allowed connections is a temporary workaround for failures, which are usually confusingly worded "No space left on device", from the function semget.
In some cases it might also be necessary to increase SEMMAP to be at least on the order of SEMMNS. This parameter defines the size of the semaphore resource map, in which each contiguous block of available semaphores needs an entry. When a semaphore set is freed it is either added to an existing entry that is adjacent to the freed block or it is registered under a new map entry. If the map is full, the freed semaphores get lost (until reboot). Fragmentation of the semaphore space could over time lead to fewer available semaphores than there should be.
The SEMMSL parameter, which determines how many semaphores can be in a set, must be at least 17 for Postgres-XC.
Various other settings related to "semaphore undo", such as SEMMNU and SEMUME, do not affect Postgres-XC.
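On Linux, these semaphore limits are exposed together as the single sysctl kernel.sem, in the order SEMMSL, SEMMNS, SEMOPM, SEMMNI (SEMOPM, the maximum operations per semop call, is not otherwise relevant here). A hedged sketch of reading and raising them; the numbers are purely illustrative:

$ cat /proc/sys/kernel/sem                     # prints SEMMSL SEMMNS SEMOPM SEMMNI
$ sysctl -w kernel.sem="250 32000 32 256"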
The default maximum segment size is 32 MB, which is only adequate for very small Postgres-XC installations. The default maximum total size is 2097152 pages. A page is almost always 4096 bytes except in unusual kernel configurations with "huge pages" (use getconf PAGE_SIZE to verify). That makes a default limit of 8 GB, which is often enough, but not always.
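To pick a matching SHMALL for a larger SHMMAX, first confirm the page size and then divide; a quick sketch for the 16 GB example used below:

$ getconf PAGE_SIZE                            # typically 4096
$ echo $((17179869184 / 4096))                 # 16 GB expressed in pages: 4194304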
The shared memory size settings can be changed via the sysctl interface. For example, to allow 16 GB:
$ sysctl -w kernel.shmmax=17179869184
$ sysctl -w kernel.shmall=4194304
In addition, these settings can be preserved across reboots in the file /etc/sysctl.conf. Doing that is highly recommended.
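For example, entries like these in /etc/sysctl.conf preserve the illustrative 16 GB settings shown above:

kernel.shmmax = 17179869184
kernel.shmall = 4194304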
Ancient distributions might not have the sysctl program, but equivalent changes can be made by manipulating the /proc file system:
$ echo 17179869184 >/proc/sys/kernel/shmmax
$ echo 4194304 >/proc/sys/kernel/shmall
The remaining defaults are quite generously sized, and usually do not require changes.
Table 16-2. PostgreSQL Shared Memory Usage
Usage | Approximate shared memory bytes required (as of 8.3) |
---|---|
Connections | (1800 + 270 * max_locks_per_transaction) * max_connections |
Autovacuum workers | (1800 + 270 * max_locks_per_transaction) * autovacuum_max_workers |
Prepared transactions | (770 + 270 * max_locks_per_transaction) * max_prepared_transactions |
Shared disk buffers | (block_size + 208) * shared_buffers |
WAL buffers | (wal_block_size + 8) * wal_buffers |
Fixed space requirements | 770 kB |
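A rough worked example under assumed settings (illustrative values only, with the default 8 kB block_size): max_connections = 100, autovacuum_max_workers = 3, max_locks_per_transaction = 64, max_prepared_transactions = 0, and shared_buffers = 32768 (256 MB of buffers) give approximately

Connections:         (1800 + 270 * 64) * 100  =   1,908,000 bytes
Autovacuum workers:  (1800 + 270 * 64) * 3    =      57,240 bytes
Shared disk buffers: (8192 + 208) * 32768     = 275,251,200 bytes
Fixed space:         roughly 770 kB

or on the order of 265 MB in total, before WAL buffers. SHMMAX on that node should comfortably exceed this figure.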
Unix-like operating systems enforce various kinds of resource limits that might interfere with the operation of your Postgres-XC components. Of particular importance are limits on the number of processes per user, the number of open files per process, and the amount of memory available to each process. Each of these has a "hard" and a "soft" limit. The soft limit is what actually counts, but it can be changed by the user up to the hard limit. The hard limit can only be changed by the root user. The system call setrlimit is responsible for setting these parameters. The shell's built-in command ulimit (Bourne shells) or limit (csh) is used to control the resource limits from the command line. On BSD-derived systems the file /etc/login.conf controls the various resource limits set during login. See the operating system documentation for details. The relevant parameters are maxproc, openfiles, and datasize. For example:
default:\
...
        :datasize-cur=256M:\
        :maxproc-cur=256:\
        :openfiles-cur=256:\
...
(-cur is the soft limit. Append -max to set the hard limit.)
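With a Bourne-style shell, a brief sketch of inspecting and raising the soft limits from the command line (the value is illustrative and cannot exceed the hard limit):

$ ulimit -n              # soft limit on open files
$ ulimit -u              # soft limit on processes per user
$ ulimit -n 4096         # raise the open-files soft limit for this shell and its children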
Kernels can also have system-wide limits on some resources.
On Linux /proc/sys/fs/file-max determines the maximum number of open files that the kernel will support. It can be changed by writing a different number into the file or by adding an assignment in /etc/sysctl.conf. The maximum limit of files per process is fixed at the time the kernel is compiled; see /usr/src/linux/Documentation/proc.txt for more information.
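For example, to inspect and raise it (the value is illustrative; add a matching line to /etc/sysctl.conf to keep it across reboots):

$ cat /proc/sys/fs/file-max
$ sysctl -w fs.file-max=262144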
The Postgres-XC coordinator and datanode use one process per connection, so you should provide for at least as many processes as allowed connections, in addition to what you need for the rest of your system. This is usually not a problem, but if you run several servers on one machine things might get tight.
The factory default limit on open files is often set to "socially friendly" values that allow many users to coexist on a machine without using an inappropriate fraction of the system resources. If you run many servers on a machine this is perhaps what you want, but on dedicated servers you might want to raise this limit.
On the other side of the coin, some systems allow individual processes to open large numbers of files; if more than a few processes do so then the system-wide limit can easily be exceeded. If you find this happening, and you do not want to alter the system-wide limit, you can set Postgres-XC's max_files_per_process configuration parameter to limit the consumption of open files.
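For instance, a line like the following in each node's postgresql.conf limits how many files a single server process keeps open (1000 is illustrative; it is also the parameter's default):

max_files_per_process = 1000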
In Linux 2.4 and later, the default virtual memory behavior is not optimal for Postgres-XC. Because of the way that the kernel implements memory overcommit, the kernel might terminate the Postgres-XC postmaster (the master server process) if the memory demands of either Postgres-XC or another process cause the system to run out of virtual memory.
If this happens, you will see a kernel message that looks like this (consult your system documentation and configuration on where to look for such a message):
Out of Memory: Killed process 12345 (postgres).
This indicates that the postgres process has been terminated due to memory pressure. Although existing database connections will continue to function normally, no new connections will be accepted. To recover, the Postgres-XC component will need to be restarted.
One way to avoid this problem is to run the Postgres-XC component on a machine where you can be sure that other processes will not run the machine out of memory. If memory is tight, increasing the swap space of the operating system can help avoid the problem, because the out-of-memory (OOM) killer is invoked only when physical memory and swap space are exhausted.
If the Postgres-XC component itself is the cause of the system running out of memory, you can avoid the problem by changing your configuration. In some cases, it may help to lower memory-related configuration parameters, particularly shared_buffers and work_mem. In other cases, the problem may be caused by allowing too many connections to the database server itself. In many cases, it may be better to reduce max_connections and instead make use of external connection-pooling software.
On Linux 2.6 and later, it is possible to modify the kernel's behavior so that it will not "overcommit" memory. Although this setting will not prevent the OOM killer from being invoked altogether, it will lower the chances significantly and will therefore lead to more robust system behavior. This is done by selecting strict overcommit mode via sysctl:
sysctl -w vm.overcommit_memory=2
or placing an equivalent entry in /etc/sysctl.conf. You might also wish to modify the related setting vm.overcommit_ratio. For details see the kernel documentation file Documentation/vm/overcommit-accounting.
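For example, a persistent configuration in /etc/sysctl.conf might look like this (the ratio is illustrative; choose it based on your RAM and swap as described in the kernel documentation):

vm.overcommit_memory = 2
vm.overcommit_ratio = 80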
Another approach, which can be used with or without altering vm.overcommit_memory, is to set the process-specific oom_score_adj value for the postmaster process to -1000, thereby guaranteeing it will not be targeted by the OOM killer. The simplest way to do this is to execute
echo -1000 > /proc/self/oom_score_adj
in the postmaster's startup script just before invoking the postmaster. Note that this action must be done as root, or it will have no effect; so a root-owned startup script is the easiest place to do it. If you do this, you may also wish to build Postgres-XC with -DLINUX_OOM_SCORE_ADJ=0 added to CPPFLAGS. That will cause postmaster child processes to run with the normal oom_score_adj value of zero, so that the OOM killer can still target them at need.
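A minimal sketch of such a root-owned startup script; the installation path, data directory, user name, and pg_ctl options are assumptions to adapt to your own layout:

#!/bin/sh
# Runs as root. Exempt this shell from the OOM killer; the postmaster started
# below inherits the value. Child backends also inherit it unless Postgres-XC
# was built with -DLINUX_OOM_SCORE_ADJ=0, which resets them to zero as described above.
echo -1000 > /proc/self/oom_score_adj
# Start a datanode postmaster as the postgres user (paths and options are illustrative).
su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -Z datanode -D /var/lib/pgxc/datanode1"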
Older Linux kernels do not offer /proc/self/oom_score_adj, but may have a previous version of the same functionality called /proc/self/oom_adj. This works the same except the disable value is -17 not -1000. The corresponding build flag for Postgres-XC is -DLINUX_OOM_ADJ=0.
Note: Some vendors' Linux 2.4 kernels are reported to have early versions of the 2.6 overcommit sysctl parameter. However, setting vm.overcommit_memory to 2 on a 2.4 kernel that does not have the relevant code will make things worse, not better. It is recommended that you inspect the actual kernel source code (see the function vm_enough_memory in the file mm/mmap.c) to verify what is supported in your kernel before you try this in a 2.4 installation. The presence of the overcommit-accounting documentation file should not be taken as evidence that the feature is there. If in any doubt, consult a kernel expert or your kernel vendor.