7.2. Managing Client Connections
As the number of accepted client connections grows, the number of Pgpool-II child processes that can accept new client connections shrinks and eventually reaches 0. In this situation new clients must wait until a child process becomes free. Under heavy load the queue of waiting clients can grow longer and longer until it hits the system's limit (you might see the error "535 times the listen queue of a socket overflowed"). In that case you need to increase the queue limit. There are several ways to deal with this problem.
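As a rough sketch, on Linux you can check for listen queue overflows and raise the kernel's backlog limit; the counter wording and the sysctl key (net.core.somaxconn here) vary by platform:

    # Check whether the listen queue has overflowed (Linux)
    netstat -s | grep -i 'listen queue'
    # e.g. "535 times the listen queue of a socket overflowed"

    # Raise the kernel's per-socket backlog limit (applies to new listen sockets)
    sysctl -w net.core.somaxconn=1024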
7.2.1. Controlling num_init_children
The obvious way to deal with the problem is to increase the number of child processes, which can be done by raising num_init_children . However, more child processes require more CPU and memory resources. You also have to be very careful about the max_connections parameter of PostgreSQL, because once the number of child processes exceeds max_connections, PostgreSQL refuses to accept new connections and a failover will be triggered.
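A hypothetical sizing sketch follows; the figures are illustrative, assuming each child may cache up to max_pool backend connections, so PostgreSQL should allow at least num_init_children * max_pool connections plus some headroom:

    # pgpool.conf -- illustrative values, adjust to your workload
    num_init_children = 64        # max concurrent client sessions
    max_pool = 2                  # cached backend connections per child

    # postgresql.conf -- keep headroom so PostgreSQL never refuses Pgpool-II:
    # max_connections >= num_init_children * max_pool (64 * 2 = 128 here),
    # plus superuser_reserved_connections
    max_connections = 150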
Another drawback of increasing num_init_children is the so-called "thundering herd problem": when a new connection request comes in, the kernel wakes up every sleeping child process to issue the accept() system call. The processes then fight over the socket, which can put a heavy load on the system. To mitigate the problem, you can set serialize_accept to on so that only one process at a time grabs the accepting socket.
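A minimal sketch of enabling serialized accept; note that, per the Pgpool-II manual, serialize_accept only takes effect when child_life_time is disabled, so that setting is shown as well:

    # pgpool.conf -- let only one child call accept() at a time
    serialize_accept = on
    child_life_time = 0    # serialize_accept requires child_life_time to be 0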
7.2.2. Controlling listen_backlog_multiplier
Another solution is to enlarge the connection request queue, which can be done by increasing listen_backlog_multiplier .
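The listen backlog requested by Pgpool-II is num_init_children * listen_backlog_multiplier; a sketch under the sizing assumed above:

    # pgpool.conf -- the listen backlog becomes
    # num_init_children * listen_backlog_multiplier (64 * 4 = 256 here)
    listen_backlog_multiplier = 4

Keep in mind that on Linux the kernel silently caps the effective backlog at net.core.somaxconn, so you may also need to raise that limit (see the sysctl example above).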
7.2.3. When to use reserved_connections
However, none of the above solutions guarantees that the connection request queue will never fill up. If client connection requests arrive faster than queries can be processed, the queue will fill up sooner or later. For example, a few heavy queries that take a long time can easily trigger the problem.
The solution is to set reserved_connections so that overflowed connection requests are rejected outright, as PostgreSQL already does. This gives applications a visible error ("Sorry max_connections already") and forces them to retry. Thus this solution should only be used when you cannot foresee the upper limit of the system load.
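A minimal sketch: with the settings below, once more than num_init_children - reserved_connections clients are connected, further connection attempts are rejected immediately instead of being queued.

    # pgpool.conf -- reject rather than queue once the pool is nearly full
    num_init_children = 64
    reserved_connections = 2   # reject new clients beyond 62 (64 - 2) connections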