Introduction
Watchdog is a sub-process of Pgpool-II that adds high availability. It resolves the single point of failure by coordinating multiple Pgpool-II nodes. The watchdog was first introduced in Pgpool-II V3.2 and was significantly enhanced in Pgpool-II V3.5 to ensure the presence of a quorum at all times. This addition makes the watchdog more fault tolerant and robust in handling and guarding against split-brain syndrome and network partitioning. However, for the quorum mechanism to work properly, the number of Pgpool-II nodes must be odd and greater than or equal to 3.
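As a minimal sketch of how this looks in practice, the watchdog is enabled in pgpool.conf on each node roughly as follows; the hostnames and port numbers below are placeholders for a hypothetical three-node cluster, not values taken from this documentation:

    use_watchdog = on
    wd_hostname = 'pgpool-node1'             # this node's address as seen by the other nodes
    wd_port = 9000                           # watchdog communication port
    other_pgpool_hostname0 = 'pgpool-node2'  # first remote Pgpool-II node
    other_pgpool_port0 = 9999
    other_wd_port0 = 9000
    other_pgpool_hostname1 = 'pgpool-node3'  # second remote Pgpool-II node
    other_pgpool_port1 = 9999
    other_wd_port1 = 9000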
2.1.1. Coordinating multiple Pgpool-II nodes
Watchdog coordinates multiple Pgpool-II nodes by having them exchange information with each other.
At startup, if the watchdog is enabled, the Pgpool-II node syncs the status of all configured backend nodes from the master watchdog node. If the node itself goes on to become the master node, it initializes the backend status locally. When a backend node status changes because of a failover or similar event, the watchdog notifies the other Pgpool-II nodes and synchronizes the information. When online recovery occurs, the watchdog restricts client connections to the other Pgpool-II nodes to avoid inconsistency between backends.
Watchdog also coordinates with all connected Pgpool-II nodes to ensure that failback, failover and follow_master commands are executed on only one Pgpool-II node.
2.1.2. Life checking of other Pgpool-II nodes
Watchdog lifecheck is the sub-component of the watchdog that monitors the health of the Pgpool-II nodes participating in the watchdog cluster to provide high availability. Traditionally, the Pgpool-II watchdog provides two methods of remote node health checking: "heartbeat" and "query" mode. The watchdog in Pgpool-II V3.5 adds a new "external" mode to wd_lifecheck_method, which makes it possible to hook an external third-party health checking system into the Pgpool-II watchdog.
Apart from remote node health checking, the watchdog lifecheck can also check the health of the node it is installed on by monitoring the connection to upstream servers. If that monitoring fails, the watchdog treats it as a failure of the local Pgpool-II node.
In heartbeat mode, the watchdog monitors other Pgpool-II processes by using heartbeat signals. The watchdog receives heartbeat signals sent periodically by the other Pgpool-II nodes. If no signal arrives for a certain period, the watchdog regards this as a failure of that Pgpool-II. For redundancy you can use multiple network connections for heartbeat exchange between Pgpool-II nodes. This is the default and recommended mode for health checking.
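As a sketch, heartbeat mode between the three nodes above could be configured roughly as follows; the destination hostnames, port, and device names are examples, and additional destination entries on a different device can be added for redundant heartbeat paths:

    wd_lifecheck_method = 'heartbeat'
    wd_heartbeat_port = 9694                 # UDP port on which heartbeat signals are received
    wd_heartbeat_keepalive = 2               # send a heartbeat every 2 seconds
    wd_heartbeat_deadtime = 30               # regard a node as failed after 30 seconds of silence
    heartbeat_destination0 = 'pgpool-node2'
    heartbeat_destination_port0 = 9694
    heartbeat_device0 = 'eth0'
    heartbeat_destination1 = 'pgpool-node3'
    heartbeat_destination_port1 = 9694
    heartbeat_device1 = 'eth0'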
In query mode, the watchdog monitors the Pgpool-II service rather than the process. In this mode the watchdog sends queries to the other Pgpool-II nodes and checks the responses.
Note: This method requires connections from the other Pgpool-II nodes, so monitoring would fail if the num_init_children parameter isn't large enough. This mode is deprecated and kept only for backward compatibility.
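If query mode is used anyway, the lifecheck connection is configured with parameters such as the following; the query, database, and credentials shown are illustrative values and should be adjusted to an account that is allowed to connect through Pgpool-II:

    wd_lifecheck_method = 'query'
    wd_lifecheck_query = 'SELECT 1'          # query sent to the other Pgpool-II nodes
    wd_lifecheck_dbname = 'template1'
    wd_lifecheck_user = 'nobody'
    wd_lifecheck_password = ''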
External mode was introduced in Pgpool-II V3.5. This mode disables the built-in lifecheck of the Pgpool-II watchdog and expects an external system to inform the watchdog about the health of the local node and of all remote nodes participating in the watchdog cluster.
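With this mode only the method itself needs to be selected; which nodes are up or down is then expected to be reported to the watchdog by the external monitoring system rather than determined by the built-in lifecheck:

    wd_lifecheck_method = 'external'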
2.1.3. Consistency of configuration parameters on all Pgpool-II nodes
At startup the watchdog verifies the Pgpool-II configuration of the local node for consistency with the configuration on the master watchdog node and warns the user about any differences. This eliminates the likelihood of undesired behavior caused by different configurations on different Pgpool-II nodes.
2.1.4. Changing active/standby state when certain fault is detected
When a fault of Pgpool-II is detected, the watchdog notifies the other watchdogs of it. If the faulty node is the active Pgpool-II, the watchdogs elect the new active Pgpool-II by voting and change the active/standby states accordingly.
2.1.5. Automatic virtual IP switching
When a standby Pgpool-II server is promoted to active, the new active server brings up the virtual IP interface, while the previous active server brings it down. This enables the active Pgpool-II to keep working at the same IP address even when servers are switched.
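The virtual IP itself is set with the delegate_IP parameter, and the actual bring-up and bring-down are performed by the if_up_cmd and if_down_cmd commands discussed later in this section. A sketch, where the address and network device are examples:

    delegate_IP = '192.168.1.100'            # virtual IP shared by the Pgpool-II cluster
    if_up_cmd = 'ip addr add $_IP_$/24 dev eth0 label eth0:0'
    if_down_cmd = 'ip addr del $_IP_$/24 dev eth0'
    arping_cmd = 'arping -U $_IP_$ -w 1'     # advertise the new location of the VIP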
2.1.6. Automatic registration of a server as a standby in recovery
When a broken server recovers or a new server is attached, the watchdog process notifies the other watchdogs in the cluster along with the information about the new server, and in return receives information on the active server and the other servers. The attached server is then registered as a standby.
2.1.7. Starting/stopping watchdog
The watchdog process starts and stops automatically as a sub-process of Pgpool-II, therefore there is no dedicated command to start and stop the watchdog.
The watchdog controls the virtual IP interface, and the commands it executes to bring up and bring down the VIP require root privileges. For this reason Pgpool-II requires the user running it to have root privileges when the watchdog is enabled along with a delegate IP. However, running Pgpool-II as the root user is not good security practice. The preferred alternative is to run Pgpool-II as a normal user and either configure custom if_up_cmd, if_down_cmd, and arping_cmd commands that use sudo, or set setuid ("set user ID upon execution") on the if_* commands.
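For example, assuming sudo is configured to let the Pgpool-II user run the ip and arping commands without a password, the commands could be wrapped roughly as follows; the paths and device name are examples, and since the first word of each command is resolved relative to if_cmd_path or arping_path, those parameters are pointed at the directory containing sudo:

    if_cmd_path = '/usr/bin'                 # directory containing sudo
    if_up_cmd = 'sudo /sbin/ip addr add $_IP_$/24 dev eth0 label eth0:0'
    if_down_cmd = 'sudo /sbin/ip addr del $_IP_$/24 dev eth0'
    arping_path = '/usr/bin'
    arping_cmd = 'sudo /usr/sbin/arping -U $_IP_$ -w 1'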
The lifecheck process is a sub-component of the watchdog whose job is to monitor the health of the Pgpool-II nodes participating in the watchdog cluster. The lifecheck process is started automatically when the watchdog is configured to use the built-in life checking; it starts after the watchdog main process initialization is complete. However, the lifecheck process only kicks in once all configured watchdog nodes have joined the cluster and it becomes active. If a remote node fails before the lifecheck becomes active, that failure will not be caught by the lifecheck.