24.3. Log File Maintenance
It is a good idea to save the database server's log output
somewhere, rather than just discarding it via
/dev/null. The log output is invaluable when diagnosing
problems.
The server log can contain sensitive information and needs to be protected,
no matter how or where it is stored, or the destination to which it is routed.
For example, some DDL statements might contain plaintext passwords or other
authentication details. Statements logged at the
ERROR level might show the SQL source code for applications
and might also contain some parts of data rows. Recording data, events and
related information is the intended function of this facility, so this is
not a leakage or a bug. Please ensure the server logs are visible only to
appropriately authorized people.
Log output tends to be voluminous (especially at higher debug levels) so you won't want to save it indefinitely. You need to rotate the log files so that new log files are started and old ones removed after a reasonable period of time.
If you simply direct the server's stderr
output into a file, you will have log output, but
the only way to truncate the log file is to stop and restart
the server. This might be acceptable if you are using
PostgreSQL in a development environment,
but few production servers would find this behavior acceptable.
A better approach is to send the server's stderr
output to some type of log rotation program.
There is a built-in log rotation facility, which you can use by
setting the configuration parameter logging_collector
to true in postgresql.conf. The control
parameters for this program are described in the documentation of
the logging configuration parameters. You can also use this approach
to capture the log data in machine-readable CSV
(comma-separated values) format.
Alternatively, you might prefer to use an external log rotation
program if you have one that you are already using with other
server software. For example, the
rotatelogs tool included in the
Apache distribution can be used with
PostgreSQL. To do this,
just pipe the server's stderr
output to the desired program.
If you start the server with
pg_ctl, then stderr
is already redirected to stdout, so you just need a
pipe command, for example:
pg_ctl start | rotatelogs /var/log/pgsql_log 86400
Another production-grade approach to managing log output is to
send it to syslog and let
syslog deal with file rotation. To do this, set the
configuration parameter log_destination to syslog
(to log to syslog only) in
postgresql.conf. Then you can send a SIGHUP
signal to the syslog
daemon whenever you want to force it
to start writing a new log file. If you want to automate log
rotation, the logrotate program can be
configured to work with log files from syslog.
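For instance, a logrotate stanza along these lines could handle a syslog-written log file; the file path and the daemon-notification command are assumptions that vary by system:

```
# /etc/logrotate.d/postgresql: hypothetical stanza, assuming syslog has
# been configured to write PostgreSQL messages to /var/log/postgresql.log
/var/log/postgresql.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
    postrotate
        # tell the syslog daemon to reopen its output files
        /usr/bin/killall -HUP syslogd
    endscript
}
```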
On many systems, however, syslog
is not very reliable,
particularly with large log messages; it might truncate or drop messages
just when you need them the most. Also, on Linux,
syslog will flush each message to disk, yielding poor
performance. (You can use a "-"
at the start of the file name
in the syslog
configuration file to disable syncing.)
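For instance, assuming the server is configured to log to the local0 facility (via the syslog_facility parameter), a traditional syslog.conf entry like the following routes those messages to a file without syncing each one; note the leading "-":

```
# /etc/syslog.conf: hypothetical entry, assuming syslog_facility = 'local0'
local0.*    -/var/log/postgresql.log
```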
Note that all the solutions described above take care of starting new log files at configurable intervals, but they do not handle deletion of old, no-longer-useful log files. You will probably want to set up a batch job to periodically delete old log files. Another possibility is to configure the rotation program so that old log files are overwritten cyclically.
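Such a batch job might look like the following sketch; the directory, filename pattern, and retention period are assumptions to adapt to your log_directory and log_filename settings:

```shell
#!/bin/sh
# prune_logs: delete server log files older than a given number of days.
#   $1 = log directory, $2 = retention period in days
prune_logs() {
    find "$1" -name 'postgresql-*.log' -type f -mtime "+$2" -delete
}

# Example invocation (assumed path), suitable for a daily cron job:
# prune_logs /var/log/postgresql 30
```

Running this from cron once a day keeps the log directory bounded without interfering with whichever rotation mechanism writes the files.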
pgBadger is an external project that does sophisticated log file analysis. check_postgres provides Nagios alerts when important messages appear in the log files, as well as detection of many other extraordinary conditions.