Managing the Log and Diagnostic Files on Exadata Cloud Service
The software components in Oracle Database Exadata Cloud Service generate a variety of log and diagnostic files, and not all of these files are automatically archived and purged. Identifying and removing them before they exhaust file storage space is therefore an important administrative task.
Database deployments on Exadata Cloud Service include the cleandblogs script to simplify this administrative task. The script runs daily as a cron job on each compute node to archive key files and remove old log and diagnostic files.
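As an illustration only, a daily cron entry for such a job might look like the following sketch; the actual schedule and entry on your compute nodes may differ:

```
# Hypothetical illustration only -- the real schedule on your nodes may differ.
# Run cleandblogs every day at 02:00.
0 2 * * * /var/opt/oracle/cleandb/cleandblogs.pl
```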
The cleandblogs script operates by using the adrci (Automatic Diagnostic Repository [ADR] Command Interpreter) tool to identify and purge target diagnostic folders and files for each Oracle Home listed in /etc/oratab. It also targets Oracle Net Listener logs, audit files, and core dumps.
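The flow just described can be sketched roughly as follows. This is an illustrative outline only, not the actual cleandblogs implementation: it parses the standard /etc/oratab format (SID:ORACLE_HOME:startup_flag, with # comment lines) and builds the arguments for an adrci purge per home. Note that adrci expresses the purge age in minutes.

```python
# Illustrative sketch only -- not the actual cleandblogs implementation.

def oracle_homes(oratab_text: str) -> list:
    """Extract ORACLE_HOME paths from /etc/oratab-style content."""
    homes = []
    for line in oratab_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        fields = line.split(":")
        if len(fields) >= 2 and fields[1]:
            homes.append(fields[1])  # second field is ORACLE_HOME
    return homes

def purge_args(days: int, file_type: str = "TRACE") -> str:
    """Build adrci purge arguments; the -age argument is in minutes."""
    return "purge -age {} -type {}".format(days * 24 * 60, file_type)

sample = "# comment line\nORCL1:/u01/app/oracle/product/19.0.0/dbhome_1:N\n"
print(oracle_homes(sample))   # ['/u01/app/oracle/product/19.0.0/dbhome_1']
print(purge_args(7))          # purge -age 10080 -type TRACE
```

In practice each purge would be issued per ADR home through an adrci invocation run as the appropriate operating system user.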
On Exadata Cloud Service, the script is run separately: as the oracle user to clean log and diagnostic files that are associated with Oracle Database, and as the grid user to clean log and diagnostic files that are associated with Oracle Grid Infrastructure.
The cleandblogs script uses a configuration file to determine how long to retain each type of log or diagnostic file. You can edit the file to change the default retention periods. The file is located at /var/opt/oracle/cleandb/cleandblogs.cfg on each compute node.
The following table lists the parameters that appear in the cleandblogs.cfg file, along with a description and the default retention period, in days, for each file type.

Parameter | Description and Default Value
---|---
 | Alert log retention. Default value: 14
 | Listener log retention. Default value: 14
 | Database audit file retention. Default value: 1
 | Core dump/file retention. Default value: 7
 | Trace file retention. Default value: 7
 | Retention for data designated in the Automatic Diagnostic Repository (ADR) as having a long life. Default value: 14
 | Retention for data designated in the Automatic Diagnostic Repository (ADR) as having a short life. Default value: 7
 | Log file retention in days. Default value: 14
 | cleandblogs log file retention in days. Default value: 14
 | Temporary file retention in days. Default value: 7
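As a purely hypothetical illustration, a retention configuration of this kind typically consists of simple name=value entries, one per file type. The parameter names below are placeholders, not the actual names used in cleandblogs.cfg; consult the file on your compute node for the real names:

```
# Hypothetical sketch only -- parameter names here are placeholders,
# not the actual names used in cleandblogs.cfg.
AlertLogRetentionDays=14
TraceFileRetentionDays=7
```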
Archiving Alert Logs and Listener Logs
When cleaning up alert and listener logs, cleandblogs first archives and compresses the logs, operating as follows:

- The current log file is copied to an archive file that ends with a date stamp.
- The current log file is emptied.
- The archive file is compressed using gzip.
- Any existing compressed archive files older than the retention period are deleted.
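The four steps above can be sketched in a few lines. This is an illustrative outline only, not the actual script; the date-stamp format and file naming are assumptions:

```python
# Illustrative sketch of the archive-and-compress steps -- not the actual
# cleandblogs script. Naming conventions here are assumptions.
import gzip
import os
import shutil
import time
from datetime import datetime

def archive_log(log_path: str, retention_days: int) -> str:
    # 1. Copy the current log to an archive file ending with a date stamp.
    archive = "{}.{:%Y%m%d}".format(log_path, datetime.now())
    shutil.copy2(log_path, archive)
    # 2. Empty the current log file.
    open(log_path, "w").close()
    # 3. Compress the archive using gzip.
    with open(archive, "rb") as src, gzip.open(archive + ".gz", "wb") as dst:
        shutil.copyfileobj(src, dst)
    os.remove(archive)
    # 4. Delete compressed archives older than the retention period.
    cutoff = time.time() - retention_days * 86400
    directory = os.path.dirname(log_path) or "."
    base = os.path.basename(log_path)
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if (name.startswith(base + ".") and name.endswith(".gz")
                and os.path.getmtime(path) < cutoff):
            os.remove(path)
    return archive + ".gz"
```

The copy-then-truncate approach lets the database keep writing to the same open log file handle while the old contents are preserved in the archive.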
Running the cleandblogs Script Manually
The cleandblogs script automatically runs daily on each compute node, but you can also run the script manually if the need arises.
- Connect to the compute node as the oracle user to clean log and diagnostic files that are associated with Oracle Database, or connect as the grid user to clean log and diagnostic files that are associated with Oracle Grid Infrastructure.

  For detailed instructions, see Connecting to a Compute Node Through Secure Shell (SSH).

- Change to the directory containing the cleandblogs script:

  $ cd /var/opt/oracle/cleandb

- Run the cleandblogs script:

  $ ./cleandblogs.pl

  When running the script manually, you can specify an alternate configuration file to use instead of cleandblogs.cfg by using the --pfile option:

  $ ./cleandblogs.pl --pfile config-file-name

- Close your connection to the compute node:

  $ exit