When submitting a Service Request (SR), please include an archive file with the relevant log files and debugging information as listed in this section. This information can be used by Oracle Support to analyze and diagnose system issues. The support data files can be uploaded for further analysis by Oracle Support.
Collecting support files involves logging in to the command line on components in your Oracle Virtual Compute Appliance rack and copying files to a storage location external to the appliance environment, in the data center network. This can only be achieved from a system with access to both the internal appliance management network and the data center network. You can set up a physical or virtual system with those connections, or use the master management node.
The most convenient way to collect the necessary files is to mount the target storage location on the system using NFS, and to copy the files using scp with the appropriate login credentials and file path. The command syntax should be similar to this example:
# mkdir /mnt/mynfsshare
# mount -t nfs storage-host-ip:/path-to-share /mnt/mynfsshare
# scp root@component-ip:/path-to-file /mnt/mynfsshare/ovca-support-data/
For more accurate diagnosis of physical server issues, Oracle Support Services require a system memory dump. To be able to provide this, you should install and configure kdump, as described in the support note with Doc ID 1520837.1.
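As an illustration only, a kdump setup on an Oracle Linux server typically involves steps like the following; the crashkernel value shown is an assumption, and the support note with Doc ID 1520837.1 remains the authoritative procedure:

```shell
# Hypothetical sketch of a kdump setup on Oracle Linux; consult
# Doc ID 1520837.1 for the exact, supported steps.
yum install -y kexec-tools                            # provides the kdump service
grubby --update-kernel=ALL --args="crashkernel=auto"  # reserve crash kernel memory
chkconfig kdump on                                    # start kdump at boot
service kdump start                                   # a reboot is required before
                                                      # the crashkernel change applies
```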
For diagnostic data collection, Oracle Support Services recommend that the OSWatcher tool be run for an extended period of time. For details about the use of OSWatcher, please consult the support note with Doc ID 580513.1.
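A typical OSWatcher run can be started as sketched below; the install path /opt/osw and the interval and retention values are assumptions, and the support note with Doc ID 580513.1 remains the authoritative reference:

```shell
# Hypothetical sketch: start OSWatcher Black Box from its assumed
# install directory, sampling every 60 seconds and keeping 48 hours
# of archive data.
cd /opt/osw
nohup ./startOSWbb.sh 60 48 &
```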
For diagnostic purposes, Oracle Support Services use a script called VMPInfo3 that automatically collects vital troubleshooting information from your Oracle Virtual Compute Appliance environment. This script is installed with the Oracle Virtual Compute Appliance controller software on both management nodes and is located at /usr/sbin/vmpinfo3.sh.
To collect support data from your system, proceed as follows:
Log in to the master management node as root.
If you accidentally run the vmpinfo3 script from the secondary management node, an error message is displayed and you are instructed to run the script from the master management node.
Run the diagnostic script as follows:
[root@ovcamn05r1 ~]# /usr/sbin/vmpinfo3.sh --username=admin --password=Welcome1
Gathering files from all servers. This process may take some time.
Gathering OVM Model Dump files
Gathering sosreport from ovcacn07r1
Gathering sosreport from ovcacn08r1
Gathering sosreport from ovcacn09r1
[...]
Gathering sosreport from ovcacn41r1
Gathering sosreport from ovcacn42r1
Gathering OVM Manager Logs
When prompted, enter the root password for the secondary management node.
Enter your root password for the other management node
Warning: Permanently added '192.168.4.4' (RSA) to the list of known hosts.
root@192.168.4.4's password:
access.log                           100% 2949KB   2.9MB/s   00:00
access.log00001                      100% 5000KB   4.9MB/s   00:00
access.log00002                      100% 5000KB   4.9MB/s   00:00
access.log00003                      100% 5000KB   4.9MB/s   00:00
AdminServer-diagnostic.log           100%  322KB 321.6KB/s   00:00
AdminServer.log                      100% 5292KB   5.2MB/s   00:00
AdminServer.log00002                 100%  284KB 284.4KB/s   00:00
AdminServer.log00003                 100%  302KB 302.3KB/s   00:00
[...]
AdminServer.log00011                 100%   10MB   9.8MB/s   00:00
base_adf_domain.log                  100% 2709KB   2.7MB/s   00:00
base_adf_domain.log00001             100% 5001KB   4.9MB/s   00:00
CLIAudit.log                         100% 7401     7.2KB/s   00:00
CLI.log                              100% 1039KB   1.0MB/s   00:00
CLI.log.1                            100% 5120KB   5.0MB/s   00:01
metricdump-20131126.173510.log.gz    100%  596KB 595.9KB/s   00:00
metricdump-20131126.203511.log.gz    100%  595KB 595.1KB/s   00:00
metricdump-20131126.143507.log.gz    100%  593KB 593.4KB/s   00:00
metricdump-20131126.233509.log.gz    100%  596KB 596.1KB/s   00:00
diagnostic.log
The script collects the logs from both management nodes to ensure that the diagnostic data you send to Oracle Support is as complete and detailed as possible.
When all files have been collected, the script compresses them into a single tarball and displays a message with the name and location.
Compressing VMPinfo3<date>-<time>.
=======================================================================================
Please send /tmp/vmpinfo3-<version>-<date>-<time>.tar.gz to Oracle support
=======================================================================================
If the diagnostic script fails, collect the files manually.
Use a separate subdirectory for each component. For easy identification, use the host name as the directory name.
Because of the log rotation mechanism, additional files may exist with the same names but ending in extension .0, .1, .2 and so on. Please include those in the support data files as well.
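A shell glob picks up the rotated copies along with the base file; a hypothetical example, where both the source and target paths are illustrations only:

```shell
# Hypothetical sketch: copy ovca.log together with any rotated copies
# (ovca.log.0, ovca.log.1, ...) to the collection directory.
cp /var/log/ovca.log* /mnt/mynfsshare/ovca-support-data/ovcamn05r1/
```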
From both management nodes, copy these files:
the entire directory /u01/app/oracle/ovm-manager-3/domains/ovm_domain/servers/AdminServer/logs/
the Oracle Virtual Compute Appliance log files: /tmp/install.log, /var/log/ovca.log, /etc/ovca-info
the entire directory /opt/xsigo/xms/logs/
From each compute node, copy the entire /var/log/ directory, as well as the file /tmp/sosreport. If no such file exists, run the sosreport command to generate it.
From each Oracle Fabric Interconnect F1-15 Director Switch, copy the entire /var/log/ directory.
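The per-node collection can be scripted; the sketch below is an illustration only, reusing the NFS share from the earlier example, with compute node names taken from the sample output above:

```shell
# Hypothetical sketch: collect logs and sosreports from compute nodes
# into one subdirectory per host on the NFS share.
DEST=/mnt/mynfsshare/ovca-support-data
for node in ovcacn07r1 ovcacn08r1; do
    mkdir -p "$DEST/$node"
    # Generate a sosreport on the node if none exists yet.
    ssh root@"$node" 'ls /tmp/sosreport* >/dev/null 2>&1 || sosreport --batch'
    scp -r root@"$node":/var/log "$DEST/$node/"
    scp root@"$node":'/tmp/sosreport*' "$DEST/$node/"
done
```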
To allow better analysis of physical server issues, for example hanging, crashing or rebooting, also include the system memory dump file (vmcore). The location of the file is: <kdump-partition-mount-point>/var/crash/127.0.0.1-<date>-<time>/vmcore. The partition and mount point are defined during kdump configuration. For details, please consult the support note with Doc ID 1520837.1.
Collect the OSWatcher logs. The default location is /opt/osw.
For details, please consult the support note with Doc ID 580513.1.
Copy all diagnostic files to a location external to the appliance environment.
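Before uploading, it is convenient to bundle everything into a single compressed archive; a minimal sketch, where the directory path is an example only:

```shell
# Hypothetical sketch: bundle the collected support files into a
# single tarball for upload. SUPPORT_DIR is an example path.
SUPPORT_DIR=/tmp/ovca-support-data
mkdir -p "$SUPPORT_DIR"                      # collection directory must exist
tar czf "$SUPPORT_DIR.tar.gz" -C "$(dirname "$SUPPORT_DIR")" "$(basename "$SUPPORT_DIR")"
ls -lh "$SUPPORT_DIR.tar.gz"                 # confirm the archive was created
```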
For support data up to 2 GB, upload the file as part of the Service Request (SR) process in My Oracle Support (MOS).
If you are still in the process of logging the SR, upload the support data in the Upload Files/Attachments step of the SR.
If you have already logged the SR and need to upload files afterwards, proceed as follows:
Log in to MOS and open the Dashboard or Service Request tab.
In the Service Request region, click the SR you want to update.
In the Update section, select Add Attachment.
In the pop-up window, select the file for upload, include any notes, and click Attach File.
If uploading the support data with the SR is not an option, or for support data files over 2 GB in size, use the file transfer service from Oracle Support at sftp.oracle.com. Oracle Support might request that you upload using a different mechanism.
Using a browser or FTP client, access the Oracle SFTP server sftp.oracle.com at port 2021.
Log in with your Oracle Single Sign-On user name and password.
Select the support data file to upload.
Select a destination for the file.
Use the directory path provided by Oracle Support.
Typically, the directory path is constructed as follows: "/support/incoming/case_number/".
The use of a case number ensures that the file is correctly associated with the service request. Write down the full path to the file and the case number for future reference in communications with Oracle Support.
Click the Upload button to upload the file.
Some browsers do not show the progress of the upload.
Do not click the Upload button multiple times, as this restarts the transfer.
When the upload is complete, a confirmation message is displayed.
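The browser steps above can also be performed with a command-line SFTP client; a hypothetical sketch, where case_number is the placeholder from the directory path above and the file name is an example:

```shell
# Hypothetical sketch of an SFTP upload from the command line;
# substitute your SSO user name, case number and file name.
sftp -oPort=2021 sso_username@sftp.oracle.com <<'EOF'
cd /support/incoming/case_number
put /tmp/vmpinfo3-support-data.tar.gz
EOF
```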
For detailed information about the use of Oracle's SFTP server, refer to the support notes with Doc ID 549180.1 and Doc ID 464666.1.