Known Issues

The following sections describe known issues with Oracle Cloud Infrastructure services.

Announcements

Currently, there are no known Announcements issues.

Anomaly Detection

Currently, there are no known issues with the Anomaly Detection service.

Application Performance Monitoring

Browser and Scripted Browser monitors might not run applications that use frames

Details: In Synthetic Monitoring, the Browser and Scripted Browser monitors might fail to run against applications that use frames.

Workaround: We are aware of the issue and working on a resolution. For Scripted Browser monitors, you can work around this issue by replacing index=<frame-index> with either id=<id-of-frame> or name=<name-of-frame> in the .side script.

For example, if this script is the original version:

{
      "id": "23956f51-8812-40e6-ac91-1d608871ee4c",
      "comment": "",
      "command": "selectFrame",
      "target": "index=0",
      "targets": [
        ["index=0"]
      ],
      "value": ""
    }

The following script would be the modified version:

{
      "id": "23956f51-8812-40e6-ac91-1d608871ee4c",
      "comment": "",
      "command": "selectFrame",
      "target": "id=frame1",
      "targets": [
        ["id=frame1"]
      ],
      "value": ""
    }

Direct link to this issue: Browser and Scripted Browser monitors might not run applications that use frames

Issues with the authorization policies based on the apm-domains resource tags

Details: Authorization policies based on the apm-domains resource tags do not work for the Trace Explorer and Synthetic Monitoring APIs, causing authorization failures.

Workaround: We are aware of the issue and working on a resolution.

Direct link to this issue: Issues with the authorization policies based on the apm-domains resource tags

Artifact Registry

For known issues with Artifact Registry, see Known Issues.

Audit

Currently, there are no known Audit issues.

Automated CEMLI Execution

For known issues with Automated CEMLI Execution, see Known Issues.

Autonomous Linux

For known issues with Autonomous Linux, see Known Issues.

Big Data

For known issues with Big Data Service, see Known Issues.

Block Volume

Cross-region replication not supported for volumes encrypted with customer-managed keys

Details: When you try to enable cross-region replication for a volume configured to use a Vault encryption key, the following error message occurs: Edit Volume Error: You cannot enable cross-region replication for volume <volume_ID> as it uses a Vault encryption key.

Workaround: We're working on a resolution. Cross-region replication is not supported for volumes encrypted with a customer-managed key. As a workaround to enable replication, unassign the Vault encryption key from the volume. In this scenario, the volume is encrypted with an Oracle-managed key.

Direct link to this issue: Cross-region replication not supported for volumes encrypted with customer-managed keys

Paravirtualized volume attachment not multipath-enabled after instance is resized

Details: To achieve the optimal performance level for volumes configured for ultra high performance, the volume attachment must be multipath-enabled. Multipath-enabled attachments to VM instances are only supported for instances based on shapes with 16 or greater OCPUs.

If you have an instance with fewer than 16 OCPUs, you can resize it so that it has 16 or more OCPUs to support multipath-enabled attachments. This step will not work for instances where the original number of OCPUs was less than 8 and the volume attachment is paravirtualized. In this scenario, after the volume is detached and reattached, the volume attachment will still not be multipath-enabled even though the instance now supports multipath-enabled attachments.

Workaround: As a workaround, we recommend that you create a new instance based on a shape with 16 or more OCPUs, and then attach the volume to the new instance.

Direct link to this issue: Paravirtualized volume attachment not multipath-enabled after instance is resized

Attaching the maximum number of block volumes to smaller VM.Standard.A1.Flex instances might fail

Details: When you attempt to attach the maximum number of block volumes to a smaller VM.Standard.A1.Flex instance, in some cases, the volumes might fail to attach. This happens because of limitations with the underlying physical host configuration.

Workaround: We're working on a resolution. As a workaround, we recommend that you increase the size of the VM by resizing it to a larger shape configuration, and then try attaching the volumes again.
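
For reference, here is a minimal CLI sketch of this workaround. It assumes the instance uses a flexible shape, that your CLI version of oci compute instance update accepts a --shape-config JSON document, and that the OCIDs and the 16 OCPU / 96 GB values are placeholders you replace with your own:

# Resize the VM by updating its shape configuration (example values only):
oci compute instance update --instance-id <instance_OCID> --shape-config '{"ocpus": 16, "memoryInGBs": 96}'

# After the instance is back in the RUNNING state, retry the attachment:
oci compute volume-attachment attach --instance-id <instance_OCID> --volume-id <volume_OCID> --type paravirtualized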

Direct link to this issue: Attaching the maximum number of block volumes to smaller VM.Standard.A1.Flex instances might fail

Vault encryption keys not copied to destination region for scheduled cross region backup copies

Details: When you schedule volume and volume group backups using a backup policy that is enabled for cross-region copy for volumes that are encrypted using Vault service encryption keys, the encryption keys are not copied with the volume backup to the destination region. The volume backup copies in the destination region are instead encrypted using Oracle-provided keys.

Workaround: We're working on a resolution. In the meantime, you can copy volume backups and volume group backups across regions manually or by using a script, and specify the key management key ID in the target region for the copy operation. For more information about manual cross-region copy, see Copying a Volume Backup Between Regions.
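
As an illustration only, a hedged CLI sketch of the manual copy, assuming your CLI version provides the oci bv backup copy command with --destination-region and --kms-key-id parameters; the key OCID must identify a key that already exists in the destination region:

# Copy a volume backup to another region, specifying a Vault key in the destination region:
oci bv backup copy --volume-backup-id <volume_backup_OCID> --destination-region <destination_region> --kms-key-id <destination_key_OCID>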

Direct link to this issue: Vault encryption keys not copied to destination region for scheduled cross region backup copies

Attaching a Windows boot volume as a data volume to another instance fails

Details: When you attach a Windows boot volume as a data volume to another instance and then try to connect to the volume using the steps described in Connecting to a Block Volume, the volume fails to attach and you might encounter the following error:

Connect-IscsiTarget : The target has already been logged in via an iSCSI session.

Workaround: You need to append the following to the Connect-IscsiTarget command copied from the Console:

-IsMultipathEnabled $True

Direct link to this issue: Attaching a Windows boot volume as a data volume to another instance fails

bootVolumeSizeInGBs attribute is null

Details: When calling GetInstance, the bootVolumeSizeInGBs attribute of InstanceSourceViaImageDetails is null.

Workaround: We're working on a resolution. To work around this issue, call GetBootVolume, and use the sizeInGBs attribute of BootVolume.
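
For example, a minimal CLI sketch of the workaround (the OCID is a placeholder, and the JMESPath query assumes the CLI's kebab-case field names):

# GetInstance can return a null boot volume size; query the boot volume directly instead:
oci bv boot-volume get --boot-volume-id <boot_volume_OCID> --query 'data."size-in-gbs"'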

Direct link to this issue: bootVolumeSizeInGBs attribute is null

Blockchain Platform

For known issues with Blockchain Platform, see Known Issues.

Certificates

For known issues with Certificates, see Known Issues.

Cluster Placement Groups

For known issues with Cluster Placement Groups, see Known Issues.

Compute Cloud@Customer

For known issues with Compute Cloud@Customer, see Known Issues.

Console

Bug in the Firefox browser can cause the Console not to load

Details: When you try to access the Console using Firefox, the Console page never loads in the browser. This problem is likely caused by a corrupted Firefox user profile.

Workaround: Create a new Firefox user profile as follows:

  1. Ensure that you are on the latest version of Firefox. If not, update to the latest version.
  2. Create a new user profile and remove your old user profile. See Mozilla Support for instructions to create and remove user profiles: https://support.mozilla.org/en-US/kb/profile-manager-create-and-remove-firefox-profiles.
  3. Open Firefox with the new profile.

Alternatively, you can use one of the other Supported Browsers.

Direct link to this issue: Bug in the Firefox browser can cause the Console not to load

Container Registry

For known issues with Container Registry, see Known Issues.

Data Catalog

For known issues with Data Catalog, see Known Issues.

Data Flow

For known issues with Data Flow, see Known Issues.

Data Integration

For known issues with Data Integration, see Known Issues.

Data Labeling

For known issues with Data Labeling, see Known Issues.

Data Science

Currently, there are no known issues with the Data Science service.

Data Transfer

Currently, there are no known Data Transfer issues.

Database

Existing PDBs in a new database

Details: Existing PDBs do not appear immediately in a newly created database; it can take up to a few hours before they appear in the Console. This includes the default PDB for a new database and existing PDBs for cloned or restored databases. For an in-place restore to an earlier version, the PDB list is updated similarly and can be delayed.

Workaround: None

Direct link to this issue: Existing PDBs in a new database

PDB in existing Data Guard configuration

Details: Creating and cloning a PDB in the primary database is not supported through the Console or the API.

Workaround: You can use SQL*Plus to create or clone PDBs in the primary database; they are synced to the OCI Console later.
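
For example, a minimal SQL*Plus sketch run on the primary database. The PDB names, admin password, and FILE_NAME_CONVERT paths are placeholders; depending on your storage layout (for example, OMF on ASM) and your Data Guard settings, FILE_NAME_CONVERT may not be needed at all:

sqlplus / as sysdba <<'EOF'
-- Create a new PDB on the primary (placeholder names and paths):
CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdbadmin IDENTIFIED BY "<password>"
  FILE_NAME_CONVERT = ('/pdbseed/', '/pdb2/');
-- Or clone an existing PDB:
CREATE PLUGGABLE DATABASE pdb3 FROM pdb1;
ALTER PLUGGABLE DATABASE pdb2 OPEN;
EXIT
EOF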

Direct link to this issue: PDB in existing Data Guard configuration

Migrating file-based TDE wallet to customer-managed key-based TDE wallet on Oracle Database 12c R1

Details: Using the Database Service API to migrate a file-based TDE wallet to a customer-managed key-based TDE wallet on Oracle Database 12c release 1 (12.1.0.2) fails with the following error:

[FATAL] [DBAAS-11014] - Required patches (30128047) are not present in the Oracle home <ORACLE_HOME>
ACTION: Apply the required patches (30128047) and re-try the operation

Workaround: Use the DBAASCLI utility with the --skip_patch_check true flag to skip the validation of the patch for bug 30128047. Ensure that you have applied the patch for bug 31527103 in the Oracle home and then run the following dbaascli command:
nohup /var/opt/oracle/dbaascli/dbaascli tde file_to_hsm --dbname <database_name> --kms_key_ocid <kms_key_ocid> --skip_patch_check true &

In the preceding command, <kms_key_ocid> refers to the OCID of the customer-managed key you are using.

Migrating customer-managed key-based TDE wallet to file-based TDE wallet on Oracle Database 12c R1

Details: Using the Database Service API to migrate a customer-managed key-based TDE wallet to a file-based TDE wallet on Oracle Database 12c release 1 (12.1.0.2) fails with the following error:

[FATAL] [DBAAS-11014] - Required patches (30128047) are not present in the Oracle home <ORACLE_HOME>
ACTION: Apply the required patches (30128047) and re-try the operation

Workaround: Use the DBAASCLI utility with the --skip_patch_check true flag to skip the validation of the patch for bug 30128047. Ensure that you have applied the patch for bug 29667994 in the Oracle home and then run the following dbaascli command:
nohup /var/opt/oracle/dbaascli/dbaascli tde hsm_to_file --dbname <database_name> --skip_patch_check true &

Migrating file-based TDE wallet to customer-managed key-based TDE wallet on Oracle Database 12c R2

Details: Using the Database Service API to migrate a file-based TDE wallet to customer-managed key-based TDE wallet on Oracle Database 12c release 2 (12.2.0.1) fails with the following error:

[FATAL] [DBAAS-11014] - Required patches (30128047) are not present in the Oracle home <ORACLE_HOME>
ACTION: Apply the required patches (30128047) and re-try the operation

Workaround: Migrate a file-based TDE wallet to a customer-managed key-based TDE wallet, as follows:

  1. Determine whether the database has encrypted UNDO or TEMP tablespaces in any of the pluggable databases or in CDB$ROOT, as follows:
    Run the following query from CDB$ROOT to list all encrypted tablespaces contained within all pluggable databases:
    SQL> select tstab.con_id, tstab.name from v$tablespace tstab, v$encrypted_tablespaces enctstab where tstab.ts# = enctstab.ts# and encryptedts = 'YES';

    In the NAME column of the query result, search for the names of UNDO and TEMP tablespaces. If there are encrypted UNDO or TEMP tablespaces, then proceed to the next step.

  2. Unencrypt UNDO or TEMP tablespaces, as follows:

    If an UNDO tablespace is encrypted

    Unencrypt existing UNDO tablespaces, as follows:
    SQL> alter tablespace <undo_tablespace_name> encryption online decrypt;

    Repeat this procedure for all encrypted UNDO tablespaces.

    If a TEMP tablespace is encrypted
    1. Check the default TEMP tablespace, as follows:
      SQL> select property_value from database_properties where property_name = 'DEFAULT_TEMP_TABLESPACE';
      If the default TEMP tablespace is not encrypted but other TEMP tablespaces are encrypted, then drop the other TEMP tablespaces, as follows:
      SQL> drop tablespace <temp_tablespace_name>;

      Skip the remainder of the steps in this procedure.

      If the default TEMP tablespace is encrypted, then proceed with the remaining steps to create and set an unencrypted default TEMP tablespace.

    2. Set the encrypt_new_tablespaces parameter to DDL, as follows:
      SQL> alter system set "encrypt_new_tablespaces" = DDL scope = memory;
    3. Create a TEMP tablespace with the specifications of the current TEMP tablespace, as follows:
      SQL> create temporary tablespace <temp_tablespace_name> TEMPFILE size 5000M;
    4. Set the new TEMP tablespace as the default TEMP tablespace for the database, as follows:
      SQL> alter database default temporary tablespace <temp_tablespace_name>;
    5. Drop existing TEMP tablespaces, as follows:
      SQL> drop tablespace <temp_tablespace_name>;

    Repeat this procedure for all encrypted TEMP tablespaces.

    The database is now running with default UNDO and TEMP tablespaces that are not encrypted and any older TEMP and UNDO tablespaces are also decrypted.

    Set encrypt_new_tablespaces to its original value, as follows:
    SQL> alter system set "encrypt_new_tablespaces" = cloud_only;

    Proceed with keystore migration to customer-managed keys.

  3. Once you confirm that there are no UNDO or TEMP tablespaces encrypted in any of the pluggable databases or in CDB$ROOT, use the DBAASCLI utility with the --skip_patch_check true flag to skip the validation of the patch for bug 30128047. Ensure that you have applied the patch for bug 31527103 in the Oracle home and then run the following dbaascli command:
    nohup /var/opt/oracle/dbaascli/dbaascli tde file_to_hsm --dbname <database_name> --kms_key_ocid <kms_key_ocid> --skip_patch_check true &

    In the preceding command, <kms_key_ocid> refers to the OCID of the customer-managed key you are using.

Migrating customer-managed key-based TDE wallet to file-based TDE wallet on Oracle Database 12c R2

Details: Using the Database Service API to migrate a customer-managed key-based TDE wallet to a file-based TDE wallet on Oracle Database 12c release 2 (12.2.0.1) fails with the following error:

[FATAL] [DBAAS-11014] - Required patches (30128047) are not present in the Oracle home <ORACLE_HOME>
ACTION: Apply the required patches (30128047) and re-try the operation

Workaround: Migrate a customer-managed key-based TDE wallet to a file-based TDE wallet, as follows:

  1. Determine whether the database has encrypted UNDO or TEMP tablespaces in any of the pluggable databases or in CDB$ROOT, as follows:
    Run the following query from CDB$ROOT to list all encrypted tablespaces contained within all pluggable databases:
    SQL> select tstab.con_id, tstab.name from v$tablespace tstab, v$encrypted_tablespaces enctstab where tstab.ts# = enctstab.ts# and encryptedts = 'YES';

    In the NAME column of the query result, search for the names of UNDO and TEMP tablespaces. If there are encrypted UNDO or TEMP tablespaces, then proceed to the next step.

  2. Unencrypt UNDO or TEMP tablespaces, as follows:

    If an UNDO tablespace is encrypted

    Unencrypt existing UNDO tablespaces, as follows:
    SQL> alter tablespace <undo_tablespace_name> encryption online decrypt;

    Repeat this procedure for all encrypted UNDO tablespaces.

    If a TEMP tablespace is encrypted
    1. Check the default TEMP tablespace, as follows:
      SQL> select property_value from database_properties where property_name = 'DEFAULT_TEMP_TABLESPACE';
      If the default TEMP tablespace is not encrypted but other TEMP tablespaces are encrypted, then drop the other TEMP tablespaces, as follows:
      SQL> drop tablespace <temp_tablespace_name>;

      Skip the remainder of the steps in this procedure.

      If the default TEMP tablespace is encrypted, then proceed with the remaining steps to create and set an unencrypted default TEMP tablespace.

    2. Set the encrypt_new_tablespaces parameter to DDL, as follows:
      SQL> alter system set "encrypt_new_tablespaces" = DDL scope = memory;
    3. Create a TEMP tablespace with the specifications of the current TEMP tablespace, as follows:
      SQL> create temporary tablespace <temp_tablespace_name> TEMPFILE size 5000M;
    4. Set the new TEMP tablespace as the default TEMP tablespace for the database, as follows:
      SQL> alter database default temporary tablespace <temp_tablespace_name>;
    5. Drop existing TEMP tablespaces, as follows:
      SQL> drop tablespace <temp_tablespace_name>;

    Repeat this procedure for all encrypted TEMP tablespaces.

    The database is now running with default UNDO and TEMP tablespaces that are not encrypted and any older TEMP and UNDO tablespaces are also decrypted.

    Set encrypt_new_tablespaces to its original value, as follows:
    SQL> alter system set "encrypt_new_tablespaces" = cloud_only;

    Proceed with keystore migration to customer-managed keys.

  3. Once you confirm that there are no UNDO or TEMP tablespaces encrypted in any of the pluggable databases or in CDB$ROOT, use the DBAASCLI utility with the --skip_patch_check true flag to skip the validation of the patch for bug 30128047. Ensure that you have applied the patch for bug 29667994 in the Oracle home and then run the following dbaascli command:

    nohup /var/opt/oracle/dbaascli/dbaascli tde hsm_to_file --dbname <database_name> --skip_patch_check true &

Billing issue when changing license type

Details: When you change the license type of your Database or DB system from BYOL to license included, or the other way around, you are billed for both types of licenses for the first hour. After that, you are billed according to your updated license type.

Workaround: We're working on a resolution.

Direct link to this issue: Billing issue when changing license type

RESOLVED: Service gateway does not currently support OS updates

Details: If you configure your VCN with a service gateway, the private subnet blocks access to the YUM repositories needed to update the OS. This issue affects all types of DB systems.

Workaround: This issue is now resolved. Here is the workaround that was recommended before the issue's resolution:

The service gateway enables access to the Oracle YUM repos if you use the service CIDR label called All <region> Services in Oracle Services Network (see Overview of Service Gateways). However, you might still have issues accessing the YUM services through the service gateway. For a solution, see Access issues for instances to Oracle yum services through service gateway.

Direct link to this issue: Service gateway does not currently support OS updates

Bare Metal and Virtual Machine DB Systems Only

Backing up to Object Storage using dbcli or RMAN fails due to certificate change

Details: Unmanaged backups to Object Storage using the database CLI (dbcli) or RMAN fail with the following errors:

-> Oracle Error Codes found:
-> ORA-19554: error allocating device, device type: SBT_TAPE, device name:
-> ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
-> KBHS-00712: ORA-29024 received from local HTTP service
-> ORA-27023: skgfqsbi: media manager protocol error

In response to policies implemented by two common web browsers regarding Symantec certificates, Oracle recently changed the certificate authority used for Oracle Cloud Infrastructure. The resulting change in SSL certificates can cause backups to Object Storage to fail if the Oracle Database Cloud Backup Module still points to the old certificate.

Workaround for dbcli: Check the log files for the errors listed and, if found, update the backup module.

Review the RMAN backup log files for the errors listed above:

  1. Determine the ID of the failed backup job.

    dbcli list-jobs

    In this example output, the failed backup job ID is "f59d8470-6c37-49e4-a372-4788c984ea59".

    [root@<node name> ~]# dbcli list-jobs
     
    ID                                       Description                                                                 Created                             Status
    ---------------------------------------- --------------------------------------------------------------------------- ----------------------------------- ----------
    cbe852de-c0f3-4807-85e8-7523647ec78c     Authentication key update for DCS_ADMIN                                     March 30, 2018 4:10:21 AM UTC       Success
    db83fdc4-5245-4307-88a7-178f8a0efa48     Provisioning service creation                                               March 30, 2018 4:12:01 AM UTC       Success
    c1511a7a-3c2e-4e42-9520-f156b1b4cf0e     SSH keys update                                                             March 30, 2018 4:48:24 AM UTC       Success
    22adf146-9779-4a2c-8682-7fd04d7520b2     SSH key delete                                                              March 30, 2018 4:50:02 AM UTC       Success
    6f2be750-9823-4ed5-b5ff-8e49f136dd22     create object store:bV0wqIaoLA4xLT4dGjOu                                    March 30, 2018 5:33:38 AM UTC       Success
    0716f464-1a10-40df-a303-cadee0302b1b     create backup config:bV0wqIaoLA4xLT4dGjOu_BC                                March 30, 2018 5:33:49 AM UTC       Success
    e08b21c3-cd09-4e3a-944c-d1da96cb21d8     update database : hfdb1                                                     March 30, 2018 5:34:04 AM UTC       Success
    1c3d7c58-79c3-4039-8f48-787057ce7c6e     Create Longterm Backup with TAG-DBTLongterm<identity number> for Db:<dbname>    March 30, 2018 5:37:11 AM UTC       Success
    f59d8470-6c37-49e4-a372-4788c984ea59     Create Longterm Backup with TAG-DBTLongterm<identity number> for Db:<dbname>    March 30, 2018 5:43:45 AM UTC       Failure
  2. Use the ID of the failed job to obtain the location of the log file to review.

    
    dbcli describe-job -i <failed_job_ID>

    Relevant output from the describe-job command should look like this:

    Message: DCS-10001:Internal error encountered: Failed to run Rman statement.
    Refer log in Node <node_name>: /opt/oracle/dcs/log/<node_name>/rman/bkup/<db_unique_name>/rman_backup/<date>/rman_backup_<date>.log.

Update the Oracle Database Cloud Backup Module:

  1. Determine the Swift object store ID and user the database is using for backups.

    1. Run the dbcli list-databases command to determine the ID of the database.

    2. Use the database ID to determine the backup configuration ID (backupConfigId).

      dbcli list-databases
      dbcli describe-database -i <database_ID> -j
    3. Using the backup configuration ID you noted from the previous step, determine the object store ID (objectStoreId).

      dbcli list-backupconfigs
      dbcli describe-backupconfig -i <backupconfig_ID> -j
    4. Using the object store ID you noted from the previous step, determine the object store user (userName).

      dbcli list-objectstoreswifts
      dbcli describe-objectstoreswift -i <objectstore_ID> -j
  2. Using the object store credentials you obtained from step 1, update the backup module.

    dbcli update-objectstoreswift -i <objectstore_ID> -p -u <user_name>

Workaround for RMAN: Check the RMAN log files for the error messages listed. If found, log on to the host as the oracle user, and use your Swift credentials to reinstall the backup module.

Note

Swift passwords are now called "Auth tokens." For details, see Using an Auth Token with Swift.
java -jar <opc_install.jar_path> -opcId '<swift_user_ID>' -opcPass '<auth_token>' -container <objectstore_container> -walletDir <wallet_directory> -configfile <config_file> -host https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<object_storage_namespace> -import-all-trustcerts

For a multi-node DB system, perform the workaround on all nodes in the cluster.

See Oracle Database Cloud Backup Module documentation for details on using this command.

Direct link to this issue: Backing up to Object Storage using dbcli or RMAN fails due to certificate change

Breaking changes in Database service SDKs

Details: The SDKs released on October 18, 2018 introduce code-breaking changes to the database size and the database edition attributes in the database backup APIs.

Workaround: Refer to the following language-specific documentation for more details about the breaking changes, and update your existing code as applicable:

Direct link to this issue: Breaking changes in Database service SDKs

Unable to use Managed Backups in your DB system

Details: Backup and restore operations might not work in your DB system when you use the Console or the API.

Workaround: Install the Oracle Database Cloud Backup Module, and then contact Oracle Support Services for further instructions.

To install the Oracle Database Cloud Backup Module:

  1. SSH to the DB system, and log in as opc.

    
    ssh -i <SSH_key> opc@<DB_system_IP address>
    login as: opc

    Alternatively, you can use opc@<DB_system_hostname> to log in.

  2. Download the Oracle Database Cloud Backup Module from http://www.oracle.com/technetwork/database/availability/oracle-cloud-backup-2162729.html.
  3. Extract the contents of opc_installer.zip to a target directory, for example, /home/opc.
  4. In your tenancy, create a temporary user, and grant them privileges to access the tenancy's Object Storage.
  5. For this temporary user, create an auth token (see Working with Auth Tokens) and note down the token.
  6. Verify that credentials work by running the following curl command:

    Note

    Swift passwords are now called "Auth tokens." For details, see Using an Auth Token with Swift.
    curl -v -X HEAD -u  <user_id>:'<auth_token>' https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<object_storage_namespace>

    See https://cloud.oracle.com/infrastructure/storage/object-storage/faq for the correct region to use.

    The command should return either the HTTP 200 or the HTTP 204 No Content success status response code. Any other status code indicates a problem connecting to Object Storage.

  7. Run the following command:

    java -jar opc_install.jar -opcid <user_id> -opcPass '<auth_token>' -libDir <target_dir> -walletDir <target_dir> -host https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<object_storage_namespace> -configFile config.txt

    Note that <target_dir> is the directory to which you extracted opc_installer.zip in step 3.

    This command might take a few minutes to complete because it downloads libopc.so and other files. Once the command completes, you should see several files (including libopc.so) in your target directory.

  8. Change directory to your target directory, and copy the libopc.so and opc_install.jar files into the /opt/oracle/oak/pkgrepos/oss/odbcs directory.

    cp libopc.so /opt/oracle/oak/pkgrepos/oss/odbcs
    
    
    cp opc_install.jar /opt/oracle/oak/pkgrepos/oss/odbcs

    (You might have to use sudo with the copy commands to run them as root.)

  9. Run the following command to check whether the directory indicated exists:

    
    
    ls /opt/oracle/dcs/commonstore/pkgrepos/oss/odbcs

    If this directory exists, perform the following steps:

    1. Back up the files in the /opt/oracle/dcs/commonstore/pkgrepos/oss/odbcs directory.
    2. Run these two commands to replace the existing libopc.so and opc_install.jar files in that directory:

      
      cp libopc.so /opt/oracle/dcs/commonstore/pkgrepos/oss/odbcs
      cp opc_install.jar /opt/oracle/dcs/commonstore/pkgrepos/oss/odbcs
  10. Verify the version of opc_install.jar.

    
    java -jar /opt/oracle/oak/pkgrepos/oss/odbcs/opc_install.jar |grep -i build
    

    If /opt/oracle/dcs/commonstore/pkgrepos/oss/odbcs exists, also run the following command:

    
    java -jar /opt/oracle/dcs/commonstore/pkgrepos/oss/odbcs/opc_install.jar |grep -i build

    Both commands should return the following output:

    Oracle Database Cloud Backup Module Install Tool, build MAIN_2017-08-16.
  11. (Optional) Delete the temporary user and the target directory you used to install the backup module.

After you complete the procedure, contact Oracle Support or your tenant administrator for further instructions. You must provide the OCID of the DB system for which you would like to enable backups.

Direct link to this issue: Unable to use Managed Backups in your DB System

Managed Automatic Backups fail on the VM.Standard1.1 shape due to a process crash

Details: Memory limitations of host machines running the VM.Standard1.1 shape can cause failures for automatic database backup jobs managed by Oracle Cloud Infrastructure (jobs managed by using either the Console or the API). You can change the systems' memory parameters to resolve this issue.

Workaround: Change the systems' memory parameters as follows:

  1. Switch to the oracle user in the operating system.

    [opc@hostname ~]$ sudo su - oracle
  2. Set the environment variable to log in to the database instance. For example:

    
    [oracle@hostname ~]$ . oraenv
     ORACLE_SID = [oracle] ? orcl
    				
  3. Start SQL*Plus.

    [oracle@hostname ~]$ sqlplus / as sysdba
  4. Change the initial memory parameters as follows:

    
    SQL> ALTER SYSTEM SET SGA_TARGET = 1228M scope=spfile;
    SQL> ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 1228M;
    SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 2457M;
    SQL> exit
    							
  5. Restart the database instance.

    
    [oracle@hostname ~]$ srvctl stop database -d db_unique_name -o immediate
    [oracle@hostname ~]$ srvctl start database -d db_unique_name -o open								

Direct link to this issue: Managed Automatic Backups fail on the VM.Standard1.1 shape due to a process crash

Oracle Data Pump operations return "ORA-00439: feature not enabled"

Details: On High Performance and Extreme Performance DB systems, Data Pump utility operations that use compression and/or parallelism might fail and return the error ORA-00439: feature not enabled. This issue affects database versions 12.1.0.2.161018 and 12.1.0.2.170117.

Workaround: Apply patch 25579568 or 25891266 to Oracle Database homes for database versions 12.1.0.2.161018 or 12.1.0.2.170117, respectively. Alternatively, use the Console to apply the April 2017 patch to the DB system and database home.

Note

Determining the Version of a Database in a Database Home

To determine the version of a database in a database home, run either $ORACLE_HOME/OPatch/opatch lspatches as the oracle user or dbcli list-dbhomes as the root user.
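
For example:

# As the oracle user, list the patches applied to the Oracle home:
$ORACLE_HOME/OPatch/opatch lspatches

# Or, as the root user, list the Database Homes and their versions:
dbcli list-dbhomes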

Direct link to this issue: Oracle Data Pump operations return "ORA-00439: feature not enabled"

Unable to connect to the EM Express console from your 1-node DB system

Details: You might get a "Secure Connection Failed" error message when you try to connect to the EM Express console from your 1-node DB system because the correct permissions were not applied automatically.

Workaround: Add read permissions for the asmadmin group on the wallet directory of the DB system, and then retry the connection:

  1. SSH to the DB system host, log in as opc, and switch to the grid user.

    [opc@dbsysHost ~]$ sudo su - grid
    [grid@dbsysHost ~]$ . oraenv
    ORACLE_SID = [+ASM1] ?
    The Oracle base has been set to /u01/app/grid
    
  2. Get the location of the wallet directory, shown in the following command output.

    [grid@dbsysHost ~]$ lsnrctl status | grep xdb_wallet
    
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=dbsysHost.sub04061528182.dbsysapril6.oraclevcn.com)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/admin/dbsys12_phx3wm/xdb_wallet))(Presentation=HTTP)(Session=RAW))
  3. Return to the opc user, switch to the oracle user, and change to the wallet directory.

    [opc@dbsysHost ~]$ sudo su - oracle
    [oracle@dbsysHost ~]$ cd /u01/app/oracle/admin/dbsys12_phx3wm/xdb_wallet
  4. List the directory contents and note the permissions.

    
    [oracle@dbsysHost xdb_wallet]$ ls -ltr
    total 8
    -rw------- 1 oracle asmadmin 3881 Apr  6 16:32 ewallet.p12
    -rw------- 1 oracle asmadmin 3926 Apr  6 16:32 cwallet.sso
    
  5. Change the permissions:

    
    [oracle@dbsysHost xdb_wallet]$ chmod 640 /u01/app/oracle/admin/dbsys12_phx3wm/xdb_wallet/*
  6. Verify that read permissions were added.

    [oracle@dbsysHost xdb_wallet]$ ls -ltr
    total 8
    -rw-r----- 1 oracle asmadmin 3881 Apr  6 16:32 ewallet.p12
    -rw-r----- 1 oracle asmadmin 3926 Apr  6 16:32 cwallet.sso
    

Direct link to this issue: Unable to connect to the EM Express console from your 1-node DB system

Exadata DB Systems Only

Backing up to Object Storage using bkup_api or RMAN fails due to certificate change

Details: Backup operations to Object Storage using the Exadata backup utility (bkup_api) or RMAN fail with the following errors:

* DBaaS Error trace:
-> API::ERROR -> KBHS-00715: HTTP error occurred 'oracle-error'
-> API::ERROR -> ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
-> API::ERROR -> ORA-19554: error allocating device, device type: SBT_TAPE, device name:
-> API::ERROR -> ORA-27023: skgfqsbi: media manager protocol error
-> API::ERROR Unable to verify the backup pieces
-> Oracle Error Codes found:
-> ORA-19554: error allocating device, device type: SBT_TAPE, device name:
-> ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
-> KBHS-00712: ORA-29024 received from local HTTP service
-> ORA-27023: skgfqsbi: media manager protocol error

In response to policies implemented by two common web browsers regarding Symantec certificates, Oracle recently changed the certificate authority used for Oracle Cloud Infrastructure. The resulting change in SSL certificates can cause backups to Object Storage to fail if the Oracle Database Cloud Backup Module still points to the old certificate.

Important

Before using the applicable workaround in this section, follow the steps in Updating Tooling on an Exadata Cloud Service Instance to ensure the latest version of dbaastools_exa is installed on the system.

Workaround for bkup_api: Check the log files for the errors listed above, and if found, reinstall the backup module.

Use the following command to check the status of the failed backup:

/var/opt/oracle/bkup_api/bkup_api bkup_status --dbname=<database_name>

Run the following command to reinstall the backup module:

/var/opt/oracle/ocde/assistants/bkup/bkup -dbname=<database_name>

Workaround for RMAN: Check the RMAN log files for the error messages listed. If found, log on to your host as the oracle user, and reinstall the backup module using your Swift credentials.

Note

Swift passwords are now called "Auth tokens." For details, see Using an Auth Token with Swift.
java -jar <opc_install.jar_path> -opcId '<Swift_user_ID>' -opcPass '<auth_token>' -container <objectstore_container> -walletDir <wallet_directory> -configfile <config_file> -host https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<object_storage_namespace> -import-all-trustcerts

Perform this workaround on all nodes in the cluster.

See Oracle Database Cloud Backup Module documentation for details on using this command.

Direct link to this issue: Backing up to Object Storage using bkup_api or RMAN fails due to certificate change

Console information not synced for Data Guard enabled databases when using dbaascli

Details: With the release of the shared Database Home feature for Exadata DB systems, the Console now also synchronizes and displays information about databases that are created and managed by using the dbaasapi and dbaascli utilities. However, databases with Data Guard configured do not display correct information in the Console under the following conditions:

  • If Data Guard was enabled by using the Console, and then a change is made to the primary or standby database by using dbaascli (such as moving the database to a different home), the result is not reflected in the Console.
  • If Data Guard was configured manually, the Console does not show a Data Guard association between the two databases.

Workaround: We are aware of the issue and working on a resolution. In the meantime, Oracle recommends that you manage your Data Guard enabled databases by using either only the Console or only command line utilities.

Direct link to this issue: Console information not synced for Data Guard enabled databases when using dbaascli

Grid Infrastructure does not start after offlining and onlining a disk

Details: This is a clusterware issue that occurs only when the Oracle GI version is 12.2.0.1 without any bundle patch. The problem is caused by corruption of a voting disk after you offline then online the disk.

Workaround: Determine the version of the GI, and whether the voting disk is corrupted. Repair the disk, if applicable, and then apply the latest GI bundle.

  1. Verify the GI version is 12.2.0.1 without any bundle patch applied:

    
    [root@rmstest-udaau1 ~]# su - grid
    [grid@rmstest-udaau1 ~]$ . oraenv
    ORACLE_SID = [+ASM1] ? +ASM1
    The Oracle base has been set to /u01/app/grid
    [grid@rmstest-udaau1 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory
    Oracle Interim Patch Installer version 12.2.0.1.6
    Copyright (c) 2018, Oracle Corporation.  All rights reserved.
    
    
    Oracle Home       : /u01/app/12.2.0.1/grid
    Central Inventory : /u01/app/oraInventory
       from           : /u01/app/12.2.0.1/grid/oraInst.loc
    OPatch version    : 12.2.0.1.6
    OUI version       : 12.2.0.1.4
    Log file location : /u01/app/12.2.0.1/grid/cfgtoollogs/opatch/opatch2018-01-15_22-11-10PM_1.log
    
    Lsinventory Output file location : /u01/app/12.2.0.1/grid/cfgtoollogs/opatch/lsinv/lsinventory2018-01-15_22-11-10PM.txt
    
    --------------------------------------------------------------------------------
    Local Machine Information::
    Hostname: rmstest-udaau1.exaagclient.sretest.oraclevcn.com
    ARU platform id: 226
    ARU platform description:: Linux x86-64
    
    Installed Top-level Products (1):
    
    Oracle Grid Infrastructure 12c                                       12.2.0.1.0
    There are 1 products installed in this Oracle Home.
    
    
    There are no Interim patches installed in this Oracle Home.
    
    
    --------------------------------------------------------------------------------
    
    OPatch succeeded.
  2. Check the /u01/app/grid/diag/crs/<hostname>/crs/trace/ocssd.trc file for evidence that the GI failed to start due to voting disk corruption:

    ocssd.trc
     
    2017-01-17 23:45:11.955 :    CSSD:3807860480: clssnmvDiskCheck:: configured 
    Sites = 1, Incative sites = 1, Mininum Sites required = 1 
    2017-01-17 23:45:11.955 :    CSSD:3807860480: (:CSSNM00018:)clssnmvDiskCheck: 
    Aborting, 2 of 5 configured voting disks available, need 3 
    ...... 
    . 
    2017-01-17 23:45:11.956 :    CSSD:3807860480: clssnmCheckForNetworkFailure: 
    skipping 31 defined 0 
    2017-01-17 23:45:11.956 :    CSSD:3807860480: clssnmRemoveNodeInTerm: node 4, 
    slcc05db08 terminated. Removing from its own member and connected bitmaps 
    2017-01-17 23:45:11.956 :    CSSD:3807860480: 
    ################################### 
    2017-01-17 23:45:11.956 :    CSSD:3807860480: clssscExit: CSSD aborting from 
    thread clssnmvDiskPingMonitorThread 
    2017-01-17 23:45:11.956 :    CSSD:3807860480: 
    ################################### 
    2017-01-17 23:45:11.956 :    CSSD:3807860480: (:CSSSC00012:)clssscExit: A 
    fatal error occurred and the CSS daemon is terminating abnormally 
     
    ------------
     
    2017-01-19 19:00:32.689 :    CSSD:3469420288: clssnmFindVF: Duplicate voting disk found in the queue of previously configured disks 
    queued(o/192.168.10.18/PCW_CD_02_slcc05cel10|[66223efc-29254fbb-bf901601-21009 
    cbd]), 
    found(o/192.168.10.18/PCW_CD_02_slcc05cel10|[66223efc-29254fbb-bf901601-21009c 
    bd]), is not corrupted 
    2017-01-19 19:01:06.467 :    CSSD:3452057344: clssnmvVoteDiskValidation: 
    Voting disk(o/192.168.10.19/PCW_CD_02_slcc05cel11) is corrupted
  3. You can also use SQL*Plus to confirm that the voting disks are corrupted:

    1. Log in as the grid user, and set the environment to ASM.

      [root@rmstest-udaau1 ~]# su - grid
      [grid@rmstest-udaau1 ~]$ . oraenv
      ORACLE_SID = [+ASM1] ? +ASM1
      The Oracle base has been set to /u01/app/grid
    2. Log in to SQL*Plus as SYSASM.

      $ORACLE_HOME/bin/sqlplus / as sysasm
    3. Run the following two queries:

      SQL> select name, voting_file from v$asm_disk where VOTING_FILE='Y' and group_number !=0;
      SQL> select  CC.name, count(*) from x$kfdat AA JOIN (select disk_number, name from v$asm_disk where VOTING_FILE='Y' and group_number !=0) CC ON CC.disk_number = AA.NUMBER_KFDAT where AA.FNUM_KFDAT= 1048572 group by CC.name;

      If the system is healthy, the results should look like the following example.

      Query 1 Results

      NAME                           VOTING_FILE
      ------------------------------ ---------------
      DBFSC3_CD_02_SLCLCX0788        Y
      DBFSC3_CD_09_SLCLCX0787        Y
      DBFSC3_CD_04_SLCLCX0786        Y

      Query 2 Results

      NAME                           COUNT(*)
      ------------------------------ ---------------
      DBFSC3_CD_02_SLCLCX0788        8
      DBFSC3_CD_09_SLCLCX0787        8
      DBFSC3_CD_04_SLCLCX0786        8

      In a healthy system, every voting disk returned in the first query should also be returned in the second query and the counts for all the disks should be non-zero. Otherwise, one or more of your voting disks are corrupted.

  4. If a voting disk is corrupted, offline the grid disk that contains the voting disk. The cells will automatically move the bad voting disk to the other grid disk and online that voting disk.

    1. The following command offlines a grid disk named DATAC01_CD_05_SCAQAE08CELADM13.

      SQL> alter diskgroup DATAC01 offline disk DATAC01_CD_05_SCAQAE08CELADM13;
           Diskgroup altered.
    2. Wait 30 seconds and then rerun the two queries in step 3c to verify that the voting disk migrated to the new grid disk and that it is healthy.

    3. Verify the grid disk you offlined is now online:

      SQL> select name, mode_status, voting_file from v$asm_disk where name='DATAC01_CD_05_SCAQAE08CELADM13';

      The mode_status should be ONLINE, and the voting_file should NOT be Y.

    Repeat steps 4a through 4c for each remaining grid disk that contains a corrupt voting disk.
    Note

    If the CRS does not start because of the voting disk corruption, start it using Exclusive mode before you execute the command in step 4.

    crsctl start crs -excl
     
  5. If you are using Oracle GI version 12.2.0.1 without any bundle patch, you must upgrade the GI version to the latest GI bundle, whether or not a voting disk was corrupted.

    See Patching Oracle Grid Infrastructure and Oracle Databases Using dbaascli for instructions on how to use the dbaascli utility to perform patching operations for Oracle Grid Infrastructure and Oracle Database on Exadata Database Service on Dedicated Infrastructure.

Direct link to this issue: Grid Infrastructure does not start after offlining and onlining a disk

Managed features not enabled for systems provisioned before June 15, 2018

Details: Exadata DB systems launched on June 15, 2018 or later automatically include the ability to create, list, and delete databases by using the Console, API, or Oracle Cloud Infrastructure CLI. However, systems provisioned before this date require extra steps to enable this functionality.

Attempts to use this functionality without the extra steps result in the following error messages:

  • On creating a database - "Create Database is not supported on this Exadata DB system. To enable this feature, please contact Oracle Support."
  • On terminating a database - "DeleteDbHome is not supported on this Exadata DB system. To enable this feature, please contact Oracle Support."

Workaround: You need to install the Exadata agent on each node of the Exadata DB system.

First, create a service request for assistance from Oracle Support Services. Oracle Support will respond by providing you with a preauthenticated URL for an Oracle Cloud Infrastructure Object Storage location where you can obtain the agent.

Before you install the Exadata agent:

To install the Exadata agent:

  1. Log on to the node as root.
  2. Run the following commands to install the agent:

    [root@<node_n>~]# cd /tmp
    [root@<node_n>~]# wget https://objectstorage.<region_name>.oraclecloud.com/p/1q523eOkAOYBJVP9RYji3V5APlMFHIv1_6bAMmxsS4E/n/dbaaspatchstore/b/dbaasexadatacustomersea1/o/backfill_agent_package_iwwva.tar
    [root@<node_n>~]# tar -xvf /tmp/backfill_agent_package_*.tar -C /tmp
    [root@<node_n>~]# rpm -ivh /tmp/dbcs-agent-2.5-3.x86_64.rpm

    Example output:

    [root@<node_n>~]# rpm -ivh dbcs-agent-2.5-3.x86_64.rpm
    Preparing...                ########################################### [100%]
    Checking for dbaastools_exa rpm on the system
    Current dbaastools_exa version = dbaastools_exa-1.0-1+18.1.4.1.0_180725.0000.x86_64
    dbaastools_exa version dbaastools_exa-1.0-1+18.1.4.1.0_180725.0000.x86_64 is good. Continuing with dbcs-agent installation
       1:dbcs-agent             ########################################### [100%]
    initctl: Unknown instance:
    initctl: Unknown instance:
    initzookeeper start/running, process 85821
    initdbcsagent stop/waiting
    initdbcsadmin stop/waiting
    initdbcsagent start/running, process 85833
    initdbcsadmin start/running, process 85836
    
  3. Confirm that the agent is installed and running.

    [root@<node_n>~]# rpm -qa | grep dbcs-agent
    dbcs-agent-2.5-0.x86_64
    [root@<node_n>~]# initctl status initdbcsagent
    initdbcsagent start/running, process 97832
  4. Repeat steps 1 through 3 on the remaining nodes.

After the agent is installed on all nodes, allow up to 30 minutes for Oracle to complete additional workflow tasks such as upgrading the agent to the latest version, rotating the agent credentials, and so on. When the process is complete, you should be able to use the Exadata managed features in the Console, API, or Oracle Cloud Infrastructure CLI.

Direct link to this issue: Managed features not enabled for systems provisioned before June 15, 2018

Patching configuration file points to wrong region

Details: The patching configuration file (/var/opt/oracle/exapatch/exadbcpatch.cfg) points to the object store of the us-phoenix-1 region, even if the Exadata DB system is deployed in another region.

This problem occurs if the release version of the database tooling package (dbaastools_exa) is 17430 or lower.

Workaround: Follow the instructions in Updating Tooling on an Exadata Cloud Service Instance to confirm that the release version of the tooling package is 17430 or lower, and then update it to the latest version.

Direct link to this issue: Patching configuration file points to wrong region

Various database workflow failures due to Oracle Linux 7 removal of required temporary files

Details: A change in how Oracle Linux 7 handles temporary files can result in the removal of required socket files from the /var/tmp/.oracle directory. This issue affects only Exadata DB systems running the version 19.1.2 operating system image.

Workaround: Run sudo /usr/local/bin/imageinfo as the opc user to determine your operating system image version. If your image version is 19.1.2.0.0.190306, follow the instructions in Doc ID 2498572.1 to fix the issue.
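
For reference, the version check from the workaround, run as the opc user on each database node:

[opc@<node_name> ~]$ sudo /usr/local/bin/imageinfo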

Direct link to this issue: Various database workflow failures due to Oracle Linux 7 removal of required temporary files

Virtual machine DB system storage scaling

If you are scaling either regular data storage or recovery area (RECO) storage from a value less than 10,240 GB (10 TB) to a value exceeding 10,240 GB, perform the scaling in two operations. First, scale the system to 10,240 GB. After this first scaling operation is complete and the system is in the "available" state, perform a second scaling operation, specifying your target storage value above 10,240 GB. Attempting to scale from a value less than 10,240 GB to a value higher than 10,240 GB in a single operation can lead to a failure of the scaling operation. For instructions on scaling, see Scale Up the Storage For a Virtual Machine DB System.

Virtual Machine DB systems shape scaling fails because DB_Cache_nX parameter is not 0 (zero)

Details: When scaling a virtual machine DB system to use a larger system shape, the scaling operation fails if a DB_Cache_nX parameter is not set to 0 (zero).

Workaround: When scaling a virtual DB system, ensure that all DB_Cache_nX parameters (for example, DB_nK_CACHE_SIZE) are set to 0.
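
For example, a minimal SQL*Plus sketch to review the nonstandard block size cache parameters and reset any nonzero ones before scaling. The 16K parameter is only an example; repeat the ALTER SYSTEM statement for each DB_nK_CACHE_SIZE parameter that is set on your system:

sqlplus / as sysdba <<'EOF'
-- List the DB_nK_CACHE_SIZE parameters (2K through 32K):
show parameter k_cache_size
-- Example: reset one of them to 0 before scaling the shape:
ALTER SYSTEM SET DB_16K_CACHE_SIZE = 0 SCOPE=BOTH SID='*';
EXIT
EOF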

DNS

Currently, there are no known DNS issues.

Document Understanding

For known issues with Document Understanding, see Known Issues.

Events

Currently, there are no known issues for Events.

Full Stack Disaster Recovery

Volume group backups to perform intra-region, cross-AD DR

Details: If you use volume group backups when performing DR operations for compute and storage across different ADs within the same region, back and forth DR transitions will cause the compute and associated block storage (which uses volume group backups) to end up in a different AD each time.

Workaround: This issue does not affect block storage that is replicated using volume group replication.

Auto-tune performance settings for block storage volumes are not carried over during DR operations

Details: Auto-tune performance settings for block storage volumes are not carried over during DR operations.

Workaround: For block storage volumes that have auto-tune performance enabled, you must re-enable these settings after Full Stack DR transitions the volumes to another region.

Modifications made to Full Stack DR-protected resources may cause problems in certain failover situations

Details: If you perform a failover operation immediately after modifying a Full Stack DR-protected resource, the resource recovery may fail, or the resource may not be recovered properly. For example, if you change the replication target or other properties for a volume group that you added to a DR protection group, and the primary region suffers an immediate outage thereafter, Full Stack DR may not detect the changes you made to the volume group replication settings, and this will affect recovery of that volume group.

Workaround: Perform a switchover precheck immediately after making any changes to any resources under DR protection.

User-defined steps on Microsoft Windows instances cannot use "Run As User" when executing local scripts

Details: Full Stack DR uses the Oracle Cloud Agent (OCA) Run Command utility to run local scripts on instances. When you configure a user-defined step to run a local script on a Microsoft Windows instance, then you can't use the Full Stack DR Run As User feature that allows you to specify a different userid to run local scripts that reside on instances.

Workaround: On Microsoft Windows instances, the script can only run as the default ocarun userid used by the Oracle Cloud Agent Run Command utility. This limitation does not affect Oracle Linux instances.

User-defined steps on Microsoft Windows instances can't use scripts inaccessible to the 'ocarun' userid

Details: Full Stack DR uses the Oracle Cloud Agent (OCA) Run Command utility to run local scripts on instances. By default, these scripts are run as the ocarun user.

Workaround: On a Microsoft Windows instance, any local script that you configure to run as a user-defined step in a DR plan must be accessible and executable by this ocarun userid.

For a local script run using a user-defined step, not providing full paths causes errors

Details: When running a local script using a user-defined step in a DR plan, if you do not provide full paths to script interpreters or scripts, then Full Stack DR will throw errors.

Workaround: When you configure a user-defined step in a DR plan to run a local script that resides on an instance, ensure that you provide the full path to any interpreter that may precede the script name, as well as the full path to the script.

Specify /bin/sh /path/to/myscript.sh arg1 arg2 instead of sh myscript.sh arg1 arg2

OCFS2 cluster nodes will detach from the cluster if their private IPs can't be reassigned in the standby region

Details: During DR operations, Full Stack DR attempts to reassign the original private IP assigned to an instance if the CIDR-block of the destination subnet matches the CIDR-block of the source subnet, and if the original private IP of the instance is not already assigned.

If you use Full Stack DR to relocate all the nodes in an OCFS2 cluster, and the private IP for any of the cluster nodes can't be reassigned, those nodes will detach from the OCFS2 cluster after the nodes are launched in the standby region.

Workaround: Ensure that the destination subnet's CIDR-block matches the CIDR-block of the source subnet and all private IP addresses required for cluster nodes are available in the destination subnet.

After DR operations, compute instances may display incorrect information for "Instance Access"

Details: After Full Stack DR relocates an instance to a different region, the resource page of the instance may display the following message for Instance Access:

We are not quite sure how to connect to an instance that uses this image

Workaround: Ignore this message. SSH connections to the instance will work normally if you use the original SSH keyfile to connect to and authenticate the instance.

After DR operations, boot volumes for an instance may not display the correct Image information

Details: After Full Stack DR relocates an instance to a different region, the resource page of the instance may display incorrect information for the Image portion of its boot volume.

For example, the Image information column may display the following message: Error loading data

Workaround: This error affects only the display of the image name; it does not affect the operation of the instance or its boot volume.

The command for running background jobs fails at the user-defined step

Details: When there is no sleep after the nohup command, the run command fails to execute and fails to report its status successfully.

Workaround: To start a process in the background, add sleep in the wrapper script, as shown here:
nohup sh enabler.sh  &> enabler.log &
sleep 10
exit 0

Performance settings for block volumes are not replicated and restored automatically

Details: During a DR transition, when the block volumes are moved to a different region, the performance settings (IOPS and Throughput) are not replicated and restored automatically. You may need to reconfigure these performance settings.

Workaround: After executing a DR plan, configure the performance settings manually.

Globally Distributed Autonomous Database

For known issues with Globally Distributed Autonomous Database, see Known Issues.

Integration

For known issues with Integration Generation 2, see Known Issues.

For known issues with Integration 3, see Known Issues.

Java Management

For details about known issues in the Java Management service, see Known Issues.

Language

Currently, there are no known issues with the Language service.

Load Balancer

For known issues with the Load Balancer service, see Known Issues.

Logging Analytics

On-demand upload from a Windows machine using a zip file

Details: The on-demand upload of a zip file created on a Windows machine might sometimes fail to upload the log content. A zip created on Windows records each file's last modification time as the file's creation time, so when the file is unzipped, its last modification time is set to the creation time, which might be older than the timestamps of the log entries in the log file. In that case, log entries with timestamps more recent than the file's last modification time are not uploaded.

An example of the issue:

Timestamp on the log entry: 2020-10-12 21:12:06

File last modification time of the log file: 2020-10-10 08:00:00

Workaround: Copy the log files to a new folder and create a zip file. This action makes the file's last modification time more recent than the timestamp of the log entries. Use this new zip file for on-demand upload.

Using the previous example, after the workaround is implemented:

Timestamp on the log entry: 2020-10-12 21:12:06

File last modification time of the log file: 2021-03-31 08:00:00

Direct link to this issue: On-demand upload from a Windows machine using a zip file

Special handling when monitoring logs in large folders

Details: Folders containing more than 10,000 files may cause high resource (memory / storage / CPU) usage by the Management Agent which may lead to slow log collection, affect other functionalities of the Management Agent, and may also slow down the host machine.

When large folders are encountered by the Management Agent Logging Analytics plug-in, a message similar to the following example message is added to the Management Agent mgmt_agent_logan.log file:

2020-07-30 14:46:51,653 [LOG.Executor.2388 (LA_TASK_os_file)-61850] INFO - ignore large dir /u01/service/database/logs. set property loganalytics.enable_large_dir to enable. 

Resolution: We recommend avoiding large folders. Utilize a cleanup mechanism to remove files soon after they are collected so that the Management Agent would have sufficient time to collect them again.

However, if you want to continue monitoring logs in large folders, then you can enable the support by performing the following action:

sudo -u mgmt_agent echo "loganalytics.enable_large_dir=true" >> INSTALL_DIRECTORY/agent_inst/config/emd.properties

Replace INSTALL_DIRECTORY with the path to the agent_inst folder and restart the agent.
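
For example, assuming the agent runs as the default mgmt_agent systemd service (the service name can differ on your host), the restart might look like this:

# Restart the Management Agent so the new property takes effect:
sudo systemctl restart mgmt_agent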

You may have to make some configuration changes on your host agent to enable this support. Try the new settings in a development or test environment before making them in production. Determine the increase for the following factors by using a representative environment to test them. The actual required increase will depend on factors such as number of files, rate of file creation, and the other types of collection that the Management Agent is doing.
  • Increase the heap size of the Management Agent. For directories with a large number of files, the required heap size increases with the number of files. See Management Agent documentation.
  • Ensure that sufficient disk space and inodes are available for handling the large number of state files that the Management Agent may have to keep. This depends on the type of log source and parser used. If your parser uses the Header-Detail function, then the agent creates and stores the header in a cache file as long as the original log file exists.
  • Ensure that the operating system setting for the number of open files can support the Management Agent reading the large folder and a potentially large number of state files (see the sketch after this list).
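
The following sketch shows one way to spot-check the disk, inode, and open-file headroom mentioned in the preceding list. INSTALL_DIRECTORY is the same agent installation path used earlier; the headroom you actually need depends on your file counts and collection rate:

# Free space and free inodes on the filesystem holding the agent state files:
df -h INSTALL_DIRECTORY/agent_inst
df -i INSTALL_DIRECTORY/agent_inst

# Open-file limit in effect for the agent user:
sudo -u mgmt_agent bash -c 'ulimit -n'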

Direct link to this issue: Special handling when monitoring logs in large folders

Managed Access

For known issues with Managed Access, see Known Issues.

Managed Cloud Self Service Platform

For known issues with Managed Cloud Self Service Platform, see Known Issues.

Management Agent

Currently, there are no known Management Agent issues.

Marketplace

For known issues with Marketplace, see Known Issues.

Media Services

For known issues with Media Flow, see Known Issues.

For known issues with Media Streams, see Known Issues.

Network Load Balancer

For known issues with Network Load Balancer, see Known Issues.

OCI Control Center

For known issues with OCI Control Center, see Known Issues.

Ops Insights

Currently, there are no known Ops Insights issues.

Oracle Cloud Marketplace

For known issues with Oracle Cloud Marketplace, see Known Issues.

OS Management Hub

For known issues with OS Management Hub, see Known Issues.

Partner Portal

For known issues with Partner Portal, see Known Issues.

Process Automation

For details about known issues in the Process Automation service, see Known Issues.

Publisher

For known issues with Marketplace, see Known Issues.

Queue

Currently, there are no known Queue issues.

Roving Edge Infrastructure

Currently, there are no known Roving Edge Infrastructure issues.

Secure Desktops

For known issues with Secure Desktops, see Known Issues.

Search with OpenSearch

For known issues with Search with OpenSearch, see Known Issues.

Security Zones

For known issues with Security Zones, see Known Issues.

Service Mesh

For known issues with Service Mesh, see Known Issues.

Service Catalog

For known issues with Service Catalog, see Known Issues.

Speech

Currently, there are no known issues with the Speech service.

Tenancy Management

For known issues with Tenancy Management, see Known Issues.

Threat Intelligence

For known issues with Threat Intelligence, see Known Issues.

Traffic Management Steering Policies

Currently, there are no known Traffic Management issues.

Vault

Currently, there are no known Vault service issues.

Web Application Acceleration

For known issues with Web Application Acceleration, see Known Issues.

Zero Trust Packet Routing

For known issues with Zero Trust Packet Routing, see Known Issues.