B Bundled Scripts

Learn about the bundled scripts that Oracle Site Guard provides for Oracle Virtual Machine disaster recovery, WebLogic Server and Node Manager control, database control, ZFS storage role reversal, and ZFS analysis.

This appendix includes the following sections:

B.2 Oracle Virtual Machine (OVM) DR Script — siteguard_ovm_control.py

A script to perform disaster recovery operations for OVM (Oracle Virtual Machine) deployments.

Site Guard provides the siteguard_ovm_control.py bundled script for performing disaster recovery operations for OVM deployments that use OVM version 3.3.x or 3.4.x. For deployments in which Oracle Fusion Middleware and Oracle Fusion Applications are deployed inside OVM guests, Site Guard can facilitate the disaster recovery of the virtual machine guests in addition to the disaster recovery performed for the middleware and applications. This means that the VM guests running the middleware and applications are also relocated from the primary site to the standby site.

Note:

Oracle strongly recommends against using OVM DR for Oracle Database disaster recovery. Oracle Database disaster recovery should use Active Data Guard for protecting databases.

Configuring siteguard_ovm_control.py

The siteguard_ovm_control.py script is a multipurpose script that is used during all stages of the disaster recovery operation for an Oracle Virtual Machine deployment. The options and parameters provided to the script change depending on the specific stage of the DR operation.

Depending on the stage of the OVM DR operation, the siteguard_ovm_control.py script is configured either as a Site Guard Custom Precheck Script, a Pre Script, or a Post Script with the appropriate options, as shown in the examples in the Usage section below.

Custom Precheck Script

When configured as a Custom Precheck Script with the start_precheck or stop_precheck options, the script performs Prechecks to ensure that OVM guests can be started or stopped as part of the DR operation.

Pre Script

When configured as a Pre Script with the start_prepare or start options, the script prepares OVM repositories and starts OVM guests at the standby site.

Post Script

When configured as a Post Script with the stop or stop_cleanup options, the script stops OVM guests and cleans up OVM repositories at the primary site.

Sequence of Operations

In a typical switchover operation, the configured scripts are executed in the following sequence (a hypothetical command-level sketch follows the list).
  1. Precheck Phase

    • Custom Precheck Primary Site (stop_precheck option)

    • Custom Precheck Standby Site (start_precheck option)

  2. Post Script Phase at Primary Site

    • Post Script Primary Site (stop option)

    • Post Script Primary Site (stop_cleanup option)

  3. Pre Script Phase at Standby Site

    • Pre Script Standby Site (start_prepare option)

    • Pre Script Standby Site (start option)
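
The sketch below maps these phases onto hypothetical command-line invocations of the script. The host names, repository name, and pool name are placeholders, and in an actual deployment each command is configured as the corresponding Custom Precheck, Post, or Pre Script rather than run by hand.

# 1. Precheck phase
python siteguard_ovm_control.py --action stop_precheck --uri https://primovmm.example.com:7002 --vm "*:Prod Repo" --nocert
python siteguard_ovm_control.py --action start_precheck --uri https://stbyovmm.example.com:7002 --vm "*:Prod Repo" --nocert

# 2. Post Script phase at the primary site
python siteguard_ovm_control.py --action stop --uri https://primovmm.example.com:7002 --vm "*:Prod Repo" --nocert
python siteguard_ovm_control.py --action stop_cleanup --uri https://primovmm.example.com:7002 --repo "Prod Repo" --nocert

# 3. Pre Script phase at the standby site
python siteguard_ovm_control.py --action start_prepare --uri https://stbyovmm.example.com:7002 --repo "Prod Repo" --pool "Standby Pool" --nocert
python siteguard_ovm_control.py --action start --uri https://stbyovmm.example.com:7002 --vm "*:Prod Repo" --pool "Standby Pool" --nocert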

Usage

python siteguard_ovm_control.py 
  --action <action>
  --uri <uri>
  --pool <pool_name>
  --vm <vm_list>
  --repo <repo_list>
  --cert <cert_path>
  --signed <signed_cert_path>
  [--force]
  [--nocert]

This is the top-level entry point for Site Guard OVM disaster recovery operations. This script can be invoked through a Site Guard operation plan, or it can be run as a stand-alone script.

This script will perform the specified action on the specified list of VMs or repositories. For example:
  • Specifying a "stop_precheck" action for a list of guest VMs will perform a Site Guard Precheck to ensure that all the specified guest VMs exist and can be stopped at the primary site. Note: this will NOT actually stop the specified guest VMs.

  • Specifying a "stop" action for a list of guest VMs will shut down all the guest VMs. You will typically do this to stop the guest VMs at the primary site as part of a Switchover to another site.

  • Specifying a "start_precheck" action for list of guest VMs will perform a Site Guard Precheck to ensure that all the guest VMs can be started at the standby site. Note: this will NOT actually start the guest VMs.

  • Specifying a "start_prepare" action for a list of repositories will prepare the guest VMs for a "start" operation. You will typically do this before you start the standby site during a Switchover or Failover operation.

  • Specifying a "start" action for a list of guest VMs will assign all the guest VMs to the specified server pool and start the VMs. You will typically do this to after a "start_prepare" to start the guest VMs at the standby site during a Switchover or Failover operation.

Note:

Specifying the --force option with a "start_prepare" (or "start") action will forcibly acquire ownership of repositories (or forcibly start the specified guest VMs). You will typically do this during a Failover operation to forcibly start guest VMs at the standby site, without regard to what happened at the primary site. This may be necessary because the primary site may be unreachable and the guest VMs may not have been shut down cleanly at the primary site.

Options

  • -h, --help

    Show the help message and exit.

  • -a ACTION, --action=ACTION

    The disaster recovery action to perform <start | stop | start_prepare | stop_cleanup | start_precheck | start_prepare_precheck | stop_precheck | stop_cleanup_precheck>.

    MANDATORY argument. Default Value: <not applicable>.

    Example: --action start_precheck.

  • -f, --force

    Forcibly perform the specified action, ignoring any inconsistencies. This can be used to forcibly start guest VMs in the specified repositories at the standby site in the event of a failover. It may be necessary to do this in cases where the primary site is unreachable and a graceful shutdown of guest VMs is not possible. This flag applies only to the 'start' and 'start_prepare' actions; it is ignored for other actions.

    OPTIONAL argument. Default value: OFF (--force will not be used).

    Example: --force

  • -u URI, --uri=URI

    The OVM Manager URI including the port number.

    MANDATORY argument. Default Value: <not applicable>.

    Example: --uri https://ovmm.mycompany.com:7002

  • -r "Repository Name(s)", --repo="Repository Name(s)"

    A list of one or more repositories on which the action is to be performed. When specifying multiple repositories, separate repository names with commas. Repositories will be processed in the order specified.

    MANDATORY argument. Default Value: <not applicable>.

    Example: --repo "SiteA Repo Prod CRM (NAS), SiteA Repo Prod ERP (SAN), SiteA Repo Prod IDM DB (NAS)"

  • -v "Ordered list of VMs", --vm="Ordered list of VMs

    An ordered list of VMs (and their containing repositories) on which the action must be performed. The specified VMs will be processed in the order given. VMs and their repositories should be separated using the ":" character. When specifying multiple VM:repository pairs, separate the pairs with commas. To specify "All VMs in a repository", use the "*" character as a wildcard.

    MANDATORY argument. Default Value: <not applicable>.

    Example: --vm "*:SiteA Repo Prod CRM DB (SAN), Mid-Tier VM1:SiteA Repo Prod CRM MT (NAS), Mid-Tier VM2:SiteA Repo Prod CRM MT (NAS), *:SiteA Repo Prod IDM DB (NAS)"

  • -p "Pool Name", --pool="Pool Name

    The server pool name on which the action is performed. This argument is mandatory when the 'start' action is specified. It is ignored otherwise.

    CONDITIONALLY MANDATORY argument. Default Value: <not applicable>.

    Example: --pool "My Primary Pool"

  • -c /path/to/unsigned_certificate, --cert=/path/to/unsigned_certificate

    The path to your unsigned public SSL certificate (PEM).

    OPTIONAL argument. Default Value: <not applicable>

    Example: --cert /opt/ovmdr/cert/my-unsigned-cert.pem

  • -s /path/to/signed_certificate, --signed=/path/to/signed_certificate

    The path to store the signed OVM SSL certificate (PEM).

    OPTIONAL argument. Default Value: <not applicable>

    Example: --signed /opt/ovmdr/cert/ovm-signed-cert.pem

  • -n, --nocert

    Do not use certificates, and suppress certificate-related warnings.

    OPTIONAL argument. Default value: OFF (--nocert will not be used).

    Example: --nocert

Usage Examples

Example 1

Perform a "stop_precheck" at the primary site to ensure that we can stop the guest VMs in the repositories "SiteA Repo Prod CRM (NAS)" and "SiteA Repo Prod ERP (SAN)". Use the unsigned certificate "/opt/ovmdr/cert/my-cert.pem" when communicating with the OVM server. Use the "/opt/ovmdr/cert/my-signed-cert.pe" file to save the signed certificate received from the OVM Manager.
siteguard_ovm_control.py 
     --action stop_precheck
     --uri https://primovmm.mycompany.com:7002
     --vm "*:SiteA Repo Prod CRM (NAS), *:SiteA Repo Prod ERP (SAN)"
     --cert /opt/ovmdr/cert/my-cert.pem
     --signed /opt/ovmdr/cert/my-signed-cert.pem

Example 2

Perform a "start_prepare" at the standby site on the repositories "SiteA Repo Prod CRM (NAS)" and "SiteA Repo Prod ERP (SAN)". Assign all VMs to the server pool "Standby Server Pool Denver". Use the "--force" flag to indicate that this is part of a failover operation. Do not use signed or unsigned certificates and suppress any certificate-related warnings.
siteguard_ovm_control.py 
     --action start_prepare --force
     --uri https://stbyovmm.mycompany.com:7002
     --repo "SiteA Repo Prod CRM (NAS), SiteA Repo Prod ERP (SAN)"
     --pool "Standby Server Pool Denver"
     --nocert

Example 3

Perform a sequenced (ordered) "start" at the standby site on the guest VMs "RAC DB VM1" and "RAC DB VM2" in the repository "SiteA Repo Prod CRM (NAS)" and all the guest VMs in the repository "SiteA Repo Prod ERP (SAN)". Use the "--force" flag to indicate that this is part of a failover operation. Do not use signed or unsigned certificates and suppress any certificate-related warnings.
siteguard_ovm_control.py 
     --action start 
     --force
     --uri https://stbyovmm.mycompany.com:7002
     --vm "RAC DB VM1:SiteA Repo Prod CRM (NAS), RAC DB VM2:SiteA Repo Prod CRM (NAS), *:SiteA Repo Prod ERP (SAN)"
     --nocert

Example 4

Perform a sequenced (ordered) "stop" at the primary site on the guest VMs "Mid-Tier VM1" and "Mid-Tier VM2" in the repository "SiteA Repo Prod CRM (NAS)". Then, stop all remaining guest VMs in the repositories "SiteA Repo Prod CRM (NAS)" and "SiteA Repo Prod ERP (SAN)" (in any order). Do not use signed or unsigned certificates and suppress any certificate-related warnings.
siteguard_ovm_control.py 
     --action stop
     --uri https://primovmm.mycompany.com:7002
     --vm "Mid-Tier VM1:SiteA Repo Prod CRM (NAS), Mid-Tier VM2:SiteA Repo Prod CRM (NAS), *:SiteA Repo Prod CRM (NAS), *:SiteA Repo Prod ERP (SAN)"
     --nocert

Example 5

Perform a "stop_cleanup" at the primary site on the repositories "SiteA Repo Prod CRM (NAS)" and "SiteA Repo Prod ERP (SAN)". Do not use signed or unsigned certificates and suppress any certificate-related warnings.
siteguard_ovm_control.py 
     --action stop_cleanup
     --uri https://primovmm.mycompany.com:7002
     --repo "SiteA Repo Prod CRM (NAS), SiteA Repo Prod ERP (SAN)"
     --nocert

Note:

The following installation prerequisites must be satisfied before this script can execute on any host where it is configured to run:

  • You must install the python Requests module (version 2.5.1). See http://docs.python-requests.org/en/master/.

To ensure that the script runs with a Python installation that includes this module, specify the path to the correct python interpreter as part of the script configuration, for example:

/home/oracle/python2.6/bin/python siteguard_ovm_control.py [options]
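
To confirm that the Requests prerequisite is satisfied for the interpreter you plan to configure, a quick check along the following lines can be used; the interpreter path shown is only an illustration:

/home/oracle/python2.6/bin/python -c "import requests; print(requests.__version__)"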

B.3 Oracle Virtual Machine CLI (OVMCLI) DR Script — siteguard_ovmcli_control.py

A script to perform disaster recovery operations for OVM deployments.

Site Guard provides the siteguard_ovmcli_control.py bundled script for performing disaster recovery operations for OVM deployments that use OVM version 3.2.x. Disaster recovery for these OVM deployments is managed using the CLI version of the Site Guard OVM DR script. For deployments in which Oracle Fusion Middleware and Oracle Fusion Applications are deployed inside OVM guests, Site Guard can facilitate the disaster recovery of the virtual machine guests in addition to the disaster recovery performed for the middleware and applications. This means that the VM guests running the middleware and applications are also relocated from the primary site to the standby site.

Note:

Oracle strongly recommends against using OVM DR for Oracle Database disaster recovery. Oracle Database disaster recovery should use Active Data Guard for protecting databases.

Configuring siteguard_ovmcli_control.py

The siteguard_ovmcli_control.py script is a multipurpose script that is used during all stages of the disaster recovery operation for an Oracle Virtual Machine deployment. The options and parameters provided to the script change depending on the specific stage of the DR operation.

Depending on the stage of the OVM DR operation, the siteguard_ovmcli_control.py script is configured either as a Site Guard Custom Precheck Script, a Pre Script, or a Post Script with the appropriate options, as shown in the examples in the Usage section below.

Custom Precheck Script

When configured as a Custom Precheck Script with the start_precheck or stop_precheck options, the script performs Prechecks to ensure that OVM guests can be started or stopped as part of the DR operation.

Pre Script

When configured as a Pre Script with the start_prepare or start options, the script prepares OVM repositories and starts OVM guests at the standby site.

Post Script

When configured as a Post Script with the stop or stop_cleanup options, the script stops OVM guests and cleans up OVM repositories at the primary site.

Sequence of Operations

In a typical switchover operation, the configured scripts are executed in the following sequence.
  1. Precheck Phase

    • Custom Precheck Primary Site (stop_precheck option)

    • Custom Precheck Standby Site (start_precheck option)

  2. Post Script Phase at Primary Site

    • Post Script Primary Site (stop option)

    • Post Script Primary Site (stop_cleanup option)

  3. Pre Script Phase at Standby Site

    • Pre Script Standby Site (start_prepare option)

    • Pre Script Standby Site (start option)

Usage

python siteguard_ovmcli_control.py 
  --action <action>
  --host <host>
  --port <port>
  --pool <pool_name>
  --vm <vm_list>
  --repo <repo_list>
  [--force]
  

This is the script for Site Guard OVM disaster recovery operations. This script can be invoked through a Site Guard operation plan, or it can be run as a stand-alone script.

This script will perform the specified action on the specified list of VMs or repositories. For example:
  • Specifying a "stop_precheck" action for a list of guest VMs will perform a Site Guard Precheck to ensure that all the specified guest VMs exist and can be stopped at the primary site. Note: this will NOT actually stop the specified guest VMs.

  • Specifying a "stop" action for a list of guest VMs will shut down all the guest VMs. You will typically do this to stop the guest VMs at the primary site as part of a Switchover to another site.

  • Specifying a "stop_cleanup" action for a list of repositories will clean up the primary site after you have stopped all the guest VMs (using the "stop" action). You will typically do this to finish evacuating the primary site as part of a Switchover to another site.

  • Specifying a "start_precheck" action for list of guest VMs will perform a Site Guard Precheck to ensure that all the guest VMs can be started at the standby site. Note: this will NOT actually start the guest VMs.

    Specifying a "start_prepare" action for a list of repositories will prepare the guest VMs for a "start" operation. You will typically do this before you start the standby site during a Switchover or Failover operation.

  • Specifying a "start" action for a list of guest VMs will assign all the guest VMs to the specified server pool and start the VMs. You will typically do this to after a "start_prepare" to start the guest VMs at the standby site during a Switchover or Failover operation.

Note:

Specifying the --force option with a "start_prepare" (or "start") action will forcibly acquire ownership of repositories (or forcibly start the specified guest VMs). You will typically do this during a Failover operation to forcibly start guest VMs at the standby site, without regard to what happened at the primary site. This may be necessary because the primary site may be unreachable and the guest VMs may not have been shut down cleanly at the primary site.

Options

  • -h, --help

    Show the help message and exit.

  • -a ACTION, --action=ACTION

    The disaster recovery action to perform <start | stop | start_prepare | stop_cleanup | start_precheck | start_prepare_precheck | stop_precheck | stop_cleanup_precheck>.

    MANDATORY argument. Default Value: <not applicable>.

    Example: --action start_precheck.

  • -f, --force

    Forcibly perform the specified action, ignoring any inconsistencies. This can be used to forcibly start guest VMs in the specified repositories at the standby site in the event of a failover. It may be necessary to do this in cases where the primary site is unreachable and a graceful shutdown of guest VMs is not possible. This flag applies only to the 'start' and 'start_prepare' actions; it is ignored for other actions.

    OPTIONAL argument. Default value: OFF (--force will not be used).

    Example: --force

  • -h, --host=HOST

    The OVM Manager host to connect to for SSH connections.

    Type: OPTIONAL argument. Default Value: localhost.

    Example: --host ovmm.mycompany.com

  • -p, --port=PORT

    The SSH port number to connect to on the OVM Manager host (see the '--host' option).

    Type: OPTIONAL argument. Default Value: 10000.

    Example: --port 10000

  • -r "Repository Name(s)", --repo="Repository Name(s)"

    A list of one or more repositories (along with their storage servers and storage types) on which the action is to be performed. Each repository must be specified in the format <repo_name>:<storage_server_name>:<storage_type>, with the fields separated by the ":" character. The storage type must be one of 'nfs', 'iscsi', or 'fcp'. When specifying multiple repositories, separate the repository specifications with commas. Repositories will be processed in the order specified.

    MANDATORY argument. Default Value: <not applicable>.

    Example: --repo "SiteA-Repo1:SAN-Server-X:iscsi, SiteA-Repo2:SAN-Server-Y:fcp, SiteA-Repo3:File-Server-Z:nfs"

  • -v "Ordered list of VMs", --vm="Ordered list of VMs

    An ordered list of VMs (and their containing repositories) on which the action must be performed. The specified VMs will be processed in the order given. VMs and their repositories should be separated using the ":" character. When specifying multiple VM:repository pairs, separate the pairs with commas. To specify "All VMs in a repository", use the "*" character as a wildcard.

    MANDATORY argument. Default Value: <not applicable>.

    Example: --vm "*:SiteA-Repo-CRM, IDM-VM1:SiteA-Repo-IDM, IDM-VM2:SiteA-Repo-IDM, *:SiteA-Repo-IDM"

  • -p "Pool Name", --pool="Pool Name

    The server pool name on which the action is performed. This argument is mandatory when the 'start' action is specified. It is ignored otherwise.

    CONDITIONALLY MANDATORY argument. Default Value: <not applicable>.

    Example: --pool "My Primary Pool"

Usage Examples

Example 1

Perform a "stop_precheck" at the primary site to ensure that we can stop the guest VMs in the repositories "SiteA-Repo-CRM" and "SiteA-Repo-ERP". Connect to the default host (localhost) using the default port (10000).
siteguard_ovmcli_control.py 
     --action stop_precheck
     --vm "*:SiteA-Repo-CRM, *:SiteA-Repo-ERP"
  

Example 2

Perform a "start_prepare" at the standby site on the ISCCI repository "SiteA-Repo-CRM" on storage server "SAN-Server-100" and the NFS repository "SiteA-Repo-ERP" on file server "NFS-Server-200". Assign all VMs to the server pool "Standby Server Pool Denver". Use the "--force" flag to indicate that this is part of a failover operation. Connect to the host stbyovmm.mycompany.com using the default port (10000).
siteguard_ovmcli_control.py 
     --action start_prepare --force
     --host stbyovmm.mycompany.com
     --repo "SiteA-Repo-CRM:SAN-Server-100:iscsi, SiteA-Repo-ERP:NFS-Server-200:nfs"
     --pool "Standby Server Pool Denver"
     

Example 3

Perform a sequenced (ordered) "start" at the standby site on the guest VMs "CRM-VM1" and "CRM-VM2" in the repository "SiteA-Repo-CRM" and all the guest VMs in the repository "SiteA-Repo-ERP". Use the "--force" flag to indicate that this is part of a failover operation. Connect to the host stbyovmm.mycompany.com using the port 11000.
siteguard_ovmcli_control.py 
     --action start 
     --force
     --host stbyovmm.mycompany.com
     --port 11000
     --vm "CRM-VM1:SiteA-Repo-CRM, CRM-VM2:SiteA-Repo-CRM, *:SiteA-Repo-ERP"
     

Example 4

Perform a sequenced (ordered) "stop" at the primary site on the guest VMs "CRM-VM1" and "CRM-VM2" in the repository "SiteA-Repo-CRM". Then, stop all remaining guest VMs in the repositories "SiteA-Repo-CRM" and "SiteA-Repo-ERP" (in any order). Use default values for host and port.
siteguard_ovmcli_control.py 
     --action stop
     --vm "CRM-VM1:SiteA-Repo-CRM, CRM-VM2:SiteA-Repo-CRM, *:SiteA-Repo-CRM, *:SiteA-Repo-ERP"
   

Example 5

Perform a "stop_cleanup" at the primary site on the ISCCI repository "SiteA-Repo-CRM" on storage server "SAN-Server-100" and the NFS repository "SiteA-Repo-ERP" on file server "NFS-Server-200". Use default values for host and port.
siteguard_ovmcli_control.py 
     --action stop_cleanup
     --repo "SiteA-Repo-CRM:SAN-Server-100:iscsi, SiteA-Repo-ERP:NFS-Server-200:nfs"
    

B.4 WebLogic Server Control Script – wls_control_wrapper.pl

A script that allows you to configure custom Oracle WebLogic Server operations in the Pre or Post stages of an operation plan.

In previous versions of Site Guard, Oracle WebLogic Server (WLS) operations were not directly available for configuration by users. WLS operations could not be configured outside the operation plan bucket where WLS disaster recovery occurred. This WLS operation bucket was configured and pre-inserted by Site Guard at a fixed point in the operation plan.

The wls_control_wrapper.pl script solves this problem. The script is provided as a bundled (out-of-box) script, and it gives you the ability to add and configure your own custom WLS operations anywhere in the Pre or Post stages of an operation plan.

Usage

perl wls_control_wrapper.pl 
   --component '<component>' 
   --usecase    '<usecase>' 
   --wls_home '<wls_home>' 
   --mw_home '<mw_home>' 
   --oracle_home '<oracle_home>' 
   --domain_name '<domain_name>' 
   --domain_dir '<domain_directory>' 
   --server_name '<server_name>' 
   --server_type '<server_type>' 
   --admin_host '<admin_host>' 
   --admin_port '<admin_port>' 
   --nm_host '<node_manager_host>' 
   --timeout '<3600>'                     

Options

  • --component

    The type of the component on which the operation needs to be executed. Supported components: ADMIN_SERVER, MANAGED_SERVER, and CAM_COMPONENT.

  • --usecase

    The usecase to be executed. Supported usecases: ADMIN_SERVER_STATUS, ADMIN_SERVER_START, ADMIN_SERVER_STOP, MANAGED_SERVER_STATUS, MANAGED_SERVER_START, MANAGED_SERVER_STOP, CAM_COMPONENT_STATUS, CAM_COMPONENT_START, and CAM_COMPONENT_STOP.

  • --wls_home

    The WebLogic Server HOME directory.

  • --mw_home

    The Oracle Fusion Middleware HOME directory.

  • --oracle_home

    The WebLogic Server's ORACLE_HOME.

  • --domain_name

    The domain name.

  • --domain_dir

    The domain directory.

  • --server_name

    The WebLogic Administration Server’s name.

  • --server_type

    The type of the WebLogic Administration Server.

  • --admin_host

    The host of the WebLogic Administration Server.

  • --admin_port

    The port of the WebLogic Administration Server.

  • --nm_host

    The Node Manager host.

  • --help

    Prints a brief help message and exits.

  • --usage

    Prints the usage page and exits.

  • --manual

    Prints the manual page and exits.
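
As a sketch, the following hypothetical invocation checks the status of a WebLogic Administration Server. All paths, names, hosts, and ports are illustrative assumptions and must be replaced with values from your own environment:

perl wls_control_wrapper.pl 
   --component 'ADMIN_SERVER' 
   --usecase 'ADMIN_SERVER_STATUS' 
   --wls_home '/u01/oracle/middleware/wlserver' 
   --mw_home '/u01/oracle/middleware' 
   --oracle_home '/u01/oracle/middleware/oracle_common' 
   --domain_name 'mydomain' 
   --domain_dir '/u01/oracle/config/domains/mydomain' 
   --server_name 'AdminServer' 
   --server_type 'weblogic' 
   --admin_host 'adminhost01.mycompany.com' 
   --admin_port '7001' 
   --nm_host 'adminhost01.mycompany.com' 
   --timeout '3600'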

Note:

When configuring this script as a Pre or Post script, the perl interpreter used to execute this script must be the perl binary that is bundled with the Oracle Enterprise Manager agent. To ensure that you use the correct path to this perl interpreter, use one of the following methods.
  • Use $PERL_HOME/perl as the path of the perl interpreter.

  • Locate the perl installed as part of the EM agent installation on the host where this script will execute, and specify the explicit path to the perl interpreter, such as /home/oracle/emagent/agent_13.2.0.0.0/perl/bin/perl.
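
    If you need to locate the perl binary bundled with the agent, a search along the following lines can help; the agent installation path shown is only an assumption:

    find /home/oracle/emagent -type f -path '*/perl/bin/perl' 2>/dev/null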

B.5 Node Manager Control Script – nm_control_wrapper.pl

A script that allows you to configure custom Node Manager operations in the Pre or Post stages of an operation plan.

In previous versions of Site Guard, Node Manager (NM) operations were not directly available for configuration by users. NM operations could not be configured outside the operation plan bucket where NM disaster recovery occurred. This NM operation bucket was configured and pre-inserted by Site Guard at a fixed point in the operation plan.

The nm_control_wrapper.pl script solves this problem. The script is provided as a bundled (out-of-box) script, and it gives you the ability to add and configure your own custom NM operations anywhere in the Pre or Post stages of an operation plan.

Usage

perl nm_control_wrapper.pl 
   --usecase '<usecase>' 
   --wls_home '<wls_home>'
   --mw_home '<mw_home>' 
   --oracle_home '<oracle_home>' 
   --domain_name '<domain_name>' 
   --domain_dir '<domain_directory>' 
   --nm_host '<node_manager_host>' 
   --timeout '<3600>'

Options

  • --usecase

    The usecase to be executed. Supported usecases: NM_STATUS, NM_START, NM_STOP.

  • --wls_home

    The WebLogic Server HOME directory.

  • --mw_home

    The Oracle Fusion Middleware HOME directory.

  • --oracle_home

    The WebLogic Server's ORACLE_HOME.

  • --domain_name

    The domain name.

  • --domain_dir

    The domain directory.

  • --nm_host

    The Node Manager host.

  • --help

    Prints a brief help message and exits.

  • --usage

    Prints the usage page and exits.

  • --manual

    Prints the manual page and exits.
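
As a sketch, the following hypothetical invocation checks the status of a Node Manager. All paths, names, and hosts are illustrative assumptions:

perl nm_control_wrapper.pl 
   --usecase 'NM_STATUS' 
   --wls_home '/u01/oracle/middleware/wlserver' 
   --mw_home '/u01/oracle/middleware' 
   --oracle_home '/u01/oracle/middleware/oracle_common' 
   --domain_name 'mydomain' 
   --domain_dir '/u01/oracle/config/domains/mydomain' 
   --nm_host 'soahost01.mycompany.com' 
   --timeout '3600'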

Note:

When configuring this script as a Pre or Post script, the perl interpreter used to execute this script must be the perl binary that is bundled with the Oracle Enterprise Manager agent. To ensure that you use the correct path to this perl interpreter, use one of the following methods.
  • Use $PERL_HOME/perl as the path of the perl interpreter.

  • Locate the perl installed as part of the EM agent installation on the host where this script will execute, and specify the explicit path to the perl interpreter, such as /home/oracle/emagent/agent_13.2.0.0.0/perl/bin/perl.

B.6 Database Control Script - db_control_wrapper.pl

A ready-to-use script that allows you to add and configure custom database operations and prechecks in the Pre or Post stages of an operation plan.

In previous versions of Site Guard, Oracle database operations were not directly available for configuration by users. You could not configure database operations outside the operation plan bucket where database disaster recovery occurred. This database operation bucket was configured and pre-inserted by Oracle Site Guard at a fixed point in the operation plan.

The db_control_wrapper.pl script solves this problem.

Description

Performs database start, stop, switchover, failover, convert, and revert operations; additionally, it performs prechecks for these use cases.

Syntax

perl db_control_wrapper.pl 
    --usecase <usecase> 
    --oracle_home <oracle_home> 
    --oracle_sid <oracle_sid>
    --is_rac_database  <true/false> 
    --timeout <3600> 
    --target_db <target_db> 
    --target_optional_parameters <target_optional_parameters> 
    --operation_optional_parameters <operation_optional_parameters>

Parameter Description

--usecase

One of the following: START, START_PRECHECK, STOP, STOP_PRECHECK, SWITCHOVER, SWITCHOVER_PRECHECK, FAILOVER, FAILOVER_PRECHECK, CONVERT_PHYSICAL_TO_SNAPSHOT_STANDBY, CONVERT_PHYSICAL_TO_SNAPSHOT_STANDBY_PRECHECK, REVERT_SNAPSHOT_TO_PHYSICAL_STANDBY, REVERT_SNAPSHOT_TO_PHYSICAL_STANDBY_PRECHECK

--oracle_home

The database ORACLE_HOME.

--oracle_sid

The database ORACLE_SID.

--is_rac_database

Set to true for a RAC database; set to false for a non-RAC database.

--timeout

The time, in seconds, for the database role reversal polling timeout.

--target_db

The target database name.

--target_optional_parameters

Target runtime optional parameters.

Options: apply_lag, transport_lag

Format: 'apply_lag=-1&transport_lag=-1'

--operation_optional_parameters

Target operation optional parameters.

Options

force=<true/false>

enable_trace=<true/false>

immediate_failover=<true/false>

lag_check=<true/false>

Format

'force=false&lag_check=false&enable_trace=false'

--help

Prints a brief help message.

--usage

Prints a brief usage message.

--manual

Prints the manual page.
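
As a sketch, the following hypothetical invocation runs a switchover precheck against a standby database. The ORACLE_HOME, SID, and target database name are illustrative assumptions:

perl db_control_wrapper.pl 
    --usecase SWITCHOVER_PRECHECK 
    --oracle_home /u01/app/oracle/product/12.1.0/dbhome_1 
    --oracle_sid orcl1 
    --is_rac_database true 
    --timeout 3600 
    --target_db stbydb 
    --target_optional_parameters 'apply_lag=-1&transport_lag=-1' 
    --operation_optional_parameters 'force=false&lag_check=false&enable_trace=false'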

Note:

When configuring this script as a Pre or Post script, the perl interpreter used to execute this script must be the perl binary that is bundled with the Enterprise Manager agent. To ensure that you use the correct path to this perl interpreter, do one of the following:
  • Use $PERL_HOME/perl as the path of the perl interpreter.

  • Locate the perl installed as part of the EM agent installation on the host where this script will execute, and specify the explicit path to the perl interpreter (e.g. /home/oracle/emagent/agent_13.2.0.0.0/perl/bin/perl).

B.7 ZFS Storage Script - zfs_storage_role_reversal.sh

A ready-to-use script to perform ZFS storage-related prechecks in the Global Pre, Global Post, Pre, or Post stages of an operation plan.

In previous versions of Site Guard, ZFS storage role reversal operations were not directly available for configuration by users at any point in the operation plan. Although you could configure ZFS storage-related operations, you could not configure where these operations were inserted in the operation plan. This storage role reversal operation bucket was always pre-inserted by Site Guard at a fixed point in the operation plan.

The zfs_storage_role_reversal.sh script (previously available only as a storage script) solves this problem.

For more information about the use of this script, see zfs_storage_role_reversal.sh.

B.8 ZFS Analysis Script - zfs_analysis.sh

A ready-to-use script that analyzes and reports the lag in a ZFS replication configuration.

The script analyzes and prints all the occurrences when the replication lag exceeded the specified threshold (recovery point objective), and the amount of maximum lag during each of these occurrences. The script performs this analysis over the interval specified by the start_time and end_time parameters.

Oracle recommends that you use this script as a stand-alone tool for data collection and reporting in order to monitor the health of a ZFS replication configuration. You can also run this script as a Custom Precheck (and Health check) script in a traditional Site Guard operation plan, but you cannot depend on this script to trigger an operation plan failure, as you could with a traditional precheck script.

Script Usage

zfs_analysis.sh
  [--zfs_appliance <ZFS Appliance>]
  [--zfs_appliance_user <ZFS Appliance Username>]
  [--zfs_appliance_password <ZFS Appliance Password>]
  [--zfs_project_name <ZFS Project Name>]
  [--start_time <Start Time>]
  [--end_time <End Time>]
  [--objective <Replica Objective>]
  [--cluster_member_file <Cluster Member File>]
  [--objective_file <Objective File>]
  [--force <Force analytic start time>]

where,
  --zfs_appliance : [mandatory] ZFS appliance host
  --zfs_appliance_user : [mandatory] ZFS appliance username
  --zfs_appliance_password : [mandatory] ZFS appliance password
  --zfs_project_name : [mandatory] Project name
  --start_time : [mandatory] Start date/time
  --end_time : [mandatory] End date/time
  --objective : [mandatory] Replica lag threshold
  --cluster_member_file : File that declares a common name to use for the two nodes in each clustered storage appliance
  --objective_file : File that declares replica lag thresholds for specific replication actions
  --force : Force the analysis interval to start at the specified date/time
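
When run as a stand-alone tool, an invocation along the following lines can be used; the appliance name, user, project name, and analysis interval are illustrative placeholders:

sh zfs_analysis.sh
  --zfs_appliance zfsappl01.mycompany.com
  --zfs_appliance_user analysis_user
  --zfs_appliance_password <password>
  --zfs_project_name rproject01
  --start_time 2015-02-12
  --end_time 2015-06-10
  --objective 30m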

To configure the script as an Oracle Site Guard Custom Precheck script:

  1. Search for and select the entity "ZFS Lag Analysis Scripts" for the Software Library Entity field.

  2. Set the Script Path as illustrated in the following example:

    sh zfs_analysis.sh 
      --zfs_appliance zfsappl01.mycompany.com 
      --zfs_project_name rproject01 
      --end_time 2015-07-07 
      --objective 30m 
      --start_time 2015-07-08
    
  3. Select the host(s) on which to run the script.

  4. Under Advanced Options, select and configure the credential for the ZFS appliance to pass as a parameter to the script.

A sample script output follows:

Action: zfsappl01sn01&zfsappl02sn02:rproject01
Replication of rproject01 from zfsappl01sn01
to zfsappl02sn02(label=zfsappl02sn-fe)
during the 10172506 second analysis interval
beginning 2015-02-12 06:18:14 UTC and ending 2015-06-10 00:00:00 UTC.
Updates are manually initiated.
Recovery Point Objective is 1800 seconds (30 minutes).
Action UUID (unique identifier) = e1b57778-5e5a-4053-c96b-f5d6e15d3292
                                       |                            |  seconds
          replication update           | at completion, replica lag |   spent
_______________________________________|____________________________|   above
       started     |     completed     | had grown to | then became | objective
___________________|___________________|______________|_____________|___________
2015-02-12 06:18:14 2015-02-12 06:18:24          10             10            0
2015-02-12 06:50:21 2015-02-12 06:50:30        1936              9          136
2015-02-12 06:51:45 2015-02-12 06:51:53          92              8            0
2015-02-15 21:10:59 2015-02-15 21:11:19      310774             20       308974
2015-02-15 21:19:32 2015-02-15 21:19:52         533             20            0
2015-02-16 06:17:34 2015-02-16 06:17:43       32291              9        30491
2015-02-16 06:21:36 2015-02-16 06:21:44         250              8            0
2015-02-16 06:25:12 2015-02-16 06:25:23         227             11            0
2015-02-16 06:27:18 2015-02-16 06:27:30         138             12            0
2015-02-16 06:29:23 2015-02-16 06:29:35         137             12            0
2015-02-16 06:32:07 2015-02-16 06:32:19         176             12            0
2015-02-16 06:33:27 2015-02-16 06:33:39          92             12            0
2015-02-16 06:36:07 2015-02-16 06:36:22         175             15            0
2015-02-16 06:40:17 2015-02-16 06:40:35         268             18            0
2015-02-16 07:03:11 2015-02-16 07:03:33        1396             22            0
2015-02-16 07:26:19 2015-02-16 07:26:29        1398             10            0
2015-02-16 07:28:03 2015-02-16 07:28:15         116             12            0
2015-02-17 00:50:24 2015-02-17 00:50:36       62553             12        60753
2015-02-17 00:55:57 2015-02-17 00:56:09         345             12            0
2015-02-17 01:55:01 2015-02-17 01:55:13        3556             12         1756
2015-02-17 04:25:21 2015-02-17 04:25:32        9031             11         7231
2015-02-18 10:22:19 2015-02-18 10:22:31      107830             12       106030
2015-02-18 10:23:31 2015-02-18 10:23:43          84             12            0
2015-02-23 05:02:22 2015-02-23 05:02:34      412743             12       410943
2015-02-23 07:06:26 2015-02-23 07:06:38        7456             12         5656
at end of interval  2015-06-10 00:00:00     9219214                     9217414
 
 
Replication actions that did not satisfy their Recovery Point Objective
at some point during the 10172506 second analysis interval
beginning 2015-02-12 06:18:14 UTC and ending 2015-06-10 00:00:00 UTC.
 
 replication updates |   total   |     |     peak replica lag        |
_____________________|  seconds  |     |_____________________________|
       |    above    |   above   |     |         |                   |
 total |  objective  | objective | RPO | seconds |   date and time   | source&target:project/share
______ |_____________|___________|_____|_________|___________________|_____________________________
    59      48  81%   358352   4%  1800   340318  2015-06-08 17:47:55  zfsappl01sn01&zfsappl02sn02:1_WING
     3       2  67%  1493546  15%  1800   994866  2015-06-04 04:28:59  zfsappl01sn01&zfsappl02sn02:2_SG
     2       1  50%  1642394  16%  1800  1644194  2015-06-10 00:00:00  zfsappl01sn01&zfsappl02sn02:3_SG
     2       1  50%  1642180  16%  1800  1643980  2015-06-10 00:00:00  zfsappl01sn01&zfsappl02sn02:4_SG
     3       2  67%  1497621  15%  1800  1203470  2015-06-10 00:00:00  zfsappl01sn01&zfsappl02sn02:5_SG
     2       1  50%  6757712  66%  1800  6759512  2015-06-10 00:00:00  zfsappl01sn01&zfsappl02sn02:SiteGuard
    13       4  31%  9593787  94%  1800  9219201  2015-06-10 00:00:00  zfsappl01sn01&zfsappl02sn02:br_test
    26      10  38% 10149384 100%  1800  9219214  2015-06-10 00:00:00  zfsappl01sn01&zfsappl02sn02:rproject01

B.9 ZFS Standby Site Replication Cleanup Script – sg_zfs_utility.sh

A ready-to-use script that cleans up and correctly reconfigures stale ZFS replication actions in a multi-site replication configuration.

In multi-site DR configurations, where ZFS replication is configured from one primary site to two standby (DR) sites, ZFS storage role reversal from the primary site to one of the standby sites can leave the other (second) standby site isolated, and that site will end up with a stale replication action.

The following three-site example illustrates this situation.

Before DR operation:

  • Site A (Primary) —> ZFS replication —> Site B (Standby)

  • Site A (Primary) —> ZFS replication —> Site C (Standby)

Incomplete Storage Replication Configuration After Switchover from Site A to Site B:

  • Site B (New Primary) —> ZFS replication —> Site A (New Standby)

  • Site B (New Primary) —> [NO REPLICATION] —> Site C (Standby)

Desired Storage Replication Configuration After Switchover from Site A to Site B:

  • Site B (New Primary) —> ZFS replication —> Site A (New Standby)

  • Site B (New Primary) —> ZFS replication —> Site C (Standby)

The sg_zfs_utility.sh script can be used to help resolve this issue. To do this, configure DR storage operations using the zfs_storage_role_reversal.sh script as you normally would. In addition to this, configure the sg_zfs_utility.sh script as a Global Post Script to be executed at the end of the Site Guard operation plan. The sg_zfs_utility.sh script will ensure that any stale replication actions are cleaned up and reconfigured properly.

The following example shows the three scripts that are configured to correctly reverse and reconfigure storage replication in a three-site replication configuration:

  1. ZFS Role Reversal (Storage Script) configuration

    sh zfs_storage_role_reversal.sh --source_appliance siteA-zfs.mycompany.com --source_pool_name A_Pool --target_appliance siteB-zfs.mycompany.com --target_pool_name B_Pool --project_name myproject --is_sync_needed N --continue_on_sync_failure Y --sync_timeout 1800 --operation_type switchover

    Note:

    The zfs_storage_role_reversal.sh script MUST be configured using Stop on Error mode when you are additionally using the sg_zfs_utility.sh script to re-configure storage replication. Using Continue on Error mode will cause errors and adversely impact your disaster recovery operation.
  2. ZFS Cleanup of Stale Replication Action

    sh sg_zfs_utility.sh --operation_type CLEANUP_OLD_REPLICATION --project_name myproject --source_appliance siteA-zfs.mycompany.com --source_pool_name A_Pool --target_appliance siteC-zfs.mycompany.com --target_pool_name C_Pool

  3. ZFS Re-create New Replication Action

    sh sg_zfs_utility.sh --operation_type SETUP_NEW_REPLICATION --project_name myproject --source_appliance siteB-zfs.mycompany.com --source_pool_name B_Pool --target_appliance siteC-zfs.mycompany.com --target_pool_name C_Pool --replication_properties 'enabled=true|continuous=true|include_snaps=true|max_bandwidth=unlimited|use_ssl=true|target=cAppliance|pool=C_Pool|syncImmediate=true'