This appendix describes the database control, ZFS storage, and ZFS analysis scripts.
The following scripts are bundled with Oracle Site Guard:
In previous versions of Site Guard, Oracle database operations were not directly available for configuration by users. You could not configure database operations outside the operation plan bucket where database disaster recovery occurred. This database operation bucket was configured and pre-inserted by Site Guard at a fixed point in the operation plan.
The db_control_wrapper.pl script solves this problem. It is a ready-to-use script that lets you add and configure custom database prechecks or operations anywhere in the Pre or Post stages of an operation plan.
Name
db_control_wrapper.pl - Oracle Site Guard Database Control Wrapper Script
Description
Performs database start, stop, switchover, failover, and convert operations; it also performs prechecks for these use cases.
Syntax
perl db_control_wrapper.pl --usecase <usecase> --oracle_home <oracle_home> --oracle_sid <oracle_sid> --is_rac_database <true/false> --timeout <3600> --target_db <target_db> --target_optional_parameters <target_optional_parameters> --operation_optional_parameters <operation_optional_parameters>
Table B-1 db_control_wrapper.pl Parameters

| Parameter | Description |
|---|---|
| --usecase | One of the following: START, START_PRECHECK, STOP, STOP_PRECHECK, SWITCHOVER, SWITCHOVER_PRECHECK, FAILOVER, FAILOVER_PRECHECK, CONVERT_PHYSICAL_TO_SNAPSHOT_STANDBY, CONVERT_PHYSICAL_TO_SNAPSHOT_STANDBY_PRECHECK, REVERT_SNAPSHOT_TO_PHYSICAL_STANDBY, REVERT_SNAPSHOT_TO_PHYSICAL_STANDBY_PRECHECK |
| --oracle_home | The database ORACLE_HOME. |
| --oracle_sid | The database ORACLE_SID. |
| --is_rac_database | Set to true if the database is a RAC database; otherwise, set to false. |
| --timeout | The time, in seconds, for the database role reversal polling timeout. |
| --target_db | The target database name. |
| --target_optional_parameters | Target runtime optional parameters. |
| --operation_optional_parameters | Target operation optional parameters. |
| --help | Prints a brief help message. |
| --usage | Prints a brief usage message. |
| --man | Prints the manual page. |
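For example, a switchover precheck might be invoked as shown in the following sketch. The ORACLE_HOME path, SID, and target database name are hypothetical placeholders for illustration only, and the optional target and operation parameters are not shown:

perl db_control_wrapper.pl --usecase SWITCHOVER_PRECHECK --oracle_home /u01/app/oracle/product/12.1.0/dbhome_1 --oracle_sid orcl1 --is_rac_database true --timeout 3600 --target_db standby_db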
In previous versions of Site Guard, ZFS storage role reversal operations could not be placed at an arbitrary point in the operation plan. Although you could configure ZFS storage-related operations, you could not control where these operations were inserted in the plan; the storage role reversal operation bucket was always pre-inserted by Site Guard at a fixed point in the operation plan.
You can now configure the zfs_storage_role_reversal.sh script (previously available only as a storage script) as a generic ready-to-use script and use it at any point in the Global Pre, Global Post, Pre, or Post areas of an operation plan to perform ZFS storage-related prechecks or operations.
For more information about the use of this script, see Section 4.5.3.1, "zfs_storage_role_reversal.sh."
The zfs_analysis.sh script is a ready-to-use script that analyzes and reports the lag in a ZFS replication configuration. It prints every occurrence when the replication lag exceeded the specified threshold (the recovery point objective), together with the maximum lag during each occurrence. The script performs this analysis over the interval specified by the start_time and end_time parameters.
Oracle recommends that you use this script as a stand-alone tool for data collection and reporting in order to monitor the health of a ZFS replication configuration. You can also run this script as a Custom Precheck (and Health check) script in a traditional Site Guard operation plan, but you cannot depend on this script to trigger an operation plan failure, as you could with a traditional precheck script.
zfs_analysis.sh [--zfs_appliance <ZFS Appliance>] [--zfs_appliance_user <ZFS Appliance Username>] [--zfs_appliance_password <ZFS Appliance Password>] [--zfs_project_name <ZFS Project Name>] [--start_time <Start Time>] [--end_time <End Time>] [--objective <Replica Objective>] [--cluster_member_file <Cluster Member File>] [--objective_file <Objective File>] [--force <Force analytic start time>]

where:

--zfs_appliance          : [mandatory] ZFS appliance host
--zfs_appliance_user     : [mandatory] ZFS appliance username
--zfs_appliance_password : [mandatory] ZFS appliance password
--zfs_project_name       : [mandatory] Project name
--start_time             : [mandatory] Start date/time
--end_time               : [mandatory] End date/time
--objective              : [mandatory] Replica lag threshold
--cluster_member_file    : File that declares a common name to use for the two nodes in each clustered storage appliance
--objective_file         : File that declares replica lag thresholds for specific replication actions
--force                  : Force the analysis interval to start at the specified date/time
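For example, a stand-alone analysis run covering roughly the interval shown in the sample output later in this section might look like the following sketch. The appliance host, username, and password are hypothetical placeholders:

sh zfs_analysis.sh --zfs_appliance zfsappl01.mycompany.com --zfs_appliance_user zfsadmin --zfs_appliance_password <password> --zfs_project_name rproject01 --start_time 2015-02-12 --end_time 2015-06-10 --objective 30m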
To configure the script as a Site Guard Custom Precheck script:
1. Search for and select the entity "ZFS Lag Analysis Scripts" for the Software Library Entity field.
2. Set the Script Path as illustrated in the following example:
   sh zfs_analysis.sh --zfs_appliance zfsappl01.mycompany.com --zfs_project_name rproject01 --start_time 2015-07-07 --end_time 2015-07-08 --objective 30m
3. Select the host(s) on which to run the script.
4. Under Advanced Options, select and configure the credential for the ZFS appliance to pass as a parameter to the script.
A sample script output follows:
Action: zfsappl01sn01&zfsappl02sn02:rproject01
Replication of rproject01 from zfsappl01sn01 to zfsappl02sn02(label=zfsappl02sn-fe)
during the 10172506 second analysis interval beginning 2015-02-12 06:18:14 UTC
and ending 2015-06-10 00:00:00 UTC.
Updates are manually initiated.
Recovery Point Objective is 1800 seconds (30 minutes).
Action UUID (unique identifier) = e1b57778-5e5a-4053-c96b-f5d6e15d3292

                                        |                            | seconds
           replication update           | at completion, replica lag |  spent
________________________________________|____________________________|  above
      started      |     completed      | had grown to | then became | objective
___________________|____________________|______________|_____________|__________
2015-02-12 06:18:14 2015-02-12 06:18:24            10            10           0
2015-02-12 06:50:21 2015-02-12 06:50:30          1936             9         136
2015-02-12 06:51:45 2015-02-12 06:51:53            92             8           0
2015-02-15 21:10:59 2015-02-15 21:11:19        310774            20      308974
2015-02-15 21:19:32 2015-02-15 21:19:52           533            20           0
2015-02-16 06:17:34 2015-02-16 06:17:43         32291             9       30491
2015-02-16 06:21:36 2015-02-16 06:21:44           250             8           0
2015-02-16 06:25:12 2015-02-16 06:25:23           227            11           0
2015-02-16 06:27:18 2015-02-16 06:27:30           138            12           0
2015-02-16 06:29:23 2015-02-16 06:29:35           137            12           0
2015-02-16 06:32:07 2015-02-16 06:32:19           176            12           0
2015-02-16 06:33:27 2015-02-16 06:33:39            92            12           0
2015-02-16 06:36:07 2015-02-16 06:36:22           175            15           0
2015-02-16 06:40:17 2015-02-16 06:40:35           268            18           0
2015-02-16 07:03:11 2015-02-16 07:03:33          1396            22           0
2015-02-16 07:26:19 2015-02-16 07:26:29          1398            10           0
2015-02-16 07:28:03 2015-02-16 07:28:15           116            12           0
2015-02-17 00:50:24 2015-02-17 00:50:36         62553            12       60753
2015-02-17 00:55:57 2015-02-17 00:56:09           345            12           0
2015-02-17 01:55:01 2015-02-17 01:55:13          3556            12        1756
2015-02-17 04:25:21 2015-02-17 04:25:32          9031            11        7231
2015-02-18 10:22:19 2015-02-18 10:22:31        107830            12      106030
2015-02-18 10:23:31 2015-02-18 10:23:43            84            12           0
2015-02-23 05:02:22 2015-02-23 05:02:34        412743            12      410943
2015-02-23 07:06:26 2015-02-23 07:06:38          7456            12        5656
 at end of interval 2015-06-10 00:00:00       9219214                    9217414

Replication actions that did not satisfy their Recovery Point Objective at some
point during the 10172506 second analysis interval beginning 2015-02-12 06:18:14 UTC
and ending 2015-06-10 00:00:00 UTC.

 replication updates |    total    |      |      peak replica lag      |
_____________________|   seconds   |      |____________________________|
        |   above    |    above    |      |         |                  |
 total  | objective  |  objective  | RPO  | seconds |  date and time   | source&target:project/share
________|____________|_____________|______|_________|__________________|_____________________________
    59     48  81%      358352  4%   1800    340318  2015-06-08 17:47:55  zfsappl01sn01&zfsappl02sn02:1_WING
     3      2  67%     1493546 15%   1800    994866  2015-06-04 04:28:59  zfsappl01sn01&zfsappl02sn02:2_SG
     2      1  50%     1642394 16%   1800   1644194  2015-06-10 00:00:00  zfsappl01sn01&zfsappl02sn02:3_SG
     2      1  50%     1642180 16%   1800   1643980  2015-06-10 00:00:00  zfsappl01sn01&zfsappl02sn02:4_SG
     3      2  67%     1497621 15%   1800   1203470  2015-06-10 00:00:00  zfsappl01sn01&zfsappl02sn02:5_SG
     2      1  50%     6757712 66%   1800   6759512  2015-06-10 00:00:00  zfsappl01sn01&zfsappl02sn02:SiteGuard
    13      4  31%     9593787 94%   1800   9219201  2015-06-10 00:00:00  zfsappl01sn01&zfsappl02sn02:br_test
    26     10  38%    10149384 100%  1800   9219214  2015-06-10 00:00:00  zfsappl01sn01&zfsappl02sn02:rproject01