3 Configuring the Microsoft Cluster (Performed by the Customer)

Configuring the Microsoft Cluster is the responsibility of the customer. The installation and configuration of the cluster must be completed before the Oracle Delivery and Installation Team arrives at your location.

The following subsections describe what you need to do to prepare for the Oracle team's arrival. Complete the steps in each of these sections to configure the Microsoft Cluster for use with DIVArchive.

Configuring the External Disk

The following subsections are generic. Because disk and array management software differs between manufacturers, your installation process and configuration may differ slightly from the instructions presented here; however, the overall concept and configuration are the same.

First you will install the disk management software.

Installing the Disk Management Software

Perform the following steps on each cluster node server:

  1. Log on as a local administrator.

  2. Insert the manufacturer's installation DVD. If the installer does not start automatically, locate and double-click the setup.exe file (or whichever file is used) to launch the installer.

  3. Proceed through the storage software installation wizard; accept the license agreement and click Next.

  4. If prompted for the features you want to install, select the full feature set and click Next. This is typically the choice recommended by the manufacturer. Be sure to install the following if offered:

    • Management Consoles

    • Host Software

    • Volume Shadow-Copy Services

    • Virtual Disk Services

    • Event Monitoring Service (start automatically on one host only)

  5. Select the installation location and click Next. Oracle recommends leaving the default installation path unless there is a compelling reason to change it.

  6. When the installation process is complete, exit the installation program and restart the computer.

  7. Log into the computer as a local administrator.

  8. Open the Windows Computer Management utility and select Device Manager in the navigation tree on the left side of the screen.

  9. Confirm that the Multipath I/O (MPIO) driver was installed. MPIO is required during the cluster building operation and should have been installed with the cluster feature. (A PowerShell cross-check is sketched after this list.)

  10. Expand the Disk drives section of Device Manager and confirm that a multipath disk device is present for each of your drives.
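If you want to confirm multipath support from the command line as well, the following PowerShell sketch checks that the Multipath I/O feature is present and lists the disks Windows currently sees. It is a cross-check under the assumption that you are running Windows Server 2012 R2 with the storage software already installed; it does not replace the Device Manager check above.

    # Confirm the Multipath I/O feature is installed (Install State should read Installed).
    Get-WindowsFeature -Name Multipath-IO

    # List the disks Windows sees; each multipathed LUN should appear once,
    # not once per path, when MPIO is claiming the devices correctly.
    Get-Disk | Format-Table Number, FriendlyName, Size, OperationalStatus

    # Show the MPIO disk summary (mpclaim is available once the MPIO feature is installed).
    mpclaim.exe -s -d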

Next you will configure the storage you just added to the system.

Configuring Storage

Perform the following procedure on a single cluster node server only:

  1. Log on to one of the node servers as a local administrator.

  2. Launch the Disk Storage Manager that was installed with your storage software.

  3. If the storage manager software has an option to automatically detect arrays, Oracle recommends using this method. Select the automatic detection method option (if available) and click OK.

    • If automatic detection is not available, or if the array is not detected, add the array manually.

    • You will need the IP address, DNS Name, or Network Name if the array is outside the local subnetwork.

  4. Once the array is discovered (or manually added), right-click the array name and click Manage Storage Array.

  5. Locate the Host Mappings configuration area in your storage manager software and click Define Host. This is where you will add Cluster Hosts and Host Groups.

  6. Now you need to define the Cluster Hosts. Most storage manager software uses a wizard-style interface for this task.

    1. Enter the Host Name (in this case rd-mc1).

    2. Tell the wizard whether you plan to use storage partitions on the array (answer no to this question).

    3. Click Next.

    4. Assign the Host Port Identifier by selecting (or creating) an identifier, giving it an alias (or user label), and then adding it to the list to be associated with the host (in this case rd-mc1).

      If you need to identify the HBA Port Address, open Windows PowerShell as an administrator and run the Get-InitiatorPort command (a sample is shown after this procedure).

    5. Click Add to complete the association and then click Next.

  7. Now you identify the host's operating system (in this case Windows).

  8. Click Next.

  9. This completes the configuration - click Finish.

    Some manager software will allow you to save the host definition as a script. Saving the definition as a script enables using the script as a template for adding additional hosts (if or when necessary).

  10. If you are asked to add another host, click Yes and repeat the steps above to add the second Cluster Host (in this case rd-mc2).
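As referenced in the Host Port Identifier step above, the HBA port addresses can be listed from an elevated Windows PowerShell prompt. A minimal sample:

    # List the host bus adapter initiator ports; the PortAddress (WWPN) is the
    # identifier you associate with each Cluster Host (rd-mc1 or rd-mc2).
    Get-InitiatorPort | Format-Table NodeAddress, PortAddress, ConnectionType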

When all Cluster Hosts are identified and configured, use the following procedure to add the Host Group:

  1. Locate the Host Mappings configuration area in your storage manager software and click Define Host Group. This is the same area where you defined the Cluster Hosts; now you will define the Host Group.

  2. Enter the new Host Group Name (DIVA).

  3. Add the Cluster Hosts to the new group.

  4. Click OK.

Next you will add a Disk Group using the following procedure:

  1. Locate the Storage Configuration area in your storage manager software.

  2. Select the Total Unconfigured Capacity object from the Computer Objects list.

  3. Select Disk Group and then click Create.

  4. A message will indicate the total unconfigured capacity - click Next.

  5. Enter the Disk Group Name (DIVA-CL-DISK-GRP).

  6. You must add the physical disks to the Disk Group. Select the automatic detection method option (if available) and click OK.

    • Oracle recommends using the storage manager software's Automatically detect physical disks option if available.

    • If automatic detection is not available, or if the disks are not detected, you must add the disks manually.

    • Automatic detection typically adds all available disk space to the group. If you do not need all of the storage space available for the Oracle Database, you can use the manual method to assign only the amount of space necessary.

  7. Click Next.

  8. Select RAID 5 when presented with the RAID Level and Capacity screen.

  9. Select the number of physical disks to be part of the Disk Group.

    • Leave some unused space to be used as spare disks.

    • Typically four disks are selected for the group - this leaves two disks as spares.

  10. Click Finish.

Next you will create virtual disks. In most disk management software, you will be asked to create a virtual disk once you complete Step 10.

  1. If presented with the option to create a virtual disk, click Yes. If the option is not given to you, locate where to create a virtual disk in your particular management software and follow the steps below.

  2. Assign 30 GB of the free capacity, name the virtual disk U02, and choose Host Group DIVA (under Map to Host), and then click Next.

    Five partitions are required for the Oracle Database, Logs, MetaDB (if used), Backup, and Cluster Quorum as follows:

    • U02, 30 GB, E:\ - for the Oracle Database (8 KB allocation size recommended).

    • U03, 5 GB (exactly), F:\ - for the Oracle Archive Logs (4 KB allocation size recommended).

    • MetaDB, size calculated from the size of complex objects (typically several terabytes, 150 GB minimum), G:\ - for the Metadata Database for Complex Objects.

    • U04, 100 GB, H:\ - for the Oracle Database backup location (64 KB allocation size recommended).

    • Quorum, 100 MB, Q:\ - for the Cluster Quorum Witness.

  3. If you are prompted with an option to create another virtual disk, click Yes. If not prompted, then repeat Step 1 and Step 2 until all required partitions are created (U02, U03, MetaDB, U04, and Quorum).

  4. In your management software, confirm that all partitions have been added to the Host Group and the database.

You will now configure Windows to use the Virtual disks you just created.

Configuring Windows for Virtual Disk Use

Now that you have created the virtual disks you must configure Windows to use them through the Windows Disk Management Console. You can also check for the virtual volumes you created using the Windows Computer Management utility. Use the following procedure to configure the disks for use in Windows:

  1. If you are not still logged in, log on as a local administrator to the host computer where you created the virtual disks.

  2. Click Start and enter diskmgmt.msc in the search area and press Enter to start the Disk Management Console.

  3. Confirm that all five disks are present in the console. The physical disks will currently show as Unknown and Offline, but they should all be listed.

  4. While leaving the Disk Management Console open, open the Windows Computer Management utility and check that the virtual volumes you created are listed.

    If they are not listed, return to the previous section, review your virtual disk creation for errors, and make any necessary corrections. Contact Oracle Support if you require additional assistance.

  5. Once you confirm the presence of the virtual disks, close the Windows Computer Management utility and return to the Disk Management Console.

  6. For each Cluster Disk listed in the Disk Management Console that displays an Unknown and Offline status, right-click in the disk name area (on the left side of the screen) and select Online from the resulting menu.

    This will bring the disk to an Online state. The disk will still show as Unknown, but it will now display Not Initialized instead of Offline.

  7. Right-click one of the (now) Online disk names (on the left side of the screen) and click Initialize Disk from the resulting context menu.

  8. Select each of the disks you just created from the list in the dialog box that is displayed.

  9. Click the MBR (Master Boot Record) option for disks up to 2 TB. Click the GPT option if the disk is larger than 2 TB.

  10. Click OK to initialize the selected disks.

Now that all disks are initialized, you must create volumes from the unallocated space.

  1. Select the new U02 disk and right-click the striped area showing the partition size and Unallocated.

  2. Select New Simple Volume from the resulting menu.

  3. When the New Simple Volume Wizard opens, click Next.

  4. On the second page of the wizard leave the default size and click Next.

  5. On the third page assign an unused drive letter to the volume and click Next.

  6. On the fourth page select the Format this volume with the following settings option.

    • Select NTFS for the File system.

    • Use the (pre-filled) Recommended allocation unit size for the MetaDB, U04, and Quorum partitions. For U02 and U03, you must change the allocation unit size to 64 K; otherwise, database performance may be impacted.

    • Enter the Volume label (for the first disk U02, the second disk U03, and so on).

    • Select the Perform a quick format check box.

  7. Click Next to format the partition with the selected settings.

  8. Click Finish when the final page appears.

  9. Repeat all of these steps for each remaining partition, using the appropriate volume label for each. (A PowerShell equivalent of the initialization and formatting steps is sketched below.)
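If you prefer to script the disk preparation, the following PowerShell sketch shows cmdlets equivalent to the initialization and formatting steps above for a single disk. The disk number, drive letter, volume label, and allocation unit size are placeholders for one example partition (U02) and must be adjusted for each partition according to the mapping that follows; treat it as an illustration, not a substitute for verifying each disk in the console.

    # Bring the disk online and clear the read-only flag (repeat per cluster disk).
    Set-Disk -Number 1 -IsOffline $false
    Set-Disk -Number 1 -IsReadOnly $false

    # Initialize with MBR (use GPT instead for disks larger than 2 TB).
    Initialize-Disk -Number 1 -PartitionStyle MBR

    # Create a single volume using all available space, assign a drive letter,
    # and quick-format it as NTFS. U02 and U03 use a 64 KB allocation unit size.
    New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter E |
        Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "U02" -Confirm:$false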

The disk partitions should now be mapped as follows:

  • U02, drive letter E:\, minimum size 30 GB - for the database files.

  • U03, drive letter F:\, exact size 5 GB - for the archive logs.

  • MetaDB, drive letter G:\, size calculated from the size of complex objects (typically several terabytes, 150 GB minimum) - for complex objects.

  • U04, drive letter H:\, minimum size 100 GB - for the database backups.

  • Quorum, drive letter Q:\, minimum size 100 MB - for the Quorum Witness.

Next you will configure the second node:

  1. Log on to the second node as a local administrator.

  2. Click Start and enter diskmgmt.msc in the search area, and then press Enter to start the Disk Management Console.

  3. Check that the virtual disks are present as you did for the first node.

  4. Check the drive letters of the disks and change them to match the first node's drive letters if necessary.

  5. Open Windows Explorer and confirm that the drives have been created. Update the drive letters according to the previous partition mappings if necessary (on both nodes).
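The drive letters can also be cross-checked from PowerShell on either node. A small sketch (the letters used below are hypothetical examples; substitute the letters actually shown on your system):

    # List volumes with their labels to compare against the first node.
    Get-Volume | Format-Table DriveLetter, FileSystemLabel, Size

    # Example: if the U03 volume came up as D: instead of F:, reassign it.
    Set-Partition -DriveLetter D -NewDriveLetter F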

Next you will configure the operating system.

Configuring the Operating System

Now that all disks have been created and configured, you need to configure the operating system on both Cluster Node Servers. First, you will join both server nodes to a single, common domain.

Joining the Two Server Nodes to a Common Domain

The steps below must be completed on both Cluster Node Servers. Use the following procedure to join the two nodes to a common domain:

  1. Log on to the first node as a local administrator.

  2. Click Start, enter sysdm.cpl in the search area, and press Enter. This opens the System Properties dialog box.

  3. On the System Properties screen, click the Computer Name tab and click Change.

  4. On the Computer Name/Domain Changes screen, check the Computer Name and correct if necessary.

    Tip:

    Oracle recommends using a permanent computer name that is less likely to require changing later. The computer names can be changed in the future if absolutely necessary; however, this is not recommended and may adversely affect the database and cluster.

    Note:

    Do not use a server name that starts with a dash or a number, or that contains wildcard characters.
  5. On the Computer Name/Domain Changes screen, click the Domain option and enter a valid domain name in the Domain field.

  6. Click OK.

  7. When prompted for confirmation, provide the credentials of a dedicated domain user, click OK, and restart the computer.

  8. Repeat all of these steps for the second node.
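The domain join can also be performed from an elevated PowerShell prompt. This is a sketch only; the computer name rd-mc1 comes from the examples above, while the domain name example.com is a placeholder for your own domain:

    # Join the domain using a dedicated account; you are prompted for its password,
    # and the node restarts when the join completes. Include -NewName only if the
    # computer name also needs correcting; otherwise omit it.
    Add-Computer -DomainName "example.com" -NewName "rd-mc1" -Credential (Get-Credential) -Restart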

Next you will add the DIVAClusterAdmin domain account to the local Administrators group.

Adding the DIVAClusterAdmin Domain Account to the Local Administrators Group

The steps below must be completed on both Cluster Node Servers. Use the following procedure to add the DIVAClusterAdmin account to the local Administrators group:

  1. Log on to the first node server as a local administrator.

  2. Click Start, enter lusrmgr.msc in the search area, and press Enter. This opens the User Management Console.

  3. Click Groups from the left navigation tree.

  4. Select the Administrators group and open the Properties dialog box.

  5. Near the bottom on the left side of the screen click Add.

  6. Add the DIVAClusterAdmin account from the Cluster Domain (for example, QALAB) to the local Administrators group in the form cluster_domain\cluster_domain_account.

    For example: QALAB\DIVAClusterAdmin

  7. Click OK.

  8. Repeat all of these steps for the second node.
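The same membership change can be made from an elevated PowerShell or command prompt; a minimal sketch using the built-in net command and the QALAB example domain from above:

    # Add the cluster administrator account to the local Administrators group.
    net localgroup Administrators QALAB\DIVAClusterAdmin /add

    # Confirm the membership.
    net localgroup Administrators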

Now that the Cluster Administrator has been added to both nodes you must configure the MSCS Cluster.

Configuring the Microsoft Cluster Server (MSCS) Cluster

The following procedures for configuring the MSCS cluster must be completed on both node servers.

Installing the Windows Server 2012 R2 Standard Clustering Feature

Use the following procedure to install the clustering feature on each node:

  1. Log on to the first node server as a dedicated cluster domain account user (DIVAClusterAdmin).

  2. Open the Server Manager Console and, using the menu on the top right side of the screen, navigate to Manage, and then Add Roles and Features.

  3. When the Add Roles and Features Wizard opens click Next.

  4. Select the Role-based or feature-based installation option.

  5. Click Next.

  6. Select the Select a server from the server pool option.

  7. In the Server Pool listing area, select the server to use and click Next to connect to the local server.

  8. Do not select anything on the Server Roles screen - just click Next.

    This screen is only for installing Server Roles.

  9. On the Features screen, select the Failover Clustering check box.

  10. Click Next. A dialog box will open asking to add the required features for failover clustering.

  11. In the dialog box, select the Include management tools (if applicable) check box if not already selected.

  12. Click Add Features.

  13. You will be returned to the Features screen. Click Next.

  14. On the Confirmation screen check that the options you selected in the steps above are present.

  15. Deselect the Restart the destination server automatically if required check box if it is selected.

  16. Click Install.

  17. When the installation is complete, click Close.

  18. Repeat all of these steps for the second node.
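The clustering feature can also be installed from an elevated PowerShell prompt on each node; a sketch assuming Windows Server 2012 R2:

    # Install the Failover Clustering feature together with its management tools.
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

    # Verify the install state afterward.
    Get-WindowsFeature -Name Failover-Clustering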

Next you will enable the remote registry service on both node servers.

Enabling the Remote Registry Service

Use the following procedure to enable the remote registry service on each node:

  1. Log on to the first node server as a dedicated cluster domain account user (DIVAClusterAdmin).

  2. Click Start, enter services.msc in the search area, and press Enter. This opens the Windows Computer Management utility on the Services tab.

  3. Double-click the Remote Registry Service to open the Properties dialog box.

  4. If the service is disabled, enable it by changing the Startup type.

  5. Set the Startup type to Automatic so the service starts automatically in the future.

  6. Click Start to start the service now.

  7. Click OK.

  8. Repeat all of these steps for the second node.
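The equivalent change can be made from an elevated PowerShell prompt; a sketch (RemoteRegistry is the built-in Windows service name):

    # Set the Remote Registry service to start automatically, then start it now.
    Set-Service -Name RemoteRegistry -StartupType Automatic
    Start-Service -Name RemoteRegistry

    # Confirm it is running.
    Get-Service -Name RemoteRegistry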

Next you will register the host names with the DNS Manager.

Registering the Required Host Names to the DNS Manager

You, or your DNS Administrator, must add the entries for the Cluster Hostname and the DIVA Group Name to the DNS as follows (respectively):

  • DIVA-CL-MSCS

  • DIVA-CL-ORC

Oracle recommends also adding each Cluster Host Server public IP address. Use the following procedure to register the host names and IP addresses in the DNS Manager:

  1. Open the Server Manager.

  2. Select Tools, then DNS from the menu on the top right side of the screen.

  3. Right-click the DNS Zone and select New Host from the resulting menu.

  4. Add the host name (DIVA-CL-MSCS) and IP address in the appropriate fields.

  5. Select the Create associated pointer (PTR) record check box (if it is not already selected).

  6. Click Add Host.

  7. Right-click the DNS Zone again and select New Host from the resulting menu.

  8. Add the DIVA Oracle Group Name (DIVA-CL-ORC) and IP address in the appropriate fields.

  9. Select the Create associated pointer (PTR) record check box (if it is not already selected).

  10. Click Add Host.
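If the zone is hosted on a Windows DNS server, your DNS Administrator can add the same records from PowerShell. A sketch only; the zone name example.com and the 192.0.2.x addresses are placeholders for your actual zone and IP addresses:

    # Add the cluster host name and the DIVA Oracle group name, each with a PTR record.
    Add-DnsServerResourceRecordA -ZoneName "example.com" -Name "DIVA-CL-MSCS" -IPv4Address "192.0.2.10" -CreatePtr
    Add-DnsServerResourceRecordA -ZoneName "example.com" -Name "DIVA-CL-ORC" -IPv4Address "192.0.2.11" -CreatePtr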

The following steps must be completed on each node server.

  1. Log on to the first node server as a local administrator.

  2. Open the Windows Network and Sharing Center.

  3. Click Change Adapter Settings in the left menu.

  4. Locate the network adapter card for the Private network connection and right-click the icon.

    The private network is the cluster's heartbeat network only and should not be registered in the DNS.

  5. Select Properties from the resulting menu.

  6. Double-click Internet Protocol Version 4 (TCP/IPv4) in the protocols area.

  7. In the displayed dialog box, click Advanced on the bottom right side of the screen.

  8. Select the DNS tab in the Advanced TCP/IP Settings dialog box.

  9. Deselect the Register this connection's addresses in DNS check box.

    The DIVArchive Prerequisites Package disables the DNS Client Service by default. To conform to Microsoft best practices, you must start the service and set it to automatically start in the future (after the DIVArchive Prerequisite Package is installed).

  10. Click Start, enter services.msc in the search area, and press Enter. This opens the Windows Computer Management utility on the Services tab.

  11. Double-click the DNS Client service to open the Properties dialog box.

  12. If the service is disabled, enable it by changing the Startup type.

  13. Set the Startup type to Automatic so the service starts automatically in the future.

  14. Click Start to start the service now.

  15. Click OK.

  16. Repeat all of these steps for the second node.
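Both of these changes (turning off DNS registration on the private adapter and enabling the DNS Client service) can also be scripted; a sketch that assumes the private adapter's connection name is Private:

    # Stop the private (heartbeat) adapter from registering its address in DNS.
    Set-DnsClient -InterfaceAlias "Private" -RegisterThisConnectionsAddress $false

    # Enable and start the DNS Client service (service name Dnscache).
    Set-Service -Name Dnscache -StartupType Automatic
    Start-Service -Name Dnscache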

Next you will create the Windows Server 2012 R2 Cluster.

Creating the Windows Server 2012 R2 Cluster

The following procedure should be completed on one cluster node only.

  1. Log on to the first node server as a dedicated cluster domain account user (DIVAClusterAdmin).

  2. Select Start, Administrative Tools, and then Failover Cluster Management Console.

  3. In the Management area (in the middle of the screen), click Create a Cluster. This will start the Create a Cluster Wizard.

  4. When the wizard opens, click Next.

  5. Enter the Fully Qualified Domain Name (FQDN) of the first Cluster Node Server in the Enter server name field and click Add.

  6. Enter the Fully Qualified Domain Name (FQDN) of the second Cluster Node Server in the Enter server name field and click Add.

  7. Click Next.

  8. When the Validation Warning dialog box is displayed, leave the default (Yes) selected to run the validation tests and click Next.

  9. When the first screen of the Validate Configuration Wizard is displayed click Next.

    Note:

    You must be a local administrator on each of the servers that you are validating.
  10. On the Testing Options screen, select the Run all tests (recommended) option. This is the default selection.

  11. Click Next.

  12. On the Confirmation screen, click Next.

  13. Monitor the validation tests and wait for them to complete. The Summary screen will be displayed when testing is done.

  14. If warnings or exceptions are noted in the summary, click View Report to see the details.

  15. Resolve any issues and rerun the Validate Configuration Wizard if configuration changes were made.

    Note:

    Disable unused network adapter cards to prevent minor warnings. Some network adapter cards may have IP addresses on the same subnet. If they are not operational, this may not be an issue.
  16. Continue rerunning the Validate Configuration Wizard and resolving any errors until all tests complete successfully.

  17. When all tests complete successfully, select the Create the cluster now using the validated nodes check box, and then click Finish to create the Cluster.

    When the Validate Configuration Wizard closes, you will be returned to the Create Cluster Wizard to continue with the configuration.

  18. Click Next to advance to the Access Point for Administering the Cluster screen.

  19. Enter the cluster name (DIVA-CL-MSCS) in the Cluster Name field.

  20. Enter the Cluster IP address in the Address field.

  21. Click Next.

  22. On the Confirmation screen, verify that all entered information is correct.

  23. Select the Add all eligible storage to the cluster check box.

  24. Click Next to create the cluster.

  25. When the cluster creation is complete, verify that all configurations were successful by clicking View Report.

  26. When you have confirmed that the configuration was successful, click Finish.

    Next you must configure the Cluster Quorum Storage.

  27. In the Failover Cluster Management Console, expand the navigation tree on the left side of the screen so you can see the cluster.

  28. Expand the Storage menu item and select Disks.

  29. In the middle of the screen, you should be able to see drives E:, F:, G: and H:.

  30. Select the main cluster item in the navigation tree on the left side of the screen.

  31. On the right side of the screen (under Actions), click More Actions, and then Configure Cluster Quorum Settings. This will start the Cluster Quorum Wizard.

  32. Select the Select quorum witness option.

  33. Click Next.

  34. In the displayed list of Cluster Disks, select the check box for the 100 MB dedicated Quorum Disk. You can identify the Quorum Disk either by the Location (it will show Available Storage), or by expanding the entry using the plus sign and confirming that it is a 100 MB disk.

  35. Click Next.

  36. Verify that all selections are correct on the Confirmation screen and click Next.

  37. When the configuration is complete, click View Report and verify that all configurations were successful.

  38. When you have confirmed that the configuration was successful, click Finish.
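For reference, the cluster creation and quorum configuration steps above map to the FailoverClusters PowerShell cmdlets shown below. This is a sketch only; the node FQDNs, the static cluster IP address, and the witness disk name (Cluster Disk 5) are placeholders that must match your environment and the actual 100 MB quorum disk:

    # Create the cluster from the two validated nodes with the cluster name and IP address.
    New-Cluster -Name "DIVA-CL-MSCS" -Node "rd-mc1.example.com","rd-mc2.example.com" -StaticAddress "192.0.2.20"

    # List the physical disk resources to identify the 100 MB quorum disk,
    # then assign it as the disk witness.
    Get-ClusterResource | Where-Object { $_.ResourceType -like "Physical Disk" }
    Set-ClusterQuorum -DiskWitness "Cluster Disk 5"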

Next you will validate the node configurations.

Validating the Node Configurations for MSCS Clustering

The following steps are to be completed on one cluster node only.

  1. Log on to the first node server as a dedicated cluster domain account user (DIVAClusterAdmin).

  2. Click Start, Administrative Tools, and then Failover Cluster Management Console.

  3. Select the cluster name in the navigation tree on the left side of the screen.

  4. Click Validate Cluster on the right side of the screen (under Actions).

    You run the Validate Configuration Wizard again to confirm that there are no errors in your configuration.

  5. When the first screen of the Validate Configuration Wizard is displayed, click Next.

  6. On the Testing Options screen, select the Run all tests (recommended) option. This is the default selection.

  7. Click Next.

  8. Click Next on the Confirmation screen.

  9. Monitor the validation tests and wait for them to complete. The Summary screen will be displayed when testing is done.

  10. If warnings or exceptions are noted in the summary, click View Report to see the details.

  11. Resolve any errors and rerun the tests until all tests complete successfully.

  12. Click Finish to exit the wizard when all tests complete successfully.
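The same validation can be run from PowerShell; a sketch with the node names as placeholders:

    # Run the full cluster validation test suite against both nodes and review
    # the generated report for warnings or errors.
    Test-Cluster -Node "rd-mc1.example.com","rd-mc2.example.com"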

Now that the cluster has been set up and configured you will test the configuration.

Testing the Configuration

Now that the installation and configuration are complete, you need to test everything to verify proper operation before going into live production. First you will perform a manual failover test.

Performing a Manual Cluster Failover Test from the Failover Cluster Manager

Use the following procedure to test manual failover configuration and operation:

  1. If the cluster that you want to configure is not displayed in the navigation tree on the left side of the Failover Cluster Manager, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the desired cluster.

  2. Expand the cluster in the navigation tree on the left side of the screen.

  3. Expand Roles and click the role name to test for failover.

  4. On the right side of the screen (under Actions) click Move, and then Select Node.

    The status is displayed under Results in the center of the screen as the service and application move.

  5. You can repeat Step 4 to move the service or application to an additional node or back to the original node.
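The same manual failover can also be driven from PowerShell; a sketch in which the role name (DIVA-CL-ORC, from the examples above) and node name are placeholders for your own configuration:

    # List the clustered roles and their current owner nodes.
    Get-ClusterGroup

    # Move a role to the other node, then verify its new owner.
    Move-ClusterGroup -Name "DIVA-CL-ORC" -Node "rd-mc2"
    Get-ClusterGroup -Name "DIVA-CL-ORC"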

Next you will do a restart failover test on the active node.

Performing a Cluster Failover Test by Restarting the Active Cluster Node

Use the following procedure to perform a restart failover test on the active node:

  1. Connect to the DIVArchive Control GUI using the virtual IP address (DIVA-CL-ORC) and confirm normal DIVArchive operation.

  2. Disconnect the Public Network cable from the Active Cluster Node.

  3. Confirm that the services move and start operation on the second Cluster Node.

  4. Connect to the DIVArchive Control GUI using the virtual IP address (DIVA-CL-ORC) and confirm normal DIVArchive operation.

  5. Reconnect the Public Network cable to the Active Cluster Node.

Next you will test moving a configured role to another Cluster Node.

Moving a Configured Role to Another Cluster Node

Use the following procedure to move a configured role to another Cluster Node:

  1. Open the Failover Cluster Manager (if not already open).

  2. Expand the cluster in the navigation tree on the left side of the screen.

  3. Select Roles.

  4. Right-click the role to failover in the Roles area in the center of the screen.

  5. Click Move, and then Select Node from the resulting menu.

  6. In the Move Cluster Role dialog box, select the Cluster Node where you want to move the role.

  7. Click OK.

    The Role will now move to the selected Cluster Node.

  8. Verify the Owner Node in the Roles area in the center of the screen - it should now be the selected node.

If all tests have completed successfully, you are ready to place the system into live production.