Cabling Several RoCE Network Fabric Racks Together using Oracle Exadata System Software Release 20.1.0 or Later

Use this procedure to add another rack to an existing multi-rack system with RoCE Network Fabric using Oracle Exadata System Software Release 20.1.0 or later.

This procedure is for systems with RoCE Network Fabric (X8M or later).

In this procedure, the existing racks are R1, R2, …, Rn, and the new rack is Rn+1. In the following steps, these example switch names are used:

  • rack5sw-roces0: Rack 5 Spine switch (R5SS)
  • rack5sw-rocea0: Rack 5 Lower Leaf switch (R5LL)
  • rack5sw-roceb0: Rack 5 Upper Leaf switch (R5UL)

Note:

Cabling three or more racks together requires no downtime for the existing racks R1, R2, …, Rn. Only the new rack, Rn+1, is powered down.

Throughout this procedure, use the cabling tables applicable to your system.

  1. Ensure the new rack is near the existing racks R1, R2, …, Rn.
    The RDMA Network Fabric cables must be able to reach the servers in each rack.
  2. Ensure you have a backup of the current switch configuration for each switch in the existing racks and the new rack.
    For each switch, complete the steps in the Oracle Exadata Database Machine Maintenance Guide, section Backing Up Settings on the RoCE Network Fabric Switch.
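
    For reference, the following is a minimal backup sketch assuming Cisco Nexus (NX-OS) based RoCE Network Fabric switches; the destination host and file names are illustrative only, and the referenced guide section remains the authoritative procedure.

    rack5sw-rocea0# copy running-config bootflash:rack5sw-rocea0-backup.cfg
    rack5sw-rocea0# copy bootflash:rack5sw-rocea0-backup.cfg scp://admin@backup-host/backups/rack5sw-rocea0-backup.cfg vrf management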
  3. Shut down all servers in the new rack Rn+1.
    Refer to Powering Off Oracle Exadata Rack. The switches must remain online and available.
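
    As an illustration only, assuming host-list files that name the new rack's database and storage servers (the file names below are hypothetical), the servers could be shut down with the dcli utility; follow Powering Off Oracle Exadata Rack for the supported procedure.

    # dcli -l root -g rack5_dbnodes.lst "shutdown -h now"
    # dcli -l root -g rack5_cells.lst "shutdown -h now"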
  4. Apply the golden configuration settings on the RoCE Network Fabric switches in the new rack Rn+1.

    Use the procedure described in Applying Golden Configuration Settings on RoCE Network Fabric Switches, in Oracle Exadata Database Machine Maintenance Guide.

  5. Perform the physical cabling of the switches in the new rack Rn+1.

    Caution:

    Cabling within a live network must be done carefully to avoid potentially serious disruptions.
    1. Remove the eight existing inter-switch connections between the two leaf switches in the new rack Rn+1 (ports 4, 5, 6, 7 and 30, 31, 32, 33 on each leaf switch).
    2. Cable the leaf switches in the new rack according to the applicable cabling table.

      For example, if you are adding a 5th rack and rack Rn+1 is R5, then use "Table 21-14 Leaf Switch Connections for the Fifth Rack in a Five-Rack System".

  6. Add the new rack to the switches in the existing racks (R1 to Rn).
    1. For an existing rack (Rx), cable the lower leaf switch RxLL according to the applicable cabling table.
    2. For the same rack, cable the upper leaf switch RxUL according to the applicable cabling table.
    3. Repeat these steps for each existing rack, R1 to Rn.
  7. Confirm each switch is available and connected.

    For each switch in racks R1, R2, …, Rn, Rn+1, confirm that the output of the switch show interface status command shows connected and 100G. In the following examples, the inter-switch ports on the leaf switches are Eth1/4 to Eth1/7 and Eth1/30 to Eth1/33, and the inter-switch ports on the spine switches are Eth1/5 to Eth1/20.

    When run from a spine switch, the output should be similar to the following:

    rack1sw-roces0# show interface status
    --------------------------------------------------------------------------------
    Port          Name               Status    Vlan      Duplex  Speed   Type
    --------------------------------------------------------------------------------
    mgmt0         --                 connected routed    full    1000    -- 
    --------------------------------------------------------------------------------
    Port          Name               Status    Vlan      Duplex  Speed   Type
    --------------------------------------------------------------------------------
    ...
    Eth1/5        RouterPort5        connected routed    full    100G    QSFP-100G-CR4
    Eth1/6        RouterPort6        connected routed    full    100G    QSFP-100G-SR4
    Eth1/7        RouterPort7        connected routed    full    100G    QSFP-100G-CR4
    Eth1/8        RouterPort8        connected routed    full    100G    QSFP-100G-SR4
    Eth1/9        RouterPort9        connected routed    full    100G    QSFP-100G-CR4
    Eth1/10       RouterPort10       connected routed    full    100G    QSFP-100G-SR4
    Eth1/11       RouterPort11       connected routed    full    100G    QSFP-100G-CR4
    Eth1/12       RouterPort12       connected routed    full    100G    QSFP-100G-SR4
    Eth1/13       RouterPort13       connected routed    full    100G    QSFP-100G-CR4
    Eth1/14       RouterPort14       connected routed    full    100G    QSFP-100G-SR4
    Eth1/15       RouterPort15       connected routed    full    100G    QSFP-100G-CR4
    Eth1/16       RouterPort16       connected routed    full    100G    QSFP-100G-SR4
    Eth1/17       RouterPort17       connected routed    full    100G    QSFP-100G-CR4
    Eth1/18       RouterPort18       connected routed    full    100G    QSFP-100G-SR4
    Eth1/19       RouterPort19       connected routed    full    100G    QSFP-100G-CR4
    Eth1/20       RouterPort20       connected routed    full    100G    QSFP-100G-SR4
    Eth1/21       RouterPort21       xcvrAbsen routed    full    100G    --
    ...

    When run from a leaf switch, the output should be similar to the following:

    rack1sw-rocea0# show interface status
    --------------------------------------------------------------------------------
    Port          Name               Status    Vlan      Duplex  Speed   Type
    --------------------------------------------------------------------------------
    mgmt0         --                 connected routed    full    1000    -- 
    --------------------------------------------------------------------------------
    Port          Name               Status    Vlan      Duplex  Speed   Type
    --------------------------------------------------------------------------------
    ...
    Eth1/4        RouterPort1        connected routed    full    100G    QSFP-100G-CR4
    Eth1/5        RouterPort2        connected routed    full    100G    QSFP-100G-CR4
    Eth1/6        RouterPort3        connected routed    full    100G    QSFP-100G-CR4
    Eth1/7        RouterPort4        connected routed    full    100G    QSFP-100G-CR4
    Eth1/8        celadm14           connected 3888      full    100G    QSFP-100G-CR4
    ...
    Eth1/29       celadm01           connected 3888      full    100G    QSFP-100G-CR4
    Eth1/30       RouterPort5        connected routed    full    100G    QSFP-100G-SR4
    Eth1/31       RouterPort6        connected routed    full    100G    QSFP-100G-SR4
    Eth1/32       RouterPort7        connected routed    full    100G    QSFP-100G-SR4
    Eth1/33       RouterPort8        connected routed    full    100G    QSFP-100G-SR4
    ...
  8. Check the neighbor discovery for every switch in racks R1, R2, …, Rn, Rn+1.
    Log in to each switch and use the show lldp neighbors command. Make sure that all switches are visible, and check the switch port assignments (leaf switches: ports Eth1/4 - Eth1/7, Eth1/30 - Eth1/33; spine switches: ports Eth1/5 - Eth1/20) against the applicable cabling tables.

    Each spine switch should see all the leaf switches in each rack, but not the other spine switches. The output for a spine switch should be similar to the following:

    Note:

    The interfaces in the rightmost output column (for example, Ethernet1/5) are different for each switch based on the applicable cabling tables.
    rack1sw-roces0# show lldp neighbors | grep roce
    rack1sw-roceb0 Eth1/5 120 BR Ethernet1/5
    rack2sw-roceb0 Eth1/6 120 BR Ethernet1/5
    rack1sw-roceb0 Eth1/7 120 BR Ethernet1/7
    rack2sw-roceb0 Eth1/8 120 BR Ethernet1/7
    rack1sw-roceb0 Eth1/9 120 BR Ethernet1/4
    rack2sw-roceb0 Eth1/10 120 BR Ethernet1/4
    rack3sw-roceb0 Eth1/11 120 BR Ethernet1/5
    rack3sw-roceb0 Eth1/12 120 BR Ethernet1/7
    rack1sw-rocea0 Eth1/13 120 BR Ethernet1/5
    rack2sw-rocea0 Eth1/14 120 BR Ethernet1/5
    rack1sw-rocea0 Eth1/15 120 BR Ethernet1/7
    rack2sw-rocea0 Eth1/16 120 BR Ethernet1/7
    rack3sw-rocea0 Eth1/17 120 BR Ethernet1/5
    rack2sw-rocea0 Eth1/18 120 BR Ethernet1/4
    rack3sw-rocea0 Eth1/19 120 BR Ethernet1/7
    rack3sw-rocea0 Eth1/20 120 BR Ethernet1/4 

    Each leaf switch should see the spine switch in every rack, but not the other leaf switches. The output for a leaf switch should be similar to the following:

    Note:

    The interfaces in the rightmost output column (for example, Ethernet1/13) are different for each switch based on the applicable cabling tables.
    rack1sw-rocea0# show lldp neighbors | grep roce
    rack3sw-roces0 Eth1/4 120 BR Ethernet1/13
    rack1sw-roces0 Eth1/5 120 BR Ethernet1/13
    rack3sw-roces0 Eth1/6 120 BR Ethernet1/15
    rack1sw-roces0 Eth1/7 120 BR Ethernet1/15
    rack2sw-roces0 Eth1/30 120 BR Ethernet1/17
    rack2sw-roces0 Eth1/31 120 BR Ethernet1/13
    rack3sw-roces0 Eth1/32 120 BR Ethernet1/17
    rack2sw-roces0 Eth1/33 120 BR Ethernet1/15
  9. Power on all the servers in the new rack, Rn+1.
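
    For illustration, assuming the ILOM host names below are placeholders for the servers in the new rack, each server can be powered on remotely through its ILOM using ipmitool, or by pressing the power button on each server.

    # ipmitool -I lanplus -H rack5celadm01-ilom -U root -P <ilom-password> chassis power on
    # ipmitool -I lanplus -H rack5dbadm01-ilom -U root -P <ilom-password> chassis power on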
  10. For each rack, confirm the multi-rack cabling by running the verify_roce_cables.py script.

    Refer to My Oracle Support Doc ID 2587717.1 for download and usage instructions.

    Check the output of the verify_roce_cables.py script against the applicable cabling tables. Also, check that the CABLE OK? columns show the OK status.

    The script uses two input files, one for nodes and one for switches. In each file, list the servers or switches on separate lines, using fully qualified domain names or IP addresses for each server and switch.
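
    For example, the nodes and switches input files for rack R1 might look like the following; the host names are illustrative and must match your environment, and the exact required contents are described in My Oracle Support Doc ID 2587717.1.

    # cat nodes.rack1
    rack1dbadm01.example.com
    rack1dbadm02.example.com
    rack1celadm01.example.com
    rack1celadm02.example.com
    ...
    # cat switches.rack1
    rack1sw-rocea0.example.com
    rack1sw-roceb0.example.com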

    The following output is a partial example of the command results:

    # ./verify_roce_cables.py -n nodes.rack1 -s switches.rack1
    SWITCH PORT (EXPECTED PEER)  LEAF-1 (rack1sw-rocea0)     : CABLE OK?  LEAF-2 (rack1sw-roceb0)    : CABLE OK?
    ----------- --------------   --------------------------- : --------   -----------------------    : ---------
    Eth1/4 (ISL peer switch)   : rack1sw-roces0 Ethernet1/17 : OK         rack1sw-roces0 Ethernet1/9 : OK
    Eth1/5 (ISL peer switch)   : rack1sw-roces0 Ethernet1/13 : OK         rack1sw-roces0 Ethernet1/5 : OK
    Eth1/6 (ISL peer switch)   : rack1sw-roces0 Ethernet1/19 : OK         rack1sw-roces0 Ethernet1/11: OK
    Eth1/7 (ISL peer switch)   : rack1sw-roces0 Ethernet1/15 : OK         rack1sw-roces0 Ethernet1/7 : OK
    Eth1/12 (celadm10)         : rack1celadm10 port-1        : OK         rack1celadm10 port-2       : OK
    Eth1/13 (celadm09)         : rack1celadm09 port-1        : OK         rack1celadm09 port-2       : OK
    Eth1/14 (celadm08)         : rack1celadm08 port-1        : OK         rack1celadm08 port-2       : OK
    ...
    Eth1/15 (adm08)            : rack1dbadm08 port-1         : OK         rack1dbadm08 port-2        : OK
    Eth1/16 (adm07)            : rack1dbadm07 port-1         : OK         rack1dbadm07 port-2        : OK
    Eth1/17 (adm06)            : rack1dbadm06 port-1         : OK         rack1dbadm06 port-2        : OK
    ...
    Eth1/30 (ISL peer switch)  : rack2sw-roces0 Ethernet1/17 : OK         rack2sw-roces0 Ethernet1/9 : OK
    Eth1/31 (ISL peer switch)  : rack2sw-roces0 Ethernet1/13 : OK         rack2sw-roces0 Ethernet1/5 : OK
    Eth1/32 (ISL peer switch)  : rack2sw-roces0 Ethernet1/19 : OK         rack2sw-roces0 Ethernet1/11: OK
    Eth1/33 (ISL peer switch)  : rack2sw-roces0 Ethernet1/15 : OK         rack2sw-roces0 Ethernet1/7 : OK
    
  11. Verify the RoCE Network Fabric operation by using the infinicheck command.

    Use the following recommended command sequence. In each command, hosts.lst is the name of an input file that contains a comma-delimited list of database server host names or RoCE Network Fabric IP addresses, and cells.lst is the name of an input file that contains a list of RoCE Network Fabric IP addresses for the storage servers.

    • Use infinicheck with the -z option to clear the files that were created during the last run of the infinicheck command. For example:

      # /opt/oracle.SupportTools/ibdiagtools/infinicheck -g hosts.lst -c cells.lst -z
    • Use infinicheck with the -s option to set up user equivalence for password-less SSH across the RoCE Network Fabric. For example:

      # /opt/oracle.SupportTools/ibdiagtools/infinicheck -g hosts.lst -c cells.lst -s
    • Finally, verify the RoCE Network Fabric operation by using infinicheck with the -b option, which is recommended on newly imaged machines where it is acceptable to suppress the cellip.ora and cellinit.ora configuration checks. For example:

      # /opt/oracle.SupportTools/ibdiagtools/infinicheck -g hosts.lst -c cells.lst -b
      
      INFINICHECK                    
              [Network Connectivity, Configuration and Performance]        
                     
                ####  FABRIC TYPE TESTS  #### 
      System type identified: RoCE
      Verifying User Equivalance of user=root from all DBs to all CELLs.
           ####  RoCE CONFIGURATION TESTS  ####       
           Checking for presence of RoCE devices on all DBs and CELLs 
      [SUCCESS].... RoCE devices on all DBs and CELLs look good
           Checking for RoCE Policy Routing settings on all DBs and CELLs 
      [SUCCESS].... RoCE Policy Routing settings look good
           Checking for RoCE DSCP ToS mapping on all DBs and CELLs 
      [SUCCESS].... RoCE DSCP ToS settings look good
           Checking for RoCE PFC settings and DSCP mapping on all DBs and CELLs
      [SUCCESS].... RoCE PFC and DSCP settings look good
           Checking for RoCE interface MTU settings. Expected value : 2300
      [SUCCESS].... RoCE interface MTU settings look good
           Verifying switch advertised DSCP on all DBs and CELLs ports ( )
      [SUCCESS].... Advertised DSCP settings from RoCE switch looks good  
          ####  CONNECTIVITY TESTS  ####
          [COMPUTE NODES -> STORAGE CELLS] 
            (60 seconds approx.)       
          (Will walk through QoS values: 0-6) [SUCCESS]..........Results OK
      [SUCCESS]....... All  can talk to all storage cells          
          [COMPUTE NODES -> COMPUTE NODES]               
      ...