
Managing Network Datalinks in Oracle® Solaris 11.3


Updated: December 2017
 
 

Use Case: Configuring a Link Aggregation

This end-to-end use case shows how to perform the following actions:

  • Create a DLMP aggregation.

  • Add links to the aggregation.

  • Configure an IP interface over the aggregation.

  • Configure a VNIC over the aggregation.

  • Configure probe-based failure detection for the aggregation.

  • Configure the target IP address in the routing table.

  • Monitor the ICMP and transitive probes.

  1. Become an administrator.

    For more information, see Using Your Assigned Administrative Rights in Securing Users and Processes in Oracle Solaris 11.3.

  2. Display datalink information to identify the datalinks for aggregation.

    # dladm show-link
    LINK      CLASS     MTU     STATE   OVER
    net0      phys      1500    up      --       
    net1      phys      1500    up      --      
    net2      phys      1500    up      --
  3. Ensure that the datalinks that you want to aggregate do not have IP interfaces configured over them. If an interface is configured on any of the links, delete it.

    # ipadm show-if
    IFNAME       CLASS        STATE     ACTIVE     OVER
    lo0          loopback     ok        yes        --
    net0         ip           ok        no         --
    # ipadm delete-ip net0
  4. Create a DLMP aggregation with the links net0 and net1.

    # dladm create-aggr -m dlmp -l net0 -l net1 aggr1
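
    Before adding more links, you can optionally confirm that the aggregation was created and that its ports are up. A minimal check, assuming the aggregation name used in this use case (exact output columns can vary by Oracle Solaris release):

```shell
# Verify that aggr1 exists and show extended per-port information,
# including the state of each underlying datalink.
dladm show-aggr -x aggr1
```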
  5. Add another link, net2, to the aggregation.

    # dladm add-aggr -l net2 aggr1

    Reconfigure the switch to accommodate the new links if the existing switch configuration requires it. See the switch manufacturer's documentation.

  6. Configure an IP interface on top of the aggregation aggr1.

    # ipadm create-ip aggr1
    # ipadm create-addr -a local=203.0.113.1 aggr1/v4
  7. Create a VNIC on top of the aggregation.

    # dladm create-vnic -l aggr1 vnic1
  8. Configure probe-based failure detection for the aggregation.

    # dladm set-linkprop -p probe-ip=+ aggr1

    Because the source and target addresses are not specified, they are chosen automatically.
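
    If you prefer to pin the probe addresses rather than rely on automatic selection, the probe-ip property also accepts an explicit source and target pair. A sketch, assuming the addresses used elsewhere in this use case:

```shell
# Explicitly set the probe source and target instead of letting the
# system choose them. 203.0.113.1 is the local address configured on
# aggr1; 203.0.113.2 is a reachable on-link target.
dladm set-linkprop -p probe-ip=203.0.113.1+203.0.113.2 aggr1

# Confirm the effective value of the property.
dladm show-linkprop -p probe-ip aggr1
```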

  9. Display the state of the aggregated ports and the targets.

    # dladm show-aggr -S
    LINK       PORT        FLAGS   STATE     TARGETS         XTARGETS
    aggr1      net0        u--3    active    203.0.113.2      net2 net1
    --         net1        u-2-    active       --           net2 net0
    --         net2        u-2-    active       --           net0 net1
  10. Monitor the ICMP probe statistics.

    # dlstat show-aggr -n -P i
       TIME     AGGR      PORT   LOCAL          TARGET        PROBE NETRTT  RTT
       1.16s    aggr1     net0   203.0.113.1     203.0.113.2    i33   --      --
       1.16s    aggr1     net0   203.0.113.1     203.0.113.2    i33   0.08ms  0.33ms
       2.05s    aggr1     net0   203.0.113.1     203.0.113.2    i34   --      --
       2.05s    aggr1     net0   203.0.113.1     203.0.113.2    i34   0.01ms  0.64ms
       4.05s    aggr1     net0   203.0.113.1     203.0.113.2    i35   --      --
       4.05s    aggr1     net0   203.0.113.1     203.0.113.2    i35   0.10ms  0.35ms
       5.54s    aggr1     net0   203.0.113.1     203.0.113.2    i36   --      --
       5.54s    aggr1     net0   203.0.113.1     203.0.113.2    i36   0.08ms  0.34ms 
  11. Monitor the transitive probe statistics between the ports.

     # dlstat show-aggr -n -P t
       TIME     AGGR   PORT        LOCAL       TARGET  PROBE NETRTT  RTT
       0.30s    aggr1  net2        net2        net0    t38   --      --
       0.30s    aggr1  net2        net2        net0    t38   0.46ms  0.59ms
       0.46s    aggr1  net0        net0        net1    t39   --      --
       0.46s    aggr1  net0        net0        net1    t39   0.46ms  0.50ms
       0.48s    aggr1  net1        net1        net0    t39   --      --
       0.48s    aggr1  net1        net1        net0    t39   0.34ms  0.38ms
       0.72s    aggr1  net2        net2        net1    t38   --      --
       0.72s    aggr1  net2        net2        net1    t38   0.38ms  0.42ms
       0.76s    aggr1  net0        net0        net2    t39   --      --
       0.76s    aggr1  net0        net0        net2    t39   0.33ms  0.38ms
       0.87s    aggr1  net1        net1        net2    t39   --      --
       0.87s    aggr1  net1        net1        net2    t39   0.32ms  0.38ms
       1.95s    aggr1  net2        net2        net0    t39   --      --
       1.95s    aggr1  net2        net2        net0    t39   0.36ms  0.42ms
       1.97s    aggr1  net2        net2        net1    t39   --      --
       1.97s    aggr1  net2        net2        net1    t39   0.32ms  0.38ms
       1.99s    aggr1  net0        net0        net1    t40   --      --
       1.99s    aggr1  net0        net0        net1    t40   0.31ms  0.36ms
       2.12s    aggr1  net1        net1        net0    t40   --      --
       2.12s    aggr1  net1        net1        net0    t40   0.34ms  0.40ms
       2.14s    aggr1  net0        net0        net2    t40   --      -- 

The aggregation aggr1 with an IP interface configured over it is created. The VNIC vnic1 is configured on top of the aggregation aggr1. Probe-based failure detection is configured without specifying either the source IP address or the target IP address of the probes. To enable probing, the target in the routing table is configured with an IP address, 203.0.113.2, on the same subnet as the specified IP address, 203.0.113.1. The ICMP and transitive probe statistics are monitored.
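
If no suitable probe target is discovered automatically, one way to place a target IP address in the routing table is to add a static host route to a reachable on-link system. A sketch, assuming 203.0.113.2 is a host on the same subnet as the aggregation's address:

```shell
# Add a persistent static host route so that 203.0.113.2 is present in
# the routing table and can be selected as a probe target.
route -p add -host 203.0.113.2 203.0.113.2 -static
```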