Managing Oracle Solaris 11.1 Network Performance

Configuring ILB

This section describes the steps for setting up ILB to use a half-NAT topology to load balance traffic between two servers. For details about the NAT topology implementation, see ILB Operation Modes.

How to Configure ILB

  1. Assume a role that includes the ILB Management rights profile, or become superuser.

    You can assign the ILB Management rights profile to a role that you create. To create the role and assign the role to a user, see Initially Configuring RBAC (Task Map) in Oracle Solaris 11.1 Administration: Security Services.
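
    For example, if a role that includes the ILB Management rights profile has already been created, you can assume the role and confirm that the profile is in effect. The role name ilbrole used here is only an illustration.

    $ su - ilbrole
    $ profiles

    The output of the profiles command for the role should include ILB Management.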

  2. Set up the back-end servers.

    In this scenario, the back-end servers are set up to use ILB as their default router. You do this by running the following command on both servers.

    # route add -p default 192.168.1.21

    After running this command, start the server application on both servers. This example assumes a TCP application that listens on port 5000.
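
    To confirm that the route is in place on each server, you can display the persistent routes or the routing table, for example:

    # route -p show
    # netstat -rn

    The default route should point to 192.168.1.21.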

  3. Set up the server group in ILB.

    This scenario uses two servers, 192.168.1.50 and 192.168.1.60. Create a server group named srvgrp1 that consists of these two servers by typing the following command.

    # ilbadm create-sg -s servers=192.168.1.50,192.168.1.60 srvgrp1
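
    To verify that the server group and its members were added, you can list the configured server groups:

    # ilbadm show-servergroup
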
  4. Set up a simple health check called hc-srvgrp1 by typing the following command.

    A simple TCP-level health check is used to detect whether the server application is reachable. The check runs every 60 seconds, makes at most 3 attempts, and waits at most 3 seconds for each attempt to complete. If all 3 attempts fail, the server is marked as dead.

    # ilbadm create-hc -h hc-test=tcp,hc-timeout=3,hc-count=3,hc-interval=60 \
    hc-srvgrp1
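
    To verify the health check definition, you can list the configured health checks:

    # ilbadm show-healthcheck

    After the rule in the next step is created and enabled, the ilbadm show-hc-result command reports the health check status of each server in the group.
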
  5. Set up an ILB rule by typing the following command.

    This rule uses persistence with a 32-bit mask, and the load-balancing algorithm is round robin. The rule uses the server group srvgrp1 and the health check hc-srvgrp1.

    # ilbadm create-rule -e -p -i vip=10.0.2.20,port=5000 -m \
     lbalg=rr,type=half-nat,pmask=32 \
    -h hc-name=hc-srvgrp1 -o servergroup=srvgrp1 rule1_rr
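
    To verify the completed configuration, you can display the rule. The -f option shows the full details, including the server group, health check, load-balancing algorithm, and operation mode:

    # ilbadm show-rule
    # ilbadm show-rule -f rule1_rr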