
Configuring an Oracle® Solaris 11.4 System as a Router or a Load Balancer


Updated: November 2020

Managing an ILB

ILB management consists of defining server groups, monitoring ILB health checks, and creating ILB rules.

Defining Server Groups and Back-End Servers

Back-end servers that are added to a server group automatically obtain server IDs that are unique within the group. For reference, see the ilbadm(8) man page.

Creating an ILB Server Group

To create an ILB server group, first identify the servers that are to be included in the group. Servers can be specified by host name or IP address, optionally with a port or a range of ports. Then issue the following command:

$ ilbadm create-servergroup -s servers=server1,server2,server3 servergroup 

Unique server IDs prefixed with a leading underscore (_) are generated for each added server.


Note -  A server that belongs to multiple server groups has a separate server ID in each group.

Adding Back-End Servers to an ILB Server Group

To add a back-end server to a server group, issue the following command:

$ ilbadm add-server -s server=server1[,server2...] servergroup

Server specifications must include a host name or IP address and can optionally include a port or a range of ports. Server entries with the same IP address are disallowed within a server group.


Note -  IPv6 addresses must be enclosed in square brackets.
Example 16  Creating an ILB Server Group and Adding Back-End Servers

The following example shows how to simultaneously create a server group and its three back-end servers.

$ ilbadm create-servergroup -s \
   servers=192.0.2.11,192.0.2.12,192.0.2.13 webgroup
$ ilbadm show-servergroup
SGNAME         SERVERID            MINPORT MAXPORT IP_ADDRESS
webgroup       _webgroup.0         --      --      192.0.2.11
webgroup       _webgroup.1         --      --      192.0.2.12
webgroup       _webgroup.2         --      --      192.0.2.13

The following example shows how to create a server group and separately add three back-end servers.

$ ilbadm create-servergroup webgroup1
$ ilbadm add-server -s server=[2001:0db8:7::feed:6]:8080,\
   [2001:0db8:7::feed:7]:8080,[2001:0db8:7::feed:8]:8080 webgroup1

Enabling or Disabling a Back-End Server in an ILB Server Group

First identify the IP address, host name, or server ID of the back-end server that you want to enable or disable. You must associate the server group with a rule before the servers in the group can be enabled or disabled. When a server is disabled, packet forwarding to that server is halted.

A server can have multiple server IDs if it belongs to multiple server groups. To enable or disable the server only for the rules that are associated with a particular server ID, specify that server ID.

Use the following command syntax:

$ ilbadm disable-server|enable-server server1

To display the state of the server, type the following command:

$ ilbadm show-server [[-p] -o field[,field...]] [rulename]

Note -  A server's enabled or disabled state is displayed only when the server group to which it belongs is associated with a rule.
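For example, assuming a server ID _sg1.2 whose server group is associated with a rule named rule1 (both names are illustrative), you might disable the server, verify its state, and then re-enable it:

$ ilbadm disable-server _sg1.2
$ ilbadm show-server rule1
$ ilbadm enable-server _sg1.2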

Deleting a Back-End Server From an ILB Server Group

To remove a back-end server from one or more server groups, first identify the server's ID:

$ ilbadm show-servergroup -o all

The server ID is a unique name that is assigned to the IP address of a server when the server is added to a server group.

Then, delete the server.

$ ilbadm remove-server -s server=server-ID server-group

The following example removes the server _sg1.2 from the server group sg1.

$ ilbadm remove-server -s server=_sg1.2 sg1

If the server is being used by a NAT or half-NAT rule, disable the server before removing it. See Enabling or Disabling a Back-End Server in an ILB Server Group. A disabled server enters the connection-draining state. Periodically check the NAT table by using the ilbadm show-nat command to see whether the server still has connections. After all the connections are drained, the server no longer appears in the show-nat output, and you can then remove the server.

If the conn-drain timeout value is set, the connection-draining state ends when the timeout period expires. The default value of conn-drain timeout is 0, which means that connection draining waits until a connection is gracefully shut down.
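Putting these steps together, a drain-and-remove sequence might look like the following sketch, which reuses the illustrative server ID and group from the previous example; repeat the show-nat check until the server no longer appears in the output:

$ ilbadm disable-server _sg1.2
$ ilbadm show-nat
$ ilbadm remove-server -s server=_sg1.2 sg1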

Deleting ILB Server Groups

This section describes how to delete an ILB server group. You cannot delete a server group that is used by any active rule.

First, display all the available information about server groups.

$ ilbadm show-servergroup -o all
SGNAME      SERVERID       MINPORT     MAXPORT     IP_ADDRESS
specgroup   _specgroup.0   7001        7001        192.0.2.18
specgroup   _specgroup.1   7001        7001        192.0.2.19
test123     _test123.0     7002        7002        192.0.2.18
test123     _test123.1     7002        7002        192.0.2.19

Then, delete the group. For example, based on the previous output, you would type:

$ ilbadm delete-servergroup test123

If the server group is in use by an active rule, the deletion fails.

Monitoring Health Checks in ILB

    ILB provides the following optional types of server health checks:

  • Built-in ping probes

  • Built-in TCP probes

  • Built-in UDP probes

  • User-supplied custom tests that can run as health checks

By default, ILB does not perform any health checks. You can specify health checks for each server group when you create a load-balancing rule. Only one health check for every load-balancing rule is allowed. Health checks on the server group that is associated with the enabled virtual service start automatically and are repeated periodically. The checks stop if the virtual service is disabled. The previous health check states are not preserved when the virtual service is re-enabled.
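For example, because health checks follow the state of the virtual service, disabling and then re-enabling the associated rule (here the illustrative rule1) stops the checks and restarts them with a clean state:

$ ilbadm disable-rule rule1
$ ilbadm enable-rule rule1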

When you specify a TCP, UDP, or custom test probe for running a health check, ILB sends a ping probe, by default, to determine whether the server is reachable before it sends the specified TCP, UDP, or custom test probe to the server. If the ping probe fails, the corresponding server is disabled with the health check status unreachable. If the ping probe succeeds but the TCP, UDP, or custom test probe fails, the server is disabled with the health check status dead.

You can disable the default ping probe for all probe types except UDP. For UDP health checks, the ping probe is always used.
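For example, anticipating the create-healthcheck syntax shown in the next section, the following sketch creates a TCP health check without the leading ping probe by using the -n option; show-healthcheck reports such a check with N in the DEF_PING column:

$ ilbadm create-healthcheck -n -h hc-timeout=3,\
   hc-count=2,hc-interval=8,hc-test=tcp hc-noping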

Creating a Health Check

In the following example, two health check objects, hc1 and hc-myscript, are created. The first health check uses the built-in TCP probe. The second health check uses a custom test, /var/tmp/my-script.

$ ilbadm create-healthcheck -h hc-timeout=3,\
   hc-count=2,hc-interval=8,hc-test=tcp hc1
$ ilbadm create-healthcheck -h hc-timeout=3,\
   hc-count=2,hc-interval=8,hc-test=/var/tmp/my-script hc-myscript
hc-timeout

Time limit beyond which a health check that does not complete is considered to have failed.

hc-count

Number of attempts to run the hc-test health check.

hc-interval

Time between consecutive health checks. To avoid sending probes to all servers at the same time, the actual interval is randomized between 0.5 * hc-interval and 1.5 * hc-interval.

hc-test

Type of health check. You can specify a built-in health check, such as tcp, udp, or ping, or an external health check, which must be specified with its full path name.


Note -  The port specification for hc-test is specified with the hc-port keyword in the create-rule subcommand. For more information, see the ilbadm(8) man page.
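For example, anticipating the create-rule syntax described in Creating an ILB Rule, the following sketch directs the health check to any port that the ilbd daemon selects from the server's port range; the hc-port=ANY form and the hc1 and sg1 objects are assumptions for illustration:

$ ilbadm create-rule -e -i vip=203.0.113.10,port=5000,protocol=tcp \
   -m lbalg=rr,type=HALF-NAT \
   -h hc-name=hc1,hc-port=ANY -o servergroup=sg1 rule2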

A user-supplied custom test can be a binary or a script.

  • The test can reside anywhere on the system. You must specify the absolute path when you create the health check.

    When you specify the test, for example, /var/tmp/my-script, as part of the health check specification, the ilbd daemon forks a process and runs the test as follows:

    /var/tmp/my-script $1 $2 $3 $4 $5
    $1

    VIP (literal IPv4 or IPv6 address)

    $2

    Server IP (literal IPv4 or IPv6 address)

    $3

Protocol (UDP or TCP, as a string)

    $4

    Numeric port range (the user-specified value for hc-port)

    $5

    Maximum time (in seconds) that the test must wait before returning a failure. If the test runs beyond the specified time, it might be stopped, and the test is considered failed. This value is user-defined and specified in hc-timeout.

  • The user-supplied test does not have to use all the arguments, but it must return one of the following:

    • Round-trip time (RTT) in microseconds

    • 0 if the test does not calculate RTT

    • -1 for failure

By default, the health check test runs with the following privileges: PRIV_PROC_FORK, PRIV_PROC_EXEC, and PRIV_NET_ICMPACCESS.

If a broader privilege set is required, you must implement setuid in the test. For more details on the privileges, refer to the privileges(7) man page.
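A custom test might look like the following minimal sketch. The script path and the use of the nc(1) utility are assumptions for illustration; the sketch reports its result on standard output, printing 0 for a server that is alive (RTT not calculated) and -1 for a failure:

#!/bin/sh
# Minimal ILB custom health check sketch (illustrative, not a supplied script).
# Arguments passed by ilbd:
#   $1 = VIP, $2 = server IP, $3 = protocol, $4 = port, $5 = timeout in seconds
server=$2
port=$4
timeout=$5

# Probe the server port: -z opens the connection without sending data,
# and -w bounds the attempt by the hc-timeout value.
if /usr/bin/nc -w "$timeout" -z "$server" "$port" >/dev/null 2>&1; then
    echo 0      # alive, RTT not calculated
else
    echo -1     # failure
fi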

Listing Health Checks

To obtain detailed information about configured health checks, issue the following command:

$ ilbadm show-healthcheck
HCNAME      TIMEOUT COUNT   INTERVAL DEF_PING TEST
hc1         3       2       8        Y        tcp
hc2         3       2       8        N        /var/usr-script

Displaying Health Check Results

The ilbadm show-hc-result command shows the results of all health checks unless you specify a specific rule.

The following example displays the health check results associated with a rule called rule1.

$ ilbadm show-hc-result rule1
RULENAME   HCNAME     SERVERID   STATUS   FAIL LAST      NEXT      RTT
rule1      hc1        _sg1:0     dead     10   11:01:19  11:01:27  941
rule1      hc1        _sg1:1     alive    0    11:01:20  11:01:34  1111 

Note -  The show-hc-result command displays the health check result only when the rules have associated health checks.

The LAST column of the output shows the time a health check ran, while the NEXT column shows the time the next health check will run.

Deleting a Health Check

The following example deletes a health check called hc1.

$ ilbadm delete-healthcheck hc1

Configuring ILB Rules

This section describes how you can use the ilbadm command to create, delete, and list the load-balancing rules.

ILB Algorithms

ILB algorithms control traffic distribution and provide various characteristics for load distribution and server selection.

    ILB provides the following algorithms for both DSR and NAT modes of operation:

  • Round-robin – The load balancer assigns requests to the servers in a server group on a rotating basis. After a server is assigned a request, that server moves to the end of the list.

  • src-IP hash – The load balancer selects a server based on the hash value of the source IP address of the incoming request.

  • src-IP, port hash – The load balancer selects a server based on the hash value of the source IP address and the source port of the incoming request.

  • src-IP, VIP hash – The load balancer selects a server based on the hash value of the source IP address and the destination IP address of the incoming request.

Creating an ILB Rule

    In ILB, a virtual service is represented by a load-balancing rule and is defined by the following parameters:

  • Virtual IP address

  • Transport protocol: TCP or UDP

  • Port number (or a port range)

  • Load-balancing algorithm

  • Load-balancing mode (DSR, full-NAT, or half-NAT)

  • Server group consisting of a set of back-end servers

  • Optional server health checks that can be run for each server in the server group

  • Optional port to use for health checks


    Note -  You can specify health checks on a particular port or on any port that the ilbd daemon randomly selects from the port range for the server.
  • Rule name to represent a virtual service

Before you can create a rule, you must do the following:

  • Create a server group that includes the appropriate back-end servers. See Defining Server Groups and Back-End Servers.

  • Create a health check to associate with the rule. See Creating a Health Check.

  • Identify the VIP, port, and optional protocol that are to be associated with the rule.

  • Identify the mode of operation that you want to use (DSR, half-NAT, or full-NAT).

  • Identify the load-balancing algorithm to be used. See ILB Algorithms.

To create an ILB rule, issue the ilbadm create-rule command together with specific parameter definitions from the previous list. For reference, see the ilbadm(8) man page.

$ ilbadm create-rule -e -i vip=IPaddr,port=port,protocol=protocol \
   -m lbalg=lb-algorithm,type=topology-type,proxy-src=IPaddr1-IPaddr2,\
   pmask=value -h hc-name=hc1 -o servergroup=sg rule1

Note -  The -e option enables the rule that is being created. Otherwise, rules you create are disabled by default.
Example 17  Creating a Full-NAT Rule With Health Check and Session Persistence

This example creates a health check called hc1 and a server group called sg1. The server group consists of two servers, each with a range of ports. The last command creates and enables a rule called rule1 and associates the rule with the server group and the health check. This rule implements the full-NAT mode of operation. Note that the server group and the health check must be created before the rule.

$ ilbadm create-healthcheck -h hc-test=tcp,hc-timeout=2,\
   hc-count=3,hc-interval=10 hc1
$ ilbadm create-servergroup -s servers=192.0.2.10:6000-6009,192.0.2.11:7000-7009 sg1
$ ilbadm create-rule -e -p -i vip=203.0.113.10,port=5000-5009,\
   protocol=tcp -m lbalg=rr,type=NAT,proxy-src=192.0.2.34-192.0.2.44,pmask=27 \
   -h hc-name=hc1 -o servergroup=sg1 rule1

When persistent mapping is created, subsequent requests (connections, packets, or both) to a virtual service from a matching client source IP address are forwarded to the same back-end server. The prefix length, in Classless Inter-Domain Routing (CIDR) notation, is a value between 0 and 32 for IPv4 and between 0 and 128 for IPv6.
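To confirm that persistent mapping is in effect, you can list the persistence entries for a rule; this sketch assumes rule1 from Example 17:

$ ilbadm show-persist rule1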

When creating a half-NAT or a full-NAT rule, specify the value for the conn-drain timeout. The default value of conn-drain timeout is 0, which means that connection draining waits until a connection is gracefully shut down.

A proxy source IP address is needed only for a full-NAT configuration. In the full-NAT mode of operation, ILB rewrites both the source and destination IP addresses of the packets coming from a client. The destination IP address is changed to the IP address of one of the back-end servers. The source address is changed to one of the proxy source addresses given on the ilbadm command line.

Proxy source addresses are needed because at most 65535 connections can exist between one source address and one back-end server using one service port at any point in time. This limit becomes a bottleneck in load balancing. A list of proxy source addresses enables ILB to overcome this bottleneck because it gives ILB a number of source addresses to use.
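For example, the rule in Example 17 lists 11 proxy source addresses (192.0.2.34 through 192.0.2.44), which raises that ceiling to about 11 x 65535, or roughly 720,000, concurrent connections per back-end server port.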

Using proxy source addresses also avoids address conflicts between ILB and the system, whether virtual or not, on which ILB is running. Some network configurations require that the source addresses used by NAT be completely different from the addresses used by that system.

Listing ILB Rules

To list the configuration details of a rule, issue the following command. If no rule name is specified, information is provided for all rules.

$ ilbadm show-rule
RULENAME        STATUS   LBALG           TYPE    PROTOCOL VIP           PORT
rule-http       E        hash-ip-port    NAT     TCP      203.0.113.1      80
rule-dns        D        hash-ip         NAT     UDP      203.0.113.1      53
rule-abc        D        roundrobin      NAT     TCP      2001:db8::1   1024
rule-xyz        E        ip-vip          NAT     TCP      2001:db8::1   2048-2050

Deleting an ILB Rule

You use the ilbadm delete-rule command to delete a rule. Add the -a option to delete all rules.

$ ilbadm delete-rule rule1
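For example, to remove every configured rule at once:

$ ilbadm delete-rule -a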