Oracle Solaris Administration: IP Services (Oracle Solaris 11 Information Library)
This section describes the key features of ILB.
ILB supports stateless DSR and NAT modes of operation for IPv4 and IPv6, in single-legged and dual-legged topologies.
Stateless DSR mode – In DSR mode, ILB balances incoming requests across the back-end servers but lets the return traffic from the servers to the clients bypass it. However, you can also set up ILB as a router for the back-end servers, in which case the response from a back-end server to the client is routed through the machine that is running ILB. With stateless DSR, ILB saves no state information about processed packets other than basic statistics. Because no state is saved in this mode, performance is comparable to normal IP forwarding. This mode is best suited for connectionless protocols.
NAT mode (full-NAT and half-NAT) – ILB uses NAT in stand-alone mode strictly for load balancing. In this mode, ILB rewrites the header information and handles both incoming and outgoing traffic. NAT mode provides additional security and is best suited for HTTP or SSL traffic.
Note - The NAT code path that is implemented in ILB differs from the code path that is implemented in the IP Filter feature of Oracle Solaris. Do not use both of these code paths simultaneously.
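The difference between the two NAT flavors can be sketched in a few lines. This is only an illustration of the header rewriting each flavor performs, not ILB's in-kernel implementation; all addresses are hypothetical examples.

```python
def half_nat(pkt, backend):
    """Half-NAT: rewrite only the destination, so back-end servers
    see the real client source address."""
    return {**pkt, "dst": backend}

def full_nat(pkt, backend, proxy_src):
    """Full-NAT: rewrite both source and destination, so return
    traffic naturally flows back through the load balancer."""
    return {**pkt, "src": proxy_src, "dst": backend}

# A client packet addressed to the virtual IP (VIP).
client_pkt = {"src": "192.0.2.10", "dst": "203.0.113.1"}

half_nat(client_pkt, "10.0.0.5")                # src preserved
full_nat(client_pkt, "10.0.0.5", "10.0.0.254")  # src replaced by proxy
```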
ILB algorithms control traffic distribution and provide various characteristics for load distribution and server selection. ILB provides the following algorithms for the two modes of operation:
Round-robin – With the round-robin algorithm, the load balancer assigns requests to a list of servers on a rotating basis. After a server is assigned a request, that server moves to the end of the list.
Source IP hash – With the source IP hash method, the load balancer selects a server based on the hash value of the source IP address of the incoming request.
Source IP, port hash – With the source IP, port hash method, the load balancer selects a server based on the hash value of the source IP address and the source port of the incoming request.
Source IP, VIP hash – With the source IP, VIP hash method, the load balancer selects a server based on the hash value of the source IP address and the destination IP address of the incoming request.
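The four algorithms can be sketched conceptually as follows. This is an illustration only, assuming MD5 as the hash function; the hash ILB actually uses is an internal detail and may differ.

```python
import hashlib

servers = ["srv1", "srv2", "srv3"]

def round_robin(server_list):
    # Assign the request to the head of the list, then rotate that
    # server to the end of the list.
    chosen = server_list[0]
    server_list.append(server_list.pop(0))
    return chosen

def hash_pick(*fields):
    # Hash the selected packet fields and map the digest onto the
    # server list; the same fields always select the same server.
    digest = hashlib.md5("|".join(fields).encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# Source IP hash keys on the client address only; the port and VIP
# variants mix in more fields, so distinct flows from one client
# can be spread across servers.
hash_pick("192.0.2.10")                   # source IP hash
hash_pick("192.0.2.10", "40000")          # source IP, port hash
hash_pick("192.0.2.10", "203.0.113.1")    # source IP, VIP hash
```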
The command-line interface is the /usr/sbin/ilbadm utility. It includes subcommands to configure load-balancing rules, server groups, and health checks, as well as subcommands to display statistics and view configuration details. The subcommands fall into two categories:
Configuration subcommands – These subcommands enable you to perform the following tasks:
Create and delete load-balancing rules
Enable and disable load-balancing rules
Create and delete server groups
Add and remove servers from a server group
Enable and disable back-end servers
Create and delete server health checks for a server group within a load-balancing rule
Note - Administering the configuration subcommands requires privileges, which are obtained through Role-Based Access Control (RBAC). To create the appropriate role and assign it to a user, see Initially Configuring RBAC (Task Map) in Oracle Solaris Administration: Security Services.
View subcommands – These subcommands enable you to perform the following tasks:
View configured load-balancing rules, server groups, and health checks
View packet forwarding statistics
View the NAT connection table
View health check results
View the session persistence mapping table
Note - No privileges are required to use the view subcommands.
For a list of ilbadm subcommands, see ILB Command and Subcommands. For more detailed information about ilbadm subcommands, refer to the ilbadm(1M) man page.
ILB offers an optional server monitoring feature that can provide server health checks with the following capabilities:
Built-in ping probes
Built-in TCP probes
Built-in UDP probes
User-supplied tests that can be run as server health checks
By default, ILB does not perform any health checks. You can specify health checks for each server group when creating a load-balancing rule. You can configure only one health check per load-balancing rule. As long as a virtual service is enabled, the health checks on the server group that is associated with the enabled virtual service start automatically and repeat periodically. The health checks stop as soon as the virtual service is disabled. The previous health check states are not preserved when the virtual service is re-enabled.
When you specify a TCP, UDP, or custom test probe for running a health check, ILB sends a ping probe, by default, to determine if the server is reachable before it sends the specified TCP, UDP, or custom test probe to the server. The ping probe is a method of monitoring server health. If the ping probe fails, the corresponding server is disabled with the health check status of unreachable. If the ping probe succeeds, but the TCP, UDP, or custom test probe fails, the server is disabled with the health check status of dead.
Note -
You can disable the default ping probe for TCP and custom health checks.
The default ping probe cannot be disabled for UDP health checks; for UDP, the ping probe is always run.
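The probe sequence described above can be summarized as a small decision function. This is a sketch of the documented behavior, not ILB code; the returned state names match the health check statuses described above.

```python
def health_state(ping_ok, probe_ok, ping_enabled=True):
    """Determine a server's health check state from its probe results."""
    # The ping probe runs first (always for UDP; optional otherwise).
    # A ping failure disables the server as unreachable before the
    # main probe is even sent.
    if ping_enabled and not ping_ok:
        return "unreachable"
    # Ping passed (or was disabled) but the specified TCP, UDP, or
    # custom probe failed: the server is disabled as dead.
    if not probe_ok:
        return "dead"
    return "alive"

health_state(ping_ok=False, probe_ok=False)  # server unreachable
health_state(ping_ok=True, probe_ok=False)   # server dead
health_state(ping_ok=True, probe_ok=True)    # server alive
```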
You can configure the health check for the parameters shown in the following table.
Table 22-1 Configuring Health Check Parameters
This section describes additional features of ILB.
Enables clients to ping virtual IP (VIP) addresses – ILB can respond to Internet Control Message Protocol (ICMP) echo requests to VIPs from clients. ILB provides this capability for DSR and NAT modes of operation.
Enables you to add and remove servers from a server group without interrupting service – You can dynamically add and remove servers from a server group without interrupting existing connections established with the back-end servers. ILB provides this capability for the NAT mode of operation.
Enables you to configure session persistence (stickiness) – For many applications, it is important that a series of connections, packets, or both from the same client is sent to the same back-end server. You can configure session persistence for a virtual service by specifying the netmask in the create-rule subcommand (-m persist=<netmask>). After a persistent mapping is created, subsequent connections or packets to the virtual service from a matching client source IP address are forwarded to the same back-end server. Session persistence is supported for both DSR and NAT modes of operation.
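Conceptually, the persistent mapping works like the sketch below: the client source address is masked with the configured netmask, and the masked value keys a table of back-end assignments. This is an illustration only, assuming IPv4 and a hypothetical pick_server function; it is not ILB's implementation.

```python
import ipaddress

persist_map = {}  # masked client source -> assigned back-end server

def select_server(src_ip, prefix_len, pick_server):
    """Return the sticky back-end server for a client source address."""
    # Mask the client address with the persistence netmask; every
    # client in the same masked block sticks to one back-end server.
    key = ipaddress.ip_network(
        f"{src_ip}/{prefix_len}", strict=False).network_address
    if key not in persist_map:
        # First request from this block: select a server normally
        # (pick_server stands in for the rule's load-balancing algorithm).
        persist_map[key] = pick_server()
    return persist_map[key]
```

Two clients whose addresses fall in the same masked block, for example 192.0.2.10 and 192.0.2.99 under a /24 mask, are forwarded to the same back-end server.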
Enables you to perform connection draining – ILB provides support for this capability only for servers of NAT-based virtual services. This capability prevents new connections from being sent to a server that is disabled. Existing connections to the server continue to function. After all the connections to that server terminate, the server can then be shut down for maintenance. After the server is ready to handle requests, enable the server so that the load balancer can forward new connections to it. This feature enables you to shut down servers for maintenance without disrupting active connections or sessions.
Enables load balancing of TCP and UDP ports – ILB can load balance all ports on a given IP address across different sets of servers without requiring you to set up explicit rules for each port. ILB provides this capability for DSR and NAT modes of operation.
Enables you to specify independent ports for virtual services within the same server group – With this feature, ILB enables you to specify different destination ports for different servers in the same server group for the NAT modes of operation.
Enables you to load balance a simple port range – ILB can load balance a range of ports on the VIP to a given server group. This also conserves IP addresses, because you can load balance different port ranges on the same VIP to different sets of back-end servers. In addition, when session persistence is enabled for NAT mode, ILB sends requests from the same client IP address for different ports in the range to the same back-end server.
Enables port range shifting and collapsing – Port range shifting and collapsing depend on the port range of a server in a load-balancing rule. If the port range of a server differs from the VIP port range, port shifting is applied automatically. If the server port range is a single port, port collapsing is applied. These features are provided for the NAT modes of operation.
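Port shifting and collapsing reduce to a small mapping rule, sketched below with hypothetical port ranges. This illustrates the behavior just described; it is not ILB code.

```python
def map_port(vip_port, vip_range, server_range):
    """Map a VIP port onto a server's configured port range."""
    srv_lo, srv_hi = server_range
    if srv_lo == srv_hi:
        # Collapsing: a single-port server range receives every VIP port.
        return srv_lo
    # Shifting: preserve the offset of the VIP port within its range.
    return srv_lo + (vip_port - vip_range[0])

map_port(5003, (5000, 5009), (6000, 6009))  # shifted to 6003
map_port(5003, (5000, 5009), (8080, 8080))  # collapsed to 8080
```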