If installation fails for one or more hosts, note the reason given in the page that reports the failure. The following conditions must be in effect for successful installation:
Each array member must be running and must be configured, as described in the Netra Proxy Cache Array Configuration Guide.
All host and service addresses must be unique and must have the same subnet number.
One host in the array must be configured as a DNS server for the array and must have a unique DNS service address with the same subnet number as the host and proxy cache service addresses.
All control addresses must be unique and must have the same subnet number.
If you configured a local name service (not recommended), a configuration mistake might have occurred; for example, on one or more hosts, the loopback interface might be configured with the host address.
Other, less likely possibilities exist, such as the update process having died on a given array member. You can usually correct such a problem by rebooting the affected host.
In the event of installation failure, consult the error logs.
Click the home icon to load the Main Administration page.
Click Proxy Cache Service to load the Proxy Cache Administration page.
Under the Monitoring heading, click Log Files.
In the Proxy Cache Log Administration page, click View for the Administration Client Error log or the Configuration Installation Error log.
In addition to the error logs, ifconfig is a useful troubleshooting tool if you have a serial connection to a Netra Proxy Cache Server. On an array member, correct ifconfig output is as follows:
# ifconfig -a
lo0: flags=<num><UP,LOOPBACK,RUNNING,MULTICAST> mtu 8232
        inet 127.0.0.1 netmask ff000000
hme0: flags=<num><UP,BROADCAST,NOTRAILERS,RUNNING,MULTICAST> mtu 1500
        inet <host address> netmask <service net netmask> broadcast <service net number>.255
        ether <ethernet address>
hme0:1: flags=<num><UP,BROADCAST,NOTRAILERS,RUNNING,MULTICAST,PRIVATE> mtu 1500
        inet <proxy cache service address> netmask <service net netmask> broadcast <service net number>.255

The following entry (hme0:2:) is present only on the array DNS server:

hme0:2: flags=<num><UP,BROADCAST,NOTRAILERS,RUNNING,MULTICAST,PRIVATE> mtu 1500
        inet <DNS service address> netmask <service net netmask> broadcast <service net number>.255
hme1: flags=<num><UP,BROADCAST,NOTRAILERS,RUNNING,MULTICAST> mtu 1500
        inet <control net address> netmask <control net netmask> broadcast <control net number>.255
        ether <ethernet address>
In the preceding output, note that spacing is altered for readability. Also, the broadcast addresses show examples of Class C broadcast addresses. Your own broadcast address might differ, depending on the netmask you use on your service and control networks.
Regarding ifconfig output, if a host cannot provide a service (proxy cache or DNS), the hme0:<num> entry will not be present for that service. On the other hand, a host might have additional hme0:<num> entries, indicating that it has acquired additional service addresses from other array members.
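If you suspect that a host has lost or acquired a service address, you can confirm which addresses are currently configured with ifconfig and grep; these are standard Solaris commands, and the placeholder below stands for your own proxy cache or DNS service address:

# ifconfig -a | grep "<proxy cache service address>"

To count the logical service-interface entries (hme0:<num>) currently configured on a host, enter:

# ifconfig -a | grep "^hme0:" | wc -l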
Most of the processes listed below are present on a Netra Proxy Cache Server as well as on the hosts in a Netra Proxy Cache Array.
OAM Server Process (runs only on administrative host):
jre -cp ./oamserver.zip -noasyncgc -Djava.rmi.server.hostname=<admin host>\ -Djava.rmi
HTTP Daemon (runs on all hosts, not just administrative host):
/opt/netra/SUNWnetra/bin/httpd -f /etc/opt/netra/SUNWnetra/conf/httpd.conf
Update daemon (runs on all hosts):
/opt/SUNWoam/lib/oampushd -s -d /tmp/oampushd -e /opt/SUNWoam/lib/oamutil -p 12
DNS server (runs only on array DNS server):
/usr/sbin/in.named -b named.boot
DNS name lookup process (used by proxy cache service for DNS name lookups):
(dnsserver) -t
By default, five processes of this type are running. You can increase this number to 32.
Proxy cache service SNMP agent (runs on all hosts in an array):
proxycachesnmpd
Array software SNMP Agent (runs on all hosts in array):
scalrsnmpd
FTP get process used by proxy cache service (all hosts in array):
/opt/SUNWcache/lib/ftpget -S 39388
Proxy cache process (all hosts in array):
/opt/SUNWcache/sbin/proxycache -P /var/opt/SUNWcache/proxycache.pid
Solstice DMI-to-SNMP translator (all hosts in array):
/usr/lib/dmi/snmpXdmid -s <host name>
Array software daemon (all hosts in array):
/opt/SUNWscalr/lib/scalrd -f /etc/opt/SUNWscalr/scalrd.conf -p /var/opt/SUNWscalr
SNMP master agent (all hosts in array):
/usr/lib/snmp/snmpdx -y -c /etc/snmp/conf
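To confirm that the expected processes are running on a given host (this requires a serial connection), you can list them with ps and egrep; these are standard Solaris commands, not product-specific utilities:

# ps -ef | egrep 'httpd|oampushd|in.named|dnsserver|proxycache|scalrd|snmpdx|snmpXdmid'

Remember that in.named and the dnsserver processes appear only on the array DNS server, and the OAM server process appears only on the administrative host.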
The Netra Proxy Cache Array and Server products include man pages. To access these pages, add the appropriate paths to your MANPATH. For the Netra Proxy Cache Server, add:
/opt/SUNWcache/man
/opt/SUNWoam/man
For Netra Proxy Cache Array, add the preceding paths, plus:
/opt/SUNWscalr/man
To add to your MANPATH, add lines such as those shown below to your shell startup file.
For a C-shell, in your $HOME/.cshrc file enter:
setenv MANPATH ${MANPATH}:/opt/SUNWscalr/man:/opt/SUNWcache/man:/opt/SUNWoam/man
For a Bourne or Korn shell, in your $HOME/.profile file enter:
MANPATH=${MANPATH}:/opt/SUNWscalr/man:/opt/SUNWcache/man:/opt/SUNWoam/man
export MANPATH
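After updating your startup file, start a new shell (or source the file) and verify that the man pages are found. For example, the following commands display the scalrcontrol(1) and ConnectTest(5) pages referred to later in this chapter:

# man scalrcontrol
# man -s 5 ConnectTest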
Load distribution in a Netra Proxy Cache Array is optimal in an environment where the name of the proxy cache service provided by the array is resolved on a continual basis. This occurs in a network where the name service honors the time-to-live (TTL) of the name-to-address entries made available by the array DNS. Examples of such a name service are DNS (using BIND version 4.9.3 or later) and NIS, as shipped with Solaris 2.6.
In an environment where name resolution is static or occurs infrequently (such as with pre-Solaris 2.6 NIS), you might be able to use browser facilities, such as a proxy auto-configuration (PAC) file, to force name service lookups on an ongoing basis.
For an NIS-only environment, the following are two alternatives for resolving the name of the proxy cache service provided by a Netra Proxy Cache Array. Other alternatives are available.
Configure the NIS server to forward unresolved queries to a DNS server that delegates the proxy cache's zone to the array. Set the Array DNS Proxy Records Time-To-Live property in the Advanced array configuration page, described in "DNS", to a low value, such as 3 seconds.
Assign an NIS service name for each service address in the array. By doing this, you achieve failover functionality. However, the DNS configuration on the array becomes redundant.
The browser's PAC file might have a facility for name resolution.
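If you use the DNS forwarding alternative, you can verify from a client that the name of the proxy cache service resolves and that repeated lookups rotate among the service addresses of the available array members. The service name below is a placeholder; substitute the name you configured:

# nslookup <proxy cache service name>

With a low TTL, running the command several times in succession should return different service addresses.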
The Netra Proxy Cache Array and Server products are shipped with the packages listed below installed. Unless otherwise indicated, packages are installed on both the array and server versions of the product.
Table 19-1 Product Packages
| Package Name | Description |
|---|---|
| SUNWcache | Proxy cache server software |
| SUNWcaoam | Proxy cache user interface and configuration database software (Netra Proxy Cache Server only) |
| SUNWcasnm | SNMP agent for proxy cache software |
| SUNWjvjit | Java JIT compiler |
| SUNWjvrt | Java Virtual Machine run-time environment; includes Java, appletviewer, and classes zip file |
| SUNWmibii | Solstice Enterprise Agents SNMP daemon |
| SUNWnsA | Netra HTML forms for configuring name systems (DNS, NIS client, local) |
| SUNWntr | Netra-required library functions, boot scripts, and HTTP daemon |
| SUNWntrA | Netra HTML forms for configuring common Solaris and Netra functionality |
| SUNWntrpP | Netra images and HTML forms for the proxy cache product |
| SUNWoam | Proxy cache plus array configuration files |
| SUNWprxyA | Netra HTML forms for configuring proxy cache |
| SUNWsacom | Solstice Enterprise Agents files for root file system |
| SUNWsadmi | Solstice Enterprise Agents Desktop Management Interface |
| SUNWsasdk | Solstice Enterprise Agents Software Developer Kit |
| SUNWsasnm | Solstice Enterprise Agents Simple Network Management Protocol |
| SUNWscalr | Array daemon and supporting binaries |
| SUNWscapp | Appliance setup |
| SUNWscoam | Array software configuration files (Netra Proxy Cache Array only) |
| SUNWscsml | Array software service monitor license (Netra Proxy Cache Array only) |
| SUNWscsnm | Array daemon SNMP agent |
Table 19-2 lists the disk partitions on the two internal drives of a Netra Proxy Cache Server. You cannot change the disk partitioning without affecting the operation of the server.
If you experience a disk failure, the procedure described in Appendix A, "System Recovery," automatically re-creates the partitions specified in Table 19-2.
Table 19-2 Disk Partitions for Netra Proxy Cache Server
| File System/Mount Point | Disk/Slice | Size |
|---|---|---|
| / | c0t0d0s0 | 600 MB |
| /var (including proxy cache service logs) | c0t1d0s0 | 600 MB |
| swap | c0t0d0s1 | 128 MB |
| swap | c0t1d0s1 | 128 MB |
| overlap | c0t0d0s2 | 4092 MB |
| overlap | c0t1d0s2 | 4092 MB |
| /var/opt/SUNWcache/cache1 | c0t0d0s6 | 3044 MB (remainder of the disk) |
| /var/opt/SUNWcache/cache2 | c0t1d0s6 | 3044 MB (remainder of the disk) |
The disk layout for the Netra Proxy Cache Server is illustrated in Figure 19-1.
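If you have a serial connection to the server, you can confirm that the partitioning matches Table 19-2 with standard Solaris commands. For example:

# prtvtoc /dev/rdsk/c0t0d0s2
# df -k / /var /var/opt/SUNWcache/cache1 /var/opt/SUNWcache/cache2

The first command prints the volume table of contents for the first internal drive (run it against c0t1d0s2 for the second drive); the second reports the size and usage of the mounted file systems.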
The Netra Proxy Cache Array software multicasts load and heartbeat information over the control network. It also performs a redundant multicast of the same data over the service interface. This raises the possibility of overlapping addresses if you have more than one array on a given subnet.
If you have more than one array on a subnet, it is recommended that you use different multicast addresses, not just different port numbers, to distinguish each array. See the description of the multicast address property in "Networks". You can use snoop (1M) to verify that a multicast address is unique within your network.
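For example, to watch for the load and heartbeat multicasts on the control interface, you can run snoop against the multicast address configured for your array. The address placeholder below is only an illustration; use the value set in your own configuration:

# snoop -d hme1 host <multicast address>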
Netra Proxy Cache software enables you to establish email recipients for mail that is addressed to root@<netra host name> or Postmaster@<netra host name>. When entering email addresses, make sure you specify addresses in a form compatible with your sendmail configuration. For example, if your mail system expects an address of the form <login>@<nis domain name>, mail sent to <login>@<host name> is undeliverable.
See "System Administrator Alias" for a description of the system administrator alias and "Primary Configuration" for a description of the proxy webmaster alias.
In the absence of siblings, upon a miss (an object not in its local cache), a proxy cache server issues an HTTP request for the object to its parents or to the origin web server.
In an environment in which the Internet Cache Protocol (ICP) is supported (as it is in the Netra Proxy Cache Server), upon a miss, a proxy cache server asks all of its parents and siblings whether any of them has the requested object. If no parent or sibling responds within a certain period, the proxy cache server forwards the request to one of its parents.
Note that a parent can be responsible for returning the object to a requesting server. A request to a sibling never goes beyond that sibling; that is, a sibling checks only its local cache and does not forward the request.
You can specify the use of certain parents (or siblings) for certain domains, through the use of the Query Parent Cache for Domains property, described in "Proxy Cascade".
The following example illustrates the use of ordering in the parent/sibling table and the Query Parent Cache for Domains property. Assume the following table:
host1    ICP-capable parent
host2    non-ICP-capable parent
host3    ICP-capable parent
host4    sibling
Assume further the Query Parent Cache for Domains property is defined as follows:
host1    .edu
host2    .com
host3    .com
host4    .com
Your server receives a request containing the domain acme.eng.com. The following sequence occurs:
Your server contacts host3 and host4. It does not contact host2 because that host is not ICP-capable; host1 is not contacted because you configured it to handle the .edu domain.
Both host3 and host4 return ICP misses.
Your server fetches the URL from host2 because it is the first parent in the parent/sibling table that matches the .com domain.
In the Host Status page (see "Host Status"), if the control interface test is reported as not OK, it indicates one of the following:
The host being monitored has an incorrect control network number or an incorrect netmask for the control network.
One or more of the other array members has an incorrect control network number or an incorrect netmask for the control network.
A possible, but less likely, alternative is that the control interface hardware is not working correctly.
The Netra Proxy Cache Server supports parent failover, in which, if the server's parent fails, the server switches to the next parent on its list. (See "Proxy Cascade" for a description of the table of parent and sibling proxies.) Failover occurs if the Netra Proxy Cache Server's TCP connect call fails, not if the proxy cache service's connect timeout (2 minutes, by default) is exceeded. (See "Timeouts" for a description of the Timeout for Server Connections property.)
A TCP connect call might fail because the operating system's timeout (3 minutes, by default) is exceeded or from some other cause. If the proxy cache service's timeout is shorter than the operating system's (as is true for the default case), the connect attempt is terminated before an error is returned, with the result that parent failover does not occur.
If your server experiences frequent connection timeouts when attempting to connect to a parent, you can set the proxy cache service's connect timeout to be at least 10 seconds greater than the operating system's TCP connect timeout. Alternatively (if you have a serial connection to your server), you can reduce the operating system's timeout. To change the operating system's timeout, use the ndd command, which takes arguments in milliseconds. For example:
# ndd -set /dev/tcp tcp_ip_abort_cinterval 30000
The preceding command sets the TCP connect timeout to 30 seconds. To view the current TCP connect timeout, enter:
# ndd /dev/tcp tcp_ip_abort_cinterval
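Note that settings made with ndd do not persist across reboots. If you want the shorter TCP connect timeout to survive a reboot, one common approach is to place the ndd command in a run-control script; the script name below is only an example, not a file shipped with the product:

# cat > /etc/rc2.d/S99tcptune <<'EOF'
#!/sbin/sh
ndd -set /dev/tcp tcp_ip_abort_cinterval 30000
EOF
# chmod 744 /etc/rc2.d/S99tcptune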
Listed below are the rules for pattern matching used for the <reg expression> component of the TTL Selection Based on URL property, described in "URL Policy". These rules are taken from the Solaris regexec(3C) man page.
1. If subexpression i in a regular expression is not contained within another subexpression, and it participated in the match several times, then the byte offsets in pmatch[i] will delimit the last such match.
2. If subexpression i is not contained within another subexpression, and it did not participate in an otherwise successful match, the byte offsets in pmatch[i] will be -1. A subexpression does not participate in the match when:
* or \{ \} appears immediately after the subexpression in a basic regular expression, or *, ?, or {} appears immediately after the subexpression in an extended regular expression, and the subexpression did not match (matched zero times)
or
| is used in an extended regular expression to select this subexpression or another, and the other subexpression matched.
3. If subexpression i is contained within another subexpression j, and i is not contained within any other subexpression that is contained within j, and a match of subexpression j is reported in pmatch[j], then the match or non-match of subexpression i reported in pmatch[i] will be as described in 1. and 2. above, but within the substring reported in pmatch[j] rather than the whole string.
4. If subexpression i is contained in subexpression j, and the byte offsets in pmatch[j] are -1, then the pointers in pmatch[i] also will be -1.
5. If subexpression i matched a zero-length string, then both byte offsets in pmatch[i] will be the byte offset of the character or NULL terminator immediately following the zero-length string.
Test and load objects are pieces of software that run in the context of the Netra Proxy Cache array daemon, communicating the health of a service/host instantiation to the monitor object (cache_monitor or dns_monitor) in that daemon. The monitor object is responsible for monitoring a service on a given array host.
The formats of the values returned by test and load objects are as follows:
From a test object, a monitor object expects a boolean value, indicating, for example, whether an interface is up or whether a service is available.
From a load object, a monitor object expects two integers, one for current load, the other for current capacity.
The return values for test and load objects can be applied to a wide variety of resources. For example, a memory-intensive service might call for a load object to measure the availability of swap space.
In the current release of the Netra Proxy Cache product, all array members have the same set of test and load objects. These objects are selected for their appropriateness for a proxy cache service and an array DNS.
The array daemon configuration file, scalrd.conf, contains parameter settings for each test and load object. The file scalrd.conf is stored in /etc/opt/SUNWscalr. If you have a serial connection to an array host, you can use the scalrcontrol (1) utility, stored in /opt/SUNWscalr/bin, to obtain the output from the test and load objects.
There is a man page for each test object type, in /opt/SUNWscalr/man/man5. These man pages describe the parameters for each test object instance below. There is also a man page for scalrcontrol, in /opt/SUNWscalr/man/man1.
In the following object descriptions, parameters are taken from scalrd.conf. Values for these parameters are the default values.
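For example, to see the parameters for a particular object exactly as the array daemon reads them, you can view the corresponding entry in scalrd.conf directly (serial connection required):

# grep cache_connect_test /etc/opt/SUNWscalr/scalrd.conf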
The test objects listed below are shipped with the Netra Proxy Cache product. Their output is displayed in the Host Status page that you invoke from the Array Status page.
cache_connect_test
An object of type ConnectTest (5). Tests the TCP port used by the proxy cache service (8080). Also tests the service address(es) and control address used by the proxy cache service. The test object instance is configured to test persistent TCP connections. The parameters for this test object are as follows:
ConnectTest cache_connect_test port=8080 check_addr=0.0.0.0 interval=10 retries=3 retry_interval=2 reset_min_interval=60 monitor_object=cache_monitor max_connect=99999999 check_control=true persistent_connection=true connection_test_object=cache_http_test
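A rough manual version of this check is to open a TCP connection to port 8080 from another host on the service network; a successful connection confirms only that the port is reachable, not the full test logic:

# telnet <host address> 8080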
cache_process_test
An object of type ProcessTest (5). Tests for the presence of the process associated with the proxy cache service. The parameters for this test object are as follows:
ProcessTest cache_process_test process_id_script="/etc/init.d/scalr.cache getpid" interval=2 retries=3 retry_interval=2 reset_min_interval=60
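You can also run the process_id_script named above by hand to see which process ID the test is tracking:

# /etc/init.d/scalr.cache getpid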
cache_test
An object of type AndTest (5). Combines the outputs from cache_connect_test, cache_process_test, and service_interface_test. Reports failure to the monitor object (cache_monitor) if any of these "child" test objects returns failure. The parameters for this test object are as follows:
AndTest cache_test test_objects=cache_connect_test,cache_process_test reset_script="/etc/init.d/scalr.cache restart" reset_min_interval=60 monitor_object=cache_monitor
control_interface_test
An object of type PingTest (5). Tests the integrity of the control interface. The parameters for this test object are as follows:
PingTest control_interface_test ping_addr=192.168.89.255 min_replies=1 exclude_same_host=true interval=600 ping_timeout=5 retries=3 retry_interval=2
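A manual equivalent is to ping the control network broadcast address from the host and confirm that at least one other array member answers. The address below is the default shown in the parameters above; substitute your own control network's broadcast address:

# ping 192.168.89.255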
dns_connect_test
An object of type ConnectTest (5). Tests the TCP port used by the array DNS (53). Also tests the service address(es) and control address used by the DNS. The parameters for this test object are as follows:
ConnectTest dns_connect_test port=53 check_addr=0.0.0.0 interval=10 retries=3 retry_interval=2 reset_min_interval=60 monitor_object=dns_monitor max_connect=99999999 check_control=true persistent_connection=false
dns_process_test
An object of type ProcessTest (5). Tests for the presence of the process associated with the array DNS. The parameters for this test object are as follows:
ProcessTest dns_process_test process_id_script="/opt/SUNWscalr/scripts/dns.getpid" interval=2 retries=3 retry_interval=2 reset_min_interval=60
dns_test
An object of type AndTest (5). Combines the outputs from dns_connect_test, dns_udp_test, dns_process_test, and service_interface_test. Reports failure to the monitor object (dns_monitor) if any of these "child" test objects returns failure. The parameters for this test object are as follows:
AndTest dns_test test_objects=dns_connect_test,dns_process_test,dns_udp_test reset_script="/opt/SUNWscalr/scripts/dns.reset" reset_min_interval=30 monitor_object=dns_monitor
dns_udp_test
An object of type DNSTest (5). Tests the ability of the array DNS to resolve the name of a domain. By default the name localhost is used. The parameters for this test object are as follows:
DNSTest dns_udp_test domain_name=localhost port=53 check_addr=0.0.0.0 interval=10 timeout=5 retries=3 retry_interval=2 reset_min_interval=60 monitor_object=dns_monitor max_check=99999999 check_control=true
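A manual equivalent of this test is to ask the array DNS directly to resolve the name localhost, giving the DNS service address as the server argument to nslookup:

# nslookup localhost <DNS service address>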
service_interface_test
An object of type PingTest (5). Tests the integrity of the service interface used by a monitor object. The parameters for this test object are as follows:
PingTest service_interface_test ping_addr=129.144.91.255 min_replies=1 exclude_same_host=true interval=60 ping_timeout=5 retries=3 retry_interval=2
The load objects listed below are shipped with the Netra Proxy Cache product. Their output is displayed in the Host Status page that you invoke from the Array Status page.
There is a man page for each load object type, in /opt/SUNWscalr/man/man5. These man pages describe the parameters for each load object instance below.
cache_adjust_load
An object of type AdjustLoad (5). Adjusts the output from the cpu_load object to account for special conditions, such as startup and shutdown. The parameters for this load object are as follows:
AdjustLoad cache_adjust_load interval=10 adjust_load_file=/tmp/.proxyload.adjust max_adjust=100 load_object=cpu_load
cpu_load
An object of type CPULoad (5). Returns the CPU utilization on a host. The parameters for this test object are as follows:
CPULoad cpu_load interval=30 divide_by_cpus=false divide_by_cpu_clocks=false
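To see the raw CPU utilization that feeds this object, you can use the standard Solaris tools mpstat or vmstat on the host. For example, the following commands each print two reports 30 seconds apart:

# mpstat 30 2
# vmstat 30 2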
The relationship among monitor, test, and load objects is illustrated in Figure 19-2.
The significance of the relationships illustrated in Figure 19-2 is as follows:
For test objects, a failure of a lowest-level object (indicated by a not-OK status in the Host Status page) causes the parent object (cache_test and dns_test, both of type AndTest) to fail. The failure of such a parent object, in turn, causes the monitor object to return a failure status. This failure is also reflected in the Host Status page. When a service on a host fails, the monitor object removes the service address associated with that service from the array's DNS zone and moves the service address to the least loaded host in the array.
For load objects, the lowest-level object (cpu_load) returns its load and capacity figures to its parent (cache_adjust_load, of type AdjustLoad). Using our example, the cache_adjust_load object performs any adjustments required and returns "final" load and capacity figures to the monitor object, cache_monitor. The monitor object compares figures obtained from cache_adjust_load to high- and low-water marks that it maintains for the service and takes action if one of these thresholds is crossed. If a monitor object determines that a service is overloaded, it removes the service address for that service from the array's DNS zone. If the monitor object determines that a formerly overloaded service is now in its normal range, it reintroduces the service address for that service into the DNS zone.