Release Notes
Due to the size of the OS distributions that are used in the AllStart module, the distribution files are not captured as part of the backup process on the Sun Control Station.
For the AllStart module, the backup process captures the defined metadata for Files, Distributions, Profiles, Payloads, Clients, Services and Advanced settings. You will have to re-load your distributions and individual files, and then re-create the payloads.
When you re-load the OS distributions, you need to apply new names to the distributions; the metadata for the old distributions still appears in the UI, but it can no longer be used to perform builds. You cannot delete the metadata for these old distributions until you have re-directed the clients to the new payloads.
To restore the data to your AllStart module:
Note - For backup and restore procedures, refer to Chapter 2 in the Sun Control Station Administration Manual, 817-3603.
1. Restore the .scs backup file to your control station.
2. Re-load the distribution(s) that you had loaded before. You must give the re-loaded distributions new names.
3. Edit the payload(s) so that they use the new distribution(s).
4. Delete the old distributions from the AllStart Distributions table.
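Before you delete the old distributions, you can confirm which payloads you have edited by listing the payloads from the command line with the as_payload.pl script, which is described in the custom-files procedure later in these notes:
/scs/sbin/as_payload.pl -l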
If the Sun Control Station agent on a managed host stops functioning, the cron job that should restart the agent does not work.
1. Determine whether the agent is functioning. Run the following command:
ps -ef | grep agent
2. Look for a line similar to the following line:
root 13367 1 0 Mar26 ? 00:00:01 /usr/mgmt/libexec/agent-Linux-i386 server -port 27000
If you do not see such a line, then the agent is not functioning.
3. Restart the agent manually by running the following command as root:
/etc/init.d/init.agent start
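Until the broken cron job is fixed, you could run your own watchdog from the root crontab. The following sketch is not part of the product; it assumes that the agent process path always contains "libexec/agent-", as in the example output in Step 2.
#!/bin/sh
# Watchdog sketch: restart the SCS agent if its process is not running.
if ! ps -ef | grep -v grep | grep -q 'libexec/agent-'; then
    /etc/init.d/init.agent start
fi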
If you try to add a host that does not have a functioning agent, you receive a message that the user name and password are incorrect.
1. Verify whether you have an agent on the managed host by running the command:
rpm -qa | grep agent
a. If there is an agent on the managed host, this command returns a result similar to the following line:
base-mgmt-agent-1.1-22
To determine whether the agent on the host is functioning, refer to the procedure detailed in Broken Agent Cron Job on Managed Host.
b. If there is no agent on the managed host, this command returns nothing as a result.
In this case, you need to obtain a copy of the agent and install it on the managed host.
2. You can obtain a copy of the agent from the control station by running the command:
wget http://<IP_address_or_host_name_of_control_station>/pkgs/base-mgmt-agent-1.1-22.i386.rpm
3. Install the agent on the managed host by running the command:
rpm -ihv base-mgmt-agent-1.1-22.i386.rpm
4. Start the agent by running the command:
/etc/init.d/init.agent start
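For convenience, Steps 2 through 4 can be combined into one short script to run as root on the managed host. The agent RPM name is the one shown above and may differ in later releases:
#!/bin/sh
# Fetch, install and start the SCS agent on a managed host.
wget http://<IP_address_or_host_name_of_control_station>/pkgs/base-mgmt-agent-1.1-22.i386.rpm
rpm -ihv base-mgmt-agent-1.1-22.i386.rpm
/etc/init.d/init.agent start
# Confirm that the agent process is now running.
ps -ef | grep agent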
When uploading an OS distribution from a set of CD-ROMs, allow the entire uploading operation to complete. Do not click the option "Put Task in Background" when the Task Progress dialog appears, and do not choose another task in the UI.
If you do either, the system does not allow you to continue with the next CD-ROM in the set, and the complete distribution will not be uploaded.
If, by accident, you do put the task in the background, you will need to delete this distribution from the AllStart Distributions table and start this procedure again.
If there is not sufficient space available on the hard disk drive, the task of enabling a client will fail, but the UI will not inform you that the task has failed.
Instead, the payload installation on that client will fail, indicating that the ksconfig file could not be retrieved or that the NFS server did not respond.
To correct this, do the following:
1. Free up space on your hard disk drive (see the disk-usage example after this list).
2. In the control station UI, add a new payload that is configured identically to the failed payload.
3. Switch the clients to use the new payload.
4. (optional) Delete the old payload.
5. (optional) Rename the new payload.
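For Step 1, standard disk-usage tools will show where the space has gone. This example assumes that the AllStart data lives under /scs/share, the same tree used by the custom-files procedure later in these notes:
df -h
du -sh /scs/share/allstart/*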
When building a client, you can follow the progress in the UI. However, the build status for an AutoDiscovery client does not display in the table.
The AllStart module uses the AutoYaST feature to build Sun JDS clients.
The following items are known issues for the AutoYaST feature.
1. When building a client using a serial port or in text mode, AutoYaST does not link the X server for the video card in /usr/X11R6/bin/.
2. If AutoYaST cannot detect the attached monitor, it ignores the X monitor configurations.
3. Building a client attached to a KVM may cause AutoYaST to be unable to detect the monitor (this depends on the KVM and the monitor).
4. The keyboard section of /etc/X11/XF86Config is hard-coded to English. To reconfigure the keyboard setting, run the command: /sbin/yast2
5. AutoYaST cannot run Perl scripts in any phase. The file name to redirect STDERR is not generated correctly; this causes all Perl scripts to fail.
The Sun Control Station can manage a managed host over eth0 or eth1.
If you have a managed host that you now want to add as an AllStart client and build this client over a NIC other than eth0, you have to add the managed host manually. Refer to the procedure for adding a single client.
Currently, if you try to add a managed host as a client, AllStart defaults to rebuilding the client over eth0. In this case, when the client reboots as part of the build process, it cannot connect back to the control station and the build fails.
Within AllStart, you can upload one or more custom files to the control station and then add these files to a payload. A custom file can be a tarball, a zip file or another type of file.
If you upload a custom file, you can access it in the post- or pre-script section of the install. However, the installer only knows how to install RPM-type files.
Before writing a script to access the file, you need to know the payload number.
1. You can obtain a list of payload IDs by running the command:
/scs/sbin/as_payload.pl -l
In the list, locate the payload that you want and check the ID number at the start of the line.
2. You can access the payload files over HTTP with wget, as in the following example:
#!/bin/sh
# Fetch the custom file from the payload directory on the control station.
wget http://<IP_address_of_control_station>/allstart/web/<PAYLOAD_ID>/suse/custom/tarball.tgz
# Unpack the tarball and run the install script that it contains.
tar xzvf tarball.tgz
sh tarball/install
An alternate method is to create a directory manually on your Sun Control Station server, for example, /scs/share/allstart/web/files.
Put your files in this new directory. You can then access the files by running the command:
wget http://<IP_address_of_control_station>/allstart/web/files/<FILENAME>
In this way, your customization scripts can refer to the files without worrying about the directory path changing.
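For example, a pre- or post-install script could fetch and unpack a file from this fixed location. The file name myfiles.tgz is only an illustration:
#!/bin/sh
# Fetch a custom file from the fixed files directory on the control station.
wget http://<IP_address_of_control_station>/allstart/web/files/myfiles.tgz
tar xzvf myfiles.tgz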
To configure the Sun Control Station to act as a YaST Online Update (YOU) server, there are several steps to perform, both on the control station and on the Sun JDS client.
1. Configure a YOU server in the Software Management module.
a. Select Software Management > Remote Servers.
The Remote Software Servers table appears.
b. Click Add Server below the table.
The Add A Remote Software Server table appears.
c. Fill in the fields for the remote software server. The URL for the Sun JDS YOU server is:
http://jdsupdate.sun.com:8080/lpsauth-1.0/updates/
Note - You can find the user name and password for accessing this YOU server in the Sun Java Desktop System Support Entitlement Certificate in your Sun JDS media kit.
The Remote Software Servers table refreshes with the new server added. The servers are sorted by server name in ascending order.
2. Synchronize the Packages table in Software Management with the remote YOU server.
a. Select Software Management > Packages.
b. Click Refresh above the table.
The Task Progress dialog appears.
This operation downloads the files from the remote YOU server. There are currently two patches on http://jdsupdate.sun.com.
3. Publish one or more of the patches.
a. Select Software Management > Packages.
b. Select the file(s) in the list of available package files that you want to publish.
c. Click Publish at the bottom of the table.
The Task Progress dialog appears.
This operation makes the patches available at the following URL:
http://<scs_ipaddr or scs_hostname>/you/
Note - If you use the AutoInstall feature of YaST, all available patches will be installed. You cannot select individual patches to install.
1. Make the SCS the YOU update server by running the following command:
echo 'http://<scs_ipaddr or scs_hostname>/you/' > /etc/suseservers
2. Disable the YaST online-update server list updates. To do so, edit the file /etc/sysconfig/onlineupdate and change the line:
YAST2_LOADFTPSERVER="yes"
to:
YAST2_LOADFTPSERVER="no"
You can access and install patches from the SCS YOU server in two ways.
The first way is to enter the following two commands in a terminal window:
yast online_update .auto.get
yast online_update .auto.install
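If you want patches fetched and installed on a schedule, these two commands could be run from cron. This is only a sketch; as the note above explains, the AutoInstall mechanism installs all available patches:
# Example root crontab entry: fetch and install all available patches at 03:00.
0 3 * * * yast online_update .auto.get && yast online_update .auto.install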
The second way is to access the Sun Control Station YOU server through the YaST interface in a terminal window.
1. Launch yast or yast2 in a terminal window:
yast
yast2
2. Select Software > Online Update.
3. In the Choice of Update Mode box, choose Manual Update.
4. Select Next in the bottom-right corner and press Return.
5. In the Authorization popup window, do not enter a user name or password. Simply select the login button and press Return.
A list of available patches is presented.
6. Select the patch that you want to install.
7. To proceed with the installation, select OK and press Return.
Note - If there are no available patches on the YOU server, you might see an error message when logging in from the Authorization popup window.
Within the Software Management module (Packages > Display Options), you can specify the type(s) of package file to display.
To display package files for certain products only, move the item "All" from the Products Displayed scrolling window to the Products Not Displayed scrolling window, along with each individual product whose package files you do not want to display.
When adding a remote software server, the system does not validate the URL path to the server. If you add a URL path with back slashes instead of forward slashes, the system throws a Java exception.
Ensure that you enter the path correctly with forward slashes. For example:
http://<fully_qualified_domain_name>/packages/
On the Packages screen, you can select a package file(s) and click the Info button at the bottom of the table to view the information about that package file.
If you select a large number of package files for which to view the information, the system might not display the information for all of the package files that you selected.
The list of Needed Software for a given managed host is not updated automatically. You need to perform this task manually.
To perform this task, in the Needed Software table, select the managed host(s) and click the Update button in the bottom-right corner.
The control station performs dependency checking on a package file(s) when you select a package file in the Packages table and install it on a managed host.
It also performs dependency checking when you select a managed host in the Needed Software table and update the list of needed software for that host.
There is a difference in the way that the control station performs dependency checking on these two tables.
This difference might be noticed when one version of a package file is installed on a managed host and a different version of the same package file resides in the repository on the control station.
As an example, for Sun JDS Release 2, the versions of the RPM expect differ between QS5d (the beta version of the software) and QS7 (a post-beta build):
QS5d - expect-5.34-277
QS7 - expect-5.34-288
ITvpntool requires both ITgcfg and a version of expect that is equal to or higher than (≥) version 5.3.
Note - For this example, we are assuming that the RPM ITgcfg is not installed on the managed host.
From the Packages table, we want to install the RPM ITvpntool on a managed host running the QS5d software. In this case, with the RPM expect-5.34-277 installed on the host as part of the QS5d software, the dependency "expect ≥ 5.3" is satisfied, while the dependency ITgcfg is not. As a result, only one additional RPM (ITgcfg) is selected to be installed with the RPM ITvpntool. The same result is seen if we install the RPM ITvpntool on a managed host running QS7.
Alternatively, let us suppose that, from the Needed Software table, we select this same managed host and perform the update operation to see a list of needed software for that host.
In the case of QS5d, a newer version of expect (expect-5.34-288) is available for installation on the managed host. Therefore, the list of package files available for the managed host includes this newer version of expect, the RPM ITgcfg and the RPM ITvpntool. In contrast, if you perform this operation on a managed host running QS7, the list does not include the RPM expect-5.34-288 since this version is already installed on the host.
If, in the Packages table, you select the RPM expect-5.34-288 to install on a host running QS5d, the newer version is displayed as "installable". On the other hand, if you select it to install on a host running QS7, the control station returns a message indicating that this RPM is already installed.
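You can inspect these dependency relationships directly on a managed host with standard RPM queries, using the package names from this example (the RPM file name is a placeholder):
# List the dependencies declared by a package file before installing it.
rpm -qpR ITvpntool-<version>.i386.rpm
# Show which version of expect is currently installed on the host.
rpm -q expect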
In the Needed Software table, when you perform the update operation on a managed host, the control station creates a list of available package files for the host. It also performs the required dependency checks on these files. The generated list displays the package files in the correct installation order, with the first item to install at the top of the list.
If you install all of the package files at once, the installation will succeed because the control station has already ordered the files according to the dependency requirements.
If you install the package files individually, starting from the top of the list and working down through every file in the list, the installation will again succeed.
However, if you install an individual package file out of the displayed order, the installation might not succeed, because one or more of the package files higher up the list might be a dependency for the selected package file.
If you want to install only certain package files, take note of their names and install those package files from the Packages table. In this case, the control station will perform dependency checking on the package files and automatically add additional package files if they are needed.
A host can be managed by more than one Sun Control Station. The Health-Monitoring settings (for example, the CPU alarm thresholds) can be changed from any of the control stations. When the settings are changed on one control station, the new values are propagated to all of the managed hosts.
In this case, the values from the most recent settings changes overwrite the earlier values on the managed host; however, the settings that appear in the UIs of the other control stations do not update to reflect the most recent settings changes.
To resolve this issue, if more than one control station manages a given host(s), ensure that the Health-Monitoring settings on each of these control stations are set to the same values.
The Alive Polling interval can be set to a minimum of one minute; the Status Polling interval can be set to a minimum of one hour.
We recommend that you set the Alive Polling Interval at a minimum of five (5) minutes. If a Sun Control Station is managing many hosts, you should set a longer interval. When the control station encounters a "non-alive" host, the time-out period for Alive Polling is one (1) minute.
We recommend that you set the Status Polling Interval at a minimum of two (2) hours. If a Sun Control Station is managing many hosts, you should set a longer interval. When the control station encounters an unreachable host (including SCS agent failures), the time-out period for Status Polling is ten (10) minutes.
Frequent Alive Polling and Status Polling can also generate very large files and potentially fill up the /var directory.
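If you suspect that polling data is filling /var, you can check with standard disk-usage tools before shortening the intervals; the exact file locations depend on your installation:
df -h /var
du -sh /var/log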
The default interval for Alive Polling is set at five (5) minutes.
The default interval for Status Polling is set at two (2) hours.
You can change these default intervals. For more information, refer to the Scheduler feature in Chapter 3 of the PDF Administrator Manual.
On a managed host using the Health Monitoring module, when eth0 is active, the event-generator script always passes back to the control station the eth0 IP address along with the other information.
If this managed host was imported into the control-station framework using an IP address different from the one associated with eth0, the Health Monitoring status table may not display the correct status for this managed host.
To correct this, when you are viewing the detailed information tables for a managed host, you can click Update Now above the tables. You can also wait for Alive Polling and Status Polling tasks to retrieve the correct status.
If possible, you can also re-import the managed host using the IP address associated with eth0.
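To confirm which IP address is associated with eth0 on the managed host, check the interface directly:
/sbin/ifconfig eth0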
For a selected host, you can view Detailed Information tables for the Performance information.
If a hard disk drive is larger than 32GB, the information on the Filesystem Usage table will only show a combined maximum value of 32767 MB for the In Use (MB) and Free (MB) columns.
Certain control characters are not valid in any text field. They might or might not throw an exception.
To reset your login password for Sun Control Station 2.1:
1. In your preferred editor, open the following file.
/var/tomcat4/webapps/sdui/WEB-INF/database.xml
2. Change the line starting with password= to the following:
password="0DPiKuNIrrVmD8IUCuw1hQxNqZc="
This resets your password to "admin".
3. Restart Tomcat by running the following command:
/etc/rc.d/init.d/tomcat4 restart
In your browser, you should now be able to log in to the Sun Control Station.
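The stored value appears to be the base64-encoded SHA-1 digest of the password. If that assumption holds, you could generate the database.xml value for a password other than "admin" with OpenSSL:
# Hypothetical example: compute the stored value for a new password.
echo -n 'mynewpassword' | openssl dgst -sha1 -binary | openssl base64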
Depending on your Web browser and encoding preferences, double-byte or multi-byte characters might be used to represent non-ASCII characters.
You cannot use any non-ASCII characters in file names or in directory paths. The Sun Control Station cannot correctly process these characters in file names or directory paths.
You can enter non-ASCII characters into text fields in the browser-based user interface, but they might not display correctly.
Sun Control Station 2.1 can manage the following clients.
Note - [S8] and [S9] represent clients running the Solaris operating system (OS) 8.0 and Solaris OS 9.0, respectively.
The SPARC-based clients supported by Sun Control Station 2.1 include:
The x86-based clients supported by Sun Control Station 2.1 include:
Copyright © 2004, Sun Microsystems, Inc. All rights reserved.