Sun StorEdge QFS and Sun StorEdge SAM-FS 4.1 Release Notes

The Sun StorEdge QFS and Sun StorEdge SAM-FS 4.1 releases incorporate design changes, feature changes, and function enhancements. These releases also include fixes to the software. System administrators and programmers who are familiar with these software products will see changes that can affect daily operations and automated scripts written to coexist with this software. For these reasons, Sun Microsystems recommends that you study these release notes before upgrading to the Sun StorEdge QFS or Sun StorEdge SAM-FS 4.1 releases.

If you are installing this product's base release and its software patches, Sun Microsystems recommends that you study these release notes and the patch README files that are distributed with the software patches. The patch README files contain information that supplements the information in this document.

You can obtain a copy of the Sun StorEdge QFS and Sun StorEdge SAM-FS 4.1 software through Sun Microsystems or through your authorized service provider.


Features in this Release

The following sections describe the new features in this release.

New Device Support

This release added support for the following new devices:

TapeAlerts

TapeAlert support was added for direct-attached libraries and tape drives. For more information, see the tapealert(1M) man page.

ACSLS Support Enhancements

The Sun StorEdge SAM-FS 4.1 release added support for ACSLS 6.1, ACSLS 6.1.1, and the Sun Solaris™ operating system (OS) versions of ACSLS 7.0 and 7.1. As of this release, the Sun StorEdge SAM-FS software supports one-step export of media from ACSLS-attached libraries. For more information, see the export(1M) man page. The cartridge access port (CAP) must be in manual mode to support this feature.

Large Device Support (Extensible Firmware Interface (EFI) Labels)

This release added support in the Sun StorEdge QFS and Sun StorEdge SAM-FS software for SCSI devices and Solaris Volume Manager volumes larger than 1 terabyte. The file system must be built using 4.1 software. It is not possible to grow an existing file system onto large devices.

Continuous Archiving

This release completed the implementation of continuous archiving by replacing the sam-arfind file system scanning mechanism with a file system change detection mechanism. Each file system has a sam-arfind daemon that executes a system call to request work. As files are created or modified, the file system notifies sam-arfind. The sam-arfind daemon determines the archive requirements for the file as defined in the archiver command file, archiver.cmd(4).

If the file is to be archived, the directory containing the file and the time for the archive action are recorded in a ScanList to be acted upon later. The earliest time for archive action is also kept. When this time is reached, the following occur:

The ScanList is a file. It is mapped and kept in the archiver data directory for the file system. You can see the ScanList in the showqueue(1M) command's output.

Continuous archiving yields noticeable performance improvements for file systems containing large numbers of files, for example those with greater than 1,000,000 files. These file systems require long times to scan, sometimes several days. In most situations that involve large numbers of files, most of the files never need archive activity; the files were archived in the past and never change. Continuous archiving avoids scanning directories and the .inodes file, so these large file systems pose no burden on the archiver.

In the event of a system crash or an unexpected stoppage of sam-arfind processing, the software initiates a full directory scan as a background activity.

The samu(1M) utility's arrestart command causes all archiver work-in-progress to be discarded and initiates a full file system scan.

Continuous archiving is enabled by default. If you do not want to use continuous archiving, specify the examine=scan directive in the archiver command file, archiver.cmd(4).
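For example, a site that prefers the pre-4.1 scanning behavior could add the following global directive to archiver.cmd (the directive is described in this section; placing global directives before any file system directives is the usual convention):

```
# archiver.cmd global directive: disable continuous archiving
# and return to periodic file system scans
examine = scan
```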

Controlling how the Archiver Examines a File System

This release provides file system scanning alternatives. You can use the examine=method directive in the archiver.cmd file to specify the file system examination mode, as follows:

The samu(1M) utility's arscan fsname[.directory | ..inodes] [delay] command causes the archiver to scan a file system. If this command is specified with no options, the system scans the file system recursively from the root. Specifying the .inodes option causes an inode scan. The directory option specifies a particular directory to be scanned. The optional delay argument delays the scan by delay seconds.

Initiating File Archiving

As the system identifies files to be archived, it creates a list of files known as an archive request. The system schedules the archive request for archival at the end of a file system scan. The following archive set parameters have been added to better control the archiving workload and to ensure timely archival of files:

If more than one of -startage, -startcount, and -startsize are set, the first condition encountered initiates the archival operation. If none are set, the archival operation starts at the end of a file system scan. For more information, see the archiver.cmd(4) man page.
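As a sketch, the start conditions might be combined in the params section of archiver.cmd as follows (the archive set name and the specific time, count, and size values are hypothetical):

```
params
# begin archiving copy 1 of archive set "myset" when the oldest
# file is 4 hours old, or 500 files are waiting, or 10 gigabytes
# of data have accumulated, whichever condition is met first
myset.1 -startage 4h -startcount 500 -startsize 10G
endparams
```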

Archiver Extensions to allsets

The allsets feature for defining archive set parameters has been extended to allow you to specify a copy number. The format is allsets.copy. This new format allows you to define parameters for only a single archive set copy. Previously, any parameters assigned by allsets applied to all archive set copies.

You can use the allsets and allsets.copy parameters to define volume assignments. VSNs defined for allsets and allsets.copy are applied to any Archive Sets that do not have a VSN definition. For more information, see the archiver.cmd(4) man page.
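The following archiver.cmd fragment sketches both forms (the drive count, media type, and VSN pattern are hypothetical):

```
params
# applies to every copy of every archive set
allsets -sort path
# applies only to copy 1 of every archive set
allsets.1 -drives 2
endparams
vsns
# default volumes for copy 1 of any archive set
# that has no VSN definition of its own
allsets.1 li VSN0.*
endvsns
```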

Archiver Soft Restart

This release includes a soft restart capability for the archiver. When the archiver is stopped for any reason (for example, by a signal such as SIGINT, SIGTERM, or SIGKILL) and is subsequently started by the sam-fsd daemon, it recovers any work that was in progress.

The samu(1M) utility's arrestart command causes the archiver to discard all work in progress and restart all archiver daemons.

The samu(1M) utility's arrerun command causes the archiver to perform a soft restart. It restarts the archiver daemons and recovers all work in progress.

Provision for Choosing the Time Reference for Unarchiving File Copies

This release gives sites the choice of unarchiving by modify date instead of by access date. The new -unarchage time_ref archive set parameter allows sites to select which file time (access or modify) to use to determine when to unarchive a file copy.
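For example, the following params fragment (with a hypothetical archive set name) selects the modify time as the unarchive reference for copy 1:

```
params
# unarchive copies of files in "myset" copy 1 based on
# modify time rather than the default access time
myset.1 -unarchage modify
endparams
```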

For more information, see the archiver.cmd(4) man page.

Separate Scheduling for New, Versus Rearchive, Operations

This release separates archival of new and recycled copies. If an archive copy of a file is being rearchived, an internal archive set copy is used for scheduling the archive operation. It is called a rearchive set copy, and it uses the archive parameters from the actual archive set copy. If desired, you can set the archive set parameters and VSN associations by using the archive set copy name followed by the character R. The rearchive set copy allows users to differentiate new and rearchive operations. It also allows them to use different parameters and VSNs for each operation.
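A sketch of separate parameters and volumes for new and rearchive operations (the archive set name, media type, and VSN patterns are hypothetical):

```
params
# new archive operations for copy 1
myset.1 -drives 4
# rearchive operations for copy 1
myset.1R -drives 1
endparams
vsns
myset.1  li NEW.*
myset.1R li OLD.*
endvsns
```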

For more information, see the archiver.cmd(4) man page.

Limiting Archiver Drive Work

This release enables you to direct the archiver to balance multiple drive usage.

In the SAM-FS 3.5.0 release, you could limit the amount of work that a drive could do by setting the number of drives to use for an archive set to several times the number of physical drives. This had the effect of dividing the load into smaller chunks.

In the Sun SAM-FS 4.0 release, the archiver strictly scheduled only the drives available.

In the Sun StorEdge SAM-FS 4.1 release, to get the effect of the SAM-FS 3.5.0 feature, you can use the -drivemax archive set parameter to limit the amount of data that each arcopy operation writes. This allows you to balance the drive work.

For example, assume that there are 300 gigabytes of data to archive in the archive request. You can use the following parameters to specify that 5 drives be used and that 10 gigabytes of file data be archived at a time on each drive:
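In archiver.cmd, such a specification might look like the following (the archive set copy name is hypothetical):

```
params
# use 5 drives; each arcopy operation writes at most
# 10 gigabytes of file data
bigset.1 -drives 5 -drivemax 10G
endparams
```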

This is equivalent to specifying -drives 30 in SAM-FS 3.5.0.

For more information, see the archiver.cmd(4) man page.

Limiting Messages for Archiving Problems

This release retains archive requests for files that cannot be archived imminently. The archiver keeps archive requests for files that have persistent archiving problems in a wait queue. The archiver also uses the wait queue for archive requests that are awaiting restart of idled or stopped archiving.

The archiver maintains a record of which messages for the archive request have been sent to the SAM log file and to the archiver notify script. Such messages are sent only once for each unschedulable archive request.

The messages written to the archive request show the waiting condition. You can use the showqueue(1M) command to view them. You can also view the wait conditions in the samu(1M) utility's sam-archiverd display.

The archiver examines the wait queue and reschedules archive requests in the following situations:

Sorting Files to be Archived

This release allows you to direct the archiver to sort files in reverse order. The -rsort archive set parameter performs a sort in the reverse order of the -sort parameter.

Command to Remove Archiver Queue Files (Archive Requests)

The samu(1M) utility's arrmarchreq fsname.[* | arname] command allows an operator to remove one or more archive requests. TABLE 1 shows the samu(1M) commands to use to remove archive requests.

TABLE 1 Commands for Removing Archive Requests

samu(1M) Command

Archive Request Removed

arrmarchreq fsname.*

All archive requests for file system fsname.

arrmarchreq fsname.arname

The archive request arname. For example, samfs1.1.145.

arrmarchreq fsname.asname.n.*

All archive requests for archive set asname.n.


Provision to Use File Access Time as Archive Set Specifier

This release allows you to direct the archiver to use file access time as the archive set specifier. You can use the -access age specifier on the archive set directive to include files whose access time is older than the specified age. This allows files that have not been accessed for a long time to be rearchived to cheaper media.
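For example, the following file system directives (with hypothetical file system, archive set, and path names) assign files not accessed for two years to their own archive set:

```
fs = samfs1
# files under the "old" directory (relative to the mount
# point) whose access time is older than 2 years
oldfiles old -access 2y
```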

Additions to the samu(1M) Operator Utility

The K display shows the kernel statistics accumulated by the Sun StorEdge SAM-FS software.

This release added an additional priority command to samu(1M). The priority pid newpri command sets the load priority for a volume in the preview queue.

The samu(1M) commands now include additional mount point specifications. The new commands are as follows:

Support for Devices up to 16 Terabytes

This release enables you to include disk devices with sizes of up to 16 terabytes in Sun StorEdge QFS and Sun StorEdge SAM-FS file systems. File systems created on these large devices cannot be used with earlier versions of Sun StorEdge QFS or Sun StorEdge SAM-FS software. This large device support is only available when running a 64-bit kernel.

New samfsdump(1M) Options

This release added the following samfsdump(1M) options:

New samfsrestore(1M) Option

This release added the -r option to the samfsrestore(1M) command. Using this option specifies that the software replace existing files for cases in which the existing files have an older modification time than the dumped files. If the existing files are newer, they are not restored.

New releaser.cmd Directive

The list_size directive allows you to increase the number of releaser candidates above the default number of 10000 for file systems containing small files.
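A minimal releaser.cmd sketch (the value shown is hypothetical):

```
# raise the number of release candidates above the
# default of 10000
list_size = 300000
```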

For more information, see releaser.cmd(4).

New sfind(1) Option

The partial_on test yields true if the file has the partial release attribute set and the partially retained portion of the file is online.

Single Port Multiplexing (SPM)

This release added infrastructure to the Sun StorEdge QFS and Sun StorEdge SAM-FS software so that each product uses only one listener port on a host for all of its daemons; previously, each daemon used its own port. The list of daemons using SPM is:

The samu(1M) P display shows which services are available for connection. The samsock entries required in /etc/services for the 4.0 release are no longer needed for the 4.1 release. If you do not anticipate falling back to the 4.0 release, you can remove the entries.

Stager Daemon Log File Enhancements

This release standardizes the date and time stamps of daemon log files and adds the year to the time stamp in the stager daemon's log file. The new format is yyyy/mm/dd hh:mm:ss. This change also added the staged file's copy number, user ID, group ID, requestor's user ID, and Equipment Ordinal of the drive upon which the file was staged to the log file.

This release adds a start event record to the stager daemon's log file. A customer can request that the stager daemon collect staging event information and write a log file; this feature allows a start event to be recorded in that log. In the stager.cmd file, the customer can use the following directive to specify which staging activities are to be logged:

logfile = filename [ event ]

For event, specify start, finish, cancel, error, or all. The default is finish, cancel, and error.

These events are logged in the first column of the log entry as S (start), F (finish), C (cancel), or E (error). This change also adds the Equipment Ordinal of the drive upon which the file was staged to the log file.
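For example, the following stager.cmd directive (with a hypothetical log file path) logs all staging events, including start events:

```
# log start, finish, cancel, and error staging events
logfile = /var/opt/SUNWsamfs/log/stager all
```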

Disk Archiving Enhancements

The disk archiving capability can now write multiple files in tar(1) format to a single disk archive file.

This release addresses the disk archiving performance problems that had been observed when transferring very large files across a wide area network. You can specify the following configuration parameters in the /etc/opt/SUNWsamfs/rft.cmd file to accommodate large files:

For more information about these parameters, see rft.cmd(4).

In the archiver log file, the system now appends the archive tar(1) file path to the disk volume name for disk archive entries.

Total Quotas

Sun StorEdge SAM-FS file systems now implement the capability to keep and enforce both total and online block quotas.

Because metadata is not released, the values for files online and total limits are the same. The new defaults for the quota(1M) and samquota(1M) commands show the total values and limits as well as the online values and limits. For sparse files, the total block count reflects the number of blocks used if the file is online; the total block count reflects the actual size of the file if the file is offline.

SAM-Remote Daemon Trace Files

This release implements daemon trace files for sam-serverd and sam-clientd. You can control the daemon trace files by putting directives in the trace file section of the defaults.conf(4) configuration file. The samset(1M) debug flag, remote, has been removed.

Catalog Field to Display Volume Information

You can use the chmed(1M) command's -I option to add a field to the catalog that displays information for a volume. This field can contain up to 128 characters. You can use this field, for example, to display information such as the location of exported media or archive sets on a volume.
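A sketch of setting this field with the chmed(1M) command (the media type and VSN shown are hypothetical):

```
# record the location of an exported volume in the catalog
chmed -I "vault 2, shelf 14" li.VSN001
```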

New Mount Options

This release adds the following new mount options:

For more information, see the mount_samfs(1M) man page.

Standard Network Management Protocol (SNMP) Trap Support

You can configure the Sun StorEdge QFS and Sun StorEdge SAM-FS software to notify you when potential problems occur in its environment by using SNMP traps.

This feature allows you to monitor a Sun StorEdge SAM-FS system remotely from a network management console. Supported management consoles include the following:

Active fault determination and enhanced diagnostic support of Sun StorEdge SAM-FS systems is achieved with an asynchronous notification. You can enable and disable this feature through the alerts=on|off directive in the defaults.conf(4) file.
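The directive described above might appear in defaults.conf as follows:

```
# enable SNMP trap notification
alerts = on
```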

SAM-QFS Manager 1.0 Release

The SAM-QFS Manager 1.0 is a web-based graphical user interface tool for configuring and monitoring a Sun StorEdge QFS or Sun StorEdge SAM-FS environment. The software consists of two components:

SAM-QFS Manager Installation

The Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide includes instructions for installing the SAM-QFS Manager. You can install and configure SAM-QFS Manager along with the file systems, or you can install it later. If you install it later, use the instructions in the installation guide.

The SAM-QFS Manager software package has hardware and software requirements beyond that of the Sun StorEdge QFS and Sun StorEdge SAM-FS software. Refer to the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide for information about these requirements.

Security

For security reasons, the management console ends your https session after 15 minutes. You have to log in again to resume.

You must secure the management station and the Sun StorEdge QFS and Sun StorEdge SAM-FS servers that are managed. Place them inside the firewall.

Using the SAM-QFS Manager in Existing Sun StorEdge QFS or Sun StorEdge SAM-FS Environments

If you install the SAM-QFS Manager in an environment that already includes Sun StorEdge QFS or Sun StorEdge SAM-FS file systems, the software reads the existing configuration information and presents it to you for viewing and modification. For more information about using SAM-QFS Manager in existing configurations, see Using SAM-QFS Manager With pre-4.1 Configuration Files.

Propagating Configuration File Changes

If you change any configuration files manually, you are responsible for the correctness of the files you change. Such configuration files include /etc/opt/SUNWsamfs/mcf, /etc/opt/SUNWsamfs/archiver.cmd, and others. You can use the /opt/SUNWsamfs/sbin/archiver -lv command to check the correctness of the archiver.cmd file. For information about propagating configuration file changes, see Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide.

Limitations

The SAM-QFS Manager 1.0 does not interoperate with all Sun StorEdge QFS and Sun StorEdge SAM-FS features. Depending on your equipment, configuration, and environment, it can simplify Sun StorEdge QFS and Sun StorEdge SAM-FS configuration and control.

Specifically, the SAM-QFS Manager does not support the following features:

The Sun StorEdge QFS and Sun StorEdge SAM-FS software includes a complete command line interface that allows you to configure and monitor the features that the SAM-QFS Manager does not support.

Single-user Administration

You can administer any Sun StorEdge QFS or Sun StorEdge SAM-FS server through a single instance of the Sun Web Console. You can configure any server to be administered by a single user name with SAMadmin privileges at any time.

The SAM-QFS Manager 1.0 release does not support multiple instances of the SAMadmin role managing the same server, nor multiple instances of Sun Web Console managing the same server. This includes opening another browser window to manipulate the Sun StorEdge QFS and Sun StorEdge SAM-FS configuration. Site administrators are responsible for complying with this policy.

The administrator should log on as samadmin and choose SAMadmin on the Role selection page. All other users should log on as samuser. If you want to create additional administrator or user roles, see the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.

Sun Cluster 3.1 4/04 Interoperability With Sun StorEdge QFS Software

Sun Cluster 3.1 4/04 supports failover of standalone (nonshared) Sun StorEdge QFS file systems via the HAStoragePlus resource type implementation with the Sun StorEdge QFS 4.1 release.

The ability to configure Sun StorEdge QFS software as a highly available local file system is supported on Solaris 9 OS platforms only. This capability is not supported on Solaris 8 OS platforms.

/etc/vfstab Requirements

To configure Sun StorEdge QFS for failover with HAStoragePlus, configure the /etc/vfstab files on all applicable cluster nodes in the usual way with the following information:

The following is an example /etc/vfstab entry:

qfs1 - /local/qfs1 samfs 3 no sync_meta=1

In this example, observe the following:

/etc/opt/SUNWsamfs/mcf Requirements

The Sun StorEdge QFS Family Set name specified in /etc/vfstab must be a valid Sun StorEdge QFS Family Set that is present in the mcf file.

The mcf file entry can contain the following types of Sun Cluster device names:



Note - Use of did devices of the form /dev/did/* is not supported.



Example 1. CODE EXAMPLE 1 is an example mcf file entry for use with HAStoragePlus that uses raw devices.

CODE EXAMPLE 1 mcf File that Specifies Raw Devices
qfs1                   1   ma    qfs1     on
/dev/global/dsk/d4s0  11   mm    qfs1     
/dev/global/dsk/d5s0  12   mr    qfs1     
/dev/global/dsk/d6s0  13   mr    qfs1     
/dev/global/dsk/d7s0  14   mr    qfs1     

Example 2. CODE EXAMPLE 2 is an example mcf file entry for use with HAStoragePlus that uses Solaris Volume Manager metadevices. The example assumes that the Solaris Volume Manager metaset in use is named red.

CODE EXAMPLE 2 mcf File that Specifies Solaris Volume Manager Devices
qfs1                   1   ma    qfs1     on
/dev/md/red/dsk/d0s0  11   mm    qfs1     
/dev/md/red/dsk/d1s0  12   mr    qfs1     

Example 3. CODE EXAMPLE 3 is an example mcf file entry for use with HAStoragePlus that uses VxVM devices.

CODE EXAMPLE 3 mcf File that Specifies VxVM Devices
qfs1                    1   ma    qfs1     on
/dev/vx/rdsk/oradg/m1  11   mm    qfs1     
/dev/vx/rdsk/oradg/m2  12   mr    qfs1     

The mcf file entry must be identical on all cluster nodes that are possible masters of the Sun StorEdge QFS file system.

Configuring the HAStoragePlus Resource

When using HAStoragePlus to configure a Sun StorEdge QFS file system for failover, you must set the FilesystemCheckCommand property of HAStoragePlus to /bin/true. All other resource properties for HAStoragePlus apply as specified in SUNW.HAStoragePlus(5).

The following example command shows how to use the scrgadm(1M) command to configure an HAStoragePlus resource:

# scrgadm -a -g qfs-rg -j ha-qfs -t SUNW.HAStoragePlus \
        -x FilesystemMountPoints=/local/qfs1 \
        -x FilesystemCheckCommand=/bin/true

Notes


Product Changes

The following sections describe the product changes in this release.

Packaging Changes

The SUNWsamfs and SUNWqfs packages have been split into root and usr packages. SUNWsamfs is replaced by SUNWsamfsr (root) and SUNWsamfsu (usr). SUNWqfs is replaced by SUNWqfsr (root) and SUNWqfsu (usr).

You must install the root package before the usr package, or pkgadd(1M) reports an error.

Packages are delivered in directory format rather than datastream format. For information about installing the software, see the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.

Script Changes

This release changed several command names and example script installation directories, as described in this section.

The 4.1 release renamed some of the .sh commands, eliminating the .sh suffix from commands in the /opt/SUNWsamfs/sbin directory. TABLE 2 shows the script commands that were affected.

TABLE 2 New Script Names, Including Path

Pre 4.1 Name

4.1 Name

/opt/SUNWsamfs/sbin/info.sh

/opt/SUNWsamfs/sbin/samexplorer

/opt/SUNWsamfs/sbin/set_admin.sh

/opt/SUNWsamfs/sbin/set_admin

/opt/SUNWsamfs/sbin/trace_rotate.sh

/opt/SUNWsamfs/sbin/trace_rotate


The following 4.1 man pages describe the commands in TABLE 2:

The target installation directory changed for some of the example scripts found in the /opt/SUNWsamfs/examples directory. The software automatically copies these scripts from the old location to the new location at installation time. It is your responsibility to update these scripts with any changes that might have been made to the default versions; the system displays a message at installation time if they need to be updated. If you run the scripts manually, you might want to add /etc/opt/SUNWsamfs/scripts to your PATH environment variable, although running them manually is rarely necessary. TABLE 3 shows the directories into which the default versions of the site-customizable scripts are installed automatically when the package or patch is installed.

TABLE 3 Script Directories for Scripts that the System Copies Automatically

Source Directory

Install Directory

/opt/SUNWsamfs/examples/archiver.sh

/etc/opt/SUNWsamfs/scripts/archiver.sh

/opt/SUNWsamfs/examples/recycler.sh

/etc/opt/SUNWsamfs/scripts/recycler.sh

/opt/SUNWsamfs/examples/save_core.sh

/etc/opt/SUNWsamfs/scripts/save_core.sh

/opt/SUNWsamfs/examples/ssi.sh

/etc/opt/SUNWsamfs/scripts/ssi.sh


TABLE 4 shows other scripts that you might use. The installation process does not copy these into /etc/opt/SUNWsamfs/scripts automatically. You can copy the default versions of these optional site-customizable scripts after the package or patch is installed.

TABLE 4 Script Directories for Scripts that Sites Must Copy Manually

Source Directory

Install Directory

/opt/SUNWsamfs/examples/dev_down.sh

/etc/opt/SUNWsamfs/scripts/dev_down.sh

/opt/SUNWsamfs/examples/load_notify.sh

/etc/opt/SUNWsamfs/scripts/load_notify.sh

/opt/SUNWsamfs/examples/log_rotate.sh

/etc/opt/SUNWsamfs/scripts/log_rotate.sh


Archiver Notification Script Changes

The archiver now maintains its own record of the messages pertaining to conditions that prevent archiving, such as no space and no volumes. The directories /var/opt/SUNWsamfs/archiver/NoSpace and /var/opt/SUNWsamfs/archiver/NoVsns are no longer used.

Archiver Command File Reading

As of this release, the system no longer automatically rereads a changed archiver.cmd(4) file. Previously, the system reread the archiver command file within 60 seconds after it was changed. When you reconfigure Sun StorEdge SAM-FS software, it is often necessary to change the archiver command file, and at some point during the process the configuration files are not synchronized. If you changed the archiver.cmd file before the others, the archiver automatically reread the changed file and reported errors in it.

Now, the archiver only rereads the archiver command file when it receives a SIGHUP. This occurs automatically when the sam-fsd(1M) daemon receives a SIGHUP. For information about propagating changes to configuration files, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide.

Archiving Metadata

This release changes the way metadata is archived, as follows:

You can suppress metadata archiving by using the archivemeta=[on|off] directive in the archiver command file. For more information, see the archiver.cmd(4) man page.

Changes to the samu(1M) Operator Utility

The samu(1M) help display has been reorganized to make the command and media help screens smaller. The stager commands have been put in a separate help screen.

Removed the chmed(1M) Command's -i Option

This change removes the ability to use the chmed(1M) -i command to set or clear the catalog slot in use (i) flag. This option has not been useful since the library catalog was redesigned in release 3.5.0.

Renamed the sam-ftpd Daemon

The sam-ftpd daemon was included in releases prior to 4.1. This daemon has been renamed sam-rftd (remote file transfer daemon). The configuration file is /etc/opt/SUNWsamfs/rft.cmd. The /etc/opt/SUNWsamfs/ftp.cmd file is copied to /etc/opt/SUNWsamfs/rft.cmd automatically when upgrading from a 4.0 system.

When upgrading to 4.1, you need to change your daemon tracing commands in /etc/opt/SUNWsamfs/defaults.conf from sam-ftpd to sam-rftd.
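For example, a 4.0 tracing entry such as sam-ftpd = on in the trace section of defaults.conf becomes the following (the trace/endtrace section shown is a sketch):

```
trace
sam-rftd = on
endtrace
```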

Quota Mount Option Enabled by Default

The quota mount option is enabled by default when you mount a file system if any quota files (.quota_a, .quota_g, or .quota_u) are present. Depending on your configuration, this has the following effects:

Changes to archive_audit(1M) Exit Codes

This release added new exit codes to archive_audit(1M). In previous releases, the software sometimes returned success for nonfatal errors; it now returns a nonzero result code. User scripts that use archive_audit(1M) might need to be modified.

For more information about error codes, see the archive_audit(1M) man page.

Graphical User Interface Tools Removed

The libmgr(1M), samtool(1M), robottool(1M), devicetool(1M), and previewtool(1M) graphical user interfaces have been removed. The SAM-QFS Manager replaces the functionality formerly found in these tools.

Using SAM-QFS Manager With pre-4.1 Configuration Files

The following are some of the product changes that might affect you if you use SAM-QFS Manager:

For more information about SAM-QFS Manager known issues, see SAM-QFS Manager Issues.

Changes to /etc/name_to_sysnum

Some of the Solaris patches may inadvertently remove the samsys line from the /etc/name_to_sysnum file when they are installed. One indication of the problem is the appearance of the following message in the /var/adm/messages file:

WARNING: system call missing from bind file

Beginning with Solaris 9 patch 112233-11, the Solaris OS uses system call number 181 to get information about resource utilization (SYS_rusagesys). The default Sun StorEdge QFS and Sun StorEdge SAM-FS configuration was changed in 4.1 to use system call number 182. You might have a different system call number configured if you upgraded from a previous release. In order for the Sun StorEdge QFS and Sun StorEdge SAM-FS software to be operational after installing this or a subsequent Solaris 9 patch, change the default Sun StorEdge QFS and Sun StorEdge SAM-FS entry in /etc/name_to_sysnum from samsys 181 to an alternate entry, such as samsys 182 or samsys 183, based on the following guidelines:

For more information about how to correct this situation, see the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.
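For example, with the 4.1 default system call number of 182, the entry in /etc/name_to_sysnum reads as follows:

```
samsys          182
```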

Falling Back to a Previous Release

If you create a new file system with the Sun StorEdge QFS 4.1 or Sun StorEdge SAM-FS 4.1 software and you want to revert to a 4.0 version, install Sun StorEdge QFS or Sun StorEdge SAM-FS 4.0 patch -06 or later. If you do not install Sun StorEdge QFS or Sun StorEdge SAM-FS 4.0 patch -06 or later, you might damage the new file system.

The Sun QFS and Sun SAM-FS 3.5.0 and earlier systems do not support file systems created by the Sun StorEdge QFS or Sun StorEdge SAM-FS 4.1 releases.

Specific Information About Upgrading from the 4.0 to 4.1 Releases

When upgrading from a 4.0 release to 4.1, pkgadd(1M) checks for the presence of the /etc/opt/SUNWsamfs/LICENSE.4.0 file and the absence of the /etc/opt/SUNWsamfs/LICENSE.4.1 file. If both conditions are true, the system performs the following copies:

Conversely, just before a 4.1 package is removed, you can move these files back to their pre-4.1 locations by running the /opt/SUNWsamfs/sbin/backto40 script.

If you fall back from 4.1 to 4.0, you must fall back to a 4.0.62 (patch -06 or later) system. This is necessary in order for catalog conversion to occur.

You can avoid the conversion from 4.0 to 4.1 by creating the /etc/opt/SUNWsamfs/LICENSE.4.1 file or by moving the /etc/opt/SUNWsamfs/LICENSE.4.0 file. The conversion from 4.1 does not occur unless the /opt/SUNWsamfs/sbin/backto40 script is run manually.

The Sun StorEdge QFS 4.1 shared file system uses a different version number for the shared hosts file (4 versus 3). Rolling forward is automatic, but rolling back is not. If Sun StorEdge QFS shared file systems exist, run the /opt/SUNWsamfs/sbin/backto40 script in Sun StorEdge QFS standalone environments or in Sun SAM-QFS environments. This script saves the .hosts file for each shared file system so that it can be converted to a version 3 .hosts file prior to running 4.0. Run the script only on the server designated as the metadata server for the Sun StorEdge QFS shared file system; it does not need to be run on the clients. After you have executed the /opt/SUNWsamfs/sbin/backto40 script, you can remove the 4.1 packages and install the 4.0 packages. After you install the 4.0 package, issue a samd(1M) config command, and then run /opt/SUNWsamfs/sbin/hosts41to40shared on the metadata server for the Sun StorEdge QFS shared file system. This script converts the .hosts file for each Sun StorEdge QFS shared file system from version 4 to version 3. After this has completed, issue the samd(1M) config command again to verify that the conversion completed, and continue with normal system startup under 4.0.

Specific Information About Upgrading from the 3.5.0 to 4.1 Releases

When upgrading from a 3.5.0 release to 4.1, pkgadd(1M) checks for the presence of the /etc/opt/LSCsamfs/mcf file and the absence of the /etc/opt/SUNWsamfs/mcf file. If both conditions are true, the system performs the following copies:

Conversely, just before a 4.1 package is removed, you can use the /opt/SUNWsamfs/sbin/backto350 script to move the files in /etc/opt/SUNWsamfs and /var/opt/SUNWsamfs back to /etc/opt/LSCsamfs and /var/opt/LSCsamfs.

If you fall back from 4.1 to 3.5.0, you must fall back to a 3.5.0.81 or later system. This is necessary in order for catalog conversion to occur.

You can avoid the conversion to 4.1 from 3.5.0 by moving the /etc/opt/LSCsamfs/mcf file. The conversion from 4.1 does not occur unless the /opt/SUNWsamfs/sbin/backto350 script is run manually.

The staging code in 3.5.0 was replaced by a new stager daemon in release 4.0. If you had stage logging directives in /etc/opt/LSCsamfs/samlogd.cmd, add the equivalent directives to /etc/opt/SUNWsamfs/stager.cmd to have the same logging functionality under the Sun StorEdge SAM-FS 4.1 release. For example, if your /etc/opt/LSCsamfs/samlogd.cmd file contained the following:

stage=/var/opt/SUNWsamfs/log/stager start

You should have the following in /etc/opt/SUNWsamfs/stager.cmd:

logfile = /var/opt/SUNWsamfs/log/stager

For more information, see the stager.cmd(4) man page.
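The rewrite above can also be scripted. A minimal sketch, assuming the old file contains a single stage directive of the form shown above (the scratch paths here are for illustration only):

```shell
# Sketch: derive a stager.cmd logfile directive from an old-style
# samlogd.cmd stage directive. Scratch paths are used for illustration.
printf 'stage=/var/opt/SUNWsamfs/log/stager start\n' > /tmp/samlogd.cmd

# Extract the log path that follows "stage=" (up to the first space).
logpath=$(sed -n 's/^stage=\([^ ]*\).*/\1/p' /tmp/samlogd.cmd)

printf 'logfile = %s\n' "$logpath" > /tmp/stager.cmd
cat /tmp/stager.cmd    # logfile = /var/opt/SUNWsamfs/log/stager
```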

Directives Removed From archiver.cmd

The queuedir= and datadir= directives are no longer supported in the archiver.cmd file. You must remove these directives manually. If these directives are not removed, the archiver generates an error message and does not run.

The archiver writes its queue files to the following directory:

/var/opt/SUNWsamfs/archiver/Queues

The archiver data directory is as follows:

/var/opt/SUNWsamfs/archiver
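Removing the obsolete directives can be scripted. A sketch on a sample file follows; the surrounding directives and their values are illustrative, not taken from a real configuration:

```shell
# Sketch: strip the unsupported queuedir= and datadir= directives from a
# sample archiver.cmd. Surrounding directives are illustrative only.
cat > /tmp/archiver.cmd <<'EOF'
logfile = /var/opt/SUNWsamfs/archiver.log
queuedir = /var/tmp/archiver/queues
datadir = /var/tmp/archiver/data
interval = 5m
EOF

# Delete any line that sets queuedir or datadir.
sed -e '/^[[:space:]]*queuedir[[:space:]]*=/d' \
    -e '/^[[:space:]]*datadir[[:space:]]*=/d' \
    /tmp/archiver.cmd > /tmp/archiver.cmd.new

cat /tmp/archiver.cmd.new
```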

Sun SAM-Remote Compatibility

Sun SAM-Remote 4.1 is incompatible with SAM-Remote 3.3.1 and SAM-Remote 3.3.0 because of the catalog redesign in the SAM-FS 3.5.0 release. The same version of Sun SAM-Remote must be installed on all Sun SAM-Remote clients and servers.

Ampex Support Dropped

The Sun StorEdge SAM-FS 4.1 release removes support for Ampex 410, 810, and 914 tape libraries and for Ampex tapes and the Ampex DST driver.

Licensing Changes

For the 4.1 release, the license file is /etc/opt/SUNWsamfs/LICENSE.4.1. Licenses generated for Sun StorEdge QFS and Sun StorEdge SAM-FS 4.0 software work with 4.1.

A new license scheme was implemented in the 4.0 releases. Both the 4.0 and 4.1 releases follow the new license scheme, which is as follows:



Note - If you made changes to your site's configuration during the upgrade procedure, you might need a new license in order for the configuration changes to work correctly.



To get a temporary license, go to the following web site:

http://www.lsci.com/licensestart.sun

A Sun StorEdge SAM-FS license is divided into two logical sections: system and media. The system license covers the host, the expiration date, and the Sun StorEdge SAM-FS features. Each media license covers an automated library type and media type pair and is tied to the system license by the hostid; the number of media slots licensed for that media type and automated library type is also recorded here.

A Sun StorEdge QFS license is a system license that licenses the host, the expiration date, and the Sun StorEdge QFS features.

If the license is missing, is corrupted, has an incorrect hostid, or has expired, it is regarded as expired or corrupt, and the system no longer allows file system mounts, media mounts, or staging.

In a Sun StorEdge SAM-FS environment, if the number of slots in use exceeds the licensed number, the license is regarded as suspended. The system no longer allows media mounts, labeling of new media, staging, or importing of media. Relabeling of old media is still allowed while the license is suspended. Because exporting is also still allowed in the suspended condition, exporting enough media to bring the number of slots in use back into conformance with the license clears the suspended condition.
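The slot accounting described above can be illustrated with a simple check. This models only the stated rule; it is not an actual Sun StorEdge SAM-FS interface, and the numbers are made up:

```shell
# Illustration of the suspension rule only; not a SAM-FS interface.
licensed_slots=100   # slots allowed by the media license (made-up value)
slots_in_use=103     # current occupancy (made-up value)

state=ok
if [ "$slots_in_use" -gt "$licensed_slots" ]; then
    state=suspended  # media mounts, labeling, staging, imports blocked
fi
echo "$state"        # suspended

# Exporting three cartridges brings usage back within the license,
# which clears the suspended condition.
slots_in_use=$((slots_in_use - 3))
if [ "$slots_in_use" -le "$licensed_slots" ]; then
    state=ok
fi
echo "$state"        # ok
```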


System Requirements

The following sections describe some of the system requirements that must be met in order to use the Sun StorEdge QFS and Sun StorEdge SAM-FS 4.1 releases.



Note - For more information about system requirements, see the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.



Operating System Support

The Sun StorEdge QFS and Sun StorEdge SAM-FS 4.1 releases support the following Sun Solaris(TM) operating system (OS) levels:

Required Solaris Patches

You can obtain the patches mentioned in this section from Sun. The following patches are recommended, depending on your environment:

Refer to the Sun Microsystems web page for a list of recommended patches:

http://sunsolve.Sun.COM

Sun SAN-QFS File System Compatibility

Verify that you have Tivoli SANergy File Sharing software at release level 2.2.3 if you plan to use the Sun SAN-QFS file system. For more information about the SAN-QFS file system, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide.


Known Issues and Bugs

The following sections contain information about known issues and software bugs.

Known Issues

SAM-QFS Manager Issues

The following are the known issues surrounding SAM-QFS Manager use:

If you create a file system using SAM-QFS Manager and an archiver.cmd(4) file already exists on the server, the SAM-QFS Manager automatically creates a VSN association to an available or valid media type for the default archive copy.

If you create a file system using SAM-QFS Manager and an archiver.cmd(4) file does not exist on the server, the VSN association is not explicitly created and the default archiving behavior is retained. In this situation, you can create an archive policy from the Archive Management tab and apply the policy to the file system. This action creates the archiver.cmd file and creates the necessary VSN association for the file system's default archive copy.

To change these default copy definitions, you can edit the archiver.cmd file manually at a later time.

Connect to <hostname.domain>:6789 failed (connection refused)
The connection was refused when attempting to contact <hostname.domain>:6789

The system generates these messages under the following conditions:

To remedy this, become superuser on the host that was supposed to run the web server (the host shown as hostname.domain in the message) and issue the following command:

# /usr/sbin/smcwebserver restart

Sun StorEdge QFS Shared File System Issues

The following are known issues that pertain to Sun StorEdge QFS shared file system use:

Bugs

TABLE 5 shows the bugs that are known to exist in the Sun StorEdge QFS and Sun StorEdge SAM-FS 4.1 software.

TABLE 5 Known Bugs in Sun StorEdge QFS and Sun StorEdge SAM-FS 4.1 Software

Bug Number   Description

4953265      Free inode warning when creating/removing directories
4958281      Large filenames cause EDNLC to be frequently purged
5007267      SAM GUI does not report meaningful error message for licensing problems
5022851      samfsck(1M) reports fewer blocks without -F than are reclaimed with -F
5026130      Direct-attached L700 library downed if single drive powered off
5029547      samfsck(1M) -F command on a file system with 120 million files core dumps
5032918      ACL count too small error after creating file in directory with default ACL
5044512      PANIC: kernel heap corruption detected when running ACL tests
5051275      SAM-QFS read-only and multireader capabilities are broken in 4.1.1



Release Documentation

The Sun StorEdge QFS and Sun StorEdge SAM-FS 4.1 documentation is available on the web at the following URLs:

TABLE 6 shows the complete release 4.1 documentation set for these products.

TABLE 6 Sun StorEdge QFS and Sun StorEdge SAM-FS 4.1 Documentation

Title                                                                                    Part Number

Sun SAM-Remote Administration Guide                                                      816-2094-11
Sun QFS, Sun SAM-FS, and Sun SAM-QFS Disaster Recovery Guide                             816-2540-10
Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide                817-4091-10
Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide   817-4092-10
Sun StorEdge SAM-FS Storage and Archive Management Guide                                 817-4093-10
Sun StorEdge QFS and Sun StorEdge SAM-FS 4.1 Release Notes                               817-4094-10


You can obtain hard copy manuals from the following website:

http://www.iuniverse.com



Note - The README file will not be distributed in future major releases of the Sun StorEdge QFS and Sun StorEdge SAM-FS software.