1 Planning for HSC/VTCS Configuration

Table 1-1 is designed to help you plan and verify completion of your system's configuration tasks. As the notes indicate, depending on your situation (new or upgrade install, adding hardware or not, and so forth), you may not have to do anything for a specific task except check it off.

Table 1-1 HSC/VTCS Configuration Checklist

Task Required or Optional? Notes Check to Verify Completion

"Determining HSC/VTCS Configuration Values"

Required

Plan configuration values because you cannot take defaults here.

 

"Planning VTCS Operating Policies"

Optional

You have a choice; that is, you can take the defaults and change them later. If you take that route, it is still a good idea to take a high-level pass through this chapter now and read it thoroughly when you have the time.

 

"Defining A Security System User ID for HSC and VTCS"

"Configuring MVS Device Numbers and Esoterics"

"Setting the MVS Missing Interrupt Handler (MIH) Value"

Required

None of these tasks is difficult, but they are all critical. For example, depending on the default settings of your security system, VSM may not be able to mount and write to MVCs until you have defined a security system user ID for HSC and tape volume security profiles for the MVCs.

 

"Creating the HSC CDS"

Required

Define the ACSs, LSMs, and drives so ELS can use them.

 

"Defining Volumes to ELS"

Required

ELS 7.3 lets you define all volumes: Library Volumes, MVCs, VTVs, and cleaning cartridges. You do this one time only with the POOLPARM/VOLPARM statements and the SET VOLPARM utility.

 

"Adding Definitions for ACF/VTAM Communications"

Required

If you set COMMPath METHod to VTAM, you must define the appropriate values to VTAM.

 

"Defining the SYS1.PARMLIB Member SMFPRMxx"

Optional

This task is optional but highly recommended, because you need the SMF information to know how your system is performing.

 

"Creating the HSC PARMLIB Member"

Required

The HSC PARMLIB is where critical items such as the COMMPATH and FEATURES statements reside.

 

"Defining Command Authority"

Required

This is required to ensure that the correct personnel and applications have access to the ELS resources they need.

 

"Updating HSM"

Required

This is required if you are an HSM user and are routing HSM jobs to VSM.

 

"Creating and Cataloging the HSC Startup Procedure"

Required

An HSC startup proc is required.

 

"Reconfiguring a TapePlex"

Optional

How much of reconfiguring a TapePlex you actually have to do depends on whether you are adding any hardware or changing values in the PARMLIB data sets.

 

"Building a Simple CONFIG Deck"

Required if you license VTCS

A step procedure to create a basic CONFIG deck.

 

"Updating the Tape Management System for VSM"

Required if you license VTCS

This is required to give your TMS access to VTVs and keep the TMS from stepping on MVCs.

 

"Defining MVC Pool Volser Authority"

Required if you license VTCS

Update your security package to give the HSC user ID update authority to the MVCs.

 

"Storing VTCS Locks in a Coupling Facility (Optional)"

Optional if you license VTCS

Recommended in some instances for performance reasons.

 

"Configuring VTCS for 512 VTD Support"

Required if you license VTCS

A step procedure to configure 512 VTDs.

 

"Starting and Stopping HSC"

Required

NA

 

How VSM Measures Sizes and Capacities

VTCS uses the binary standard rather than the decimal standard in displaying and calculating sizes and capacities for VTVs and MVCs. Thus:

  • 1 kilobyte (KB) = 1024 bytes

  • 1 megabyte (MB) = 1000 kilobytes or 1000*1024 bytes

  • 1 gigabyte (GB) = 1000 megabytes or 1000*1000*1024 bytes

VTCS uses the decimal standard in displaying and calculating sizes and capacities for VTSSs. Thus:

  • 1 kilobyte (KB) = 1000 bytes

  • 1 megabyte (MB) = 1000 kilobytes or 1000*1000 bytes

  • 1 gigabyte (GB) = 1000 megabytes or 1000*1000*1000 bytes
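
Applying these definitions, the same nominal capacity therefore corresponds to slightly different byte counts depending on what is being measured. For example:

4 GB of VTV/MVC capacity = 4 * 1000 * 1000 * 1024 bytes = 4,096,000,000 bytes
4 GB of VTSS capacity = 4 * 1000 * 1000 * 1000 bytes = 4,000,000,000 bytes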

Determining HSC/VTCS Configuration Values

The following sections describe how to determine configuration values for the HSC/VTCS system. Many of the following sections provide configuration planning information for HSC followed by planning information for VTCS. For example, to define RTDs, you must first define them as library-attached transports, then as RTDs. If you want transports to function as "native Nearline only," you define them as library-attached transports only, not as RTDs. The following sections are ordered as "both HSC and VTCS" followed by "VTCS only."

Note:

If you are configuring devices for VTCS, unless otherwise noted, in each of the following sections, the values you determine must match wherever you use them. For example, the unit addresses described in "Planning for Library-Attached Transports" must match the following:
  • The device addresses on the HSC SLIDRIVS macro.

  • The MVS device addresses you assign using the HCD facility (if you will share these transports with MVS).

Control Data Sets

Control data sets (CDSs) contain:

  • inventory information on all volumes (real, virtual, and vaulted)

  • configuration information, including how many ACSs, LSMs, tape transports, and so forth

  • information about library hardware resource ownership across multiple processors

The control data sets are:

Primary CDS (required)

The primary CDS resides on a DASD. The primary CDS must be accessible by all CPUs configured with the library. All configuration information about the TapePlex is stored in this data set, and it is continuously updated to reflect changes in the library.

Secondary CDS (optional, but strongly recommended)

This data set is an exact duplicate of the primary CDS and is also continuously updated to reflect changes in the library. In general, if the primary CDS becomes corrupted, the HSC continues to operate by automatically switching to the secondary CDS. The secondary CDS then becomes the new primary CDS, but the data set name remains unchanged.

Standby CDS (optional, but strongly recommended)

This data set is a formatted CDS containing only the first CDS record and cannot be used as is to continue normal operations. In general, if the primary CDS becomes corrupted, and a switch to the secondary occurs, the contents of the secondary CDS are copied to the standby. The standby CDS then becomes the new secondary data set, but the data set name remains unchanged.

Control Data Set Placement

For performance and recovery considerations, each copy of the CDS should reside on a separate disk volume. Separate control units and channel paths are also recommended to provide adequate recovery conditions.

If possible, place each control data set on its own dedicated volume. Do not place HSC control data sets on any volumes containing HSC journal data sets.

The following restrictions also apply:

  • VSM does not support copies of the CDS at multiple sites (for example, Primary CDS at one site and Secondary at another). A link failure would allow the two sites to run independently, and VSM cannot enforce separation of all resources. This prevents reconciliation of the two divergent CDSs as can be accomplished in a pure non-VSM environment.

  • Similarly, copies of the entire CDS at two sites where a link failure may occur is not recommended. For more information, refer to the ELS Legacy Interfaces Reference.

  • Copies of the entire CDS at two sites that are not linked is not allowed.

  • The client-server approach of using HSC on only one host and SMC on all other hosts is recommended for TapePlexes spanning multiple geographic locations.

Control Data Set Sharing

The following sections detail sharing requirements and recommendations.

Sharing Requirements

Sharing CDS devices between multiple host systems and processors requires that the CDS devices be defined with the SHARED=YES attribute using the IBM Hardware Configuration Definition (HCD) facility.

Note:

Unit Control Blocks (UCBs) for HSC CDS volumes can be defined either as LOCANY=NO or LOCANY=YES using the IBM Hardware Configuration Definition (HCD) facility.

If the control data sets are required by more than one system, then the primary, secondary, and standby control data sets must be capable of being shared in read/write mode by all host systems that share the devices.

Sharing Recommendations

Several HSC functions use the MVS RESERVE/RELEASE service to serialize CDS access between multiple hosts. If your MVS system is configured to issue device reserves in support of the RESERVE/RELEASE service, I/O contention to the device can be noticed from other systems. This is because the device appears busy to any system not holding the reserve, causing I/O operations from those systems to be queued. If they are queued long enough, IOS start pending messages begin to appear on those systems.

The dynamic nature of the MVS operating environment makes it impossible to predict a maximum length of time that the HSC, or any other authorized program, will hold a RESERVE. The vast majority of HSC functions hold a RESERVE for a short interval, generally only a few I/O operations to the CDS. However, some functions require a longer interval to perform the work. For example, the BACKUP utility can take several minutes to finish; the RESERVE must be held for the duration to guarantee the integrity of the backup.

Given the above, consider these recommendations:

  • Do not place copies of the HSC control data set on the same volume(s) as other data sets that generate high I/O activity to that volume during normal processing. This applies to all control data set copies including the secondary (shadow) and the standby. If possible, place each control data set on its own dedicated volume (no other data sets on the volume).

  • Do not place other datasets that require RESERVE/RELEASE functions on the same volume as the HSC control data sets. You must ensure programs that interact with each other do not access multiple CDS volumes concurrently, or a deadly embrace scenario can occur.

    For example, TMS catalog and audit data sets, JES Checkpoint, RACF databases, and DFSMShsm journals are known to cause contention and lockout problems when on the same volume as HSC CDSs. A backup copy of a data set that is used only occasionally for restore purposes normally does not cause a significant problem. However, if response problems or lockouts do occur, an examination should be made of all ENQ and I/O activity related to that volume.

  • The default or recommended missing interrupt handler time interval for most disk devices should be high enough to avoid problems with the HSC reserves. However, if I/O contention due to HSC reserves begins to cause significant problems, consider increasing the missing interrupt handler time interval. Specific recommendations are detailed in the following sections.

  • Do not use the I/O Timing facility (the MIH IOTIMING parameter in the IECIOSxx member of the system PARMLIB on MVS systems) for devices containing the HSC control data sets.

  • Do not use the RECOVERY statement (in the IECIOSxx member) for devices containing the HSC control data sets.

  • Do not specify IOTHSWAP=YES (in the IECIOSxx member) for devices containing the HSC control data sets.

Additional Recommendations When HSC Runs on an MVS Guest Under VM

In order to prevent errors caused by contention lockout situations with other hosts when an MVS system is running as a guest under VM, it is strongly recommended that the VM missing interrupt interval for each disk device containing the primary, secondary, and standby control data sets be set to a value greater (at least 1 minute) than the value of the MVS missing interrupt interval.

Note:

If you are using the SLUADMIN utility to back up and restore the CDS, set the MVS missing interrupt interval value higher than the time it usually takes to complete the backup or restore.

The VM missing interrupt interval is specified on the SET MITIME command or on the INIT_MITIME statement. The MVS missing interrupt interval is specified on the TIME parameter of the MIH statement. Refer to the appropriate IBM publications for more information about setting these values.
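
As a hedged illustration only (the device number range and the interval shown are assumptions, not recommendations), an MVS MIH statement in the IECIOSxx member takes the following general form; set the TIME value according to the guidance above:

MIH TIME=15:00,DEV=(0A80-0A83)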

Additional Recommendations When the CDS is Shared Between MVS and VM

When there are no MVS guest systems running under VM, but only "native" MVS systems that share the CDS with VM systems, the MVS and VM missing interrupt intervals act independently of each other. However, for long-term ease of management, you may want to follow the same recommendations as when HSC runs on an MVS guest under VM, in case you ever change a native MVS system to run as an MVS guest under VM:

  • It is strongly recommended that the VM missing interrupt interval for each disk device containing the primary, secondary, and standby control data sets be set to a value greater (at least 1 minute) than the value of the MVS missing interrupt interval.

  • If you are using the SLUADMIN utility to back up and restore the CDS, set the MVS missing interrupt interval value higher than the time it usually takes to complete the backup or restore.

The VM missing interrupt interval is specified on the SET MITIME command or on the INIT_MITIME statement. The MVS missing interrupt interval is specified on the TIME parameter of the MIH statement. Refer to the appropriate IBM publications for more information about setting these values.

Other considerations that apply to sharing a CDS between MVS and VM systems include, but are not limited to, the following:

  • DFSMSdss DEFRAG running on MVS can cause lockout conditions on VM. Also, because DEFRAG can move data sets, control data set integrity can be compromised. Avoid running a DEFRAG on CDS volumes while the HSC is in operation on any shared system.

  • DFSMShsm processing of CDS volumes has also been known to cause lockout conditions. If possible, avoid DFSMShsm COMPRESS, COPY, DEFRAG, DUMP, PRINT, RELEASE, and RESTORE operations on CDS volumes during peak periods of HSC operation.

Make sure the HSC command prefix does not conflict with any of the VM facilities, such as the CP line editing symbol.

Limiting CDS Access Privileges

Use your system security software (such as RACF, ACF-2, or TopSecret) to give WRITE access to the CDS primary, secondary and standby data sets for the user ID associated with the HSC started task. You must also submit SLUADMIN utility jobs with this user ID because some SLUADMIN utility functions update the CDS.

Caution:

DO NOT ASSIGN CDS WRITE ACCESS TO ANY OTHER USER.

ELS utilities such as SLUCONDB and ExPR require READ access to the CDS data sets, as LCM does. Assign CDS READ access only to user IDs that require this access.
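
As an illustrative sketch only (the data set name mask and user IDs are hypothetical, and ACF-2 and TopSecret provide equivalent controls), RACF commands of the following general form could be used to implement these access rules:

ADDSD  'HSC.CDS.**' UACC(NONE)
PERMIT 'HSC.CDS.**' ID(HSCTASK) ACCESS(UPDATE)
PERMIT 'HSC.CDS.**' ID(ELSREAD) ACCESS(READ)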

Serializing CDSs

Resource serialization in the IBM z/OS operating system is generally accomplished using either the IBM Global Resource Serialization (GRS) facility or a third-party software product such as Unicenter CA-MIM/MII.

A resource is identified by two names:

  • An 8-character QNAME

  • A 1-255 character RNAME

The relationship between the two names is hierarchical; different RNAMEs can be used to describe specific different resources that are generically related to a single QNAME.

The HSC and HSC utilities use two principal QNAMES:

  • The default QNAME of STKALSQN can be changed and is used by the HSC to serialize access to the HSC CDS and by LCM to serialize LSM-related processing.

    Note:

    If you have changed STKALSQN to a different QNAME, just substitute your name for the STKALSQN references in this documentation.
  • The StorageTek-defined QNAME of STKENQNM cannot be changed and is used to serialize HSC utilities, such as the SET utility.

Set Utility Serialization

One of the facilities provided by the SET utility is to change the QNAME that was defined during LIBGEN (the STKALSQN default or customer-defined QNAME) and stored in the CDS. The SET utility and HSC also use the STKENQNM QNAME to maintain serialization while the customer-defined QNAME is being changed.

The SET utility issues two RESERVEs against the CDS prior to an update:

  • a RESERVE with the StorageTek-defined QNAME "STKENQNM"

  • a RESERVE using the existing customer-defined QNAME (or the default value of "STKALSQN")

When the HSC is started on any host, it initially serializes on the CDS using the StorageTek-defined QNAME. This prevents the HSC from being started while the customer-defined QNAME is potentially in the process of being changed. If the serialization is successful (no SET utility in progress), the customer-defined QNAME is read from the CDS and is used for future serialization requests.

GRS Serialization

In GRS environments only, resources can be serialized across multiple systems by global (SCOPE=SYSTEMS) requests or serialized on individual systems only by local (SCOPE=SYSTEM) requests. GRS provides three ways (Resource Name Lists) to change how a request is to be processed:

  • The Systems Exclusion Resource Name List (referred to as the Systems Exclusion List) allows global requests to be converted to local requests.

  • The Systems Inclusion Resource Name List allows local requests to be converted to global requests.

  • The Resource Conversion Resource Name List (referred to as the Reserve Conversion List) allows RESERVE requests to be suppressed.

These three lists are built from RNLDEF statements specified in the GRSRNLxx member of the system PARMLIB in GRS environments.

Note that when a RESERVE is issued, there is also an enqueue associated with the RESERVE. If a matching entry is found in the Systems Exclusion List, the RESERVE is issued and the associated enqueue is issued as a local enqueue request. If a matching entry is not found in the Systems Exclusion List, the RESERVE is issued and the associated enqueue is issued as a global enqueue request.

Caution:

If no matching entry is found in the Systems Exclusion List and the Reserve Conversion List, double serialization occurs. Avoid this at all costs. The IBM z/OS MVS Planning: Global Resource Serialization publication shows a diagram of this process.

Multiple HSC TapePlex Considerations

If you have multiple HSC TapePlexes (each TapePlex must use a different set of primary, secondary, and standby CDSs) within the same GRS or MIM/MII serialization complex, make sure you change the default HSC QNAME STKALSQN to a different value for each HSC TapePlex. This ensures that serialization of resources for one TapePlex does not delay serialization of resources for the other TapePlexes. The default name can be changed either through the MAJNAME parameter on the SLILIBRY macro in LIBGEN or through the SET MAJNAME command of the SLUADMIN utility.

Note:

With multiple TapePlexes, remember to replicate the STKALSQN examples shown in this documentation as many times as necessary and change STKALSQN to the different values you chose for each TapePlex.
Example

For two HSC TapePlexes, you could change the default HSC QNAME of STKALSQN to HSCPLEX1 for one TapePlex and HSCPLEX2 for the other. This allows the two TapePlexes to operate simultaneously without interfering with each other.

As a specific case, LCM management runs issue global enqueues for the HSC QNAME and an RNAME of PROCESSLSMaa:ll, where aa is the ACSid and ll is the LSMid. In a configuration of two HSC TapePlexes, there is an LSM 00 in an ACS 00 for each TapePlex, which results in the same RNAME of PROCESSLSM00:00. Two simultaneous LCM management runs for LSM 00 in ACS 00 will therefore conflict unless the HSC QNAMEs are different for the two TapePlexes.
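
As a minimal sketch (the QNAME value is taken from the example above; see the ELS Command, Control Statement, and Utility Reference for the complete SLUADMIN JCL and syntax), the SET control statement used to change the QNAME for a TapePlex has the following general form:

SET MAJNAME(HSCPLEX1)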

HSC CDS Performance and Sharing Tips

Follow these tips for optimal CDS performance and sharing:

  • Place each copy of the HSC CDS (primary, secondary, and standby) on its own dedicated volume with no other data sets on those volumes. This is especially important when you do not convert HSC RESERVEs. A RESERVE issued by one system locks out access to all data on the volume for all other systems, so if you put a catalog or the TMC on the same volume as an HSC CDS, you will create a performance problem. Isolating the HSC CDSs to dedicated volumes can also simplify recovery after an outage.

  • Make sure all HSC CDS devices are genned as SHARED in your I/O device definitions. To do this, assign a value of YES to the SHARED feature for the device on the HCD View Device Parameter/Feature Definition panel and then activate the Input/Output Definition File (IODF) containing the change.

  • Make sure you use the TCP communications method for all HSC host-to-host communication. This is specified by the METHOD parameter on the COMMPath command and control statement.

  • If you use both HSC and VTCS software, and all hosts are running on release 6.1 or later, consider converting the CDS to the "F" level format (or later), which reduces CDS I/O during HSC/VTCS initialization and periodically thereafter when VTCS refreshes its cache.

  • If you use VTCS in a sysplex configuration and you believe you have a CDS performance problem because of VTCS, contact StorageTek Software Support to have the problem analyzed before implementing VTCS lock data in a Coupling Facility structure.

Unicenter CA-MIM/MII Considerations

Start the HSC only after the CA-MII address space has fully initialized.

Follow Computer Associates' recommendations to start the CA-MII address space using their MIMASC utility, make sure the CA-MII address space is in the WLM SYSTEM service class so that it runs with a dispatching priority of FF, and add a PPT entry in the SYS1.PARMLIB(SCHEDxx) member on all systems in the MIM/MII serialization complex for the MIMDRBGN program. Refer to the CA documentation for other tuning recommendations.

Whenever you need to add a QNAME statement for the HSC, you must specify SCOPE=SYSTEMS and not SCOPE=RESERVES on the QNAME statement, since other NCS products (LCM and LibraryStation) issue global enqueues that must be propagated to all systems. Specifying SCOPE=RESERVES prevents these enqueues from being propagated and will cause problems.

GRS Considerations

For GRS ring configurations, the ACCELSYS and RESMIL parameters (specified on GRSDEF statements in the GRSCNFxx member of the system PARMLIB) can influence ring performance. Refer to IBM z/OS MVS Planning: Global Resource Serialization for information about how best to set these values.

When HSC RESERVEs Must Remain as RESERVEs (All Environments)

Caution:

Do not convert HSC RESERVEs to global enqueues in the following configurations and environments, or CDS data integrity will be compromised and CDS damage will occur.
  • No serialization facility or product (e.g., GRS or CA-MIM/MII) is used, and the HSC CDSs are on devices that are shared by multiple systems.

    • Even unconverted RESERVEs are insufficient to guarantee data integrity in this case. You should consider why you are trying to share data without a serialization product.

  • A serialization facility or product is used, but the HSC CDSs are on devices shared by multiple serialization complexes (GRS or MIM/MII).

    • Even unconverted RESERVEs are insufficient to guarantee data integrity in this case. The IBM z/OS MVS Planning: Global Resource Serialization publication indicates that multiple serialization complexes cannot share resources.

    • Computer Associates cites this restriction in their paper Unicenter CA-MII Data Sharing for z/OS.

  • The environment consists of a GRS star configuration that attempts to share the HSC CDSs between a sysplex and a system that is not part of the sysplex.

    • IBM cites this restriction in z/OS MVS Planning: Global Resource Serialization.

  • The HSC CDSs reside on devices shared by z/OS and z/VM systems.

    • IBM cites this restriction in z/OS MVS Planning: Global Resource Serialization.

    • Computer Associates cites this restriction in their paper Unicenter CA-MII Data Sharing for z/OS.

When You May Want to Leave HSC RESERVEs as RESERVEs

You may not want to convert HSC RESERVEs to global enqueues in the following configurations and environments.

  • You use GRS in a ring configuration and converting HSC RESERVEs affects ring performance.

    • IBM cites this consideration in z/OS MVS Planning: Global Resource Serialization.

    • Performance can suffer due to propagation delays in GRS ring configurations. Deciding whether or not to convert RESERVEs is best determined by actual experience, but as an arbitrary rule, do not convert RESERVEs if there are more than three or four systems in your GRS ring.

  • You use GRS in a ring configuration and converting HSC RESERVEs affects HSC/VTCS performance.

    • Because of ring propagation delays for global enqueues, HSC and VTCS throughput may significantly degrade during periods of high HSC and/or VTCS activity.

  • You have a large virtual tape configuration. For example, if you have several million Virtual Tape Volumes (VTVs) defined and have not yet migrated the CDS to the "F" level format, you may experience slow HSC/VTCS initialization times and higher CDS I/O activity due to the need for VTCS to initialize and periodically refresh its cache of VTV information.

  • You use GDPS HyperSwap, which requires specifying a pattern RNL to convert all RESERVEs, but you do not want to convert the HSC RESERVEs. These competing goals can be accommodated in a GRS environment by performing the following tasks:

    • Ensure that no device defined to GDPS can have a RESERVE issued to it.

    • Place the HSC CDSs on devices that are outside the scope of GDPS and HyperSwap control (i.e., not defined to GDPS) and therefore are not HyperSwap eligible.

    • Specify the following RNLDEF statements in the GRSRNLxx member of the system PARMLIB on all systems:

    RNLDEF RNL(EXCL) TYPE(GENERIC) QNAME(STKALSQN)
    RNAME(hsc.primarycds.datasetname)
    RNLDEF RNL(EXCL) TYPE(GENERIC) QNAME(STKALSQN)
     RNAME(hsc.secondarycds.datasetname)
    RNLDEF RNL(EXCL) TYPE(GENERIC) QNAME(STKALSQN)
     RNAME(hsc.standbycds.datasetname)
    RNLDEF RNL(CON)  TYPE(GENERIC) QNAME(*)
    

The Reserve Conversion List is not searched if an entry is found in the Systems Exclusion List, thus GDPS is unaware of the RESERVEs processed by GRS. The net effect is that the GDPS HyperSwap requirement of a pattern RNL to convert all RESERVEs is satisfied, but the HSC RESERVEs are not converted.

When HSC RESERVEs Must be Converted to Global Enqueues

Convert HSC RESERVEs to global enqueues if the HSC CDSs are on devices that are defined to GDPS.

How to Leave HSC RESERVEs as RESERVEs in a GRS Environment

If you do not convert HSC RESERVEs, remove all RNLDEF RNL(CON) statements for the HSC from the GRSRNLxx member on all systems. Even though the Reserve Conversion List is not searched if an entry is found for the HSC RESERVE in the Systems Exclusion List, it is good practice to keep your GRSRNLxx definitions up to date.

More importantly, if there are no RNL(EXCL) or RNL(CON) statements at all for the HSC, both a SYSTEMS ENQ and a RESERVE are issued for the HSC CDS. This results in double serialization and reduced performance. Refer to IBM z/OS MVS Planning: Global Resource Serialization for a diagram showing how RESERVEs are processed by GRS.

To avoid double serialization, perform only one of the following tasks:

  • Do not convert HSC RESERVEs by adding the recommended RNL(EXCL) statements for the HSC.

  • Convert HSC RESERVEs by adding the recommended RNLDEF RNL(CON) statements for the HSC.

    To not convert HSC RESERVEs, include one RNL(EXCL) statement for the STKALSQN resource for each copy of the CDS (primary, secondary, and standby) and one statement for the STKENQNM resource as follows:

    RNLDEF RNL(EXCL) TYPE(GENERIC) QNAME(STKALSQN)
     RNAME(hsc.primarycds.datasetname)
    RNLDEF RNL(EXCL) TYPE(GENERIC) QNAME(STKALSQN)
     RNAME(hsc.secondarycds.datasetname)
    RNLDEF RNL(EXCL) TYPE(GENERIC) QNAME(STKALSQN)
     RNAME(hsc.standbycds.datasetname)
    RNLDEF RNL(EXCL) TYPE(GENERIC) QNAME(STKENQNM)
    
  • Do not specify TYPE(SPECIFIC) on the RNLDEF statements for the STKALSQN resource.

    The HSC code uses an RNAME 44 bytes long with blank fill on the right, so a TYPE(SPECIFIC) statement as shown will not be matched and will not produce the desired results.

  • Do not remove the RNAME parameter from the RNLDEF statements for the STKALSQN resource.

    The RNLDEF RNL(EXCL) statements tell GRS a RESERVE for the HSC CDS should be issued as a RESERVE and a global (SCOPE=SYSTEMS) enqueue issued for STKALSQN (including the enqueue associated with the RESERVE) should be changed to a local (SCOPE=SYSTEM) enqueue. The RNAME parameter limits these changes in scope to the CDS only.

    Using a generic QNAME statement without an RNAME parameter tells GRS to change all global enqueues for STKALSQN to local enqueues. Since other NCS products (LCM and LibraryStation) issue global enqueues with the expectation that they will be propagated to other systems and not reduced in scope, using a generic QNAME statement for STKALSQN without an RNAME parameter will cause problems. The RNAME parameter must be explicitly specified for each copy of the CDS as in the example above. The STKENQNM resource can continue to be defined using a generic QNAME only.

How to Leave HSC RESERVEs as RESERVEs in an MIM/MII Environment

For PROCESS=SELECT and PROCESS=ALLSYSTEMS environments:

  • Whenever you need to add a QNAME statement for the HSC, specify SCOPE=SYSTEMS and not SCOPE=RESERVES on the QNAME statement, since other NCS products (LCM and LibraryStation) issue global enqueues that must be propagated to all systems. Specifying SCOPE=RESERVES prevents enqueues from being propagated and causes problems.

For PROCESS=SELECT and PROCESS=ALLSYSTEMS environments with LCM and/or LibraryStation:

  • To not convert the CDS RESERVEs but propagate the other STKALSQN global enqueue requests, use an Exempt List and specify the QNAME statements as follows:

    STKALSQN EXEMPT=YES,GDIF=YES,RESERVES=KEEP,SCOPE=SYSTEMS
    STKENQNM EXEMPT=NO,GDIF=NO,RESERVES=KEEP,SCOPE=SYSTEMS
    
  • Make sure an Exempt List can be used. If the GDIINIT statement specifies EXEMPT=NONE, either change it to specify EXEMPT=membername, or remove the EXEMPT parameter to cause the default member name of GDIEXEMPT to be used.

  • In the GDIEXEMPT member (or whatever you named it) of the MIM parameter data set, specify:

    LOCAL QNAME=STKALSQN RNAME=hsc.primarycds.datasetname
    LOCAL QNAME=STKALSQN RNAME=hsc.secondarycds.datasetname
    LOCAL QNAME=STKALSQN RNAME=hsc.standbycds.datasetname
    GLOBAL QNAME=STKALSQN
    
  • Enter the following in the MIM commands member for every system in the MIM/MII serialization complex.

    SET GDIF EXEMPTRESERVES=YES
    

    Caution:

    The same value of EXEMPTRESERVES=YES must be specified and in effect on all systems in the MIM/MII serialization complex at the same time, otherwise, a data integrity exposure will exist for all shared data. If you have any questions about changing to EXEMPTRESERVES=YES, contact Computer Associates before making the change.

For PROCESS=SELECT environments without LCM and LibraryStation:

  • You have a choice. PROCESS=SELECT implies that only those resources that are explicitly defined with QNAME statements will be processed by CA-MIM/MII, so you can do one of the following:

    • Make sure there are no QNAME statements or Exempt List entries for the HSC, and CA-MIM/MII will leave all HSC RESERVEs alone.

    • Explicitly add and enable new QNAME statements for the HSC, and make sure that Exempt List entries do not exist for the HSC. If you choose this option, you must specify GDIF=NO, RESERVES=KEEP, and SCOPE=SYSTEMS on the QNAME statements for the HSC. As a precaution, also specify EXEMPT=NO on the QNAME statements to prevent the Exempt List from overriding any QNAME statement values. For example:

      STKALSQN EXEMPT=NO,GDIF=NO,RESERVES=KEEP,SCOPE=SYSTEMS
      STKENQNM EXEMPT=NO,GDIF=NO,RESERVES=KEEP,SCOPE=SYSTEMS
      

For PROCESS=ALLSYSTEMS environments without LCM and LibraryStation:

  • Add and enable new QNAME statements for the HSC and make sure Exempt List entries do not exist for the HSC.

Specify GDIF=NO, RESERVES=KEEP and SCOPE=SYSTEMS on the QNAME statements for the HSC. As a precaution, also specify EXEMPT=NO on the QNAME statements to prevent the Exempt List from overriding any QNAME statement values. You must code your HSC QNAME statements as shown below, since the dynamic addition of HSC resources by PROCESS=ALLSYSTEMS inappropriately defaults to assign the EXEMPT=YES and GDIF=YES attributes. For example:

STKALSQN EXEMPT=NO,GDIF=NO,RESERVES=KEEP,SCOPE=SYSTEMS
STKENQNM EXEMPT=NO,GDIF=NO,RESERVES=KEEP,SCOPE=SYSTEMS

How to Convert HSC RESERVEs to Global Enqueues in a GRS Environment

For both GRS ring and star configurations:

  • Include one RNL(CON) statement for the STKALSQN resource and one RNL(CON) statement for the STKENQNM resource:

    RNLDEF RNL(CON)  TYPE(GENERIC) QNAME(STKALSQN)
    RNLDEF RNL(CON)  TYPE(GENERIC) QNAME(STKENQNM)
    
  • Remove all RNLDEF RNL(EXCL) statements for the HSC from the GRSRNLxx member on all systems, otherwise, the HSC RESERVEs will not be converted to global enqueues. The Reserve Conversion List is not searched if an entry is found for the HSC RESERVE in the Systems Exclusion List.

How to Convert HSC RESERVEs to Global Enqueues in an MIM/MII Environment

For both PROCESS=SELECT and PROCESS=ALLSYSTEMS environments:

  • Specify GRSRNL=EXCLUDE in the IEASYSxx member of the system PARMLIB on all systems to prevent RNL processing by GRS.

  • If any systems are in a sysplex, refer to "Sysplex Considerations" in the "Advanced Topics" chapter of the MII System Programmer's Guide for other IEASYSxx requirements.

  • Whenever you need to add a QNAME statement for the HSC, specify SCOPE=SYSTEMS and not SCOPE=RESERVES on the QNAME statement since other NCS products (LCM and LibraryStation) issue global enqueues that must be propagated to all systems. Specifying SCOPE=RESERVES prevents these enqueues from being propagated and causes problems.

For PROCESS=SELECT environments:

  • Specify GDIF=YES, RESERVES=CONVERT, and SCOPE=SYSTEMS on the QNAME statements for the HSC. Make sure Exempt List entries do not exist for the HSC, and as a precaution, also specify EXEMPT=NO to prevent the Exempt List from overriding any QNAME statement values. For example:

    STKALSQN EXEMPT=NO,GDIF=YES,RESERVES=CONVERT,SCOPE=SYSTEMS
    STKENQNM EXEMPT=NO,GDIF=YES,RESERVES=CONVERT,SCOPE=SYSTEMS
    

For PROCESS=ALLSYSTEMS environments:

  • If the GDIINIT statement specifies RESERVES=KEEP, you must specify the same QNAME statements as required for PROCESS=SELECT environments since the dynamic addition of HSC resources by PROCESS=ALLSYSTEMS will not (by default) assign the RESERVES=CONVERT attribute needed to convert the RESERVE:

    STKALSQN EXEMPT=NO,GDIF=YES,RESERVES=CONVERT,SCOPE=SYSTEMS
    STKENQNM EXEMPT=NO,GDIF=YES,RESERVES=CONVERT,SCOPE=SYSTEMS
    

If the GDIINIT statement specifies or is defaulted to RESERVES=CONVERT, you do not need to specify any QNAME statements unless you have special requirements that conflict with the defaults assigned to dynamically added QNAMEs. See "Selecting the GDIF Processing Mode" in the Unicenter CA-MII Systems Programming Guide for the defaults assigned to dynamically added QNAMEs.

CDS DASD Space Requirements

This section tells how to calculate the CDS DASD space required for ELS. The space in the CDS supports the following areas:

  • HSC - Library hardware configuration and library volumes

  • VTCS - VSM hardware configuration and VSM volumes (VTV/MVC)

  • HSC SET VOLPARM - VOLPARM input card images

  • HSC SET VAULTVOL - Vault volumes

  • HSC SET CDKLOCK - Locking service for Open Systems ACSAPI

SLICREAT creates the HSC area from the HSC LIBGEN process. The other areas are created dynamically. The space for the dynamic areas must be initially allocated by SLICREAT; however, if insufficient space is allocated by SLICREAT, you can use the CDS EXPAND procedure to dynamically expand the CDS. Refer to Managing HSC and VTCS for more information.

If you are specifying multiple CDSs (SLSCNTL2, SLSSTBY), StorageTek recommends that you allocate the same amount of space (in blocks) for all your data sets when you define them.

Note:

If the data sets are defined with different space allocations, the HSC uses the size of the smallest data set to determine the number of 4K blocks that it will use for the CDS. The additional space in the other CDS data sets, if any, will not be used by the HSC.

The difference in the space between the minimum space required (returned by SLICREAT) and the size of the smallest CDS data set is formatted as CDS free blocks. These free blocks are available to create the dynamic areas (VTCS, and so forth).

Calculating DASD Space Requirements for POOLPARM/VOLPARM

The HSC SET VOLPARM utility uses CDS space to store the input card images. The number of blocks required for this function can be calculated by the following formula:

(input / 50)

where input is the number of input records in the SET VOLPARM utility.
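
For example, assuming (for illustration) 1,000 POOLPARM/VOLPARM input card images and rounding any fraction up:

(1000 / 50) = 20 blocks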

When the SET VOLPARM utility is used to specify VTV and MVC volumes, the VTCS areas for these are built when the SET VOLPARM data is applied. See the following section for information on calculating space for VTVs and MVCs.

Calculating DASD Space Requirements for VTCS

This section tells how to calculate the additional CDS DASD space required for VTCS. The additional number of 4K blocks required in the CDS for VTCS is expressed as:

  • For F and G format CDSs:

    (# VTV ranges) + (# VTV ranges)/862 + (# VTVs defined)/23 + (# VTVs
     defined)/19826 + (# MVC ranges) + (# MVCs defined)/37 + 18*(# of VTSSs) + 14
    
  • For H format CDSs:

    (# VTV ranges) + (# VTV ranges)/862 + (# VTVs defined)/23 + (# VTVs
     defined)/19826 + (# MVC ranges) + (# MVCs defined)/37 + 18*(# of VTSSs) + 14 +
     (#MVC volumes to be used for Dynamic Reclaim)/7
    

Your Oracle representative will run the VSM pre-sales planning tool to identify VSM candidate data sets. This will assist you in estimating the number of VTVs and MVCs required.
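
As a rough illustration only (the counts are assumptions and each term is rounded up to a whole number of blocks), a configuration with 10 VTV ranges, 100,000 defined VTVs, 5 MVC ranges, 500 defined MVCs, and 2 VTSSs needs approximately the following for an F or G format CDS:

10 + 1 + 4348 + 6 + 5 + 14 + (18*2) + 14 = 4434 additional 4K blocks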

Calculating DASD Space Requirements for VAULTVOL

The HSC VAULT facility uses the CDS to store information on Vaulted Volumes. The number of blocks required for this function can be calculated by the following formula:

(nnnn * 1.2 / 99)

where nnnn is the number of Vaulted Volumes specified in the SET VAULTVOL utility.
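
For example, assuming 10,000 Vaulted Volumes and rounding up:

(10000 * 1.2 / 99) = 122 blocks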

To increase/decrease the number of Vaulted Volumes, the CDS MERGE procedure must be run.

Calculating DASD Space Requirements for CDKLOCK

The HSC Open Systems locking service uses a CDS subfile to store resource lock information. This subfile is only required if the open system platforms use the XAPI enhancement to the ACSAPI to provide resource (drive or volume) serialization. The number of blocks required for this subfile can be calculated by the following formula:

(xxx/10)  +  (xxx/15)  +  (xxx/20)

where xxx is the maximum number of HSC drives that are used by the Open System platforms.

For example, if 200 HSC drives can be used by Open Systems, allocate 44 blocks for the CDKLOCK subfile. To increase or decrease the size of the CDKLOCK subfile, follow the MERGEcds procedure.

CDS VTCS Level Requirements

Note that there are different CDS levels for HSC and VTCS, and some key VTCS functionality is only enabled by migrating to the required CDS level as described in Table 1-2. You can determine what level CDS you currently have with the HSC D CDS command as shown in the following example. In this example:

  • The HSC CDS level is 6.1

    Note:

    The HSC CDS level is the same for HSC 6.1 and above, so D CDS shows CDS LEVEL = 060100 for all three releases.
  • The VSM CDS level is H

    .SLS2716I Database Information 029   029
    SYS00001 = VTCS.HARDWARE.CFG16Y.CDS  PRIVOL = ENG001     FLAGS(40) ACTIVE
    
    JOURNALING NOT ACTIVE FOR THIS SUB-SYSTEM
    
    LOG DATA SET STATUS:
    SYS00013 = VTCS.HARDWARE.CFG16Y.HSCLOG1
      ACTIVE
    SYS00014 = VTCS.HARDWARE.CFG16Y.HSCLOG2
      ACTIVE
    UTILIZATION =    .76%
    
    CDS LEVEL = 060100          DATE = 20150313
    CREATE    = I813156         TIME = 14:53:37
    VSM CDS LEVEL = H
    
    LAST CDS BACKUP  = NONE
    LAST CDS RESTORE = NONE
    LAST NCO ON
    
    
    ENQNAME  = STKALSQN       - SMFTYPE = 255
    CLEAN PREFIX = CLN        - LABTYPE = (00) SL
    RECOVERY = (00) NO RECOVERY
    THIS HOST IS = ECCL       - CDS BLOCK COUNT = 12,060
    

As described in Table 1-2, each supported VTCS version supports only a subset of these CDS levels. If you are running multiple versions of VTCS against a CDS, therefore, it is important to ensure that the CDS is set at a level supported by all the versions being run. Also note that some VTCS functions are available only when the CDS is at a certain level.

Table 1-2 CDS Levels for Supported VTCS Versions

VTCS CDS Level Valid VTCS/NCS Versions Enhancements

E

6.0, 6.1, 6.2, 7.0

  • 4 MVC copies

  • 800 Mb VTVs

F

6.1, 6.2, 7.0, 7.1, 7.2

  • Near Continuous Operations (NCO)

  • Bi-directional clustering

  • Improved CDS I/O performance - reduces the I/O required to manage virtual scratch sub-pools

G

6.2, 7.0, 7.1, 7.2

  • 400Mb/800Mb/2Gb/4Gb VTVs

  • Standard/Large VTV Pages

  • 65000 VTVs per MVC

H

7.1, 7.2

  • Dynamic Reclaim

  • Autonomous Device Support


Specifying Valid CDS Levels for VTCS 7.2

Table 1-3 describes the valid CDS levels for VTCS 7.2 and the corresponding CONFIG CDSLEVEL values.

Table 1-3 Valid CONFIG CDSLEVEL Values for VTCS 7.2

CDS VTCS Level CDSLEVEL Value

F

V61ABOVE

G

V62ABOVE

H

V71ABOVE


Note:

  • VTCS 7.2 will tolerate sharing the CDS with down-level VTCS versions. However, to run mixed 7.2 and down-level versions, you need to run at a CDS level valid for all versions. For example, VTCS 7.2 and VTCS 6.2 can share the same CDS, but the CDS level must be E, F, or G.

  • Conversely, if you want specific features, you must be at the required CDS level, which may require you to upgrade down-level hosts. For example, Dynamic Reclaim requires CDS Level H, which is only valid at VTCS 7.1 and above.

Using CDS Logging to Help Recover the CDS

The HSC Transaction Logging Service is an internal HSC service that records information into one or, optionally, two Log Files. If all copies of the CDS fail (that is, you lose the Primary, Secondary, and Standby copies), recovery from a backup can be time consuming. When recovering from a backup, CDS logging can aid in resynchronizing the CDS with the data content of the VSM subsystem.

Planning for CDS logging includes the following:

  • Do you want one or two copies of the log file? Two copies give more redundancy but take up additional DASD space.

  • How much DASD space do you need for logging? This depends on the activity on your system and the frequency with which you do backups. Log file space requirements depend on the number of VTVs mounted between log file offloads. Use Table 1-4 to size your log file space allocation.

    Table 1-4 Log File Sizing

    Log file size (Mb) VTV Mounts

    1

    10,000

    2

    20,000

    4

    40,000

    8

    80,000

    16

    160,000

    32

    320,000

    64

    640,000


  • Where do you place the log files? Each log file should reside on a different volume, and not on the same volume as any of the CDS copies.

Planning for LSMs

Planning for LSMs is an important phase of the installation process, and effective planning for StorageTek automated tape libraries contributes to a smooth ACS installation. Ensure that planning considerations are reviewed for:

  • Placement of ACS network connections on your network

  • LSM Unit address requirements

  • LSM/Pass-thru Port (PTP) relationships

  • Host-to-Host communication options

  • SL8500 libraries

  • SL3000 libraries

Placement of ACS Network Connections on Your Network

TCP/IP network-attached libraries should be attached to a separate subnet or a controlled network to protect them from floods of ARP broadcasts. All Streamline libraries are TCP/IP network attached. In addition, some legacy LSM hardware can also be attached through the TCP/IP network.

LSM Unit Address Requirements

Some legacy hardware LSMs may use an LMU station that emulates a 3278-2 terminal attached to a local controller. This LMU station must be assigned an MVS unit address, and the LMU must be defined as a 3x74 local controller. Use the HCD facility to assign the MVS unit address to the LMU station as a 3278-2 terminal.

LSM Pass-Thru Port (PTP) Relationships

If your ACS contains two or more connected LSMs, you must define the LSM/PTP relationships. The master LSM controls the PTP; the slave LSM does not. You define the master/slave LSM relationships in the SLILSM macro. A legacy LSM (for example, the 9310 Powderhorn) can be the master of a maximum of two PTPs; even if it has more than two PTPs, it can control only two, and one LSM can be both a master and a slave. Streamline libraries are different: the SL8500 can control eight PTPs, and the SL3000 has no PTPs.

Host-to-Host Communication Options

To optimize performance, Oracle recommends that you set the HSC COMMPath METHod parameter to TCP to allow even sharing of resources in a multi-host configuration, as shown in the example in "Creating the HSC PARMLIB Member."

Calculating Cartridge Capacity - SL8500 and SL3000

Message SLS0557I displays the total cartridge capacity for the library. For the SL8500 and SL3000 libraries, you must first vary the library online (Vary ACS command) to see the actual capacity of the library rather than the maximum capacity of the library. Before you vary the library, the maximum capacity is based on the highest possible number of panels that the HSC allows to be defined, not the number you defined in LIBGEN.

After you vary the library online, enter the Display Acs or Display Lsm command to show the actual library capacity. Refer to ELS Command, Control Statement, and Utility Reference for information about the Vary and Display commands.

Planning for the SL8500

Configuring and managing an SL8500 library is significantly different from configuring and managing the other supported LSMs. Therefore, the SL8500 warrants some separate planning considerations, which are shown in the example SL8500 configuration in Step 8 through Step 10 of "Coding the LIBGEN Macros."

  • The SL8500 library contains four rails on which four handbots travel, and HSC sees each rail in an SL8500 as an entire LSM. When you configure an SL8500, the SLIACS macro LSM parameter specifies the assembler labels of the four SLILSM macros, one for each SL8500 rail, where the first label corresponds to the first rail, the second label to the second rail, and so forth.

    When you are defining the adjacent LSMs in an SL8500 with only internal passthru ports (elevators), the adjacent LSMs are the other rails, as shown in Step 9 in "Coding the LIBGEN Macros."

  • Each SL8500 has three "internal PTPs" (elevators) and can also have actual PTPs that connect two SL8500s. On the SLILSM PASSTHRU parameter, 0 denotes an internal PTP and 1 an external PTP. The first LSM (rail) in an SL8500 is always the PTP master.

  • For an SL8500 with no external PTPs, the adjacent LSMs are the other rails, as specified on the SLILSM ADJACNT parameter.

  • HSC sees SL8500 panels as follows:

    • Panel 0 = CAP panel

    • Panel 1 = drive panel (Note: this is the only drive panel)

    • Panels 2 through n = cell storage panels. The total number of panels depends on the configuration of the library.

      • base library — 2-10

      • With one expansion module — 2-18 (expansion module is 8-15)

      • With two expansion modules — 2-26 (expansion modules are 8-23)

      • With three expansion modules — 2-34 (expansion modules are 8-31).

      When you are configuring SL8500 panels, this translates into the following:

    • Specify SLILSM DRIVE=(1) for the single drive panel.

    • Specify DOOR=SL8500-1 for a single SL8500 CAP, or DOOR=SL8500-2 for two SL8500 CAPs.

  • To HSC, as viewed from inside the library, SL8500 column numbers are positive starting with +1 to the right of the center line of the drive bays. They are negative starting with -1 to the left of the drive bays. HSC reports two columns for each cell storage panel (columns 0 and 1).

  • To HSC, the SL8500 outer wall = 1 and the inner wall = 2.

  • SL8500 row numbers: within each LSM (rail), rows are numbered consecutively from the top down. Row numbers start with 1 for the SL8500 and 0 for HSC.

    Note:

    The SL8500 library uses TCP/IP protocol over an Ethernet physical interface to communicate with the host and HSC. Note the following:
    • When SL8500s are connected through passthru ports, all hosts must communicate with only one SL8500; preferably the first or rightmost one.

    • The SL8500 library should be attached to a separate subnet or a controlled network to protect it from floods of ARP broadcasts.

Planning for Library-Attached Transports

Determine 4-digit hexadecimal MVS unit addresses for your system's library-attached transports. You will use these addresses to:

  • Add a SLIDRIVS macro (ADDRESS parameter) to define the transport device addresses during the HSC LIBGEN process as described in "Coding the LIBGEN Macros" (a sketch follows this list).

  • Use the HCD facility to assign MVS device numbers to these transports as described in "Configuring MVS Device Numbers and Esoterics."

    Note:

    • LTO and SDLT transports are not supported as direct-attach devices in an MVS environment. These transports are recognized by HSC but are accessible only to open systems clients using LibraryStation.

    • Nonexistent devices can be defined in the LIBGEN SLIDRIVS macro only if the LIBGEN and LSM panel types match. Changing an LSM panel type can only be done by a qualified service provider. Nonexistent drives are treated by HSC as incompatible with any media type.

    • When the new devices are actually installed, be sure to reconfigure your MVS unit address and esoteric definitions to accurately reflect the changes.

    • Oracle recommends specifying unit addresses for devices not defined on a host if that host may be used as a server for another host where device addresses have been configured.

    • After you define library-attached transports to HSC, you can define them as RTDs as described in "Planning for Library-attached Transports as RTDs."

  • Refer to Configuring and Managing SMC for defining different unit addresses for client server.
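
As a minimal sketch of the first task in this list (the label and the unit addresses are illustrative assumptions; see "Coding the LIBGEN Macros" for the complete macro syntax and for how SLIDRIVS relates to the drive panel definitions), a SLIDRIVS macro naming four library-attached transports might look like this:

DRV0A    SLIDRIVS ADDRESS=(0A10,0A11,0A12,0A13)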

Planning for Library-attached Transports as RTDs

After you define library-attached transports to HSC as described in "Planning for Library-Attached Transports," you can define them as RTDs as follows:

  • Specify the MVS device numbers on the CONFIG VTSS RTD DEVNO parameter.

    You also specify the RTD identifier on the CONFIG VTSS RTD NAME parameter. To help identify the RTDs connected to each VTSS, StorageTek recommends that you choose RTD identifiers that reflect the VTSS name (specified on the VTSS NAME parameter) and the RTD's MVS device number (specified on the RTD DEVNO parameter). A sketch of these CONFIG statements follows this list.

    In configurations where multiple VTSSs are connected to and dynamically share the same RTD, in each VTSS definition you can either assign unique RTD identifiers or use the same RTD identifier.

  • Oracle strongly recommends that you define your RTDs to MVS through HCD (as normal 3490 tape drives), even if you do not intend to vary them online to MVS. This prevents the RTD addresses used in CONFIG and LIBGEN from accidentally being used for other devices. If you do not do this, and subsequently use the addresses for other MVS devices, you will cause problems with LOGREC processing, because VTCS will write records using the RTD addresses, and MVS will write records for other devices with those same addresses.

  • You can specify that library-attached transports can only be used as RTDs. For more information, see "Creating the HSC CDS."

  • Ensure that you use the drive operator's panel or T10000 Virtual Operator Panel (VOP) to enable the SL PROT (Standard Label Protect) function on the RTDs (ESCON or FICON).
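
The following is a minimal sketch only of how these RTD definitions might appear in a CONFIG deck (the VTSS name, RTD names, device numbers, channel interface values, and thresholds are illustrative assumptions; see "Building a Simple CONFIG Deck" for the full procedure):

VTSS NAME=VTSS1 LOW=70 HIGH=80 MAXMIG=8 MINMIG=4 RETAIN=5
 RTD  NAME=VTSS1A80 DEVNO=0A80 CHANIF=0C
 RTD  NAME=VTSS1A81 DEVNO=0A81 CHANIF=0D
 VTD  LOW=9900 HIGH=993F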

Planning for Library Volumes

Determine the volume serial ranges and media types for VTVs and for all cartridges that will reside in the ACS. You will use these VOLSER ranges to create POOLPARM/VOLPARM statements to define real, virtual, scratch, MVC, and cleaning tape pools.
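
As an illustrative sketch only (the pool names, VOLSER ranges, and media/rectech values are assumptions; see the ELS Command, Control Statement, and Utility Reference for the complete statement syntax), POOLPARM/VOLPARM definitions for real scratch volumes, VTVs, and MVCs might take the following general form:

POOLPARM NAME(SYS1SCRP) TYPE(SCRATCH)
VOLPARM VOLSER(EVT000-EVT099) MEDIA(STK1R) RECTECH(STK1RC)
POOLPARM NAME(SYS1VTV) TYPE(SCRATCH)
VOLPARM VOLSER(V00000-V09999) MEDIA(VIRTUAL)
POOLPARM NAME(SYS1MVC) TYPE(MVC) MVCFREE(40) MAXMVC(4) THRESH(60) START(70)
VOLPARM VOLSER(MVC000-MVC099) MEDIA(STK2P) RECTECH(STK2PB)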

Guidelines for MVCs

  • Create separate volser ranges for MVCs to prevent HSC from writing to MVCs and to prevent VSM from writing to conventional Library Volumes. If you use POOLPARM/VOLPARM, VOLSER range validation is automatic.

  • VTCS, not MVS, controls access to MVCs. The tape management system does not control VSM access to an MVC volume and does not record its usage. If you choose to define MVCs to the tape management system, to ensure that the tape management system does not accidentally access MVCs, follow the guidelines in "Updating the Tape Management System for VSM."

  • Use your security system to restrict access to MVCs as described in "Defining MVC Pool Volser Authority."

  • HSC automatically marks newly entered MVC volumes as non-scratch. If you define existing Library Volumes as MVCs, ensure that these volumes do not contain data you need, then run the HSC UNSCratch utility to unscratch them. For more information, refer to the ELS Command, Control Statement, and Utility Reference.

  • You define MVC media and recording technique as described in "VTCS Considerations to Correctly Specify MVC Media." Note that VTCS requires a unique value for the MEDIA parameter on the STORCLAS statement.

VTCS Considerations to Correctly Specify MVC Media

Table 1-5 describes the values required on the HSC VOLPARM/POOLPARM statements and the HSC STORCLAS statement to correctly specify the desired MVC media and recording technique.

Table 1-5 RTD Model/MVC Media Values

Transport Model TAPEREQ/VOLPARM MEDIA RECTECH STORCLAS MEDIA Cartridge Type - Specified by STORCLAS MEDIA Density Encrypted?

4490

STANDARD

STANDARD

STANDARD

standard length 3480 cartridge

single

NA

9490, 9490EE

ECART

ECART

ECART

3490E cartridge

single

NA

9490EE

ZCART

ZCART

ZCART

3490EE cartridge

single

NA

9840

STK1R

STK1RA

STK1RAB

T9840A or T9840B cartridge

single

NA

T9840B

STK1R

STK1RB

STK1RAB

T9840A or T9840B cartridge

single

NA

T9840C

STK1R

STK1RC

STK1RC

T9840C cartridge

double

NA

T9840D - T9840D Non- Encrypting Transport

STK1R

STK1RD

STK1RD

T9840D cartridge

triple

no

T9840DE - T9840D Encrypting Transport

STK1R

STK1RDE

STK1RDE

T9840D cartridge for encryption

triple

yes

T9940A

STK2P

STK2PA

STK2PA

T9940A cartridge

single

NA

T9940B

STK2P

STK2PB

STK2PB

T9940B cartridge

double

NA

T1A34 -

T10000A Non-Encrypting Transport

T10000T1

T1A34

T1A000T1

T10000 full capacity cartridge

single

no

T1AE34 -T10000A Encrypting Transport

T10000T1

T1AE34

T1A000E1

T10000 full capacity cartridge for encryption

single

yes

T1A34 -

T10000A Non-Encrypting Transport

T10000TS

T1A34

T1A000TS

T10000 sport cartridge

single

no

T1AE34 -T10000A Encrypting Transport

T10000TS

T1AE34

T1A000ES

T10000 sport cartridge for encryption

single

yes

T1B34 -

T10000B Non-Encrypting Transport

T10000T1

T1B34

T1B000T

T10000 full capacity cartridge

double

no

T1BE34 -T10000B Encrypting Transport

T10000T1

T1BE34

T1B000E1

T10000 full capacity cartridge for encryption

double

yes

T1B34 -

T10000B Non-Encrypting Transport

T10000TS

T1B34

T1B000TS

T10000 sport cartridge

double

no

T1BE34 -T10000B Encrypting Transport

T10000TS

T1BE34

T1B000ES

T10000 sport cartridge for encryption

double

yes

T1C34 -

T10000C Non-Encrypting Transport

T10000T2

T1C34

T1C000T2

T10000C full capacity cartridge

triple

no

T1CE34 -T10000C Encrypting Transport

T10000T2

T1CE34

T1C000E2

T10000C full capacity cartridge for encryption

triple

yes

T1C34 -

T10000C Non-Encrypting Transport

T10000TT

T1C34

T1C000TT

T10000C sport cartridge

triple

no

T1CE34 -T10000C Encrypting Transport

T10000TT

T1CE34

T1C000ET

T10000C sport cartridge for encryption

triple

yes

T1D34 -

T10000D Non-Encrypting Transport

T10000T2

T1D34

T1D000T2

T10000D full capacity cartridge

quadruple

no

T1DE34 -T10000D Encrypting Transport

T10000T2

T1DE34

T1D000E2

T10000D full capacity cartridge for encryption

quadruple

yes

T1D34 -

T10000D Non-Encrypting Transport

T10000TT

T1D34

T1D000TT

T10000D sport cartridge

quadruple

no

T1DE34 -T10000D Encrypting Transport

T10000TT

T1DE34

T1D000ET

T10000D sport cartridgefor encryption

quadruple

yes


Use Table 1-5 as a guideline to:

  • Create VOLPARM/POOLPARM statements that segregate single/double density media or encrypted/non-encrypted media (see the sketch following this list).

  • Specify the correct STORCLAS MEDIA values to assign the desired cartridge type and recording technique to MVCs.

  • Determine which transport models can write to/read from which media. A higher capability transport (double density vs. single, or encryption vs. non-encryption) can read from media written by a lower capability transport but can only write to that media from the beginning of the tape. A lower capability transport, however, cannot read from media written by a higher capability transport but can write to that media from the beginning of the tape.
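
As a sketch of the first point, the following statements segregate double-density T9840C media from triple-density T9840D media by placing them in separate MVC pools. The pool names and volser ranges are illustrative only; the MEDIA and RECTECH values are taken from Table 1-5:

POOLPARM NAME(MVC840C) TYPE(MVC) MVCFREE(20)
VOLPARM VOLSER(MVC100-MVC199) MEDIA(STK1R) RECTECH(STK1RC)
POOLPARM NAME(MVC840D) TYPE(MVC) MVCFREE(20)
VOLPARM VOLSER(MVC200-MVC299) MEDIA(STK1R) RECTECH(STK1RD)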

Examples

If you are adding T10000D encrypting transports and new media to encrypt, create new POOLPARM/VOLPARM statements for the new media and STORCLAS statements to allow VTCS to select this media. For example:

POOLPARM NAME(SYS1MVC10D) TYPE(MVC) MVCFREE(40) MAXMVC(4) THRESH(60) INPTHRSH(10) START(70)
VOLPARM VOLSER(MVC900-MVC999) MEDIA(T10000T2) RECTECH(T1DE34)
STORCLAS NAME(10DENCRYPT) INPLACE(YES) MEDIA(T1D000E2)

Note:

In the preceding example:
  • The T10000D (full) MVCs have been partitioned, which allows them to be dynamically reclaimed.

  • Dynamic reclaim is enabled by specifying INPLACE(YES) for Storage Class 10DENCRYPT.

If you are adding T10000D encrypting transports and want to convert existing media to encryption media, change existing VOLPARMs to specify encryption and change existing STORCLAS statements to request encryption. For example:

POOLPARM NAME(SYS1MVC10D) TYPE(MVC) MVCFREE(40) MAXMVC(4) THRESH(60) INPTHRSH(10) START(70)
VOLPARM VOLSER(MVC900-MVC999) MEDIA(T10000T2) RECTECH(T1DE34)
STORCLAS NAME(T10K) INPLACE(YES) MEDIA(T1D000E2)

Here is how this strategy works: you cannot add encrypted VTVs to MVCs that already contain data, but you can write encrypted data to initialized MVCs that contain no data. Therefore, ensure that you have sufficient free T10000 MVCs, and consider doing demand drains on MVCs that contain data to free them up (see the example following this paragraph).
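
A demand drain is requested with the VTCS MVCDRain command. As a minimal sketch, assuming MVC905 is a data-bearing MVC you want to empty (the volser is illustrative; refer to the ELS Command, Control Statement, and Utility Reference for the complete syntax and options):

MVCDRAIN MVC(MVC905)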

Using the STORclas MEDIA Parameter for MVC Media Preferencing

By default, in mixed-media VSM systems, VTV automatic and demand migrations (and consolidations) attempt to go to MVCs by media type in this order:

  1. Standard length 3480 cartridge

  2. 3490E cartridge

  3. 3490EE cartridge

  4. T9840A/B cartridge

  5. T9840C cartridge

  6. T9840D cartridge

  7. T9940A cartridge

  8. T9940B cartridge

  9. T10000A sport cartridge

  10. T10000B sport cartridge

  11. T10000C sport cartridge

  12. T10000D sport cartridge

  13. T10000A full capacity cartridge

  14. T10000B full capacity cartridge

  15. T10000C full capacity cartridge

  16. T10000D full capacity cartridge

  17. LTO-6 full capacity cartridge

  18. LTO-7 full capacity cartridge

  19. LTO-8 full capacity cartridge

By default, for automatic and demand space reclamations, VSM attempts to write VTVs to output MVCs by media type in this order:

  1. LTO-8 full capacity cartridge

  2. LTO-7 full capacity cartridge

  3. LTO-6 full capacity cartridge

  4. T10000D full capacity cartridge

  5. T10000C full capacity cartridge

  6. T10000B full capacity cartridge

  7. T10000A full capacity cartridge

  8. T10000D sport cartridge

  9. T10000C sport cartridge

  10. T10000B sport cartridge

  11. T10000A sport cartridge

  12. T9940B cartridge

  13. T9940A cartridge

  14. T9840D cartridge

  15. T9840C cartridge

  16. T9840A/B cartridge

  17. 3490EE cartridge

  18. 3490E cartridge

  19. Standard length 3480 cartridge

The MEDIA parameter of the STORclas statement specifies a preference list of MVC media types. This list supersedes the default media selection list. Note that for reclamation, VTCS attempts to write VTVs back to MVCs in the reverse of the order specified on the MEDIA parameter. For more information about STORCLAS MEDIA values, see "VTCS Considerations to Correctly Specify MVC Media."

For example, if you specify the following on the MEDIA parameter of the STORclas statement:

MEDIA(STK1RAB,STK1RC,STK2PB)
  • To select an MVC for migration to this Storage Class, VTCS searches for a usable MVC in the order STK1RAB, STK1RC, STK2PB.

  • To select an MVC for the output of reclaim to this Storage Class, VTCS searches for a usable MVC in the order STK2PB, STK1RC, STK1RAB.

You specify media and ACS preferencing through the Storage Classes named on the MIGpol parameter of the MGMTclas control statement, as sketched below.
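
As a minimal sketch (the class names, ACS ID, and media value are illustrative only), a Storage Class that prefers encrypted T10000D media in ACS 00 might be defined and then referenced from a Management Class as follows:

STORCLAS NAME(ENC10D) ACS(00) MEDIA(T1D000E2)
MGMTCLAS NAME(PAYPROD) MIGPOL(ENC10D)

VTVs mounted with Management Class PAYPROD would then migrate to MVCs selected according to the ENC10D media preference.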

To optimize recall processing in mixed-media systems, ensure that your MVC pool has at least one media type compatible with each RTD type.

Planning for VTSSs

Determine your system's 1- to 8-character VTSS IDs, which you specify when you run VTCS CONFIG to install and configure your VSM system as described in "Configuring VTCS."

Caution:

Note the following:
  • The VTSS ID can consist of the characters "A-Z", "0-9", "@", "$", and "#".

  • You specify the VTSS ID only through the NAME parameter, which sets the VTSS ID in both the VTSS microcode (as displayed in the Subsystem Name field in the LOP or VOP) and in the configuration area of the HSC CDS. After VSM is put into operation, the VTSS ID is also stored in each VTV record in the CDS. Each VTV record contains either:

    The VTSS ID on which that VTV is resident.

    The VTSS ID from which the VTV was migrated.

  • Once you set the VTSS ID using the NAME parameter, you cannot change this identifier in the HSC CDS. That is, the CONFIG utility does not let you change the NAME parameter after an initial setting. Moreover, changing the VTSS ID using the Subsystem Name field of the LOP, VOP, or DOP cannot change the VTSS ID in the HSC CDS.

  • It is especially critical that you do not attempt to rename a VTSS that contains data on VTVs, which includes VTSS-resident VTVs and migrated VTVs.

  • For an initial setting only (not a change), you can set the VTSS ID in the NAME parameter only if the VTSS ID value in the VTSS microcode is:

    The factory setting (all blanks).

    A value of 99999999 (eight 9s).

  • Therefore, for an initial setting only, if the name in the VTSS microcode is not all blanks or 99999999, your StorageTek hardware representative must use the VTSS LOP, VOP, or DOP to set the VTSS ID to 99999999. This will enable you to set the VTSS ID to the value you want using the NAME parameter.

Planning for VTDs

Determine MVS unit addresses for your system's VTDs as follows:

  • For each VTSS in your VSM configuration, determine a unique unit address range for the VTDs in that VTSS. Do not use duplicate addresses or overlapping address ranges, either within the VTDs in a VTSS or across VTSSs.

  • For each VTSS in your VSM configuration, define its VTD unit addresses to VTCS using CONFIG (see the sketch following this list).

    In a multi-host, multi-VTSS configuration, you can configure your VTD unit addresses to restrict host access to VTSSs. Note that the VTVs created and MVCs initially written to from a VTSS are considered that VTSS's resources, so only hosts with access to a VTSS also have access to its VTVs and MVCs. For more information, see "Configuring VTCS." However, restricting access by host restricts your ability to use the client/server feature. Also, if you are using client/server, you will need to define host access to VTDs that do not have data path access to some drives (as these drives will be used by client hosts). In that case, you need to use the NOVERIFY parameter when the VTDs are defined to server hosts.

  • For each SMC host, use the HCD facility to define to MVS the VTDs that host can access as described in "Configuring MVS Device Numbers and Esoterics." The unit addresses you specify through the HCD facility must match the unit address range you specified for that host using CONFIG, or use the SMC unit address mapping facility to use different addresses. Refer to Configuring and Managing SMC for more details.

  • If you use CA-MIA or the MVS shared tape feature, add the VTDs to the list of managed devices.
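
As a minimal sketch of the CONFIG definitions referenced in this list (the VTSS name, threshold values, and unit address range are illustrative only; see "Building a Simple CONFIG Deck" for a complete deck), a VTSS and its VTD unit addresses might be defined as follows:

VTSS NAME(VTSS1) LOW(70) HIGH(80) MAXMIG(8) MINMIG(4) RETAIN(5)
 VTD LOW(9900) HIGH(993F)

Here the VTD statement gives VTSS1 a contiguous range of 64 VTD unit addresses (9900-993F); the same range would then be defined to MVS through HCD for each host that needs access.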

Planning for VTVs

Use these guidelines to plan for VTVs:

  • Depending on the VTV size you choose and the media your volumes were previously written on, you may need to update the JCL volume count. You can do this without changing the JCL. For more information, see the IDAXVOLCNT parameter in Configuring and Managing SMC.

  • HSC/VTCS does not allow allocation of unlabeled tapes to VTVs. A request for an unlabeled tape on a scratch VTV allocation has the following results:

    • If your JCL specifies a virtual esoteric, SMC fails the allocation.

    • If you have a default esoteric such as CART and specify allocation to virtual (using SMC policy), the allocation will go to a non-virtual device.

  • You must define VTV volsers to your tape management system; for more information, see "Updating the Tape Management System for VSM."

  • Ensure that VTV volser ranges do not duplicate or overlap existing TMS ranges or the volsers of real tape volumes, including MVCs, Library Volumes, and volumes that are regularly entered into and ejected from the ACS.

  • If VTDs are being used across multiple MVS images and VTV volsers are unique, add a generic entry for SYSZVOLS to the SYSTEM inclusion RNL to ensure that a VTV is used by only one job at a time. If you are using automatic tape switching, also add a generic entry for SYSZVOLS to the SYSTEM inclusion RNL to prevent a system from holding a tape device while it waits for a mount of a volume that is being used by another system. (A sketch of the RNL entry follows this list.)

    For more information, refer to the IBM publication z/OS MVS Planning: Global Resource Serialization.

  • HSC support lets you mix VTV and real volume types in the same scratch pool. In this case, ensure that the mount request specifies a VTD as the transport type (for example, using POLICY MEDIA(VIRTUAL)). In addition, if you are routing data to a specific VTSS (for example, by using esoteric substitution) and the request specifies a subpool, ensure that the subpool contains scratch VTVs.

    Refer to Managing HSC and VTCS for information about displaying and managing scratch subpools.

  • By default, VTCS assigns a Management Class to VTVs only on scratch mounts. You can, however, specify that VTCS assigns a Management Class whenever VTCS mounts a VTV (for read or write).

    Caution:

    If you specify that VTCS assigns a Management Class whenever VTCS mounts a VTV, the VTV's Management Class attributes can change on each mount, which can cause undesirable or unpredictable results.

    For example, if an application writes data set PROD.DATA to VTV100 with a Management Class of PROD, then writes data set TEST.DATA to VTV100 with a Management Class of TEST, then the VTV (and both data sets) has a Management Class of TEST. Similarly, it is possible to write SMC POLICY statements that assign different management classes to the same data set (for example, based on jobname), which can also cause a VTV's Management Class to change.

  • To accommodate your VSM system's VTVs, you may need to increase the DASD space for your tape management system. After you determine the number and range of VTVs your VSM system requires, refer to your tape management system documentation for specific information on calculating the DASD space requirements.
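
As a minimal sketch of the SYSZVOLS entry mentioned in this list (refer to z/OS MVS Planning: Global Resource Serialization for the complete RNL rules), the generic inclusion entry takes this form in the GRSRNLxx member of SYS1.PARMLIB:

RNLDEF RNL(INCL) TYPE(GENERIC) QNAME(SYSZVOLS)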

Data Chaining a VTD Read Forward or Write Command

Note that when data chaining a Read Forward or Write command, the VTSS requires the minimum data chained update count.