All of the CICS components are declared, with the same names, in the z/OS CICS CSD file. All of the resource declarations are made inside a z/OS CICS GROUP named PJ01TERM. This group is declared in the z/OS CICS LIST PJ01LIST, which CICS uses at startup to install the group automatically.
Table 4‑1 Simple Application Mapsets
Table 4‑2 Simple Application Programs
Table 4‑3 Simple Application Transactions Codes
Table 4‑4 Simple Application VSAM File

In our example, all of the UNIX components resulting from platform migration are stored in the trf directory. The COBOL programs and BMS mapsets should be compiled and available as executable modules in the respective directories ${HOME}/trf/cobexe and ${HOME}/trf/MAP_TCP.

The ${HOME}/trf/cobexe directory contains the Simple Application CICS executable programs. The ${HOME}/trf/MAP_TCP directory contains the compiled Simple Application z/OS BMS mapsets:
•
2. Configure the CICS Runtime Tuxedo Server Groups and Servers to manage these resources. See the Reference for a full description of which configuration files are used with each server.

In the following examples using the CICS Simple File-to-Oracle Application, we use the CICS Runtime Group name SIMPAPP, and all of our *.desc files are located in the ${HOME}/trf/config/resources directory.

These declarations are made by filling in the transactions.desc file for each transaction you have to implement. In the File-to-Oracle Simple Application example, we have to declare four transactions, SA00, SA01, SA02 and SA03, in the SIMPAPP Group, starting the corresponding COBOL programs RSSAT000, RSSAT001, RSSAT002 and RSSAT003.

Listing 4‑1 Simple Application transactions.desc File

All of the programs used by the transactions previously declared, directly or indirectly through EXEC CICS statements such as LINK, XCTL and START, must be declared in the same Group. These declarations are made in the programs.desc file for each program to implement. In our Simple Application example, the only programs needed are RSSAT000, RSSAT001, RSSAT002 and RSSAT003, which are all coded in COBOL.

Listing 4‑2 Simple Application programs.desc File

To converse with end users through 3270 terminals or emulators, declare to CICS Runtime all of the physical mapsets (*.mpdef files) used in the COBOL programs previously defined through the specific EXEC CICS statements described earlier in this document. These declarations are made by filling in the mapsets.desc file for each mapset you have to implement.
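A hedged sketch of the corresponding transactions.desc entries, following the semicolon-separated layout shown in Listing 4‑11 (the description text and the empty trailing fields are illustrative only; see the Reference Guide for the authoritative field layout):

```
SA00;SIMPAPP; Main menu of the Simple Application;RSSAT000; ; ; ; ; ; ; ;
SA01;SIMPAPP; Inquiry screen of the Simple Application;RSSAT001; ; ; ; ; ; ; ;
SA02;SIMPAPP; Customer Maintenance Screen of the Simple Application;RSSAT002; ; ; ; ; ; ; ;
SA03;SIMPAPP; Print screen of the Simple Application;RSSAT003; ; ; ; ; ; ; ;
```

Each line links a transaction code to its CICS Runtime Group and the COBOL program it starts.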
1. On the first free physical line, type the [mapset] keyword.
2. On the next line, enter the keyword name= followed by the name of your mapsets.
3. On the next line, enter the keyword filename= followed by the physical path of your physical mapsets (.mpdef file).

In our Simple Application example, the mapsets used in our COBOL programs are RSSAM00, RSSAM01, RSSAM02 and RSSAM03.

Listing 4‑3 Simple Application mapsets.desc File
Note: The mapsets.desc file does not accept UNIX variables, so a fully expanded path must be provided in this file.
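Following the three steps above, a mapsets.desc entry for one of the Simple Application mapsets might look like this sketch (the fully expanded path /home/user1/trf/MAP_TCP is an assumption for illustration):

```
[mapset]
name=RSSAM00
filename=/home/user1/trf/MAP_TCP/RSSAM00.mpdef
```

Repeat the three lines for RSSAM01, RSSAM02 and RSSAM03.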
•
• The Simple Application uses only the CUSTOMER Oracle table, resulting from the Oracle Tuxedo Application Rehosting Workbench Data Conversion of the z/OS VSAM KSDS file PJ01AAA.SS.VSAM.CUSTOMER. So, for our File-to-Oracle application example, we have only one accessor, RM_ODCSF0 (RM for Relational Module), to declare to CICS Runtime.
Note: ODCSF0 represents the logical name previously defined in CICS that pointed to the physical file name PJ01AAA.SS.VSAM.CUSTOMER. Consequently, it is also the only file name known to the CICS COBOL program for accessing this file through EXEC CICS statements.
2. If the file does not exist, physically create the desc.vsam file at the indicated location.
3. Modify the desc.vsam file by adding, for each accessor/file used, a new line describing the different information fields used by the accessor, in a "csv" format.

Listing 4‑4 Simple Application ISAM File Declaration
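As a sketch of step 2 above, the empty desc.vsam can be created from the shell; the directory below is an assumption based on the resources location used earlier in this chapter:

```shell
# Sketch for step 2: create an empty desc.vsam if it does not exist.
# The directory is an assumption for illustration.
DESC_DIR=${HOME}/trf/config/resources
mkdir -p "$DESC_DIR"
DESC_FILE=$DESC_DIR/desc.vsam
[ -f "$DESC_FILE" ] || touch "$DESC_FILE"
```

The accessor line itself is then appended per the csv layout documented in the Reference Guide.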
1.
2. Indicate to CICS Runtime to start only the transactions belonging to the SIMPAPP CICS Runtime Group name.

The following example of a *SERVERS section of the Tuxedo ubbconfig file shows the configuration of an ARTSTRN server. The Tuxedo Group Name given in its SRVGRP parameter is the group to which ARTSTRN belongs. To be started, the ARTSTRN server must be defined in a Tuxedo Server Group previously defined (and not commented out) in the ubbconfig file.

Enter the Tuxedo tmadmin psr command to check that all of the required CICS Runtime servers (ARTTCPL, ARTCNX, and ARTSTRN) are running and that their messages conform to the Tuxedo documentation and this document. Another check can be made by entering the Tuxedo tmadmin psc command to display all of the Tuxedo services running. In addition to the CICS Runtime system transactions/services (CSGM, CESN, CESF, and so on), you can now see the transaction codes of your CICS Runtime application: SA00, SA01, SA02 and SA03.
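A minimal sketch of such a *SERVERS entry (the Tuxedo group name GRP02, the server id, and the MIN/MAX values are assumptions for illustration; -l selects the SIMPAPP group as described later in this chapter):

```
*SERVERS
ARTSTRN SRVGRP=GRP02 SRVID=20 MIN=1 MAX=5
        CLOPT="-- -l SIMPAPP"
```

MIN and MAX here bound how many ARTSTRN instances, and therefore how many concurrent transactions, Tuxedo may run.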
2. Clear it by pressing the Clear key of your 3270 emulator keypad.
3. Type the main transaction code SA00 (of your CICS Runtime application) in the top left corner:

Figure 4‑1 Simple Application Transaction Code Entry

Figure 4‑2 Simple Application Main Menu

Add the MRM parameter in the group entry of the *GROUPS and *RMS sections in the Tuxedo ubbconfig file. See the following example:

Listing 4‑9 Adding MRM Parameter in ubbconfig File Example

Add the following text in $TUXDIR/udataobj/RM to build the BDB TMS server. RBDB02 is the logical file name.

On z/OS, this limit cannot be defined in the transaction resource itself, but is defined in a distinct resource named TRANCLASS (transaction class) that contains a specific MAXACTIVE parameter describing the maximum number of concurrent instances of the same transaction. To link a transaction to a transaction class, so that it inherits its parameters, especially the MAXACTIVE parameter, the z/OS CICS transaction resource has a TRANCLASS field containing the name of the TRANCLASS resource.

This instance management is performed differently on UNIX with CICS Runtime. The maximum number of transactions running concurrently is defined by the number of servers offering the same transaction. This maximum number and the minimum number are given respectively in the MAX and MIN parameters of the ARTSTRN definition in the *SERVERS section of the Tuxedo ubbconfig file.

MAXACTIVE=1 is really an exception in this management, because it indicates that no concurrent transactions belonging to this kind of transaction class can run simultaneously. All of the transactions linked to transaction classes with a MAXACTIVE greater than or equal to 2 are managed by the CICS Runtime Tuxedo server ARTSTRN and do not require modifying anything else. For the transactions with a MAXACTIVE parameter set to 1, a CICS Runtime Tuxedo server named ARTSTR1 is dedicated to their specific management.

Listing 4‑10 Adding an ARTSTR1 Server to ubbconfig
Note: All of the CICS Runtime Transaction Servers (ARTSTRN, ARTSTR1, ARTATRN and ARTATR1) share the same CICS Runtime Transaction Server Groups; no modifications are required to the ubbconfig Server Group section (*GROUPS).

For ART CICS, concurrent transactions do not really need to be bound to transaction classes with MAXACTIVE parameters greater than or equal to two, because parallelism is the default behavior. For sequential transactions, it is mandatory, because it is the only way to declare these transactions to CICS Runtime: declare specific transaction classes defined with a MAXACTIVE=1 parameter. Like the other CICS Runtime resources, this one must belong to a CICS Runtime Group name. For each TRANCLASS, declare in a csv format:

The first tranclass, TRCLASS1, has its maxactive parameter equal to 1, indicating that all the transactions belonging to this tranclass must be managed sequentially by ARTSTR1. The last two tranclasses, TRCLASS2 and TRCLASS10, are in fact similar, because their maxactive parameters are greater than 1, indicating that the transactions belonging to these tranclasses can run concurrently, managed by the ARTSTRN server.

These transactions must then be linked to the tranclasses that we have previously defined. Once modified, the transactions.desc file will look like this:

Listing 4‑11 Example transactions.desc File

SA02;SIMPAPP; Customer Maintenance Screen of the Simple Application;RSSAT002; ; ; ; ; ; ; ; TRCLASS2
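A hedged sketch of the three tranclass declarations described above, in csv form (the exact field layout and the maxactive values chosen for TRCLASS2 and TRCLASS10 are assumptions; see the Reference Guide for the authoritative format):

```
TRCLASS1;SIMPAPP;1
TRCLASS2;SIMPAPP;2
TRCLASS10;SIMPAPP;10
```

Each line gives the tranclass name, its CICS Runtime Group, and its maxactive value.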
• No modification is made to SA00, meaning that no transaction class is associated with this transaction code. This transaction is therefore not associated with a MAXACTIVE=1 parameter and so is not sequential.
• SA02 and SA03 are associated with transaction classes, respectively TRCLASS2 and TRCLASS10, defined with MAXACTIVE >= 2. Since transaction classes are not required for these transactions, the result would be exactly the same if SA02 and SA03 were defined like SA00, without transaction classes.
• SA01, which runs sequentially, is the only one where the transaction class field is mandatory. Verify that its associated transaction class, TRCLASS1, is really defined with MAXACTIVE=1.

The ARTSTR1 is shown below:

These transactions are launched by specific EXEC CICS START TRANSID requests coded in the CICS programs that do not use DELAY or TIME parameters to delay their execution. The file is modified in the same manner as for the ARTSTRN and ARTSTR1 servers, except that the "s" (synchronous) character used to prefix the name of these servers should be replaced by the "a" (asynchronous) character.

To use parallel asynchronous transactions, with a MAXACTIVE parameter strictly greater than one, the dedicated server is ARTATRN. Refer to the section describing the installation of the ARTSTRN server to install the ARTATRN server.
• The psc command shows that five new services are running: one is dedicated to the asynchronous transaction, while each synchronous transaction (SA00 to SA03) is duplicated (ASYNC_SA00 to ASYNC_SA03) to allow them to run in asynchronous mode.

To use non-parallel asynchronous transactions, with a MAXACTIVE parameter exactly equal to one, the dedicated server is ARTATR1. Refer to the section describing the reasons for and the installation of the ARTSTR1 server to install it. These transactions are launched when ASYNC_QSPACE for EXEC START is set with the INTERVAL or PROTECT option.
1. The creation of a Tuxedo /Q Queue Space named ASYNC_QSPACE.
2.
3.
1. Before using the script, define and export in your UNIX ~/.profile file:
• The QMCONFIG variable - containing the full directory path that stores the Tuxedo /Q Queue Space ASYNC_QSPACE.
• The KIX_QSPACE_IPCKEY variable - containing the IPC Key for the Queue Space.
2. Execute mkqmconfig.sh from the command line to create the Tuxedo /Q features.
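The two exports described in step 1 can be sketched as follows (both values are assumptions for illustration only):

```shell
# Assumed values for illustration only.
export QMCONFIG=${HOME}/trf/config/qspace/ASYNC_QSPACE
export KIX_QSPACE_IPCKEY=52001
# Then create the Tuxedo /Q features:
# mkqmconfig.sh
```

The IPC key must be unique on the machine, so pick one that does not clash with other Tuxedo instances.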
1. Listing 4‑16 Simple Application Tuxedo Queue ubbconfig Example
2. Then, two servers, TMQUEUE and TMQFORWARD, must be added to the *SERVERS section of the ubbconfig file.

Using the tmadmin psr and psc commands, check that four new servers and two new services are running.

These transactions use CICS programs containing EXEC CICS requests relative to CICS Temporary Storage Queues. The statements used are EXEC CICS WRITEQ TS … END-EXEC, EXEC CICS READQ TS … END-EXEC, and EXEC CICS DELETEQ TS … END-EXEC. To manage TS Queues, activate the ARTTSQ CICS Runtime Tuxedo server.
• To activate this server, add it to the *SERVERS section of the Tuxedo ubbconfig file:

Listing 4‑19 Activating the ARTTSQ in the ubbconfig File

The Tuxedo Group Name given in the SRVGRP parameter is the group to which ARTTSQ belongs.

Use the Tuxedo tmadmin psr and psc commands to check that the server is running and that six new services are published:

Listing 4‑20 Checking ARTTSQ Server and Services are Running

TS Queues are stored in a sequential file in a dedicated directory defined in the KIX_TS_DIR UNIX environment variable. This variable is defined and then exported from the ~/.profile UNIX system file. Modify the Tuxedo ubbconfig file to activate the new ARTTSQ server dedicated to their management.

To use recoverable TS Queues, you need to define an Oracle table to contain the TS Queues. CICS Runtime provides a UNIX script, crtstable_Oracle, to create all of these tables.
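The KIX_TS_DIR export described above can be sketched in ~/.profile as follows (the directory path is an assumption for illustration):

```shell
# Directory where ARTTSQ stores the TS Queue files; the path is illustrative.
export KIX_TS_DIR=${HOME}/trf/config/tsq
mkdir -p "$KIX_TS_DIR"
```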
1. Before using the script, define and export from your UNIX ~/.profile file:
• The ORA_USER variable containing the user ID used to connect to Oracle.
• The ORA_PASSWD variable containing the associated password.
2. Once the variables have been set, execute the crtstable_Oracle script.
3. Parameters sent to the Oracle_XA manager.

Listing 4‑22 Simple Application Check For Recoverable TS Queues
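The variable setup in steps 1 and 2 above can be sketched as follows (the credential values are placeholders; use your own Oracle user):

```shell
# Placeholder credentials for illustration only; never hard-code real ones.
export ORA_USER=scott
export ORA_PASSWD=tiger
# Once the variables are set, run the table-creation script:
# crtstable_Oracle
```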
• Write data to a transient data queue (WRITEQ TD command).
• Read data from a transient data queue (READQ TD command).
• Delete an intrapartition transient data queue (DELETEQ TD command).

If an automatically initiated task does not empty the queue, access to the queue is not inhibited. The task may end normally or abnormally before the queue is emptied (that is, before a QZERO condition occurs in response to a READQ TD command). If the contents of the queue are to be sent to a terminal, and the previous task completed normally, the fact that QZERO has not been reached means that trigger processing has not been reset, and the same task is reinitiated. A subsequent WRITEQ TD command does not trigger a new task if trigger processing has not been reset.
Table 4‑5 Source to Target Mapping: CICS Runtime Resource Declaration

Every CICS-like resource in CICS Runtime is declared using a dedicated configuration file stored in the ${KIXCONFIG} directory. Intrapartition queues are declared in the file tdqintra.desc, described in the Oracle Tuxedo Application Runtime for CICS Reference Guide. In the current release, these declarations are documentary only in tdqintra.desc; it is their values in the /Q configuration that are taken into account. For detailed and accurate information on qmadmin and /Q configuration, see Using the ATMI /Q Component in the Tuxedo documentation.

The script mk_td_qm_config.sh distributed with CICS Runtime provides an example of qspace creation, and then of queue creation and configuration into /Q, to be used for TD intrapartition queues.
• KIX_TD_QSPACE_DEVICE: must contain the filename of the physical file containing the /Q database for TD queues.
• KIX_TD_QSPACE_NAME: contains the name of the logical QSPACE to create, which will contain the queues.
• KIX_TD_QSPACE_IPCKEY: a specific key, which must be unique on the machine, for the IPC used by the instance of /Q.

The creation of the device (KIX_TD_QSPACE_DEVICE) and of the QSPACE are very standard; we will not detail them. A qopen QspaceName command, to open the qspace that will contain the queues, must be issued before the creation of any queue. The QspaceName must match the QSPACENAME in the resource declaration of these queue(s).

Below is an example of an interactive queue creation using qmadmin, where the questions asked by qmadmin are in normal font, while the entries typed in by the user are in bold.

Listing 4‑23 qopen Dialog

qopen TD_QSPACE
Queue name: TEST
Queue capacity command: "TDI_TRIGGER -t S049"

Listing 4‑24 qopen Script

This is the command to be launched when the trigger level is reached; in CICS Runtime it should be set to TDI_TRIGGER -t TRID, where TRID is the transaction identifier of the transaction to trigger, which should match the TRANSID of the resource configuration.

On z/OS, the Distributed Program Link function enables a local CICS program (the client program) to call another CICS program (the server program) in a remote CICS region via EXEC CICS LINK statements. CICS Runtime supports this feature, used in multi-CICS architectures such as MRO, among migrated regions.
1. Listing 4‑25 Checking for Remote Programs

EXECUtionset ==> Dplsubset        (Fullapi | Dplsubset)
DYNAMIC(YES|NO)

Remote server program name: an empty field is not relevant with DYNAMIC(YES), because the default is the client program name (PROGram ==>).

Remote mirror transaction: an empty field is not relevant with DYNAMIC(YES), because the default is the mirror system transaction CSMI.
2. Then check, inside the programs, the EXEC CICS LINK API:

Listing 4‑26 CICS LINK API For DPL

If at least one of your programs uses DPL, install and activate the ARTDPL server without changing your other settings. To activate this server, add it to the *SERVERS section of the Tuxedo ubbconfig file. This server belongs to the same Server Group as the Transaction Servers (ARTSTRN, ARTSTR1, ARTATRN, ARTATR1). The Tuxedo Group Name given in the SRVGRP parameter is the group to which ARTDPL belongs.

Use the Tuxedo tmadmin psr and psc commands to check that this server is running and that no new service is published:

Listing 4‑28 tmadmin Commands to Check ARTDPL Server

To allow an application to use distributed programs called in EXEC CICS LINK statements, these programs must be declared to CICS Runtime.
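A minimal sketch of the corresponding *SERVERS addition (the Tuxedo group name and CLOPT value are assumptions for illustration; SRVID=500 matches the server id used with the psc -i example later in this section):

```
*SERVERS
ARTDPL  SRVGRP=GRP02 SRVID=500
        CLOPT="-- -l SIMPAPP"
```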
• In the programs.desc file, set REMOTESYSTEM (the 7th field of the csv format dataset) to the remote SYSID name (KIXD in the sample in Listing 4‑27). The default is local (empty field), meaning that local programs are declared, because they can use the full CICS API.

In our Simple Application example, if we suppose that RSSAT000 and RSSAT001 are remote, and RSSAT002 and RSSAT003 are local, then the programs.desc file is set to:
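Under that assumption, a hedged sketch of the programs.desc entries (the positions of the unused intermediate fields are illustrative; only the 7th field, REMOTESYSTEM, is significant here):

```
RSSAT000;SIMPAPP;COBOL; ; ; ;KIXD
RSSAT001;SIMPAPP;COBOL; ; ; ;KIXD
RSSAT002;SIMPAPP;COBOL; ; ; ;
RSSAT003;SIMPAPP;COBOL; ; ; ;
```

RSSAT000 and RSSAT001 carry the remote SYSID KIXD, while the empty 7th field leaves RSSAT002 and RSSAT003 local.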
3. Using the Tuxedo tmadmin psr and psc commands, check that new services for the DPL programs are published and managed by ARTDPL: KIXD_RSSAT000 and KIXD_RSSAT001.
Note: To avoid problems with homonyms, these distributed services have names composed of the Tuxedo DOMAINID defined in the ubbconfig and the name of the program they manage.

Listing 4‑30 Using tmadmin Commands to Check DPL Services

To reduce the scope of the services listed to only those managed by ARTDPL (SRVID=500), use the Tuxedo psc command followed by the -i srvid parameter to restrict the display to a particular server id.

This area is addressed through a pointer delivered by the CICS statement EXEC CICS ADDRESS CWA. If you find this CICS statement in your application, you have to implement this feature in CICS Runtime.

Listing 4‑32 COBOL Example of CWA Usage
2. Modify your ~/.profile UNIX system file to export a new CICS Runtime variable, KIX_CWA_SIZE, and set it to the value found in the WRKAREA parameter of the DFHSIT. If this variable is not declared, note that the default value is 0 and the authorized interval is from 0 to 32760 bytes.
3. Modify your ~/.profile UNIX system file to export a new CICS Runtime variable, KIX_CWA_IPCKEY, and set it to a UNIX IPC key to define the cross-memory segment used as the CWA.

On z/OS, the TWA is a common storage area defined in memory for a CICS region that programs can use to save and exchange data between themselves during the execution time of one CICS transaction. In other words, the TWA can only be accessed by the programs participating in the transaction. This area is addressed through a pointer delivered by the CICS statement EXEC CICS ADDRESS TWA. If you find an EXEC CICS ADDRESS TWA statement in your application, you have to implement this feature in CICS Runtime.

Listing 4‑33 A COBOL Example of Use of the TWA

After the CICS ADDRESS TWA, the address of the COBOL group named TRANSACTION-WORK-AREA is set to the address of the TWA allocated by CICS, meaning that TRANSACTION-WORK-AREA maps and refines this memory area. The total amount of this shared memory is defined for each transaction in the z/OS CSD configuration file, in the TWasize field. The next screen shows the result of a z/OS CEDA system transaction where the TWasize parameter is set to 122 for the SA00 transaction code:

Figure 4‑3 z/OS CEDA System Transaction Example
1. Modify the CICS Runtime transactions.desc file to report the needed amount of TWA memory (TWasize>0).
2. For each transaction using programs with CICS ADDRESS TWA statements, modify the transactions.desc file to declare its TWasize in the sixteenth field of this csv format file.
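Following the layout of Listing 4‑11, a hedged sketch of an SA00 entry carrying its TWasize of 122 in the sixteenth field (the description and the empty intermediate fields are illustrative):

```
SA00;SIMPAPP; Main menu of the Simple Application;RSSAT000; ; ; ; ; ; ; ; ; ; ; ;122
```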
Listing 4‑34 Configuration of TWA in the transactions.desc File

Listing 4‑35 stderr_strn TWA Example

Figure 4‑4 illustrates the behavior.

Figure 4‑4 WebSphere MQ Trigger Condition
• -i trigger_interval: specifies the maximum time (in milliseconds) that the ARTCKTI server waits for a message to arrive at the WebSphere MQ initiation queue.
• -s retry_interval: specifies the retry interval for ARTCKTI to reconnect to WebSphere MQ queue manager or reopen WebSphere MQ initiation queue upon failure.
• -m queue_manager_name: specifies the name of the WebSphere MQ queue manager to be monitored.
• -q queue1,queue2,…: specifies the names of the WebSphere MQ initiation queues to be monitored.

This log is the standard Tuxedo user log (ULOG), whose name prefix is contained in the ULOGPFX system variable of the Tuxedo ubbconfig file. When declaring a server in the Tuxedo ubbconfig file, each server has CLOPT options defined, including two files:
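Putting the four options above together, an ARTCKTI entry in the *SERVERS section might look like this sketch (the group name, server id, and option values are assumptions for illustration):

```
*SERVERS
ARTCKTI SRVGRP=GRP02 SRVID=600
        CLOPT="-- -i 1000 -s 10 -m QMGR1 -q INITQ1,INITQ2"
```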
Table 4‑7 Message Files by Server
• The groups of resources installed depending on the -l list parameter of each CICS Runtime server.
•
• One group (SIMPAPP) is selected with the -l option
• Information on the successful loading of these resources (Warning: zero TSQMODEL loaded).

See also the dynamic administration of CICS resources information in the Oracle Tuxedo Application Runtime for CICS Reference Guide.

To switch your transaction from enabled to disabled, you have to modify the seventh field of this csv file, changing the previous value from an implicit (" ", one or more spaces) or an explicit ENABLED status to the explicit DISABLED status. To enable a program, you only have to do the opposite, changing the STATUS field from DISABLED to ENABLED or " " (at least one space).

Listing 4‑38 Log Report Showing Program Status

|RSSAT002 |SIMPAPP |COBOL |USER|DISABLED |

Each CICS Runtime Tuxedo server reads, at startup, a list of groups to be selected and installed, contained in its CLOPT options after the -l parameter. To remove or add group(s) from an application, you only have to remove or add these groups from this list for each CICS Runtime Tuxedo server.

Listing 4‑39 Example of Application in ARTSTRN Server

If you want to remove groups, remove them from the -l lists where they are present, leaving only one : character between the remaining groups.

Listing 4‑41 Example of Removing group1 in ARTSTRN Server
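As a sketch, if an ARTSTRN CLOPT originally selected two groups, removing group1 leaves only the remaining one (the group names are illustrative):

```
# Before
CLOPT="-- -l group1:SIMPAPP"
# After removing group1
CLOPT="-- -l SIMPAPP"
```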