1.3 Job Control Language (JCL)
Entering commands from TSO is one way to accomplish tasks in z/OS, but many other ways exist. One of the most popular and powerful ways is to create files that contain lists of things to do. These lists are called batch jobs and are written in z/OS Job Control Language (JCL), which fulfills roughly the same role as shell scripting languages in UNIX.
1.3.1 Introduction to JCL
JCL is a language with its own unique vocabulary and syntax. Before you can write your first JCL, you need to understand a few z/OS concepts and facilities.
We use JCL to create batch jobs. A batch job is a request that z/OS will execute later. z/OS chooses when to execute the job and how many system resources the job can have, based upon the policies that the system administrator has set up. This is a key feature of z/OS: it can manage multiple diverse workloads (jobs) according to the service levels that the installation wants. For example, online financial applications are given a higher priority and, therefore, more resources, and noncritical work is given a lower priority and, therefore, fewer resources. z/OS constantly monitors the resources that are available and how they are consumed, reallocating them to meet the installation's goals. We could spend volumes describing just this one feature of z/OS, but this book is supposed to be about security, so we won't.
In your batch job, you will tell z/OS this information:
- You'll give the name of your job, with a //JOB statement
- You'll specify the program you want to execute, with a //EXEC PGM=<program name> statement
- If your program uses or creates any data, you'll point to the data using a //DD statement.
Listing 1.1 shows a trivial JCL job. Don't worry about executing this job, or about the exact meaning of each word—we explain them later in this chapter.
Listing 1.1. Trivial Batch Job
//MARKNJ  JOB  CLASS=A,NOTIFY=&SYSUID,MSGCLASS=H
//        EXEC PGM=IEFBR14
This job executes an IBM-provided z/OS program called IEFBR14. This is a dummy program that tells z/OS "I'm done and all is well." It requires no input and produces no output other than an indication to the operating system that it completed successfully.
You can also run TSO as a batch job by using JCL to tell z/OS this information:
- The name of the job
- The program to run, which is the TSO interpreter IKJEFT01
- Where to get the input for IKJEFT01 and the commands that you want to execute
- Where to put the output from IKJEFT01, the output from TSO, and the commands that you execute
Listing 1.2 shows a batch job that runs TSO to send a message.
Listing 1.2. Batch Job That Sends a Message Using TSO
//TSOJOB   JOB  CLASS=A,NOTIFY=&SYSUID,MSGCLASS=H
//         EXEC PGM=IKJEFT01
//SYSTSPRT DD   SYSOUT=*
//SYSTSIN  DD   *
SEND 'Hello, World' U(ORIPOME)
/*
1.3.2 Data Sets
To submit a batch job, you need to understand data sets. As the name implies, a data set is a set, or collection, of data. Data sets are made up of records; to improve performance, records can be gathered together into blocks. Data sets serve the same function as files and directories do in UNIX and Windows.
When you create a data set, you assign it a name. The name can be up to 44 characters long and consists of multiple parts, called qualifiers, separated by dots (.). Each qualifier can be up to eight characters long. In a RACF-protected system, the first qualifier is either a user ID or a group name. We discuss group names in Chapter 2, "Users and Groups."
Examples of valid data set names are
- MARKN.BOOK.CNTL
- ORI.LONG$$$$.DATASET.NAME.WITHLOTS.OFQUALS
- SYS1.PARMLIB
Examples of data set names that are invalid are
- MARKN.QUALIFIERTOOLONG.CNTL (the middle qualifier is longer than eight characters)
- ORI.THIS.DATA.SET.NAME.IS.WAY.WAY.WAY.TOO.LONG (the total data set name is longer than 44 characters)
The act of creating a data set is called data set allocation. To allocate a data set, you need to tell z/OS a few things about the data set:
- The length of records within the data set expressed in bytes (often called the LRECL)
- The expected size of the data set
- If records are to be blocked, the number of bytes in the block (called the BLKSIZE)
- The organization of the data set (referred to as the DSORG)
Data set organization requires a little explanation. z/OS allows you to define a data set that is partitioned into multiple "mini data sets" called members. This type of data set is called a partitioned data set (PDS). PDSs contain a directory that tells z/OS the name of the member as well as how to locate the member, similar to directories under UNIX, Windows, and Linux. Much of the work that you do in z/OS involves the use of PDS data sets, or their more modern version, the extended PDS called the PDSE or library.
In contrast to UNIX, Linux, and Windows, z/OS requires you to specify the maximum size of each data set, for two reasons. The first is historical—z/OS is backward compatible and can run applications that were developed 40 years ago when disk space was at a premium. The second reason is that z/OS is designed for high-availability applications. When you specify the maximum size of each data set, you can ensure that the important data sets will always have the disk space they need. For simple data sets, such as the ones that we are discussing here, the allocation consists of two parts:
- The initial size of the data set is called the primary extent. This is the amount of space that z/OS reserves for the data set right now. If you think that your data set might grow in size later, you can specify the size of the secondary extents.
- If the data set is expected to grow beyond its initial size, additional allocations of disk storage can be given to the data set by specifying the size of the secondary extent. If the primary extent of your data set fills up, z/OS allocates the secondary extent up to 15 times. This allows your data set to grow gradually up to the maximum data set size.
When defining the size of the primary and secondary extents, you can do it in bytes or based on the device geometry in units of space called tracks or cylinders. Understanding these two terms requires understanding how a disk drive works. A disk drive consists of a set of rotating metallic platters upon which data is stored magnetically. Data is written on the disk in sets of concentric circles. Each of these circles is called a track. If you project that track from the top of the stack of platters to the bottom, you have created a cylinder. It is faster to read information that is stored in the same cylinder than information that is spread across cylinders.
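To make these attributes concrete, here is a minimal sketch of a batch job that allocates a small PDS by running the IEFBR14 dummy program with a DD statement describing the new data set. The data set name ORIPOME.SAMPLE.CNTL and the sizes are purely illustrative; your installation may expect different values.

//ALLOCJOB JOB  CLASS=A,NOTIFY=&SYSUID,MSGCLASS=H
//STEP1    EXEC PGM=IEFBR14
//* NEWDS describes the data set to allocate; the name and sizes are examples only
//NEWDS    DD   DSN=ORIPOME.SAMPLE.CNTL,DISP=(NEW,CATLG),
//         SPACE=(TRK,(10,5,20)),
//         DCB=(DSORG=PO,RECFM=FB,LRECL=80,BLKSIZE=0)

Here SPACE=(TRK,(10,5,20)) requests a primary extent of 10 tracks, secondary extents of 5 tracks, and 20 directory blocks, and BLKSIZE=0 lets z/OS choose the best block size. Because z/OS allocates the secondary extent up to 15 times for a simple data set like this, it could grow to roughly 10 + 15 × 5 = 85 tracks.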
1.3.3 Using ISPF to Create and Run Batch Jobs
Before we can create and submit a batch job, we need to create a data set to hold it. The simplest way to allocate a data set is to use the Interactive System Productivity Facility (ISPF).
1.3.3.1 Data Set Creation
Getting into ISPF is very simple: just type ISPF on the TSO command line. ISPF enables you to perform many common z/OS tasks from a full-screen interactive dialog. You move about the ISPF dialogs by specifying the number of the dialog that you want to use. For example, Utilities is option 3. You can then choose the suboption, which enables you to define and delete data sets. That's option 2. We often combine these two and type them as ISPF option 3.2.
As you can see in Figure 1.8, each ISPF panel presents the list of options that you can select. When you get familiar with ISPF, you can use ISPF's fast-path feature and specify =3.2 from any ISPF panel to have ISPF take you directly to the data set allocate and delete panel.
Figure 1.8 Main menu of ISPF
Select option 3.2 and press Enter (or the right Ctrl key). ISPF now takes you to a panel where you can allocate and delete data sets. Type A as the option, your user ID as the project (ORIPOME in the screenshot), RACFBK as the group, and CNTL as the type, as shown in Figure 1.9. By convention, CNTL is used for data sets that store JCL jobs, which correspond roughly to shell scripts or batch files.
Figure 1.9 Step 1 in data set creation
To allocate the data set, you need to tell z/OS this information:
- The expected size of your data set. We'll be adding other members to this partitioned data set, so let's give it an initial size (primary allocation) of ten tracks and allow it to grow five tracks at a time (secondary allocation). Remember that z/OS uses the secondary allocation 15 times before the data set reaches its maximum size.
- The length of each record in your data set. One of the most common record lengths in z/OS is 80 bytes, which is what we will use for our first few data sets.
- The size of each block. For performance reasons, you might want to tell z/OS that whenever it reads a record, it should read a group of them. That way, when you read the next record, it will already be in memory. Every time z/OS reads from the disk, it reads an entire block. The block size that you select also affects the efficiency of the records stored on the disk drive. If you specify 0, z/OS calculates the best block size for the device upon which the data set is placed.
- The number of directory blocks. When a data set is a partitioned data set, you need to tell z/OS how much space on the data set should be reserved for the directory. Each directory block has enough space to hold the information for about five members. We'll specify 20 blocks, which will give us plenty of space for new members.
- The organization of the data set. Many different types of data sets exist. For our purposes, we'll be working with two types of data sets: normal data sets (called sequential data sets) and partitioned data sets. For this data set, specify PDS for a partitioned data set. Sequential data sets are similar to files under other operating systems. Partitioned data sets contain multiple members, distinguished by name. Each member is similar to a file, so the entire partitioned data set is similar to a directory.
After you have typed all this information, your panel should look similar to Figure 1.10. Press Enter to create the data set. ISPF responds by redisplaying the Data Set Utility panel with Data Set Allocated highlighted in the upper-right corner.
Figure 1.10 Step 2 in data set creation
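If you prefer the command line, roughly the same allocation can be requested with the TSO ALLOCATE command. This is only a sketch; the exact operands (particularly the record format and space units) can vary with installation conventions, so treat it as an assumption rather than a recipe:

ALLOCATE DATASET('ORIPOME.RACFBK.CNTL') NEW CATALOG SPACE(10,5) TRACKS DIR(20) LRECL(80) RECFM(F,B) DSORG(PO)

Remember to substitute your own user ID for ORIPOME in the data set name.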
1.3.3.2 Editing Data Set Members
When the data set is created, go to the ISPF editor. To do this, enter =2 on any command line. This is the ISPF "fast path" to the ISPF edit panel, which is option 2 from the main ISPF menu. On this panel, specify the name of the data set that you just allocated. Because you are editing a PDS, you need to specify either the name of an existing member or the name of a member that you want created, as shown in Figure 1.11. In this example, we're creating a member named HELLOW.
Figure 1.11 ISPF edit panel
After you press Enter, ISPF creates the member and places you in the ISPF editor. At this point, type the JCL shown in Listing 1.2. You need to type it on the lines that start with '''''', under the blue asterisks (*), as shown in Figure 1.12. Remember to change ORIPOME to your own user ID. Traditionally, JCL lines use the eight characters after the // for identifiers or leave them empty when no identifier is required. That is the reason, for example, that the word EXEC on the second line starts on the same column as the word JOB on the first line. The JCL would work with just one space, but it is more readable this way.
Figure 1.12 The editor after typing the batch job
After this is done, press Enter. ISPF saves your changes and replaces the quotes on the left with line numbers.
At this point, you're ready to submit your job. Type SUBMIT on the command line, and your batch job is submitted to the job entry subsystem at your installation. You will get a confirmation message with the job number, as shown in Figure 1.13.
Figure 1.13 Job submission confirmation message
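Incidentally, you do not have to be in the editor to submit the job again later. Assuming the data set and member names used in this example, the same member can be submitted directly from the TSO READY prompt (or from any ISPF command line, prefixed with TSO):

SUBMIT 'ORIPOME.RACFBK.CNTL(HELLOW)'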
Your installation has a policy for executing batch jobs, and that policy determines when your batch job is executed. After it has executed, you can view the output of the job. When your job executes, it sends a message to your TSO user ID. If you are logged on and are accepting messages, the message appears as your batch job executes. If you are not logged on or are not accepting messages, it is saved and displayed when you next log on.
When you see the confirmation message, press Enter again. In all likelihood, your job will have already executed, and you will see the message, as well as a notification that the job has completed, as shown in Figure 1.14.
Figure 1.14 The message the job sent
When you are done with ISPF, enter =X on the command line to tell it to exit. If you get a log data panel, such as the one in Figure 1.15, select option 2 to delete the log. You can then use LOGOFF to exit TSO.
Figure 1.15 The log data panel when leaving ISPF
1.3.4 JCL Syntax
Now that you've run the JCL and seen that it works, let's review Listing 1.2 line by line and explain exactly what it does.
First, you'll notice that most lines start with two slashes. The two slashes mark a line as part of JCL. Lines that do not contain those slashes, such as the last two lines in this job, are usually embedded input files.
//TSOJOB JOB CLASS=A,NOTIFY=&SYSUID,MSGCLASS=H
This line is the job header. It defines a job called TSOJOB. The CLASS parameter assigns the job to an installation-defined job class, which determines the job's priority, the maximum amount of resources it is allowed to consume, and so on. A is a good default in most installations, at least for the short jobs we'll use in this book.
The NOTIFY parameter specifies that a user should be notified when the job ends. It could be the name of a user to notify, but here it is &SYSUID, which is a macro that expands to the user who submits the job.
The MSGCLASS parameter specifies the output class for the job's messages. Class H is commonly defined as a held class, which keeps the output accessible afterward, as you will see in Section 1.3.5, "Viewing the Job Output."
// EXEC PGM=IKJEFT01
This line starts an execution step, a step in the batch job that runs a program. Steps can be named by placing an identifier immediately after the two slashes. However, this is a very simple job, so there is no need to name this step.
The program that this step executes is IKJEFT01, which is the TSO interpreter.
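For example, if we wanted to give this step a name, the EXEC statement could be written like this (STEP1 is an arbitrary name chosen for illustration, not part of Listing 1.2):

//STEP1   EXEC PGM=IKJEFT01

Step names become useful in longer jobs, where the messages in the job output refer to each step by its name.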
//SYSTSPRT DD SYSOUT=*
This line is a data definition. It defines the data stream called SYSTSPRT, which is the output of TSO. SYSOUT=* means that this data stream will go to the standard output of the job. In the next section, you will learn how to get to this output to read it.
//SYSTSIN DD *
This line is another data definition. It defines SYSTSIN, which is the input to the TSO interpreter. The value * means that the text that follows is the data to be placed in SYSTSIN.
SEND 'Hello, World' U(ORIPOME)
/*
This is the input to the TSO interpreter. The first line is a command, the same "Hello, World" command we executed in Section 1.2.3, " 'Hello, World' from TSO." The second line, /*, is a delimiter that means the end of the file.
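Nothing limits SYSTSIN to a single command; TSO executes the commands in order, just as if you had typed them at the READY prompt. As a small sketch (reusing the ORIPOME user ID from the listing), the input stream could run the TSO TIME command before sending the message:

//SYSTSIN  DD   *
TIME
SEND 'Hello, World' U(ORIPOME)
/*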
1.3.5 Viewing the Job Output
One of the outputs from your batch job was the "Hello, World" that was sent to your TSO ID. Your batch job produced other output as well. What happened to that output? It waits in the system until you or your system administrators tell the system what to do with it.
When you submitted the batch job, it was handed over to the job entry subsystem (JES). JES is responsible for scheduling the job, allocating some of its resources, and managing the job's input and output.
IBM provides two job entry subsystems: JES2 and JES3. Most z/OS environments use JES2, so our examples are oriented toward it. For those of you who are using JES3, equivalent services exist there.
One of the most popular ways to view the output of your job is to use the IBM System Display and Search Facility (SDSF) program product. You start up SDSF either as a TSO command (SDSF) or as a dialog from within ISPF. In most installations, SDSF is option S from the ISPF Primary Options menu.
From the ISPF Primary Options menu, select the SDSF option, which brings you to the SDSF Primary Option menu, shown in Figure 1.16. The options presented on this panel depend upon your level of authorization: the more things you are authorized to do, the more options SDSF shows.
Figure 1.16 The SDSF Primary Option menu
The job's output is in the output queue. Type O to enter the output queue, find your job, and type S next to it to open the output, as shown in Figure 1.17. If necessary, you can scroll down using F8 or up again using F7.
Figure 1.17 The job's output in the output queue
The top part of the output, shown in Figure 1.18, tells when the job started, when it ended, which user ID was assigned to the job, and other job statistics. JES also displays the JCL. Scroll down a page to see more system-generated messages telling you about the resources allocated for your job. You can scroll up (F7), down (F8), left (F10), and right (F11).
Figure 1.18 The first part of the job's output
The real output of the job is in its last four lines, shown in Figure 1.19: the batch version of TSO displaying the READY prompt, the echo of the "Hello, World" command, TSO's READY response, and the generated END statement.
Figure 1.19 The output of the job's TSO interpreter
1.3.5.1 Filtering Jobs
A large z/OS installation can have many jobs running at the same time. It is possible to use filtering to see only the jobs that are relevant to you.
To see the current filters, run this command inside SDSF:
SET DISPLAY ON
To filter, enter the name of the field to filter on (prefix of the job name, destination, owner, or sysname) followed by the value. For example, this command filters on jobs whose names begin with the letter L.
PREFIX L*
After this command, SDSF will show only those jobs that start with L, as you can see in Figure 1.20.
Figure 1.20 Filtered job list in SDSF
To remove the filter, run this command:
PREFIX
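Filtering by job owner works the same way. For example, this command limits the display to jobs submitted under the user ID ORIPOME (substitute your own user ID):

OWNER ORIPOME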