Tuesday, September 20, 2011

FTP - Broken Pipe Error

One of the regular tasks in most mainframe shops is copying job results to another server. The FTP (File Transfer Protocol) utility is commonly used to copy files to and from other systems.

//STEP1     EXEC PGM=FTP,PARM='(EXIT',REGION=2048K          
//INPUT    DD *                                 
IP ADDRESS
REMOTE USER ID
REMOTE PASSWORD                                      
CD DIRECTORY
PUT 'input.dataset.name' output.file.name    (receiving file)
QUIT                                             
//OUTPUT   DD SYSOUT=*                          
//SYSPRINT DD SYSOUT=*

Commands used in FTP

quit     - Exit the FTP session
cd       - Change the directory on the remote machine
ascii    - Set the transfer mode to ASCII
binary   - Set the transfer mode to binary
delete   - Delete a file from the current remote directory
get      - Copy a file from the remote machine to the local machine
put      - Copy a local file to the remote directory
lcd      - Change the directory on the local machine
ls       - List the files in the current remote directory
mkdir    - Create a new directory in the current remote directory
mput     - Copy multiple files from the local machine to the remote directory
mget     - Copy multiple files from the remote directory to the local machine
open     - Open a connection to another machine

One common error encountered while performing FTP is the "Broken Pipe" error. A frequent cause is insufficient space on the remote server. In that case we can delete the old version of the file from the remote server using the DELETE command before transferring the new file with PUT, as in the sketch below.
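A minimal sketch of the FTP input step with a DELETE added before the PUT; the directory, dataset, and file names here are placeholders for illustration only.

//STEP1    EXEC PGM=FTP,PARM='(EXIT',REGION=2048K
//INPUT    DD *
<remote IP address>
<remote user id>
<remote password>
CD /target/directory
DELETE output.file.name
PUT 'INPUT.DATASET.NAME' output.file.name
QUIT
/*
//OUTPUT   DD SYSOUT=*
//SYSPRINT DD SYSOUT=*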

Thursday, July 14, 2011

GDG



GDG -  Generation Data Group
A GDG is a group of datasets that are related to each other chronologically or functionally. Each of these datasets is called a generation. The related datasets share a common base dataset name, and every generation has a generation number and a version number appended to it.
The generation is represented by the suffix GaaaaVnn, where aaaa is the generation number (0000 to 9999) and nn is the version number (00 to 99). For example:
AALIB.LIB.TEST.G0001V00
AALIB.LIB.TEST.G0002V00
AALIB.LIB.TEST.G0003V00

What is the main purpose of a GDG, and where can we use it?
  • To maintain all generations of a dataset in a simple way; if we want to see all the transactions done till today, we can use the GDG base name to retrieve all generations
  • To delete or uncatalog older generations
  • To refer to current and older versions of datasets very easily
  • No need to change the JCL every time before submitting
            In a production environment we often need a monthly report of a certain process; in this case a GDG is useful. For example, if we want to report on all accounts on a monthly basis, without a GDG we would have to create a separate dataset each month:
            January - MYLIB.LIB.ACC.JAN
            February - MYLIB.LIB.ACC.FEB
            March  - MYLIB.LIB.ACC.MAR and so on..
With a GDG we avoid the problem of having to change the dataset name in the JCL every month.

GDG Generation
A GDG base can be created with the IDCAMS utility.
A GDG model dataset can be created with IEFBR14.
We can also use IEFBR14 to delete a GDG generation.
A GDG definition can be altered with IDCAMS.
Refer to the current generation with 0 (e.g. MYLIB.LIB.ACC(0)); a new generation is created with +1 (e.g. MYLIB.LIB.ACC(+1)); older generations are referred to with -1, -2, -3, and so on (e.g. MYLIB.LIB.ACC(-1)).
Every new generation should be allocated with DISP=(NEW,CATLG,DELETE), as in the sketch below.
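A minimal sketch, reusing the MYLIB.LIB.ACC name from the example above; the LIMIT, space, and DCB values are illustrative only.

//*  DEFINE THE GDG BASE WITH IDCAMS
//DEFGDG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GDG (NAME(MYLIB.LIB.ACC) LIMIT(12) NOEMPTY SCRATCH)
/*
//*  CREATE THE NEXT GENERATION (+1) WITH IEFBR14
//NEWGEN   EXEC PGM=IEFBR14
//NEWDD    DD DSN=MYLIB.LIB.ACC(+1),
//            DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
//            SPACE=(TRK,(5,5),RLSE),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=800)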

Tuesday, July 5, 2011

Utilities

IEFBR14
This utility is mainly used for allocation and deallocation of datasets. Mainframe developers often encounter errors when submitting jobs, and one of the most common is "FILE IS ALREADY CATALOGED". Even though it is only a warning message in some cases, it can cause the job to fail. To resolve this problem we need to delete the files that are already cataloged.
The code below can be included as the initial step of the job:

//**********************************************
//*  IEFBR14
//**********************************************
//DD1      EXEC  PGM=IEFBR14
//DELDD    DD DSN=<< input file name >>,
//         DISP=(MOD,DELETE,DELETE),UNIT=SYSDA,
//         SPACE=(TRK,(1,1))
//**********************************************

The IEFBR14 utility is also used to delete temporary files created during the job run, and to delete (uncatalog) files on tape, as in the sketch below.
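A minimal sketch of such a cleanup step; the dataset name is a placeholder. For a cataloged tape dataset, DELETE simply removes the catalog entry.

//DELTAPE  EXEC PGM=IEFBR14
//*  UNCATALOG AN OLD (TAPE) DATASET - NAME IS ILLUSTRATIVE ONLY
//TAPEDD   DD DSN=MYLIB.OLD.BACKUP.FILE,
//            DISP=(OLD,DELETE,DELETE)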

IEBGENER
The basic uses of this utility are:
  • Create a backup copy of a sequential dataset or a member of a partitioned dataset
  • Produce a partitioned dataset, or a member of a partitioned dataset, from a sequential dataset
  • Expand an existing partitioned dataset by creating partitioned members and merging them into the existing dataset
  • Produce an edited sequential or partitioned dataset
  • Manipulate datasets containing double-byte character set data
  • Print sequential datasets or members of partitioned datasets
Some example code is shown below.
Copy 
//*---------------------------------------------
//STEP01   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=&TEST1.DDQ.NN1.DEFGGH0(+1),
//            DISP=SHR
//SYSUT2   DD DSN=&TEST1.DDQ.NN1.DEFGGH0.FTP,
//            DISP=MOD
//SYSIN    DD DUMMY
//*----------------------------------------------
   
Concatenation
//*----------------------------------------------
//STEP01   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=&TEST1.DDQ.NN1.DEFGGH0(+1),
//            DISP=SHR
//         DD DSN=&TEST1.DDQ.NN1.DEFGGH0(0),
//            DISP=SHR
//SYSUT2   DD DSN=&TEST1.DDQ.NN1.DEFGGH0.FTP,
//            DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
//            SPACE=(CYL,(20,10),RLSE)
//SYSIN    DD DUMMY
      
Empty an Existing Dataset
//*-------------------------------------------
//step01   EXEC  PGM=IEBGENER
//SYSPRINT DD  SYSOUT=*
//SYSUT1   DD DUMMY,DCB=(LRECL=80,RECFM=FB,BLKSIZE=800)
//SYSUT2   DD  DSN=&TEST1.DDQ.NN1.DEFGGH0.FTP,             
//             DISP=SHR
//SYSIN    DD  DUMMY
//*-------------------------------------------
 

SUBMIT a JOB WITHIN JOB using IEBGENER
One useful feature of the IEBGENER utility is that we can submit one job from another job. Once the first task of our JCL has completed processing, an IEBGENER step can submit the next job from the current job.

For this we need to direct the IEBGENER output to the internal reader (INTRDR), which passes our input to JES2/JES3 for submission as a new job.

The DDNAME SYSUT1 will point to the input, i.e., the JCL job which we want to submit next. This could either be in a dataset (PDS or PS) or JCL in stream.
The DDNAME SYSUT2 (output) will point to the internal reader.

Example
//*
//STEP040 EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DSN=SYSPDD.JCLLIB(JCL2),DISP=SHR
//SYSUT2 DD SYSOUT=(A,INTRDR)
//SYSIN DD DUMMY
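
A variant sketch with the JCL to be submitted placed in-stream instead of in a PDS member; the job name NEXTJOB and its parameters are only placeholders. DD DATA with a DLM delimiter is used because the in-stream lines themselves start with //.

//STEP050  EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DATA,DLM='##'
//NEXTJOB  JOB CLASS=C,MSGCLASS=X
//STEP010  EXEC PGM=IEFBR14
##
//SYSUT2   DD SYSOUT=(A,INTRDR)
//SYSIN    DD DUMMY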




Sunday, July 3, 2011

JCL Error Codes and Abends

Once the work to be done is defined in JCL, it is submitted to the operating system using the SUBMIT command. Before submitting the job, the programmer should make sure there are no JCL errors by checking the job's syntax, as in the sketch below.
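One standard way to check the JCL without running it is TYPRUN=SCAN on the JOB card; a minimal sketch, where the job name, accounting information and other parameters are placeholders.

//MYJOB001 JOB (ACCT),'JCL SYNTAX CHECK',CLASS=C,MSGCLASS=X,
//             NOTIFY=&SYSUID,TYPRUN=SCAN
//STEP010  EXEC PGM=IEFBR14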

JCL ERROR
If there is a syntax error, a dataset allocation issue, or a dataset cannot be found, the whole job is rejected with an error message, which appears in the JES messages. The job must then be corrected and resubmitted. The following are common file status codes seen when a job fails, and their meanings:

00 - Successful completion
02 - Duplicate key on a non-unique alternate index
04 - Record length mismatch
05 & 35 - Open, file not present
10 - End of file
14 - RRN (relative record number) larger than the relative key data item
20 - Invalid Key VSAM KSDS/RRDS
21 - Sequence error on write/ changing key on re-write
22 - Duplicate key found
23 - Record/file not found
24 & 34 &44 - Boundary violation
30 - Data error
37 - Open mode not compatible with the device
38 - Opening file closed with lock
39 - Open, file attributes conflicting
41 - File is open
42 - File is closed
43 - Delete/rewrite attempted without a previous successful read
46 - Sequential read attempted without valid positioning
47 - Read attempted on a file that is not open for input
48 - Write attempted on a file that is not open for output or I-O
91 - VSAM password failure
92 - Opening an already open file
93 - VSAM resource not available
94 - VSAM sequential read after end of file
95 - VSAM invalid file information
96 - VSAM missing DD statement in JCL
97 - VSAM open ok, file integrity verified


ABEND
An abend happens during the execution of a program in a step. Abends are generally categorized into system abends and user abends.
  • System abend - Occurs when the system is not able to execute a statement coded in a program. This abend code is issued by the operating system.
  • User abend - Caused by an unexpected condition in the data passed; this abend is issued by the application, based on its requirements.

Mainframe developers frequently get stuck on SOC4, SOC7 and various user abends when submitting jobs. Here we will look at some system and user abend details and how to resolve them.

SOC1
This system abend occurs due to one of the following conditions:
  • Misspelled DD name
  • Missing DD card
  • Error in parameters passed to subroutines
  • Tried to read the file which was not open
  • Same name given for an array or subroutine
  • Tried to call a program from within a COBOL SORT input/output procedure
  • Tried to call subroutine which did not exist
  • Incomplete DCB for SORTIN file

SOC4
This system abend occurs due to the following reasons:
  • A SOC4 in a sort step caused by an invalid sort control card
  • Generally it occurs due to a subscript or index going out of range (index overflow)

In this case, try to find out which variable is causing the abend, check the maximum array size allocated for it, modify the array size according to your requirement, and re-run the job.

Sometimes a SOC4 abend occurs for the same reasons as a SOC7.

SOC7
A SOC7 abend mainly occurs due to an invalid digit or an invalid sign in the last byte of a COMP-3 value. In some cases it is due to an incorrect overlap on a decimal field, a table overflow, an alphanumeric field being moved to a numeric field, or null values being moved and then used in a calculation in the code.

To resolve a SOC7 abend, first check the SYSOUT of the corresponding job to find where the data exception is reported. For example,
<<< AMM09884 - PROGRAM COMPILED 07/21/10  11.48.23   >>>                       
CEE3207S The system detected a data exception (System Completion Code=0C7).    
From compile unit ATT00200 at entry point ATT00200 at compile unit offset +00003022 at address 21380AA2.
<> LEAID ENTERED (LEVEL 04/26/10 AT 13.28)                                   
<> LEAID ABENDAID DD ALLOCATED BY CWBMAKDD DYNALLOC RC =00000

From the SYSOUT of the job, find where the program abended and which variable caused the abend, using the offset reported in the SYSOUT to locate the failing statement in the program. Once you have found it, check whether a MOVE statement is involved and what value was moved into that field before the job abended. Then correct the data for that particular record and re-run the job.

S222
This abend means the job was cancelled (usually by the operator), because it was suspected of looping or because a resource it needed was unavailable.

S322   
This is a time-out abend: the job, job step, or cataloged procedure took more time to execute than the limit specified on the EXEC or JOB statement. This time limit can also be set by JES2 or JES3 installation defaults.

To resolve this issue, specify TIME=MAXIMUM on the job card:
//TZEA249C JOB CLASS=C,PRTY=15,NOTIFY=&SYSUID,TIME=MAXIMUM

S522   
This is due to a wait time-out (for example, a TSO session timing out). Specify TIME=1440 on the EXEC statement to bypass all job step timing, as in the sketch below.
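A minimal sketch; the step and program names are placeholders.

//STEP010  EXEC PGM=MYPROG,TIME=1440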

S722   
This abend occurs when the output lines exceed the output limit specified for the job, as described below.
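One way to raise the limit, assuming your installation allows it, is the LINES parameter on the JOB statement, where the value is in thousands of output lines; the job card below is only a sketch.

//TZEA249C JOB CLASS=C,NOTIFY=&SYSUID,LINES=(999,WARNING)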

S822   
This abend occurs when the region size requested to initiate the job or TSO user could not be obtained.

SB37
This is an out-of-space abend. To resolve it, specify larger primary and secondary space quantities for the output dataset and re-run the job, as in the sketch below.
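A minimal sketch of a DD statement with larger primary and secondary quantities; the dataset name and sizes are illustrative only.

//OUTFILE  DD DSN=MYLIB.BIG.OUTPUT.FILE,
//            DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
//            SPACE=(CYL,(100,50),RLSE)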

SD37
This abend occurs when a dataset specified in the job has used all of its primary space and no secondary space was specified. To resolve it, add a secondary quantity to the SPACE parameter for that output dataset.
           
SE37  
A multi-volume physical sequential data set was being written on a direct access device.  All space was filled on the volume; an attempt was made to obtain space on the next specified volume.  Either the space was not available on that volume or the data set already existed on that volume.

U4094
This is a commonly occurring user abend. The reason for this abend is an undefined or invalid format in the job's input dataset (for example, an input dataset in packed decimal format).

For example, suppose I want to update some records in a production table using an UPDATE query held in the input dataset. While allocating the input dataset, we should make sure it is in normal format; if it is in packed decimal format, the job will read the query as undefined (packed) data and end in user abend U4094. We would see the following message in the SYSOUT of the job.

IKJ5555442 NO VALID TSO USERID, DEFAULT USER ATTRIBUTES USED
READY
%E77441QRY
  I  -   > Execution Begins
E  -   > ..
E  -   > Trying to execute a non – update SQL
E  -   > Check your Input
READY
END

To resolve this issue, change the input dataset back to the default format rather than packed decimal format: open the dataset in the editor, type PACK OFF on the command line, save it, and re-run the job.
(Note: we can also lock the editor profile, using PROFILE LOCK, to prevent such settings from being changed accidentally.)