What you need to know about Mainframe applications
A Mainframe application is often based on several types of components, such as programs, JCL jobs, databases, files, transactions and maps.
The Mainframe Analyzer takes these components into account for the IBM z/OS platform. It can also analyze COBOL programs based on the COBOL-85 standard for other platforms; however, in this case only the COBOL programs are analyzed and other platform-specific components are not.
For analysis purposes, it is necessary to identify the characteristics of the COBOL dialect used to develop the programs and whether they access any databases. There are several DBMSs and the CAST Mainframe Analyzer covers only some of them. It is also useful to find out whether the application has a batch processing part and/or an on-line part (based on transactions). Batch processing means that programs are executed through jobs (implemented in the JCL language), logical files are assigned to physical files and IMS resources are attached to programs. JCL source files can refer to procedures or include files; if these are not taken into account, the analysis will be incomplete and the results partial. On-line processing means that programs can access specific resources and can call each other through specific commands. Taking this into account influences the results for programs in terms of incoming and outgoing links.
On-line programs usually have more "coupling" and, as a result, may be more sensitive than batch programs from a robustness point of view. It is therefore useful to differentiate batch programs from on-line programs in order to deliver more accurate information in the resulting dashboard.
Generally speaking, dividing the application into fine-grained modules means that CAST AIP will provide more value.
The following sections discuss the source code qualification process, the various existing mainframe platforms, how the delivered source code can be organized, and the components that are supported by CAST AIP.
Multiple mainframe platforms
It is important to understand that there is not just one mainframe platform but several. The best known is certainly IBM z/OS.
Although each platform generally manages the same types of components (programs, jobs, transactions, files, databases, etc.), these components cannot all be handled by the same set of parsers. For instance, jobs on the IBM zSeries computer family are not implemented in the same language as jobs defined on Unisys A Series or IBM AS/400 computers. As such, it is important to know and understand which platforms (or parts of them) can be analyzed by the Mainframe Analyzer and which cannot.
The following tables show the different COBOL dialects, job control languages, transaction managers and DBMSs that have already been encountered, together with information about their support by CAST AIP and the potential issues that may occur.
|Platform / OS||COBOL dialect||Support|
|IBM zSeries (z/OS)||Enterprise COBOL for z/OS, IBM COBOL for OS/390, VS COBOL II|| |
|IBM AS/400 (OS400)|| || |
|Unisys A Series|| || |
|Bull DPS7 (GCOS7)|| || |
|Bull DPS8 (GCOS8)|| || |
|Unix, AIX, Windows|| || |
|OpenVMS||HP Cobol for OpenVMS||Medium|
|Stratus VOS||VOS Cobol||Medium|
|Platform / OS||Comments|
|IBM zSeries (z/OS)|| |
|IBM AS/400 (OS400)||A Universal Analyzer profile has already been created (see the UA Corner and/or UA CoE).|
|Unisys A Series||Workflows are implemented in the Algol language (yes, this language is still used!), so they are difficult to parse with the Universal Analyzer.|
|Bull DPS7 (GCOS7)|| |
|Bull DPS8 (GCOS8)||A Universal Analyzer profile could be developed to take GCOS8 jobs into account. This is more difficult for GCOS7, which is difficult to parse with a Universal Analyzer profile.|
IBM zSeries (z/OS)
IBM zSeries (z/OS)
Bull DPS7 (GCOS7)
Bull DPS8 (GCOS8)
IBM zSeries (z/OS)
EXEC SQL … END-EXEC macros
EXEC SQL … END-EXEC macros for Pro*COBOL.
IBM zSeries (z/OS)
IBM zSeries (z/OS)
IBM zSeries (z/OS)
It is often used in conjunction with the IDEAL programming language, which is not supported by CAST AIP.
IBM zSeries (z/OS)
A specific Universal Analyzer profile has been developed.
Bull DPS7 (GCOS7)
Unisys A Series
Questions to ask...
The first thing to do with regard to the source code is to ask some questions about it and about how the architecture within it has been developed. These questions will help you qualify the source code, find out whether the applications contain components that cannot be analyzed, and determine whether any customization will be necessary.
Possible answers are listed below; these are only indications and you will probably receive others:
Questions to ask
What platform and/or operating system is in use?
Platforms and associated OS that are generally encountered are:
Are COBOL programs compliant with COBOL-85? If yes, then what is the compiler?
Most common COBOL compilers are:
Is there a COBOL code generator?
There are many code generators. Some of them are frequently encountered, such as:
Are there programming languages other than COBOL?
There are many other programming languages that you may encounter:
There are Universal Analyzer profiles for PL/I and Natural. It is also possible to implement new profiles for languages such as EasyTrieve or SAS.
Is there a transactional part? If yes, then what is the transactional monitor?
There are several Transaction Managers:
What is the DBMS accessed by programs? What is the technology of the DBMS?
There are several types of DBMS used on mainframe platforms.
Are there technical layers for data access or program execution?
Technical layers can be developed to manage access to persistent data or calls between programs. They often take the form of specific subprograms that are called by the application programs.
Are there technical tools like MOM?
How is the name of the executable program built (load module on the z/OS platform)?
The executable program can be named using the PROGRAM-ID clause of the COBOL program, using its source filename, or using another name defined during the compilation step.
The first two cases are taken into account by the Mainframe Analyzer but the third is not (and, in this specific case, call links cannot be created). It is therefore important to understand how executable programs are named.
Are there naming conventions that allow programs to be grouped in applications, chains or domains?
For example, the first two characters correspond to the application code and the next two to the sub-application code…
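As an illustration only, such a convention can be exploited to group programs automatically. The two-plus-two split below is a hypothetical convention; adapt the slice positions to the customer's real rules:

```python
from collections import defaultdict

def split_program_name(name):
    """Split a program name according to a hypothetical naming convention:
    characters 1-2 = application code, characters 3-4 = sub-application code."""
    return {
        "application": name[:2],
        "sub_application": name[2:4],
        "remainder": name[4:],
    }

def group_by_application(names):
    """Group a list of program names by their application code."""
    groups = defaultdict(list)
    for name in names:
        groups[split_program_name(name)["application"]].append(name)
    return dict(groups)
```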
Collecting information about the source code
At the end of this step, you should know how:
- To find the components required for the analysis
- To check if they are correctly defined in order to be processed by the Mainframe Analyzer
- To identify the characteristics of the COBOL program
You should also have information about:
- The location of all required components
- The different types of COBOL programs
- The databases accessed by the programs
- The list of potential missing components
In Mainframe environments, source files do not have extensions. An extension must be added during or after the file transfer from the host machine to a Windows server. It is useful to use dedicated extensions for each type of source file. The following table shows the extensions you can use:
|Source file type||Extensions|
|COBOL program||COB or CBL|
|Copybook||CPY or COP|
|CICS CSD flat file||CSD, CICS or TXT|
In addition, it is very useful to isolate the source files in dedicated folders as soon as possible. The following figure shows an example of a source file tree:
A source file tree example
If this is done, the Delivery Manager Tool can be used more easily to collect the whole source code or the specific parts required for the analysis.
The Delivery Manager Tool (DMT) is introduced here. Please refer to that page for more information regarding the features of DMT.
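As a sketch of this preparation step, once the source files are grouped into per-type folders, missing extensions can be added in bulk. The folder names and extension choices below are assumptions for illustration; adapt them to your own source tree:

```python
import os

# Hypothetical folder-to-extension mapping, following the extension table above.
EXTENSION_BY_FOLDER = {
    "programs": ".cbl",
    "copybooks": ".cpy",
    "jcl": ".jcl",
    "csd": ".csd",
}

def add_missing_extensions(root):
    """Append the expected extension to every extension-less file found
    in the known sub-folders of `root`; return the new file names."""
    renamed = []
    for folder, ext in EXTENSION_BY_FOLDER.items():
        directory = os.path.join(root, folder)
        if not os.path.isdir(directory):
            continue
        for name in sorted(os.listdir(directory)):
            path = os.path.join(directory, name)
            if os.path.isfile(path) and not os.path.splitext(name)[1]:
                os.rename(path, path + ext)
                renamed.append(name + ext)
    return renamed
```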
Distributing the source code
The larger the application you need to analyze, the longer the analysis can take to complete. Large applications, and consequently long analysis times, also increase the risk of memory-related errors occurring during the analysis. If this is the case, it may be necessary to split the source code into several packages in the Delivery Manager Tool, which will lead to the creation of multiple Analysis Units in CAST Management Studio, according to the recommendations listed hereafter:
- Place all JCL components for analysis in the same package where possible even if multiple packages contain the same JCL components (see 5 below)
- Always include copybooks in packages that also contain COBOL programs.
- COBOL programs can be distributed over multiple packages - CAST will then automatically associate linked objects. It is difficult to estimate the size of COBOL programs because the expanded version will be analyzed - as a result, you should try to distribute the COBOL programs evenly across the available packages.
- CICS files can be placed in their own package.
- For applications that contain COBOL programs, JCL components and IMS databases together, IMS files can be placed in one separate package, but it is preferable to place the COBOL programs and all the JCL components in the same package (see 1 and 2 above). If you split the COBOL programs over multiple packages (see 3 above), you should place all the accompanying JCL files in each package that contains COBOL programs.
- Identical objects taken into account in multiple packages associated with the same Analysis Service (formerly known as the Local/Knowledge Base) will be merged.
- AU 1: all IMS files
- AU 2: all JCL files and first part of COBOL program files and all copybook files
- AU 3: all JCL files and second part of COBOL program files and all copybook files
- AU 4: all CICS files
All packages must be associated to the same parent Application if you want links between objects in the different associated Analysis Units to be detected.
Components that are taken into account
The COBOL analysis forms the main part of the Mainframe Analyzer. Consequently, it is important to find all COBOL components and to qualify the COBOL source code properly.
The COBOL language analyzer is the most important part of the Mainframe Analyzer. You need to find the following information in order to know how many analysis jobs to create and which parameters require activation:
- COBOL type
- Code format
- Source files location
The COBOL Analyzer is based on the COBOL ANSI 85 standard and several extensions (refer to the CAST AIP Release Notes to find out which extensions are taken into account). You need to know this before running the analysis. The customer can provide you with this information.
Historically, COBOL is a strictly column-formatted programming language. A source code line is divided into five areas:
|Columns||Area||Content|
|1 through 6||Left area (sequence number area)||Optional sequence numbers|
|7||Indicator area||Used for comment lines, the continuation character, debug lines…|
|8 through 11||Area A||Program division and section names; first-level data definitions; procedure division section and paragraph names|
|12 through 72||Area B||Program clauses; second-level (or deeper) data definitions; procedure division statements|
|73 through 80||Right area (identification area)||Ignored by the compiler; often used for comments or change tags|
There are extensions available that make it possible to avoid this strict column format. The Left area and the Right area disappear and the developer can use more than 60 columns for code (in Area B). This is the case, for instance, with Micro Focus COBOL.
You need to search for the source files: where are the programs and where are the copybooks? It is quite common for the files to be located in several directories. For instance, you could have a directory for the batch programs, another for the transactional programs, another for the subprograms, another for the common copybooks, another for the transactional copybooks…
It is important to look at these directories and to check their content by answering the following questions:
- Is it really and only COBOL code?
- Are they programs or copybooks?
- Which column number is used for the Indicator Area?
- Do they have the Right Area?
Occasionally, source files implemented in programming languages other than COBOL are hidden among the COBOL source files. When you try to analyze them, the analyzer will produce syntax error messages or will simply ignore them. If you find this type of source file, you can change the file extension (for instance, to "NOT-CBL") so that it will not be analyzed. You will probably find such files after the first analysis.
The COBOL Analyzer needs to know the Indicator Area column number. You can easily find this information by opening a program source file and searching for the '*' column in comment lines. If there is no comment line then look for the "IDENTIFICATION DIVISION" statement. The Indicator Area column number is the column number of this statement minus 1.
It is possible that program source code does not have the Right Area. You can verify this by opening a source file. If there are statements after column number 72 then you will need to check the corresponding option ("End Column No Comments") in the analyzer wizard ("Options" page).
The following piece of code shows Microfocus COBOL code without the Left area and Right area. The Indicator area starts at column number 1:
The COBOL terminal format
If programs have different Indicator Area column numbers or if some programs use the Right Area and others do not, then you will have to analyze the programs in different Analysis Units configured with specific parameters and options.
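The two checks above (finding the Indicator Area column and detecting code beyond column 72) can be sketched as follows. This assumes the sequence area of comment lines is blank; adapt it if sequence numbers are present:

```python
import re

def inspect_cobol_format(lines):
    """Guess the Indicator Area column (1-based) from the '*' of comment
    lines, falling back on the column of "IDENTIFICATION DIVISION" minus 1,
    and report whether any statement extends beyond column 72."""
    indicator_column = None
    for line in lines:
        match = re.match(r"^( *)\*", line)
        if match:  # comment line: the '*' sits in the Indicator Area
            indicator_column = len(match.group(1)) + 1
            break
    if indicator_column is None:
        for line in lines:
            position = line.find("IDENTIFICATION DIVISION")
            if position > 0:
                indicator_column = position  # (position + 1) - 1, 1-based
                break
    code_past_column_72 = any(len(line.rstrip()) > 72 for line in lines)
    return {"indicator_column": indicator_column,
            "code_past_column_72": code_past_column_72}
```

If `code_past_column_72` is true, the "End Column No Comments" option mentioned above should be checked for the corresponding Analysis Unit.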
Partitioned Data Set (PDS)
If the source code of your COBOL programs is stored in Partitioned Data Sets (PDS), you need to extract all MEMBERs that correspond to the COBOL programs you want to analyze and place them in a flat file. In order for this flat file to be divided up (on the Windows machine) into as many files as there are COBOL programs, it is important that each MEMBER is preceded by a banner that contains its name (e.g. "MEMBER NAME myprog", where myprog is the name of a member). The DMT provides a specific extractor that retrieves component source code from PDS dump files.
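On the Windows side, splitting such a dump can be sketched as follows. The banner format and the ".cbl" output extension are assumptions; adapt both to your extraction job:

```python
import os
import re

def split_pds_dump(dump_path, output_dir, banner=r"^MEMBER NAME\s+(\S+)"):
    """Split a PDS dump flat file into one file per MEMBER, assuming each
    member is preceded by a banner line such as 'MEMBER NAME myprog'."""
    banner_re = re.compile(banner)
    os.makedirs(output_dir, exist_ok=True)
    member_names = []
    current_name, current_lines = None, []

    def flush():
        if current_name is not None:
            with open(os.path.join(output_dir, current_name + ".cbl"), "w") as out:
                out.writelines(current_lines)
            member_names.append(current_name)

    with open(dump_path) as dump:
        for line in dump:
            match = banner_re.match(line)
            if match:
                flush()  # write the previous member before starting a new one
                current_name, current_lines = match.group(1), []
            elif current_name is not None:
                current_lines.append(line)
    flush()
    return member_names
```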
The following image shows an example of a JCL file that can extract MEMBERS stored in a PDS and then save them in a flat file. This JCL must be configured according to the norms used in the execution environment.
JCL for extracting MEMBERs to a flat file
A COBOL-based application can have a batch processing part. This step explains how to identify batch components and how to verify the completeness of the source code. The Mainframe Analyzer takes into account JCL for the IBM z/OS platform only.
Batch processing allows you to run large groups of tasks (which can invoke programs) without any human interaction. A task is implemented by a JCL running a sequence of programs. A JCL source file can include procedure source files which contain common commands.
A JCL is a sequence of commands (cards such as JOB, EXEC or DD) and is composed of one or more steps, each of which runs a program and assigns resources such as data files or a database environment. The first thing to do is to look for the JCL and procedure source files; you can use a GREP tool to find them. A JCL source file starts with a "JOB" card and a procedure source file starts with a "PROC" card. The second thing to do is to find out whether the JCL includes procedures. Calls to procedures are made through "EXEC" cards; use your GREP tool to find these cards. There are two categories of "EXEC" cards:
- EXEC <procedure_name>
- EXEC PGM=<program_name>
The first allows the inclusion of procedures into a JCL and the second allows a program to be called (which is not necessarily a COBOL program).
JCL source code example
If you find the first category of "EXEC" cards, it means that procedures are used by the jobs. You must then make sure you have the corresponding source files; if not, ask the customer to provide the missing procedures.
You will inevitably find the second category of "EXEC" cards because the main goal of a JCL is to run programs (utilities, technical programs and application programs). If you build the list of called programs, you will be able to check whether any are missing. However, you will have to distinguish the technical programs from the others, so it is often easier to run the analysis and then check for unresolved programs once it has completed.
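The inventory of called programs and included procedures can be sketched with simplified patterns (real JCL has continuations and symbolic parameters that this sketch does not handle):

```python
import re

# Simplified card patterns: '//STEP EXEC PGM=name' vs '//STEP EXEC procname'.
PGM_CARD = re.compile(r"^//\S*\s+EXEC\s+PGM=([A-Z0-9$#@]+)")
PROC_CARD = re.compile(r"^//\S*\s+EXEC\s+(?!PGM=)([A-Z0-9$#@]+)")

def inventory_exec_cards(jcl_text):
    """Return (called programs, included procedures) found in a JCL."""
    programs, procedures = set(), set()
    for line in jcl_text.splitlines():
        pgm = PGM_CARD.match(line)
        if pgm:
            programs.add(pgm.group(1))
        else:
            proc = PROC_CARD.match(line)
            if proc:
                procedures.add(proc.group(1))
    return programs, procedures
```

Comparing the procedure set with the delivered procedure source files gives a quick completeness check before running the analysis.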
A COBOL-based application can have a transactional part. This step explains how to identify transaction-related usage and the corresponding components. The Mainframe Analyzer can analyze on-line programs developed to run in the CICS environment. It works with several types of information:
- Macros embedded in the program source code
- Screen definition
- CSD table
You can easily find out if the programs use CICS commands by scanning the source files with a GREP tool to find embedded macros. These macros look like the embedded SQL macros and start with the "EXEC CICS" string.
If there are CICS programs then there are generally also screen definition files (except if the screens are managed by another software layer). These files contain BMS code describing the screens (named maps). You can find them by using your GREP tool and searching for the "DFHMSD" string.
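A quick qualification pass over the source files, based on the markers discussed above, can be sketched as follows (the tag names are arbitrary):

```python
def classify_source(text):
    """Tag a source file according to the markers discussed above.
    A single file can carry several tags."""
    tags = set()
    if "EXEC CICS" in text:
        tags.add("cics-program")   # embedded CICS macros
    if "DFHMSD" in text:
        tags.add("bms-map")        # BMS screen definition
    if "EXEC SQL" in text:
        tags.add("embedded-sql")   # relational database access
    return tags
```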
There is a similar concept in the IMS DC environment: screens are described in MFS files. The MFS language is a macro language like BMS, but it cannot be analyzed by the CAST CICS analyzer, so be careful if you hear the word MFS. In addition, if the customer uses IMS DC, it is not appropriate to ask for BMS files…
BMS map source code example
Finally, if there are CICS programs, then it is useful to also have the CSD table. It allows the creation of CICS objects and more particularly links from transactions to programs.
The CICS Analyzer currently works with the definition of the CSD resources, not with a dump of them. The definition should contain the following statements (you can see an example in Figure 8):
- DEFINE TRANSACTION()
- DEFINE PROGRAM()
- DEFINE TDQUEUE()
- DEFINE FILE()
- DEFINE MAPSET()
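Once the flat file is available, the DEFINE statements can be inventoried with a short sketch (the pattern is simplified and ignores GROUP and attribute clauses):

```python
import re

DEFINE_STMT = re.compile(
    r"DEFINE\s+(TRANSACTION|PROGRAM|TDQUEUE|FILE|MAPSET)\s*\(([^)]*)\)",
    re.IGNORECASE,
)

def list_csd_definitions(csd_text):
    """Group the resource names found in a CSD flat file by statement type."""
    definitions = {}
    for kind, name in DEFINE_STMT.findall(csd_text):
        definitions.setdefault(kind.upper(), []).append(name.strip())
    return definitions
```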
This resource definition can be delivered via a copy (into an ASCII flat file) of the script used to define the CICS environment, or by using a JCL job to extract this information from CICS. The following JCL is shown as an example. If you want to use it, you must configure it according to the norms used in the execution environment and then specify the lists or groups of CICS objects to collect (please note that you cannot specify a list and a group in the same command). The application owners and/or CICS administrators are best placed to help you retrieve the information you need.
Example of JCL code to extract the CSD
The resulting flat file must look like this:
CICS CSD flat file example
The goal of this step is to explain how to find information about the database accesses made by COBOL programs and to indicate which files the Mainframe Analyzer needs.
A COBOL program can access several types of database:
- Relational (e.g. IBM DB2)
- Network (e.g. CA IDMS)
- Hierarchical (e.g. IBM IMS/DB)
The Mainframe Analyzer can detect the accesses made to a DB2 or an Oracle relational database and to an IMS/DB hierarchical database. However, these detections are not based on the same types of information.
DB2 and Oracle
If the COBOL programs access a relational database such as DB2 or Oracle through embedded SQL (queries delimited by the "EXEC SQL" and "END-EXEC" strings), then it is necessary to analyze the database in the same application. CAST Management Studio will automatically perform the database analysis before the program analysis, and links will be drawn when the programs are analyzed. It is therefore important to check whether the COBOL code contains SQL queries before defining the Analysis Units.
If the database is not analyzed in the same application, the links will not be created in the Analysis Service. However, it is possible to draw complementary links using the "Dependencies" tab of the Application editor available in CAST Management Studio.
Please read the following Cook Books related to database analysis for more information about that subject:
- "CB - 007 - Oracle Database Analysis", section 3.6
- "CB - 008 - DB2 LUW Database Analysis", section 3.5
- "CB - 010 - ASE and SQL Server Database Analysis", section 3.3
- "CB - 015 - DB2 for zOS Database Analysis", Step 1
The IMS database structure is directly analyzed by the Mainframe Analyzer through the database definition files (DBD). Without these files, you will not be able to analyze these databases. IMS/DB accesses are made via embedded macros starting with "EXEC DLI" (in the CICS context only) or, more generally, through a technical sub-program called "CBLTDLI". In addition, main programs accessing IMS contain the entry point "DLITCBL" at the beginning of the PROCEDURE division. So, if you want to know whether a program is an IMS program, look for one of these syntaxes:
- EXEC DLI … END-EXEC
- CALL "CBLTDLI" USING …
- ENTRY DLITCBL USING …
If you find one or other of these syntaxes in the COBOL source code, you can be sure that the programs access IMS.
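The three markers above can be searched for with a short sketch:

```python
import re

IMS_MARKERS = (
    re.compile(r"EXEC\s+DLI"),                   # CICS context only
    re.compile(r"CALL\s+[\"']CBLTDLI[\"']"),     # DL/I technical sub-program
    re.compile(r"ENTRY\s+[\"']?DLITCBL[\"']?"),  # IMS entry point
)

def accesses_ims(source):
    """Return True if the COBOL source shows one of the IMS syntaxes."""
    return any(marker.search(source) for marker in IMS_MARKERS)
```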
The Mainframe Analyzer does not take the IMS/DC transaction manager into account. Accesses to this transaction manager are also made via the CBLTDLI technical sub-program and are associated with PSB files. It is not really possible to know whether a program works with IMS/DC or with IMS/DB by looking at the source code; you need to look at the PSB associated with the COBOL program. If the first PCB in this PSB is not typed as a DB PCB, you can be sure that the program accesses IMS/DC.
If the program works with IMS/DB, you need to obtain the DBD and PSB files associated with the IMS databases accessed by the programs. You can search for DBD files using the "DBD[ ]+NAME=" string.
DBD source code example
You can search for the PSB by using the "PCB[ ]+TYPE=" and "PSBGEN" or "PSBNAME" strings.
PSB source code example
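The same search strings can drive a rough classification of the delivered IMS definition files:

```python
import re

DBD_MARKER = re.compile(r"\bDBD +NAME=")
PSB_MARKER = re.compile(r"\bPCB +TYPE=|\bPSBGEN\b|\bPSBNAME\b")

def ims_definition_kind(text):
    """Classify an IMS definition source as 'DBD', 'PSB' or None,
    using the search strings given above."""
    if DBD_MARKER.search(text):
        return "DBD"
    if PSB_MARKER.search(text):
        return "PSB"
    return None
```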
If you do not have the PSB and the associated DBD then it will be impossible for the Mainframe Analyzer to recognize the database elements accessed by the programs and it will not be able to create the corresponding links. The analyzer will indicate these missing components in the log file/window.
Delivering the source code
COBOL based applications can be deployed on various platforms. As a consequence, source code delivery can be performed in different ways.
Applications deployed on IBM zOS mainframe
Program and copybook source files can be managed either in a Partitioned Data Set (PDS) or in a Source Code Manager (SCM) such as Endevor. In both cases, the resulting files are transferred using the FTP protocol; this operation can be automated through a dedicated JCL job or by adding a final step to the JCL job used to extract the source code, as described below.
It is possible to dump the content of a PDS into a flat file in which each member is preceded by a specific banner. This operation is performed by executing a dedicated tool through a JCL job (you can download this file (note this is a ZIP file that contains the JCL file)). The resulting file can then be transferred onto the analysis server ready for delivery via the Delivery Manager Tool.
If the source code is managed in an SCM, it is necessary to study the interfaces provided by the tool for exporting source code and to select the most appropriate one. It is generally possible to export components into individual files or to group them in a PDS. In the first case, the resulting files can be transferred directly; in the second case, the operation described above must be applied.
The process looks like this:
SCM --[extraction]--> PDS --[JCL]--> PDS dump --[FTP]--> PDS dump --[DMT]--> AIP
Although it is possible to manage JCL jobs and procedure source files in an SCM, this type of component is generally managed in a PDS. The same extraction and file transfer procedures described for COBOL can therefore be applied to JCL source files.
Two types of source files are supported by the Mainframe Analyzer: CSD files and map source files (BMS files):
- CSD files can be extracted from the CICS system by executing a dedicated JCL job (you can download this JCL (note this is a ZIP file that contains the JCL file)). The resulting file can then be transferred onto the analysis server by using the FTP protocol, ready for delivery via the Delivery Manager Tool.
- BMS files must be transferred manually to the Windows file system ready for delivery via the Delivery Manager Tool.
Two types of source files are supported by the Mainframe Analyzer: DBD files and PSB files. These files must be transferred manually to the Windows file system ready for delivery via the Delivery Manager Tool.
The structure of a DB2 subsystem can be dumped into extraction files by executing a dedicated JCL job. This job is delivered with CAST AIP as the DB2 zOS extractor. The generated files can then be transferred onto the analysis server by using the FTP protocol. See the DB2 zOS Extractor Admin Guide for more details on how to configure and execute this tool.
The complete source code delivery process for all source code types can be summarized by the graph shown below:
Applications deployed on other platforms
As for the IBM z/OS platform, program and copybook source files can be managed either in a Source Code Manager (SCM) or in the file system. In both cases, the resulting files are transferred by using the FTP protocol and the operation can be automated with a scheduler or a command language.
- If the source code is managed in an SCM, it is necessary to study the interfaces provided by the tool to export source code and then to select the most appropriate one.
- If the source code is managed in the file system, then it may be appropriate to use a file compression utility to group the source files in a limited number of ZIP files for transfer.