
Introduction

Two aspects are taken into account when measuring software functional size: the application boundary and the changes made to features throughout the application life cycle. To keep the measurement process as accurate as possible, it is extremely important to control that process and to validate the source code, the application boundary, the configuration, and finally the results.

Control of the measurement process can be achieved through four steps:

  • The first one is to compare the source code delivery with the source code selection in analysis settings. If necessary, adjustments can be done in the CAST Delivery Manager Tool (DMT) package settings by the AI Administrator.
  • The next step is to compare the source code that has been analyzed with the resulting objects in the Analysis Service. Customization can be implemented in the CAST Management Studio to take into account specific technologies or coding.
  • The third step involves checking the application module content. Modules are used to compute indicators for specific parts of the application and, as a consequence, it is important to justify all the exclusions of objects.
  • The last step is to review the list of empty transactions with regard to the configuration that has been set up by the AI Administrator. Empty transactions can be considered as excluded from the measurement results and, as such, they must be validated.


Check the application boundary consistency

Because the application boundary is the aggregation of all the components that compose the application, all the source code that has been provided to the AI Center should be considered. As a consequence, it is critical to check that the code that has been delivered matches the code that is visible in the various CAST dashboards.

Based on the architecture review, you should have a good overview of the different technologies and interfaces the application uses to communicate with other applications and with end users. Several mechanisms are available to adjust the application boundary throughout the on-boarding process, since specific coding patterns may require additional pieces of code that are not necessarily maintained by the application team.

The application boundary can be refined in several steps:

    1. Check the source code delivery in the CAST Delivery Manager Tool (DMT): The source code that is used as input for the application analysis should be the one you consider as the application boundary. In this case all the source code delivered through the DMT packages should be in the Analysis Service and part of the application transactions.
    2. Adjust analysis settings in the CAST Management Studio (CMS): If the source code perimeter has not been refined in DMT packages, then you can also fine tune the corresponding Analysis Units created in CMS.
    3. Tune the module content: If you still have to keep some elements during the application analysis but you do not want to consider them in the results, then you should refine the module definition in CMS.
    4. Configure transactions in the Transaction Configuration Center (TCC): Finally, if you have to keep some modules of the application to display quality measures but you do not want to consider them in the functional sizing, then exclusions in the CAST Transaction Configuration Center (TCC) can be an option to consider.

Consistency between DMT package input and DMT package output

It is critical to ensure that all delivered source code will be "consumed" by CAST AIP. Pieces of source code that cannot be part of the application analysis must be documented as being excluded from the application boundary. Get the list of files that contain the application source code by executing the following Windows batch command:

dir /a:a /s /b /o:n > listFiles.csv

Open the resulting CSV file with a tool like Excel. You will see one row per file in column A. Apply the following formula in column B to get the actual list of file extensions:

=RIGHT($A2,LEN($A2)-FIND("|",SUBSTITUTE($A2,".","|",LEN($A2)-LEN(SUBSTITUTE($A2,".","")))))

Select the worksheet, and create a pivot table with the extensions as rows and the corresponding number of files as values:


The following image displays the resulting pivot table:


You can then search for missing files by comparing this list with the actual list displayed in the CAST Delivery Manager Tool "Package content" tab:
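The same extension tally can be produced without a spreadsheet. The following is a minimal Python sketch of the extension count, assuming input lines in the format produced by the dir command above (one full path per line); it is an illustration, not part of the CAST tooling:

```python
from collections import Counter
from pathlib import Path

def count_extensions(lines):
    """Tally file extensions from dir /s /b output lines (one full path per line)."""
    counts = Counter()
    for line in lines:
        line = line.strip()
        if not line:
            continue
        # Extension of the file name, lower-cased; files without one go under <none>
        ext = Path(line).suffix.lstrip(".").lower() or "<none>"
        counts[ext] += 1
    return counts

# Usage against the listing produced by the dir command:
# with open("listFiles.csv") as f:
#     for ext, n in count_extensions(f).most_common():
#         print(f"{ext}\t{n}")
```

The resulting counts can then be compared with the CAST Delivery Manager Tool "Package content" tab in the same way as the pivot table.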

Consistency between application analysis input and application analysis output

The CAST Delivery Manager Tool package content will be the input for the application analysis. The CAST Delivery Manager Tool "Package content" tab displays the various file extensions that have been discovered:

You can compare this list with the file extensions that are specified in the CAST Management Studio in the Analysis Unit "Source Settings" tab. In the following example, the file extension "PGM" has been discovered in the CAST Delivery Manager Tool package but is not present in the Analysis Unit "Source Settings" file extension list. This means that only the COBOL programs with the "CBL" file extension will be analyzed and that those with the "PGM" file extension (13 programs) will not. To take these files into account, you must add the missing file extensions in the Analysis Unit "Source Settings":

Once the file extensions are coherent in both the CAST Delivery Manager Tool package and the CAST Management Studio Analysis Unit, you should expect to get the same number of files in the Analysis Service schema after the analysis has completed. To check this, you can query the Analysis Service for the number of source files that have been analyzed and compare it to the numbers displayed in the CAST Delivery Manager Tool "Package Content" tab. Execute the following SQL query against the Analysis Service schema to get the number of files for each file extension that has been taken into account during the analysis:

set search_path=<prefix>_local;
select substring(lower(p.path) from '\.([a-z0-9]+)$') as ext, count(*)
from (select distinct path from RefPath) p
group by ext;

The figure below displays the results returned by the query. They show that the files with the "PGM" extension have not been taken into account during the application analysis:

This can be fixed by adding the "PGM" file extension in the Analysis Unit "Source Settings" tab:

Analyze the application again and then execute the SQL query. You should see the expected "PGM" files in the result set:
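The DMT counts and the Analysis Service counts can also be diffed programmatically. Below is a hedged sketch: the two dictionaries stand for the extension counts copied from the "Package Content" tab and from the SQL query result, and the figures (other than the 13 "PGM" programs mentioned above) are purely illustrative:

```python
def diff_extension_counts(delivered, analyzed):
    """Return {ext: (delivered_count, analyzed_count)} for every mismatching extension."""
    mismatches = {}
    for ext in sorted(set(delivered) | set(analyzed)):
        d, a = delivered.get(ext, 0), analyzed.get(ext, 0)
        if d != a:
            mismatches[ext] = (d, a)
    return mismatches

# Illustrative counts: the 13 "pgm" files were delivered but never analyzed
delivered = {"cbl": 130, "pgm": 13}
analyzed = {"cbl": 130}
# diff_extension_counts(delivered, analyzed) -> {"pgm": (13, 0)}
```

An empty result means the delivery and the analysis are consistent for every extension.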

Consistency between application analysis output and module content

Use the following SQL query to check the content of the modules associated with the application:

set search_path=<prefix>_local;

SELECT ps.MODULE_NAME,
       SUBSTRING(lower(cob.OBJECT_FULLNAME) FROM '\.([a-z0-9]+)$') AS extension,
       count(*)
FROM <prefix_local>.PMC_SUBSET_OBJECTS pso
JOIN (SELECT ps.SUBSET_ID AS MODULE_ID,
             pm.OBJECT_NAME AS MODULE_NAME
      FROM (SELECT pm.OBJECT_ID,
                   pm.OBJECT_NAME
            FROM <prefix_mngt>.CMS_PORTF_MODULE pm) pm
      JOIN <prefix_local>.PMC_SUBSETS ps
        ON ps.SUBSET_NAME LIKE 'CMS_MOD__' || pm.OBJECT_ID || '_Preparation2'
     ) ps
  ON pso.SUBSET_ID = ps.MODULE_ID
JOIN <prefix_local>.CDT_OBJECTS cob
  ON cob.object_id = pso.object_id
 AND cob.OBJECT_TYPE_STR LIKE '%File'
GROUP BY MODULE_NAME, extension, cob.OBJECT_TYPE_STR
ORDER BY 1 ASC, 2 ASC;

The result should look like this:

It is important to be sure a file has not been assigned to multiple modules. This can be checked by executing the following SQL query against the Analysis Service schema:

set search_path=<prefix>_local;

SELECT pso.OBJECT_ID, count(*)
FROM <prefix_local>.PMC_SUBSET_OBJECTS pso
JOIN (SELECT ps.SUBSET_ID AS MODULE_ID,
             pm.OBJECT_NAME AS MODULE_NAME
      FROM (SELECT pm.OBJECT_ID,
                   pm.OBJECT_NAME
            FROM <prefix_mngt>.CMS_PORTF_MODULE pm) pm
      JOIN <prefix_local>.PMC_SUBSETS ps
        ON ps.SUBSET_NAME LIKE 'CMS_MOD__' || pm.OBJECT_ID || '_Preparation2'
     ) ps
  ON pso.SUBSET_ID = ps.MODULE_ID
GROUP BY OBJECT_ID
HAVING count(*) > 1;

The result should be empty:
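The same duplicate-assignment check can be run offline against an export of object/module pairs. This is a sketch under the assumption that the pairs were exported (for example, from PMC_SUBSET_OBJECTS joined with the module names); the sample object IDs and module names are hypothetical:

```python
from collections import defaultdict

def multi_module_objects(assignments):
    """Given (object_id, module_name) pairs, return objects assigned to more than one module."""
    modules = defaultdict(set)
    for object_id, module_name in assignments:
        modules[object_id].add(module_name)
    # Keep only objects that appear in two or more distinct modules
    return {obj: sorted(mods) for obj, mods in modules.items() if len(mods) > 1}

# Hypothetical export: object 101 is assigned to two modules
rows = [(101, "Batch"), (102, "Online"), (101, "Online")]
# multi_module_objects(rows) -> {101: ["Batch", "Online"]}
```

As with the SQL query, an empty result is the expected outcome.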

Consistency between module content and transaction graph

Once you reach this step, you are sure the input for the transaction building engine is (almost) correct. The resulting transaction call graphs must now be reviewed and validated. Several points should be checked and are presented below.

Before investigating the transactions that have been discovered by CAST AIP, you may have to adjust the list of Transaction Entry Points - see Transaction configuration.

Objects that contribute to a transaction

The following SQL query will extract the object types that are part of a transaction. This is illustrated with COBOL in the following example but the query can be adapted to other object types:

set search_path=<prefix>_local;
select object_type_str,object_language_name,count(1)
from CDT_OBJECTS where object_id in (
    select distinct objc.object_id
    from dss_transaction dt, dss_transactiondetails dtd, CDT_OBJECTS obj, cdt_objects objc
    where dt.form_id =  obj.object_id
    and objc.object_id = dtd.child_id
    and dt.object_id = dtd.object_id
  union all
    select distinct objc.object_id
    from dss_datafunction dt, dss_datafunctiondetails dtd, CDT_OBJECTS obj, cdt_objects objc
    where dt.maintable_id =  obj.object_id
    and objc.object_id = dtd.table_id
    and dt.object_id = dtd.object_id
)
and object_fullname not like '[Unknown%'
and object_language_name != '<N/A>'
and object_language_name != 'N/A'
and object_type_str not like  '%Project'
and object_type_str not like  '%Directory'
and object_type_str not like  '%Folder'
and object_type_str not in ('Cobol Paragraph','Cobol Section','Cobol CopyBook','Cobol Data Link','Cobol Entry Point')
 group by 1,2
 order by 2,1;

The result should look like this:

Objects that do not contribute to any transaction

The following SQL query will extract the object types that are not in any transaction. This is illustrated with COBOL in the following example but the query can be adapted to other object types:

set search_path=<prefix>_local;

select object_id, object_name, object_fullname, object_type_str,object_language_name
from CDT_OBJECTS where object_id not in (
    select distinct objc.object_id
    from dss_transaction dt, dss_transactiondetails dtd, CDT_OBJECTS obj, cdt_objects objc
    where dt.form_id =  obj.object_id
    and objc.object_id = dtd.child_id
    and dt.object_id = dtd.object_id
  union all
    select distinct objc.object_id
    from dss_datafunction dt, dss_datafunctiondetails dtd, CDT_OBJECTS obj, cdt_objects objc
    where dt.maintable_id =  obj.object_id
    and objc.object_id = dtd.table_id
    and dt.object_id = dtd.object_id
)
and object_fullname not like '[Unknown%'
and object_language_name != '<N/A>'
and object_language_name != 'N/A'
and object_type_str not like  '%Project'
and object_type_str not like  '%Directory'
and object_type_str not like  '%Folder'
and object_type_str not in ('Cobol Paragraph','Cobol Section','Cobol CopyBook','Cobol Data Link','Cobol Entry Point')
 order by 2,1;
 

The result should look like this:

Objects that do not contribute to any transaction and that are not called by another object

The following SQL query will extract the objects that are not part of any transaction and that are not called by any other object. This is illustrated with COBOL in the following example but the query can be adapted to other object types:

set search_path=<prefix>_local;

select obj.object_id,obj.object_name,obj.object_fullname
from CDT_OBJECTS obj
where obj.object_type_str = 'Cobol Program'
and object_fullname not like '[Unknown%'
and obj.object_id not in (     --- reduce the list to the programs that are not part of a transaction
select distinct objc.object_id
from dss_transaction dt, dss_transactiondetails dtd, CDT_OBJECTS obj, cdt_objects objc
where dt.form_id =  obj.object_id
and objc.object_id = dtd.child_id
and dt.object_id = dtd.object_id
union
select distinct objc.object_id
from dss_datafunction dt, dss_datafunctiondetails dtd, CDT_OBJECTS obj, cdt_objects objc
where dt.maintable_id =  obj.object_id
and objc.object_id = dtd.table_id
and dt.object_id = dtd.object_id)
and obj.object_id not in (--- reduce the list to the programs that are not called by anything else
select obj.object_id
from ctv_links cl,CDT_OBJECTS obj
where cl.called_id = obj.object_id
and obj.object_type_str = 'Cobol Program'
and object_fullname not like '[Unknown%'
)
order by 3;

The following image shows the result set returned by the query:

It is important to investigate this list. The objects with their associated links can be visualized in CAST Enlighten, as shown below:


A good way to investigate is to look at the comments inserted in the source code and to discuss with the SME:

In the above source code extract, we can see the program is an interface belonging to another application - it can therefore be considered as a transaction end point.

Adding new Transaction Entry Points

Use Free Definition rules in the CAST Transaction Configuration Center to select objects that should be considered as Transaction Entry Points (see Transaction configuration for more information). The following example uses COBOL programs but can be adapted to other types of objects as well:


The objective here is to add new program names to the object set defined by the above rule in a limited number of operations. The first thing to do is to collect the program names you want to add. This list can be built by executing the following SQL query:

set search_path=<Prefix>_local;
 
select '<value>'||obj.object_name||'</value>' as text
from CDT_OBJECTS obj
where obj.object_type_str = 'Cobol Program'
and object_fullname not like '[Unknown%'
and obj.object_id not in ( --- reduce the list to the programs that are not part of a transaction
select distinct objc.object_id
from dss_transaction dt, dss_transactiondetails dtd, CDT_OBJECTS obj, cdt_objects objc
where dt.form_id = obj.object_id
and objc.object_id = dtd.child_id
and dt.object_id = dtd.object_id
union
select distinct objc.object_id
from dss_datafunction dt, dss_datafunctiondetails dtd, CDT_OBJECTS obj, cdt_objects objc
where dt.maintable_id = obj.object_id
and objc.object_id = dtd.table_id
and dt.object_id = dtd.object_id)
and obj.object_id not in (--- reduce the list to the programs that are not called by anything else
select obj.object_id
from ctv_links cl,CDT_OBJECTS obj
where cl.called_id = obj.object_id
and obj.object_type_str = 'Cobol Program'
and object_fullname not like '[Unknown%'
);

The result should look like this:


The second thing to do is to create an empty Transaction Entry Point rule and save it. You can call it "Exposed Program" for example.
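The `<set>` definition used in the next step can also be assembled with a small script, which additionally removes the duplicate program names visible in the query result. This is a sketch, not CAST tooling; it reproduces the XML shape shown in this page, and the function name is ours:

```python
def build_value_set(program_names, object_type="CAST_COBOL_SavedProgram"):
    """Assemble the <set> definition for a TCC free-definition rule, deduplicating names."""
    values = "\n".join(
        f"    <value>{name}</value>" for name in sorted(set(program_names))
    )
    return (
        '<set>\n'
        '  <selection-criteria subobjects="no" externalobjects="yes">\n'
        '   <property name = "name" operator = "eq" >\n'
        f"{values}\n"
        '   </property>\n'
        '   <property name = "type" operator = "eq" >\n'
        f"    <value>{object_type}</value>\n"
        '   </property>\n'
        '  </selection-criteria>\n'
        '</set>'
    )

# print(build_value_set(["GND0130", "M3194262", "M3194262"]))
```

The returned string can then be pasted into the UPDATE statement against cal_objsetdef.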

The third operation is to replace the object set definition used by the Management Service rule by executing the following SQL query:

set search_path=<Prefix>_mngt;
 
update cal_objsetdef set setdefinition = '<set>
  <selection-criteria subobjects="no" externalobjects="yes">
   <property name = "name" operator = "eq" >
    <value>GND0130</value>
    <value>M3194262</value>
    <value>M3109CCV</value>
    <value>M3194734</value>
    <value>M3194734</value>
    <value>M3109CGU</value>
    <value>M3109UM2</value>
    <value>M3109CGX</value>
    <value>GNDC920</value>
    <value>M3194480</value>
    <value>M3194480</value>
    <value>GND0307</value>
    <value>M3109CGX</value>
    <value>GND0301</value>
    <value>M3109CCV</value>
    <value>M319473X</value>
    <value>M3109UM2</value>
    <value>M3194T20</value>
    <value>M3194263</value>
    <value>M3194BDS</value>
    <value>M3194T20</value>
    <value>M3194871</value>
    <value>M3194BDS</value>
    <value>M319473X</value>
    <value>M3109CGU</value>
    <value>M3194262</value>
    <value>GNDCONV</value>
    <value>M3194263</value>
    <value>M3194871</value>
   </property>
   <property name = "type" operator = "eq" >
    <value>CAST_COBOL_SavedProgram</value>
   </property>
  </selection-criteria>
</set>
' where setname = 'Exposed Program';

 

The result can be seen in TCC as shown below:

The Function Point computation can then be run again and the expected transactions should appear:

You can repeat the operation to check whether there are other objects that are not part of a transaction:

Check the database tables that have been excluded

Some database tables are automatically excluded from the Function Point computation process by a set of dedicated rules. Excluded database tables will not appear as deleted, ignored, or retained Data Functions, so it is important to validate the list to avoid unexpected exclusions. Currently, the only way to get this list is to generate the object set from the CAST Transaction Configuration Center "Built-in parameters" view:


You can extract the list of exclusion rules with the following SQL query:

set search_path=<prefix>_mngt;
select * from cal_ignoredtable;

The result should look like this:

The database tables selected by the exclusion rule are visible when you generate the object set in the CAST Transaction Configuration Center: 

These database tables can also be listed by executing the following SQL query:

set search_path=<prefix>_local;
 
Select obj.object_id, obj.object_name, obj.object_fullname, obj.object_type_str, fplt.appli_id
from fp_lookup_tables fplt, cdt_objects obj
where obj.object_id = fplt.object_id;

The result should look like this:

Appendix A: Checking links in call graphs

Objects with High Fan-Out

The purpose is to identify incorrect links or side effects on the Function Point counting caused by these links. The SQL query presented here identifies objects with a high Fan-Out (objects that call a lot of other objects) and must be executed against the Analysis Service. You can customize it with snapshot_id, application_id, and so on, as well as the type of objects (in this example, "C# Method" is considered, but you can search for "Java Method" in Java applications as well):

set search_path=<prefix>_local;
 
select count(L.caller_id), L.caller_id, O.object_name, O.object_fullname
from ctv_links L, ctv_guid_objects O
where L.caller_id = O.object_id
and O.object_type_str = 'C# Method' --Java Method
group by L.caller_id,O.object_name,O.object_fullname
order by 1 desc
limit 100;

The results should look like this:

In CAST Enlighten, these objects can lead to this type of graphical view:

 

Objects with High Fan-In

The purpose is to identify wrong links or side effects on the Function Point counting caused by these links. The SQL query presented here identifies objects with a high Fan-In (objects that are called by a lot of other objects) and must be executed against the Analysis Service. You can customize it with snapshot_id, application_id, and so on, as well as the type of objects (in this example, "C# Method" is considered, but you can search for "Java Method" in Java applications as well):

set search_path=<prefix>_local;
 
select count(O.object_id),O.object_id, O.object_name, O.object_fullname
from ctv_links L, ctv_guid_objects O
where L.called_id = O.object_id
and O.object_type_str = 'C# Method' --Java Method
group by L.called_id,O.object_id, O.object_name,O.object_fullname
order by 1 desc
limit 100;

The results should look like this:

In CAST Enlighten, these objects can lead to this type of graphical view:
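Both rankings above boil down to counting how often an object appears on the caller side or the called side of the link table. The sketch below computes fan-out and fan-in from exported (caller_id, called_id) pairs; the link rows shown are hypothetical:

```python
from collections import Counter

def fan_metrics(links):
    """Compute fan-out and fan-in per object from (caller_id, called_id) pairs."""
    fan_out = Counter(caller for caller, _ in links)  # how many calls each object makes
    fan_in = Counter(called for _, called in links)   # how many times each object is called
    return fan_out, fan_in

# Hypothetical rows exported from ctv_links:
links = [(1, 2), (1, 3), (1, 4), (2, 4), (3, 4)]
fan_out, fan_in = fan_metrics(links)
# fan_out[1] == 3 (object 1 calls three objects); fan_in[4] == 3 (object 4 is called three times)
```

The Counter.most_common() method then yields the same "top 100" ranking as the ORDER BY/LIMIT clauses in the queries above.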

Appendix B: Searching for potential Transaction Entry Points

The purpose is to identify missing links. The SQL query presented here identifies objects that are not called (ex: unreferenced methods) and must be executed against the Analysis Service. It can be customized with other types of objects (ex: "Java Method"):

set search_path=<prefix>_local;
 
select *
from ctv_guid_objects O
where O.object_id not in (select l.called_id from ctv_links L)
and O.object_type_str = 'C# Method'
limit 100;

This list must be validated by the SME and the application team and the transaction configuration must then be adjusted accordingly.

Appendix C: Searching for potential Transaction End Points

The purpose is to identify missing links. The SQL query presented here identifies objects that do not call any other object. This situation may happen if, for instance, a DAO object is analyzed and there is no corresponding database in the application boundary. This object can be seen as an interface and then it should be considered as Transaction End Point that contributes to the transaction.

This SQL query must be executed against the Analysis Service and can be customized with other types of objects (ex: "Java Method"):

set search_path=<prefix>_local;
 
select *
from ctv_guid_objects O
where O.object_id not in (select l.caller_id from ctv_links L)
and O.object_type_str = 'C# Method'
limit 100;

This list must be validated by the SME and the application team and the transaction configuration must be adjusted accordingly.
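The checks in Appendices B and C amount to set differences over the link table: an object that never appears as a called_id is a potential entry point, and one that never appears as a caller_id is a potential end point. A self-contained sketch, with hypothetical object IDs and link pairs:

```python
def boundary_candidates(object_ids, links):
    """Split objects into potential entry points (never called) and end points (calling nothing)."""
    callers = {caller for caller, _ in links}
    called = {callee for _, callee in links}
    entry_points = sorted(o for o in object_ids if o not in called)
    end_points = sorted(o for o in object_ids if o not in callers)
    return entry_points, end_points

# Hypothetical data: object 1 calls 2, object 2 calls 3
entries, ends = boundary_candidates([1, 2, 3], [(1, 2), (2, 3)])
# entries == [1] (never called), ends == [3] (calls nothing)
```

As stated above, both candidate lists remain suggestions until they are validated by the SME and the application team.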

Appendix D: List of Transaction End Points

The following SQL query searches for all the objects that have been identified as Transaction End Points. It must be executed against the Analysis Service.

set search_path=<prefix>_local;
 
select count(1) as used, dtd.child_id, obj.object_name, obj.object_fullname, obj.object_type_str, obj.object_language_name
from dss_transactiondetails dtd, cdt_objects obj
where dtd.childtype in (5, 6, 7)
and dtd.child_id = obj.object_id
group by 2,3,4,5,6
order by 1 desc,2 asc ,3 asc;

The results should look like this:


This list must be validated by the SME and the application team.
