CAST OMG-compliant Automated Function Points

Introduction

See ISO declaration of conformance for more information about the standards that CAST AIP conforms to for AFP.

CAST OMG-compliant Automated Function Points technology is an automatic function points counting method. OMG defines such automatic function points counting methods in the following specification:  Automated Function Points (AFP) - Object Management Group

CAST automates the count by using the structural information retrieved by CAST source code Analyzers (an architecture-based method). An example of the structural information retrieved is illustrated by the following image generated by CAST. CAST OMG-compliant Automated Function Points technology is based on the ability to retrieve:

  • Database structure and relationships (both SQL and Mainframe)
  • Transaction structure from the GUI or batches entry points down to the data

Measures used to calculate enhancement Function Points

CAST AIP uses two distinct measures to calculate enhancement Function Points - these can be chosen when you generate a snapshot:

OMG Automated Enhancement Points (AEP)

Note that this measure is used by default from CAST AIP 8.2.x onwards.

CAST integrates the estimation of OMG Automated Enhancement Points (AEP). Like EFP, this feature helps to evaluate, over time, the size of the evolutions implemented by teams producing applications. It computes and displays the data functions and transactional functions that have been modified, added, or deleted, as well as information on the technical components of the application that do not belong to any transaction. The unadjusted function points that are calculated are weighted with Complexity Factors based on the objects' complexity.

For more detailed information about AEP, please refer to CAST Automated Enhancement Points Estimation - AEP.

The AEP measure considers both the functional and the technical sides of the application enhancement. The computation of Automated Enhancement Points (AEP) is therefore divided into two parts: Automated Enhancement Function Points (AEFP) and Automated Enhancement Technical Points (AETP). The total AEP is the sum of the two: AEP = AEFP + AETP.

AEFP Computation

The total value of Automated Enhancement Function Points (AEFP) is calculated by adding the AEFP values of all Transactional Functions and Data Functions in the application. The AEFP of each individual Transactional Function / Data Function is calculated by multiplying its FP value by its Complexity Factor (AEFP = FP x CF). The Complexity Factor of a given Transactional Function / Data Function is an adjustment factor (defined by the OMG specification) which is calculated based on its status (added / modified / deleted) and the complexity of the objects inside the Transactional Function / Data Function.
NOTE: the value reported for AEFP includes only added, modified and deleted Functions. All unchanged Functions are automatically excluded from this value (their Complexity Factor is considered to be 0, so their AEFP value will also be 0).
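
As a minimal illustration, the following Python sketch aggregates AEFP under these rules; the data layout and function names are assumptions made for illustration, not CAST's actual implementation.

    # Hypothetical sketch of the AEFP aggregation rule described above
    # (data layout and function names are illustrative, not CAST's code).

    def aefp(functions):
        """Sum FP x CF over all Data Functions and Transactional Functions.

        Each item carries an unadjusted 'fp' value, a 'status' and an
        OMG-defined Complexity Factor 'cf'. Unchanged functions get
        CF = 0 and therefore contribute nothing.
        """
        total = 0.0
        for f in functions:
            cf = f['cf'] if f['status'] in ('added', 'modified', 'deleted') else 0.0
            total += f['fp'] * cf
        return total

    # The total AEP is then the sum of the functional and technical parts:
    # AEP = aefp(...) + AETP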

AETP Computation

The Automated Enhancement Technical Points calculation considers all technical objects in the application, i.e. those computational elements that are not within the Automated Enhancement Function Points scope.

OMG Enhancement Function Points (EFP)

In addition to AEP, CAST also integrates the estimation of Enhancement Function Points (EFP), also called Project Function Points. CAST therefore provides both baselining (sizing an entire application using Automated Function Points) and enhancement sizing (using Enhancement Function Points). With the Enhancement Function Points feature, CAST AIP is in a position to display detailed information about the changes between two versions of an application and between two snapshots of the same application.

This feature helps to evaluate, over time, the efficiency of teams producing applications. It is particularly useful in outsourcing contexts, as many outsourcing contracts are based on FP when it comes to determining the size of the deliverables. The feature computes and displays data functions and transactional functions that have been modified, added or deleted. For each application, CAST AIP will provide 3 different pieces of information (a small aggregation sketch follows the list):

  1. Added FPs: FPs counted for functions created in version n+1 (i.e. added by the enhancement project)
  2. Modified FPs: FPs counted for existing functions changed during the enhancement project
  3. Deleted FPs: FPs counted for functions deleted during the enhancement project.
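
A minimal sketch of how these three buckets could be computed, assuming two hypothetical snapshot inventories keyed by function name (illustrative only, not the CAST data model):

    # Illustrative sketch only: classify functions as added, modified or
    # deleted between two snapshots. The snapshot layout
    # (name -> (fp_value, checksum)) is an assumption.

    def efp_breakdown(previous, current):
        # A function absent from the previous snapshot is added; one absent
        # from the current snapshot is deleted; a changed checksum marks a
        # modified function.
        added = sum(fp for name, (fp, _) in current.items()
                    if name not in previous)
        deleted = sum(fp for name, (fp, _) in previous.items()
                      if name not in current)
        modified = sum(fp for name, (fp, csum) in current.items()
                       if name in previous and previous[name][1] != csum)
        return added, modified, deleted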

For detailed information about EFP, please refer to CAST Enhancement Function Points Estimation - EFP.

Differences between the two measures

The main differences between the legacy Enhancement Function Points (EFP) mode and the Automated Enhancement Points (AEP) mode (introduced in CAST AIP 8.2) are as follows:

  • EFP does not consider the technical side of the application evolution; AEP does (calculating Automated Enhancement Technical Points, AETP)
  • EFP uses a reduced transaction call graph (from Entry Points to End Points / Data Entities), whereas AEP considers the full transaction call graph (it also covers the paths that do not reach an End Point / Data Entity). This means that the number of objects in the transaction call graph is higher for AEP.
  • The adjustment factor used to calculate Enhancement Function Points (EFP) is called the "Impact Factor" and is based on fixed user-defined formulas; in contrast, the adjustment factor used to calculate Automated Enhancement Function Points is called the "Complexity Factor" and is based on dynamic OMG-defined formulas.

The table below summarizes these differences:

Mode | Transaction Call Graph | Adjustment Factor                                | Evolution
EFP  | Reduced                | Impact Factor (fixed user-defined formula)      | Functional only
AEP  | Full                   | Complexity Factor (dynamic OMG-defined formula) | Functional and Technical

Log messages when using the measures regarding full or reduced call graphs

When computing the Function Point values for an Application, the log file of the CAST Management Studio (i.e. the snapshot generation log) and the CAST Transaction Configuration Center will contain information about whether the computed call graph is "reduced" or "full". The information will look like this:

  • Number of applications with full call graph : value

or

  • Number of applications with reduced call graph : value

Note that:

  • When computing the Function Point values of an application in the CAST Transaction Configuration Center for Configuration purposes (i.e. using the Compute action), the call graph that will be computed will always be reduced.
  • When computing the snapshot of an application in the CAST Management Studio, either the reduced or the full call graph will be computed, because one or the other is necessary for measuring the enhancement. The type of graph that is used depends on the measurement mode selected in the Function Points tab of the Application editor: if the Enhancement Measure is set to AEP, the full call graph will be computed; otherwise the reduced call graph will be computed.

Unadjusted Data Functions

Estimation Process for Data Functions

The Unadjusted Data Functions metric for an application is computed in the following way:

  1. Identification of the Data Entities (groups of tables forming an Entity),
  2. Compute Record Element Types (RET) and Data Element Types (DET) for each Entity (~ number of tables, ~ number of non-redundant columns),
  3. Compute Function Points for each Entity,
  4. Compute Unadjusted Data Functions (the sum of all Entities' Function Points).

The main case, developed below, is for SQL databases and Mainframe IMS databases.

Identify the Data Entities

The estimation engine identifies data entities (OMG-compliant Automated Function Point ILF and EIF) by grouping Master-Detail tables based on their foreign key relationships and their naming convention (a group of Master-Detail tables is a Data Entity - an OMG-compliant Automated Function Point ILF), optionally using configuration data (see User Configuration - Database Table Prefixes below).

For a given table A (called the Master), the engine searches for attached tables, i.e. tables having a (1,1)-(x,x) relationship with table A. If an attached table shares a naming convention with the Master, it is set to be a Detail table of that Master table.
A Master table shares a naming convention with a Detail table if the first 4 characters of the Master table name match any part of the Detail table name.

As explained below, it is possible to configure Table Prefixes for a database schema. A table prefix is a set of characters that is systematically added at the beginning of table names, like 'T_', 'SYS_' or 'APP_'. Whenever prefixes are defined for a schema, they are removed from table names before the naming convention check.

As an example, for tables called 'T_SALES' and 'T_SALESDETAILS' and a prefix set by configuration to 'T_', the naming convention check will be done on 'SALES' and 'SALESDETAILS'.
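
The following Python sketch shows this prefix-stripping and first-4-characters matching rule applied to the example above; the helper names are hypothetical.

    # Sketch of the Master/Detail naming-convention check, with prefix
    # removal as described above (helper names are hypothetical).

    def strip_prefix(table, prefixes):
        for p in prefixes:
            if table.startswith(p):
                return table[len(p):]
        return table

    def shares_naming_convention(master, detail, prefixes=()):
        """True if the first 4 characters of the Master table name (after
        prefix removal) match any part of the Detail table name."""
        m = strip_prefix(master, prefixes)
        d = strip_prefix(detail, prefixes)
        return m[:4] in d

    # The example above: 'SALE' is found in 'SALESDETAILS', so the tables
    # are grouped into one Data Entity.
    assert shares_naming_convention('T_SALES', 'T_SALESDETAILS', ('T_',))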

For a database schema, the result is a list of Data Entities, each composed of one Master table and zero or more Detail tables.

Compute RET and DET for each Entity

RET = Record Element Types
= Number of tables belonging to the data entity (1 + number of Detail tables),
or = number of IMS segments,
or = 1 for GSAM/VSAM data files.

DET = Data Element Types
= Number of non-redundant columns across all tables of the data entity (redundant columns are columns with the same name),
or = number of the IMS segment's fields.
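
For the relational case, here is a minimal sketch of this computation; the table layout is hypothetical, and the column names other than STOR_ID and ORD_NUM are invented for illustration:

    # Sketch of the RET/DET computation for a relational Data Entity.
    # The table layout is an assumption; column names other than STOR_ID
    # and ORD_NUM are made up for illustration.

    def ret_det(entity_tables):
        """`entity_tables` maps each table of the entity to its columns."""
        ret = len(entity_tables)  # master + detail tables
        det = len({c for cols in entity_tables.values() for c in cols})
        return ret, det

    # Mirrors the SALES example below: 2 tables, 8 columns of which
    # STOR_ID and ORD_NUM are redundant -> RET = 2, DET = 6.
    sales_entity = {
        'SALES':       ['STOR_ID', 'ORD_NUM', 'ORD_DATE', 'PAYTERMS'],
        'SALESDETAIL': ['STOR_ID', 'ORD_NUM', 'TITLE_ID', 'QTY'],
    }
    print(ret_det(sales_entity))  # (2, 6)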

Compute Function Points for each Entity

Unadjusted Data Functions = Sum of (Data Entities FP). There are 2 modes used to compute Function Points for Data Entities:

The Basic mode assumes that all entities are ILF (Internal Logical File). It is used whenever the source analyzer cannot identify Insert, Update, Select and Delete statements.

The Advanced mode identifies ILF (Internal Logical File) and EIF (External Interface File). To do so, CAST uses the information from the source analyzer that describes the actions performed on the tables or data files: Insert, Update, Select, Delete statements, or Read/Write operations. "ILF are group of data maintained through an elementary process within the application boundary being counted." Maintained means the data is created, deleted or updated within the application. The Advanced mode works as follows:

Data Functions:

The goal is to differentiate ILF and EIF using the Insert/Update/Select/Delete link information. Any Data Entity that has one of the following characteristics will be identified as an ILF:

  • At least one table is inserted, deleted or updated,
  • At least one file is written.

Other Data Entities will be identified as EIF.
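
A minimal sketch of this classification, assuming the analyzer has already collected boolean link flags per Data Entity (the data layout is an assumption):

    # Sketch of the Advanced-mode ILF/EIF classification using
    # Insert/Update/Delete and file-write link information.

    def classify(entity):
        maintained = (entity['inserted'] or entity['updated'] or
                      entity['deleted'] or entity['file_written'])
        return 'ILF' if maintained else 'EIF'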

The ILF OMG-compliant Automated Function Points value is computed as follows:


              | 1 to 19 DET     | 20 to 50 DET    | 51 or more DET
0 or 1 RET    | Low = 7 FP      | Low = 7 FP      | Average = 10 FP
2 - 5 RET     | Low = 7 FP      | Average = 10 FP | High = 15 FP
6 or more RET | Average = 10 FP | High = 15 FP    | High = 15 FP

The EIF OMG-compliant Automated Function Points value is computed as follows:


              | 1 to 19 DET    | 20 to 50 DET   | 51 or more DET
0 or 1 RET    | Low = 5 FP     | Low = 5 FP     | Average = 7 FP
2 - 5 RET     | Low = 5 FP     | Average = 7 FP | High = 10 FP
6 or more RET | Average = 7 FP | High = 10 FP   | High = 10 FP
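
Both tables can be expressed as a simple lookup. The sketch below copies the bucket boundaries and FP values from the two tables above; the function name is hypothetical:

    # Sketch of the ILF/EIF weight lookup; bucket boundaries and FP values
    # are copied from the two tables above.

    def data_function_fp(ret, det, kind='ILF'):
        row = 0 if ret <= 1 else (1 if ret <= 5 else 2)
        col = 0 if det <= 19 else (1 if det <= 50 else 2)
        weights = {
            'ILF': [[7, 7, 10], [7, 10, 15], [10, 15, 15]],
            'EIF': [[5, 5, 7], [5, 7, 10], [7, 10, 10]],
        }
        return weights[kind][row][col]

    # The SALES example below (RET = 2, DET = 6) falls in the Low cell:
    print(data_function_fp(2, 6, 'ILF'))  # 7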

Compute Unadjusted Data Functions (sum of all Entities Function Points)

Unadjusted Data Functions = Sum of (Function Points of all Data Entities)

Example

As an example, this figure captured from CAST Enlighten shows a Data Entity. The tables that are part of the Data Entity are identified by retrieving all tables that have a Primary Key - Foreign Key relationship with the table called SALES (the Master table).
Then, within this set of tables, the engine identifies SALESDETAIL as part of the data entity because it shares a naming convention with the SALES table.

Example - SALES data function (ILF):

In the following example, the result of the counting will be 7 FP:

  • RET = 2 and DET = 6
  • Weight is Low
  • Result = 7 Function Points

RET = 2 because there are 2 tables and DET = 6 because there are 8 columns minus 2 redundant columns STOR_ID & ORD_NUM:

Assumptions

With the current Function Points Estimation method, the Data Function count is based on the following assumptions:

  • The Estimation method considers all data functions as Internal Logical Files (ILF), so it weights the External Interface Files (EIF) in the same manner as the internal ones.
  • This approximation is based on the ISBSG statistics (International Software Benchmarking Standards Group, http://www.isbsg.org/).
  • The ISBSG statistics show that within a given application the External Interface Files (EIF = Read-Only) account for 5% of the application's Function Point size, whereas Internal Logical Files (ILF = Read-Write) account for 22%. Given the small difference in weight compared to the proportions of ILF/EIF, this approximation is acceptable.
  • It is also assumed that all table columns are user-recognizable (all are DET).

Calibration using Transaction Configuration Center

You can use the CAST Transaction Configuration Center to calibrate the Function Points counting and take into account specific technical implementations and naming conventions. The CAST Transaction Configuration Center can help you filter out technical or temporary tables and files and accommodate technical naming conventions, thus calibrating the Function Points algorithm.

Excluding Technical Items

The CAST Transaction Configuration Center enables you to exclude technical elements before or after the Function Points estimation is executed. Users usually exclude temporary, template and audit trail data based on naming conventions or inheritance. Here are examples of typical excluded naming conventions (a small filter sketch follows the list):

  • TEMP, SESSION, ERROR, SEARCH, LOGIN, LOGON, FILTER (temporary data),
  • TEMPLATE (template data), HISTORY,
  • _OLD, AUDIT (auditing data).
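
An illustrative filter based on these naming conventions; the pattern list and helper are assumptions for illustration, not the Transaction Configuration Center's actual configuration format:

    # Illustrative filter for excluding technical/temporary items by
    # naming convention (patterns mirror the examples above).

    EXCLUDED_PATTERNS = ('TEMP', 'SESSION', 'ERROR', 'SEARCH', 'LOGIN',
                         'LOGON', 'FILTER', 'TEMPLATE', 'HISTORY',
                         '_OLD', 'AUDIT')

    def is_technical(name):
        upper = name.upper()
        return any(pattern in upper for pattern in EXCLUDED_PATTERNS)

    tables = ['SALES', 'TEMP_SALES', 'AUDIT_TRAIL', 'TITLES']
    print([t for t in tables if not is_technical(t)])  # ['SALES', 'TITLES']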

User Configuration - Database Table Prefixes

In order to increase the accuracy of the algorithm, it is possible to take technical naming conventions (table prefixes) into account when the algorithm groups several SQL tables into data entities. A table prefix is a set of characters that is systematically added at the beginning of table names, like 'T_', 'SYS_' or 'APP_'. Using the CAST Transaction Configuration Center, it is possible to configure several Table Prefixes for a given database/schema, so that the Data Function algorithm removes the prefixes from table names before trying to group several SQL tables into one data entity:

User Configuration - Data Entities

In order to increase the accuracy of the algorithm, it is possible to take technical naming conventions and base class inheritance into account to identify Data Entities other than the RDBMS defaults. By configuration, the user can provide a rule to automatically select the Data Entities of a given application (so that the list of Data Entities is made available to the Estimation engine). The selection of the Data Entities of a module is done using the following types of rules:

  • By naming (selection of an Object Type + a naming convention)
  • By inheritance (name of an Object, then all the objects that inherit directly or indirectly from that object)
  • By type (selection of an Object Type)
  • Free Definition (define more complex criteria for detecting objects that you want to include in the Function Point Calibration process via the creation of sets)

Note that CAST provides a predefined set of Data Entities in the "Built-in parameters node".

Unadjusted Transactional Functions

Estimation Process for Transactions

The Unadjusted Transactional Functions metric for an application is computed the following way by the CAST AIP estimation method, which:

  1. Identifies the transaction entry points (user GUI elements or batch entry points),
  2. Identifies the tables used directly or indirectly by each entry point,
  3. Computes FTR and DET for each entry point,
  4. Computes Function Points for each entry point,
  5. Computes Unadjusted Transactional Functions (the sum of all transaction Function Points).

Identify the transaction entry points

For the following technologies, the entry point list is automatically created based on the user form concepts that are built into the programming language:

  • .NET
  • ABAP
  • ASP
  • ASP .NET
  • CICS
  • Forms
  • JCL
  • JSP
  • PowerBuilder
  • Visual Basic

For other languages, such as pure Java, C/C++ and COBOL, the entry points list is created using the Transaction Configuration Center user setup (see Calibration using Transaction Configuration Center below).

Identify the tables used directly or indirectly by each entry point (FTR)

For each entry point, the FP estimation engine uses an impact analysis and a search algorithm to identify all the tables used directly or indirectly by the entry point. It explores all the dependencies starting from the entry point down to the data functions. When the entry point is a composite object or container (such as a class), the FP estimation engine explores all the dependencies, including those implemented inside sub-objects (such as methods).

For example, if the transaction entry point is a class, the engine will explore all the dependencies, including those implemented in the methods and fields of the class.

For COBOL programs only, the FP estimation engine also identifies the files and the IMS databases accessed by the entry point.

Compute FTR and DET for each entry point

For each Entry point, the following numbers are computed:

FTR = File Types Referenced
= Number of tables used directly or indirectly by the entry point + number of GSAM/VSAM files directly accessed by the entry point + number of IMS databases.

DET = Data Element Types
= Number of non-redundant columns across all tables used directly or indirectly by the entry point (redundant columns are columns with the same name) + (3 x number of COBOL files accessed).
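
A minimal sketch of this computation, under the same hypothetical data layout as in the Data Functions section:

    # Sketch of the FTR/DET computation for one entry point, following
    # the definitions above (the data layout is an assumption).

    def ftr_det(tables, gsam_vsam_files, ims_databases, cobol_files):
        """`tables` maps each table reached by the entry point to its
        columns; the other arguments are lists of accessed resources."""
        ftr = len(tables) + len(gsam_vsam_files) + len(ims_databases)
        det = (len({c for cols in tables.values() for c in cols})
               + 3 * len(cobol_files))
        return ftr, det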

Compute Function Points for each Entry point

For all the user entry points, the Function Points metric is computed as follows. There are 3 types of Transactional Entities:

  • EI - External Inputs,
  • EO - External Outputs,
  • EQ - External Inquiries.

The Advanced mode enables the differentiation between EI and EO/EQ (at this stage, it is not yet possible to differentiate EO from EQ). A transaction is identified as an EI if it contains at least one Insert, Update or Delete link on a table, or at least one Write link on a file/IMS database it uses. All other transactions will be identified as EO/EQ.

The EI number of Function Points is computed as follows:


           | 1 to 4 DET     | 5 to 15 DET    | > 15 DET
0 or 1 FTR | Low = 3 FP     | Low = 3 FP     | Average = 4 FP
2 FTR      | Low = 3 FP     | Average = 4 FP | High = 6 FP
> 2 FTR    | Average = 4 FP | High = 6 FP    | High = 6 FP

The EO/EQ transaction type will be computed as follows:


           | 1 to 5 DET     | 6 to 19 DET    | > 19 DET
0 or 1 FTR | Low = 4 FP     | Low = 4 FP     | Average = 5 FP
2 - 3 FTR  | Low = 4 FP     | Average = 5 FP | High = 7 FP
> 3 FTR    | Average = 5 FP | High = 7 FP    | High = 7 FP

When the Insert/Update/Delete/Select information is not available, the default values are set to the ones used for EI and EQ.
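
Both tables can again be expressed as a lookup. The sketch below copies the bucket boundaries and FP values from the EI and EO/EQ tables above; the function name is hypothetical:

    # Sketch of the transactional weight lookup; boundaries and FP values
    # are copied from the EI and EO/EQ tables above.

    def transactional_fp(ftr, det, kind='EI'):
        if kind == 'EI':
            row = 0 if ftr <= 1 else (1 if ftr == 2 else 2)
            col = 0 if det <= 4 else (1 if det <= 15 else 2)
            weights = [[3, 3, 4], [3, 4, 6], [4, 6, 6]]
        else:  # EO/EQ
            row = 0 if ftr <= 1 else (1 if ftr <= 3 else 2)
            col = 0 if det <= 5 else (1 if det <= 19 else 2)
            weights = [[4, 4, 5], [4, 5, 7], [5, 7, 7]]
        return weights[row][col]

    # The "w_gui_newsale" example below (FTR = 2, DET = 16, EI):
    print(transactional_fp(2, 16, 'EI'))  # 6 (High)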

Compute Unadjusted Transactional Functions (the sum of all Forms Function Points)

Unadjusted Transactional Functions = Sum of (Function Points of all User Forms)

Example

As an example, this image captured from CAST Enlighten shows a transactional function, to illustrate how the CAST Estimation algorithm counts function points for it. The PowerBuilder window called "w_gui_newsale" is listed as a language built-in user form, and the CAST Estimation algorithm automatically runs a search to identify the tables, data files, etc. that are used. In this example, the engine finds 2 File Types Referenced (FTR):

  • SALES
  • TITLES

The Salesdetail table has been found to be part of the SALES FTR.

Example - "w_gui_newsale" (EI)

In the following example, the result of the counting will be 6 FP:

  • FTR = 2 and DET = 16
  • Weight is High
  • Result = 6 Function Points

Assumptions

The OMG-compliant Automated Function Points standard requires the counter to qualify the "Primary Intent" of the elementary process (see the OMG-compliant Automated Function Points Counting Manual, Count Transactional Function chapter). This Primary Intent helps determine whether the screen function is an Input, an Output or an Inquiry (EI, EO, EQ). An algorithm cannot distinguish the "Primary Intent" of a transaction; this is why the CAST Function Points Estimation engine uses a default value for Transactional Function Points in general. The default value is set to the ones used for EI and EQ. The Transactional Functions count assumes that all columns of the accessed tables, and all file fields, are used in the user forms and are user-recognizable.

Calibration using Transaction Configuration Center

To compute the Transactional Functions, it is necessary to identify the User Interface elements (windows, web pages, or the buttons used on these windows and web pages) or the batch entry points. While the concepts of User Interface elements are built into many 4GL languages, they are not built into languages such as C/C++, Java and COBOL, where they depend on the library or framework used.

So if the application uses a specific library or framework, the user needs to configure the estimation algorithm in order to set up a rule that automatically selects the transaction entry points for the application to be measured. The automatic selection of the transaction entry points is done using the following types of rules (a configuration sketch follows the list):

  • Naming convention rules (selection of an object type plus a naming convention),
  • Inheritance rules (name of an object; all the objects that inherit directly or indirectly from that object will be selected as entry points),
  • A simple object.
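
An illustrative representation of such rules; the rule format and the example names (e.g. com.acme.BatchMain) are hypothetical, not the Transaction Configuration Center's actual format:

    # Illustrative entry-point selection rules (format and example names
    # are assumptions for illustration).
    import fnmatch

    RULES = [
        {'kind': 'naming', 'object_type': 'JavaClass', 'pattern': '*Servlet'},
        {'kind': 'inheritance', 'base': 'javax.servlet.http.HttpServlet'},
        {'kind': 'object', 'name': 'com.acme.BatchMain'},
    ]

    def is_entry_point(obj, rules=RULES):
        """`obj` has a 'name', a 'type' and a list of 'ancestors'."""
        for rule in rules:
            if rule['kind'] == 'naming':
                if (obj['type'] == rule['object_type'] and
                        fnmatch.fnmatch(obj['name'], rule['pattern'])):
                    return True
            elif rule['kind'] == 'inheritance':
                if rule['base'] in obj['ancestors']:
                    return True
            elif obj['name'] == rule['name']:
                return True
        return False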

Several configuration rules can be set up for a given application. These configurations are done using the CAST Transaction Configuration Center in the Entry Points node. CAST provides a list of predefined Entry Points as shown below:

Built-in support for EAI Services and Web Services

CAST offers built-in support for EAI Services and Web Services. Enterprises are deploying more and more IT applications that use web services and have no direct access to a relational database. To enable easy counting of such applications, the CAST FP estimation algorithm provides the ability to define a set of functions/programs that are used either as:

  • Data functions, which are then associated with an average Function Points count,
  • Or Transaction end points (without any Function Points count).

Example: the application does not use a database but simply sends messages to another application. In this case, data entities can be automatically identified by CAST through configuration. This is done using inheritance, naming conventions or a specific object type to identify the services that are to be counted as data entities, or as transaction end points without FP weight.

Transaction Identification: Path Removal

As explained above, the CAST FP estimation engine uses an impact and search algorithm that explores all the dependencies starting from the entry point down to the data functions. However, CAST field research has demonstrated that some specific applications have clusters of thousands of highly coupled objects. These clusters appear when a central component handles, in one way or another, all the transactions. Integrating these highly coupled clusters often leads to overweighting the transactions and to having all transactions use all data functions. This is why the CAST FP estimation engine removes the clusters of highly coupled objects from the results; the corresponding call paths are removed.