
A bit of history...

The use of Function Points for measuring software functional size was introduced in the mid-1970s and is used today by organizations worldwide. Allan Albrecht (IBM) was the first person to publicly release a method for functionally sizing software, called Function Point Analysis (Albrecht, 1979, 1981). Since its formation in 1986, the International Function Point Users Group (IFPUG) has continuously maintained and enhanced Albrecht's original method.

Evaluating the software Functional Size

Function Point is now a normalized metric that can be used consistently with an acceptable degree of accuracy. Its value lies in the ability to measure the size of any software in terms of the functionality provided to end users. Function Point counting is qualified as "technology agnostic" in that it does not depend on the technologies or programming languages used.

The Function Point counting method evaluates a software deliverable and measures its size based on its functional characteristics. It takes into consideration the following constituents of an application:

  • External Inputs: Input data that is entering a system (logical transaction inputs, system feeds)
  • External Outputs and External Inquiries: data that is leaving the system (on-line displays, reports, feeds to other systems)
  • Internal Logical Files: data that is processed and stored within the system (logical groups of user defined data)
  • External Interface Files: data that is maintained outside the system but is necessary to satisfy a particular process requirement (interfaces to other systems)

Along with selected other measures, Function Points can be used by organizations for different purposes:

  • Quality and productivity analysis
  • Estimating the costs and resources required for software development, enhancement, and maintenance
  • Normalizing data used in software comparisons
  • Determining the size of a purchased application package (Commercial Off-the-Shelf or customized systems) by sizing all the functionalities included in the package
  • Enabling users to determine the Return on Investment (RoI) of an application by sizing the functionalities that specifically match the requirements from their organization

Keep in mind

Function Point is a unit of measurement for the functional size of software applications, just as meters and inches measure distance and degrees measure temperature.

There are several basic usages and characteristics: the metric measures the amount of functionality in a software application, and the larger the number of Function Points, the more functionality is implemented in the application. Function Points are also used to size business requirements.

All systems have input, output, and storage components, and these components are always considered when estimating the number of Function Points in a system. An application is composed of interwoven elementary processes, many of which only make sense once they are woven together. All of these elementary processes make up the entire software application, even when they are not built by the same person. A software application can therefore be defined as a set of elementary processes that are woven together, become interdependent, and form what is called a software application.

In Function Points we look at data that is at rest as well as data in motion. Some elements are viewed as persistent, such as storage elements like customer and student enrollment data files. Other elements are viewed as data in motion via transactions such as: Add customer, Inquire on an employee, or Print check.

Function Point Usage Scenarios

There are three frequent scenarios in using Function Points: measuring software size in order to estimate effort and cost, normalizing other measures, and benchmarking to help decision-making.

  1. Function Points have been defined around components that can be identified in a well-written specification. Consequently, Function Points can be counted from the specification before other sizing measures, such as lines of code, are available. This is why many commercial cost estimation formulas use Function Points as the preferred sizing measure to derive effort and cost figures. These formulas can be adjusted to improve accuracy as soon as the number of Function Points is available for the delivered software.
  2. Function Points are frequently used to normalize other measures. For instance, the total number of defects detected in a software system can be normalized by the number of Function Points to obtain the defect density per Function Point. This allows a better comparison between systems that differ in size or in programming language, as illustrated by the sketch after this list.
  3. Because they are counted from functional constructs and are independent of the programming language, Function Points are a preferred method for comparing software systems when benchmarking sub-contractor productivity. As a consequence, productivity KPIs are often based on Function Points.
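
As a quick illustration of the normalization use case, here is a minimal sketch; the defect counts and sizes below are invented for illustration only:

```python
def defect_density(defects: int, function_points: int) -> float:
    """Defects per Function Point, used to compare systems of different sizes."""
    return defects / function_points

# The same defect count looks very different once normalized by functional size.
print(defect_density(defects=120, function_points=800))   # 0.15 defects per FP
print(defect_density(defects=120, function_points=2400))  # 0.05 defects per FP
```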

Keep in mind

Function Point is a unit of measurement that can be used on the following use cases:

  1. Normalization & Benchmarking: detect portfolio outliers, identify improvement opportunities, and track the evolution of size, risk, complexity, and quality
  2. Productivity Measurement: monitor, track, and compare ADM teams' utilization, delivery efficiency, throughput, and quality of outputs
  3. Quantify the Effectiveness of Transformation Initiatives: optimize operating costs while preserving throughput and de-risking business transformation initiatives
  4. ADM Supplier Outcome Measurement: provide visibility to management; manage risk, quality, and throughput through enhanced Service Level Agreements
  5. Optimizing ADM Estimation: provide visibility to the estimation team to better understand challenges and fine-tune effort estimation ratios.

Counting Function Points

Below is a summary of the IFPUG manual counting process (note that the IFPUG Counting Practices Manual describing the whole process is about 370 pages).

This manual process is usually based on the requirements and specification defined for the application. However, if those documents no longer exist, the Function Points analyst uses the live application itself as well as the user documentation or any documents describing the application's Entity Relationship Diagram and Data Flow Diagram.

The manual process for counting Function Points for a given application is described as follows:

  1. Determine the Application Boundaries
  2. Identify the Data Functions
  3. Estimate the weight of Data Functions
  4. Identify the Transactional Functions
  5. Estimate the weight of Transactional Functions
  6. Get the Unadjusted Function Points
  7. Calculate the Value Adjustment Factor
  8. Get the Functional Size

Step 1: Determine the Application Boundaries

The application boundary has to be determined first. In this step, the Function Points analyst distinguishes functionalities that are provided by the application from those that come from outside the application (for example, external reference data). 

Keep in mind

Each application, defined by a system boundary, has its own set of inputs, outputs, and storage. Applications share information with users (e.g., through on-line screens) and with other applications (e.g., through APIs). In terms of Function Points, it is important to draw the boundary between these applications.

The boundary separates one application from another; interactions with other applications are viewed as interfaces of the targeted application. Since the Function Point metric will be correlated with other metrics to obtain productivity measures, it is important to define the same boundary for all the metrics. Such metrics can include staff, the number of team members, or the cost of development, design, test, or maintenance. Metrics used for comparison may include Function Points with duration, duration up to the release candidate, and overall size.

The boundary is drawn from the user's point of view. Each application has its own set of inputs, outputs, and storage, and the boundary indicates the border between the application being measured and the external applications or users: an application can interact with a user or with another application.

Step 2: Identify the Data Functions

Data Functions are responsible for storing and maintaining data within the application.

There are 2 different types of Data Functions:

  • Internal Logical File (ILF)
  • External Interface File (EIF)

ILFs are groups of logical data or control information (e.g., an API) that are maintained by the application itself. ILFs must satisfy the following rules:

  1. The group of data or control information is logical and user-defined
  2. The group of data is maintained through an elementary process within the application boundary.

EIFs, like ILFs, are groups of logical, user-defined data or control information. The major difference is that EIFs are not maintained within the application boundary.

Keep in mind

At a high level, there are two types of components used for Function Point counting. Data Functions include Internal Logical Files (ILFs) and External Interface Files (EIFs); they represent the functionality provided to a user in order to meet internal and external data requirements:

  • ILF: the table is inside the boundary and is modified by application components. This first type of Data Function is maintained by the application's users.
  • EIF: the table is outside the boundary. In this case, users are not in charge of maintaining this second type of Data Function.

Like other components, the complexity of Data Functions is rated low, average, or high. It is based on interactions with the storage elements, such as the following relations:

  • one to one
  • one to many
  • many to one
  • many to many

Step 3: Estimate the weight of Data Functions

In order to assign a weight to a Data Function, the Function Points analyst counts the sub-groups of data (Record Element Types, RET) and the user-defined elementary pieces of data (Data Element Types, DET) existing in that Data Function.

The weight and the associated number of Function Points of each Data Function is then given by the following table:

              | 1 to 19 DET | 20 to 50 DET | 51 or more DET
1 RET         | Low         | Low          | Average
2 - 5 RET     | Low         | Average      | High
6 or more RET | Average     | High         | High

The number of Function Points differs slightly between ILF and EIF:

        | Internal (ILF) | External (EIF)
Low     | 7 FP           | 5 FP
Average | 10 FP          | 7 FP
High    | 15 FP          | 10 FP

Keep in mind

Logical and Interface files are containers for storing data. Data Element Types are elementary pieces of data that are user-defined, non-recursive, and dynamic rather than static. Examples include a customer name, an address, or a command action.

Data Element Types can be stored in or retrieved from a file. They are grouped into Record Element Types, which typically correspond to relational database tables, hierarchical database root segments, or files with their columns, child segments, or file fields.
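
To make the table lookups concrete, here is a minimal sketch (illustrative Python, not part of the IFPUG manual or the CAST tooling; the function names are ours) that rates a Data Function from its RET/DET counts and converts the rating into Function Points:

```python
def data_function_complexity(ret: int, det: int) -> str:
    """Return 'Low', 'Average' or 'High' from the RET/DET counts (tables above)."""
    if ret <= 1:
        return "Low" if det <= 50 else "Average"
    if ret <= 5:
        if det <= 19:
            return "Low"
        return "Average" if det <= 50 else "High"
    # 6 or more RET
    return "Average" if det <= 19 else "High"

DATA_FUNCTION_POINTS = {
    "ILF": {"Low": 7, "Average": 10, "High": 15},
    "EIF": {"Low": 5, "Average": 7, "High": 10},
}

def data_function_points(kind: str, ret: int, det: int) -> int:
    """Function Points for an ILF or EIF of the given complexity."""
    return DATA_FUNCTION_POINTS[kind][data_function_complexity(ret, det)]

# Example: an ILF with 1 RET and 10 DETs is rated Low and is worth 7 FP.
assert data_function_points("ILF", ret=1, det=10) == 7
```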

Step 4: Identify the Transactional Functions

The Transactional Functions provide the functionalities to the user in order to process or retrieve data. There are three types of Transactional Functions:

  • External Input (EI)
  • External Output (EO)
  • External Inquiry (EQ).

An External Input (EI) "imports" information from outside the boundary into the application, whereas an External Output (EO) or External Inquiry (EQ) "exports" information from inside the boundary to the user or to another application. From the user's point of view, each Transactional Function is an elementary process, defined as the smallest unit of activity that is self-contained and leaves the application in a consistent state.

Processes often perform similar functions, so it may be difficult to determine the type of a function. To help Function Points analysts, IFPUG defines the concept of Primary Intent to clarify the identification of function types.

The following table displays the Primary Intent per Transactional Function type:

Purpose                          | Input (EI)   | Output (EO)  | Inquiry (EQ)
Alter behavior of the system     | Primary Int. | May be       | NA
Maintain data                    | Primary Int. | May be       | NA
Present information to the user  | May be       | Primary Int. | Primary Int.


Keep in mind

Applications interact with end users and exchange data with other applications. These interactions are managed by external interfaces.

Three types of Transactional Functions are defined with regard to the interface used to communicate with users and applications:

  • External Input (EI): manages incoming information sent by users or other applications and saves it into data storage
  • External Output (EO): provides users and other applications with information, possibly derived or calculated; it may also update data storage
  • External Inquiry (EQ): retrieves information and sends it to users or other applications without updating data storage

Like other components, the complexity of Transactional Functions is rated low, average, or high. It is based on interactions with the storage elements, such as the following relations:

  • one to one
  • one to many
  • many to one
  • many to many

Step 5: Estimate the weight of Transactional Functions

To estimate the weight of a Transactional Function, the Function Points analyst counts the number of "File Type Referenced" objects (FTR) used by the function. An FTR is a logical file or an interface file, that is, either an ILF (used by the function for input and/or output) or an EIF (used by the function for input only). Each FTR is then associated with a number of Data Element Types (DET).

The number of Function Points assigned to a Transactional Function depends on its complexity, which is determined by the following tables:

 

 

 

              | 1 to 4 DET | 5 to 15 DET | 16 or more DET
0 or 1 FTR    | Low        | Low         | Average
2 FTR         | Low        | Average     | High
3 or more FTR | Average    | High        | High

        | EI   | EO   | EQ
Low     | 3 FP | 4 FP | 3 FP
Average | 4 FP | 5 FP | 4 FP
High    | 6 FP | 7 FP | 6 FP

Keep in mind

An external interface interacts with logical or interface files in order to display or update data. The element types to consider in this interaction are the DET (Data Element Type) and the FTR (File Type Referenced). The FTRs are the data entities identified when discovering the logical or interface files. An FTR is a logical group of Record Element Types (RET), which in turn contain Data Element Types (DET).

  • External Input (EI): information comes in through the interface to update data storage; the external input injects data into the system.
  • External Output (EO): takes information from data storage and provides the values through interfaces; it may also include derived data or update data storage.
  • External Inquiry (EQ): retrieves information from data storage and delivers it through interfaces without updating the data.
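
Here is a companion sketch (same illustrative assumptions and naming as the Data Function example above) that rates a Transactional Function from its FTR/DET counts and converts the rating into Function Points using the tables above:

```python
def transactional_complexity(ftr: int, det: int) -> str:
    """Return 'Low', 'Average' or 'High' from the FTR/DET counts (tables above)."""
    if ftr <= 1:
        return "Low" if det <= 15 else "Average"
    if ftr == 2:
        if det <= 4:
            return "Low"
        return "Average" if det <= 15 else "High"
    # 3 or more FTR
    return "Average" if det <= 4 else "High"

TRANSACTIONAL_FUNCTION_POINTS = {
    "EI": {"Low": 3, "Average": 4, "High": 6},
    "EO": {"Low": 4, "Average": 5, "High": 7},
    "EQ": {"Low": 3, "Average": 4, "High": 6},
}

def transactional_function_points(kind: str, ftr: int, det: int) -> int:
    """Function Points for an EI, EO or EQ of the given complexity."""
    return TRANSACTIONAL_FUNCTION_POINTS[kind][transactional_complexity(ftr, det)]

# Example: an EO reading 2 FTRs with 10 DETs is rated Average and is worth 5 FP.
assert transactional_function_points("EO", ftr=2, det=10) == 5
```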

Step 6: Get the Unadjusted Function Points

Once the weight of all functions has been determined, the Unadjusted Function Points metric can be produced. It corresponds to the first estimate of the functional size of the application.

The adjustment factor is deprecated in the manual Function Points counting process and is not managed by CAST AIP. The Unadjusted Function Points metric is therefore the final measure produced by CAST AIP and is considered the Functional Size of the application.

 

Step 7: Calculate the Value Adjustment Factor

Once Data Functions and Transactional Functions have been identified and weighted, it remains to adjust the metric with the Value Adjustment Factor (VAF). This factor uses the General System Characteristics (GSC) method and takes the specific characteristics of the application into account: 14 influence factors are evaluated for the application, each on a scale from zero (no influence) to five (strong influence).

#  | General System Characteristic | Brief Description
1  | Data communications           | How many communication facilities are there to aid in the transfer or exchange of information with the application or system?
2  | Distributed data processing   | How are distributed data and processing functions handled?
3  | Performance                   | Was response time or throughput required by the user?
4  | Heavily used configuration    | How heavily used is the current hardware platform where the application will be executed?
5  | Transaction rate              | How frequently are transactions executed (daily, weekly, monthly, etc.)?
6  | On-line data entry            | What percentage of the information is entered on-line?
7  | End-user efficiency           | Was the application designed for end-user efficiency?
8  | On-line update                | How many ILFs are updated by on-line transactions?
9  | Complex processing            | Does the application have extensive logical or mathematical processing?
10 | Reusability                   | Was the application developed to meet one or many users' needs?
11 | Installation ease             | How difficult is conversion and installation?
12 | Operational ease              | How effective and/or automated are start-up, back-up, and recovery procedures?
13 | Multiple sites                | Was the application specifically designed, developed, and supported to be installed at multiple sites for multiple organizations?
14 | Facilitate change             | Was the application specifically designed, developed, and supported to facilitate change?

Each of the 14 influence factors is rated from zero to five. Summing the 14 GSC ratings for an application gives a value between 0 and 70; this sum is referred to as the Total Degree of Influence (TDI).

The TDI is then inserted into the following equation:

  • VAF = (TDI * 0.01) + 0.65
  • VAF designates the Value adjustment factor.

As we can see, this factor ranges from 0.65 (no influence for any of the 14 characteristics) to 1.35 (highest influence for all of them).

The related project-level counting formulas are:

  • Development: DFP = ADD + CFP
  • Enhancement: EFP = ADD + CHGA + CFP + DEL
  • Application after DEV: AFP = ADD
  • Application after ENH: AFPA = (AFPB + ADD + CHGA) - (CHGB + DEL)

Legend:

  • ADD: size of the functions being added by the enhancement project
  • AFPA: Application Function Point count after the enhancement project
  • AFPB: Application Function Point count before the enhancement project
  • CFP: size of the conversion functionality
  • CHGA: size of the functions being changed by the enhancement project, as they are / will be after implementation
  • CHGB: size of the functions being changed by the enhancement project, as they are / were before the project commenced
  • DEL: size of the functions being deleted by the enhancement project
  • DFP: Development project Function Point count
  • VAF: Value Adjustment Factor. The adjustment factor can adjust a Function Point count by plus or minus 35%.
  • TDI: Total Degree of Influence. TDI is used to compute the Value Adjustment Factor: VAF = (TDI * 0.01) + 0.65.
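
As a minimal illustrative sketch (not an official IFPUG or CAST implementation), the project-level formulas above can be written as follows; the figures in the example are invented:

```python
def development_fp(add: float, cfp: float) -> float:
    """Development project count: DFP = ADD + CFP."""
    return add + cfp

def enhancement_fp(add: float, chga: float, cfp: float, deleted: float) -> float:
    """Enhancement project count: EFP = ADD + CHGA + CFP + DEL."""
    return add + chga + cfp + deleted

def application_fp_after_enhancement(afpb: float, add: float, chga: float,
                                     chgb: float, deleted: float) -> float:
    """Application count after enhancement: AFPA = (AFPB + ADD + CHGA) - (CHGB + DEL)."""
    return (afpb + add + chga) - (chgb + deleted)

# Example: a 200 FP application with 30 FP added, functions changed from 25 FP
# to 20 FP, and 10 FP deleted ends at (200 + 30 + 20) - (25 + 10) = 215 FP.
assert application_fp_after_enhancement(afpb=200, add=30, chga=20, chgb=25, deleted=10) == 215
```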

 

Every application has its own unique VAF, and VAFs generally do not change significantly after an application is initially developed. However, the ISO version of the counting process does not use this adjustment, and IFPUG now considers the VAF optional.

Step 8: Get the Functional Size

At the end of the process, the Adjusted Function Points value can be calculated to provide the expected functional size of the application:

  • Functional Size = Unadjusted Function Points x Value Adjustment Factor
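
Here is a minimal sketch of Steps 6 to 8 (illustrative only; the GSC ratings and the unadjusted count below are invented): sum the 14 GSC ratings into the TDI, derive the VAF, and adjust the Unadjusted Function Points.

```python
def value_adjustment_factor(gsc_ratings) -> float:
    """VAF = (TDI * 0.01) + 0.65, where TDI is the sum of the 14 GSC ratings (0-5 each)."""
    assert len(gsc_ratings) == 14 and all(0 <= r <= 5 for r in gsc_ratings)
    tdi = sum(gsc_ratings)  # Total Degree of Influence, between 0 and 70
    return tdi * 0.01 + 0.65

def functional_size(unadjusted_fp: float, gsc_ratings) -> float:
    """Functional Size = Unadjusted Function Points x VAF."""
    return unadjusted_fp * value_adjustment_factor(gsc_ratings)

# Example: 14 ratings of 3 give TDI = 42 and VAF = 1.07,
# so 250 unadjusted FP adjust to 267.5 FP.
print(round(functional_size(250, [3] * 14), 2))  # 267.5
```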

Measuring Functional Changes

Maintenance tasks include all activities that keep an application operational, whether or not its functionalities are modified. These tasks are carried out in the environment and the technical infrastructure in which the application operates.

Conversion and re-platforming activities are not addressed in this guideline. Conversion can be, for example, changing the programming language used to implement the features, transferring the application from one system to another, or changing the data storage to accommodate the introduction of a new database management system. As a consequence, conversion activities are considered technical; even if they have a direct impact on the productivity measured for a given team, they should not be part of the enhancement measure, which focuses on functional aspects. This point is addressed in a dedicated section.

The term "enhancement" refers to modifications made to application features. Operations done on technical components are not taken into account by the Enhancement Function Points measure.

Once an enhancement is done, it is sometimes necessary to develop a specific conversion mechanism to upgrade data to the new version of the application. This mechanism can be considered a new development that can be measured using the OMG Automated Function Point method. Function Point analysis sees the Information System as a set of Transactional Functions and Data Functions; this principle is applied to the Enhancement Function Point (EFP) process.

There are 3 types of enhancements that are considered when counting Enhancement Function Points:

  • Added development: enhancements resulting only in the addition of new functions, with no changes to existing functions.
  • Modified development: enhancements resulting in the addition of new functions and/or the modification of existing functions.
  • Deleted development: enhancements resulting only in the deletion of existing functions, with no changes to existing functions.

Enhancement Function Point is the unit for measuring the impact on features between two versions of an application.

Counting Enhancement Function Points

Six steps are necessary to determine the size of an enhancement, expressed in Enhancement Function Points:

  1. Find out the Transactional Functions and Data Functions within the scope of the enhancement project and determine their functional size
  2. Identify the added Transactional Functions and Data Functions
  3. Identify the deleted Transactional Functions and Data Functions
  4. Identify the modified Data Functions and determine the associated Impact Factor
  5. Identify the modified Transactional Functions and determine the associated Impact Factor
  6. Calculate the number of Enhancement Function Points

 

Automated Function Points:
  • Measure the number of transactions managed by the application
  • Measure the amount of functionalities

Enhancement Function Points:

  • Measure the number of modifications (added, updated, deleted) done between two versions of the application

Keep in mind

When the first version of an application is measured, all the functions are considered new. The measure of subsequent versions will show functions that have been added, deleted, or modified.

Step 1: Find out the Transactional Functions and the Data Functions within the scope of the enhancement

The enhancement proposal, the functional documentation of the application, and the number of Function Points estimated for the current version of the application are used to identify the Transactional Functions and the Data Functions within the scope of the enhancement project. Without the Function Points metric for the current version of the application, it is impossible to correctly identify the functions affected by the enhancement.

The Automated Function Points (AFP) measure will give the entire set of Transactional Functions and Data Functions with their complexity and their associated number of Function Points. This will be the baseline for Enhancement Function Points counting. 

Step 2: Identify the added Transactional Functions and Data Functions

The enhancement proposal should specify the Transactional Functions and the Data Functions to be added to the application. From the proposal it should be possible to identify the new functions and then to calculate their functional size by applying the OMG AFP methodology. The unadjusted size of the added functionalities is expressed in Function Points.

An Impact Factor (IF) is used to adjust the added Function Points, leading to the formula: Adjusted EFPadd = IF x AFPadd

How to change the weight

You can alter the Impact Factor value by modifying the dedicated custom stored procedure which will be automatically triggered during the computation. The default value for added transactions is set to 1.

Step 3: Identify the deleted Transactional Functions and Data Functions

The Data Functions and the Transactional Functions that have been deleted from the previous version of the application are identified from the enhancement proposal. The number of Function Points they represent is determined and the unadjusted size of the deleted functions is expressed in Function Points. 

An Impact Factor is used to adjust the deleted Function Points, leading to the formula: Adjusted EFPdel = IF x AFPdel

How to change the weight

You can alter the Impact Factor value by modifying the dedicated custom stored procedure which will be automatically triggered during the computation. The default value for deleted transactions is set to 1.

Step 4: Identify the modified Data Functions and determine the associated Impact Factor

A Data Function can be either an Internal Logical File (ILF) or an External Interface File (EIF). Data Functions are assessed to identify those that:

  • had an internal change: DETs have been added, deleted or changed
  • now have a new type without any internal change (that is, an EIF has changed into an ILF or vice versa).

The change is detected by checking the checksum of the Data Functions. 

Determining which Data Functions have been modified and how many Function Points the modification represents is done by applying the standard AFP rules; the result is an unadjusted count expressed in Function Points.

An Impact Factor is used to adjust the modified Function Points, leading to the formula: Adjusted EFPmod = IF x AFPmod

Note

The Impact Factor value for modified Data Functions is set to 1 in CAST AIP and cannot be changed.

Step 5: Identify the modified Transactional Functions and determine the associated Impact Factor

A Transactional Function is considered changed if it has been altered but keeps the same name and the same purpose after the enhancement. A Transactional Function may also be affected by changes to Data Functions. Both directly and indirectly impacted Transactional Functions are taken into account in the Enhancement Function Points measure.

This means that a transaction is considered as modified when at least one of the following conditions is satisfied:

  • the transaction is affected by a DET that is added, changed, or deleted
  • the transaction is affected by a Logical File (ILF or EIF) that is added, changed, or deleted
  • the user interface has been functionally changed (for example, the composition of a screen or a report)
  • the business logic supporting the transaction has been changed (for example, edit rules or calculations performed on the transaction data have changed)

The functional size after the change is estimated by applying the OMG AFP methodology; the result is an unadjusted count expressed in Function Points.

An Impact Factor is used to adjust the modified Function Points, leading to the formula: Adjusted EFPmod = IF x AFPmod

How to change the weight

You can alter the Impact Factor value by modifying the dedicated custom stored procedure which will be automatically triggered during the computation. The default value for modified transactions is set to 1.

Step 6: Calculate the number of Enhancement Function Points

The size of the enhancement project is the total number of Enhancement Function Points for all the affected Transactional Functions and Data Functions:

EFP TOTAL = ∑ EFP ADDED + ∑ EFP DELETED + ∑ EFP CHANGED
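
A minimal sketch of this total (illustrative only; the sizes are invented, and the default impact factors of 1 follow the defaults mentioned in Steps 2 to 5 above):

```python
def enhancement_function_points(added, deleted, changed,
                                if_add=1.0, if_del=1.0, if_mod=1.0):
    """EFP total = adjusted added + adjusted deleted + adjusted changed sizes (in FP)."""
    return (if_add * sum(added)
            + if_del * sum(deleted)
            + if_mod * sum(changed))

# Example: 11 FP added, 4 FP deleted, two modified transactions of 3 FP each.
print(enhancement_function_points(added=[11], deleted=[4], changed=[3, 3]))  # 21.0
```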

Case studies

This section presents several case studies that illustrate Automated Function Point counting and Enhancement Function Point counting through different scenarios.

Scenario 1 - A basic example

Context

A transaction processes an input file with 10 fields and produces an output file with 12 fields. Business logic associates the 10 input fields with the corresponding output fields. In this scenario, let's assume the business logic is treated as a constant in the Function Point count.

Question: What would be the number of Function Points?

1/ Determine external interfaces: 

  

2/ Determine logical files: 

 

 

3/ Determine the total number of Function Points by adding Function Points for the external logical files and Function Points for interfaces: 

Answer:

The answer to the question is: Total unadjusted Function Points = Transactional FP + Data FP = 4 + 7 = 11 FP
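
One plausible way to reproduce this 4 + 7 = 11 FP result with the lookup helpers sketched earlier on this page, assuming the transaction is rated as a Low External Output and the data as a Low Internal Logical File:

```python
# Illustrative only: reuses data_function_points() and
# transactional_function_points() from the sketches above.
transactional_fp = transactional_function_points("EO", ftr=1, det=10)  # Low -> 4 FP
data_fp = data_function_points("ILF", ret=1, det=10)                   # Low -> 7 FP
print(transactional_fp + data_fp)                                      # 11 FP
```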

Scenario 2 - Impact of the implementation

Context

Because the file to be processed in scenario 1 is large, it will be processed through 16 distinct streams. This can be implemented using the following 3 programs:

  • Program 1: the program takes the original input file and splits it into 16 files based on the range of values of a given field in the data. The file layout remains unchanged.
  • Program 2: the program is the same as the one used in scenario 1 but is invoked 16 times.
  • Program 3: a program is created which takes the 16 output files and merges the results into a single file.

Question 1: What would the number of Function Points be for each of the 3 programs?

Question 2: Since program 3 is invoked 16 times by the scheduler, does this change the number of Function Points?

The boundary of the application includes Program 1, Program 2 and Program 3. The transaction still has one unique report at the end and reads the same information as in scenario 1. The number of Function Points is the same as scenario 1.

In terms of Enhancement Function Points, since the processing is split into 3 programs, if one of those programs is modified the entire transaction will be flagged as a modified transaction.

The answer to the question 1 is: The number of Function Points does not depend on the number of programs but is calculated for the whole transaction. As a consequence, the total number of Function Points is: Transactional Function FP + Data Function FP = 4 + 7 = 11 FP

The answer to the question 2 is: The number of times a program carries out an action on temporary files does not affect the number of Function Points.

Scenario 3 - Impact of temporary files

Context

In this scenario, programs 1 and 3 are those used in scenario 2. A program 2b reads the 16 files and applies the same business logic to each of them, creating 16 output files. This program is called once.

Question: What would be the number of Function Points for program 2b?

The boundary of the application includes Program 1, Program 2b, and Program 3. This transaction reads the same information as in scenario 1 and still produces one unique report at the end. The number of Function Points is the same as in scenarios 1 and 2; creating 16 temporary files does not affect the number of Function Points.

In terms of Enhancement Function Points, since the program is split into 3 programs, if one of those programs is modified, then the entire transaction will be flagged as a modified one. If some other mechanisms or programs (SAFR) are also involved in the process, the modification of those programs will also affect the Enhancement Function Point count.

Total unadjusted function points = Transactional FP + Data FP = 4 + 7 = 11 FP

The answer to the question is: The number of Function Points is not counted per program but per transaction. The number of times a program performs an action on temporary files does not affect the number of Function Points.

Scenario 4 - Impact on scheduler tasks

Context

The scheduler is configured to execute a program multiple times each day, with specific parameters depending on the range of data to be processed.

Question: How does the schedule execution plan affect the number of Function Points?

Total unadjusted function points = Transactional FP + Data FP = 4 + 7 = 11 FP

The scheduler is not part of the transactions but is typically used to identify their entry points. Sometimes the scheduler information is included in the transaction so that changes to its technical part can be detected when measuring the team's effort.

In terms of Enhancement Function Points, since the processing is split across 3 programs and a scheduler, if one of those programs or the scheduler is modified, the entire transaction will be flagged as a modified transaction.

Q4a) How does the schedule aspect affect the function point count?

Answer:

The scheduler is not part of the transaction when it only serves as the starting point. The only case where a transaction is impacted by the scheduler is when the transaction includes the scheduler mechanism as part of its business logic.

 

When analyzing an application with a dynamic schedule:

  • The Control M job provides the Shell Script with the name of the Data Stage Job to execute
  • Control M is included in the CAST analysis in order to provide complete transactions

According to the definition, Control M is considered outside the boundary:

  • 3 Control M schedulers call the same Shell Script
  • The Shell Script is the entry point

--> 1 Transactional Function worth [computed FP]

CAST can be calibrated to include Control M inside the boundary:

  • 3 Control M schedulers call the same Shell Script
  • Control M is the entry point

--> 3 Transactional Functions, each worth [computed FP]

Scenario 5 - Sorting Data

Summary

Sorts are done either in the program or within the JCL/Procs. The goal is to understand the impact of not analyzing the JCL/Procs.

How are the Function Points associated with a sort determined?

Take the following basic examples:

a)     The file has 10 fields; the data is sorted based on the first 3 fields in ascending order. All 10 fields are included on the output.

b)     The file has 10 fields.  The output consists of 9 key fields from the input, plus a total field which is derived from the 10th input field, based on the 9 key fields.

Q5a) What would be the number of Function Points for the above 2 scenarios?

Q5b) Would a given sort generally have a low Function Point count? Or what calculations are used to determine the number of Function Points (e.g., input files, number of fields, complexity of operations, number of output fields)?


 

A Function Point count is associated with a transaction, and the boundary definition is critical to ensuring the number is accurate. A batch job is the beginning of a transaction; removing it has the same impact as not selecting the final file as part of the count: all programs executed by the JCL would then be selected as beginnings of transactions and counted as EQs, as in the example, which would drastically affect the count.

Transaction: Batch JCL (1 FTR – 10 fields) -> Simple

 

 

        | EI   | EO   | EQ
Low     | 3 FP | 4 FP | 3 FP
Average | 4 FP | 5 FP | 4 FP
High    | 6 FP | 7 FP | 6 FP

 

Data function:    a) 1 file (RET) – 10 fields (10 DET) -> Simple

b) 1 file (RET) – 9 fields (9 DET) -> Simple

 

 

              | 1 to 19 DET     | 20 to 50 DET    | 51 or more DET
1 RET         | Low = 7 FP      | Low = 7 FP      | Average = 10 FP
2 - 5 RET     | Low = 7 FP      | Average = 10 FP | High = 15 FP
6 or more RET | Average = 10 FP | High = 15 FP    | High = 15 FP

 

Q5a) What would be the number of function points for the above 2 scenarios?

Answer:


A)      Total unadjusted function points = Transactional FP + Data FP = 4 + 7 = 11 FP

 

B)      Total unadjusted function points = Transactional FP + Data FP = 4 + 7 = 11 FP

Q5b) Would a given sort generally have a low Function Point count? Or what calculations are used to determine the number of Function Points (e.g., input files, number of fields, complexity of operations, number of output fields)?

Answer:

The sort mechanism is part of the transaction in terms of impact of modification: it will flag the transaction as modified for the Enhancement Function Points count, but it will not be counted in the Application Function Point count, since it is considered a business rule and not a display of information to an end user or data storage.

Scenario 11 - Simple Reporting Program

Summary

A report program reads 5 tables, with a total of 30 fields. The SQL is coded within the program, so each table read and its fields can be identified. Following business processing, 20 fields are displayed on the report.

Q11a) What would be the Function Point count for this report, assuming a constant for the business logic?

 

Transaction: Report (1 FTR – 20 fields) -> Average 3FP

 

 

              | 1 to 4 DET | 5 to 15 DET | 16 or more DET
0 or 1 FTR    | Low        | Low         | Average
2 FTR         | Low        | Average     | High
3 or more FTR | Average    | High        | High

 

 

Data function:    1 file (RET) – 30 fields (30 DET) -> Low 7FP

 

 

              | 1 to 19 DET | 20 to 50 DET | 51 or more DET
1 RET         | Low         | Low          | Average
2 - 5 RET     | Low         | Average      | High
6 or more RET | Average     | High         | High

 

 

Q11a) What would be the Function Point count for this report, assuming a constant for the business logic?

Answer:

Total unadjusted function points = Transactional FP + Data FP = 3 + 7 = 10 FP

Scenario 12 - Impact of the number of reports

Summary

Assume the program mentioned in scenario 11 generates 50 reports.

Each report has the same characteristics (e.g., number of tables / fields read, business logic, output fields), but the exact tables and fields vary between reports.

Q12a) Would the total function points be 50 times the number of function points from scenario 11?

Transaction: (50 FTR – 20 fields) -> High 6 FP

 

 

              | 1 to 4 DET | 5 to 15 DET | 16 or more DET
0 or 1 FTR    | Low        | Low         | Average
2 FTR         | Low        | Average     | High
3 or more FTR | Average    | High        | High

 

Data function:    3 tables (RET) – 30 fields (30 DET) -> Average 10 FP

 

 

              | 1 to 19 DET | 20 to 50 DET | 51 or more DET
1 RET         | Low         | Low          | Average
2 - 5 RET     | Low         | Average      | High
6 or more RET | Average     | High         | High

 

Q12a) Would the total function points be 50 times the number of function points from scenario 11?

Answer:

The 50 reports are generated by the 3 programs but are functionally independent:

Total unadjusted function points = Transactional FP + Data FP = 6 + 10 = 16 FP

Scenario 13 - Impact of output format 

 

Summary

Taking the basic report from Scenario 11, the user now has the option of generating the report in 3 different formats: CSV, HTML, PDF.

Q13a) How does this affect the number of function points?

 

The format doesn’t have any impact on the count of function points.

 

Q13a) How does this affect the number of function points?

Answer:

Total unadjusted function points = Transactional FP + Data FP = 6 + 10 = 16 FP

Scenario 14 - Impact on retrieved data

Summary

Taking the basic report from Scenario 11, an additional parameter is added to the input, which filters rows in the report based on a set of access rights. The basic report is generated 8 times to cover the various permutations of access rights. For example, say the report contains 10 accounts: report instance 1 contains accounts 1, 2, 3; report instance 2 contains accounts 1-5; report instance 3 contains accounts 5 and 7. These reports are generated in batch.

Q14a) How does this affect the number of function points?

This is viewed as one generic report that is generated in different instances.

Q14a) How does this affect the number of function points?

Answer:

Total unadjusted function points = Transactional FP + Data FP = 3 + 10 = 13 FP

Scenario 15 - Reuse of existing components

Summary

10 report programs require the same business function, and this function is moved from the main programs into a utility program.

Q15a) How is the overall number of Function Points determined? Is it the Function Points for the utility program multiplied by the number of report programs it is used by?


Q15a) How is the overall number of Function Points determined? Is it the Function Points for the utility program multiplied by the number of report programs it is used by?

Answer:

The count will be the same whether the business function is part of the main program or in a utility program. The impact is on the Enhancement Function Points: if the business function is in the main program, any modification to it impacts only the transaction in the main program; if it is in the utility program, any modification to it impacts all transactions that access the utility program.

Scenario 16 - Reporting Engine

Summary

The 50 reports from Scenario 12 are reviewed and it is determined that they can be replaced by a generic reporting engine.

The reporting engine takes 1 parameter (the report name) and generates 1 output file (the report). The program reads the report parameters from a database table (1 SQL query), using the report name as a key. The program then builds a set of dynamic SQL statements to read the fields from various tables based on the report parameters. Various business functions are applied to the data, based on the report parameters. The output file is generated dynamically based on the report parameters.

So the 50 hard-coded reports from scenario 12 are replaced by a "reporting engine" and 50 sets of report parameters.

Q16a) Would the function points of this approach be:

            Function points of Report Engine + Function Points of Report Parameters

Q16b) How should the function points for the ‘Report Parameters’ be determined? I presume this would be a manual exercise outside of the CAST tooling?
Here are some of the report parameters:

  • Tables read for input
  • Fields on table for input
  • Filter criteria input
  • Business functions used to manipulate data
  • Fields on the output report
  • Filter criteria in output
  • Report output format (CSV, PDF, HTML)

Q16c) If the program builds the data retrieval and data output dynamically, how would this be represented in the Function Point analysis? (i.e., how do 5 hard-coded SQL statements compare with 1 dynamic SQL statement?)

Q16a) Would the function points of this approach be:

            Function points of Report Engine + Function points of Report Parameters

Answer:

The report parameters will be viewed as one additional RET with 1 DET in the measurement of the complexity of the Data Function.

The other part of the report using dynamic SQL will use several tables (RET) and retrieve the different information (DET). CAST will identify the DET and RET used by those by rebuilding the dynamic SQL.

Q16b) How should the function points for the ‘Report Parameters’ be determined? I presume this would be a manual exercise outside of the CAST tooling?
Here are some of the report parameters:

Tables read for input
Fields on table for input
Filter criteria input
Business functions used to manipulate data
Fields on the output report
Filter criteria in output
Report output format (CSV,PDF, HTML)

Answer:

The report parameters are part of the report engine. The tables read are used in the complexity measurement of the transaction, and the fields contained in the input tables are also used for the complexity. The filter criteria are viewed as values used to gather the right information and trigger the right business rule. Therefore, any table used will be counted as a RET within the transaction and will be part of the complexity measurement.

The 50 reports are generated by the 3 programs but are functionally independent:

Total unadjusted function points = Transactional FP + Data FP = 6 + 10 = 16 FP

If the complexity of the report parameter is more than 5 DET you will have:

Total unadjusted function points = Transactional FP + Data FP = 6 + 15 = 21 FP

Q16c) If the program builds the data retrieval and data output dynamically, how would this be represented in the Function Point analysis? (i.e., how do 5 hard-coded SQL statements compare with 1 dynamic SQL statement?)

Answer:

CAST AIP uses an inference engine to rebuild dynamic SQL. Based on this rebuild, the dependency is addressed like a hard-coded SQL statement.

 

Scenario 17 - Impact of external components

Summary

How should post-processing be handled when multiple files are put back together after parallel processing (and this is a feature of a 3rd-party product)?

The behavior is the same as in Scenario 2 (streaming): the internal processing does not affect the Function Point count. In addition, a third-party tool, or any data manipulation done by a third-party tool, is viewed as outside the boundary and is not counted in transactions, as in Scenario 5 (Sorting Data).


If the program is viewed as external to the boundary, its source code is not part of the analysis, but the dependency on it is considered in the complexity measure of the transaction. A modification of this component does not flag the transaction as modified.

If the program is viewed as internal to the boundary, its source code is part of the analysis, and the dependency on it is considered in the complexity measure of the transaction. A modification of this component flags the transaction as modified.

Scenario 20 - ESSBase Dynamic Reporting

Summary

How to address cubes such as ESSBase?

A dynamic reporting program will be counted in different ways depending on the type of dynamic reporting (standard report, ad-hoc reports or presence of a hyper cube inside or outside of the boundary of the application).

In the case of standard reports:

  • Reports will be viewed as output
  • Tables used for the report will be counted as Data Entities
  • Tables used for the report will be used to measure the complexity of the transaction

In the case of ad-hoc reports:

  • Reports will be viewed as output
  • Views will be viewed as output
  • Data Items and other BO Classes will be viewed as output
  • Tables used for the report will be counted as Data Entities
  • Tables used for the report will be used to measure the complexity of the transaction

In the case of a hyper-cube:

  • Reports will be viewed as output
  • The cube will be viewed as a very highly complex data entity only if it is used internally; otherwise:
    • Data Items and other BO Classes will be viewed as output
    • Tables used for the cube will be counted as Data Entities
    • Tables used for the cube will be used to measure the complexity of the transaction

 


Scenario 21 - Distributed “interaction” with Mainframe

Different scenarios should be considered when distributed modules interact with a mainframe module.

The first scenario is where the distributed part writes to a file or inserts data into a table which is then processed by a COBOL program. This is viewed as two distinct transactions: the file or table is the end of the transaction for the distributed part, and the JCL job is the entry point of the mainframe part.

The second scenario is when distributed code executes a program via an MQ or JMS mechanism. In this case the Java component executes the COBOL program, and both programs (Java and mainframe) are viewed as components of the same transaction and are therefore counted together.

The third scenario is when distributed code executes a mainframe transaction which itself executes COBOL programs. In this case the mainframe transaction is a transaction in its own right, so this is viewed as two transactions: the first is the distributed part ending on the queue, and the second is the CICS or IMS transaction.

Scenario 22 - Impact of middleware

The boundary definition is critical for a middleware application, especially the identification of all the interfaces with other systems. In the example below, the blue boundary is a standard end-to-end definition that includes the user interface and the data storage. With the orange boundary, application interfaces must be defined with the application above (which includes the user interface) and with the application below the middleware (which contains the data storage).

 

 

Blue boundary definition: the application boundary includes the end-user interfaces as well as the data storage. The middleware components are part of the different transactions.

Orange boundary definition: the application boundary excludes the end-user interfaces as well as the data storage. The middleware components are the only technology composing the transactions; the entry and exit points of the middleware application are considered the beginning and end of the transactions.

Glossary

  • Application Model abstract source code representation of an application that results from analysis of the source code.  It contains the minimum information required to measure Automated Function Points, that is, the static elements of an application defined by associated KDM (Knowledge Discovery Metamodel - http://www.omg.org/spec/KDM/ISO/19506/PDF/) elements that are used in the Automated Function Point counting process.
  • Boundary The boundary is a conceptual interface between the software and the users. 
  • Complete transaction In the context of this specification, a transaction is considered as complete whenever the static code analyzer can find one or several code paths starting from the user interface and continuing down to the data entities.
  • Data Element Type (DET):  A data element type is a unique user recognizable, non-repeated attribute that is part of an ILF, EIF, EI or EO.  (ISO/IEC 20926:2009)
  • Data Function functionality provided to the user to meet internal or external data storage requirements.  A data function is either an ILF or an EIF.  (ISO/IEC 20926:2009)
  • Database Table: SQL data table or KDM (Knowledge Discovery Metamodel - http://www.omg.org/spec/KDM/ISO/19506/PDF/) data: RelationalTable.
  • External Inquiry (EQ) a form of data that is leaving the system.  However, since automated counting tools cannot distinguish between External Inquiries and External Outputs, all External Inquiries will be included in and counted as External Outputs.  (ISO/IEC 20926:2009)
  • External Input (EI) elementary process that processes data or control information sent from outside the boundary (ISO).  (ISO/IEC 20926:2009)
  • External Interface File (EIF) user recognizable group of logically related data or control information, which is referenced by the application being measured, but which is maintained within the boundary of another  application.  (ISO/IEC 20926:2009)
  • External Output (EO) an elementary process that sends data or control information outside the boundary and includes additional processing logic beyond that of an External Inquiry.  (ISO/IEC 20926:2009)
  • File Type Referenced (FTR) data function (ILF or EIF) read and/or maintained by a transactional function.  (ISO/IEC 20926:2009)
  • Internal Logical File (ILF)  user recognizable group of logically related data or control information maintained within the boundary of the application being measured.  (ISO/IEC 20926:2009)
  • Library   a set of software components that are grouped together in the same physical container and that are accessed via a dedicated API.
  • Logical File either an Internal Logical File (ILF) or an External Interface File (EIF).  (ISO/IEC 20926:2009)
  • Method a method is a group of instructions that is given a name and can be called up at any point in a program simply by quoting that name.  In object oriented languages like Java and C++, methods are grouped in classes. A method is referenced as code:MethodUnit in the KDM (Knowledge Discovery Metamodel - http://www.omg.org/spec/KDM/ISO/19506/PDF/).
  • Physical File physical files hold the actual data of a database file and are not required to have keyed fields.  Where the word ‘file’ is used alone without the modifier ‘logical’, it refers to a physical file. Within the KDM (Knowledge Discovery Metamodel - http://www.omg.org/spec/KDM/ISO/19506/PDF/), such a physical file is described as a kind of data:DataContainer and contains a set of data:RecordFile.
  • Record Element Type (RET) user recognizable sub-group of data element types within a data function.  (ISO/IEC 20926:2009)
  • Service End Point well known address that is used by an application to exchange data and events with other applications.  Typical examples include Remote Procedure Call interfaces and Message Queues.
  • Source Code Entities Elements of the source that can be detected during static analysis in order that they be used in the Function Point counting process.
  • SQL Structured Query Language. A language used to query databases. (ISO/IEC 9075-1:2008)
  • Static Dependency a directional relation that exists between a caller method and a called method.
  • Transaction End Point User Interface End Point or a Service End Point.   It identifies potential Transactional Functions.
  • Transactional Function elementary process that provides functionality to the user to process data.  A transactional function is an External Input, External Output, or External Inquiry.  (ISO/IEC 20926:2009)
  • User Interface End Point there are two kinds of user interface end points: user interface inputs and user interface outputs.
  • User Interface Input command that can be activated by humans using a mouse, keyboard, or equivalent interactions.  (ISO/IEC 20926:2009)
  • User Interface Output set of visual elements that are composed by an application in order to present information or events to the user (e.g., a form, report, or tab inside a screen).  An elementary process that sends data or control information outside the boundary.  (ISO/IEC 20926:2009)
  • User Recognizable Refers to requirements for processes and/or data that are agreed upon, and understood by, both the user(s) and software developer(s).
  • VAF: Value Adjustment Factor