

Target audience:

CAST AIP Administrators

Foreword on CAST Software Analysis and Measurement

This document presents the nature of the Software Analysis and Measurement information that can be generated about an application, based on raw facts about its source code and architecture. It also presents the process for generating this type of information.
In other words, it explains how raw facts are turned into an assessment of quality and quantity.

The document is structured the following way:

The document ends with pointers to related topics (mainly Ranking Systems).

CAST AIP Quality Model Scoring System explained

AIP Quality Model Scoring System objectives

  • Deliver software quality assessment and measurement results to provide C-level managers with visibility on their applications or software products
  • Management visibility in the area of software quality covers
    • Identification of any situation or shift that could cause delays, unexpected costs, failures in operations, or poor perceived quality in use
    • Comparison capabilities so as to support predictions based on past experiences

AIP Quality Model Scoring System detailed

Introduction

The following sections describe and comment on the design of the AIP scoring system, starting from the findings in application source code, structure, and architecture, all the way up to the high-level strategic quality indicators for the entire application:

  • Application facts
  • Quality Rule definition
  • Quality Rule Violation definition
  • From Quality Rule Violation identification to Quality Rule compliance ratio
  • From Quality Rule compliance ratio to Quality Rule grade/status
  • From Quality Rule grades to Technical Criterion grade
  • From Technical Criterion grades to Business Criterion grade
  • From Module grades to Application grade

Application facts

Analysis Services contain various pieces of information about an application. All of this information can be used for Software Analysis and Measurement information generation.
Analysis Services contain:

  • Different kinds of facts: names, links, properties, counts, hierarchy, etc.
  • At different levels: files, packages, functions, methods, etc.
  • I.e., what is commonly referred to as 'source code DNA'

Unit-level facts

E.g.:

  • Facts derived from code parsing:
  • Facts derived from code pattern scanning:

Technology-level facts

E.g.:

  • Intra-technology dependencies:

System-level facts

E.g.:

  • Cross-layer and cross-technology dependencies:

Quality Rule definition

  • Design description
    • A Quality Rule is the declaration of a source code or architectural pattern to look for in an application's source code, structure, and architecture, i.e., in any application facts.
    • Quality Rules are operational quality indicators that will help
      • Build the intermediate-level tactical quality indicators (cf. Technical Criterion definition section)
      • Define corrective actions thanks to accurate findings (as opposed to statistical models that can only conclude that, based on some measures and some statistical data, the assessed application is in such and such a state).
  • Design comment

From Quality Rule definition to Quality Rule Violation identification

  • Design description
    • A Quality Rule Violation is an Object from the application featuring one or more occurrences of the violation pattern defined by the Quality Rule in a given Snapshot (i.e., in a given release or revision of the application)
  • Design comment
      • This definition supports the definition of a compliance ratio (cf. next section) for any kind of Quality Rule
    • This definition supports different pattern search approaches:
      • Identifying all occurrences
      • Identifying only the first occurrence when it is too resource-consuming to search for all of them (e.g.: some graph processing algorithms do not compute the complete graph but navigate through it to find the pattern they are looking for and stop as soon as one is found)

From Quality Rule Violation identification to Quality Rule compliance ratio

  • Design description
    • Quality Rule compliance ratio is the number of objects that do not violate the Quality Rule divided by the number of opportunities to violate it (i.e., the complement of the violation ratio)
  • Design comment
    • Rationalization is required to account for application differences in size, ... so as to avoid the obvious objection: "There are more violations because the application is bigger"
    • Rationalization by the number of opportunities is required for relevance, that is, to assess quality against situations that are applicable / that could create a risk, as opposed to sheer size
      • Ultimately, this avoids creating the feeling that everything is fine regarding a specific Quality Rule when, in fact, the rule is either not applicable to the assessed application at all or not applicable to most of it.

E.g.:

  • Compliance ratio computation:
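
A minimal sketch of this computation, assuming purely illustrative object counts (130 violating objects out of 960 opportunities):

```python
# Hypothetical compliance-ratio computation: the share of opportunities to
# violate a Quality Rule that do not actually violate it. The counts below
# are made up for illustration.
violations = 130       # objects featuring at least one occurrence of the violation pattern
opportunities = 960    # objects to which the Quality Rule is applicable

compliance_ratio = (opportunities - violations) / opportunities
print(f"{compliance_ratio:.2%}")  # -> 86.46%, the value used in the grade example further below
```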

From Quality Rule compliance ratio to Quality Rule grade/status

  • Design description
    • Quality Rule grade is a decimal value between 1.00 and 4.00, computed as a linear-by-segment interpolation between the Quality Rule compliance ratio and the target compliance ratios required to reach specific grade values
    • Quality Rule status is one of the following four levels: Very High / High / Moderate / Low Risk, based on the floor value of the Quality Rule grade, so that the result is self-explanatory.

      From    To      Status
      1.00    1.99    Very High Risk
      2.00    2.99    High Risk
      3.00    3.99    Moderate Risk
      4.00    4.00    Low Risk
  • Design comment
    • Quality Rule status is required so that the assessment result can be grasped quickly and efficiently; it also supports a color code for even faster identification of risky situations
    • Quality Rule decimal grade is required to support more accurate comparison over time and across contexts (avoiding sudden changes in status without any visibility on the evolution in between)
    • The 4-level status model has the following advantages:
      • It avoids neutral ground (as opposed to 3- or 5-level-status models)
      • It is more subtle than a 2-level-status model
    • A bounded Quality Rule grade range is required to avoid indicators without boundaries, which, even if more representative, would only impair the understanding of the assessment and comparison ("there is always a situation that can be worse, so is this situation really an issue or not?").
    • The 1-to-4 grade range maps simply and logically to the 4-level status and avoids using 0 in the range, as it could cause computing issues if not handled properly ("invalid division by 0")
    • The linear-by-segment interpolation is a simple model yet flexible enough to model exponential-like behaviors.

E.g.:

  • 86.46% Compliance Ratio transformation into 1.91 Grade:
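
A minimal sketch of the linear-by-segment interpolation, assuming the default non-critical thresholds listed later in this document (50%, 90%, 95%, 99% mapping to grades 1.00, 2.00, 3.00, 4.00); it reproduces the 86.46% to 1.91 example above:

```python
# Hypothetical linear-by-segment interpolation from compliance ratio to grade.
# The default thresholds are taken from the "Default compliance-to-grade
# thresholds" section of this document.
def compliance_to_grade(compliance_pct, thresholds=(50.0, 90.0, 95.0, 99.0)):
    t1, t2, t3, t4 = thresholds
    if compliance_pct <= t1:
        return 1.00  # '1' grade plateau below the lowest threshold
    if compliance_pct >= t4:
        return 4.00  # grade is capped at 4.00
    # find the segment the compliance ratio falls into and interpolate linearly
    for lo, hi, g_lo in ((t1, t2, 1.0), (t2, t3, 2.0), (t3, t4, 3.0)):
        if compliance_pct < hi:
            return g_lo + (compliance_pct - lo) / (hi - lo)

print(round(compliance_to_grade(86.46), 2))  # -> 1.91 (Very High Risk)
```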


Technical Criterion definition

  • Design description
    • Technical Criteria are intermediate-level tactical quality indicators
    • A Technical Criterion is the aggregation of one to many Quality Rules
    • A Technical Criterion will help to
      • Summarize a certain number of Quality Rules that help assess the quality level of a specific quality sub-characteristic
      • Build the high-level strategic quality indicators (cf. Business Criterion definition section)
  • Design comment
    • Intermediate-level indicators are required to:
      • Serve as a clutch between strategic management indicators and operational indicators, so that the latter can evolve "quite" seamlessly
      • Serve as a clutch between strategic management indicators and operational indicators, which can be more or less numerous for a given quality sub-characteristic, independently of the importance of the technical area (a low number of rules will therefore not prevent a given Technical Criterion from impacting the overall score)
      • Serve as an abstraction level to compare similar topics of different technologies and languages (that require different operational indicators)
    • Technical Criteria are similar to Questions in a Goal Question Metric model

From Quality Rule grades to Technical Criterion grade

  • Design description
    • For each Module, i.e., for each Application sub-component for which to provide an assessment,
    • Technical Criterion grade is the weighted average - with a capping mechanism for Critical Quality Rules - of all contributing Quality Rule grades
      • It is therefore a decimal value between 1.00 and 4.00
      • It also supports a translation into a Technical Criterion status
  • Design comment
    • Aggregation is based on an average formula to account for both good and bad practices
    • Aggregation is based on a weighted average to account for the relative importance of good and bad practices
    • Aggregation features a capping mechanism to account for bad practices that no "amount" of good practices can compensate for or be traded off against
      • This highly non-linear behavior is required to align with human perception: some practices are so severe they cannot be ignored at the Technical Criterion level
        • E.g.: exams for Ivy League engineering schools feature a high weight for mathematics, physics, etc., yet a failing grade in literature is eliminatory despite its lower weight
    • Decision to assign a weight into a Technical Criterion and to flag the contribution as Critical (to activate the capping mechanism) is done the following way:
      • Select any two Quality Rules that are to contribute to a given Technical Criterion
      • Assume that Quality Rule #1 gets a grade of 2.00 and Quality Rule #2 gets a grade of 3.00
      • Based on the quality issues these two Quality Rules detect, if you expect the Technical Criterion grade to be
        • 2.50, assign the same weight
        • closer to 2.00, assign a higher weight to Quality Rule #1
        • closer to 3.00, assign a higher weight to Quality Rule #2
      • Based on the quality issues these two Quality Rules detect, if you expect the Technical Criterion grade to be equal to 2.00, regardless of the grade of Quality Rule #2 and of any other Quality Rule in the same Technical Criterion, flag Quality Rule #1 as Critical

E.g.:

  • Aggregation of Quality Rule Grades into Efficiency - Expensive Calls in Loops Technical Criterion Grade:
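
A minimal sketch of the weighted-average-with-capping aggregation described above, assuming the capping takes the form of an upper bound at the lowest grade among Critical contributions; the weights and grades are purely illustrative:

```python
# Hypothetical aggregation of Quality Rule grades into a Technical Criterion
# grade: weighted average, capped by the lowest grade of any Critical rule.
def aggregate_grades(contributions):
    """contributions: list of (grade, weight, is_critical) tuples."""
    weighted_avg = (sum(g * w for g, w, _ in contributions)
                    / sum(w for _, w, _ in contributions))
    critical_grades = [g for g, _, critical in contributions if critical]
    if critical_grades:
        # a Critical contribution caps the grade: no amount of good practices
        # can compensate for it
        return min(weighted_avg, min(critical_grades))
    return weighted_avg

# Quality Rule #1 (grade 2.00, flagged Critical) caps the criterion at 2.00,
# regardless of Quality Rule #2's grade of 3.00.
print(aggregate_grades([(2.00, 5, True), (3.00, 5, False)]))  # -> 2.0
```

The same aggregation logic applies one level up, from Technical Criterion grades to a Business Criterion grade, with Critical Technical Criteria activating the capping mechanism.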

Business Criterion definition

  • Design description
    • Business Criteria are top-level strategic quality indicators
    • A Business Criterion is the aggregation of one to many Technical Criteria
    • A Business Criterion will help summarize a certain number of Technical Criteria that help assess the quality level of a specific quality characteristic
  • Design comment
    • High-level indicators are required to
      • Quickly assess a whole quality characteristic, with one single value or status
      • Serve as an abstraction level to compare quality characteristics of different technologies and languages (that require different tactical and operational indicators)
    • Business Criteria are similar to Goals in a Goal Question Metric model

From Technical Criterion grades to Business Criterion grade

  • Design description
    • For each Module, i.e., for each Application sub-component for which to provide an assessment,
    • Business Criterion grade is the weighted average - with a capping mechanism for Critical Technical Criteria - of all contributing Technical Criterion grades
      • It is therefore a decimal value between 1.00 and 4.00
      • It also supports a translation into a Business Criterion status
  • Design comment
    • Aggregation is based on an average formula to account for both good and bad practices
    • Aggregation is based on a weighted average to account for the relative importance of good and bad practices
    • Aggregation features a capping mechanism to account for bad practices that no "amount" of good practices can compensate for
      • This highly non-linear behavior is required to align with human perception: some practices are so severe they cannot be ignored at the Business Criterion level (cf. From Quality Rule grades to Technical Criterion grade section)

E.g.:

  • Aggregation of Technical Criterion Grades (partially grayed out) into Performance Business Criterion Grade:

From Module grades to Application grade

  • Design description
    • Application grade is the simple or weighted average of its Module grades
  • Design comment
    • The Module level serves as an abstraction level to compare features for the service they offer and not for their implementation, whether that implementation involves more or less code and identical or different technologies and languages
    • Weighting with functional data is helpful to deliver a more accurate Risk assessment, as it makes it possible to take the context into account.
      • E.g.: if the functional cornerstone of the application happens to be small in size - when counting lines of code or files - an unweighted result would hide the true risk the whole application incurs.

E.g.:

  • Simple average of Module Grades into Application Grade: 
  • Weighted average of Module Grades into Application Grade using the Business Value of Modules:
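
A minimal sketch of the two aggregation options, assuming made-up Module names, grades, and Business Values:

```python
# Hypothetical aggregation of Module grades into an Application grade, either as
# a simple average or weighted by each Module's Business Value. The figures are
# made up for illustration.
modules = {"Front-end": (3.2, 100), "Billing engine": (1.8, 400)}  # grade, business value

simple = sum(grade for grade, _ in modules.values()) / len(modules)
weighted = (sum(grade * value for grade, value in modules.values())
            / sum(value for _, value in modules.values()))

print(round(simple, 2))    # -> 2.5
print(round(weighted, 2))  # -> 2.08: the small but business-critical module drags the grade down
```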

AIP Quality Model Scoring System advanced indicators

Quality Distributions

  • Design description
    • A Quality Distribution is the declaration of a way to distribute a population of objects into 4 categories according to the numerical value of one of their properties, in order to check that the distribution is well balanced, with the expected split share of objects in each category.
    • Quality Distributions are operational quality indicators that will act as Quality Rules in the Quality Model, i.e., help
      • Build the intermediate-level tactical quality indicators
      • Define corrective actions thanks to accurate findings (as opposed to statistical models that can only conclude that, based on some measures and some statistical data, the assessed application is in such and such a state).
  • Design comment
    • Quality Distributions are more subtle versions of Quality Rules: instead of distributing tested objects into two categories (compliant and non-compliant) to which the compliance-to-grade thresholds are applied, Quality Distributions distribute them into four categories, each with its own set of split-share-to-grade thresholds (see the sketch below)
    • Quality Distributions also make it possible to constrain both sides of a range of numerical values, which proves useful when both ends of the measured value range are to be avoided (e.g.: avoid too high class complexity (WMC) and also avoid too low class complexity, as it would be a sign of atomized business logic).
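
A minimal sketch of the binning step behind a Quality Distribution, assuming illustrative category boundaries (they are not the AIP defaults) and a made-up list of cyclomatic complexity values:

```python
# Hypothetical Quality Distribution: bin objects into four categories by the
# numerical value of one of their properties (here, cyclomatic complexity) and
# compute the split share of each category. Boundaries and values are made up.
complexities = [3, 5, 8, 12, 14, 21, 25, 33, 60, 75]   # one value per artifact
boundaries = [10, 20, 30]                               # assumed cut-offs between categories

categories = {"low": 0, "average": 0, "high": 0, "very high": 0}
for value in complexities:
    if value < boundaries[0]:
        categories["low"] += 1
    elif value < boundaries[1]:
        categories["average"] += 1
    elif value < boundaries[2]:
        categories["high"] += 1
    else:
        categories["very high"] += 1

# Each split share is then graded against its own split-share-to-grade
# thresholds (not shown here).
shares = {name: count / len(complexities) for name, count in categories.items()}
print(shares)  # -> {'low': 0.3, 'average': 0.2, 'high': 0.2, 'very high': 0.3}
```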

Quality Measures

  • Design description
    • A Quality Measure is the computation of a numerical value based on all the objects of a Module within the Application.
    • Quality Measures are operational quality indicators that will act as Quality Rules in the Quality Model, i.e., help build the intermediate-level tactical quality indicators
    • However, as Quality Measure values pertain to the whole Module, identification of individual objects to intervene on is not possible. Generally, there exists a companion Quality Rule to bring a different perspective on the issue at hand (e.g.: the avoid too high volume of copy-pasted code Quality Measure is complemented by the avoid too many copy-pasted artifacts Quality Rule: the former looks at the number of lines of code that are copy-pasted in the entire Module, while the latter looks at the number of objects that are copy-pasted from other objects; both detect similar yet slightly different situations).
  • Design comment
    • Quality Measures are Module-level versions of Quality Rules: instead of distributing tested objects into two categories (compliant and non-compliant) to which the compliance-to-grade thresholds are applied, Quality Measures compute a single value that summarizes all relevant objects, with its own set of value-to-grade thresholds

Technical Criterion definition enrichment

Design description

  • A Technical Criterion is the aggregation of one to many Quality Rules, Quality Distributions, and Quality Measures

From Quality Rule, Distribution, and Measure grades to Technical Criterion grade

Design description

  • For each Module, i.e., for each Application sub-component for which to provide an assessment,
  • Technical Criterion grade is the weighted average - with a capping mechanism for Critical contributions - of all contributing Quality Rule, Distribution, and Measure grades

AIP Quality Model Scoring System advanced behaviors

From Module results to Technology grade

  • Design description
    • Technology grade is the simple or weighted average of the grades of Module subsets for the concerned Technology
  • Design comment
    • Modules are composed of the results of source code analysis jobs, a.k.a., "subsets" (note that subsets are not assessed as such as they are only building blocks for Modules)
      • One Module per job
      • One Module for a group of jobs
      • One Module for a part of a single job
      • One Module for parts of multiple jobs...
    • Modules can contain objects of one or many 'Technologies'.
    • Technology aggregation takes into account the subsets of Modules pertaining to the concerned Technology

The Organization Tree

  • Design description

    • Modules can be assigned to Developers, as the persons responsible for them, even if other persons are involved in the source code development
      • A Module can be assigned to only one Developer at a time
    • Teams are groups of Developers
    • Organizations are groups of Teams
    • This organization can be used to filter the results of Modules assigned - directly or not - to the Organization entity under consideration
  • Design comments
    • Organization Tree as a whole is optional
    • When the Organization Tree is used, all levels are mandatory

AIP Quality Model Scoring System in use

Default Business Criterion list

AIP default Assessment Model defines the following Business Criteria:

  • Business-oriented Business Criteria, a.k.a., Health Factors
    • Robustness
      • Robustness measures the level of risk and the likelihood of having application failures and application defects due to modifications. Robustness also measures the level of effort necessary to test the application
      • CAST Robustness affiliates to ISO 25010 Reliability characteristic
    • Performance
      • Performance measures the likelihood of potential performance bottlenecks and the potential future scalability issues linked to coding practices.
      • CAST Performance affiliates to ISO 25010 Performance & Efficiency characteristic
    • Security
      • Security measures the likelihood of potential security breaches linked to coding practices and application source code.
      • CAST Security affiliates to ISO 25010 Security characteristic
    • Changeability
      • Changeability measures how easily applications can be modified in order to implement new features, correct errors, or change the application's environment.
      • CAST Changeability affiliates to ISO 25010 Maintainability characteristic with a focus on Changeability sub-characteristic
    • Transferability
      • Transferability measures how easily applications can be moved across teams or team members including in-house and outsourced development teams.
      • CAST Transferability affiliates to ISO 25010 Maintainability characteristic with a focus on Analyzability sub-characteristic
    • Total Quality Index
      • Total Quality Index measures the general maintainability level of the application based on hundreds of metrics provided by CAST.
      • CAST Total Quality Index summarizes the aforementioned characteristics and sub-characteristics
    • SEI Maintainability
      • SEI Maintainability estimates the general maintainability level of the application based on SEI Maintainability 3- and 4-metrics indexes
      • As opposed to other Business Criteria, this one provides a statistical measurement of the application maintainability, comparing the results of SEI Maintainability 3- and 4-metrics indexes to the results of 500+ projects
  • Development-oriented Business Criteria, a.k.a., Rule Compliance measurements
    • Architectural Design
      • CAST Architectural Design measurement summarizes the compliance level of all architecture-related Quality Rules, Distributions, and Measures
    • Programming Practices
      • CAST Programming Practices measurement summarizes the compliance level of all coding-related Quality Rules, Distributions, and Measures
    • Documentation
      • CAST Documentation measurement summarizes the compliance level of all documentation-related Quality Rules, Distributions, and Measures

Default Technical Criterion list

AIP default Assessment Model defines the following Technical Criteria:

  • Architecture - Architecture Models Automated Checks measures how compliant the application is with the user-defined Architecture Models. (Architecture Models can be defined using ArchiChecker. Once assigned to an application, the Architecture Model becomes a CAST AIP quality rule and is automatically computed for each snapshot. An application can be assigned multiple models, so that Architects can check different aspects of the application architecture, both technical and functional.)
  • Architecture - Multi-Layers and Data Access measures the respect of multi-tier and multi-layer best practices as well as data access and data integrity best practices
  • Architecture - OS and Platform Independence measures the respect of Operating System level resource usage practices
  • Architecture - Object-level Dependencies measures the respect of coupling practices at the object level
  • Architecture - Reuse measures the respect of code reuse practices
  • Complexity - Algorithmic and Control Structure Complexity measures the respect of complexity practices regarding algorithmic complexity control
  • Complexity - Dynamic Instantiation measures the respect of practices regarding dynamic instantiation
  • Complexity - Empty Code measures the respect of practices regarding empty code
  • Complexity - OO Inheritance and Polymorphism measures the respect of complexity practices regarding object-oriented complexity control
  • Complexity - SQL Queries measures the respect of coding practices that keep SQL queries easy to evolve
  • Complexity - Technical Complexity measures the respect of complexity practices regarding overall technical complexity
  • Dead code (static) measures the respect of code coverage practices
  • Documentation - Automated Documentation measures the respect of comment practices regarding automated documentation
  • Documentation - Bad Comments measures the respect of comment practices (code in comments, language ...)
  • Documentation - Naming Convention Conformity measures the respect of naming practices
  • Documentation - Style Conformity measures the respect of coding style practices
  • Documentation - Volume of Comments measures the respect of comment practices regarding comments volume
  • Efficiency - Expensive Calls in Loops measures the respect of performance optimization best practices regarding bulky and time expensive loops
  • Efficiency - Memory, Network and Disk Space Management measures the respect of memory and disk space optimization practices
  • Efficiency - SQL and Data Handling Performance measures the respect of SQL and Data handling performance optimization practices
  • Programming Practices - Error and Exception Handling measures the respect of error handling practices
  • Programming Practices - File Organization Conformity measures the respect of file organization practices
  • Programming Practices - Modularity and OO Encapsulation Conformity measures the respect of object-oriented encapsulation practices
  • Programming Practices - OO Inheritance and Polymorphism measures the respect of object-oriented inheritance practices and proper usage of polymorphism
  • Programming Practices - Structuredness measures the respect of structuredness practices
  • Programming Practices - Unexpected Behavior measures the respect of practices that prevent unexpected behavior
  • Secure Coding - API Abuse measures the level of vulnerability to API abuse security breaches, which result from misused calls to a programming interface or from a flawed or compromised API (an API implements a contract between a caller and a callee; the most common forms of API abuse are caused by the caller failing to honor its end of this contract)
  • Secure Coding - Encapsulation measures the level of vulnerability to security breaches appearing whenever the principle of information hiding is violated and a malicious user can gain access to data that should be kept private.
  • Secure Coding - Input Validation measures the level of vulnerability to breaches resulting from trusting user input and from the lack of strong checks/cleanup of that input.
  • Secure Coding - Time and State measures the level of vulnerability to breaches resulting from multithreading misuse and from coding practices that ignore the fact that software components are shared and executed within concurrent threads and processes.
  • Volume - Number of Components measures the respect of sizing practices regarding the number of included or contained objects
  • Volume - Number of LOC measures the respect of sizing practices regarding the number of code lines

 Note

  • There is a specific Maintainability Indexes (SEI) Technical Criterion that contributes to the SEI Maintainability Business Criterion, based on the specific SEI Maintainability Index 3 and 4 Quality Measures
  • The Technical Criterion Complexity - Functional Evolvability (which measures the capability of a software structure to support changes and the addition of new functional rules without threatening testability and stability) is delivered by default as "detached" (meaning that the criterion and any child Quality Rules are ignored). It can be re-attached if required.

Default Quality Rule list

Due to the sheer number of Quality Rules, please refer to the CAST Metrics and Quality Rules Documentation.

Default Quality Distribution list

AIP default Assessment Model defines the following Quality Distributions:

  • 4GL Complexity Distribution ensures a balanced distribution of the level of complexity of 4GL user interfaces / forms. A Form's complexity level relies on its being one of the following:
    • Heavy Forms: a heavy Form is a Form that contains more than X (number of Events + number of Controls) (X is technology-dependent)
    • Lengthy Forms: a lengthy Form is a Form that contains more than X lines of code (X is technology-dependent)
    • High Data Layer Usage Forms: a high data layer usage Form is a Form that has more than X links to database objects (X is technology-dependent)
    • High Fan-Out Forms: a high fan-out Form is a Form that has more than X links to objects outside the Form itself (X is technology-dependent)
  • Class Complexity Distribution (WMC) ensures a balanced distribution of the Weighted Method per Class metric value of classes from Object-Oriented languages.
  • Class Fan-In Distribution ensures a balanced distribution of the Class Fan-In metric value of classes from Object-Oriented languages.
  • Class Fan-Out Distribution ensures a balanced distribution of the Class Fan-Out metric value of classes from Object-Oriented languages.
  • Coupling Distribution ensures a balanced distribution of the Fan-In metric value of source code artifacts
  • Cyclomatic Complexity Distribution ensures a balanced distribution of the McCabe Cyclomatic Complexity metric value of source code artifacts
  • OO Complexity Distribution ensures a balanced distribution of the level of complexity of classes from Object-Oriented languages. A class's complexity level relies on its being one of the following:
    • High Cyclomatic Complexity Classes (WMC)
    • High Depth of Inheritance Classes (DIT)
    • Low Cohesion Classes (LCOM)
    • High Length or Weight Classes: classes that have respectively more than X lines of code or X fields and methods
  • Reuse by Call Distribution ensures a balanced distribution of the Fan-In metric value of source code artifacts, considering that too-low values indicate a poor reuse of the component.
  • SQL Complexity Distribution ensures a balanced distribution of the level of complexity of SQL artifacts, i.e., artifacts that contain static or dynamic SQL code. An SQL artifact's complexity level relies on its being one of the following:
    • Artifact with a Query on more than 4 tables
    • Artifact with a Subquery
    • Artifact with a GROUP BY
    • Artifact with a Complex SELECT clause
    • Artifact with an UPDATE statement
    • Artifact with Raw SQL Complexity Higher than 30
  • Size Distribution ensures a balanced distribution of the number of lines of code of source code artifacts

Note

  • There is a specific Cost Complexity distribution used in the Version Comparison pages of CAST portals to estimate the effort required to create, update, and delete artifacts

Default Quality Measure list

AIP default Assessment Model defines the following Quality Measures:

  • Commented-out Code Lines/Code Lines ratio (% of LOC) ensures that there are not too many commented-out code lines within the comment lines
  • Complexity Volume (% of LoC) ensures that the volume of complex code is kept low enough
  • Copy Pasted Code (% of LOC) ensures that the volume of duplicated code is kept low enough
  • SEI Maintainability Index 3
  • SEI Maintainability Index 4

Default compliance-to-grade thresholds for critical and non-critical Quality Rules

Non-critical Quality Rules

AIP default Assessment Model uses the following thresholds for non-critical Quality Rules:

  • Low Risk: more than 99% compliance
  • Moderate Risk: between 95 and 99% compliance
  • High Risk: between 90 and 95% compliance
  • Very High Risk: less than 90% compliance (note: below 50% compliance, the grade reaches a '1' plateau, where compliance ratios no longer need to be differentiated)


Critical Quality Rules

AIP default Assessment Model uses the following thresholds for critical Quality Rules:

  • Low Risk: more than 99.99% compliance
  • Moderate Risk: between 99.5 and 99.99% compliance
  • High Risk: between 99 and 99.5% compliance
  • Very High Risk: less than 99% compliance (note: below 98% compliance, the grade reaches a '1' plateau, where compliance ratios no longer need to be differentiated)
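
A minimal illustration of how the critical-rule thresholds tighten the grading, interpolating 99.2% compliance between the 99% (grade 2.00) and 99.5% (grade 3.00) thresholds:

```python
# Hypothetical worked example: with the critical-rule thresholds above, 99.2%
# compliance interpolates into the High Risk band, whereas the same ratio would
# already be Low Risk for a non-critical rule.
compliance = 99.2
grade = 2.0 + (compliance - 99.0) / (99.5 - 99.0) * (3.0 - 2.0)
print(round(grade, 2))  # -> 2.4 (High Risk)
```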


CAST AIP Sizing Model explained

CAST Sizing Model objectives

  • Deliver software quantity measurement results to provide C-level managers with visibility on their applications or software products
  • Management visibility in the area of software quantity covers
    • Technological inventory
    • Productivity-related measurement
    • Comparison capabilities so as to support predictions based on past experiences

CAST Sizing Model detailed

Introduction

The following sections describe and comment on the different types of indicators in the AIP Sizing Model:

  • Technical Size
  • Functional Weight
  • Technical Debt
  • Run-Time statistics

Technical Size

Design description

  • A Technical Size measure is the declaration of a type of object to count (e.g.: number of methods) or of a numerical property to sum up (e.g.: number of lines of code), using application facts.
  • Technical Size measures are basically inventories.

Functional Weight

Design description

  • A Functional Weight measure is an estimation of the number of functions the application delivers to the end-user.
  • Functional Weight measures abstract application facts into such estimations.

Technical Debt

Design description

  • A Technical Debt measure is an estimation of the cost to fix pre-set percentages of high-severity, medium-severity, and low-severity violations, which together are considered to constitute the technical debt, as well as information derived from that estimation.
  • Technical Debt thus measures the size of the application's problems that remain to be fixed.

Run-Time statistics

Design description

  • A Run-Time statistics measure is a placeholder for information associated with the execution of the application's user-facing transactions.
  • Run-Time statistics
    • Are used to provide more context when listing the riskiest transactions in the CAST Engineering Dashboard and Application Analytics Dashboard
    • Could be used as an estimation of the size of applications' functional activity

CAST Sizing Model in use

Default Technical Size measures

AIP default Assessment Model defines the following Technical Size measures:

  • Number of ABAP Transactions
  • Number of ABAP User-Exits
  • Number of Artifacts
  • Number of Classes
  • Number of Code Lines
  • Number of Comment Lines
  • Number of Commented-out Code Lines
  • Number of Copybooks
  • Number of Datablocks
  • Number of Datawindows
  • Number of Events
  • Number of Files
  • Number of Forms
  • Number of Function Pools
  • Number of Functions
  • Number of Functions and Procedures
  • Number of Includes
  • Number of Interfaces
  • Number of Macros
  • Number of Methods
  • Number of Modules
  • Number of Namespaces
  • Number of Packages
  • Number of Paragraphs
  • Number of Processing Screens
  • Number of Programs
  • Number of SQL Artifacts
  • Number of Sections
  • Number of Tables
  • Number of Template Class Instances
  • Number of Template Classes
  • Number of Template Function Instances
  • Number of Template Functions
  • Number of Template Interface Instances
  • Number of Template Interfaces
  • Number of Template Method Instances
  • Number of Template Methods
  • Number of Triggers
  • Number of Units
  • Number of Userobjects
  • Number of Views
  • Number of WEB Pages

Default Functional Weight measures

AIP default Assessment Model defines the following Functional Weight measures:

  • Backfired Function Points
  • Number of Decision Points
  • OMG-Compliant Automated Function Points
    • Unadjusted Data Functions
    • Unadjusted Transactional Functions
  • Enhancement Function Points
    • Added function points
      • Added data function points
      • Added transactional function points
    • Deleted function points
      • Deleted data function points
      • Deleted transactional function points
    • Modified function points
      • Modified data function points
      • Modified transactional function points

Default Technical Debt measures

AIP default Assessment Model defines the following Technical-Debt-related measures

  • Technical Debt
  • Technical Debt density
  • Technical Debt of added Violations
  • Technical Debt of removed Violations

Default Run-Time Statistics measures

AIP default Assessment Model defines the following Run-Time statistics:

  • CPU Time
  • Database Time

They are mostly useful to populate the Risk Indicator - Transaction level page of the CAST Engineering Dashboard, to support cross-referencing these run-time statistics with the Transaction Risk Index.


Related CAST Ranking Systems

Introduction

In addition to the Measurement System, CAST AIP proposes complementary Ranking Systems to guide improvement initiatives.

The CAST Ranking System does not impact results from the CAST Measurement System.

It is designed to leverage information from the CAST Measurement System to help organizations define improvement opportunities.

The CAST Ranking System relies on

  1. Weight and Critical contribution option of Quality Rules and Technical Criteria
  2. Propagated Risk Index
  3. Transaction Risk Index

Weight and Critical contribution option of Quality Rules and Technical Criteria

This Ranking capability lets Project Managers, Architects, ... define improvement actions based on the aggregation weight and critical contribution as defined in

Propagated Risk Index

This Ranking capability lets Project Managers, Architects, ... define improvement actions based on the aggregation weight as defined in:

as well as the execution call graph of the Objects which carry Violations.

In a nutshell:

  • the aggregation weights are compounded to get a sense of the concentration of issues in any given Object, through the definition of the Violation Index
  • the call graph is used to get a sense of the level of impact any given Object can have on the rest of the Application, through the definition of the Risk Propagation Factor
  • the above two indicators are combined into the Propagated Risk Index to identify the most impactful issues.

Violation Index (VI)

The aggregation weights are used to compute the Violation Index of an Object regarding a given Health Factor.

The Violation Index compounds the weight of all Quality Rules the Object violates, multiplied by the weight of the Technical Criteria, yet limiting the Quality Rules and Technical Criteria to the ones which contribute to the considered Health Factor.

Risk Propagation Factor (RPF)

The execution call graphs are used to compute the Risk Propagation Factor of an Object regarding a given Health Factor.

  • For Transferability, RPF = 0, as transferability-related issues do not impact calling objects
  • For Changeability, RPF = # of direct callers, as changeability-related issues only impact objects that directly use the considered Object
  • For Robustness, Performance, and Security, RPF = # of distinct call paths, as robustness-, performance-, and security-related issues impact all objects that use the considered Object, directly or indirectly

Note that there is no RPF value defined for other Business Criteria.

Resulting Propagated Risk Index (PRI)

PRI = (1 + RPF) x VI

Note that there is a PRI value defined only for Transferability, Changeability, Robustness, Performance, and Security.
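
A minimal sketch of these formulas for a single Object and one Health Factor, assuming made-up violations, weights, and call-graph figures, and assuming "compounds" means summing the products of rule weight and Technical Criterion weight:

```python
# Hypothetical Violation Index (VI), Risk Propagation Factor (RPF), and
# Propagated Risk Index (PRI) for one Object and one Health Factor.

# VI compounds, for each Quality Rule the Object violates, the rule's weight
# multiplied by the weight of its Technical Criterion within the Health Factor.
violations = [
    {"rule_weight": 7, "technical_criterion_weight": 8},
    {"rule_weight": 5, "technical_criterion_weight": 6},
]
vi = sum(v["rule_weight"] * v["technical_criterion_weight"] for v in violations)

# RPF depends on the Health Factor: 0 for Transferability, the number of direct
# callers for Changeability, the number of distinct call paths for Robustness,
# Performance, and Security. Here: 12 distinct call paths (e.g., Robustness).
rpf = 12

pri = (1 + rpf) * vi
print(vi, rpf, pri)  # -> 86 12 1118
```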

Transaction Risk Index (TRI)

This Ranking capability lets Project Managers, Architects, ... define improvement actions based on the aggregation weight as defined in:

as well as the transaction graphs.

The Transaction Risk Index compounds the Robustness, Performance, or Security Violation Index along the path of the transaction, from the entry point to the data-entities accessed by the transaction.

Note that there is a TRI value defined only for Robustness, Performance, and Security.
