Quality Model adaptation options
Aggregated quality indicators - Business and Technical Criteria
Within the CAST Assessment Model, quality indicator aggregations are always performed according to the same principles, regardless of the level in the Assessment Model: a weighted average of lower-level grades, with the optional ability to cap the resulting grade at the grade of a "critical" contributor.
However, for organization and reporting purposes, aggregated quality indicators are named differently. They are Business Criteria and Technical Criteria:
- Business Criteria are referred to as 'Health Factors' when they are business-oriented and 'Rule Compliance' when they are development-oriented.
- Each Business Criterion is the aggregation of multiple Technical Criteria.
- Each Technical Criterion is the aggregation of multiple elementary quality indicators, be they Quality Rules, Quality Distributions, or Quality Measures.
A default list of constitutive quality indicators for each aggregated quality indicator is delivered with the standard CAST Assessment Model; however, this list can be adapted if some contributions are missing or considered inappropriate in a given context.
A default list of contribution weights is delivered with the standard CAST Assessment Model; however, contribution weights can be adapted if some contributions are considered more or less important in a given context.
The contribution of a quality indicator to an aggregated quality indicator can be tagged with the "critical" option. This option caps the grade of the aggregated quality indicator at the lowest grade of its "critical" contributions.
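The aggregation principle described above can be sketched as follows. This is a minimal illustration only - the function name, data structure, and exact capping logic are assumptions, not CAST's internal implementation:

```python
def aggregate_grade(contributions):
    """Weighted average of contributor grades, capped at the lowest
    grade among contributors flagged as "critical".

    contributions: list of (grade, weight, is_critical) tuples.
    """
    total_weight = sum(w for _, w, _ in contributions)
    weighted_avg = sum(g * w for g, w, _ in contributions) / total_weight
    # The optional "critical" cap: the aggregated grade cannot exceed
    # the lowest grade of any critical contribution.
    critical_grades = [g for g, _, crit in contributions if crit]
    if critical_grades:
        return min(weighted_avg, min(critical_grades))
    return weighted_avg

# A critical contributor graded 2.1 caps an otherwise ~3.3 average:
print(aggregate_grade([(3.8, 2, False), (3.5, 1, False), (2.1, 1, True)]))  # 2.1
```

With no critical contribution, the result is the plain weighted average of the lower-level grades.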
This feature is optional, yet using it leads to better results in the post-6.1-release portal pages: diagnostic-based quality indicators must be selected carefully to deliver the best value in these pages.
A default list of active quality indicators is delivered with the standard CAST Assessment Model; however, the active option can be adapted if some indicators are not to be computed in a given context.
Elementary quality indicators (that is, Quality Rules) natively handle technology-oriented information: the list of technologies the Quality Rule is applicable to.
However, some quality indicator sets may not be relevant to a given Module, Application or System for historical or functional reasons.
E.g.: a legacy Cobol Module is known to be of poor quality regarding its algorithmic complexity; this Module won't be evolved; there is therefore no need to spend resources on checking its quality; the Module is nevertheless part of the Portfolio Tree for inventory reasons; you can exclude this Module.
In this context, a given Module, Application or System must be excluded from the computation so as to reflect the situation.
Elementary quality indicators - Quality Rules, Distributions, and Measures
Within the CAST Assessment Model, the elementary quality indicators are Quality Rules, Quality Distributions, and Quality Measures.
The coarse-grained execution-exception principles described above for aggregated quality indicators also apply to elementary quality indicators.
E.g.: the Reuse by call distribution delivers its best value on framework Modules; knowing which Modules are supposed to be frameworks is external to source code analysis; other Modules may not need to be assessed with this quality indicator; you can exclude these other Modules.
E.g.: Corporate naming policy handles project-specific prefix controls; you may want to run different prefix controls on different applications; you can then create control variations and run them where appropriate.
In this context, a given Module, Application, or System must be excluded from the computation so as to reflect the situation.
In some cases, objects within a Module may not be subject to the development rules for some historical or functional reasons.
E.g.: some objects do violate a rule, but this situation is accepted because of their functional specificity.
In this context, an object must be excluded from the computation so as to reflect the situation. This object-level exclusion is handled through the pages of the CAST Engineering Dashboard.
Quality Rules, Quality Distributions, and Quality Measures can rely (when appropriate) on parameters whose values impact the results - grades and statuses - of the quality indicators. Parameters can, for instance:
- define a defect for a Diagnostic-based Quality indicator as items of a given type whose numerical property value is greater than a user-defined parameter;
- define the way to distribute objects of a given type into the Categories of a Quality Distribution, according to whether a numerical property value is greater than a first user-defined parameter and lower than or equal to a second one.
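The two parameter patterns above can be sketched as follows. The function names and the example property (cyclomatic complexity) are illustrative assumptions, not actual product parameters:

```python
def is_defect(property_value, threshold):
    """Defect test for a diagnostic-based indicator: an item is a
    defect when its numerical property exceeds the user-defined
    parameter."""
    return property_value > threshold

def in_category(property_value, lower, upper):
    """Distribution placement: an object falls into a category when
    its property is greater than the first parameter and lower than
    or equal to the second."""
    return lower < property_value <= upper

# E.g., a cyclomatic complexity of 25 against a defect threshold of 20:
print(is_defect(25, 20))        # True
# ...and the same value landing in the (20, 30] distribution category:
print(in_category(25, 20, 30))  # True
```

Changing a parameter value therefore shifts which items count as defects, or how objects split across distribution categories.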
Updating parameter values is part of the normal use of the product to better match the target context.
The thresholds used to turn rule-compliance percentages, distribution category shares, and measure values into quality grades (for Quality Rules, Quality Distributions, and Quality Measures respectively) can also be updated.
Updating the status thresholds should be reserved for experienced users, when the default behavior does not meet expectations.
As a general guideline, CAST Quality Rules use one of the following two threshold profiles:
- For Quality Rules with a critical contribution into a Technical Criterion, a "strict" threshold profile:
- Threshold to get more than 1.00 grade: 98%
- Threshold to get more than 2.00 grade: 99%
- Threshold to get more than 3.00 grade: 99.5%
- Threshold to get 4.00 grade: 99.99%
- For other Quality Rules, a "lenient" threshold profile:
- Threshold to get more than 1.00 grade: 50%
- Threshold to get more than 2.00 grade: 90%
- Threshold to get more than 3.00 grade: 95%
- Threshold to get 4.00 grade: 99%
Exceptions and differences can occur when needed. However, it is good practice to align on these two profiles.
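Assuming a linear interpolation between consecutive grade thresholds (the exact grading function is internal to the product, so this is only a sketch), the two profiles quoted above could be applied as follows:

```python
# Compliance thresholds for grades 1.00, 2.00, 3.00, 4.00 (from above).
STRICT = [98.0, 99.0, 99.5, 99.99]
LENIENT = [50.0, 90.0, 95.0, 99.0]

def grade(compliance_pct, thresholds):
    """Map a compliance percentage to a grade between 1.00 and 4.00,
    interpolating linearly between consecutive thresholds (assumed)."""
    if compliance_pct <= thresholds[0]:
        return 1.0
    if compliance_pct >= thresholds[3]:
        return 4.0
    for i in range(3):
        lo, hi = thresholds[i], thresholds[i + 1]
        if compliance_pct <= hi:
            return (i + 1) + (compliance_pct - lo) / (hi - lo)
    return 4.0

print(round(grade(92.5, LENIENT), 2))  # midway between 90% and 95% -> 2.5
print(round(grade(99.0, STRICT), 2))   # exactly the 2.00 threshold -> 2.0
```

The same 99% compliance yields a grade of 4.00 under the lenient profile but only 2.00 under the strict one, which is why critical contributions get the stricter thresholds.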
One can decide on the weight of a Module relative to other Modules when aggregating the results of multiple Modules into an Application.
By default, each Module has the exact same weight, meaning that every Module is as important as the others.
One can change this behavior to weight a Module according to one of its Technical Size sizing indicator values, one of its Functional Weight sizing indicator values, one of its uploaded Background Facts values, or its uploaded Business Value.
This feature is optional, especially as users can define Modules the way they want: Modules can be built so as to have the same weight.
When to use it? As soon as results obtained with the default behavior lead to comments such as "this Module is more important than that one, so the Application grade should be impacted by its grade" or "this Module is bigger than that one, so the Application grade should be impacted by its grade", there is a need to weight Modules in a customized way.
The best candidates are:
- the "Business Value" to have critical components impact the overall quality assessment,
- the "number of Backfired Function Points" to have the functional weight of components impact the overall quality assessment (valid for all technologies),
- the "number of Artifacts" to have the technical size impact the overall quality assessment (valid for all technologies),
- and the "number of LOC" (pros: anyone can understand it; cons: some components have no LOC, and if you build Modules from these components alone, those Modules will not impact the overall quality assessment)
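The effect of weighting Modules by a sizing value can be sketched as follows. The dictionary keys and function name are illustrative assumptions, not a product API:

```python
def application_grade(modules, weight_key=None):
    """Aggregate Module grades into an Application grade.

    modules: list of dicts with a 'grade' and optional sizing values.
    With no weight_key, every Module counts equally (default behavior);
    otherwise each Module is weighted by the chosen sizing value.
    """
    if weight_key is None:
        return sum(m["grade"] for m in modules) / len(modules)
    total = sum(m[weight_key] for m in modules)
    return sum(m["grade"] * m[weight_key] for m in modules) / total

modules = [
    {"grade": 3.5, "loc": 1000},   # small, good Module
    {"grade": 2.0, "loc": 9000},   # large, poor Module
]
print(application_grade(modules))         # equal weights -> 2.75
print(application_grade(modules, "loc"))  # LOC-weighted  -> 2.15
```

With LOC weighting, the large poor-quality Module pulls the Application grade down much more than under the default equal-weight behavior.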
Sizing Model adaptation options
Function Points configuration
Backfired Function Points
The "Backfired Function Points" quality indicator features parameters to tune the 'FP / LOC' ratio for each technology.
Default values are fine, but if a new language has been added, make sure there is a valid value for it; otherwise, the <any type> value will be used.
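The backfiring idea can be sketched as follows. The ratio values are illustrative placeholders, not CAST's shipped defaults (they are expressed here as LOC per FP, i.e. the inverse of the 'FP / LOC' ratio):

```python
# Assumed per-technology ratios for illustration only.
LOC_PER_FP = {
    "Cobol": 100,      # assumed: ~100 LOC per Function Point
    "Java": 50,        # assumed: ~50 LOC per Function Point
    "<any type>": 60,  # fallback when a language has no tuned value
}

def backfired_fp(language, loc):
    # Fall back to the <any type> entry when the language is unknown,
    # mirroring the behavior described for newly added languages.
    return loc / LOC_PER_FP.get(language, LOC_PER_FP["<any type>"])

print(backfired_fp("Java", 5000))     # 100.0
print(backfired_fp("NewLang", 6000))  # falls back to <any type> -> 100.0
```

This is why an untuned new language can silently skew the FP estimate: its LOC are converted with the generic <any type> ratio instead of a language-appropriate one.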
Automated Function Points
For more details about Automated FP, please refer to "CAST Automated Function Points Estimation" appendix.
Unadjusted Data Function Points
Unadjusted Data FP computing is based on database Tables and on objects declared as database mappings.
If the analysis does not contain any such data structure, the quality indicator will return zero FP.
Unadjusted Transactional Function Points
Unadjusted Transactional FP computing is based on database Tables and on user-facing forms or screens (User Interface elements). In case the language does not feature native form or screen concepts, user-facing components are found using naming conventions or base classes that user-facing components inherit from.
If the analysis does not contain any such data structure, the quality indicator will return zero FP; likewise, if the analysis does not contain any language with native form or screen concepts and user-facing component selection is not configured, the quality indicator will return zero FP.
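The two zero-FP conditions above can be summed up in a small sketch; the argument names are assumptions for illustration only:

```python
def transactional_fp_is_zero(db_tables, has_native_ui, ui_selection_configured):
    """The Unadjusted Transactional FP indicator returns zero FP when
    there are no database structures to count, or when no user-facing
    components can be identified: the analyzed languages have no native
    form/screen concept and no selection (naming convention or base
    class) is configured."""
    no_data = not db_tables
    no_ui = not has_native_ui and not ui_selection_configured
    return no_data or no_ui

print(transactional_fp_is_zero([], True, False))           # True: no tables
print(transactional_fp_is_zero(["T_ORDER"], False, False)) # True: no UI found
print(transactional_fp_is_zero(["T_ORDER"], False, True))  # False
```

Checking both inputs before an analysis avoids a misleading zero-FP result on applications that clearly do have transactions.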