This action is only necessary if you are using the CAST Dashboards.

At the core of CAST, and of the extensions used during an analysis, is the Assessment Model: a set of rules, sizing/quality measures and distributions, organized in a hierarchy of parent Technical Criteria and Business Criteria, that is used to grade an Application and to report any defects (code violations) in the Application's source code. These rules, measures and distributions are predefined and preconfigured according to established best practices; however, some aspects can be modified to match your own environment, for example:

  • The weight (i.e. the "importance") of a rule within its parent Technical Criterion. This value can be changed if some contributions are considered more or less important in a given context.
  • The criticality of a rule (i.e. whether the rule is considered "critical" in a given context). Marking a rule as critical "thresholds" the aggregated grade: it cannot exceed the lowest grade among the critical contributions.
  • Whether a rule is enabled or disabled for the next analysis.
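To make these three settings concrete, here is a minimal sketch of how a grade could be aggregated from rule contributions. The `Rule` class and `criterion_grade` function are hypothetical illustrations, not CAST's actual data model or computation; the sketch assumes a simple weighted average, thresholded by the lowest grade among critical rules, with disabled rules excluded.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    grade: float      # grade of this rule, e.g. on a 1-4 scale
    weight: int       # weight within the parent Technical Criterion
    critical: bool    # whether the rule is flagged "critical"
    enabled: bool = True

def criterion_grade(rules):
    """Aggregate enabled rule grades into a criterion grade:
    weighted average, capped by the lowest critical grade."""
    active = [r for r in rules if r.enabled]
    total_weight = sum(r.weight for r in active)
    weighted = sum(r.grade * r.weight for r in active) / total_weight
    critical_grades = [r.grade for r in active if r.critical]
    if critical_grades:
        # A critical rule "thresholds" the aggregated grade.
        return min(weighted, min(critical_grades))
    return weighted

rules = [
    Rule("Avoid empty catch blocks", grade=3.5, weight=7, critical=False),
    Rule("Avoid SQL injection",      grade=2.0, weight=9, critical=True),
    Rule("Avoid deep nesting",       grade=3.8, weight=5, critical=False),
]
print(round(criterion_grade(rules), 2))  # 2.0: capped by the critical rule
```

Without the critical flag, the weighted average alone (about 2.93 here) would be the grade; the critical rule pulls it down to its own grade of 2.0, which is the thresholding behavior described above.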

See Application - Config - Assessment Model for more information.

Modifying the Assessment Model is considered standard practice; however, these updates must be performed with care, because the legitimacy of trend and comparison information depends heavily on the methodology you use for the update. If the Assessment Model is not homogeneous over time and across contexts, assessment results cannot be compared. Even for one-shot assessments, users tend to compare results - outside of the dashboard context - with their previous experiences. Homogeneity therefore matters as much for one-shot assessments as for repeated assessments, and you should proceed with care.