Summary: This section explains how to compare two Assessment Models using CAST Management Studio to highlight differences between them.

Introduction

CAST's Assessment Model contains multiple settings and values that are used to measure the Quality and Quantity of a given Application, so the results of an analysis depend heavily on the settings defined in the Assessment Model you are using. If you change to a new Assessment Model (for example, when upgrading to a new release of AIP Core, or when switching to a custom Assessment Model), the results of your analyses will be impacted; for example, a change to a threshold value for a given Quality Rule could result in more or fewer violations of that rule. It is therefore important to be able to compare Assessment Models so that you are aware of what has changed should you decide to switch to a new model.

The compare process generates results in a .CSV file.

Compare process

In the CAST Management Studio, ensure that the two Assessment Models you want to compare are present in the Assessment Models view.

Click the Compare button.

A wizard will open, enabling you to select the two Assessment Models to compare and to define the name and location of the output .CSV file. Click Finish to start the compare process.

Results will be saved to your chosen .CSV file, which you can open with a tool such as Microsoft Excel.

Each row in the .CSV file represents a Business Criterion, Technical Criterion, Quality Rule, Distribution, or Measure, and the Change column indicates the difference detected for that item.
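If you want to filter or post-process the results programmatically, the .CSV file can be read with any scripting language. Below is a minimal Python sketch that prints only the rows reporting a difference; the file name and the "Change" column header are assumptions based on the description above, so verify them against your own output file.

  import csv

  # Minimal sketch, assuming the output file is named
  # "assessment_model_compare.csv" and that its header row contains a
  # column literally named "Change" -- both are assumptions, so check
  # them against your own file (and pass delimiter=";" to DictReader
  # if your export is semicolon-separated).
  with open("assessment_model_compare.csv", newline="", encoding="utf-8") as f:
      for row in csv.DictReader(f):
          if row.get("Change"):  # keep only rows reporting a difference
              print(row)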

Results

Identify and understand differences

The comparison tool will help you identify the differences between the Assessment Models and may help you decide whether you need to update your current Assessment Model. It identifies evolutions of the following types:

  • Functional configuration of every quality and sizing indicator
    • New indicator
    • Removed indicator
    • Renamed indicator
    • For Business and Technical Criteria
      • New contributions
      • Removed contributions
      • Updated contributions with a different weight
      • Updated contributions with a different critical contribution option value
    • For Quality Rules, Distributions, and Measures
      • Updated activation status
      • Updated grade computation thresholds
    • For Sizing Measures
      • Updated activation status
  • Functional configuration of Parameters
    • New parameter
    • Removed parameter
    • Renamed parameter
    • Updated default value
    • Updated overriding configuration
  • Technical configuration of Quality Rules, Distributions, and Measures, and of Sizing Measures
    • Updated implementation
    • For Quality Rules
      • Updated XXL option value
      • Updated Unified option value
      • Updated list of applicable technologies
        • Added applicable technology
        • Removed applicable technology
      • Updated associated information configuration

What should you do with differences?

In general, process the differences as follows (a small scripting helper for triaging the comparison rows is sketched after this list):

  • Functional configuration:
    • Additions and deletions are functional decisions
    • Activation status updates are functional decisions
    • Contribution-related configurations are functional decisions
  • Technical configuration:
    • Implementation and associated information configuration evolutions must be reviewed carefully to identify the "right" value, as they control the proper functioning of the platform
    • Applicable technology list, XXL, and Unified option updates must be reviewed carefully to identify the "right" value, as they control the proper assessment of applications
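
As a starting point for this triage, you can tally the rows of the comparison file by change type. The following Python sketch builds on the same assumptions as the one above (the file name and the "Change" column header are illustrative, not guaranteed by the tool):

  import csv
  from collections import Counter

  # Hypothetical triage helper: tally the differences by change type so
  # that functional and technical updates can be reviewed separately.
  # The "Change" column name is an assumption -- verify it against the
  # header row of your own comparison .CSV file.
  def summarize(path):
      with open(path, newline="", encoding="utf-8") as f:
          return Counter(row["Change"] for row in csv.DictReader(f) if row.get("Change"))

  for change_type, count in summarize("assessment_model_compare.csv").most_common():
      print(f"{count:5d}  {change_type}")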