Target audience: CAST AI Administrator

Introduction

Each CAST AIP release provides new features that improve the value of the platform and justify an upgrade. However, a number of changes or improvements can impact the measurement results/grades:

  • New or improved Quality Rules that perform deeper analysis
  • Updates to the Assessment Model, e.g. changes to rule weights, severities, or thresholds. This can be mitigated by using the "Preserve assessment model" option during the upgrade.
  • Improvements to the language analysis, e.g. more fine-grained detection of objects or links
  • Extended automatic discovery of files included in the analysis
  • Bug fixes that improve the precision of results
  • And, unfortunately, a new release may also introduce new bugs that impact the results until they are discovered and fixed

Below is a list of changes made to the current release of CAST AIP that are known to impact results. You can also consult Case Study - Measurement changes after upgrade for selected customer applications, which provides a more detailed analysis based on a few sample applications.

Analyzing the root causes of impacts to measurement results/grades

The following is a general description of the steps to take in order to compare pre- and post-upgrade results:

  • Step 1: Take a snapshot (including a source code analysis) with the previous release of CAST AIP before upgrading to the new release.
    • Check the list of applications to be analyzed, the list of files per application, and the list of SQL objects from the Analysis Service.
  • Step 2: Compare the source code in version 1 (before the upgrade) with the source code in version 2 (after the upgrade).
    • Compare the list of analyzed files, the list of files per application, and the list of SQL objects between the two Analysis Services (see the file-list comparison sketch after this list).
  • Step 3: Compare the results of the application analysis and the snapshot taken after the upgrade. This can be done by comparing the snapshots available in the Dashboard Service (see the snapshot comparison sketch below) to find the differences in:
    • Quality rules
    • Violations
    • Grades at Business Criteria level
    • Function Points
    • Transactions
    • Lines of code
  • Step 4: Compare the data functions and transactions across the source Analysis Service and the target Analysis Service after the upgrade.
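
To make the file-list comparison in Step 2 concrete, here is a minimal sketch, assuming you have exported the list of analyzed files from each Analysis Service to a plain text file with one path per line. The file names files_v1.txt and files_v2.txt, and the one-path-per-line format, are assumptions about your export, not a documented CAST interface:

```python
# Minimal sketch: diff two exported file lists to spot files that were
# added to or dropped from the analysis scope after the upgrade.
# "files_v1.txt" and "files_v2.txt" are hypothetical export names.
from pathlib import Path

def load_file_list(path: str) -> set[str]:
    """Read one file path per line, skipping blank lines."""
    return {line.strip()
            for line in Path(path).read_text().splitlines()
            if line.strip()}

before = load_file_list("files_v1.txt")  # exported before the upgrade
after = load_file_list("files_v2.txt")   # exported after the upgrade

for f in sorted(after - before):
    print(f"ADDED   {f}")   # picked up only by the new release
for f in sorted(before - after):
    print(f"REMOVED {f}")   # no longer included in the analysis
```

Files that appear only on one side often explain result differences directly, e.g. through extended automatic file discovery in the new release.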
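
Similarly, for the snapshot comparison in Step 3, here is a minimal sketch for comparing per-rule violation counts between two snapshots, assuming each snapshot has been exported to CSV with rule_id, rule_name, and violations columns. The file names and column names are assumptions about your export, not a documented Dashboard Service format:

```python
# Minimal sketch: compare per-rule violation counts between two
# hypothetical snapshot exports, "snapshot_v1.csv" and "snapshot_v2.csv".
import csv

def load_counts(path: str) -> dict[str, tuple[str, int]]:
    """Map rule_id -> (rule_name, violation count) from a CSV export."""
    with open(path, newline="") as fh:
        return {row["rule_id"]: (row["rule_name"], int(row["violations"]))
                for row in csv.DictReader(fh)}

v1 = load_counts("snapshot_v1.csv")  # pre-upgrade snapshot export
v2 = load_counts("snapshot_v2.csv")  # post-upgrade snapshot export

for rule_id in sorted(v1.keys() | v2.keys()):
    name = (v1.get(rule_id) or v2.get(rule_id))[0]
    n1 = v1.get(rule_id, (name, 0))[1]  # 0 means the rule is new in v2
    n2 = v2.get(rule_id, (name, 0))[1]  # 0 means the rule was retired
    if n1 != n2:
        print(f"{rule_id}  {name}: {n1} -> {n2} ({n2 - n1:+d})")
```

Rules that exist on only one side point to new or retired Quality Rules; large count swings on rules present in both snapshots point to analyzer improvements or bug fixes.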
