Introduction
One of the most recurring issues raised by our users concerns changes in analysis results, i.e. why have my existing analysis results changed unexpectedly during a new analysis? The stability of existing results is crucial for making informed decisions about which improvements to make in the application based on what CAST is recommending: if the results change unexpectedly, these decisions become more difficult.
In essence, changes to existing results in a new analysis can be caused by several different things, or a combination of them. Below is a non-exhaustive list of the most common causes:
Your source code has changed
It stands to reason that if your source code differs between two analysis runs and nothing else has changed, then the analysis results will differ. Of course, your source code may have been intentionally updated (based on what CAST has recommended), and this is the moment to use CAST to check whether the changes you have made have improved the overall quality of your application. However, if the source code differences are unexpected, you can also exploit CAST to find out where and why the source code has changed (see also the sketch after this list), for example:
- when using CAST Console you can use the Analysis Results Indicators feature to help you do this, see Application - Overview with Fast Scan or Application - Legacy Overview.
- when using the CAST Engineering Dashboard, you can view specific differences in terms of quality rule violations between two analyses/snapshots.
- when using the CAST Health Dashboard, you can view overall trends between successive snapshots.
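Independently of CAST, a quick way to confirm whether (and where) the delivered source code changed between two runs is to compare the two deliveries file by file. Below is a minimal Python sketch, assuming the two versions are available as local folders; the "deliveries/v1" and "deliveries/v2" paths are hypothetical placeholders for your own delivery locations.

```python
# Minimal sketch: compare two source code deliveries to confirm whether
# files changed between analyses. The directory paths are hypothetical
# placeholders; point them at your two delivered code versions.
import hashlib
from pathlib import Path

def file_hashes(root: str) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    base = Path(root)
    return {
        str(p.relative_to(base)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in base.rglob("*") if p.is_file()
    }

previous = file_hashes("deliveries/v1")  # hypothetical: earlier delivery
current = file_hashes("deliveries/v2")   # hypothetical: newer delivery

added = current.keys() - previous.keys()
removed = previous.keys() - current.keys()
modified = {p for p in previous.keys() & current.keys() if previous[p] != current[p]}

print(f"Added: {sorted(added)}")
print(f"Removed: {sorted(removed)}")
print(f"Modified: {sorted(modified)}")
```

If this script reports no added, removed or modified files, the source code is identical and the cause of the result changes lies elsewhere (see the following sections).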
You have changed some aspect of the CAST Imaging deployment
When a part of the CAST Imaging deployment changes between two analyses, this can impact existing analysis results. Typically this is an upgrade to one component of the deployment, where improvements and bug fixes have been published in a new release, for example:
- An upgrade of a CAST Extension used in your application analysis
- An upgrade of CAST AIP Core (the analysis engine)
Changes can include the following:
- New or improved Quality Rules to perform deeper analysis
- Updates to the Assessment Model, e.g. changes to rule weights, severity or thresholds. This can be mitigated by using the "Preserve assessment model" option - see Administration Center - Settings - Assessment Model Strategy.
- Improvements to the language analysis, e.g. more fine-grained detection of objects or links
- Extended automatic discovery of files included in the analysis
- Bug fixes to improve the precision of results
- And, unfortunately, a new release may also introduce new bugs which can impact the results until they are identified and fixed
This is a natural part of the lifecycle of CAST Imaging. We actively encourage you to upgrade to the latest release of a specific component where possible, and to consult the release notes for the upgraded component to ensure you are aware of what has changed. In addition, you can use the Change Forecast tool to compare two releases of a CAST Extension and generate a report detailing the potential impacts on existing results.
Finally, when performing a CAST AIP Core upgrade, CAST recommends running a post-upgrade analysis/snapshot so that the results can be compared pre- and post-upgrade.
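If you export per-rule violation counts for the pre- and post-upgrade snapshots (for example as CSV files), a short script can highlight exactly which rules moved. The following is a minimal sketch, assuming hypothetical file names and a CSV layout with "rule" and "violations" columns; adapt it to whatever export your dashboard produces.

```python
# Minimal sketch: diff per-rule violation counts exported before and after
# a CAST AIP Core upgrade. The file names and column headers ("rule",
# "violations") are hypothetical assumptions, not a fixed CAST format.
import csv

def load_counts(path: str) -> dict[str, int]:
    """Read a CSV export into a {rule name: violation count} map."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["rule"]: int(row["violations"]) for row in csv.DictReader(f)}

before = load_counts("violations_pre_upgrade.csv")   # hypothetical export
after = load_counts("violations_post_upgrade.csv")   # hypothetical export

for rule in sorted(before.keys() | after.keys()):
    delta = after.get(rule, 0) - before.get(rule, 0)
    if delta != 0:
        print(f"{rule}: {before.get(rule, 0)} -> {after.get(rule, 0)} ({delta:+d})")
```

Iterating over the union of rule names means rules that exist in only one snapshot, such as rules added or retired by the new release, are reported as well.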
You have changed the configuration of your application
If your source code has not changed and you have not made any changes to the CAST Imaging deployment (upgrades etc.), yet your analysis results are different, then it is most likely that some aspect of the analysis configuration has changed between the two analysis runs. You should investigate the Analysis Configuration settings to check whether any settings have changed, intentionally or unintentionally, for example by diffing saved copies of the configuration as sketched below:
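One practical habit is to save a copy of the analysis configuration alongside each analysis run and diff the copies whenever results shift. The sketch below assumes the settings were saved as flat JSON key/value files; the file names and structure are hypothetical assumptions, not an official CAST export format.

```python
# Minimal sketch: compare two saved copies of an analysis configuration to
# spot settings that changed between runs. Assumes flat JSON key/value
# files; the file names and structure are hypothetical.
import json

with open("analysis_config_old.json", encoding="utf-8") as f:
    old = json.load(f)
with open("analysis_config_new.json", encoding="utf-8") as f:
    new = json.load(f)

for key in sorted(old.keys() | new.keys()):
    if old.get(key) != new.get(key):
        print(f"{key}: {old.get(key)!r} -> {new.get(key)!r}")
```

Any setting printed by this comparison is a candidate explanation for the unexpected result changes and should be reviewed against the Analysis Configuration documentation.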