Problem Description
This page helps qualify cases where the grade, the number of failed objects, and/or the total number of objects of a given diagnostic-based metric change after a migration, even though the same source code is used:
Scenario 1
- Analyze the source code and generate snapshot 1
- Upgrade the CAST databases using a more recent CAST version
- Re-analyze the same source code and generate snapshot 2
>> The grade, the number of failed objects, and/or the total number of objects of a diagnostic-based metric differ when comparing snapshots 1 and 2
Scenario 2
- Analyze the source code and generate snapshot 1
- Using a more recent CAST version, create new CAST databases
- Analyze the same source code and generate snapshot 2 in the new CAST databases
>> The grade, the number of failed objects, and/or the total number of objects of a diagnostic-based metric differ when comparing snapshots 1 and 2
Applicable in CAST Version
| Release | Yes/No |
|---|---|
| 8.3.x | |
Observed on RDBMS
| RDBMS | Yes/No |
|---|---|
| CSS | |
Action Plan
- Get the customer's input
- Check whether the customer is comparing the same thing
- If the customer is comparing the same thing, justify the difference
- If the customer is not comparing the same thing, do not continue investigating
- If you cannot justify the difference with general explanations based on the improvements and modifications made between the CAST versions:
    - identify the objects making the difference between the two versions
    - try to explain why some objects violate the diagnostics in one version but not in the other. To do this, ask for the KBs involved in each snapshot computation
    - check that the objects generating the issue exist in both KBs with the same checksum (SRC)
    - check that the objects generating the issue have the same properties and the same number of links in both KBs
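The last two checks can be scripted once the object lists have been exported from each KB (for example with queries run against each knowledge base). The export format below — a mapping from full object name to a (checksum, property count, link count) tuple — is an assumption for illustration; adapt the field names to whatever you actually extract.

```python
# Compare object exports from the two knowledge bases to find the
# differences that could explain a change in violation counts.
# Each export is assumed to be: full object name -> (SRC checksum, property count, link count).

def compare_kb_exports(kb1, kb2):
    """Return objects missing from one KB and objects whose checksum/properties/links changed."""
    diffs = {
        "only_in_kb1": sorted(set(kb1) - set(kb2)),
        "only_in_kb2": sorted(set(kb2) - set(kb1)),
        "changed": [],
    }
    for name in sorted(set(kb1) & set(kb2)):
        if kb1[name] != kb2[name]:
            diffs["changed"].append((name, kb1[name], kb2[name]))
    return diffs

# Hypothetical exports (object names and values are invented for the example)
kb_before = {"App.ModuleA.foo": ("c0ffee", 12, 3), "App.ModuleA.bar": ("beef01", 8, 2)}
kb_after = {"App.ModuleA.foo": ("c0ffee", 12, 3), "App.ModuleA.bar": ("beef01", 8, 4)}

result = compare_kb_exports(kb_before, kb_after)
print(result["changed"])  # bar's link count changed after migration
```

An object appearing in `changed` with a different checksum means the source itself differs; a different link count with the same checksum points at an analyzer-side change between the two CAST versions.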
Get customer's input
- The central base(s) containing the 2 snapshots
- A screenshot of snapshot 1 with the URL of the page, showing the object name (application or module), the diagnostic, the grade, the number of failed objects, and the total number of objects
- A screenshot of snapshot 2 with the URL of the page, showing the object name (application or module), the diagnostic, the grade, the number of failed objects, and the total number of objects
Check if customer is comparing same thing
- Import the central database(s)
- Observe the problem as described in the 2 screenshots
- If the customer is:
    - analyzing the same source code on the same KB (before and after migrating the database)
    - generating the 2 snapshots on the same central base (before and after migrating the database), using exactly the same portfolio tree: same application with the same modules
    - comparing the grade and the number of failed/total objects at module level
    - >> In this case, we can consider that the customer is comparing the same thing. Go to the next step, Justify the difference
- If the customer is:
    - analyzing the same source code on the same KB (before and after migrating the database)
    - generating the 2 snapshots on the same central base (before and after migrating the database), using exactly the same portfolio tree: same application with the same modules
    - comparing the grade and the number of failed/total objects at Application level
    - Check for both snapshots that the application contains the same module(s):
        - If the applications in the two snapshots do not contain the same module(s), this justifies the difference reported by the customer. Stop the investigation
        - If the applications in the two snapshots contain the same module(s), we can consider that the customer is comparing the same thing:
            - Check the grade and the number of failed/total objects at each sub-component module level to identify the module(s) generating the difference
            - If the grade and the number of failed/total objects are the same for each sub-component module in both snapshots, the difference must be related to the metric used as the module aggregation weight
            - If you can find module(s) generating the difference, go to the next step, Justify the difference
- If the customer is:
    - analyzing the same source code on 2 different knowledge bases with 2 different CAST versions
    - generating the 2 snapshots on 2 different central bases with 2 different CAST versions
    - If the customer is comparing the grade and the number of failed/total objects at module level, ensure that the modules are defined identically on both KBs: same filter(s), same analysis job(s), same query (if explicit-list mode is used)
        - If the modules are defined identically on both KBs, we can consider that the customer is comparing the same thing. Go to the next step, Justify the difference
        - If the modules are not defined identically on both KBs, this justifies the difference reported by the customer. Stop the investigation
    - If the customer is comparing the grade and the number of failed/total objects at Application level:
        - If the applications in the two snapshots do not contain the same module(s), this justifies the difference reported by the customer. Stop the investigation
        - If the applications in the two snapshots contain the same module(s):
            - Check the grade and the number of failed/total objects at each sub-component module level to identify the module(s) generating the difference
            - If the grade and the number of failed/total objects are the same for each sub-component module in both snapshots, the difference must be related to the metric used as the module aggregation weight
            - If you can find module(s) generating the difference, then for each of these modules, ensure that the module is defined identically on both KBs: same filter(s), same analysis job(s), same query (if explicit-list mode is used)
                - If the modules are defined identically on both KBs, we can consider that the customer is comparing the same thing. Go to the next step, Justify the difference
                - If the modules are not defined identically on both KBs, this justifies the difference reported by the customer. Stop the investigation
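The module-by-module comparison above can be sketched in a few lines once the per-module figures are collected from both snapshots. The input shape (module name mapped to a grade/failed/total tuple) and the module names are assumptions for illustration.

```python
# Identify which module(s) generate the difference between two snapshots.
# Inputs are assumed to be: module name -> (grade, failed count, total count),
# as read from the snapshot pages or the central base.

def modules_generating_difference(snap1, snap2):
    common = set(snap1) & set(snap2)
    # Modules present in only one snapshot already justify a difference at Application level.
    missing = sorted(set(snap1) ^ set(snap2))
    changed = sorted(m for m in common if snap1[m] != snap2[m])
    return {"missing_modules": missing, "changed_modules": changed}

# Hypothetical figures for the two snapshots
snap1 = {"GUI": (3.2, 10, 250), "Server": (2.8, 40, 900)}
snap2 = {"GUI": (3.2, 10, 250), "Server": (2.5, 55, 900)}

result = modules_generating_difference(snap1, snap2)
print(result)
```

If both lists come back empty while the Application-level grade still differs, the difference must come from the metric used as the module aggregation weight, as noted above.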
Justify the difference
- Check the release notes to get the improvements and evolutions of quality rules: see the sections 'Quality Model - Major Evolutions and Improvements' and 'Changes in Metric or Quality Rule Results'
- Check the fixed bugs in JIRA: search JIRA with the full name of the quality rule generating the difference
- Using the search results, verify whether the corrections made can justify the change in failed/detailed objects:
    - the correction allows more violations to be identified: the number of failed objects should increase in this case
    - or the correction avoids wrong violations: the number of failed objects should decrease in this case
- If the diagnostic is based on the analysis results, check whether any bug fixed in the analyzer impacts the diagnostic
- Compare the scripts of the detail/total procedure(s) in both CAST versions: if you can identify modifications, check whether they can explain the behavior reported by the customer (i.e., whether the number of failed objects should increase or decrease)
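Once the two procedure scripts have been extracted, a plain textual diff is enough to spot the modification. A minimal sketch using Python's standard `difflib` module follows; the procedure text is invented for illustration and does not reflect any real CAST procedure.

```python
# Diff the detail/total procedure source extracted from two CAST versions
# to spot the modification that could change the failed-object count.
import difflib

# Hypothetical procedure bodies (for illustration only)
proc_vA = """CREATE PROCEDURE DIAG_X_DETAIL AS
SELECT o.id FROM objects o WHERE o.kind = 'method';"""

proc_vB = """CREATE PROCEDURE DIAG_X_DETAIL AS
SELECT o.id FROM objects o WHERE o.kind IN ('method', 'constructor');"""

diff = list(difflib.unified_diff(proc_vA.splitlines(), proc_vB.splitlines(),
                                 fromfile="versionA", tofile="versionB", lineterm=""))
print("\n".join(diff))
```

In this invented example the newer procedure matches more object kinds, so the number of failed objects should increase, which is exactly the kind of justification the step above is after.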
Notes
Related Pages