Problem Description

This page helps qualify issues where the grade, the number of failed objects, and/or the total number of objects have changed for a given diagnostic-based metric after a migration, even though the same source code is used:

Scenario 1

  • Analyze the source code and generate snapshot 1
  • Upgrade the CAST databases using a more recent CAST version
  • Re-analyze the same source code and generate snapshot 2
    >> The grade, the number of failed objects, and/or the total number of objects differ for a diagnostic-based metric when comparing snapshots 1 and 2

Scenario 2

  • Analyze the source code and generate snapshot 1
  • Using a more recent CAST version, create new CAST databases
  • Analyze the same source code and generate snapshot 2 in the new CAST databases
    >> The grade, the number of failed objects, and/or the total number of objects differ for a diagnostic-based metric when comparing snapshots 1 and 2
Applicable in CAST Version

Release    Yes/No
8.3.x      (tick)
8.2.x      (tick)
8.1.x      (tick)
8.0.x      (tick)
7.3.x      (tick)
7.2.x      (tick)
7.0.x      (tick)
Observed on RDBMS

RDBMS                   Yes/No
Oracle Server           (tick)
Microsoft SQL Server    (tick)
CSS2                    (tick)
Action Plan
  1. Get the customer's input
  2. Check if the customer is comparing the same thing
    1. If the customer is comparing the same thing, justify the difference
    2. If the customer is not comparing the same thing, do not continue investigating
  3. If you cannot justify the difference with general explanations based on the improvements and modifications made between the CAST versions:
    1. Identify the objects generating the difference between the two versions
    2. Try to explain why some objects violate the diagnostics in one version but not in the other. To do this, ask for the knowledge bases (KBs) involved in each snapshot computation, then:
      1. Check that the objects generating the issue exist in both KBs with the same source checksum (SRC)
      2. Check that the objects generating the issue have the same properties and the same number of links in both KBs
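The per-object comparison in step 3 can be sketched as follows. This is a minimal illustration, not an actual CAST tool: it assumes you have already extracted, from each KB, a mapping of object full name to (source checksum, link count) — the extraction queries depend on the actual KB schema and are not shown here.

```python
# Sketch: compare objects flagged by a diagnostic across two knowledge bases
# (kb1 = old CAST version, kb2 = new CAST version). The data shapes are
# illustrative; extract them from the real KBs with queries adapted to the schema.

def compare_objects(kb1, kb2):
    """kb1/kb2 map an object's full name to (source_checksum, link_count).

    Returns objects present on only one side, and objects whose checksum or
    link count differ -- the candidates that explain a violation difference.
    """
    only_kb1 = sorted(set(kb1) - set(kb2))
    only_kb2 = sorted(set(kb2) - set(kb1))
    changed = sorted(name for name in set(kb1) & set(kb2)
                     if kb1[name] != kb2[name])
    return only_kb1, only_kb2, changed

# Hypothetical example: one object lost a link, one object is new in kb2.
kb1 = {"App.ModuleA.f": ("c0ffee", 3), "App.ModuleA.g": ("deadbe", 2)}
kb2 = {"App.ModuleA.f": ("c0ffee", 3), "App.ModuleA.g": ("deadbe", 1),
       "App.ModuleA.h": ("abc123", 0)}

only1, only2, changed = compare_objects(kb1, kb2)
print(only1, only2, changed)  # [] ['App.ModuleA.h'] ['App.ModuleA.g']
```

Objects reported in `changed` or present on only one side are the first ones to inspect when explaining why the failed/total counts moved between the two versions.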

Get customer's input

  • The central base(s) containing the 2 snapshots
  • A screenshot for snapshot 1, with the URL of the page, showing the object name (application or module), the diagnostic, the grade, and the number of failed and total objects
  • A screenshot for snapshot 2, with the URL of the page, showing the object name (application or module), the diagnostic, the grade, and the number of failed and total objects

Check if customer is comparing same thing

  1. Import the central database(s)
  2. Reproduce the problem as described in the 2 screenshots
  3. If the customer is:
    1. analyzing the same source code on the same KB (before and after migrating the database)
    2. generating the 2 snapshots on the same central base (before and after migrating the database) and using exactly the same portfolio tree: same application with same modules
    3. comparing the grade and the number of failed/total objects at module level
    4. >> In this case, we can consider that the customer is comparing the same thing. Go to the next step to Justify the difference
  4. If the customer is:
    1. analyzing the same source code on the same KB (before and after migrating the database)
    2. generating the 2 snapshots on the same central base (before and after migrating the database) and using exactly the same portfolio tree: same application with same modules
    3. comparing the grade and the number of failed/total objects at Application level:
      1. Check for both snapshots that the application contains the same module(s)
        1. If the applications in both snapshots do not contain the same module(s), this justifies the difference reported by the customer. We stop the investigation
        2. If the applications in both snapshots contain the same module(s), we can consider that the customer is comparing the same thing:
          1. Check the grade and the number of failed/total objects at each sub-component module level to identify the module(s) generating the difference
            1. If the grade and the number of failed/total objects are the same for each sub-component module in both snapshots, the difference must be related to the metric used as module aggregation weight
            2. If you can find module(s) generating the difference, go to the next step to Justify the difference
  5. If the customer is:
    1. analyzing the same source code on 2 different knowledge bases with 2 different CAST versions
    2. generating the 2 snapshots on 2 different central bases with 2 different CAST versions:
      1. If the customer is comparing the grade and the number of failed/total objects at module level, ensure that the modules are defined identically on both KBs: same filter(s), same analysis job(s), same query (if explicit list mode is used)
        1. If the modules are defined identically on both KBs, we can consider that the customer is comparing the same thing. Go to the next step to Justify the difference
        2. If the modules are not defined identically on both KBs, this justifies the difference reported by the customer. We stop the investigation
      2. If the customer is comparing the grade and the number of failed/total objects at Application level:
      3. If the applications in both snapshots do not contain the same module(s), this justifies the difference reported by the customer. We stop the investigation
      4. If the applications in both snapshots contain the same module(s):
        1. Check the grade and the number of failed/total objects at each sub-component module level to identify the module(s) generating the difference
        2. If the grade and the number of failed/total objects are the same for each sub-component module in both snapshots, the difference must be related to the metric used as module aggregation weight
        3. If you can find module(s) generating the difference, then for each of these modules ensure that the module is defined identically on both KBs: same filter(s), same analysis job(s), same query (if explicit list mode is used)
          1. If the modules are defined identically on both KBs, we can consider that the customer is comparing the same thing. Go to the next step to Justify the difference
          2. If the modules are not defined identically on both KBs, this justifies the difference reported by the customer. We stop the investigation
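The application-level drill-down above can be sketched as a simple comparison of per-module results between the two snapshots. This is an illustrative helper, assuming you have already queried each snapshot for (grade, failed count, total count) per module; the data shapes are hypothetical, not the real central-base schema.

```python
# Sketch: find the sub-component module(s) generating the difference between
# two snapshots. snapX maps module name -> (grade, failed, total).

def differing_modules(snap1, snap2):
    """Return {module: (result_in_snap1, result_in_snap2)} for modules whose
    grade or failed/total counts differ (None if missing in a snapshot)."""
    diffs = {}
    for module in sorted(set(snap1) | set(snap2)):
        r1, r2 = snap1.get(module), snap2.get(module)
        if r1 != r2:
            diffs[module] = (r1, r2)
    return diffs

# Hypothetical per-module results for the two snapshots.
snap1 = {"ModuleA": (3.2, 10, 200), "ModuleB": (4.0, 0, 50)}
snap2 = {"ModuleA": (3.2, 10, 200), "ModuleB": (3.8, 2, 50)}

print(differing_modules(snap1, snap2))
# {'ModuleB': ((4.0, 0, 50), (3.8, 2, 50))}
```

If the helper returns an empty dict while the application-level grades still differ, that matches the case above where the difference must come from the metric used as module aggregation weight.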

Justify the difference

  1. Check the release notes for improvements and evolutions of quality rules: see the sections 'Quality Model - Major Evolutions and Improvements' and 'Changes in Metric or Quality Rule Results'
  2. Check the fixed bugs in JIRA: search JIRA with the full name of the quality rule generating the difference
    1. Using the search results, verify whether the correction made can justify the modification of the failed/detail objects:
      1. If the correction allows identifying more violations, the number of failed objects should increase
      2. If the correction allows avoiding wrong violations, the number of failed objects should decrease
  3. If the diagnostic is based on the analysis results, check whether any bug fixed in the analyzer impacts the diagnostic
  4. Compare the scripts of the detail/total procedure(s) in both CAST versions. If you can identify modifications, check whether they can explain the behavior reported by the customer (i.e., whether the number of failed objects should increase or decrease)
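Step 4 above amounts to diffing the procedure scripts shipped with each CAST version. A minimal sketch using Python's standard `difflib`; the script bodies below are invented placeholders, not real CAST procedures, and in practice you would read the scripts from the two product installations.

```python
# Sketch: diff the detail/total procedure scripts of two CAST versions to
# spot changes that could alter the failed/total counts.
import difflib

old_proc = """\
insert into DSS_VIOLATIONS  -- hypothetical script body
select id from OBJECTS where complexity > 20
"""
new_proc = """\
insert into DSS_VIOLATIONS  -- hypothetical script body
select id from OBJECTS where complexity > 20 and is_generated = 0
"""

diff = list(difflib.unified_diff(old_proc.splitlines(), new_proc.splitlines(),
                                 "old_version.sql", "new_version.sql",
                                 lineterm=""))
print("\n".join(diff))
```

In this invented example, the added `and is_generated = 0` condition narrows the selection, so the number of failed objects would be expected to decrease, which is exactly the kind of reasoning step 4 asks for.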
Notes

 

Related Pages

 
