Purpose (problem description)

This page describes how to investigate differences in a business criteria grade (for example, the Security grade) between two snapshots when the source code has not changed between the two versions.

Observed in CAST AIP

Release | Yes/No
8.3.x   | (tick)

Observed on RDBMS

RDBMS | Yes/No
CSS   | (tick)
Special Cases

CASE #1:

This may be due to changes in your module configuration and the effect of shared objects.

For example, suppose you have four analysis units: DB1, DB2, CODE1, and CODE2.

  • If you then create a module, MOD1, containing CODE1, DB1, and DB2, and then another module, MOD2, containing CODE2, DB1, and DB2, then both modules will contain violations for the DB1 and DB2 analysis units, and you will see the effects of this module configuration on the dashboard.
  • You may also have created an object filter on your module that does not filter as initially expected, so that instead of getting a subset of the analyzed items, you get the full content of the application. With such an improperly configured module filter, MOD1 may in fact contain DB1, DB2, CODE1, and CODE2, and MOD2 may also contain the DB1, DB2, CODE1, and CODE2 analysis units, which effectively doubles the violation counts on the dashboard when looking at the entire application.

To determine if you have shared objects between modules, you can run this query on the central base:

SELECT DISTINCT o.OBJECT_ID AS SHARED_OBJECT,
       obj.OBJECT_NAME,
       obj.OBJECT_FULL_NAME,
       o.SNAPSHOT_ID
FROM DSS_OBJECT_INFO o
JOIN DSS_OBJECTS obj ON obj.OBJECT_ID = o.OBJECT_ID
JOIN DSS_LINK_INFO l1 ON o.SNAPSHOT_ID = l1.SNAPSHOT_ID
                     AND l1.NEXT_OBJECT_ID = o.OBJECT_ID
                     AND l1.LINK_TYPE_ID IN (3, -3)
WHERE EXISTS (SELECT 1
              FROM DSS_LINK_INFO l2
              WHERE l2.SNAPSHOT_ID = o.SNAPSHOT_ID
                AND l2.NEXT_OBJECT_ID = o.OBJECT_ID
                AND l2.LINK_TYPE_ID IN (3, -3)
                AND l1.PREVIOUS_OBJECT_ID <> l2.PREVIOUS_OBJECT_ID)
  AND o.SNAPSHOT_ID = <replace_here_the_snapshot_id>
ORDER BY o.SNAPSHOT_ID, o.OBJECT_ID;

The SNAPSHOT_ID used in the above query can be obtained by running the following query on the central base and selecting the SNAPSHOT_ID of the snapshot you want to investigate:

select * from DSS_SNAPSHOTS; 
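For example, if DSS_SNAPSHOTS shows that the snapshot you are investigating has SNAPSHOT_ID 4 (a hypothetical value used here only for illustration), replace <replace_here_the_snapshot_id> in the shared-object query above with 4.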

If the above query returns rows, this explains the behavior at application level only: a shared object is counted as many times as it is shared between the modules/subsets of the application, but it is displayed only once in the list of very high risk objects. For example, an object violating a rule and belonging to both MOD1 and MOD2 contributes one violation to each module, so the application-level count includes it twice while the very high risk objects list shows it once.

To resolve this issue, you would need to configure your modules so that they no longer contain shared objects.

For example, in the first example described in this case, you could create four separate and distinct modules:

  • MOD1 - CODE1
  • MOD2 - CODE2
  • MOD3 - DB1
  • MOD4 - DB2

In the second example described in this case, you would need to modify your object filter so that it selects only the code you intended to include in each module.

CASE #2:
The number of violations for the metric “Avoid Functions and Procedures doing an Insert, Update, Delete, Create Table or Select without including error management” is unexpectedly high compared to previous versions of CAST-MS (7.0).
This is a known bug caused by objects being shared between the two subsets coming from the new SQL analyzer and the legacy analyzer.
In 7.3, the SQL Server technology should be replaced with Microsoft SQL, so the way to reduce the failed checks and avoid repeated counts is to deactivate the database subset. As a workaround, set the database subset to Inactive under the Modules tab in CAST-MS and run a full analysis followed by a snapshot generation. This issue is fixed in 7.3.4.


CASE #3:
The grade of a technical criterion is lower in the second version of the analysis because of the grade degradation of one of the quality rules belonging to that technical criterion. However, this quality rule is violated by only one object, and that object has not changed since the first version of the analysis (unchanged object, unchanged violation).
This is expected behaviour, considering the two explanations below:
  1. The total number of checked objects has been reduced: 1 violation among 1000 objects checked (a 0.1% failure rate) gives a better grade than 1 violation among 2 objects checked (a 50% failure rate), even though the object violating the rule is unchanged. Check the Computing Details tab for further information; a worked illustration is given right after this list.
  2. If the total number of checked objects is the same, the explanation could be that the thresholds for this metric have been changed; this can also be checked in the Computing Details tab.
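To make the first explanation concrete, the sketch below simply evaluates the two failure rates quoted above; the figures are illustrative only, and the statement can be run as-is on the CSS (PostgreSQL-based) central base.

-- Illustrative arithmetic only: the same single violation yields very different
-- compliance ratios when the total number of checked objects changes.
SELECT 1 - (1.0 / 1000) AS compliance_ratio_version_1,  -- 0.999: 1 violation among 1000 checked objects
       1 - (1.0 / 2)    AS compliance_ratio_version_2;  -- 0.500: 1 violation among 2 checked objects

Since the grade is derived from this ratio against the metric thresholds, a lower ratio in the second version lowers the grade even though the violating object is unchanged.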

Relevant customer input

  1. Sherlock (Needed: Central Base)
  2. Screenshots showing the issue with the URL visible (screenshots for the two values of the grade)

If the difference in the grade is at the system level

The grade at system level is displayed, for example, in the left panel of the assessment - portfolio level page (FRAME_PORTAL_PORTFOLIO_VIEW).

To investigate differences in the grade at system level, you need to:

  1. Go to the quick access.
  2. Check that all the applications present in the current version are present in the previous one.
  3. If some applications are not present in both versions, this may explain the differences. You cannot compare results at system level as the configuration is no longer the same.
  4. If all applications are present in both versions, then for each application do the following:
    1. Check the boxes corresponding to the two last snapshots and click on "monitor".
    2. You will get the business criteria grade values for both versions of the application.

      Warning

       After clicking on the monitor button, you may get the following warning. This warning indicates configuration changes. If the indicated change concerns the metric computation (assessment model), this could explain the issue. Refer to the "how to compute the grades" page to check the configuration differences.

    3. Get the business criteria grade value you are looking for (for example, Security).
    4. If the value of the business criteria grade is different in the two versions, follow: #if the difference in the grade is at the application level
    5. If the value of the business criteria grade is the same in the two versions, repeat the same steps for the other applications in the system.
  5. If the value of the business criteria grade is the same in the two versions for all applications in the system, check whether there have been changes in the grade computation of the business criteria. Check the configuration differences by computing the grade using the page CAST Engineering Dashboard - Information - How to compute the metric grade at system level. As a cross-check, you can also read the grade values directly from the central base; see the query sketch after this list.
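As a cross-check of the values shown in the dashboard, the sketch below reads the grade values stored in the central base for the two snapshots. The table DSS_METRIC_RESULTS and its columns (METRIC_ID, OBJECT_ID, SNAPSHOT_ID, METRIC_VALUE_INDEX, METRIC_NUM_VALUE), as well as the metric id, are assumptions to verify against your AIP version and assessment model (for example, Security is usually metric id 60016 in the default model); replace the placeholders in the same way as in the shared-object query above.

-- Sketch only: values recorded for a business criterion in the two snapshots being compared.
-- Table/column names and the metric id are assumptions; verify them in your central schema.
SELECT r.SNAPSHOT_ID,
       o.OBJECT_NAME,
       r.METRIC_ID,
       r.METRIC_VALUE_INDEX,
       r.METRIC_NUM_VALUE
FROM DSS_METRIC_RESULTS r
JOIN DSS_OBJECTS o ON o.OBJECT_ID = r.OBJECT_ID
WHERE r.METRIC_ID = <replace_here_the_business_criterion_id>
  AND r.SNAPSHOT_ID IN (<replace_here_the_previous_snapshot_id>, <replace_here_the_current_snapshot_id>)
ORDER BY o.OBJECT_NAME, r.SNAPSHOT_ID;

If the values returned for the two snapshots match what the dashboard displays, the difference likely comes from the computation inputs (total checks, thresholds, weights) rather than from a display issue.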

Conclusion

If you are comparing grades:

  1. If the difference is because one or more applications have different metric grades in the two snapshots, go to the next section #if the difference in the grade is at the application level to explain this difference.
  2. If the metric grade is the same for each application in both snapshots, explain to the customer the difference you have identified.
  3. If no difference is observed, or the computed values are not as shown in the dashboard, contact CAST Support to analyze the problem.

If you are validating the grade value:

  1. If the computed grade is the same as shown in the dashboard, explain to the customer the computation formula leading to the shown value.
  2. If the computed grade is not as shown in the dashboard, contact CAST Support to analyze the problem.

If the difference in the grade is at the application level

The grade at application level is displayed, for example, in the Current Overall Status panel of the assessment - application level page (FRAME_PORTAL_RISK_VIEW).

To investigate differences in the grade at application level, you need to:

  1. Go to the quick access
  2. Check that all the modules present in the current version are present in the previous one; a SQL sketch for listing the modules of each snapshot is given after this list.
  3. If some modules are not present in both versions, this may explain the differences. You cannot compare results at application level as the configuration is no longer the same.
  4. If all modules are present in both versions, then for each module do the following:
    1. Check the boxes corresponding to the two last snapshots and click on "monitor".
    2. You will get the business criteria grade values for both versions of the module.

      Warning

      After clicking on the monitor button, you may get the following warning. This warning indicates configuration changes. If the indicated change concerns the metric computation (assessment model), this could explain the issue. Refer to the "how to compute the grades" page to check the configuration differences.

    3. Get the business criteria grade value you are looking for (for example, Security).
    4. If the value of the business criteria grade is different in the two versions, follow: #if the difference in the grade is at the module level
    5. If the value of the business criteria grade is the same in the two versions, repeat the same steps for the other modules in the application.
  5. If the value of the business criteria grade is the same in the two versions for all modules in the application, check whether there have been changes in the grade computation of the business criteria. Check the configuration differences by computing the grade using the page CAST Engineering Dashboard - Information - How to compute the metric grade at application level.
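To verify the module content without going through the dashboard, the sketch below lists the modules/subsets recorded for each of the two snapshots, together with their object counts. It relies only on the tables already used on this page and assumes that link types 3 and -3 in DSS_LINK_INFO represent module/subset membership, as in the shared-object query above; replace the placeholders with the two snapshot ids.

-- Sketch: modules/subsets (the PREVIOUS_OBJECT_ID side of link types 3/-3)
-- present in each of the two snapshots, with the number of objects they contain.
SELECT l.SNAPSHOT_ID,
       m.OBJECT_NAME AS MODULE_NAME,
       COUNT(DISTINCT l.NEXT_OBJECT_ID) AS OBJECT_COUNT
FROM DSS_LINK_INFO l
JOIN DSS_OBJECTS m ON m.OBJECT_ID = l.PREVIOUS_OBJECT_ID
WHERE l.LINK_TYPE_ID IN (3, -3)
  AND l.SNAPSHOT_ID IN (<replace_here_the_previous_snapshot_id>, <replace_here_the_current_snapshot_id>)
GROUP BY l.SNAPSHOT_ID, m.OBJECT_NAME
ORDER BY m.OBJECT_NAME, l.SNAPSHOT_ID;

A module missing from one of the snapshots, or a noticeably different object count, indicates that the configurations are not comparable.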

 

Conclusion

If you are comparing grades:

  1. If the difference is because one or more modules have different metric grades in the two snapshots, go to the next section #if the difference in the grade is at the module level to explain this difference.
  2. If the metric grade is the same for each module in both snapshots, explain to the customer the difference you have identified.
  3. If no difference is observed, or the computed values are not as shown in the dashboard, contact CAST Support to analyze the problem.

If you are validating the grade value:

  1. If the computed grade is the same as shown in the dashboard, explain to the customer the computation formula leading to the shown value.
  2. If the computed grade is not as shown in the dashboard, contact CAST Support to analyze the problem.


 


If the difference in the grade is at the module level

The grade at module level is displayed, for example, in the Business Criteria panel of the Investigation - Quality model drilldown page (FRAME_PORTAL_INVESTIGATION_VIEW).

To investigate differences in the grade at module level, you need to:

  1. Go to the quick access 
  2. Check the boxes corresponding to the two last snapshots of the module and click on "compare".
  3. Since it is the same source code, you should not get any added/deleted/modified objects. If you do get some, this explains the change in the grade; a SQL sketch for listing such added objects is given after this list.
    For example, in the screenshot below, 16 new objects appear; hence the metric grade comparison can no longer be done as the configuration is no longer the same.

    For more details on the module content, you can check the boxes corresponding to the two last snapshots of the module and click on "monitor" to access the Value Comparison Table.

  4. If the content of the module is the same in both versions but the grade has changed, refer to the "how to compute the grades" page and check the computation differences by computing the grade using the page CAST Engineering Dashboard - Information - How to compute the metric grade at module level.
  5. If there are differences in the computation results between the two snapshots, refer to CAST Engineering Dashboard - Grades - Unexpected increase or decrease - module level to check for any change in the assessment model leading to the observed change of grade.
  6. If the content of the module is not the same in both versions, check the module definition in each version (in the CAST-MS Modules tab). Check whether the filtering is done in the same manner in both versions (filtering on analysis unit and/or object type and/or object name). If the filtering is not the same, the results are not comparable. Review the configuration of the module so that it is the same in both versions. Once the module configuration is done, a snapshot generation is required for the changes to be reflected in the dashboard.
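If the "compare" view shows added objects, the sketch below can help list them directly in the central base. It reuses only tables already used on this page and assumes that link types 3 and -3 represent module/subset membership and that the module keeps the same OBJECT_ID in both snapshots; replace the placeholders with your values.

-- Sketch: objects attached to the module in the current snapshot but not in the previous one.
SELECT DISTINCT obj.OBJECT_ID, obj.OBJECT_NAME, obj.OBJECT_FULL_NAME
FROM DSS_LINK_INFO cur
JOIN DSS_OBJECTS obj ON obj.OBJECT_ID = cur.NEXT_OBJECT_ID
WHERE cur.LINK_TYPE_ID IN (3, -3)
  AND cur.SNAPSHOT_ID = <replace_here_the_current_snapshot_id>
  AND cur.PREVIOUS_OBJECT_ID = <replace_here_the_module_object_id>
  AND NOT EXISTS (SELECT 1
                  FROM DSS_LINK_INFO prev
                  WHERE prev.LINK_TYPE_ID IN (3, -3)
                    AND prev.SNAPSHOT_ID = <replace_here_the_previous_snapshot_id>
                    AND prev.PREVIOUS_OBJECT_ID = <replace_here_the_module_object_id>
                    AND prev.NEXT_OBJECT_ID = cur.NEXT_OBJECT_ID)
ORDER BY obj.OBJECT_FULL_NAME;

Running the same query with the snapshot ids swapped lists the deleted objects.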


 

Conclusion

If you are comparing grades:

  1. If you identify any difference in the impacting factors, provide the customer with the identified difference(s) that explain the difference in the metric grades between the two snapshots.
  2. If the computed values are not as shown in the dashboard, or no difference is identified in the impacting factors, contact CAST Support.
  3. If the difference is in the compliance ratios and comes from the failed checks values, go to the False Violation or No Violation user guide.
  4. If the difference is in the compliance ratios and comes from the total objects values, ask the customer to provide the knowledge base used for each snapshot computation and contact CAST Support to analyze the problem.

 

Notes/comments