Release | Yes/No |
---|---|
8.3.x | |
8.2.x | |
8.1.x | |
8.0.x | |
7.3.x | |
7.2.x | |
7.0.x | |
RDBMS | Yes/No |
---|---|
Oracle Server | |
Microsoft SQL Server | |
CSS2 | |
CSS1 | |
Factors impacting the metric grade computation at module level
The factors impacting the metric grade computation at module level are:
- The module content
- The transformation thresholds
- The metric parameters
- The value of the 'multiple values per object' option
- The aggregation weights of the different sub-criteria and diagnostic-based metrics
- The critical criteria
Compute the metric grade
Diagnostic-based metrics
- The module content: the metric grade is computed from the total number of checked objects and the number of successful checks identified on the local site.
Get the number of total objects using the following query:

```sql
SELECT DML.OBJECT_ID, DMR.METRIC_VALUE_INDEX, DMR.SNAPSHOT_ID, SUM(DMR.METRIC_NUM_VALUE)
FROM <CB_NAME>.DSS_METRIC_RESULTS DMR, <CB_NAME>.DSS_MODULE_LINKS DML
WHERE DML.OBJECT_TYPE_ID = 20000
  AND DML.OBJECT_ID = <MODULE_ID>
  AND DML.MODULE_ID = DMR.OBJECT_ID
  AND DML.SNAPSHOT_ID = DMR.SNAPSHOT_ID
  AND DMR.METRIC_ID = <DIAG_ID>
  AND METRIC_VALUE_INDEX = 2
GROUP BY DML.OBJECT_ID, DMR.METRIC_VALUE_INDEX, DMR.SNAPSHOT_ID
```
Get the number of failed checks using the following query:

```sql
SELECT DML.OBJECT_ID, DMR.METRIC_VALUE_INDEX, DMR.SNAPSHOT_ID, SUM(DMR.METRIC_NUM_VALUE)
FROM <CB_NAME>.DSS_METRIC_RESULTS DMR, <CB_NAME>.DSS_MODULE_LINKS DML
WHERE DML.OBJECT_TYPE_ID = 20000
  AND DML.OBJECT_ID = <MODULE_ID>
  AND DML.MODULE_ID = DMR.OBJECT_ID
  AND DML.SNAPSHOT_ID = DMR.SNAPSHOT_ID
  AND DMR.METRIC_ID = <DIAG_ID>
  AND METRIC_VALUE_INDEX = 1
GROUP BY DML.OBJECT_ID, DMR.METRIC_VALUE_INDEX, DMR.SNAPSHOT_ID
```
- Number of successful checks = number of total objects - number of failed checks
- Compliance ratio = number of successful checks / number of total objects
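As a minimal sketch, the results of the two queries above combine as follows (the counts are hypothetical numbers, not values from any real Central Base):

```python
# Hypothetical query results, for illustration only.
total_objects = 250   # sum(METRIC_NUM_VALUE) where METRIC_VALUE_INDEX = 2
failed_checks = 25    # sum(METRIC_NUM_VALUE) where METRIC_VALUE_INDEX = 1

successful_checks = total_objects - failed_checks
compliance_ratio = successful_checks / total_objects

print(successful_checks)   # 225
print(compliance_ratio)    # 0.9
```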
Get the parameters of the diagnostic using the following query:

```sql
SELECT DMHP.SNAPSHOT_ID, PARAM_NUM_VALUE, PARAM_CHAR_VALUE
FROM <CB_NAME>.DSS_METRIC_HISTO_PARAMS DMHP, <CB_NAME>.DSS_METRIC_HISTO_TREE DMHT
WHERE DMHT.METRIC_PARENT_ID = <DIAG_ID>
  AND DMHT.METRIC_ID = DMHP.METRIC_ID
  AND DMHT.SNAPSHOT_ID = DMHP.SNAPSHOT_ID
```
Get the value of the 'multiple values per object' option using the following query:

```sql
SELECT METRIC_AGGREGATE_OPERATOR, METRIC_VALUE_INDEX
FROM <CB_NAME>.DSS_METRIC_VALUE_TYPES
WHERE METRIC_ID = <DIAG_ID>
  AND METRIC_VALUE_INDEX = 1
```
Get the transformation thresholds stored in the central base using the following query:

```sql
SELECT SNAPSHOT_ID, METRIC_ID, STATUS, THRESHOLD_1, THRESHOLD_2, THRESHOLD_3, THRESHOLD_4
FROM <CB_NAME>.DSS_METRIC_HISTO_THRESHOLDS
WHERE SNAPSHOT_ID = <ID_Of_the_Snapshot>
  AND METRIC_ID = <METRIC_ID>
```
Using the compliance ratio and the thresholds, compute the metric grade as described below.
The grade of each metric is the result of mapping the metric measure to a decimal value ranging from 1.00 to 4.00 using thresholds. The default thresholds are:
- More than 99% compliance is required to attain a grade of 4.00
- More than 90% compliance is required to attain a grade of at least 3.00
- More than 70% compliance is required to attain a grade of at least 2.00
- More than 30% compliance is required to attain a grade of at least 1.00
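As an example, the threshold mapping can be sketched in Python. The default thresholds come from the list above; the piecewise linear interpolation between thresholds (which produces decimal grades such as 2.5) is an assumption made for illustration, not a statement of the exact product formula:

```python
def grade_from_ratio(ratio, thresholds=(0.30, 0.70, 0.90, 0.99)):
    """Map a compliance ratio (0..1) to a grade in [1.0, 4.0].

    thresholds are the default values from the documentation; linear
    interpolation between them is an illustrative assumption.
    """
    t1, t2, t3, t4 = thresholds
    if ratio >= t4:
        return 4.0                              # above 99%: top grade
    if ratio <= t1:
        return 1.0                              # clamp at the lowest grade
    if ratio >= t3:
        return 3.0 + (ratio - t3) / (t4 - t3)   # between 90% and 99%
    if ratio >= t2:
        return 2.0 + (ratio - t2) / (t3 - t2)   # between 70% and 90%
    return 1.0 + (ratio - t1) / (t2 - t1)       # between 30% and 70%

print(grade_from_ratio(1.00))   # 4.0
print(grade_from_ratio(0.80))   # midway between 2.0 and 3.0
```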
Technical and business criteria
Get the aggregation weights of the different sub-criteria and diagnostic-based metrics using the following query:

```sql
SELECT SNAPSHOT_ID, METRIC_PARENT_ID, METRIC_ID, METRIC_INDEX, METRIC_TYPE, AGGREGATE_WEIGHT, METRIC_CRITICAL
FROM <CB_NAME>.DSS_METRIC_HISTO_TREE
WHERE SNAPSHOT_ID = <ID_Of_the_Snapshot>
  AND METRIC_PARENT_ID = <METRIC_ID_OF_THE_CRITERION>
```
Get the critical criteria using the following query:

```sql
SELECT SNAPSHOT_ID, METRIC_PARENT_ID, METRIC_ID, METRIC_INDEX, METRIC_TYPE, AGGREGATE_WEIGHT, METRIC_CRITICAL
FROM <CB_NAME>.DSS_METRIC_HISTO_TREE
WHERE SNAPSHOT_ID = <ID_Of_the_Snapshot>
  AND METRIC_CRITICAL = 1
  AND METRIC_PARENT_ID = <METRIC_ID_OF_THE_CRITERION>
```
Get the metric grades of the subcomponent metrics: you can read their values directly in the dashboard.
Compute the metric using the input values identified above.
If no critical-contribution metrics are used, the metric grade is the weighted average of the subcomponent metrics' grades. For example:

Subcomponent metric grade | Weight |
---|---|
2.34 | 10 |
1.76 | 7 |
2.54 | 10 |
3.10 | 5 |
2.79 | 10 |

The resulting metric grade = 104.52 / 42 ≈ 2.49
If critical-contribution metrics are used, the metric grade = min(weighted average of the subcomponent metrics' grades, minimum grade of the critical subcomponent metrics). For example:

Subcomponent metric grade | Weight | Critical contribution |
---|---|---|
2.54 | 7 | NO |
1.32 | 4 | NO |
3.11 | 8 | YES |
2.78 | 8 | NO |
3.12 | 8 | NO |
3.45 | 7 | YES |
2.54 | 8 | NO |

The resulting grade = min(2.79, min(3.11, 3.45)) = 2.79
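The aggregation rule above can be sketched as follows; the rows reproduce the second example table (the function name is illustrative, not a product API):

```python
def metric_grade(subcomponents):
    """Aggregate subcomponent grades into a metric grade.

    subcomponents: list of (grade, weight, is_critical) tuples.
    The grade is the weighted average, capped by the minimum grade of
    any critical-contribution subcomponent.
    """
    weighted_avg = (sum(g * w for g, w, _ in subcomponents)
                    / sum(w for _, w, _ in subcomponents))
    critical_grades = [g for g, _, crit in subcomponents if crit]
    return min([weighted_avg] + critical_grades)

# Rows of the second example table above.
rows = [(2.54, 7, False), (1.32, 4, False), (3.11, 8, True),
        (2.78, 8, False), (3.12, 8, False), (3.45, 7, True),
        (2.54, 8, False)]
print(round(metric_grade(rows), 2))  # 2.79
```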
Distribution-based metrics
Each category is processed like a diagnostic-based metric: thresholds turn the percentage that the category represents into a grade. The distribution grade is then the minimum value among the categories' grades.
When comparing the grades between two snapshots, you have to compare the size of each category: any difference in one (or more) category leads to a different metric grade for the distribution metric.
Example: in snapshot 2, the size of each category of the SQL Complexity distribution is provided in the next table:
Category | Count | % |
---|---|---|
Low SQL Complexity Artifacts | 92 | 100 |
Moderate SQL Complexity Artifacts | 0 | 0 |
High SQL Complexity Artifacts | 0 | 0 |
Very High SQL Complexity Artifacts | 0 | 0 |
Total | 92 | 100 |
In snapshot 1, the size of each category of the SQL Complexity distribution is provided in the next table:
Category | Count | % |
---|---|---|
Low SQL Complexity Artifacts | 121 | 84.03 |
Moderate SQL Complexity Artifacts | 23 | 15.97 |
High SQL Complexity Artifacts | 0 | 0 |
Very High SQL Complexity Artifacts | 0 | 0 |
Total | 144 | 100 |
The change in the size of the two categories 'Low SQL Complexity Artifacts' and 'Moderate SQL Complexity Artifacts' between snapshots 1 and 2 leads to different metric grades for the 'SQL Complexity distribution' metric in the two snapshots.
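The distribution rule described above can be sketched as follows; the per-category grades are hypothetical values, since the real ones come from the per-category thresholds in the metric configuration:

```python
def distribution_grade(category_grades):
    """The distribution grade is the minimum of the categories' grades."""
    return min(category_grades.values())

# Hypothetical per-category grades for the two snapshots above:
# in snapshot 1 the 'Moderate' category is non-empty and drags the
# grade down; in snapshot 2 all artifacts are in the 'Low' category.
snap1 = {"Low": 4.0, "Moderate": 3.2, "High": 4.0, "Very High": 4.0}
snap2 = {"Low": 4.0, "Moderate": 4.0, "High": 4.0, "Very High": 4.0}
print(distribution_grade(snap1), distribution_grade(snap2))  # 3.2 4.0
```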
The population of each category is computed by the procedure provided in the *Local Procedure name* field of the Main tab of the metric tree configuration window of ADGAdmin.
The factors impacting the size of the populations are the content of the KB and the thresholds used. These thresholds are not stored on the central site.
If you have to explain why the category sizes differ between the two snapshots, ask for the knowledge base(s) used for each snapshot computation. Once you have the KBs, proceed as follows:
- Get the list of objects used to compute the population size of the identified category by executing the local procedure of the associated category on the KB involved in the snapshot computation
- Compare the two lists and identify the differences
- Identify the reason leading to the differences (modification of the thresholds used, modification of the source code, modification of the job configuration, a fixed bug, etc.)
- If you cannot identify the reason, escalate to the foundation team.
For example, for the 'Moderate SQL Complexity Artifacts' category, the procedure used is APM_SCOPE_SQL_CPLX_AVG.
The thresholds used are provided in the script of the procedure itself:

```sql
Begin
  Select PARAM_NUM_VALUE Into L_AVERAGE_THRESHOLD
  From DSS_METRIC_PARAM_VALUES
  Where METRIC_ID = I_METRIC_ID And PARAM_INDEX = 1;

  Select PARAM_NUM_VALUE Into L_HIGH_THRESHOLD
  From DSS_METRIC_PARAM_VALUES
  Where METRIC_ID = I_METRIC_ID And PARAM_INDEX = 2;

  ERRORCODE := APM_SCOPE_SQL_CPLX_MIN_MAX(I_SNAPSHOT_ID, I_METRIC_PARENT_ID,
      I_METRIC_ID, I_METRIC_CHILD_ID, L_AVERAGE_THRESHOLD, L_HIGH_THRESHOLD);
  Return ERRORCODE;
End APM_SCOPE_SQL_CPLX_AVG;
```
The thresholds used are L_AVERAGE_THRESHOLD and L_HIGH_THRESHOLD.