Purpose (problem description)
When connecting to the Dashboard "Size Baseline" (FRAME_PORTAL_AFP_VIEW) view, you observe that the number of Function Points in the dashboard differs from the number of Function Points shown in TCC.

Observed in CAST AIP
Release: 8.3.x (tick)

Observed on RDBMS
RDBMS: CSS (tick)
Step by Step scenario
  1. Run an analysis.
  2. Do some calibration in TCC.
  3. Generate snapshot.
  4. Go to the dashboard "Size Baseline" (FRAME_PORTAL_AFP_VIEW) view; you observe that the number of Function Points in the dashboard is greater than the number of Function Points shown in TCC.
Action Plan
  1. Relevant  Inputs.
  2. Ensure that you are using the triplet as it was created by ServMan.exe; in other words, check that you have not created a new central database and linked it to an already existing local and management database. Note that this is a bad practice and its consequences are irreversible. Refer to the section Ensure that you are using the triplet as it was created by ServMan.exe to check whether you are in this case and to apply the partial remediation. If this is not your case, go to the next check.
  3. Ensure that the snapshot has been generated after performing the calibration in TCC.
  4. If the snapshot was generated after the calibration in TCC, check whether the issue is due to a corruption in the KB.
  5. Check the Known bugs.
  6. If you cannot find the explanation from the steps above, please contact CAST Technical Support with the Relevant Inputs.

Relevant Inputs

  1.  Sherlock export with the following options: Export Logs (feature checked by default), Export Configuration files (feature checked by default), Export CAST Bases.

  2.  Screenshots from TCC results and from Dashboard (with URL visible) showing the differences of results.  

Ensure that you are using the triplet as it was created by ServMan.exe

To check this, please run the following query on the local, central and mngt schemas:

select revision_date from sys_package_history

This should return results such as the following (on each of the local, central and mngt schemas):

"2016-11-23 17:08:28.646"
"2016-11-23 17:08:28.646"
"2016-11-23 17:08:29.677"
"2016-11-23 17:08:36.927"
"2016-11-23 17:11:04.879"
"2016-11-23 17:11:05.754"
"2016-11-23 17:11:05.895"
"2016-11-23 17:11:05.895"
"2016-11-23 17:11:05.895"
"2016-11-23 17:11:05.895"
"2016-11-23 17:11:10.723"
"2016-11-23 17:12:00.832"
"2016-11-23 17:12:06.191"
"2016-11-23 17:12:06.191"
"2016-11-23 17:12:07.472"
"2016-11-23 17:12:07.472"
"2016-11-23 17:12:07.488"
"2016-11-23 17:12:07.488"
"2016-11-23 17:12:58.753"

Compare the revision dates between the local, central and mngt schemas. If the revision dates are not the same across the 3 schemas, it means you are not using the triplet as it was created by ServMan.exe. This is a bad practice that creates inconsistencies between the module IDs stored in the management base and those in the central schema or knowledge base; as a result, modules cannot be correctly located during synchronization. The consequences of this bad practice are irreversible. Please refer to the page Urgent Notice - CAST AIP - Risk and impacts of synchronization when restoring only knowledge base or central schema instead of the entire triplet - 4th November 2016, where a partial remediation is proposed to help you avoid the impacts you would otherwise come across in future snapshot generations.
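The comparison of the three result sets can be sketched as follows. This is a minimal Python illustration, not a CAST tool: the timestamp lists are hypothetical, and in practice you would paste in the output of the sys_package_history query run on each schema.

```python
# Compare the revision_date history returned by
#   select revision_date from sys_package_history
# on the local, central and mngt schemas.
# A triplet created together by ServMan.exe yields the same history on all three.

def same_triplet(local, central, mngt):
    """Return True if all three schemas share the same revision history."""
    return local == central == mngt

# Hypothetical query outputs (lists of timestamp strings):
local   = ["2016-11-23 17:08:28.646", "2016-11-23 17:08:29.677"]
central = ["2016-11-23 17:08:28.646", "2016-11-23 17:08:29.677"]
mngt    = ["2016-11-23 17:08:28.646", "2016-11-23 17:12:58.753"]  # restored separately

print(same_triplet(local, central, local))  # True  -> consistent triplet
print(same_triplet(local, central, mngt))   # False -> mngt is not from the original triplet
```

Any mismatch in the histories means at least one schema was restored or created outside the original triplet.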

Ensure that the snapshot has been generated after performing the calibration in TCC

Compare the results between the Enhancement Node of TCC:


and the view "FRAME_PORTAL_EFP_VIEW" from CED:

If the results are identical, then the AFP computation is not inconsistent between CED and TCC, but between the knowledge schema and the central schema.

This may be due to the fact that you modified the TCC configuration without generating a snapshot afterwards. When you perform calibration in TCC (deleting Data Functions, defining entry points, merging transactions, etc.), the results are stored in the KB when you compute and save them in TCC. You then need to generate a snapshot (you may skip the analysis) for the results to be reflected in the Dashboard.

If you are not sure whether the TCC calibration was done before or after the snapshot generation, you may generate a new snapshot, skipping the analysis, and then check whether the results on the dashboard are synchronized with the results in TCC.

Check if the issue is not due to a corruption in the KB

If you have used FP_filter_datafunctions and FP_filter_transactions to automatically merge some data functions or transactions, this may have introduced corruption in the dss_datafunction or dss_transaction tables, leading to inconsistencies between the results shown in TCC and the ones shown in the dashboard.

Corruption of dss_datafunction table

Corruption #1 - Data function in dss_datafunction and not in dss_keysextra

How to check whether you have such a corruption? 

You can check whether you have such a corruption by running the following query.

select *
from dss_datafunction  df
left join dss_keysextra ke
  on ke.object_id = df.object_id
where ke.object_id is null
order by object_name

If this query returns any rows, the corruption is present.
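The LEFT JOIN / IS NULL pattern above is an anti-join: it keeps the data functions that have no counterpart in dss_keysextra. A minimal Python sketch of the same logic, with hypothetical rows:

```python
def missing_keysextra(datafunctions, keysextra_ids):
    """Anti-join: rows of dss_datafunction with no counterpart in dss_keysextra
    (mirrors the LEFT JOIN ... WHERE ke.object_id IS NULL query above)."""
    return [df for df in datafunctions if df["object_id"] not in keysextra_ids]

# Hypothetical rows: data function 2 has no dss_keysextra entry.
datafunctions = [{"object_id": 1}, {"object_id": 2}, {"object_id": 3}]
keysextra_ids = {1, 3}
orphans = missing_keysextra(datafunctions, keysextra_ids)  # [{"object_id": 2}]
```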

How to fix it?

No fix for this corruption has been delivered. Please contact CAST Technical Support with the Relevant Inputs and the results of the query.

 

Corruption #2 - Data function marked as not merged but with a mergeroot_id

How to check whether you have such a corruption? 

You can check whether you have such a corruption by running the following query.

select *
from dss_datafunction  df
join dss_keysextra ke
  on ke.object_id = df.object_id
where df.cal_flags & 4 =0 
and df.cal_mergeroot_id !=0
order by object_name

If this query returns any rows, the corruption is present.

How to fix it?

To fix this corruption, please perform the following steps:

  1. Make a copy of your schemas.
  2. Run the following update on your KB:

    update dss_datafunction  df 
    set cal_flags = df.cal_flags | 4 
    from dss_keysextra ke 
    where df.cal_flags & 4 =0 
    and df.cal_mergeroot_id !=0 
    and ke.object_id = df.object_id
  3. Generate a new snapshot (you can skip analysis)
  4. Dashboard and TCC count of Data Function FP should be in sync.
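The bit-flag logic used by the detection query and the UPDATE above can be sketched in isolation. This is an illustration only: the bit value 4 is taken from the queries above (assumed to mark a merged item), and the sample row is hypothetical.

```python
MERGED_FLAG = 4  # bit assumed to mark a merged item, as in the queries above

def is_corrupt(row):
    """Corruption #2: merged bit not set, yet a merge root is recorded."""
    return (row["cal_flags"] & MERGED_FLAG) == 0 and row["cal_mergeroot_id"] != 0

def repair(row):
    """Mirror of the UPDATE: set the merged bit so flags and mergeroot agree."""
    if is_corrupt(row):
        row["cal_flags"] |= MERGED_FLAG
    return row

row = {"object_id": 101, "cal_flags": 1, "cal_mergeroot_id": 42}  # hypothetical row
assert is_corrupt(row)       # 1 & 4 == 0 but mergeroot_id != 0
repair(row)
assert not is_corrupt(row)   # cal_flags is now 1 | 4 == 5
```

The SQL UPDATE applies exactly this repair (OR-ing the merged bit into cal_flags) to every row the detection query flags.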

Since this is a corruption, no fix has been made in the product.
 

Corruption #3 - Data function marked as merged but without a valid mergeroot_id

How to check whether you have such a corruption? 

You can check whether you have such a corruption by running the following query.

select *
from dss_datafunction  df
join dss_keysextra ke
  on ke.object_id = df.object_id
where 
     df.cal_flags & 4 =4
 and df.cal_mergeroot_id not in (select object_id from dss_datafunction)
order by object_name

If this query returns any rows, the corruption is present.
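The NOT IN subquery above looks for dangling merge roots: merged data functions whose cal_mergeroot_id does not reference any existing data function. A minimal Python sketch with hypothetical rows:

```python
def dangling_mergeroots(datafunctions):
    """Corruption #3: merged bit (4) set, but cal_mergeroot_id does not
    reference any existing data function (mirrors the NOT IN subquery above)."""
    ids = {df["object_id"] for df in datafunctions}
    return [df for df in datafunctions
            if df["cal_flags"] & 4 == 4 and df["cal_mergeroot_id"] not in ids]

rows = [
    {"object_id": 1, "cal_flags": 4, "cal_mergeroot_id": 2},    # valid: 2 exists
    {"object_id": 2, "cal_flags": 0, "cal_mergeroot_id": 0},    # not merged
    {"object_id": 3, "cal_flags": 5, "cal_mergeroot_id": 999},  # dangling root
]
bad = dangling_mergeroots(rows)  # only object_id 3 is flagged
```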

How to fix it?

No fix for this corruption has been delivered. Please contact CAST Technical Support with the Relevant Inputs and the results of the query.

 

Corruption of dss_transaction table

Corruption #1 - Transaction in dss_transaction but not in dss_keysextra

How to check whether you have such a corruption? 

You can check whether you have such a corruption by running the following query.

select *
from dss_transaction  df
left join dss_keysextra ke  
on ke.object_id = df.object_id
where ke.object_id is null
order by object_name

If this query returns any rows, the corruption is present.

How to fix it?

No fix for this corruption has been delivered. Please contact CAST Technical Support with the Relevant Inputs and the results of the query.
 

Corruption #2 - Transaction marked as not merged but with a mergeroot_id

How to check whether you have such a corruption? 

You can check whether you have such a corruption by running the following query.

select *
from dss_transaction  df
join dss_keysextra ke  
on ke.object_id = df.object_id
where df.cal_flags & 4 =0 
and df.cal_mergeroot_id !=0
order by object_name

If this query returns any rows, the corruption is present.

How to fix it?

To fix this corruption, please perform the following steps:
  1. Make a copy of your schemas.
  2. Run the following update on your KB:

    update dss_transaction  df 
    set cal_flags = df.cal_flags | 4 
    from dss_keysextra ke 
    where df.cal_flags & 4 =0 
    and df.cal_mergeroot_id !=0 
    and ke.object_id = df.object_id
  3. Generate a new snapshot (you can skip analysis).

  4. Dashboard and TCC count of Transaction FP should be in sync.

Since this is a corruption, no fix has been made in the product.

  

Corruption #3 - Transaction marked as merged but without a valid mergeroot_id

How to check whether you have such a corruption? 

You can check whether you have such a corruption by running the following query.

select *
from dss_transaction  df
join dss_keysextra ke  
on ke.object_id = df.object_id
where df.cal_flags & 4 = 4
and df.cal_mergeroot_id not in (select object_id from dss_transaction)
order by object_name

If this query returns any rows, the corruption is present.

How to fix it?

No fix for this corruption has been delivered. Please contact CAST Technical Support with the Relevant Inputs and the results of the query.

Known bugs
Case 1: If your project contains analysis unit results for the ASP analysis

There is a bug when building the analysis unit results for the ASP analysis: the database subsets belong to the analysis unit results, and some Oracle tables belong to several Oracle projects. The TCC results are correct and the dashboard values are incorrect. This issue is fixed in CAST AIP 8.0.5 and CAST AIP 7.3.5.FP06. No workaround can be delivered.

Notes/comments

Ticket # 3327, 4205, 7214

Related Pages