This view is designed to deliver the information about the modules under the responsibility of the entity you are looking at (i.e., a node of the Organization tree: Organization, Team, Developer, etc.) that is necessary to check their overall quality status, the trends of their overall quality, their latest status for each of the Health Factors, and the evolution of violations of Quality Rule-based metrics whose "critical contribution" option has been set:
It displays information similar to that shown in the FRAME_PORTAL_PORTFOLIO_VIEW - Assessment - Portfolio Level.
Left hand panel
Please see the section Left hand panel in Using the CAST Dashboard for more information about this.
Main window panels
Ensure that you select the entity (organization, team, developer) you require using the drop-down selector in the top left-hand corner.
Four main panels are available:
|Current Overall Status||Maps the applications of the selected entity (organization, team, developer) according to their TQI (Technical Quality Index) on the horizontal axis, their Technical Debt per kLOC (please see the section Left hand panel in Using the CAST Dashboard for more information about Technical Debt) on the vertical axis, and their functional weight as the bubble size. Hyperlinks within the list of Applications lead to the FRAME_PORTAL_RISK_VIEW - Assessment - Application Level (for the Application) and to the FRAME_PORTAL_INVESTIGATION_VIEW - Investigation - Quality Model Drilldown (for the kLOC). This panel is designed to help identify abnormal situations (e.g., mission-critical applications with a poor TQI).|
|Portfolio History||Displays the evolution (over successive snapshots) of the TQI (Technical Quality Index) of the selected entity (organization, team, developer), plotted as lines against the entity's volume (measured in kLOCs). It helps detect trends in overall quality and can also help detect the impact of a large addition or deletion of code on quality.|
|Bottom two panels||Relying solely on high-level indicators to monitor the health of applications is not enough. Software development is also a question of details, and a single violation of a critical performance rule can have a severe impact when it occurs in production. An aggregated model can only offer what it is meant for: an effective summary of quality. Due to the sheer volume of information it summarizes, it cannot surface the elementary changes that can jeopardize the application's behavior. Therefore, alongside the aggregated quality model, there is a need for a way to monitor a small number of critical rules and ensure that few (or no) violations of these rules occur. Hence the following two panels:|