For developers who use CAST as their main interface for remediation work, the CAST Engineering Dashboard (CED) provides a rich interface for navigating and prioritizing developer activities. The CED assembles in one place all the information a developer needs: the list of violations, the criticality and weight of each violation, the estimated technical debt (cost to remediate), the best location to remediate, and remediation advice, with a link to further advice on the CAST website:

Examples of critical application quality issues addressed...

The CAST Engineering Dashboard is designed to address a wide variety of issues affecting large organizations. Some of the issues that have already been addressed by the CAST Engineering Dashboard are listed below:

Framework compliance - avoiding security flaws

For public-facing websites (for example, bank portals), security is the number one priority; all applications have to be thoroughly checked before they are put into production.

However, given the volume of code and the importance of the checks, traditional tools are not up to the task and do not cover all the languages involved. Websites, like any modern mission-critical application, are built with multiple languages: HTML, JavaScript, JSP Custom Tags, Java, PL/SQL, COBOL, etc.

As a result, faults are usually only discovered in the applications once fraud is detected, which is good for neither customers nor businesses.

Multi-Applications - avoiding data corruption

OO persistence frameworks are becoming more and more popular, mainly for time-to-market and speed of development. As a result, persistence frameworks are used in conjunction with existing databases and other applications on an increasingly frequent basis.

However, data corruption is common: the persistence tool ensures data integrity in its "data objects" but not in the database, while the other applications maintain integrity in the database itself. As a result, two applications can look at the same database tables without seeing the same data, thereby causing data corruption.

Checks are therefore required to ensure that all data is correct and not corrupted.

Outsourcing control - quality and quantity assessment

Quality and quantity assessment is especially important where outsourced applications are business-critical. Organizations often have only limited time and resources to check the nature and volume of the delivered code and to verify that code developed by the outsourcer meets their internal development practices.

Given the amount of code and the importance of the checks, manual checks are not an option; the only way to check the quality of the applications is to use tools. However, this approach has its own difficulties: testers need a full understanding of the code, and given its size, relying on sampling techniques alone risks missing something critical. In addition, modern applications often consist of several tiers and are multi-language.

Scalability Issues

Sometimes successful business applications targeted at a small number of users suddenly become very popular and attract more users than initially expected. As a result, performance becomes extremely poor and cannot be addressed simply by upgrading hardware: these problems typically stem from scalability issues in the application itself, in its design and architecture rather than in simple coding practices.

Because these applications are multi-language, and because point solutions are often the only tools available for dealing with architectural issues, organizations cannot compare results: the utilities used are inconsistent (for example, they apply inconsistent metric calculation rules). In addition, architectural issues cannot be addressed because these utilities cannot track transactions end-to-end and deal only with low-level coding practices (development tools).

CAST Engineering Dashboard solutions...

The CAST Engineering Dashboard provides a solution to a wide variety of issues (see above for more information). These can be summarized as follows:

What is the CAST Engineering Dashboard?

The CAST Engineering Dashboard is part of the CAST Application Intelligence Platform and is a browser-based portal that provides detailed technical information about a company's set of applications.

How does it work?

Note that a specific license is required to install the CAST Engineering Dashboard.

Briefly, the following steps have to occur before data can be viewed with the CAST Engineering Dashboard:

Note also that you should ensure that objects you want to view in the CAST Engineering Dashboard are part of a Module, i.e. either:

  • A module created manually in the CAST Management Studio (a User Defined Module)
  • A module created automatically by the CAST Management Studio if no User Defined Modules exist when the snapshot is generated.

Interaction with the CAST Discovery Portal (CDP)

The CAST Engineering Platform is closely associated with the CAST Discovery Portal (see CAST Discovery Portal - CDP for more information) and is, in fact, delivered in the same web archive (WAR). Some aspects of the CAST Discovery Portal can be accessed directly from the CAST Engineering Platform; however, the CAST Discovery Portal is also a fully functional standalone portal.

When are CAST Discovery Portal functionalities used?

The functionalities of the CAST Discovery Portal (in particular the Object Browser) are used when you explore the technical content of a particular snapshot (i.e. the objects within it). When you look at the details of a specific violation and the objects involved in it using, for example, the FRAME_PORTAL_DEVELOPMENT_VIEW - Development View or the FRAME_PORTAL_VIOLATION_VIEW - Violation View, a Technical Context link is displayed in the expandable Violating Object section. Clicking this link loads a second browser window, which transfers you into the Discovery Portal:

Clicking the top-right "Home Page" link in the new browser window still transfers you back to your CAST Engineering Dashboard home page, even though you are using the CAST Discovery Portal in that second browser window.

Using the top-right "Logout" link logs you out of both the CAST Discovery Portal and the CAST Engineering Dashboard: even though a second browser window is loaded, both windows share a single user session.

Concepts and Notions

Systems / Applications / Modules

Throughout the CAST Engineering Dashboard, the terms "system", "application" and "module" are used to describe the corporate IT portfolio of applications. In the CAST Engineering Dashboard, the portfolio is divided into systems consisting of applications, which are themselves composed of modules. "Applications" and "Modules" are defined by the CAST Administrator using the CAST Management Studio:

With regard to modules, the CAST Management Studio lets you define your own modules based on analysis results.


Snapshot

Throughout this document, the term "snapshot" is used. The CAST Engineering Platform displays information captured at a particular moment in time, i.e. a snapshot:

It does not display live information (i.e. information that is updated in real time). If a set of snapshots has been taken, you can view information from different snapshots and build a complete picture of progress (or lack of progress) over time, putting users in a position to carry out trend analysis. Within a given snapshot, the elements of the IT portfolio are tagged with their version label, so applications can be benchmarked or monitored by selecting specific versions.


Artifact

The word "artifact" is used throughout this document to group low-level programming elements from different technologies under the same umbrella. Functions, Methods, Subs, Events, Triggers, Procedures, and Programs have different names in different technologies; in the CAST Engineering Platform, they are all counted and listed as artifacts. For example, the metric "Number of Artifacts" has been used successfully by CAST as an alternative technical size measure since 1996.

You can see a list of objects that are currently defined as artifacts, here: List of Artifacts - part of the Portal Administration.

Metrics / Health Factors / Rule Compliance / Technical Criteria / Distributions / Quality Rules

Throughout this document, the terms "Metrics / Health Factors / Rule Compliance / Technical Criteria / Distributions / Quality Rules" designate the following computations:

In the sizing Assessment Model, a metric is the physical measure of an object (e.g. the number of attributes):

Note that the term "metric" is also used to designate all types of computations in a generic way. All these computations are organized in an Assessment Model (see below).

Assessment Model

The term "Assessment Model" is used throughout this document and the CAST Management Studio to indicate how the data to be computed (i.e. "Metrics / Health Factors / Rule Compliance / Technical Criteria / Distributions / Diagnostics") is organized. In essence, the "Assessment Model" is a tree structure containing branches and leaves (you can manage the "Assessment Model" in the CAST Management Studio).

The Assessment Model is composed of two sub-models:

Sizing model

Organized into four branches:

Both Technical Size information and Functional Weight information are automatically computed based on physical measures of the application source code: these are known as quantity "metrics".

Productivity and Business Value information is optionally imported from an XML file at the Module-level (this is done using the CAST Management Studio application - see the Background Facts and Business Value Metric upload in the Portal Administration) and then automatically aggregated by the framework: these are known as "background facts".

Quality model

Composed of three layers:

Organized in three branches:

Quality Rules can contribute to different "technical criteria" with different weights, and "technical criteria" can in turn contribute to different "health factors" and "rule compliance" measures with different weights. The "Assessment Model" controls both the way CAST AD information is computed and the way it is displayed.
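The weighted roll-up described above can be sketched as follows. This is a minimal illustration, not CAST's actual aggregation formula: the rule names, weights, and the use of a plain weighted average are assumptions for the sake of the example.

```python
# Hypothetical sketch of the weighted roll-up: quality rules contribute
# to technical criteria with weights, and technical criteria contribute
# to health factors with weights. All names and weights are illustrative.

def weighted_grade(contributions):
    """Aggregate (grade, weight) pairs into a weighted-average grade."""
    total_weight = sum(weight for _, weight in contributions)
    return sum(grade * weight for grade, weight in contributions) / total_weight

# Grades of individual quality rules (illustrative values in [1.0, 4.0]).
rule_grades = {"avoid_sql_injection": 2.0, "avoid_empty_catch": 3.5}

# A technical criterion aggregates rule grades, weighting each rule.
secure_coding = weighted_grade([(rule_grades["avoid_sql_injection"], 8),
                                (rule_grades["avoid_empty_catch"], 2)])

# A health factor aggregates technical-criterion grades the same way.
security = weighted_grade([(secure_coding, 9)])
```

In this sketch, `secure_coding` comes out at 2.3: the heavily weighted SQL-injection rule dominates the lighter empty-catch rule, which is the point of per-rule weights.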

Metric Grade / Status

Each quality "Metric", "Technical Criterion", "Health Factor", and "Rule Compliance" assesses an ensemble of source code through a "Grade": a decimal value ranging from 1 to 4, where higher is better. The Grade in turn maps to a "Status" with one of the following values: 'Very High Risk', 'High Risk', 'Moderate', or 'Low Risk'.

The ensemble of source code is determined by the context that is assessed: the source code of a given "Module", the source code of all "Modules" of a given "Application", the source code of all "Modules" assigned to a given "Developer", etc. The "Grade" is based on the following aggregation / computation mechanisms:

For a "Diagnostic"-based "Metric": thresholds on the percentage of compliant objects. The default thresholds are: more than 99% compliance is required to reach the '4.00' plateau, which leads to a 'Low Risk' status; more than 90% compliance is required to reach '3.00', which leads to a 'Moderate' status; more than 70% compliance is required to reach '2.00', which leads to a 'High Risk' status; and more than 30% compliance is required to leave the '1.00' plateau.

For a "Distribution"-based "Metric": similar thresholds are applied to each distribution category (i.e. a distribution is used to compute a grade and a status for the module). For example, take the distribution based on the measured "number of LOC" of all objects of a module and use it to give the module a grade between 1 and 4 and a status: if group 1 contains more than 98% of all the measured objects of the module, the module grade is '1' and its status is 'Very High Risk'; if group 2 contains less than 30% of all the measured objects, the module grade is '4' and its status is 'Low Risk'; and so on. In other words, a distribution simply splits objects into groups according to one of their measurable attributes, while a distribution-based metric uses that distribution to say whether a module is good or bad.

For a pure measure-based "Metric": similar thresholds are applied to the physical measure itself.
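The diagnostic-based mechanism above can be sketched in code. The default thresholds (30/70/90/99% compliance for the 1.00 to 4.00 plateaus) come from the text; the linear interpolation between plateaus and the exact grade boundaries of each status are assumptions for illustration, not CAST's published formula.

```python
# Hypothetical sketch of a diagnostic-based grade computation.
# Thresholds are the defaults quoted above; interpolation between
# plateaus and status boundaries are illustrative assumptions.

def compliance_grade(compliance_pct: float) -> float:
    """Map a compliance percentage (0-100) to a grade in [1.0, 4.0]."""
    # (compliance threshold, grade plateau) pairs from the defaults above.
    thresholds = [(30.0, 1.0), (70.0, 2.0), (90.0, 3.0), (99.0, 4.0)]
    if compliance_pct <= thresholds[0][0]:
        return 1.0
    if compliance_pct >= thresholds[-1][0]:
        return 4.0
    for (lo_pct, lo_grade), (hi_pct, hi_grade) in zip(thresholds, thresholds[1:]):
        if compliance_pct <= hi_pct:
            # Linear interpolation between plateaus (assumption).
            frac = (compliance_pct - lo_pct) / (hi_pct - lo_pct)
            return lo_grade + frac * (hi_grade - lo_grade)
    return 4.0

def status_of(grade: float) -> str:
    """Translate a grade into the four statuses named above (boundaries assumed)."""
    if grade >= 4.0:
        return "Low Risk"
    if grade >= 3.0:
        return "Moderate"
    if grade >= 2.0:
        return "High Risk"
    return "Very High Risk"
```

For instance, a rule with 80% compliant objects would interpolate to a grade of 2.5, landing in the 'High Risk' band under these assumed boundaries.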

Complexity ratings

The CAST Quality Model contains various metrics that are classed as "Complexity metrics", i.e. metrics that measure the complexity of artifacts in an application. See How Complexity metrics are calculated by CAST for more information.