This documentation is no longer maintained and may contain obsolete information.
Examples of critical application quality issues that have been addressed...
The CAST Engineering Dashboard is designed to address a wide variety of issues affecting large organizations. Some of the issues that have already been addressed by the CAST Engineering Dashboard are listed below:
Framework compliance - avoiding security flaws
For public-facing websites (for example, bank portals), security is a top priority. All applications must therefore be thoroughly checked before they are put into production.
However, given the amount of code and the importance of the checks, traditional tools are not up to the task and do not cover the languages involved - websites, like any modern mission-critical application, are built from multiple languages: HTML, JavaScript, JSP Custom Tags, Java, PL/SQL, COBOL, etc.
As a result, flaws are usually only discovered in production applications once fraud has been detected - bad for customers and bad for the business.
Multiple applications - avoiding data corruption
OO persistence frameworks are becoming more and more popular, mainly because of the development speed and time-to-market they offer. As a result, persistence frameworks are increasingly used in conjunction with existing databases and other applications.
However, this setup is a common source of data corruption: the persistence tool ensures data integrity in its "data objects" but not in the database, while the other applications maintain integrity in the database itself. Two applications can therefore look at the same database tables without seeing the same data, thereby causing data corruption.
Checks are therefore required to ensure that all data is correct and not corrupted.
Outsourcing control - quality and quantity assessment
Quality and quantity assessment is very important where outsourced applications are business-critical. Organizations often have only limited time and resources to check the nature and volume of the delivered code and to verify that the code developed by the outsourcer meets their internal development practices.
Given the amount of code and the importance of the checks, manual checks are not an option; tools are the only practical way to check application quality. Even then there are difficulties: the checks must cover the full code base, because relying on sampling techniques alone risks missing something critical. In addition, modern applications often consist of several tiers and are multi-language.
Scalability Issues
Sometimes successful business applications that are targeted at a small number of users suddenly become very popular and attract more users than initially expected. As a result, performance becomes extremely poor and cannot be addressed simply by upgrading hardware - typically these problems are related to scalability issues pertaining to the application itself (its design and its architecture rather than simple coding issues).
Because these applications are multi-language, organizations often have to rely on a collection of point solutions, whose results cannot be compared because the utilities are inconsistent with one another (inconsistent metric calculation rules, for example). In addition, architectural issues remain unaddressed because these utilities deal only with low-level coding practices (development tools) and cannot track transactions from end to end.
CAST Engineering Dashboard solutions...
The CAST Engineering Dashboard provides a solution to a wide variety of issues (see above for more information). These can be summarized as follows:
- Objective and accurate quantity assessment using various measurements
- Ability to address quality assessment through different approaches - impacts on business, technical rule compliance...
- Multi-language support - essential for modern applications using multi-language architecture
- Ability to track architectural, application-wide quality issues and to check transactions from end-to-end, a task that is beyond what point solutions can offer.
- Ability to create or customize checks and standard controls in a very simple manner, using standard SQL, so that in-house rules can be defined and verified (see the sketch after this list)
- Automatic analysis of the entire set of applications and automatic control over defined rules
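As an illustration of the SQL-based customization mentioned in the list above, here is a minimal sketch of what a custom check might look like. It assumes a hypothetical table and columns (hypothetical_artifacts, object_name, object_type, loc) that stand in for the actual CAST schema, and an invented in-house rule:

    -- Minimal sketch of an in-house check written in standard SQL.
    -- Table and column names are hypothetical placeholders, not the
    -- actual CAST schema.
    -- Hypothetical in-house rule: flag artifacts longer than 1000 lines of code.
    SELECT object_name,
           loc
    FROM   hypothetical_artifacts
    WHERE  object_type IN ('Function', 'Method', 'Procedure')
      AND  loc > 1000              -- threshold defined by the in-house rule
    ORDER  BY loc DESC;

A real check would query the tables populated by the CAST Analyzers; the point here is simply that a plain SQL query is enough to express an in-house rule.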
What is the CAST Engineering Dashboard?
The CAST Engineering Dashboard is part of the CAST Application Intelligence Platform and is a browser-based portal that provides detailed technical information about a company's set of applications.
How does it work?
- The CAST Engineering Dashboard is installed on a supported application server and interacts with the CAST Dashboard Service to extract the required information
- The information supplied by the portal is derived from specific Metrics and Quality Rules that are provided by CAST
- Users access the information via a standard web browser
Briefly, the following steps have to occur before data can be viewed with the CAST Engineering Dashboard:
- CAST Analyzers are run via the CAST Management Studio to populate each Analysis Service with information
- A snapshot is generated in the CAST Management Studio to run initial metrics against the information provided by the Analyzers. Auto-created modules and user-defined modules are created. CAST System Views are also auto-updated.
- The Dashboard Service is populated with information (supplied by the CAST Analyzers) from each Analysis Service
- Further aggregation metrics are computed within the Dashboard Service
- The application server interacts with the Dashboard Service to feed data to the browser (the CAST Engineering Dashboard).
Note also that you should ensure that the objects you want to view in the CAST Engineering Dashboard are part of a Module, i.e. one of the following:
- A module created manually in the CAST Management Studio (a User Defined Module)
- A module created automatically by the CAST Management Studio if no User Defined Modules exist when the snapshot is generated.
Interaction with the CAST Discovery Portal (CDP)
The CAST Engineering Platform is closely associated with the CAST Discovery Portal (please see CAST Discovery Portal - CDP for more information) and is, in fact, delivered in the same web archive (WAR). Some aspects of the CAST Discovery Portal can be accessed directly from the CAST Engineering Platform; however, the CAST Discovery Portal is also a fully functional standalone portal.
When are CAST Discovery Portal functionalities used?
The functionalities in the CAST Discovery Portal (in particular the Object Browser) are used when you are exploring the technical content of a particular snapshot (i.e. the objects within it). When looking at the details of a specific violation and the objects involved in it using, for example, the FRAME_PORTAL_DEVELOPMENT_VIEW - Development View or the FRAME_PORTAL_VIOLATION_VIEW - Violation View, a Technical Context link is displayed in the expandable Violating Object section. If you click this link, a second browser window is loaded, which transfers you into the Discovery Portal.
Clicking the top right "Home Page" link in the new browser window will still transfer you back to your CAST Engineering Dashboard home page, even though you are using the CAST Discovery Portal in the second browser window.
Using the top right "Logout" link will actually log you out of both the CAST Discovery Portal and the CAST Engineering Dashboard. This is because even though a second browser window is loaded, it is in fact still one user session.
For this behavior to work, you must have already defined a Discovery Portal site in the Site List Administration page. See Quick Access and also Deploy the CAST web applications for more information.
It is possible to deactivate the Technical Context link if required. Please see CAST-CED - Disabling Technical Context link to the CAST Discovery Portal.
Concepts and Notions
Systems / Applications / Modules
Throughout the CAST Engineering Dashboard, the terms "system", "application" and "module" are used to describe the corporate IT Portfolio of applications. In the CAST Engineering Dashboard, the Portfolio is divided into systems consisting of applications, which are themselves composed of modules. "Applications" and "Modules" are defined by the CAST Administrator using the CAST Management Studio:
- Systems: Displayed by default, cannot be modified
- Applications: Entities within a system, for example Invoicing and Debt Recovery applications within a Billing System
- Modules: Entities within an application. Typically, this is the way the application has been automatically divided up for analysis, i.e.:
- CAST Management Studio: one module = all Analysis Units in one Application
With regard to modules, using the CAST Management Studio you can define your own modules using analysis results. Thus:
- you may have one large project that you have had to divide up into several smaller elements to ease the analysis process.
- you could divide a single auto-created module into smaller modules or only include certain objects in the module (this is known as a Subset).
- a module you configure could include various different technologies (for example, a server technology analysis and a client technology analysis), or it could include only certain objects from the analysis results.
- you can include analysis results in a manually created module even if the same results appear in the auto-created module.
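Because custom checks are expressed in standard SQL (see above), it can help to picture this System / Application / Module hierarchy as a relational schema. The following sketch is purely illustrative - the table and column names are hypothetical and do not reflect the actual CAST schema:

    -- Illustrative sketch of the System / Application / Module hierarchy.
    -- All names are hypothetical; the real CAST schema is different.
    CREATE TABLE systems      (system_id        INT PRIMARY KEY,
                               system_name      VARCHAR(100));
    CREATE TABLE applications (application_id   INT PRIMARY KEY,
                               application_name VARCHAR(100),
                               system_id        INT REFERENCES systems(system_id));
    CREATE TABLE modules      (module_id        INT PRIMARY KEY,
                               module_name      VARCHAR(100),
                               application_id   INT REFERENCES applications(application_id));

    -- The Billing System example from above:
    INSERT INTO systems      VALUES (1, 'Billing System');
    INSERT INTO applications VALUES (10, 'Invoicing', 1), (11, 'Debt Recovery', 1);
    INSERT INTO modules      VALUES (100, 'Invoicing - server code', 10),
                                    (101, 'Invoicing - client code', 10);

The two Invoicing modules mirror the case described above, where one large application is divided into smaller modules for analysis.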
Snapshots
Throughout the document, the term "snapshot" is used. The CAST Engineering Platform displays information captured at a particular moment in time, i.e. a snapshot.
It does not display live information (i.e. information that is updated in real time). If a set of snapshots of the data has been made, it is then possible to view information from different snapshots. As a result, a complete picture can be built up showing the progress (or lack of progress) over time and the users are then in a position to carry out trend analysis. Within a given snapshot, the elements of the IT Portfolio are tagged with their version label so that it is possible to benchmark or monitor applications by selecting specific versions.
Artifact
The word "artifact" is used throughout the document as an umbrella term for low-level programming elements from different technologies. Functions, Methods, Subs, Events, Triggers, Procedures and Programs have different names in different technologies; in the CAST Engineering Platform, they are all counted and listed as artifacts. As an example, the metric "Number of Artifacts" has been used successfully by CAST as an alternative technical size measure since 1996.
You can see a list of objects that are currently defined as artifacts, here: List of Artifacts - part of the Dashboard administration.
Metrics / Health Factors / Rule Compliance / Technical Criteria / Distributions / Quality Rules
Throughout this document, the terms "Metrics / Health Factors / Rule Compliance / Technical Criteria / Distributions / Quality Rules" designate the following computations:
- the term "metric" designates in the quality Assessment Model, the assessment of an object through a grade and a status based on a physical measure, based on the result of a "Quality Rule", or based on the result of a "distribution" using thresholds
- the term "Quality Rule" designates a computation that measures the compliance to a given rule; each "Quality Rule" supports its elementary "metric", i.e., for each configured "Quality Rule", the computing of the corresponding elementary compliance ratio, as a percentage, is automatically and accordingly configured
- the term "distribution" designates a 4-category distribution of objects for a given physical measure; each "distribution" supports its elementary "metric", i.e., for each configured "distribution", the computing of the corresponding category share, as a percentage, is automatically and accordingly configured (i.e. it is the split or scattering of object from a module into four groups or "categories" according to a measured characteristic of the objects): "Distribution" example:- measure the "number of LOC" characteristic of all objects of a module- split objects into four groups according to this measured value: group 1 contains objects with a value greater than X, group 2 contains objects with a value between X and Y, etc. In other words, a distribution is simply the split action to create groups of objects according to one of their measurable attribute while a distribution-based metric is the use of a distribution to say that a module is good or bad.
In the sizing Assessment Model, the physical measure of an object (e.g.: the number of attributes):
- the term "technical criterion" designates an aggregation of the results of multiple "diagnostics", "distributions", and/or quality "metrics" to generate an intermediate-level assessment through a grade and a status
- the terms "health factor" and "rule compliance" designate an aggregation of the results of multiple "technical criteria" to generate highest-level assessments through a grade and a status.
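To make the "number of LOC" distribution described above concrete, here is a minimal SQL sketch of the four-category split. The table name, column names, and thresholds are all hypothetical illustrations, not the actual CAST configuration:

    -- Sketch of a four-category "distribution" on the number of LOC.
    -- Table, columns, and thresholds are hypothetical.
    SELECT category,
           COUNT(*)                                  AS nb_objects,
           100.0 * COUNT(*) / SUM(COUNT(*)) OVER ()  AS category_share_pct
    FROM (
        SELECT CASE
                 WHEN loc > 500 THEN 'category 1 (very large)'  -- value greater than X
                 WHEN loc > 200 THEN 'category 2 (large)'       -- value between X and Y
                 WHEN loc >  50 THEN 'category 3 (average)'
                 ELSE                'category 4 (small)'
               END AS category
        FROM   hypothetical_artifacts
        WHERE  module_id = 42        -- restrict the split to one module
    ) AS split
    GROUP BY category;

Each category share computed this way is exactly the elementary "metric" that a configured "distribution" backs.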
Note that the term "metric" is also used to designate all types of computations in a generic way. All these computations are organized in an Assessment Model (see below).
Assessment Model
The term "Assessment Model" are used throughout this document and the CAST Management Studio to indicate how data (i.e., "Metrics / Health Factors / Rule Compliance / Technical Criteria / Distributions / Diagnostics") that are to be computed is organised. In essence, the "Assessment Model" is a tree structure containing branches and leaves (you can manage the "Assessment Model" in the CAST Management Studio).
The Assessment Model is composed of two sub-models:
Sizing model
Organized into four branches:
- Technical Size
- Functional Weight
- Business Value (optional)
- Productivity (optional)
Both Technical Size information and Functional Weight information are automatically computed based on physical measures of the application source code: these are known as quantity "metrics".
Productivity and Business Value information is optionally imported from an XML file at the Module level (this is done using the CAST Management Studio - see the Background Facts and Business Value Metric upload in the Dashboard administration) and then automatically aggregated by the framework: these are known as "background facts".
Quality model
Composed of three layers:
- Health Factors and Rule Compliance layer
- Technical Criteria layer
- Quality Rules layer
Organized in three branches:
- Business Criteria containing the "health factors"
- Rule Compliance containing the "rule compliance"
- Metric Repository containing all the "technical criteria", whether or not they are used in "health factors" and "rule compliance"
Quality Rules can contribute to different "technical criteria" with different weights, and "technical criteria" can contribute to different "health factors" and "rule compliance" with different weights. The "Assessment Model" controls both the way this information is computed and the way it is displayed.
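To illustrate the weighting just described, here is a hedged SQL sketch of how a "technical criterion" grade could be derived as the weighted average of its contributing "quality rule" grades. The schema is hypothetical and stands in for the actual Dashboard Service tables:

    -- Sketch of weighted aggregation: quality-rule grades roll up into
    -- technical-criterion grades.  Hypothetical schema, for illustration only.
    SELECT c.technical_criterion_id,
           SUM(g.grade * c.weight) / SUM(c.weight) AS criterion_grade
    FROM   hypothetical_rule_grades   g   -- one grade per quality rule per module
    JOIN   hypothetical_contributions c   -- rule-to-criterion links, with weights
           ON c.quality_rule_id = g.quality_rule_id
    WHERE  g.module_id = 42
    GROUP  BY c.technical_criterion_id;

The same mechanism applies one level up, where "technical criteria" roll up into "health factors" and "rule compliance" using their own weights.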
Metric Grade / Status
Each quality "Metric", "Technical Criterion", "Health Factor", and "Rule Compliance" leads to an assessment of an ensemble of source code through a "Grade" - a decimal value ranging from '1' to '4', where higher is better - which in turn leads to a "Status" with one of the following values: 'Very High Risk', 'High Risk', 'Moderate', and 'Low Risk'.
The ensemble of source code is determined by the context that is assessed: the source code of a given "Module", the source code of all "Modules" of a given "Application", the source code of all "Modules" assigned to a given "Developer", etc. The "Grade" is based on the following aggregation / computation mechanisms:
- For each context larger than a single "Module", the "Grade" of each quality "Metric", "Technical Criterion", "Health Factor", and "Rule Compliance" is simply the average of that same quality "Metric", "Technical Criterion", "Health Factor", or "Rule Compliance" across the composing "Modules"
- For each "Module" context, the "Grade" of each "Technical Criterion", "Health Factor", and "Rule Compliance" is simply the weighted average of its contributors' values
- For each "Module" context, the "Grade" of each quality "Metric" is the result of mapping the "Quality Rule" result, "Distribution" result, or physical measure to a decimal value ranging from '1' to '4' using thresholds:
For a "Diagnostic"-based "Metric": thresholds on the percentage of compliant objects. Default thresholds being: more than 99% compliance required to reach a '4.00' plateau which would lead to a 'Low Risk' status, more than 90% compliance required to reach '3.00' which would lead to an 'Moderate' status, more than 70% compliance required to reach '2.00' which would lead to an 'High Risk' status, more than 30% compliance required to leave a '1.00' plateau.
For a "Distribution"-based "Metric": similar thresholds are applied on each of the distribution category (this the use of a distribution to compute a grade and a status for the module) "Distribution-based metric" example: use the distribution based on the measure of the "number of LOC" characteristic of all objects of a module- to give the module a grade between 1 and 4 and a status; if group 1 contains more than 98% of all the measured objects of the module, the module grade is '1' and its status is "Very High Risk"; if group 2 contains less than 30% of all the measured objects of the module, the module grade is '4' and its status is "Low Risk", etc. In other words, a distribution is simply the split action to create groups of objects according to one of their measurable attribute while a distribution-based metric is the use of a distribution to say that a module is good or bad.
For a pure measure-based "Metric": similar thresholds are applied to the physical measure itself.
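As a worked illustration of the default "Diagnostic"-based thresholds listed above, the following sketch maps a compliance ratio to its plateau grade and status. It is deliberately simplified - it shows only the plateau values and ignores how grades behave between two thresholds - and the table and column names are hypothetical:

    -- Simplified sketch: map a compliance ratio (0-1) to its plateau grade.
    -- Hypothetical schema; grades between two plateaus are omitted here.
    SELECT module_id,
           compliance_ratio,
           CASE
             WHEN compliance_ratio > 0.99 THEN 4.0   -- 'Low Risk'
             WHEN compliance_ratio > 0.90 THEN 3.0   -- 'Moderate'
             WHEN compliance_ratio > 0.70 THEN 2.0   -- 'High Risk'
             ELSE                              1.0   -- 'Very High Risk'
           END AS plateau_grade
    FROM   hypothetical_rule_compliance;

Note that the '1.00' plateau strictly applies below 30% compliance; between 30% and 70% the grade leaves the plateau without yet reaching '2.00', a nuance this simplified sketch does not capture.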
Complexity ratings
The CAST Quality Model contains various metrics that are classed as "Complexity metrics", i.e. metrics that measure the complexity of artifacts in an application. See How Complexity metrics are calculated by CAST for more information.