Application onboarding without Fast Scan - Step-by-step onboarding - Review analysis configuration and execution


The source code delivery process streamlines analysis set-up by suggesting a default analysis configuration when the delivery is validated and accepted. In the vast majority of cases, this default configuration will produce coherent analysis results; however, CAST recommends reviewing the default configuration before running the analysis. This review process contains several steps, each listed below.

  • Review technology specific settings as well as Analysis Units
  • Review dependency configuration
  • Review module configuration
  • Review Assessment Model

Review technology specific settings

Move to the Console screen if you are not already there:

Locate the Application and click it to access the Application - Overview page:

Now click the Config option in the left-hand panel. The Application - Config - Analysis screen is displayed, showing a list of all technologies detected in the Application and for which analysis configuration settings exist.

First, ensure that all the technologies you are expecting are listed on this screen. If any are missing, there is likely an issue with the source code delivery, and the delivery should be rejected and redelivered.

Universal Technology is used to represent technologies that are handled through CAST extensions. Typically, you'll find the following technologies represented through Universal:

  • SQL (delivered as .SQL files)
  • Python
  • HTML5/JavaScript (for web technologies such as AngularJS, Node.js, jQuery etc.)
  • iOS Xcode (Objective-C/Swift etc.)
  • ...

Next you should inspect the technology settings by clicking each technology listed and drilling down into the settings:


Technology settings are organized as follows:

  • Settings specific to the technology for the entire Application
  • List of Analysis Units (a set of source code files to analyze) created for the Application
    • Settings specific to each Analysis Unit (typically the settings are the same as at Application level) that allow you to make fine-grained configuration changes.

You should check that settings are as required and that at least one Analysis Unit exists for the specific technology that includes all the source code you want to analyze.

Note that if source code you have delivered is not present in an Analysis Unit, it will not be submitted for analysis.

Review dependency configuration

This step involves checking that the dependency settings are correct for all your technologies. Dependencies are configured at Application level for each technology, between individual Analysis Units or groups of Analysis Units. Move to the Console screen if you are not already there:

Locate the Application and click it to access the Application - Overview page:

Now click the Config option in the left-hand panel, expand each Technology and locate the Dependencies section:

When a dependency is configured, during the analysis process the source code corresponding to the Analysis Units in the Source column is scanned for references to objects in the source code corresponding to the Analysis Units in the Target column (both columns are highlighted in the image below). If any matches are found, a link between the two objects is created:


References are traced using search strings, which is less selective than the parser-based technology used for other links traced by the analyzer. This technique detects a reference to an object wherever its name is mentioned, regardless of the context in which the reference occurs. As a result, incorrect links may be traced when a string happens to match a given name even though it is not logically related to the corresponding object, and you may have to intervene to filter out incorrect references - see Validate Dynamic Links.

Incorrect dependency settings may result in missing links (missing inter-technology dependencies) or in a large number of Dynamic Links (incorrect dependencies). This can have a knock-on effect on Quality Rule results and on Transaction flow.
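The over-matching behavior of search-string tracing can be pictured with a minimal sketch (illustrative only - the object names, code and function below are assumptions, not CAST's actual implementation). A search-string scan flags every whole-word occurrence of a target object's name, including occurrences in comments or log strings that are not real references:

```python
import re

# Hypothetical target objects (e.g. tables in a "Target" SQL Analysis Unit).
target_objects = ["CUSTOMER", "ORDER_LINE"]

# Hypothetical source code from a "Source" Analysis Unit being scanned.
source_code = """
    rows = db.query("SELECT * FROM CUSTOMER WHERE id = ?", cid)
    # TODO: rename the customer_cache; CUSTOMER here is just a comment
    log.info("ORDER_LINE batch complete")
"""

def find_candidate_links(code, targets):
    """Flag every whole-word occurrence of a target object's name.

    A parser would keep only the real SQL reference; the search-string
    approach also flags the comment and the log string, which is why
    some of the resulting links must be validated manually.
    """
    links = []
    for name in targets:
        for match in re.finditer(rf"\b{re.escape(name)}\b", code):
            links.append((name, match.start()))
    return links

candidates = find_candidate_links(source_code, target_objects)
print(len(candidates))  # more candidate links than real references
```

Only one of the three matches above is a genuine reference; the other two are exactly the kind of incorrect link that the Validate Dynamic Links step is designed to filter out.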

There are three types of dependencies, explained below:

Default rules

Default dependency rules are created automatically by Console between Analysis Units and groups of Analysis Units. If no dependency rules are "discovered" (during the source code delivery process), then Console will create a "default" rule. The default rules are as follows:

  • dependencies will be created between all "client" Analysis Unit groups and all "SQL" Analysis Units/groups of Analysis Units (whether legacy PL/SQL / T-SQL or Universal SQL analysis units)
  • dependencies will be created between all JEE Analysis Unit groups and all Mainframe Analysis Unit groups

In the screenshot below, no dependencies were "discovered" between any JEE Analysis Units and PL/SQL Analysis Units during the source code delivery; as such, a default rule has been created by Console to ensure links are created where references exist:
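The fallback behavior described above can be sketched as follows (a simplified illustration under assumed names and data structures, not Console's actual code): discovered rules take precedence, and default client-to-SQL rules are generated only when nothing was discovered during delivery.

```python
# Simplified sketch of the default-rule fallback (illustrative only;
# the function name and rule structure are assumptions).

def build_dependency_rules(discovered_rules, client_groups, sql_groups):
    """Return discovered rules if any exist; otherwise create a default
    rule from every "client" group to every SQL group."""
    if discovered_rules:
        return discovered_rules
    return [
        {"source": client, "target": sql, "kind": "default"}
        for client in client_groups
        for sql in sql_groups
    ]

# No rules were discovered during delivery, so a default rule is created
# between the JEE group and the PL/SQL group, as in the example above.
rules = build_dependency_rules(
    discovered_rules=[],
    client_groups=["JEE"],
    sql_groups=["PL/SQL"],
)
print(rules)
```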

Discovered rules

Discovered dependency rules are also created automatically by Console - these rules are directly "discovered" from the source code. Rules that have been "discovered" are always between individual Analysis Units since they are based on dependencies between projects (i.e. a classpath, for example for JEE, .NET, C/C++ etc.):

If there were no "missing project" alerts during the Application onboarding without Fast Scan - Step-by-step onboarding - Validate and accept the version step, you can be confident that all "discovered" dependencies are correct. If you do have "missing project" alerts, you should always try to fix them first; if this is not possible, you can create a custom dependency instead.

Custom rules

Custom dependency rules are those that have been created manually. Typically these are created to cover a situation which the automatic Default and Discovered dependency rules do not take into account - you will need to examine the source code manually to determine what dependencies you should create.

One example where a custom rule might be necessary is a cross-technology situation where code in an Analysis Unit for Technology A relies on code in an Analysis Unit for Technology B:

Another example would be where a "discoverer" does not exist for a technology and you are forced to create custom Analysis Units - in this case dependency rules must be created manually. Where several client technologies (for instance Java and C/C++) or several database schemas exist, the correct dependencies must be set, overriding the default settings if necessary. 

Manual permanent dependencies

It is possible to force Console to create a dependency between specific languages/technologies that is valid for every application managed on a specific Node in Console - in other words, once this is configured, it is no longer necessary to create the dependencies manually when Console has not created them automatically.

This can be achieved using a file called dependencies-matrix.xml located on the Node. See Application - Config - Analysis - Manual dependency configuration for more information.

Review module configuration

Modules are used extensively in the CAST dashboards and other features as a means to configure analysis results for a given Application into meaningful groups or sets for display purposes:

Indeed, objects that are not part of a module:

  • cannot be seen in the CAST dashboards
  • cannot be included in Architecture Models
  • cannot be included in Transaction/Function Point configurations

The definition of modules for a given Application also impacts the accessibility and usability of application assessment results. Specifically, in the CAST quality and quantity model, the module is the smallest assessment entity. The definition of a module can improve the relevance of the analysis results by linking modules to development teams and application functional layers etc.

By default, Console uses a "Per Technology" module strategy: unless you change the settings, one module is created per technology discovered in your Version when the snapshot is generated (source code from all Analysis Units in a given technology is placed in one module). Note that Modules are "attached" to the Application (not the Version).
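The "Per Technology" strategy amounts to grouping Analysis Units by their technology. The sketch below illustrates the idea (the unit names and technologies are invented for the example; this is not Console's implementation):

```python
from collections import defaultdict

# Hypothetical Analysis Units discovered in a Version: (name, technology).
analysis_units = [
    ("backend-core", "JEE"),
    ("backend-batch", "JEE"),
    ("schema-main", "PL/SQL"),
    ("web-ui", "HTML5/JavaScript"),
]

def per_technology_modules(units):
    """Group all Analysis Units of a given technology into one module,
    mirroring the default "Per Technology" strategy."""
    modules = defaultdict(list)
    for name, tech in units:
        modules[tech].append(name)
    return dict(modules)

modules = per_technology_modules(analysis_units)
print(modules)  # one module per detected technology
```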

You can view the strategy in place using the Application - Config - Modules screen. Move to the Console screen if you are not already there:

Locate the Application and click it to access the Application - Overview page:

Then access the Application - Config - Modules screen:

You can change the default strategy if necessary by selecting the option you prefer:

CAST recommends that you choose an "automatic" module generation type (Full content, Per technology, Per analysis unit) for your initial analysis/snapshot since it is not possible to verify the content of manual modules based on object filters until an analysis/snapshot has been run.

Review Assessment Model

At the core of CAST and the extensions used during an analysis is the Assessment Model: a set of rules, sizing/quality measures and distributions, organized in a hierarchy of parent Technical Criteria and Business Criteria, used to grade an Application and to provide information about any defects (code violations) in the Application's source code. The rules, measures and distributions are predefined and preconfigured according to established best practices; however, it is possible to modify some aspects to match your own environment, for example:

  • The weight (i.e. "importance") of a rule in its parent Technical Criterion. This value can be changed if some contributions are thought more or less important in a given context.
  • The criticality of a rule (i.e. whether the rule is considered "critical" or not in a given context). This allows you to "threshold" the aggregated grade with the lowest grade of its "critical" contributions.
  • Whether a rule is enabled or disabled in the next analysis.
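The combined effect of weights and criticality can be pictured with a small sketch of weighted grade aggregation (illustrative only - the function, the grade scale and the sample values are assumptions, not the exact Assessment Model formula):

```python
# Illustrative sketch of how weight and criticality could affect an
# aggregated grade (not the actual Assessment Model computation).

def aggregated_grade(contributions):
    """contributions: list of (grade, weight, is_critical).

    Compute the weighted average grade, then cap ("threshold") it at
    the lowest grade among the critical contributions, if any.
    """
    total_weight = sum(w for _, w, _ in contributions)
    weighted = sum(g * w for g, w, _ in contributions) / total_weight
    critical_grades = [g for g, _, crit in contributions if crit]
    if critical_grades:
        weighted = min(weighted, min(critical_grades))
    return round(weighted, 2)

# A critical rule with a low grade drags the aggregate down to its own
# grade, regardless of how well the other rules score.
grade = aggregated_grade([
    (3.8, 5, False),   # (grade, weight, critical?)
    (4.0, 7, False),
    (2.1, 9, True),    # critical rule: thresholds the result
])
print(grade)  # 2.1
```

This is why raising a rule's weight or marking it critical must be done deliberately: a single critical rule can dominate the aggregated result.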

See Application - Config - Assessment Model for more information.

Need for homogeneity

Modifying the Assessment Model is considered standard practice; however, these updates must be performed with care, as the legitimacy of trend and comparison information depends greatly on the methodology you use for the update. If the Assessment Model is not homogeneous over time and context, assessment information cannot be compared. Even for one-shot assessments, users will tend to compare assessment results - outside of the dashboard context - with their previous experiences. Homogeneity is therefore as important in this one-shot perspective as in a multiple-assessment perspective, and you should proceed with care.

What next?

See Application onboarding without Fast Scan - Step-by-step onboarding - Run the analysis.