The source code delivery process streamlines analysis set-up by suggesting a default analysis configuration when the delivery is validated and accepted. In the vast majority of cases, this default configuration will produce coherent analysis results; however, CAST recommends that the AIP Super Operator review the default configuration before running the analysis. This review process consists of several steps, each described below.
- Review technology specific settings as well as Analysis Units
- Review dependency configuration
- Review module configuration
- Review Assessment Model
Review technology specific settings
Move to the AIP Console screen if you are not already there:
Now click the Config option in the left-hand panel. The Application - Config - Analysis screen is displayed, showing a list of all technologies detected in the Application for which analysis configuration settings exist.
First, ensure that all the technologies you are expecting are listed on this screen. If any technologies are missing, there is likely an issue with the source code delivery: the delivery should be rejected and the source code redelivered.
Universal Technology is used to represent technologies that are handled through CAST AIP extensions. Typically, you'll find the following technologies represented through Universal:
- SQL (delivered as .SQL files)
- iOS Xcode (Objective-C/Swift etc.)
Next you should inspect the technology settings by clicking each technology listed and drilling down into the settings:
Technology settings are organized as follows:
- Settings specific to the technology for the entire Application
- List of Analysis Units (a set of source code files to analyze) created for the Application
- Settings specific to each Analysis Unit (typically the settings are the same as at Application level) that allow you to make fine-grained configuration changes.
You should check that settings are as required and that at least one Analysis Unit exists for the specific technology.
Review dependency configuration
This step involves checking that the dependency settings are correct for all your technologies. Dependencies are configured at Application level for each technology, between individual Analysis Units or groups of Analysis Units. Move to the AIP Console screen if you are not already there:
Now click the Config option in the left-hand panel, expand each Technology and locate the Dependencies section:
When a dependency is configured, during the analysis process source code corresponding to the Analysis Units in the Source column (highlighted in the image below) is scanned for references to objects in source code corresponding to the Analysis Units in the Target column (highlighted in the image below). If any matches are found, then a link between the two objects will be created:
References are traced using search strings, which is less selective than the parser-based technology used for other links traced by the analyzer. This technique detects a reference to an object wherever its name is mentioned, regardless of the context in which the reference occurs. As a result, incorrect links may be traced when a string happens to match a given name even though it is not logically related to the corresponding object. You may therefore have to intervene to filter out incorrect references - see Advanced onboarding - validate Dynamic Links.
Incorrect dependency settings may result in missing links (missing inter-technology dependencies) or in a large number of Dynamic Links (incorrect dependencies). This can have a knock-on effect on Quality Rule results and on Transaction flow.
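The behavior described above can be illustrated with a minimal sketch. This is not CAST's actual implementation - the function, data and names are hypothetical - but it shows why context-free search-string matching can produce incorrect Dynamic Links:

```python
# Illustrative sketch (not AIP internals): naive search-string matching
# flags a reference wherever a target object's name appears as a whole
# word, regardless of context -- which can yield false positives.
import re

def find_references(source_text, target_names):
    """Return the target names whose tokens appear anywhere in the source."""
    hits = []
    for name in target_names:
        # Whole-word match only; no parsing, so context is ignored.
        if re.search(r"\b" + re.escape(name) + r"\b", source_text):
            hits.append(name)
    return hits

client_code = '''
    // "ORDERS" here is just a log message, not a table access,
    // yet a string search still matches it.
    log.info("Processing ORDERS batch");
    stmt.execute("SELECT * FROM CUSTOMERS");
'''

print(find_references(client_code, ["ORDERS", "CUSTOMERS", "INVOICES"]))
# -> ['ORDERS', 'CUSTOMERS']: only CUSTOMERS is a real database
# reference; ORDERS would become an incorrect Dynamic Link.
```

A parser-based analyzer would discard the ORDERS match because it occurs inside a log string; the search-string approach cannot, which is why the Dynamic Links validation step exists.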
There are three types of dependencies, explained below:
Default dependency rules are created automatically by AIP Console between Analysis Units and groups of Analysis Units. If no dependency rules are "discovered" (during the source code delivery process), then AIP Console will create a "default" rule. The default rules are as follows:
- dependencies will be created between all "client" Analysis Unit groups and all "SQL" Analysis Units/groups of Analysis Units (whether legacy PL/SQL / T-SQL or Universal SQL analysis units)
- dependencies will be created between all JEE Analysis Unit groups and all Mainframe Analysis Unit groups
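The default-rule logic above can be sketched roughly as follows. The grouping, technology labels and data are illustrative assumptions, not AIP Console internals:

```python
# Hypothetical sketch of the default-rule logic: where no dependency was
# discovered for a pair, create a default rule from each "client" group
# to each SQL group, and from JEE groups to Mainframe groups.
def default_dependency_rules(unit_groups, discovered_rules):
    """unit_groups: dict mapping group name -> technology label (assumed)."""
    clients = [g for g, tech in unit_groups.items()
               if tech in ("JEE", ".NET", "C/C++")]
    sql = [g for g, tech in unit_groups.items()
           if tech in ("PL/SQL", "T-SQL", "Universal SQL")]
    mainframe = [g for g, tech in unit_groups.items() if tech == "Mainframe"]

    rules = set(discovered_rules)
    # Client -> SQL defaults, only where nothing was discovered.
    for c in clients:
        for s in sql:
            if (c, s) not in discovered_rules:
                rules.add((c, s))
    # JEE -> Mainframe defaults.
    for c in (g for g in clients if unit_groups[g] == "JEE"):
        for m in mainframe:
            if (c, m) not in discovered_rules:
                rules.add((c, m))
    return rules

groups = {"web-app": "JEE", "billing-db": "PL/SQL", "batch": "Mainframe"}
print(sorted(default_dependency_rules(groups, set())))
# -> [('web-app', 'batch'), ('web-app', 'billing-db')]
```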
In the screen shot below, no dependencies were "discovered" between any JEE Analysis Units and PL/SQL Analysis Units during the source code delivery - as such a default rule has been created by AIP Console to ensure links are created where references exist:
Discovered dependency rules are also created automatically by the AIP Console - these rules are directly "discovered" from the source code. Rules that have been "discovered" are always between individual Analysis Units since they are based on dependencies between projects (i.e. a classpath, for example for JEE, .NET, C/C++ etc.):
If there were no "missing project" alerts during Advanced onboarding - validate and accept the version, then you can be confident that all "discovered" dependencies are correct. If you do have "missing project" alerts, you should always try to fix these first; if this is not possible, you can then create a custom dependency.
Custom dependency rules are those that have been created manually. Typically these are created to cover a situation which the automatic Default and Discovered dependency rules do not take into account - you will need to examine the source code manually to determine what dependencies you should create.
One example where a custom rule might be necessary is a cross-technology situation where code in an Analysis Unit of Technology A relies on code in an Analysis Unit of Technology B:
Another example is where a "discoverer" does not exist for a technology and you are forced to create custom Analysis Units - in this case, dependency rules must be created manually. Where several client technologies (for instance, Java and C/C++) or several database schemas exist, the AIP Super Operator must set the correct dependencies, overriding the default settings if necessary.
Review module configuration
Modules are used extensively in the CAST dashboards and other features as a means to configure analysis results for a given Application into meaningful groups or sets for display purposes:
The definition of modules for a given Application also impacts the accessibility and usability of application assessment results. Specifically, in the CAST AIP quality and quantity model, the module is the smallest assessment entity. Defining modules carefully can improve the relevance of the analysis results, for example by aligning modules with development teams, application functional layers, and so on.
By default, AIP Console uses a "Per Technology" module strategy, i.e. one module per technology discovered in your Version is created when the snapshot is generated, unless you change the settings (source code from all Analysis Units of a given technology is placed in a single module). Note that Modules are "attached" to the Application (not the Version).
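The default "Per Technology" strategy amounts to a simple grouping, which can be sketched as follows. The unit names and technology labels are illustrative, not real AIP Console output:

```python
# Sketch of the default "Per Technology" module strategy: every
# Analysis Unit lands in one module named after its technology.
from collections import defaultdict

def per_technology_modules(analysis_units):
    """analysis_units: list of (unit_name, technology) pairs (assumed shape)."""
    modules = defaultdict(list)
    for unit, tech in analysis_units:
        modules[tech].append(unit)
    return dict(modules)

units = [("web-app", "JEE"), ("admin-app", "JEE"), ("billing-db", "PL/SQL")]
print(per_technology_modules(units))
# -> {'JEE': ['web-app', 'admin-app'], 'PL/SQL': ['billing-db']}
```

A custom strategy would replace the technology key with, for instance, a team or functional-layer label, which is why the module definition affects how results are grouped in the dashboards.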
You can view the strategy in place using the Application - Config - Modules screen. Move to the AIP Console screen if you are not already there:
Then access the Application - Config - Modules screen:
You can change the default strategy if necessary by selecting the option you prefer:
Review Assessment Model
At the core of CAST AIP and the extensions used during an analysis is the Assessment Model: a set of rules, sizing/quality measures and distributions, organized in a hierarchy of parent Technical Criteria and Business Criteria, used to grade an Application and to provide information about any defects (code violations) in the Application's source code. These rules, measures and distributions are predefined and preconfigured according to established best practices; however, it is possible to modify some aspects to match your own environment, for example:
- The weight (i.e. "importance") of a rule within its parent Technical Criterion. This value can be changed if some contributions are considered more or less important in a given context.
- The criticality of a rule (i.e. whether the rule is considered "critical" or not in a given context). This allows the aggregated grade to be "thresholded" at the lowest grade among its "critical" contributions.
- Whether a rule is enabled or disabled in the next analysis.
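The interplay of weight, criticality and enablement can be sketched with a toy aggregation. The formula and data below are illustrative assumptions, not CAST AIP's exact computation:

```python
# Sketch: aggregate a Technical Criterion grade as the weighted average
# of its enabled rules' grades, capped ("thresholded") at the lowest
# grade among rules flagged critical. Grades assumed on CAST's 1-4 scale.
def technical_criterion_grade(rules):
    """rules: list of dicts with 'grade', 'weight', 'critical', 'enabled'."""
    active = [r for r in rules if r["enabled"]]
    if not active:
        return None
    weighted = sum(r["grade"] * r["weight"] for r in active)
    grade = weighted / sum(r["weight"] for r in active)
    # Critical rules threshold the aggregate: it cannot exceed the
    # lowest grade of any critical contribution.
    critical_grades = [r["grade"] for r in active if r["critical"]]
    if critical_grades:
        grade = min(grade, min(critical_grades))
    return round(grade, 2)

rules = [
    {"grade": 3.8, "weight": 5, "critical": False, "enabled": True},
    {"grade": 2.0, "weight": 1, "critical": True,  "enabled": True},
    {"grade": 4.0, "weight": 3, "critical": False, "enabled": False},
]
print(technical_criterion_grade(rules))
# -> 2.0: the weighted average is 3.5, but the critical rule's
# grade of 2.0 caps the result; the disabled rule is ignored.
```

This is why raising a rule's weight or marking it critical must be done consistently: either change can move the aggregated grade significantly.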
See Application - Config - Assessment Model for more information.
Need for homogeneity
Modifying the Assessment Model is considered standard practice; however, these updates must be performed with care, as the legitimacy of trend and comparison information depends greatly on the methodology used for the update. If the Assessment Model is not homogeneous over time and context, assessment information cannot be compared. Even for one-shot assessments, users will tend to compare assessment results - outside the dashboard context - with their previous experience. Homogeneity is therefore as important in this one-shot perspective as in a multiple-assessment perspective, and you should proceed with care.