You are now ready to run the analysis and save the results, but not quite ready to generate a snapshot. Before generating the first snapshot, you still need to validate the analysis log to ensure the configuration is accurate (see 2.2.2. Validate and fine tune the Analysis) and to configure analysis options (see Configure advanced options) that influence the way results are presented and that extend the assessment model with optional quality checks. This section focuses on the options available and the best practices to follow when executing an application analysis, as well as when analyzing a new version of a previously on-boarded application.
Run the Analysis
If this is the first time the application is being analyzed, run the analysis with the Run Analysis Only option selected before attempting to generate a snapshot:
In particular, this allows you to inspect the analysis log to ensure the configuration is complete and accurate, review Dynamic Links, and configure the available analysis exploitation components (first-time analysis only), as described in the following process steps, before generating the final snapshot. In addition, this approach can considerably improve analysis efficiency, especially for large and complex applications.
- For a first-time analysis, during application on-boarding, use the Run Analysis Only option.*
- When analyzing a new release of a previously on-boarded application, use the Generate Snapshot option.
- CAST recommends validating the Analysis Services meta model and Dynamic Links approximately once every four (4) re-analyses to ensure that the configuration remains relevant. A different frequency may be appropriate depending on the degree of change introduced between releases of the application.
*Although this is considered a best practice, time permitting it may be convenient to generate a preliminary version of the snapshot even though you know that a new snapshot will be required: the early availability of dashboard results greatly improves your ability to validate the analysis configuration.
If you encounter an error that prevents the analysis from completing, this section may provide some answers.
Failed to generate the list of set
If your analysis fails with the error "Failed to generate the list of set. Check in CAST-MS log for 'A cycle was detected' then fix the cycle using 'Details of the cycle' information.", this may mean you have a cyclical dependency between Analysis Units (although this is not the ONLY cause of this error). To check, open the CAST-MS log file and search for "A cycle was detected". The details of the cyclical dependency are displayed in the log as follows:
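Locating every occurrence of this message in a large log by hand can be tedious. A minimal sketch of a log search, assuming a hypothetical log file name (adjust the path to your actual CAST-MS log location):

```python
import re
from pathlib import Path

# Hypothetical path -- point this at your actual CAST-MS log file.
LOG_FILE = Path("CAST-MS.log.txt")

def find_cycle_messages(log_path: Path) -> list[str]:
    """Return the log lines mentioning a detected cycle, with their line numbers."""
    hits = []
    with log_path.open(encoding="utf-8", errors="replace") as log:
        for lineno, line in enumerate(log, start=1):
            if re.search(r"A cycle was detected", line):
                hits.append(f"line {lineno}: {line.rstrip()}")
    return hits

if LOG_FILE.exists():
    for hit in find_cycle_messages(LOG_FILE):
        print(hit)
```

The line numbers let you jump straight to the "Details of the cycle" section that follows each occurrence in the log.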
If the error is indeed caused by a cyclical dependency, resolve it by selecting the Dependencies tab (in the Application editor in the CAST Management Studio) and removing the dependency that is causing the cycle. Run the analysis again until you no longer encounter the error.
To further improve analysis efficiency, you may also want to consider analyzing selected Analysis Units independently to validate their configuration and to troubleshoot issues related to analysis performance, memory usage, etc. When using the Run Analysis Only option, CAST AIP offers various troubleshooting features to help with the fine-tuning of the analysis configuration:
Some of the most important options to consider include:
- For J2EE applications, activate CAST Script traces if you have made a CAST Script customization. This will display the DEBUG messages you have added in your CAST Script.
- Skip analysis result save: considering that the analysis result storage phase can take up to 80% of the total analysis time, this option can save considerable time when the initial objective is simply to clean up the analysis log. Consider this option when:
- You are analyzing a big application and you expect the analysis to take a long time
- You want to check what is missing or misconfigured (such as libraries, paths, missing annotations, and XML file handling): this is common during the first run or when you still have "unresolved" errors/warnings in the analysis log.
- You want results quickly so you can fix issues immediately and then run another analysis. Do not set this option if you are going to run the job overnight: you may already have fixed all the problems, so not saving the analysis results would cost you time in the long run.
The Skip analysis result save option is equivalent to the Test Analysis option, but provides additional debug options.
When working with J2EE, C/C++/Pro*C and Mainframe technologies, you should always first run the analysis without saving the results (or use the Test Analysis option) so that you can more easily clean up your settings. For ABAP, C# and VB.NET, you can save the results immediately.
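While cleaning up settings across repeated runs, a quick tally of the remaining "unresolved" messages makes it easy to see whether each configuration change is reducing them. A small illustrative sketch, assuming only that the log is a plain-text file whose lines mention "unresolved" together with a severity keyword (the exact CAST log format may differ):

```python
from collections import Counter
from pathlib import Path

def count_unresolved(log_path: Path) -> Counter:
    """Tally log lines mentioning unresolved items, grouped by apparent severity."""
    counts: Counter = Counter()
    with log_path.open(encoding="utf-8", errors="replace") as log:
        for line in log:
            lowered = line.lower()
            if "unresolved" in lowered:
                # Crude severity guess based on keywords in the line.
                severity = "error" if "error" in lowered else "warning"
                counts[severity] += 1
    return counts
```

Running this before and after a configuration fix (e.g. adding a missing library path) gives a simple before/after comparison without re-reading the whole log.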
- For J2EE applications, Keep Generated Files from JSP will copy the Java files generated from JSP files into the selected directory.
- For J2EE applications, Keep Generated Files from Annotations will copy into the selected directory the intermediate files that are generated from Java annotations and used by XML query files to manage Java annotation semantics (i.e. to create the objects and links induced by annotations).
- For J2EE applications, Keep Generated CAST Script from XML & Annotations will copy into the selected directory the CAST Script files generated from XQuery or other XML processing files.
Once the analysis has completed successfully, it is recommended that you back up the analysis configuration using the File > Export option in the CAST Management Studio.