2- Check the source files
2-1- Remove the 0 kb source files
Check whether any source files have a size of 0 KB. If so, remove them.
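The check above can be scripted instead of done by hand. This is a minimal sketch, assuming the delivered sources sit under a single root folder you pass in; the function name and return value are illustrative, not part of any CAST tooling.

```python
import os

def remove_empty_files(root):
    """Delete every 0-byte file under root and return the list of removed paths."""
    removed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) == 0:
                os.remove(path)      # the file is empty, so nothing is lost
                removed.append(path)
    return removed
```

Reviewing the returned list before the next analysis run lets you confirm nothing unexpected was deleted.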
2-2- Check the biggest files
During the parsing step, the analyzer copies the files into memory. Some files may be too big, and copying them into memory can take time. If a file is larger than a few megabytes (5 or 10 MB), check whether it is a generated file. If it is, check whether other source files depend on or reference it; if not, you can remove it from the source files.
In the case of Web files (HTML, XML), check whether the files contain only data or are actually useful for the analysis.
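To identify the candidates for this review, you can list every file above a size threshold. A minimal sketch, assuming a 5 MB default threshold (adjust to 10 MB if that is your cut-off); the function name is illustrative.

```python
import os

def find_big_files(root, threshold=5 * 1024 * 1024):
    """Return (path, size_in_bytes) pairs for files larger than threshold, biggest first."""
    big = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            if size > threshold:
                big.append((path, size))
    # sort descending by size so the worst offenders come first
    return sorted(big, key=lambda item: item[1], reverse=True)
```

Each reported file can then be checked manually to decide whether it is generated and whether anything else references it.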
2-3- Check for any CR/LF sequence present in the source files
During the parsing step, these embedded CR/LF sequences are misleading: when the analyzer encounters them, it assumes a new line is beginning (which is not the case). This increases the analysis run time and leads to the performance issue.
How to find the sequences: open the file in Notepad++, use the 'Extended' search mode, and search for \r\n. This finds all the embedded sequences.
The solution to overcome this issue is to remove these sequences from the source file before running the analysis.
How to remove the sequences: open the file in Notepad++ and replace all occurrences of \r\n with blank.
We have seen this issue with .NET and Mainframe analyses.
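The Notepad++ replace-all described above can also be scripted when many files are affected. A minimal sketch, assuming you want the exact same behavior as the editor fix (every CR/LF pair is deleted, including any that serve as normal line endings, so apply it only to files you have confirmed are affected); the function name is illustrative.

```python
def strip_crlf(path):
    """Remove every CR/LF byte pair from the file; return how many pairs were removed."""
    with open(path, "rb") as f:
        data = f.read()
    count = data.count(b"\r\n")
    if count:
        # rewrite the file only when something actually changed
        with open(path, "wb") as f:
            f.write(data.replace(b"\r\n", b""))
    return count
```

Running it over the files found in the previous step removes the sequences in one pass instead of editing each file by hand.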
2-4- Check the SQ/IX files referred to in the .uaxdirectory file
If around a million SQ/IX files (.uax and src) are referred to in the .uaxdirectory file, this causes a performance issue in the SQL Analyzer.
You can comment out all the SQ/IX files (.uax and src) referred to in the .uaxdirectory file and run the analysis. Suppressing the SQ/IX files has no impact on transactions, and only a limited impact on some quality rules/metrics (e.g., the fan-out of some procedures, statements, etc.).
This issue is fixed in CAST 8.2.0.