
This checklist answers the following questions:

  • When is the User Input Security analysis configuration finished?
  • As an AIA, when can I hand over to the FO or customer?
  • What should I document as part of the implementation process?
  • As a reviewer, what is the proof that demonstrates the User Input Security configuration has been performed correctly?

Ensure you duplicate this page and fill in the last column with the result of each checked item. Upon completion, the AIA can assess whether he/she executed the process from end to end, and confirm that the analysis results are in line with the analyzer's current capabilities.


Check item | Deliverable | Check to perform - comment | AIA Comments
1. Check prerequisites 

1.1 Architecture review done and documented

Architecture diagram (PPT or VSD) + identification of main data flow patterns + identification of blackbox files to reuse from Dataflow SME kit's repository

Diagram must be detailed enough to understand the flow of data from User input down to the resources.

Each middleware (MOM, SOA, WS, etc.) should be positioned according to the architect's knowledge or findings in the source code.


1.2 .NET/SecurityForJava analysis log(s) contain no unresolved symbols

Zipped log(s) + summary of the number of unresolved messages (search "can not resolve" with Notepad++)

SecurityForJava: the log is located in the ByteCode folder, under the Log folder. Sample log name: Job-generation-2019-03-26.log

Unresolved symbols are stoppers of the flow, in the same way as methods reported as "No implementation found".

The remediation is to add the missing JARs, assemblies, or other dependencies to the analysis configuration.

Rework the analysis configuration until the log becomes clean (or close to clean in case of timeboxing).
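Instead of searching by hand in Notepad++, the unresolved-symbol count can be extracted programmatically. A minimal sketch in Java (the sample log lines are invented for illustration; real entries come from the Job-generation log mentioned above):

```java
import java.util.List;

public class UnresolvedSymbolCheck {

    // Count log lines that flag an unresolved symbol ("can not resolve").
    static long countUnresolved(List<String> logLines) {
        return logLines.stream()
                .filter(line -> line.contains("can not resolve"))
                .count();
    }

    public static void main(String[] args) {
        // Invented sample lines standing in for a real Job-generation log
        List<String> log = List.of(
                "INFO  analyzing com.acme.web.LoginServlet",
                "WARN  can not resolve symbol com.thirdparty.Helper.format(String)",
                "WARN  can not resolve symbol com.thirdparty.Dao.query(String)",
                "INFO  analysis done");
        System.out.println(countUnresolved(log)); // prints 2
    }
}
```

Rerun this after each configuration rework: the count should trend to zero (or near zero when timeboxed).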


1.3 User Input Security option enabled

Enable JEE/DotNet in the CAST Management Studio "User Input Security" tab or in the AIP Console Config page.
2. Check logs
2.1 SecurityAnalyzer.log

SecurityAnalyzer.log + commented summary of the main numbers (number of entrypoints, number of flaws found for each of the 17 flaw searches).

Some flaw searches should have entrypoints (cannot be 0): XSS, SQL injection (often), File path manipulation, Log forging, String format (JEE).

    • 0 entrypoints for any of these 5 rules is usually the sign of an incomplete architecture review, with missing target definitions for the resource at stake: the application is using a library that is not properly defined.
    • 0 entrypoints for Log forging means your application has no logging at all! This is highly suspicious. Check the logging framework(s), and set the log methods (info, warn, debug, fatal) as Log-targets.
    • 0 entrypoints for SQL injection means your application does not access any RDBMS database! This is highly suspicious. Check the persistence framework(s), and set the "SQLexecute" methods as SQL-targets.
    • The same applies to XSS, File path manipulation, etc.

On the other hand, it is valid to have 0 entrypoints for unused APIs: XPath, LDAP, and OS Command APIs are not always used by the application.
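To picture why an application with logging must yield Log forging entrypoints: any call to a log method (declared as a Log-target) that receives user data becomes a candidate flow, and the usual remediation is to neutralize line breaks before logging. A contrived sketch, with invented names:

```java
public class LogForgingExample {

    // Neutralize CR/LF so user-controlled text cannot inject fake log lines;
    // this is the typical remediation for a reported Log forging flaw.
    static String sanitizeForLog(String userInput) {
        return userInput.replace("\r", "_").replace("\n", "_");
    }

    public static void main(String[] args) {
        // Forged input trying to append a fake "admin logged in" entry
        String forged = "guest\nINFO  admin logged in";
        System.out.println("INFO  login attempt by " + sanitizeForLog(forged));
    }
}
```

Without the sanitization, the forged newline would split the message into two log lines, the second of which looks like a genuine entry.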


2.2 SecurityAnalyzer.log.secondary

SecurityAnalyzer.log.secondary of the last iteration + number of remaining messages if not totally clean.

No check to perform, as this file is now used only internally by CAST.


3. Check total and violations

3.1 total = number of methods calling input methods

Number of methods calling input methods.

If the "scope" is zero or very low, it is a sign that the real User Input pattern has been missed. Enterprise applications often have several input patterns: servlets for test pages + MVC for production transactions + another one for admin pages.

Review architecture and (re)define alternate input methods for the uncovered input patterns.
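The "scope" check above can be pictured as counting the caller methods of the declared input methods: if a whole input pattern (e.g. the MVC layer) is missing from the declarations, its callers silently drop out of the total. A toy sketch, where both input-method signatures are invented for illustration:

```java
import java.util.List;
import java.util.Set;

public class InputScope {

    // Hypothetical input-method signatures declared in the configuration;
    // real projects often need one per input pattern (servlet, MVC, admin...).
    static final Set<String> INPUT_METHODS = Set.of(
            "javax.servlet.ServletRequest.getParameter",
            "com.acme.mvc.Form.getField");   // invented MVC-style accessor

    // "total" = number of application methods calling at least one input method
    static long scope(List<List<String>> callsPerMethod) {
        return callsPerMethod.stream()
                .filter(calls -> calls.stream().anyMatch(INPUT_METHODS::contains))
                .count();
    }

    public static void main(String[] args) {
        List<List<String>> callsPerMethod = List.of(
                List.of("javax.servlet.ServletRequest.getParameter"),
                List.of("com.acme.mvc.Form.getField"),
                List.of("java.lang.String.trim"));   // calls no input method
        System.out.println(scope(callsPerMethod));   // prints 2
    }
}
```

If the MVC accessor were dropped from INPUT_METHODS, the total would fall to 1 even though the application still reads user input through it: that is the "zero or very low scope" symptom.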


3.2 violations for each dataflow QR

Number of violations for each rule, as seen in the CAST Dashboard.

This will differ from the number of flaws reported in SecurityAnalyzer.log, since one violation can have multiple PATH_IDs.

The number of violations is usually non-zero when the architecture review concluded the application is eligible.

Even with sanitization in place as a best practice for the dev team, there are usually some mistakes at the developer's end (some JSPs not using the sanitization methods, some test code not equipped with it, etc.): these will create violations.

When no violations are reported, it is often worth reviewing the application architecture again and considering the "flow breakers": SOA, IoC, CDI, generics. Refer to "Check for potential breaks in the flow due to indirection patterns" in User Input Security - Architecture review.
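The effect of a "flow breaker" can be pictured with a small indirection: when the concrete implementation is resolved at runtime (IoC/CDI style), a purely static dataflow engine may not link the input to the sink, hence 0 violations despite an eligible architecture. A contrived sketch, all names invented:

```java
public class FlowBreakerDemo {

    interface Handler {
        String handle(String data);
    }

    // The concrete handler is chosen at runtime (IoC-style indirection),
    // so the source -> sink link is invisible to a purely static call graph.
    static Handler resolve(String name) {
        return "sql".equals(name)
                ? data -> "SELECT * FROM users WHERE name='" + data + "'"  // sink
                : data -> data;                                            // no-op
    }

    public static void main(String[] args) {
        String userInput = "o'brien";        // tainted source
        Handler h = resolve("sql");          // indirection breaks the traced flow
        System.out.println(h.handle(userInput));
    }
}
```

At runtime the tainted string clearly reaches the SQL sink, but statically the engine only sees a call through the Handler interface, which is why such patterns need explicit treatment during the architecture review.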


3.3 check that no "call stack" contains a potential sanitization

For each violation, check that no candidate sanitization method is part of the flow.

Deliverable can be a list of suspect methods.

The sanitization may have been missed during blackboxing. It is recommended to declare sanitization as soon as possible, once the flaw has been identified by the engine.

Sanitization could also be performed at the presentation framework level (e.g. Struts validator, JSF validator), which is not currently seen by the User Input Security engine. When the user input associated with the shortened flow is identified as benefiting from the aforementioned validation, it is safe to ignore the violation. An exclusion in the CAST Dashboard is sometimes the only way to make such violations disappear.
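As an example of the kind of candidate sanitization method to look for in the call stacks, here is a minimal HTML-escaping helper (the class and method are invented for illustration; only a method actually approved by the customer should be declared with mode="clear"):

```java
public class XssSanitizerExample {

    // Minimal HTML escaping: a typical candidate sanitization method that,
    // once approved by the customer, would be declared with mode="clear".
    static String escapeHtml(String s) {
        return s.replace("&", "&amp;")   // must run first, before other entities
                .replace("<", "&lt;")
                .replace(">", "&gt;")
                .replace("\"", "&quot;");
    }

    public static void main(String[] args) {
        System.out.println(escapeHtml("<script>alert(1)</script>"));
        // prints &lt;script&gt;alert(1)&lt;/script&gt;
    }
}
```

If such a method appears in the middle of a reported flow, the violation is a candidate for reclassification rather than remediation, pending the customer approval covered in check 3.4.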


3.4 In case of sanitization detected and configured, get customer approval

Customer approval email for all method signatures configured as sanitization (mode="clear").

The acceptance of a sanitization can depend on the Security engineer; each company has its own practice. Some methods can be considered valid by customer A, while the same method is considered vulnerable (too weak) by customer B. This is why sanitization blackboxes shouldn't be reused for everyone.
3.5 Review on-the-fly decisions

Excel file, extracted thanks to a LINQ script.

Check that the decisions (file, database, input, etc.) are valid.

If not, open a ticket with CAST Support.


4. Summarize configuration

This summary will allow the Team Leader or any reviewer to quickly verify the implementation without having to read the whole set of blackbox files and other elements.
4.1 Assumptions taken, limitations documented

5 to 10 lines describing the gaps between the application architecture and the release notes, and the approach and workarounds taken to counter them.
4.2 Reused blackbox.xml

List the reused blackbox files (should match the list in section 1.1 above).

The list will show the blackbox versions used. The repository may have evolved between the start of the implementation and its end. Take the latest release of the kit to ensure minimum rework.
4.3 Added input, target, sanitization on rt.jar, mscorlib.dll or other frameworks

Extracted from the Excel master file (using the filter on column 1).

This extract shows the "active" part of the configuration. The rest is defined as a collection, which can be regarded as "passive" configuration ("continue the flow").


5. Package contribution to repository

5.1 Zip all the custom blackbox files + the corresponding Excel master files

Archive of all the custom blackbox files deployed + the corresponding Excel master files used to produce them.

Not sharing the configuration guarantees rework, and rework is a significant waste of time and even a source of failure.