Target audience:
Users of the extension providing data column access support for the SQL Analyzer extension.
Summary: This document describes the extension that provides data column access support for the SQL Analyzer extension (version ≥ 2.4).
Extension ID
com.castsoftware.datacolumnaccess
What's new?
Please see Data Column Access - 2.1 - Release Notes for more information.
Description
The Data Column Access extension provides support for:
- Data sensitivity policy based on Data Sensitivity indicators
- column access links with the SQL Analyzer ≥ 2.4
In what situation should you install this extension?
- If you need to check that the application complies with your data sensitivity policy, using the Data Sensitivity indicators
- If you need to see Access Read and Access Write links for columns in SELECT, INSERT, UPDATE, DELETE and MERGE statements
- If you need to see Access Write links between columns and Cobol data, from INSERT, UPDATE and SELECT ... INTO statements
- If you need to see Access Write links between columns selected in a DECLARE cursor statement and the Cobol data fetched in the corresponding FETCH cursor statement
CAST AIP release | Supported |
---|---|
8.3.x | |
8.2.x | |
8.1.x | |
8.0.x | |
7.3.x | |
Supported DBMS servers used for CAST AIP schemas
This extension is compatible with the following DBMS servers used to host CAST AIP schemas:
CAST AIP release | CSS | Oracle | Microsoft |
---|---|---|---|
All supported releases | | | |
Supported client languages
Language | Supported? | Notes |
---|---|---|
Mainframe Cobol | | |
Visual Basic | | |
.NET | | |
Java/JEE | | |
C/C++ | | Links to columns are created (where they exist) provided that the queries are located in EXEC SQL statements and the com.castsoftware.cpp extension was used to analyze the client code. |
Python | | |
Prerequisites
- An installation of any compatible release of CAST AIP (see table above)
- An installation of the SQL Analyzer extension (version ≥ 2.4)
Download and installation instructions
Please see:
The latest release status of this extension can be seen when downloading it from the CAST Extend server.
What results can you expect?
Links
- Links are created for transaction and function point needs.
- You can expect the following links on the DDL side within the same sql file:
- accessRead from View / Procedure / Function / Trigger / Event to Column
- accessWrite from Procedure / Function / Trigger / Event to Column
- You can expect the same links on the client side for the following languages, but only if the server-side code has been analyzed with the SQL Analyzer extension and the client analysis results have dependencies on the SQL Analyzer analysis results:
COBOL, PB, VB, .NET, Java, C/C++/Objective-C, iOS, Python
- When Cobol Data objects are created during the Cobol analysis and the production option Save data and links to other data is activated, you can expect the following links:
- accessWrite from Cobol Data to Column, for INSERT INTO and UPDATE statements. E.g. in INSERT INTO table1 (col1, col2, ...) VALUES (:data1, :data2), data1 and data2 will access-write col1 and col2; in UPDATE table1 SET col1 = :data1, col2 = :data2, data1 and data2 will access-write col1 and col2.
- These links sometimes have bookmarks and sometimes do not (see Limitations for the reason). When a bookmark exists, it points to the Cobol Data definition, not to the SQL statement.
- accessWrite from Column to Cobol Data, for SELECT ... INTO statements. E.g. in SELECT col1, col2, ... INTO :data1, :data2, col1 and col2 will access-write data1 and data2.
- These links have no bookmarks (see Limitations for the reason).
- accessWrite from the columns selected in a DECLARE cursor statement to the Cobol Data fetched in the corresponding FETCH cursor statement. E.g. declare toto_curs cursor for select col_toto from toto, followed by fetch toto_curs into :var1: col_toto will access-write var1.
- These links have no bookmarks (see Limitations for the reason).
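To make the pairing rules above concrete, here is a minimal Python sketch (purely illustrative, not part of the extension's implementation) that pairs host variables with columns for the INSERT INTO and SELECT ... INTO cases:

```python
import re

def insert_links(sql):
    """For INSERT INTO t (c1, c2) VALUES (:v1, :v2), pair each host
    variable with the column it writes (Cobol Data -> Column)."""
    m = re.search(r"INSERT\s+INTO\s+\w+\s*\(([^)]*)\)\s*VALUES\s*\(([^)]*)\)",
                  sql, re.IGNORECASE)
    if not m:
        return []
    cols = [c.strip() for c in m.group(1).split(",")]
    vals = [v.strip() for v in m.group(2).split(",")]
    # Only host variables (prefixed with ':') produce accessWrite links.
    return [(v.lstrip(":"), c) for v, c in zip(vals, cols) if v.startswith(":")]

def select_into_links(sql):
    """For SELECT c1, c2 INTO :v1, :v2 FROM t, pair each selected
    column with the host variable it writes (Column -> Cobol Data)."""
    m = re.search(r"SELECT\s+(.*?)\s+INTO\s+(.*?)\s+FROM", sql,
                  re.IGNORECASE | re.DOTALL)
    if not m:
        return []
    cols = [c.strip() for c in m.group(1).split(",")]
    hvars = [v.strip().lstrip(":") for v in m.group(2).split(",")]
    return list(zip(cols, hvars))
```

This is a simplified, regex-based reading of the examples; the actual analyzer works on parsed SQL, not text matching.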
Special notes about Links on client side
- For Java client-side code, SQL statements passed as parameters to methods covered by a SQL parametrization rule are analyzed:

```java
class Foo {
    final static String TABLE_NAME = "Person";

    void method(java.sql.Statement statement) throws java.sql.SQLException {
        String query = "select * from " + this.TABLE_NAME;
        statement.execute(query);
    }
}
```
- However, 'queries' that are merely visible in the DLM (and need reviewing) are not analyzed:

```java
class Foo {
    // not passed to any execute method
    private final static String text = "select name from Person";
}
```
- Explicit queries used in an ORM context are analyzed (or not) depending on whether they are visible in Enlighten
- COBOL EXEC SQL queries are analyzed
- SQL queries found in Python code are analyzed
- SQL queries found in .properties files (Java Property Mapping objects) are analyzed
Examples
Select Into, Move, Update
The value in the column COL_TOTO is written into the Cobol data VAR1. The content of VAR1 is then moved into the Cobol data VAR2. Finally, VAR2 updates the column COL_TITI. Here is the source code:
```cobol
IDENTIFICATION DIVISION.
PROGRAM-ID. TESTMOVE.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 VAR1 PIC X.
02 VAR2 PIC X.

PROCEDURE DIVISION.

    EXEC SQL
        SELECT COL_TOTO
        INTO :VAR1
        FROM TOTO
    END-EXEC

    MOVE VAR1 TO VAR2

    EXEC SQL
        UPDATE TITI
        SET COL_TITI = :VAR2
    END-EXEC
```
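The resulting chain (COL_TOTO → VAR1 → VAR2 → COL_TITI) relies on MOVE statements carrying values between Cobol data items. The propagation idea can be sketched in Python; this is a conceptual illustration only, not the analyzer's implementation:

```python
def propagate_moves(links, moves):
    """Extend column -> variable links through COBOL MOVE statements:
    if COL flows into VAR1, and MOVE VAR1 TO VAR2, then VAR2 also
    carries COL's value."""
    result = list(links)
    for src, dst in moves:
        for col, var in links:
            if var == src:
                result.append((col, dst))
    return result
```

Here, applying the MOVE VAR1 TO VAR2 step to the link (COL_TOTO, VAR1) yields the additional flow (COL_TOTO, VAR2), which the final UPDATE turns into a write of COL_TITI.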
Insert Into
Update
Select into
Declare cursor ... Fetch cursor ... Move ... Insert
The value in the column COL_TOTO is selected in DECLARE TOTO_CURS, then written into the Cobol data VAR1 during FETCH TOTO_CURS. The content of VAR1 is then moved into the Cobol data VAR2. Finally, VAR2 is inserted into the column COL_TITI of table TITI. Here is the source code:
```cobol
IDENTIFICATION DIVISION.
PROGRAM-ID. TESTCURSOR.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 VAR1 PIC X.
02 VAR2 PIC X.

PROCEDURE DIVISION.

    EXEC SQL DECLARE TOTO_CURS CURSOR FOR
        SELECT COL_TOTO
        FROM TOTO
    END-EXEC

    EXEC SQL
        FETCH TOTO_CURS
        INTO :VAR1
    END-EXEC

    MOVE VAR1 TO VAR2

    EXEC SQL
        INSERT INTO TITI (COL_TITI)
        VALUES (:VAR2)
    END-EXEC
```
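The DECLARE/FETCH pairing works by matching the cursor name: the columns selected in the DECLARE are paired positionally with the host variables of the FETCH ... INTO. A minimal Python sketch of this matching (an illustration of the rule, not the extension's code):

```python
import re

def cursor_links(statements):
    """Pair columns from DECLARE <name> CURSOR FOR SELECT ... with host
    variables from the matching FETCH <name> INTO ... statement."""
    cursors, links = {}, []
    for sql in statements:
        d = re.search(r"DECLARE\s+(\w+)\s+CURSOR\s+FOR\s+SELECT\s+(.*?)\s+FROM",
                      sql, re.IGNORECASE | re.DOTALL)
        if d:
            # Remember which columns this cursor selects.
            cursors[d.group(1).upper()] = [c.strip() for c in d.group(2).split(",")]
            continue
        f = re.search(r"FETCH\s+(\w+)\s+INTO\s+(.*)", sql,
                      re.IGNORECASE | re.DOTALL)
        if f:
            cols = cursors.get(f.group(1).upper(), [])
            hvars = [v.strip().lstrip(":") for v in f.group(2).split(",")]
            links += list(zip(cols, hvars))
    return links
```

For the program above, this pairing yields a single Column-to-Cobol-Data flow, COL_TOTO into VAR1, which is exactly the accessWrite link the extension reports.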
Sensitivity Indicator
To enable the Sensitivity Indicator for columns, you must provide the analyzer with a configuration file (with the .datasensitive extension), placed in the same folder as the source code. Each line of the configuration file uses the pattern: <database_name>.<schema_name>.<table_name>.<column_name>=<Sensitivity_indicator>.
Data type (datacolumnaccess ≥ 2.0.0-funcrel, SQL Analyzer ≥ 3.5.3-funcrel) | Data type (datacolumnaccess ≤ 1.0.0-funcrel, SQL Analyzer ≤ 3.5.2-funcrel) | Description |
---|---|---|
Highly Sensitive | GDPR - Very sensitive | The information stored in the column is very sensitive on its own, from a data sensitivity point of view, without being correlated with other information. Examples: credit card number, health insurance number, passport number. |
Very Sensitive | GDPR - Sensitive | The information in the column is sensitive because, when correlated with other information, it becomes very sensitive. Examples: address, phone number. |
Sensitive | Security | The information stored is critical for the security of the platform, such as the list of administrators. |
Not sensitive | Not concerned | The column is not involved in data sensitivity legislation. |
Template
You can also find attached a real example, for the AdventureWorks database:
Examples
Schema1.Table1.Col1=Highly sensitive
Schema1.Table1.Col2=Very Sensitive
Schema1.Table1.Col3=Sensitive
...
When the Sensitivity Indicator should apply to the same column name from all tables, in a specific schema, use a "*" wildcard:
Schema1.*.Age=Very sensitive
When the schema name is unknown, but the table and column names are known, e.g.:
Table1.Col1=Highly sensitive
When the Sensitivity Indicator should apply to all columns named Age from all tables / schemas / databases, e.g.:
*.Age=Very sensitive
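The matching semantics implied by the examples above (shorter patterns match the rightmost qualifiers of a column's full name, and "*" is a wildcard) can be sketched in Python. This is an illustrative reading of the examples, not the extension's actual resolver:

```python
def parse_datasensitive(text):
    """Parse lines of <db>.<schema>.<table>.<column>=<indicator>;
    shorter qualified names (e.g. Table1.Col1 or *.Age) are allowed."""
    rules = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        path, _, indicator = line.partition("=")
        rules.append((path.split("."), indicator.strip()))
    return rules

def sensitivity(rules, column_path):
    """Return the indicator for a fully qualified column, comparing rule
    segments right-to-left (from the column name) and treating '*' as a
    wildcard; unmatched columns default to 'Not sensitive'."""
    parts = column_path.split(".")
    for pattern, indicator in rules:
        if len(pattern) <= len(parts) and all(
            p == "*" or p.lower() == s.lower()
            for p, s in zip(reversed(pattern), reversed(parts))
        ):
            return indicator
    return "Not sensitive"
```

Under this reading, Schema1.*.Age applies to any Age column of any table in Schema1, while *.Age applies to any Age column anywhere.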
Performance
When using the Data Column Access extension 1.0.0, analysis execution time increases slightly. Here are some performance figures observed with AIP Core 8.3.24 and SQL Analyzer 3.4.4-funcrel:
Use case | Technical details | Execution details |
---|---|---|
100% SQL (Teradata) | 1 single SQL file, 14.5 MB; detected variant: Ansisql; 129,529 LOC | Analysis duration for 1 analysis unit; no data sensitivity; 988 access links added |
Cobol + SQL (Db2) | SQL: 1 single file, 289 KB; detected variant: Db2. Cobol: 1,996 files, 11 folders, 52 MB; 443.8K LOC | Analysis duration for 2 analysis units; no data sensitivity; 15,245 access links added, all of them on the Cobol side |
.NET + SQL (Microsoft SQL Server) | SQL: 6 files (3 of 6 are .sqltablesize), 1 folder, 4.08 MB; detected variant: SQL Server. .NET: 14,062 files, 2,747 folders, 1.04 GB. Assemblies: 218 files, 214 folders, 68.2 MB; 698K LOC | Analysis duration for 185 analysis units; no data sensitivity; of a total of 2,562 added access links, 6 are detected on the .NET side |