Summary: this page describes how to configure the Node storage folder locations. This is an optional but recommended step in the installation configuration process.
Each Node that is managed by Console requires a set of storage folders used to store items related to each Application onboarded and analyzed via the Console. Currently, the following storage folders are required on each Node; each is set to a default location as listed below.
If you prefer to locate these storage folders elsewhere, for example a common/shared network folder, or on another folder/drive on the Node, you can change where these folders are located, as described below. CAST highly recommends that this is done as part of the installation process before any Applications are onboarded or imported.
Changing folder locations with onboarded Applications
Whilst it is technically possible to change the folder locations once an Application has been onboarded, please be aware of the following if you choose to do so:
- Changing the Delivery folder location is NOT supported and will break your existing Applications
- Changing the Deployment folder will cause added/deleted objects to be recorded in subsequent snapshots
- Changing the Common Data folder requires that you copy the content from the previous location to the new location
- Node Logs and LISA/LTSA folders can be changed with no impact (existing items will remain in the old location).
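For the Common Data folder specifically, the copy from the previous location to the new one can be scripted. Below is a minimal sketch in Python (the paths shown are placeholders, not CAST defaults; on Windows, `robocopy` would work equally well):

```python
import shutil
from pathlib import Path

def copy_common_data(old_location: str, new_location: str) -> None:
    """Copy the entire contents of the previous Common Data folder
    to the new location, preserving the folder structure."""
    src = Path(old_location)
    dst = Path(new_location)
    if not src.is_dir():
        raise FileNotFoundError(f"Previous Common Data folder not found: {src}")
    # dirs_exist_ok=True lets the copy proceed if the new folder
    # already exists (e.g. created by the Node on first access).
    shutil.copytree(src, dst, dirs_exist_ok=True)

# Example with placeholder paths:
# copy_common_data(r"D:\CAST\CommonData", r"\\fileserver\cast\CommonData")
```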
Reusing folders configured for use with Applications onboarded in CAST Management Studio (existing customers)
Whilst it is technically possible to re-use folders configured for use with Applications onboarded in CAST Management Studio (and therefore mix data from Console with data from CAST Management Studio), this is not supported, mainly due to the different Delivery folder structure used in Console. The one exception is when you import an Application managed with CAST Management Studio into AIP Console, for which a specific process exists.
≥ 2.x storage locations
All storage folders located on Nodes are created the first time that they are accessed (i.e. they are not created during the installation process), unless they exist already. The shared storage folders required during the installation of the Console front-end must exist already.
|Type|Default location|Description|Defined by|
|----|----|----|----|
|Delivery| |Used for storing successive and compressed versions of an Application's source code produced during the source code delivery phase.| |
|Deployment| |Used for storing the most recent version of the Application's source code for analysis in uncompressed format.| |
|Common Shared Data| |Used for storing miscellaneous items required for an Application analysis, for example Dynamic Link rule files, analysis reports, snapshot reports, application backups, Sherlock extractions etc.| |
|Node Logs| |Used for storing all logs produced by the Node with regard to code delivery/analysis/snapshot activities. One sub-folder will be created per Application onboarded in Console.|%PROGRAMFILES%\CAST\8.x\CastGlobalSettings.ini on the Node|
|Analysis folders (LISA)| |Location used to store temporary files generated during the analysis/snapshot process on the Node.| |
|Analysis folders (LTSA)| |Location used to store Extensions used during an analysis.| |
|Shared configuration files| |Configuration files that are shared among all users of the server.| |
|User configuration files| |Configuration files for legacy AIP Core applications (such as CAST Server Manager) that are specific to the current logged in user on the Node. CAST recommends that you avoid modifying this value unless you ensure that either only one user is using this installation, or each user has their own path based on an environment variable.| |
|User temporary files| |Miscellaneous temporary files for legacy AIP Core applications. Some files may be quite large. Specific to the current logged in user.| |
Configure the storage folder locations
Delivery, Deployment, Common Shared Data
Docker installation - docker-compose.yml
Remotely connect to the machine hosting the Console front-end services and then locate and edit the following file (created from the unzipped installation media):
Locate the following section in the file:
Make any changes you require, save the file and then restart all Console front-end services.
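As an illustration only (the service and volume names below are placeholders and may not match the exact content of the shipped file), the relevant section maps host-side folders into the containers; the host-side paths on the left of each colon are what you would change:

```yaml
# Illustrative fragment - service/volume names in your docker-compose.yml
# may differ. Edit the host paths (left of the colon) to relocate storage.
services:
  console:
    volumes:
      # host path : path inside the container
      - /opt/cast/delivery:/delivery
      - /opt/cast/deploy:/deploy
      - /opt/cast/common-data:/common-data
```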
Java JAR installation - aip_config schema
See Change shared folder storage in 2.x - Enterprise mode - Installation of AIP Console front-end via Java JAR installers.
CastGlobalSettings.ini for Node specific folders
Remotely connect (for example using Remote Desktop) to the Node whose storage folder locations you would like to change and then edit the following file (located in the AIP Core installation folder) with a text editor:
Locate the following sections in the file:
Make any changes you require, for example to change some of the folders to use a mapped network drive. Ensure you uncomment each option you change by removing the leading semi-colon (;):
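As an illustration (the key name below is a placeholder; use the keys actually present in your CastGlobalSettings.ini), a commented entry uses the default location, and removing the semi-colon activates the custom location:

```ini
; Commented entry - the leading semi-colon means the default location is used:
;CAST_EXAMPLE_PATH=C:\CAST\Example

; Uncommented entry - semi-colon removed, the custom location applies:
CAST_EXAMPLE_PATH=Z:\CAST\Example
```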
Save the file when your changes are complete and repeat for each Node you would like to change. Finally ensure that you restart the Node service so that the changes are taken into account.
The CastGlobalSettings.ini file ALSO contains values for Delivery, Deploy, LISA and LTSA as shown in the example below. These locations are overridden by the locations defined in the <unpacked_zip>/docker-compose.yml file.
Note also that the following items in the CastGlobalSettings.ini file are no longer used:
Notes about using Mapped drives and Node Windows Service
While using mapped drives for the Node-specific data storage locations is supported, the Node can sometimes have difficulty accessing these storage locations when it is configured to run as a Windows Service. For the most part, these difficulties are caused by a mismatch between the context (i.e. the Windows user login) which is:
- configured to access the mapped drives
- configured to run the Node Windows Service
When a Node is started, the storage folder locations/paths are checked to ensure that they can be accessed. The mapped drives may still appear to be "unavailable" to the Node even if the mapped drives in the current Windows session are configured to use the current Windows user login (and can therefore be accessed in Windows File Explorer without issue), and even if the Windows Service running the Node is configured to use that same user login. To remediate this issue, the following batch file will be run automatically when the Node is started (whether as a Windows Service or using the .bat file) AND when a mapped drive has been configured for a Node storage folder AND this drive is "unavailable":
This batch file automatically scans (using the Windows net use command) all mapped drives that exist in the current Windows session. If any are "unavailable", the batch file remaps them for the duration of the Windows session using the Windows user login configured to run the Node Windows Service, thereby ensuring that they can be accessed by the Node. Note that this Windows user login must already have access to the mapped drive location, otherwise the mapped drives will remain unavailable.
- If a drive is mapped with subst, the config should be manually added to map-drives.bat, for example:
- If different Windows user logins are used for the scenarios listed below, then it is possible to edit map-drives.bat and add the required mapped drive locations via the net use command (an example is available in the batch file). This will ensure that any required mapped drives are available to the Node:
- access the mapped drives (for example, when "Connect using different credentials" is ticked)
- run the Node Windows Service
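To illustrate both cases (drive letters, local paths, share paths and credentials below are all placeholders, not values from the shipped batch file), the lines added to map-drives.bat use the standard Windows subst and net use commands:

```bat
@echo off
rem Placeholder example for a drive originally created with subst:
subst X: "D:\CAST\Delivery"

rem Placeholder example for a network drive, mapped with the Windows
rem user login that runs the Node Windows Service (/persistent:no
rem limits the mapping to the current session):
net use Z: \\fileserver\cast /user:MYDOMAIN\castservice /persistent:no
```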