Installation on Linux via Docker


Overview

This guide covers the installation of CAST Imaging on Linux using Docker. It’s intended for:

  • Docker running on Linux operating systems
  • new installations (for updates to existing installations, see In-place component update)

The installation package com.castsoftware.imaging.all.docker includes an installation script and configuration files. All Docker images are pulled directly from https://hub.docker.com during the installation; alternatively, you can configure the installer to run in “offline mode”.

Available components

The following components are provided as Docker images:

  • imaging-services
  • imaging-viewer
  • analysis-node (includes CAST Imaging Core ≥ 8.4)
  • dashboards (optional)
    Available in ≥ 3.3.x-funcrel

Installation methods, scenarios and options

The installation is performed via a script (.sh file) that supports several deployment approaches:

  • Single machine mode, all components
    • All components on one machine in a single Docker instance
    • Ideal for POCs or testing purposes, or small production deployments
  • Distributed mode, all components (multiple machines)
    • All components distributed across multiple machines
    • Recommended for production environments
  • Read-only/standalone modes
    Available in ≥ 3.4.x-funcrel
    • Read-only deployments providing end-user access to CAST Dashboards or to “viewer” results
    • No analysis capability
    • Two deployment options are available, with all components either on one single machine or across two machines:
      • CAST Dashboards: imaging-services and dashboards components
      • Viewer: imaging-services and imaging-viewer components

Single machine mode, all components

All components on one single machine:

Install command: all

  Components installed: All components. If you choose this option, you must ensure that your machine has sufficient resources to run all components: see Requirements.

  Containers created:
    • analysis-node
    • auth-service
    • console
    • control-panel
    • dashboards
    • etl
    • gateway
    • imaging-apis
    • neo4j
    • open_ai_manager
    • postgres
    • server
    • sso-service

Distributed mode, all components

All components distributed across multiple machines:

Install command: imaging-services

  Components installed:
    • CAST Imaging Services (front-end)
    • Database instance
    • Keycloak authentication
  Must only be installed once per installation of CAST Imaging.

  Containers created:
    • auth-service
    • console
    • control-panel
    • gateway
    • postgres
    • sso-service

Install command: imaging-viewer

  Components installed:
    • CAST Imaging Viewer (results display)
  Must only be installed once per installation of CAST Imaging.

  Requires:
    • access to the imaging-services component.
    • access to any additional database components.

  Containers created:
    • etl
    • imaging-apis
    • neo4j
    • open_ai_manager
    • server

Install command: analysis-node

  Components installed:
    • CAST Imaging Node Service (analysis engine - also includes CAST Imaging Core ≥ 8.4)
  Can be installed multiple times per installation of CAST Imaging, once per separate dedicated machine to load balance.

  Requires:
    • access to the imaging-services component.
    • access to any additional database components.

  Containers created:
    • analysis-node

Install command: dashboards
Available in ≥ 3.3.x-funcrel

  Components installed:
    • CAST Dashboards (Management/Engineering/Security)
  Must only be installed once per installation of CAST Imaging.

  Requires:
    • access to the imaging-services component.
    • access to any additional database components.

  Containers created:
    • dashboards

Read-only/standalone modes

Available in ≥ 3.4.x-funcrel

For end-user results consultation without analysis capabilities, you can deploy CAST Imaging in “read-only” mode specifically for the following two use cases:

  • CAST Dashboards (Engineering/Management/Security) only
  • “viewer” only

This type of deployment is designed for environments where:

  • No analysis is performed locally
  • Users only need to view results through CAST Dashboards or the “viewer”
  • Application analysis occurs on a separate, dedicated CAST Imaging installation

This can be achieved by installing only the following components either on two separate dedicated machines or one single machine:

For CAST Dashboards:

  • imaging-services (must always be installed first)
  • dashboards (requires access to the imaging-services component)

For “viewer”:

  • imaging-services (must always be installed first)
  • imaging-viewer (requires access to the imaging-services component)

Requirements

Installation process

Step 1 - Determine your installation method

  • Single machine mode, all components or read-only mode, single machine - connect to the machine and proceed to Step 2. Ensure that your machine has sufficient resources to run all components: see Requirements.
  • Distributed mode, all components or read-only mode, multiple machines - identify which machine will run the imaging-services component (this must be installed first). Connect to this machine and then proceed to Step 2.

Step 2 - Obtain the installation media

Download the installer using curl:

curl -# -O -J "https://extend.castsoftware.com/api/package/download/com.castsoftware.imaging.all.docker/<version>?platform=linux_x64" -H "x-nuget-apikey: <api-key>" -H "accept: application/octet-stream"

Where:

  • -#: enables a download progress bar.
  • -O: (--remote-name) ensures the file is downloaded and saved to the current folder using the same name as the remote file.
  • -J: (--remote-header-name) ensures the -O option uses the server-specified Content-Disposition filename instead of extracting a filename from the URL. If the server-provided filename contains a path, that is stripped off before the filename is used.
  • <version>: use latest to download the most recent release, or specify a specific release, e.g., 3.2.0-funcrel.
  • -H: (--header) defines the additional header to include in information sent.
  • <api-key>: your CAST Extend API key (obtain this from https://extend.castsoftware.com/#/profile/settings)

Example for latest release:

curl -# -O -J "https://extend.castsoftware.com/api/package/download/com.castsoftware.imaging.all.docker/latest?platform=linux_x64" -H "x-nuget-apikey: a9999a9a-c999-999d-999b" -H "accept: application/octet-stream"

Unzip the resulting ZIP file anywhere on your local disk. The following files/folders will be created:

  • cast-imaging-dashboards (folder)
  • cast-imaging-node (folder)
  • cast-imaging-services (folder)
  • cast-imaging-viewer (folder)
  • tools (folder)
  • cast-imaging-install.sh (file)
  • configuration.conf (file)
  • image_infos.txt (file)

Running the installer without an internet connection

Available in ≥ 3.4.0-funcrel

The installer automatically downloads all required Docker images from Docker Hub during installation. For air-gapped environments without internet access, you can manually download and transfer the images to the target machines using the URLs listed below, then set the installer to use the local images rather than fetching them from Docker Hub.

CAST Imaging component    Image name       URL
imaging-services          Gateway          https://hub.docker.com/r/castimaging/gateway
imaging-services          Control Panel    https://hub.docker.com/r/castimaging/admin-center
imaging-services          SSO Service      https://hub.docker.com/r/castimaging/sso-service
imaging-services          Auth Service     https://hub.docker.com/r/castimaging/auth-service
imaging-services          Console          https://hub.docker.com/r/castimaging/console
dashboards                Dashboards       https://hub.docker.com/r/castimaging/dashboards
analysis-node             Analysis Node    https://hub.docker.com/r/castimaging/analysis-node
imaging-viewer            ETL              https://hub.docker.com/r/castimaging/etl-service
imaging-viewer            AI Service       https://hub.docker.com/r/castimaging/ai-service
imaging-viewer            API Service      https://hub.docker.com/r/castimaging/imaging-apis
imaging-viewer            Viewer Server    https://hub.docker.com/r/castimaging/viewer
imaging-viewer            Neo4j            https://hub.docker.com/r/castimaging/neo4j

  • Download (i.e. using docker pull) the correct image release for your installation by using the appropriate tag. The latest tag fetches the most recent release and is recommended.
  • Convert the images to .tar files using docker image save and then transfer them to the relevant air-gapped machine(s).
  • Convert the .tar files back to images using docker image load on the relevant machine(s).
  • Locate the configuration.conf file at the root of the unzipped files (on each relevant machine), open it in a text editor such as nano or vi, and set OFFLINE_MODE=true to enable offline mode (see the example sequence after this list).
  • Proceed with the installation process detailed below.
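
For example, transferring a single image could look like the following sketch; the castimaging/gateway image and the /tmp path are purely illustrative, the tag should match your target release, and the sed edit assumes the OFFLINE_MODE entry is already present in configuration.conf:

# On a machine with internet access: pull the image and save it to a .tar file
docker pull castimaging/gateway:latest
docker image save castimaging/gateway:latest -o /tmp/gateway.tar

# After transferring gateway.tar to the air-gapped machine, load it back into Docker
docker image load -i /tmp/gateway.tar

# Enable offline mode in configuration.conf so the installer uses the local images
sed -i 's/^OFFLINE_MODE=.*/OFFLINE_MODE=true/' configuration.conf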

Step 3 - Configure your installation

configuration.conf file

Locate the configuration.conf file at the root of the unzipped files, open it in a text editor such as nano or vi, and modify the configuration variables within it as described below.

  • Use forward slashes for all paths
  • Do not use localhost, 127.0.0.1, or simple hostnames for _HOSTNAME variables
  • CAST recommends leaving port numbers at default values where possible
  • See Configuration examples for more information about how to configure your installation in distributed mode

xxx_HOSTNAME variables

Locate the following variables in the configuration.conf file (they determine the hostname for each CAST Imaging component):

  • IMAGING_SERVICES_HOSTNAME
  • IMAGING_VIEWER_HOSTNAME
  • IMAGING_NODE_HOSTNAME
  • IMAGING_DASHBOARDS_HOSTNAME

For each variable, configure the appropriate FQDN (fully qualified domain name) or static IP address (use hostname -f (FQDN) or hostname -I (IP address) to determine this):

  • Single machine mode, all components: configure only the IMAGING_SERVICES_HOSTNAME using the FQDN/IP address of the host machine on which you are running the installation. All other xxx_HOSTNAME variables can be left blank.
  • Read-only mode, single machine: configure only the IMAGING_SERVICES_HOSTNAME variable and then either the IMAGING_VIEWER_HOSTNAME or the IMAGING_DASHBOARDS_HOSTNAME (depending on your deployment scenario) using the FQDN/IP address of the host machine on which you are running the installation.
  • Distributed mode, multiple machines / Read-only mode, multiple machines: configure the appropriate xxx_HOSTNAME variable using the FQDN/IP address of the machine that will host that particular component. Ensure that machines can communicate with each other over the network.
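
As an illustration, a distributed deployment spread across four machines might use values along these lines in configuration.conf (the FQDNs below are hypothetical placeholders, not values from this guide):

# Hypothetical FQDNs - replace with the machines hosting each component
IMAGING_SERVICES_HOSTNAME=imaging-services.example.com
IMAGING_VIEWER_HOSTNAME=imaging-viewer.example.com
IMAGING_NODE_HOSTNAME=analysis-node01.example.com
IMAGING_DASHBOARDS_HOSTNAME=dashboards.example.com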

INSTALL_DIR variable

Locate the INSTALL_DIR variable in the configuration.conf file: this defines where the various properties/yaml files for each component will be stored (default: /opt/cast). During the installation process, this path will be populated with three sub-folders, core, installation and shared:

  • Distributed mode, multiple machines:
    • where you will install multiple analysis-node components, you must ensure that the /opt/cast/shared sub-folder is accessible (in read/write mode) to ALL analysis-node components
    • DO NOT “share” the parent folder /opt/cast
    • Options for sharing:
      • mount the same network share (e.g. on a NAS or SAN) on all analysis-node machines
      • share /opt/cast/shared from one analysis-node machine and then mount this on all other analysis-node machines
  • All other deployment modes: there is nothing further to change. All relevant files will be available in this path.
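
One possible way to satisfy the shared-folder requirement above is an NFS export mounted at /opt/cast/shared on every analysis-node machine; the export host and path below are hypothetical:

# Mount a (hypothetical) NFS export on each analysis-node machine
sudo mount -t nfs nas.example.com:/exports/cast-shared /opt/cast/shared

# Optionally make the mount persistent across reboots
echo "nas.example.com:/exports/cast-shared /opt/cast/shared nfs defaults 0 0" | sudo tee -a /etc/fstab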

Open firewall ports

  • Distributed mode, multiple machines / Read-only mode, multiple machines - ensure that all ports listed in Hardware requirements are opened inbound on the relevant machines so that:
    • your users will be able to access all CAST Imaging resources in their browser
    • CAST Imaging components can communicate correctly with each other
  • Single machine mode, all components / Read-only mode, single machine - only port 8090 (TCP) should be opened inbound if you or your users need to access CAST Imaging from another machine on the network.
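
For example, on a distribution running firewalld, port 8090 (TCP) could be opened as follows; adapt the port list for distributed deployments according to Hardware requirements, or use the ufw/iptables equivalents on other distributions:

# Open port 8090/tcp inbound and reload the firewall rules
sudo firewall-cmd --permanent --add-port=8090/tcp
sudo firewall-cmd --reload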

Step 4 - Make files executable

Make the installation script (cast-imaging-install.sh) executable on each machine where you’ll run it:

chmod +x cast-imaging-install.sh

If installing the imaging-viewer component (via all or imaging-viewer commands), also make the imagingsetup file (located in the cast-imaging-viewer folder at the root of the unzipped installation media) executable:

chmod +x cast-imaging-viewer/imagingsetup

Step 5 - Run the installation

Scenario 1 - Single machine mode, all components

Run the following command:

./cast-imaging-install.sh all

Verify the installation by checking Docker containers (you should see 13 containers):

docker ps

Scenario 2 - Distributed mode, multiple machines

On each machine:

  • download and unzip the installation media
  • ensure the configuration.conf file is correctly configured for the component(s) you want to install
  • ensure the cast-imaging-install.sh and imagingsetup files are executable, as appropriate
  • run the appropriate installation command on each machine for the component you would like to install, ensuring that the imaging-services component is always installed first
./cast-imaging-install.sh imaging-services
./cast-imaging-install.sh imaging-viewer
./cast-imaging-install.sh analysis-node
./cast-imaging-install.sh dashboards

Scenario 3 - Read-only modes

The installation process is similar to that described in Scenario 2 - Distributed mode (multiple machines):

  • download and unzip the installation media on each machine on which you want to install a component (you can use two dedicated machines or one single machine)
  • ensure the relevant configuration.conf file on each machine is correctly configured for the component(s) you want to install
  • if you are installing the “CAST Dashboards” read-only mode, modify the cast-imaging-services/.env file and set the DASHBOARD_STANDALONE_ENABLED variable to true (see the example after the commands below)
  • if you are installing the “viewer” read-only mode, ensure the cast-imaging-install.sh and imagingsetup file are executable
  • run the following commands ensuring that the imaging-services component is always installed first:

For CAST Dashboards:

./cast-imaging-install.sh imaging-services
./cast-imaging-install.sh dashboards

For “viewer”:

./cast-imaging-install.sh imaging-services
./cast-imaging-install.sh imaging-viewer
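
If you chose the “CAST Dashboards” read-only mode, the .env change mentioned above could be applied as in this sketch, which assumes the DASHBOARD_STANDALONE_ENABLED line already exists in cast-imaging-services/.env:

# Enable standalone dashboards mode before running the installation commands
sed -i 's/^DASHBOARD_STANDALONE_ENABLED=.*/DASHBOARD_STANDALONE_ENABLED=true/' cast-imaging-services/.env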

Step 6 - Post installation tasks

Verify containers

Verify the installation by checking Docker containers on each machine:

docker ps

Expected container counts:

Command             No. of containers
imaging-services    6
imaging-viewer      5
analysis-node       1
dashboards          1
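
As a quick cross-check against the counts above, the running container names can be listed and counted on each machine:

# List running container names and count them
docker ps --format '{{.Names}}'
docker ps --format '{{.Names}}' | wc -l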

Set permissions on persistent volumes

For each analysis-node you have installed (including via the all command) you must execute the following command to ensure that the root user has access to the persistent volumes shared with the analysis-node container:

chown -R 0:0 <root_data_folder>

Where:

  • <root_data_folder> points to the root of the “shared” data folder. By default this is set to /opt/cast/shared.
  • 0:0 is equivalent to the “root” user.

Step 7 - Initial start up configuration

  • When the install is complete, browse to the URL:
http://IMAGING_SERVICES_HOSTNAME:8090
  • Login using the default local admin/admin credentials
  • Configure the Licensing strategy. Choose either a Named Application strategy (where each application you onboard requires a dedicated license key entered when you perform the onboarding - not available for a “read-only” deployment), or a Contributing Developers strategy (a global license key based on the number of users).
  • Verify component availability via the following URL and ensure that you see at least one analysis-node, dashboards and imaging-viewer:
http://IMAGING_SERVICES_HOSTNAME:8090/admin/services

Step 8 - Configure authentication

By default, CAST Imaging is configured to use Local Authentication via a simple username/password system. Default login credentials (admin/admin) are provided with the global ADMIN profile so that the installation can be set up initially.

CAST recommends configuring CAST Imaging to use your on-premises enterprise authentication system, such as LDAP or SAML Single Sign-on, before you start to onboard applications. See Authentication for more information.

What is installed?

Containers

The following Docker containers will be created and are set to start automatically:

Command Container (Port) / Base OS
imaging-services
  • postgres (2285) - Debian GNU/Linux 12 (bookworm)
  • gateway (8090) - Alpine Linux 3.21
  • console (8091) - Alpine Linux v3.21
  • auth-service (8092) - Alpine Linux v3.21
  • sso-service (8096) - Red Hat Enterprise Linux 9.4 (Plow)
  • control-panel (2381, 8098) - Alpine Linux v3.21
imaging-viewer
  • neo4j (7473, 7474, 7687) - Debian GNU/Linux 13 (trixie)
  • etl (9001) - Debian GNU/Linux 13 (debian:trixie)
  • open_ai_manager (8082) - Alpine Linux 3.18 (python:3.10-alpine)
  • server (9000, 8084, 8083) - Alpine Linux 3.20 (golang:1.24.1-alpine3.20)
  • imaging-apis (8070) - Debian GNU/Linux 13 (debian:trixie)
analysis-node
  • analysis-node (8089) - Rocky Linux 8.10 (Green Obsidian)
dashboards
  • dashboards (8097) - Ubuntu 24.04.1 LTS (Noble Numbat)
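
To cross-check the published ports on a given machine against this list, the standard docker ps formatting options can be used:

# Show container names together with their published ports
docker ps --format 'table {{.Names}}\t{{.Ports}}'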

Data storage

All analysis data will be stored in the path you defined in the configuration.conf file for the INSTALL_DIR variable (/opt/cast by default).

In addition, database data for the PostgreSQL instance provided with the imaging-services component is stored in a Docker volume called v3-db-data - this can be viewed using the docker volume ls command. Note that if you re-install the imaging-services component and this volume already exists, it will be reused; any existing applications may therefore still be visible in CAST Imaging.
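
For reference, the volume can be listed and its location on disk identified with standard Docker commands:

# Confirm the volume exists and find where Docker stores its data
docker volume ls
docker volume inspect --format '{{ .Mountpoint }}' v3-db-data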

Troubleshooting

If a container fails to start, check the logs:

docker logs <container_name>
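
For example, to identify containers that are not running and follow the most recent log output of one of them:

# List containers that have exited
docker ps -a --filter "status=exited"

# Follow the last 100 lines of a container's logs
docker logs --tail 100 -f <container_name>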

Uninstall process

Complete Docker clean-up

To perform a full clean-up:

  • run the following Docker commands
  • delete the installer ZIP file
  • delete the unzipped installation files/folder
  • delete all files/folders in /opt/cast/ (default install path) - this will also delete all analysis data.
# Stop all containers
docker stop $(docker ps -a -q)

# Remove all containers
docker rm $(docker ps -a -q)

# Remove all images
docker rmi $(docker images -a -q)

# Remove all build cache
docker builder prune -a

# Remove unused networks
docker network prune

# Check remaining data
docker system df -v

# Remove data volumes for analyses and the database instance (CAUTION: cannot be reversed)
docker volume prune -a

Selective CAST Imaging clean-up

To remove only CAST Imaging related items:

  • run the following Docker commands
  • delete the installer ZIP file
  • delete the unzipped installation files/folder
  • delete all files/folders in /opt/cast/ (default install path) - this will also delete all analysis data.
# Stop imaging-services containers
cd /opt/cast/installation/imaging-services
docker compose down

# Stop analysis-node container
cd /opt/cast/installation/imaging-node
docker compose down

# Stop imaging-viewer containers
cd /opt/cast/installation/imaging-viewer
docker compose down

# Stop dashboards containers
cd /opt/cast/installation/imaging-dashboards
docker compose down

# Remove images - set servicename to the name of the image you want to remove before running
docker images | grep "$servicename" | awk '{print $3}' | xargs -I {} docker rmi -f {}
docker image prune
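
As a more targeted sketch, all images pulled by the installer live under the castimaging/ namespace on Docker Hub (see the image table earlier in this guide), so a reference filter can list and remove them in one pass; review the list before removing anything:

# List all images from the castimaging/ repositories
docker images --filter=reference='castimaging/*'

# Remove them (destructive - make sure the containers have been stopped first)
docker images --filter=reference='castimaging/*' -q | xargs -r docker rmi -f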