Installation on Microsoft Azure via AKS


Overview

This guide covers the installation of CAST Imaging on Microsoft Azure via Azure Kubernetes Service (AKS) using Helm charts.

Requirements

  • Access to Docker Hub registry - CAST Imaging Docker images are available as listed in the table below
  • A clone of the latest release branch from the Git repository containing the Helm chart scripts: git clone https://github.com/CAST-Extend/com.castsoftware.castimaging-v3.kubernetessetup (to clone an older release, add the “-b x.x.x” flag with the desired release number).
  • A valid CAST Imaging License
  • Optional setup choices:
    • Deploy the Kubernetes Dashboard (https://github.com/kubernetes/dashboard) to troubleshoot containers and manage the cluster resources.
    • Set up Azure Files for a multi analysis-node deployment (Azure Disks, i.e. block storage, is used by default)
    • Use an external PostgreSQL instance (a PostgreSQL instance is provided as a Docker image and will be used by default)

Docker images

CAST Imaging is provided in a set of Docker images as follows:

CAST Imaging component | Image name     | URL
imaging-services       | Gateway        | https://hub.docker.com/r/castimaging/gateway
imaging-services       | Control Panel  | https://hub.docker.com/r/castimaging/admin-center
imaging-services       | SSO Service    | https://hub.docker.com/r/castimaging/sso-service
imaging-services       | Auth Service   | https://hub.docker.com/r/castimaging/auth-service
imaging-services       | Console        | https://hub.docker.com/r/castimaging/console
dashboards             | Dashboards     | https://hub.docker.com/r/castimaging/dashboards-v3
analysis-node          | Analysis Node  | https://hub.docker.com/r/castimaging/analysis-node
imaging-viewer         | ETL            | https://hub.docker.com/r/castimaging/etl-service
imaging-viewer         | AI Service     | https://hub.docker.com/r/castimaging/ai-service
imaging-viewer         | API Service    | https://hub.docker.com/r/castimaging/imaging-apis
imaging-viewer         | Viewer Server  | https://hub.docker.com/r/castimaging/viewer
imaging-viewer         | Neo4j          | https://hub.docker.com/r/castimaging/neo4j
imaging-viewer         | MCP Server     | https://hub.docker.com/r/castimaging/imaging-mcp-server
extend-local-server    | Extend Proxy   | https://hub.docker.com/r/castimaging/extend-proxy
utilities              | Init Container | https://hub.docker.com/r/castimaging/init-util

Installation process

Before starting the installation, ensure that your Kubernetes cluster is running, that all the CAST Imaging Docker images are available in the registry, and that helm and kubectl are installed on your system.
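
For example, a quick pre-flight check from your workstation (a minimal sketch using standard kubectl and helm commands):

# Cluster reachable and nodes Ready
kubectl get nodes
# Client tooling installed
kubectl version --client
helm version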

Step 1 - AKS environment setup

Create your AKS environment, see AKS - Cluster Setup.
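
For reference, a minimal AKS cluster can be created with the Azure CLI along the following lines; the resource group, cluster name, region, node count and VM size are placeholders to adapt, and AKS - Cluster Setup remains the reference for the sizing recommended for CAST Imaging:

# Create a resource group and a basic AKS cluster (placeholder names and sizes)
az group create --name castimaging-rg --location westeurope
az aks create --resource-group castimaging-rg --name castimaging-aks --node-count 3 --node-vm-size Standard_D8s_v5 --generate-ssh-keys
# Merge the cluster credentials into your kubeconfig so kubectl and helm target this cluster
az aks get-credentials --resource-group castimaging-rg --name castimaging-aks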

CAST Imaging also requires:

Step 2 - Prepare and run the CAST Imaging installation

  • Review and adjust the parameter values in the values.yaml file (located at the root of the cloned Git repository branch), within the sections delimited by # marks.
  • Ensure you set the K8SProvider: option to AKS
  • When using a custom CA or self-signed SSL certificate, copy its contents into the relevant section of the console-authenticationservice-configmap.yaml file (located at the root of the cloned Git repository branch), then set the UseCustomTrustStore: option to true in the values.yaml file
  • Run helm-install.bat|sh (depending on your base OS) located at the root of the cloned Git repository branch
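
On a Linux or macOS host, the full sequence typically looks like the following sketch (the working directory simply matches the cloned repository name):

# Clone the Helm chart repository (see Requirements)
git clone https://github.com/CAST-Extend/com.castsoftware.castimaging-v3.kubernetessetup
cd com.castsoftware.castimaging-v3.kubernetessetup
# Edit values.yaml (set K8SProvider: AKS and review the other parameters), then:
./helm-install.sh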

Step 3 - Configure network settings for console-gateway (main entrypoint), mcp-server (optional) and extendproxy (optional) services

To access these three services from outside the cluster, you will need to set up a reverse proxy such as an Ingress service.

If you want to use an Azure Application Gateway

Instructions can be found in the file Azure-ApplicationGateway-for-CastImaging.pdf (located at the root of the cloned Git repository branch).

You will also need to create a rewrite rule that modifies the Set-Cookie headers returned by your backend (an equivalent Azure CLI sketch is provided after the portal steps). Here’s the approach:

Via Azure Portal:

  • Navigate to your Application Gateway → Rewrites
  • Create a new rewrite rule set
  • Add a rewrite rule with the following configuration:
  • Rule name: cookie-flags-rewrite
  • Rule sequence: 100
  • Condition:
    • Variable: http_resp_Set-Cookie
    • Pattern to match: .*
    • Case-insensitive: Yes
  • Action:
    • Rewrite type: Response header
    • Action type: Set
    • Header name: Set-Cookie
    • Header value: {http_resp_Set-Cookie}; Secure; SameSite=None
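
If you prefer the Azure CLI, the same rule can be sketched as follows. This is an assumption-based sketch using the az network application-gateway rewrite-rule command group; the gateway name, resource group and rule set name are placeholders, and the exact syntax should be verified against your CLI version:

# Create a rewrite rule set on the Application Gateway (placeholder names)
az network application-gateway rewrite-rule set create --resource-group my-rg --gateway-name my-appgw --name cookie-rewrites
# Add the rule that appends Secure; SameSite=None to the Set-Cookie header
az network application-gateway rewrite-rule create --resource-group my-rg --gateway-name my-appgw --rule-set-name cookie-rewrites --name cookie-flags-rewrite --sequence 100 --response-headers "Set-Cookie={http_resp_Set-Cookie}; Secure; SameSite=None"
# Add the condition matching any Set-Cookie response header
az network application-gateway rewrite-rule condition create --resource-group my-rg --gateway-name my-appgw --rule-set-name cookie-rewrites --rule-name cookie-flags-rewrite --variable http_resp_Set-Cookie --pattern ".*" --ignore-case true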

If you want to use a Kubernetes NGINX Ingress

  • Set CreateIngress: true in values.yaml:
CreateIngress: true
  • Install the Ingress driver on the cluster:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update        
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
  • Alternate Ingress driver installation - Internal IP: append this option to the command if you want the Ingress to use internal IP addresses rather than public ones (the Ingress will not be reachable from the internet)
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"="true"
  • Create TLS Secret(s) using the certificate files associated with the DNS name(s) you are planning to use:
kubectl create secret tls tls-secret-cast --cert=my-cert-folder\myhostname.com\fullchain.pem --key=my-cert-folder\myhostname.com\privkey.pem -n castimaging-v3
kubectl create secret tls tls-secret-cast-extend --cert=my-cert-folder\myextendhostname.com\fullchain.pem --key=my-cert-folder\myextendhostname.com\privkey.pem -n castimaging-v3
kubectl create secret tls tls-secret-cast-mcp --cert=my-cert-folder\mymcphostname.com\fullchain.pem --key=my-cert-folder\mymcphostname.com\privkey.pem -n castimaging-v3
# (fullchain.pem <=> tls.crt ; privkey.pem <=> tls.key)

If you want to use the same hostname for all three services, create the three secrets using the same certificate files.
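
To confirm that the secrets exist in the CAST Imaging namespace:

kubectl get secret -n castimaging-v3 | grep tls-secret-cast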

If you want to use an Istio Ingress Gateway

  • Set CreateIstioGateway: true in values.yaml:
CreateIstioGateway: true
  • Install Istio on the cluster (Linux/Mac):
curl -L https://istio.io/downloadIstio | sh -
cd istio-*
export PATH=$PWD/bin:$PATH
istioctl install --set profile=default -y
  • Install Istio on the cluster (Windows - PowerShell using Chocolatey):
choco install istioctl
istioctl install --set profile=default -y
  • Install Istio on the cluster (Windows - PowerShell using manual download and install of a specific version)
$ISTIO_VERSION="1.28.0"  # Check https://github.com/istio/istio/releases for latest
Invoke-WebRequest -Uri "https://github.com/istio/istio/releases/download/$ISTIO_VERSION/istio-$ISTIO_VERSION-win.zip" -OutFile "istio.zip"
Expand-Archive -Path "istio.zip" -DestinationPath "." -Force
cd "istio-$ISTIO_VERSION"
$env:PATH = "$PWD\bin;$env:PATH"
istioctl install --set profile=default -y
  • Istio with internal IP: use this alternate “istioctl install” command if you want the Istio Ingress Gateway to allocate an internal IP address (the Istio Ingress Gateway will not be reachable from the internet)
    • Create this istio-config.yaml file in the Istio install folder and apply it:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  installPackagePath: ./manifests
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        serviceAnnotations:
          service.beta.kubernetes.io/azure-load-balancer-internal: "true"
# Apply the file:
#   - If Istio is not yet installed on your cluster:
istioctl install -f istio-config.yaml -y
#   - If Istio is already installed on your cluster:
istioctl upgrade -f istio-config.yaml -y

Important note: istio-injection should not be enabled on the CAST Imaging namespace.
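
To verify that the label is absent on the CAST Imaging namespace (assuming the default castimaging-v3 namespace), and to remove it if present:

kubectl get namespace castimaging-v3 --show-labels
# Remove the label if it has been set:
kubectl label namespace castimaging-v3 istio-injection-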

  • Create TLS Secret(s) using the certificate files associated with the DNS name(s) you are planning to use (to be created in the istio-system namespace):
kubectl create secret tls tls-secret-cast --cert=my-cert-folder\myhostname.com\fullchain.pem --key=my-cert-folder\myhostname.com\privkey.pem -n istio-system
kubectl create secret tls tls-secret-cast-extend --cert=my-cert-folder\myextendhostname.com\fullchain.pem --key=my-cert-folder\myextendhostname.com\privkey.pem -n istio-system
kubectl create secret tls tls-secret-cast-mcp --cert=my-cert-folder\mymcphostname.com\fullchain.pem --key=my-cert-folder\mymcphostname.com\privkey.pem -n istio-system
# (fullchain.pem <=> tls.crt ; privkey.pem <=> tls.key)

If you want to use the same hostname for all three services, create the three secrets using the same certificate files.

Optional - When Istio or NGINX Ingress is implemented to access the console-gateway service with a certificate that cannot be verified (e.g., a self-signed certificate or a certificate issued by an internal CA), the certificate needs to be stored in the CAST auth-service to avoid certificate validation errors:

  • Set UseCustomTrustStore: true in values.yaml
  • Insert the encoded certificate:
    • directly inside the auth.caCertificate variable in values.yaml
    • or using helm upgrade ... --set-file auth.caCertificate=ca.crt ... to override the variable value with the ca.crt file content (see the example after the snippet below)
UseCustomTrustStore: true
auth:
  caCertificate: |
    -----BEGIN CERTIFICATE-----
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    -----END CERTIFICATE-----
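
For the second option, a hypothetical helm upgrade invocation is shown below; the release name (castimaging) and chart path (the current directory) are assumptions, since the provided helm-upgrade.bat|sh script normally drives the upgrade:

# Hypothetical example: adapt the release name and chart path to your deployment
helm upgrade castimaging . -n castimaging-v3 -f values.yaml --set-file auth.caCertificate=ca.crt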

Final steps (Istio or NGINX Ingress):

  • Set hostnames in values.yaml - Option A: use the same hostname for all three services
FrontEndHost: https://myhostname.com
ExtendProxy:
  enable: true
  exthostname: myhostname.com
McpServer:
  enable: true
  exthostname: myhostname.com

Exposed URLs will be:
https://myhostname.com (or https://myhostname.com/mycontext if ContextUrl is enabled)
https://myhostname.com/mcp
https://myhostname.com/extendproxy

  • Set hostnames in values.yaml - Option B: use a different hostname for each service
FrontEndHost: https://myhostname.com
ExtendProxy:
  enable: true
  exthostname: myextendhostname.com
McpServer:
  enable: true
  exthostname: mymcphostname.com

Exposed URLs will be:
https://myhostname.com (or https://myhostname.com/mycontext if ContextUrl is enabled)
https://mymcphostname.com/mcp
https://myextendhostname.com/extendproxy

  • Apply the helm chart changes by running helm-upgrade.bat|sh (depending on your base OS) located at the root of the cloned Git repository branch.
  • Create a DNS record pointing at the reverse proxy external IP address.
    To retrieve the external IP:
# For an NGINX Ingress, use this command:
kubectl get ingress -n castimaging-v3
#
# For an Istio Ingress, use this command:
kubectl get service istio-ingressgateway -n istio-system

Step 4 - Install Extend Local Server (optional)

If you need to install Extend Local Server as an intermediary placed between CAST Imaging and CAST’s publicly available “Extend” ecosystem (https://extend.castsoftware.com), follow the instructions below. This step is optional; if it is not completed, CAST Imaging will access https://extend.castsoftware.com directly to obtain required resources.

  • Retrieve the Extend Local Server external IP address by running kubectl get service -n castimaging-v3 extendproxy
  • In values.yaml (located at the root of the cloned Git repository branch), set ExtendProxy.enable to true and update the ExtendProxy.exthostname variable:
ExtendProxy:
    enable: true
    exthostname:  myextendhost.com
  • Run helm-upgrade.bat|sh (depending on your base OS) located at the root of the cloned Git repository branch.
  • Review the log of the extendproxy pod to find the Extend Local Server administration URL and API key (these are required for managing Extend Local Server and configuring CAST Imaging to use it - you can find out more about this in Extend Local Server). You can open the log file from the Kubernetes Dashboard (if you have chosen to install it). Alternatively, you can get the extendproxy pod name by running kubectl get pods -n castimaging-v3 then run kubectl logs -n castimaging-v3 castextend-xxxxxxxx to display the log.
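
For example, from a shell:

# Find the Extend Local Server pod name, then display its log
kubectl get pods -n castimaging-v3
kubectl logs -n castimaging-v3 castextend-xxxxxxxx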

Step 5 - Initial start up configuration

When the installation is complete, browse to the public/external URL and log in using the default local admin/admin credentials. You will be prompted to configure:

  • your licensing strategy. Choose either a Named Application strategy (where each application you onboard requires a dedicated license key entered when you perform the onboarding), or a Contributing Developers strategy (a global license key based on the number of users):

License key

  • CAST Extend settings / Proxy settings: if you chose to install Extend Local Server (see Step 4 above), you now need to input its URL and API key so that CAST Imaging uses it.

CAST Extend settings

As a final check, browse to the URL below and ensure that you have at least one CAST Imaging Node Service, the CAST Dashboards and the CAST Imaging Viewer components listed:

https://<public or external URL>/admin/services

Services

Step 6 - Configure authentication

Out-of-the-box, CAST Imaging is configured to use Local Authentication via a simple username/password system. Default login credentials (admin/admin) are provided with the global ADMIN profile so that the installation can be set up initially.

CAST recommends configuring CAST Imaging to use your enterprise authentication system such as LDAP or SAML Single Sign-on instead before you start to onboard applications. See Authentication for more information.

How to start and stop CAST Imaging

Use the following script files (located at the root of the cloned Git repository branch) to stop and start CAST Imaging:

  • Util-ScaleDownAll.bat|sh
  • Util-ScaleUpAll.bat|sh

Optional setup choices

Install Kubernetes Dashboard

Please refer to the Kubernetes Dashboard documentation at https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/.

Setup Azure Files for multiple analysis-node(s)

All pods will use Azure Disks (block storage) by default. For the console-analysis-node StatefulSet, it is possible to configure Azure Files (based on the file.csi.azure.com driver) to enable file sharing between analysis nodes when multiple analysis nodes are required.

Prior to running the initial CAST Imaging installation (detailed above), follow these steps:

  • Set AnalysisNodeFS.enable to true in the values.yaml located at the root of the cloned Git repository branch (see the snippet after this list)
  • Proceed with the CAST Imaging installation described above
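
A minimal values.yaml fragment for this setting, assuming the same nested layout as the other options shown in this guide:

AnalysisNodeFS:
  enable: true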

Use an external PostgreSQL instance

If you do not want to use the PostgreSQL instance preconfigured in this Helm chart, you can disable it and configure an Azure Database for PostgreSQL instead.

  • Set up your Azure Database for PostgreSQL (PostgreSQL 15; 8 GB RAM minimum recommended, e.g. B2ms)
  • Connect to the database with a superuser and execute this script to create the necessary CAST custom users/database:
CREATE USER operator WITH SUPERUSER PASSWORD 'CastAIP';
GRANT azure_pg_admin TO operator;
CREATE USER guest WITH PASSWORD 'WelcomeToAIP';
GRANT ALL PRIVILEGES ON DATABASE postgres TO operator;
CREATE USER keycloak WITH PASSWORD 'keycloak';
CREATE DATABASE keycloak;
GRANT ALL PRIVILEGES ON DATABASE keycloak TO keycloak;
  • In the values.yaml located at the root of the cloned Git repository branch (see the snippet after this list):
    • Set CastStorageService.enable to false (to disable the PostgreSQL instance server preconfigured by CAST)
    • Set CustomPostgres.enable to true
    • Set the CustomPostgres.host and CustomPostgres.port to match your custom instance host name and port number
  • Proceed with the CAST Imaging installation described above
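
For reference, a minimal values.yaml fragment for these settings; the nested layout is assumed to mirror the other options in this guide, and the host value is a placeholder:

CastStorageService:
  enable: false
CustomPostgres:
  enable: true
  host: mypostgres.postgres.database.azure.com
  port: 5432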