Installation on Google Cloud Platform via GKE
Overview
This guide covers the installation of CAST Imaging on Google Kubernetes Engine (GKE) on Google Cloud Platform using Helm charts.
Requirements
- Access to the Docker Hub registry - CAST Imaging Docker images are available as listed in the table below
- A clone of the appropriate branch of the Git repository https://github.com/CAST-Extend/com.castsoftware.castimaging-v3.kubernetessetup (i.e. the branch matching the version of CAST Imaging you want to deploy) containing the Helm chart scripts. For example, to clone the 3.2.3-funcrel release branch, use:
git clone -b 3.2.3 https://github.com/CAST-Extend/com.castsoftware.castimaging-v3.kubernetessetup
- A valid CAST Imaging license
- Optional setup choices:
  - Deploy the Kubernetes Dashboard (https://github.com/kubernetes/dashboard) to troubleshoot containers and manage cluster resources.
Docker images
CAST Imaging is provided in a set of Docker images as follows:
CAST Imaging component | Image name | URL
---|---|---
imaging-services | Gateway | https://hub.docker.com/r/castimaging/gateway
imaging-services | Control Panel | https://hub.docker.com/r/castimaging/admin-center
imaging-services | SSO Service | https://hub.docker.com/r/castimaging/sso-service
imaging-services | Auth Service | https://hub.docker.com/r/castimaging/auth-service
imaging-services | Console | https://hub.docker.com/r/castimaging/console
dashboards | Dashboards | https://hub.docker.com/r/castimaging/dashboards
analysis-node | Analysis Node | https://hub.docker.com/r/castimaging/analysis-node
imaging-viewer | ETL | https://hub.docker.com/r/castimaging/etl-service
imaging-viewer | AI Service | https://hub.docker.com/r/castimaging/ai-service
imaging-viewer | Viewer Server | https://hub.docker.com/r/castimaging/viewer
imaging-viewer | Neo4j | https://hub.docker.com/r/castimaging/neo4j
extend-local-server | Extend Proxy | https://hub.docker.com/r/castimaging/extend-proxy
utilities | Init Container | https://hub.docker.com/r/castimaging/init-util
Installation process
Before starting the installation, ensure that your Kubernetes cluster is running, that all the CAST Imaging Docker images are available in the registry, and that helm and kubectl are installed on your system.
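A quick way to confirm the tooling prerequisites is a small shell check; the list of tool names below is an assumption based on the commands used throughout this guide:

```shell
# Report whether each CLI tool required by this guide is on the PATH.
check_tools() {
  for tool in gcloud kubectl helm; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool: found"
    else
      echo "$tool: MISSING"
    fi
  done
}

check_tools
```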
Step 1 - GKE environment setup
- Create your GKE environment: see GKE - Cluster Setup, and also refer to the Google documentation
- Retrieve the cluster credentials using the gcloud CLI:
gcloud container clusters get-credentials my-cluster --zone=my-zone
- Install kubectl - see https://kubernetes.io/docs/tasks/tools/
- Install helm:
  - Binary download: https://github.com/helm/helm/releases
  - Documentation: https://helm.sh/docs/intro/quickstart/
Step 2 - GCP Network settings and CAST Imaging installation
Prerequisites:
- A domain name, e.g. mydomain.com, with a DNS record pointing at the static IP created in the steps below
- An SSL certificate for mydomain.com
Steps:
- Create a global static external IP address in GCP
- Store the SSL certificate for mydomain.com in GCP “Certificate Manager”
- Update console-gatewayservice-ingress.yaml with the “Managed Certificate name” and “Static IP name”: edit the file and make a global string replacement of my-certificate and my-frontend-ip with the actual names
- Review and adjust the parameter values in the values.yaml file (located at the root of the cloned Git repository branch) between the sections separated with # marks:
  - K8SProvider: GKE
  - FrontEndHost: https://mydomain.com
- When using a custom CA or self-signed SSL certificate, copy the contents into the relevant section of the file console-authenticationservice-configmap.yaml (located at the root of the cloned Git repository branch) and then set the UseCustomTrustStore: option to true in values.yaml
- Install the Helm chart: run helm-install.bat|sh (depending on your base OS) located at the root of the cloned Git repository branch. This will create the k8s Ingress as well as a Load Balancer and a Health Check within GCP
- Update the Health Check in GCP: locate and edit the Health Check that was automatically created for the console-gateway-service (its name will be similar to k8s1-abc123456-castimaging-v-console-gateway-servic-809-abc123abc), set the “Request path” to /actuator/health, then press “Save”
- Wait until the GCP network components are deployed (this may take up to 20 minutes)
- CAST Imaging will then be available at https://mydomain.com
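The GCP-side resources above can also be created with the gcloud CLI. A minimal sketch, assuming the placeholder names my-frontend-ip and my-certificate (matching the defaults in console-gatewayservice-ingress.yaml) and assumed certificate file names cert.pem/key.pem:

```shell
# Placeholder resource names -- must match the names used in
# console-gatewayservice-ingress.yaml (assumptions, replace with yours).
IP_NAME=my-frontend-ip
CERT_NAME=my-certificate

# Create the global static external IP used by the Ingress.
gcloud compute addresses create "$IP_NAME" --global

# Print the allocated address (use it for the DNS record of mydomain.com).
gcloud compute addresses describe "$IP_NAME" --global --format='value(address)'

# Upload the SSL certificate to Certificate Manager
# (cert.pem / key.pem are assumed file names).
gcloud certificate-manager certificates create "$CERT_NAME" \
  --certificate-file=cert.pem --private-key-file=key.pem
```

Creating these via the GCP Console works equally well; the CLI form is simply easier to repeat across environments.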
Step 3 - Install Extend Local Server (optional)
If you need to install Extend Local Server as an intermediary between CAST Imaging and CAST’s publicly available “Extend” ecosystem (https://extend.castsoftware.com), follow the instructions below. This step is optional: if it is not completed, CAST Imaging will access https://extend.castsoftware.com directly to obtain the required resources.
- Retrieve the Extend Local Server external IP address by running:
kubectl get service -n castimaging-v3 extendproxy
- In values.yaml (located at the root of the cloned Git repository branch), set ExtendProxy.enable to true and update the ExtendProxy.exthostname variable with the external IP address:
ExtendProxy:
  enable: true
  exthostname: EXTERNAL-IP
- Run helm-upgrade.bat|sh (depending on your base OS) located at the root of the cloned Git repository branch.
- Review the log of the extendproxy pod to find the Extend Local Server administration URL and API key (these are required for managing Extend Local Server and configuring CAST Imaging to use it - you can find out more in Extend Local Server). You can open the log from the Kubernetes Dashboard (if you chose to install it). Alternatively, get the extendproxy pod name by running kubectl get pods -n castimaging-v3, then run kubectl logs -n castimaging-v3 castextend-xxxxxxxx to display the log.
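The two kubectl commands for reading the log can be combined into a short sketch that resolves the pod name automatically; it assumes the pod name starts with castextend, as in the example above:

```shell
NS=castimaging-v3
# Resolve the extendproxy pod name (assumes the castextend- prefix).
POD=$(kubectl get pods -n "$NS" -o name | grep castextend | head -n 1)
# Print its log; the administration URL and API key appear in this output.
kubectl logs -n "$NS" "${POD#pod/}"
```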
Step 4 - Initial start up configuration
When the installation is complete, browse to the public/external URL and log in using the default local admin/admin credentials. You will be prompted to configure:
- Your licensing strategy: choose either a Named Application strategy (where each application you onboard requires a dedicated license key entered when you perform the onboarding) or a Contributing Developers strategy (a global license key based on the number of users)
- CAST Extend settings / proxy settings: if you chose to install Extend Local Server (see Step 3 above), you now need to input the URL and API key so that CAST Imaging uses it
As a final check, browse to the URL below and ensure that you have at least one CAST Imaging Node Service, the CAST Dashboards and the CAST Imaging Viewer components listed:
https://<public or external URL>/admin/services
Step 5 - Configure authentication
Out of the box, CAST Imaging is configured to use local authentication via a simple username/password system. Default login credentials (admin/admin) are provided with the global ADMIN profile so that the installation can be set up initially.
CAST recommends configuring CAST Imaging to use your enterprise authentication system, such as LDAP or SAML single sign-on, before you start to onboard applications. See Authentication for more information.
How to start and stop CAST Imaging
Use the following script files (located at the root of the cloned Git repository branch) to stop and start CAST Imaging:
- Util-ScaleDownAll.bat|sh (stop)
- Util-ScaleUpAll.bat|sh (start)
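Assuming the scripts simply scale every deployment in the castimaging-v3 namespace between zero and one replica (an assumption - check the script contents, and note that scaling back up this way would reset any custom replica counts), their effect can be approximated with kubectl directly:

```shell
NS=castimaging-v3
# Stop CAST Imaging: scale all deployments down to zero replicas.
kubectl scale deployment --all --replicas=0 -n "$NS"
# Start CAST Imaging: scale them back up to one replica each
# (assumes every deployment normally runs a single replica).
kubectl scale deployment --all --replicas=1 -n "$NS"
```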
Optional setup choices
Install Kubernetes Dashboard
Please refer to the Kubernetes Dashboard documentation at https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/ .