Kubernetes storage location information
Contents
- Overview
- 1. Storage Classes
- 2. Persistent Volume Claims (PVCs)
- 3. StatefulSet Volume Claim Templates
- 4. Per-Container Storage Details
- 5. ConfigMap-Backed File Mounts
- 6. Secret Mounts
- 7. Ephemeral (In-Memory) Volumes
- 8. Deployment Options Affecting Storage
- 9. Downloading Logs with kubectl cp
- 10. Storage Summary Table
Overview
This document describes all storage locations used across the containers in a CAST Imaging Kubernetes deployment: persistent volumes, ConfigMap-backed configuration files, Secrets, and ephemeral (in-memory) volumes.
1. Storage Classes
Two storage class types are used, depending on the feature set enabled.
1.1 Block Storage Class (castimaging-ds)
Used for all standard persistent volumes (databases, logs, archives, etc.).
| Parameter | Value |
|---|---|
| Default name | `castimaging-ds` (configurable via `DiskClassName`) |
| `volumeBindingMode` | `WaitForFirstConsumer` |
| `reclaimPolicy` | `Delete` |
| `allowVolumeExpansion` | `true` |
CSI provisioner and parameters vary by cloud provider:
| Provider (`K8SProvider`) | Provisioner | Key Parameters |
|---|---|---|
| EKS (AWS) | `ebs.csi.aws.com` | `type: gp3`, `fsType: ext4` |
| AKS (Azure) | `disk.csi.azure.com` | `skuName: Premium_LRS`, `kind: Managed` |
| GKE (Google) | `pd.csi.storage.gke.io` | `type: pd-standard` (or `pd-ssd`) |
Storage class creation is controlled by the `CreateStorageClass` value (default: `true`). Set it to `false` if the storage classes already exist in the cluster.
1.2 File Storage Class (castimaging-fs)
Only created and used when shared file storage for the Analysis Node is enabled (AnalysisNodeFS.enable: true). This allows the Analysis Node’s shared data directory to use ReadWriteMany (RWX) access mode, enabling horizontal scaling of analysis nodes.
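For reference, enabling shared file storage on EKS might look like the following values.yaml excerpt. The key names (`AnalysisNodeFS.enable`, `EFSsystemID`) are taken from this document; the exact nesting and the file system ID are illustrative.

```yaml
# values.yaml excerpt (illustrative nesting; placeholder EFS ID)
AnalysisNodeFS:
  enable: true                        # switches pvc-shared-datadir to RWX (castimaging-fs)
  EFSsystemID: fs-0123456789abcdef0   # placeholder: your EFS file system ID (EKS only)
```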
| Provider | Provisioner | Notes |
|---|---|---|
| EKS | `efs.csi.aws.com` | Requires `EFSsystemID`; optionally `EFSaccessPointID` for an explicit PV |
| AKS | `file.csi.azure.com` | SMB protocol; optional `AKSresourceGroup` / `AKSstorageAccount` / `AKSsharedDatadirPV` |
| GKE | `filestore.csi.storage.gke.io` | `tier: standard` or `premium` |
2. Persistent Volume Claims (PVCs)
All PVCs are created in the deployment namespace with the annotation helm.sh/resource-policy: keep, ensuring they are not deleted on helm uninstall.
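As an illustration of the `helm.sh/resource-policy: keep` annotation, a generated PVC might look like the sketch below. It is assembled from the values in this document (name, size, storage class, access mode); the exact labels and metadata produced by the chart may differ.

```yaml
# Sketch of a chart-generated PVC; the keep annotation prevents deletion on helm uninstall
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-extendproxy-data
  namespace: castimaging-v3
  annotations:
    helm.sh/resource-policy: keep   # PVC (and its data) survives `helm uninstall`
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: castimaging-ds
  resources:
    requests:
      storage: 10Gi
```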
| PVC Name | Default Size | Storage Class | Access Mode | Service | Conditional |
|---|---|---|---|---|---|
| `pvc-shared-datadir` | 100Gi | `castimaging-ds` (RWO) or `castimaging-fs` (RWX) | RWO / RWX | console-analysis-node | Always created |
| `db-data` | 128Gi | `castimaging-ds` | RWO | console-postgres | Only if `CastStorageService.enable: true` |
| `pvc-imagingviewer-v3-server-log` | 1Gi | `castimaging-ds` | RWO | viewer-server | Always created |
| `pvc-imagingviewer-v3-etl-logs` | 10Gi | `castimaging-ds` | RWO | viewer-etl | Always created |
| `pvc-imagingviewer-v3-etl-csvarchive` | 10Gi | `castimaging-ds` | RWO | viewer-etl | Always created |
| `pvc-imagingviewer-v3-aimanager-logs` | 2Gi | `castimaging-ds` | RWO | viewer-aimanager | Always created |
| `pvc-imagingviewer-v3-api-logs` | 2Gi | `castimaging-ds` | RWO | viewer-api | Always created |
| `pvc-mcpserver-logs` | 2Gi | `castimaging-ds` | RWO | mcp-server | Only if `McpServer.enable: true` |
| `pvc-extendproxy-data` | 10Gi | `castimaging-ds` | RWO | extendproxy | Only if `ExtendProxy.enable: true` |
All PVC sizes are configurable through values.yaml (e.g. `size_db_data`, `size_shared_datadir`, etc.).
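Overriding a size might look like the following excerpt. The key names `size_db_data` and `size_shared_datadir` are from this document; their top-level placement and the chosen sizes are illustrative.

```yaml
# values.yaml excerpt — PVC size overrides (placement assumed; names per this document)
size_db_data: 256Gi          # grow the PostgreSQL data volume
size_shared_datadir: 200Gi   # grow the shared analysis workspace
```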
3. StatefulSet Volume Claim Templates
These volumes are provisioned automatically per pod replica via volumeClaimTemplates in StatefulSets. They use the castimaging-ds block storage class.
console-analysis-node (StatefulSet)
| Template Name | Default Size | Mount Path | Purpose |
|---|---|---|---|
| `castdir` | 100Gi | `/usr/share/CAST` | CAST installation directory: extensions, LISA data, analysis engine logs |
viewer-neo4j (StatefulSet)
A single `neo4jdata` volume (100Gi) is provisioned per pod. It is sub-divided using `subPathExpr` into three logical directories:

| Sub-Path | Mount Path | Purpose |
|---|---|---|
| `logs` | `/var/lib/neo4j/logs` | Neo4j operational logs |
| `neo4jdata` | `/var/lib/neo4j/config/neo4j5_data` | Neo4j graph database data files |
| `csvarchive` | `/var/lib/neo4j/config/csv/archive` | Archived CSV import files |
4. Per-Container Storage Details
4.1 console-postgres
The PostgreSQL database server used by all Console services and Keycloak.
| Volume | Type | Mount Path | Purpose |
|---|---|---|---|
| `db-data` | PVC (`db-data`, 128Gi) | `/var/lib/postgresql/data` (subPath: `postgres`) | Database data files |
| `pgconf` | ConfigMap (`console-v3-postgresqlconf`) | `/usr/share/postgresql/postgresql.conf.sample` | Custom PostgreSQL configuration |
| `pginit` | ConfigMap (`console-v3-init-db`) | `/docker-entrypoint-initdb.d/init-db.sh` | Database initialization script (users, databases, grants) |
| `dshm` | EmptyDir (Memory, 4G) | `/dev/shm` | Shared memory for PostgreSQL |
| `db-creds` | Secret (`imaging-pwd-sec`) | `/opt/secrets` | Database passwords, read-only |
The `db-data` PVC is only created when `CastStorageService.enable: true`. When using an external PostgreSQL instance (`CustomPostgres.enable: true`), no PVC is created and a separate init script (`init-db-custom-pg.sh`) is executed as an init container to set up the required users and databases on the external server.
4.2 console-analysis-node
The CAST analysis engine. Runs as a StatefulSet.
| Volume | Type | Mount Path | Purpose |
|---|---|---|---|
| `castdir` | VolumeClaimTemplate (100Gi) | `/usr/share/CAST` | CAST engine binaries, extensions, logs, LISA data |
| `shared-datadir` | PVC (`pvc-shared-datadir`, 100Gi) | `/opt/cast/shared` | Shared analysis workspace: delivery, deploy, and common-data folders |
| `dshm` | EmptyDir (Memory, 1G) | `/dev/shm` | Shared memory for analysis processing |
The `pvc-shared-datadir` sub-directories used at runtime:

| Sub-directory | Path | Purpose |
|---|---|---|
| Delivery | `/opt/cast/shared/delivery` | Source code and artifacts delivered for analysis |
| Deploy | `/opt/cast/shared/deploy` | Analysis deployment output |
| Common data | `/opt/cast/shared/common-data` | Shared configuration and data across analysis nodes |
When `AnalysisNodeFS.enable: true`, `pvc-shared-datadir` switches to `ReadWriteMany` (file storage class), allowing multiple Analysis Node replicas to share the same workspace. When disabled (the default), it uses `ReadWriteOnce` block storage and only a single Analysis Node replica is supported.
4.3 viewer-neo4j
The graph database for the Imaging Viewer. Runs as a StatefulSet.
| Volume | Type | Mount Path | Purpose |
|---|---|---|---|
| `neo4jdata` (subPath: `logs`) | VolumeClaimTemplate (100Gi) | `/var/lib/neo4j/logs` | Neo4j logs |
| `neo4jdata` (subPath: `neo4jdata`) | VolumeClaimTemplate (100Gi) | `/var/lib/neo4j/config/neo4j5_data` | Graph database data files |
| `neo4jdata` (subPath: `csvarchive`) | VolumeClaimTemplate (100Gi) | `/var/lib/neo4j/config/csv/archive` | Archived CSV files after import |
All three paths share a single underlying 100Gi PVC via sub-paths.
4.4 viewer-server
The main Imaging Viewer frontend and backend service, fronted by Nginx.
| Volume | Type | Mount Path | Purpose |
|---|---|---|---|
| `log` | PVC (`pvc-imagingviewer-v3-server-log`, 1Gi) | `/opt/imaging/imaging-service/logs` | Application and Nginx access/error logs |
| `servernginxconf` | ConfigMap (`servernginxconf`) | `/opt/imaging/config/nginx/conf/nginx.conf` | Nginx reverse-proxy configuration |
4.5 viewer-etl
The ETL (Extract, Transform, Load) service responsible for importing analysis data into Neo4j.
| Volume | Type | Mount Path | Purpose |
|---|---|---|---|
| `logdir` | PVC (`pvc-imagingviewer-v3-etl-logs`, 10Gi) | `/opt/imaging/imaging-etl/logs` | ETL operational logs |
| `csvarchive` | PVC (`pvc-imagingviewer-v3-etl-csvarchive`, 10Gi) | `/opt/imaging/imaging-etl/upload/archive` | Archived CSV files after they have been processed and loaded |
4.6 viewer-aimanager
The AI/enrichment manager service.
| Volume | Type | Mount Path | Purpose |
|---|---|---|---|
| `logdir` | PVC (`pvc-imagingviewer-v3-aimanager-logs`, 2Gi) | `/opt/imaging/open_ai-manager/logs` | AI Manager operational logs |
4.7 viewer-api
The public REST API service for querying the Imaging graph.
| Volume | Type | Mount Path | Purpose |
|---|---|---|---|
| `logdir` | PVC (`pvc-imagingviewer-v3-api-logs`, 2Gi) | `/opt/imaging/imaging-api/logs` | API service operational logs |
4.8 mcp-server
The Model Context Protocol server. Only deployed when McpServer.enable: true.
| Volume | Type | Mount Path | Purpose |
|---|---|---|---|
| `logdir` | PVC (`pvc-mcpserver-logs`, 2Gi) | `/app/logs` | MCP server operational logs |
| `mcpserverappconfig` | ConfigMap (`mcpserverappconfig`) | `/app/server/config/app.config` | MCP server application configuration |
4.9 extendproxy
The CAST Extend proxy service. Only deployed when ExtendProxy.enable: true.
| Volume | Type | Mount Path | Purpose |
|---|---|---|---|
| `extendproxy` | PVC (`pvc-extendproxy-data`, 10Gi) | `/opt/cast_extend_proxy/data` | Proxy cache, configuration, and runtime data |
4.10 console-control-panel
No persistent volumes are mounted in the main container. Init containers use ConfigMap-backed SQL scripts to update context URLs in the PostgreSQL database on startup (see Section 5).
4.11 console-authentication-service
No persistent volumes are mounted in the main container. When UseCustomTrustStore: true, a ConfigMap (authcacrt) holding the CA certificate is projected into the container (see Section 5).
4.12 console-service, console-gateway-service, console-sso-service, console-dashboards
None of these services mount persistent volumes. They rely entirely on database-backed state and environment variable configuration.
5. ConfigMap-Backed File Mounts
The following ConfigMaps are mounted as files into containers or init containers.
| ConfigMap | Mounted In | Mount Path | Content |
|---|---|---|---|
| `console-v3-postgresqlconf` | console-postgres | `/usr/share/postgresql/postgresql.conf.sample` | Tuned PostgreSQL configuration (memory, WAL, logging, auto_explain) |
| `console-v3-init-db` | console-postgres (init via `docker-entrypoint-initdb.d`) | `/docker-entrypoint-initdb.d/init-db.sh` | Creates operator, guest, keycloak users and the keycloak database on first startup |
| `init-db-custom-pg` | Init container in console-postgres | Executed as a script | Used when `CustomPostgres.enable: true` to initialize users/databases on an external PostgreSQL server |
| `contexturl-controlpanel-update-script` | Init container in console-control-panel | `/home/imaging/sql/contexturl-controlpanel-update-script.sql` | Sets `keycloak.uri` property in `control_panel.properties` |
| `contexturl-keycloak-update-script` | Init container in console-control-panel | `/home/imaging/sql/contexturl-keycloak-update-script.sql` | Removes `frontendUrl` from Keycloak realm attributes |
| `analysisnode-upgrade-script` | Init container in console-analysis-node | `/home/imaging/sql/analysisnode-upgrade-script.sql` | Runs schema upgrade SQL on the postgres database |
| `servernginxconf` | viewer-server | `/opt/imaging/config/nginx/conf/nginx.conf` | Nginx reverse-proxy rules for routing API, ETL, Neo4j, AI, login, SAML, and sourcecode endpoints |
| `mcpserverappconfig` | mcp-server | `/app/server/config/app.config` | MCP server runtime configuration (ports, control panel host, domain, etc.) |
| `authcacrt` | console-authentication-service | (CA trust store injection) | Custom CA certificate; only used when `UseCustomTrustStore: true` |
6. Secret Mounts
The Secret `imaging-pwd-sec` is used both as an environment variable source and as a file-system mount.
| Secret Key | Used By (env var) | Mounted As File In |
|---|---|---|
| `postgres-db-password` | console-postgres | console-postgres → `/opt/secrets/postgres-db-password` |
| `operator-db-password` | console-authentication-service, console-control-panel, console-sso-service | console-postgres → `/opt/secrets/operator-db-password` |
| `guest-db-password` | (init script only) | console-postgres → `/opt/secrets/guest-db-password` |
| `keycloak-db-password` | (init script only) | console-postgres → `/opt/secrets/keycloak-db-password` |
| `keycloak-admin-password` | console-authentication-service, console-sso-service | — |
| `neo4j-password` | viewer-neo4j, viewer-server, viewer-etl, viewer-aimanager, viewer-api | — |
The file-system mount (`/opt/secrets`) in console-postgres is read-only and used by the init script to read passwords at startup.
7. Ephemeral (In-Memory) Volumes
| Volume Name | Container | Type | Size | Mount Path | Purpose |
|---|---|---|---|---|---|
| `dshm` | console-postgres | EmptyDir (Memory) | 4G | `/dev/shm` | PostgreSQL shared memory (required for work_mem, parallel queries) |
| `dshm` | console-analysis-node | EmptyDir (Memory) | 1G | `/dev/shm` | Analysis engine shared memory |
| `updated-config-volume` | viewer-aimanager | EmptyDir | — | (internal) | Temporary config staging during init |
8. Deployment Options Affecting Storage
8.1 Cloud Provider (K8SProvider)
Controls which CSI drivers are used for the castimaging-ds (and optionally castimaging-fs) storage classes. Accepted values: EKS, AKS, GKE.
8.2 Built-in vs. External PostgreSQL
| Option | Behavior |
|---|---|
| `CastStorageService.enable: true` (default) | PostgreSQL is deployed as part of the Helm chart. The `db-data` PVC (128Gi) is created automatically. |
| `CastStorageService.enable: false` + `CustomPostgres.enable: true` | No PostgreSQL deployment and no `db-data` PVC. An init container runs `init-db-custom-pg.sh` against the external host specified by `CustomPostgres.host` / `CustomPostgres.port`. |
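A values.yaml excerpt for the external-PostgreSQL case might look like this. The key names are those referenced in this document; the hostname is a placeholder and the exact nesting is assumed.

```yaml
# values.yaml excerpt — external PostgreSQL (illustrative)
CastStorageService:
  enable: false                     # do not deploy the built-in PostgreSQL (no db-data PVC)
CustomPostgres:
  enable: true
  host: postgres.example.internal   # placeholder: your external PostgreSQL host
  port: 5432
```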
8.3 Analysis Node Shared File Storage (AnalysisNodeFS.enable)
| Option | `pvc-shared-datadir` Access Mode | Storage Class | Supports Multiple Analysis Nodes |
|---|---|---|---|
| `false` (default) | ReadWriteOnce | `castimaging-ds` (block) | No — single node only |
| `true` | ReadWriteMany | `castimaging-fs` (file) | Yes |
When enabling file storage on EKS, the EFSsystemID value is required. Optionally, providing EFSaccessPointID causes an explicit PersistentVolume (pv-shared-datadir) to be created.
On AKS, if auto-provisioning is not available, set AnalysisNodeFS.AKSsharedDatadirPV.create: true and provide secretName and shareName to create an explicit PV backed by an Azure File Share.
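An explicit Azure File Share PV configuration might be sketched as follows. The key names (`create`, `secretName`, `shareName`) come from this document; the values are placeholders and the nesting is assumed.

```yaml
# values.yaml excerpt — explicit PV for the shared data dir on AKS (illustrative)
AnalysisNodeFS:
  enable: true
  AKSsharedDatadirPV:
    create: true
    secretName: azure-files-secret   # placeholder: Secret with storage account credentials
    shareName: imaging-shared        # placeholder: existing Azure File Share name
```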
8.4 Optional Components
| Component | Controlling Value | PVC Created |
|---|---|---|
| Extend Proxy | `ExtendProxy.enable: true` | `pvc-extendproxy-data` (10Gi) |
| MCP Server | `McpServer.enable: true` | `pvc-mcpserver-logs` (2Gi) |
8.5 Storage Class Creation
Set CreateStorageClass: false to skip storage class creation (e.g., if the cluster admin has pre-provisioned them). The DiskClassName and FileClassName values must still match the names of the existing storage classes.
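As a sketch, the Helm flags for the pre-provisioned case could be assembled as below. The release name (`castimaging`), chart path, and class names are placeholders; the value names (`CreateStorageClass`, `DiskClassName`, `FileClassName`) are from this document. The snippet only prints the command for review rather than running it against a cluster.

```shell
#!/bin/sh
# Assemble helm flags for pre-provisioned storage classes (class names are placeholders)
HELM_STORAGE_FLAGS="--set CreateStorageClass=false \
--set DiskClassName=my-existing-block-class \
--set FileClassName=my-existing-file-class"

# Print the full command for review before running it against your cluster
echo "helm upgrade --install castimaging ./castimaging-chart $HELM_STORAGE_FLAGS"
```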
9. Downloading Logs with kubectl cp
The following commands download the contents of each log volume to a local ./Logs/ directory. Run them from the machine where kubectl is configured and authenticated against your cluster.
Namespace: all commands assume the namespace `castimaging-v3`. Adjust if your deployment uses a different namespace.
StatefulSet pods (predictable pod names)
These pods have a fixed, deterministic name.
Analysis Node — CAST engine logs stored in the castdir volume:
Linux:

```shell
kubectl cp -n castimaging-v3 \
  console-analysis-node-core-0:/usr/share/CAST/CAST/Logs \
  ./Logs/console-analysis-node
```

Windows (PowerShell):

```powershell
kubectl cp -n castimaging-v3 `
  console-analysis-node-core-0:/usr/share/CAST/CAST/Logs `
  ./Logs/console-analysis-node
```
Neo4j — graph database logs stored in the neo4jdata volume:
Linux:

```shell
kubectl cp -n castimaging-v3 \
  viewer-neo4j-core-0:/var/lib/neo4j/logs \
  ./Logs/viewer-neo4j
```

Windows (PowerShell):

```powershell
kubectl cp -n castimaging-v3 `
  viewer-neo4j-core-0:/var/lib/neo4j/logs `
  ./Logs/viewer-neo4j
```
Deployment pods (dynamic pod names)
For Deployment-managed pods, the pod name includes a random suffix. The pattern below retrieves the current pod name dynamically before copying.
Viewer Server — Nginx access/error logs and application logs:
Linux:

```shell
POD=$(kubectl get pod -n castimaging-v3 -l imaging.service=viewer-server -o jsonpath='{.items[0].metadata.name}')
kubectl cp -n castimaging-v3 $POD:/opt/imaging/imaging-service/logs ./Logs/viewer-server
```

Windows (PowerShell):

```powershell
$POD = kubectl get pod -n castimaging-v3 -l imaging.service=viewer-server -o jsonpath='{.items[0].metadata.name}'
kubectl cp -n castimaging-v3 "${POD}:/opt/imaging/imaging-service/logs" ./Logs/viewer-server
```
Viewer ETL — ETL service operational logs:
Linux:

```shell
POD=$(kubectl get pod -n castimaging-v3 -l imaging.service=viewer-etl -o jsonpath='{.items[0].metadata.name}')
kubectl cp -n castimaging-v3 $POD:/opt/imaging/imaging-etl/logs ./Logs/viewer-etl
```

Windows (PowerShell):

```powershell
$POD = kubectl get pod -n castimaging-v3 -l imaging.service=viewer-etl -o jsonpath='{.items[0].metadata.name}'
kubectl cp -n castimaging-v3 "${POD}:/opt/imaging/imaging-etl/logs" ./Logs/viewer-etl
```
Viewer AI Manager — AI Manager operational logs:
Linux:

```shell
POD=$(kubectl get pod -n castimaging-v3 -l imaging.service=viewer-aimanager -o jsonpath='{.items[0].metadata.name}')
kubectl cp -n castimaging-v3 $POD:/opt/imaging/open_ai-manager/logs ./Logs/viewer-aimanager
```

Windows (PowerShell):

```powershell
$POD = kubectl get pod -n castimaging-v3 -l imaging.service=viewer-aimanager -o jsonpath='{.items[0].metadata.name}'
kubectl cp -n castimaging-v3 "${POD}:/opt/imaging/open_ai-manager/logs" ./Logs/viewer-aimanager
```
Viewer API — REST API service operational logs:
Linux:

```shell
POD=$(kubectl get pod -n castimaging-v3 -l imaging.service=viewer-api -o jsonpath='{.items[0].metadata.name}')
kubectl cp -n castimaging-v3 $POD:/opt/imaging/imaging-api/logs ./Logs/viewer-api
```

Windows (PowerShell):

```powershell
$POD = kubectl get pod -n castimaging-v3 -l imaging.service=viewer-api -o jsonpath='{.items[0].metadata.name}'
kubectl cp -n castimaging-v3 "${POD}:/opt/imaging/imaging-api/logs" ./Logs/viewer-api
```
MCP Server (only if McpServer.enable: true) — MCP server operational logs:
Linux:

```shell
POD=$(kubectl get pod -n castimaging-v3 -l imaging.service=mcp-server -o jsonpath='{.items[0].metadata.name}')
kubectl cp -n castimaging-v3 $POD:/app/logs ./Logs/mcp-server
```

Windows (PowerShell):

```powershell
$POD = kubectl get pod -n castimaging-v3 -l imaging.service=mcp-server -o jsonpath='{.items[0].metadata.name}'
kubectl cp -n castimaging-v3 "${POD}:/app/logs" ./Logs/mcp-server
```
Collect all logs in one shot
Linux — save the following as collect-logs.sh and run with bash collect-logs.sh:
```shell
#!/bin/bash
NAMESPACE="castimaging-v3"
DEST="./Logs/$(date +%Y%m%d_%H%M%S)"
mkdir -p "$DEST"

# StatefulSets (fixed pod names)
kubectl cp -n $NAMESPACE console-analysis-node-core-0:/usr/share/CAST/CAST/Logs $DEST/console-analysis-node
kubectl cp -n $NAMESPACE viewer-neo4j-core-0:/var/lib/neo4j/logs $DEST/viewer-neo4j

# Deployments (dynamic pod names)
declare -A LOG_PATHS=(
  [viewer-server]="/opt/imaging/imaging-service/logs"
  [viewer-etl]="/opt/imaging/imaging-etl/logs"
  [viewer-aimanager]="/opt/imaging/open_ai-manager/logs"
  [viewer-api]="/opt/imaging/imaging-api/logs"
  [mcp-server]="/app/logs"
)
for SVC in "${!LOG_PATHS[@]}"; do
  POD=$(kubectl get pod -n $NAMESPACE -l imaging.service=$SVC -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
  if [ -z "$POD" ]; then
    echo "No pod found for $SVC, skipping."
    continue
  fi
  echo "Copying logs from $POD ($SVC)..."
  kubectl cp -n $NAMESPACE $POD:${LOG_PATHS[$SVC]} $DEST/$SVC
done
echo "All logs collected in $DEST"
```
Windows — save the following as collect-logs.ps1 and run with powershell -File collect-logs.ps1:
```powershell
$NAMESPACE = "castimaging-v3"
$DEST = "./Logs/$(Get-Date -Format 'yyyyMMdd_HHmmss')"
New-Item -ItemType Directory -Force -Path $DEST | Out-Null

# StatefulSets (fixed pod names)
kubectl cp -n $NAMESPACE console-analysis-node-core-0:/usr/share/CAST/CAST/Logs "$DEST/console-analysis-node"
kubectl cp -n $NAMESPACE viewer-neo4j-core-0:/var/lib/neo4j/logs "$DEST/viewer-neo4j"

# Deployments (dynamic pod names)
$SERVICES = @{
    "viewer-server"    = "/opt/imaging/imaging-service/logs"
    "viewer-etl"       = "/opt/imaging/imaging-etl/logs"
    "viewer-aimanager" = "/opt/imaging/open_ai-manager/logs"
    "viewer-api"       = "/opt/imaging/imaging-api/logs"
    "mcp-server"       = "/app/logs"
}
foreach ($SVC in $SERVICES.Keys) {
    $POD = kubectl get pod -n $NAMESPACE -l "imaging.service=$SVC" -o jsonpath='{.items[0].metadata.name}' 2>$null
    if (-not $POD) {
        Write-Host "No pod found for $SVC, skipping."
        continue
    }
    Write-Host "Copying logs from $POD ($SVC)..."
    kubectl cp -n $NAMESPACE "${POD}:$($SERVICES[$SVC])" "$DEST/$SVC"
}
Write-Host "All logs collected in $DEST"
```
Note: `kubectl cp` requires the `tar` binary to be available inside the container, which is the case for all CAST Imaging containers. If a pod is not running or not ready, the copy will fail for that service — check pod status first with `kubectl get pods -n castimaging-v3`.
10. Storage Summary Table
| Container | PVC(s) | VolumeClaimTemplate(s) | ConfigMap Files | Secret Files | EmptyDir |
|---|---|---|---|---|---|
| console-postgres | `db-data` ¹ | — | `postgresql.conf.sample`, `init-db.sh` | `/opt/secrets` (all DB passwords) | `/dev/shm` (4G) |
| console-control-panel | — | — | SQL update scripts (init containers) | — | — |
| console-authentication-service | — | — | `ca.crt` ² | — | — |
| console-service | — | — | — | — | — |
| console-gateway-service | — | — | — | — | — |
| console-sso-service | — | — | — | — | — |
| console-dashboards | — | — | — | — | — |
| console-analysis-node | `pvc-shared-datadir` | `castdir` (100Gi) | SQL upgrade script (init container) | — | `/dev/shm` (1G) |
| viewer-neo4j | — | `neo4jdata` (100Gi) | — | — | — |
| viewer-server | `pvc-imagingviewer-v3-server-log` | — | `nginx.conf` | — | — |
| viewer-etl | `pvc-imagingviewer-v3-etl-logs`, `pvc-imagingviewer-v3-etl-csvarchive` | — | — | — | — |
| viewer-aimanager | `pvc-imagingviewer-v3-aimanager-logs` | — | — | — | (config staging) |
| viewer-api | `pvc-imagingviewer-v3-api-logs` | — | — | — | — |
| mcp-server ³ | `pvc-mcpserver-logs` | — | `app.config` | — | — |
| extendproxy ⁴ | `pvc-extendproxy-data` | — | — | — | — |
Notes:

¹ `db-data` PVC only exists when `CastStorageService.enable: true`.
² `authcacrt` ConfigMap only mounted when `UseCustomTrustStore: true`.
³ `mcp-server` and its PVC only deployed when `McpServer.enable: true`.
⁴ `extendproxy` and its PVC only deployed when `ExtendProxy.enable: true`.