Customizing the Helm Chart

Beta Feature — For Test Environments Only

Customize the Vertica Helm chart after you create and install a new instance. Pass environment-specific settings in a YAML-formatted file, or with the --set option during the final helm upgrade command.

For details about adding or removing Vertica pods, see Scaling Vertica Pods Horizontally.

For a comprehensive list of configurable parameters, see Helm Chart Parameters.

Prerequisites

The following steps incrementally build a helm upgrade command that updates a Helm chart named vertica-charts with a YAML file named overrides.yaml and various --set options.

  1. Begin the helm upgrade command by defining the name of the Helm chart instance and the Helm chart that you want to alter. The following command updates a Helm chart instance named my-release:

    $ helm upgrade my-release vertica-charts/vertica \

    The trailing backslash continues the command on the next line. All subsequent lines are indented by one tab.

  2. The Vertica Helm chart uses the default StorageClass. To use a different storage class, append the following --set option to the command:

    	--set db.storage.local.storageClass=storageClass-name \
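
    If you prefer to keep every setting in one file, the same storage class override can go in overrides.yaml instead of on the command line (a sketch; storageClass-name is a placeholder for a StorageClass that exists in your cluster):

    db:
      storage:
        local:
          storageClass: storageClass-name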
  3. Each pod requests resources from its host node. The Vertica Helm chart's default configuration sets request and limit defaults based on Recommendations for Sizing Vertica Nodes and Clusters in the Vertica knowledge base. These defaults are intended for test environments and are not suitable for production workloads.

    As a best practice, set the resource request and limit to equal values so that the pods are assigned to the Guaranteed QoS class. Equal settings also provide the best safeguard against the Out Of Memory (OOM) Killer in constrained environments.

    Select resource settings that your host nodes can accommodate. When a pod is started or rescheduled, Kubernetes searches for a host node with enough available resources to run the pod. If no host node has enough resources, the pod's STATUS remains Pending until enough resources become available.

    Because setting the resource requests and limits requires multiple --set options, pass the new settings to the configuration in the overrides.yaml file instead. Add the following settings to overrides.yaml to set the requests and limits to 96Gi of memory and 32 CPUs:

    subclusters:
      defaultsubcluster:
        resources:
          requests:
            memory: "96Gi"
            cpu: 32
          limits:
            memory: "96Gi"
            cpu: 32

    When you change the resource settings, Kubernetes restarts each pod with the updated resource configuration.
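
    If your test cluster cannot accommodate the values above, the same parameters accept smaller values. The following fragment is an illustrative sketch for a constrained test environment; it keeps the requests and limits equal so that the pods still qualify for the Guaranteed QoS class:

    subclusters:
      defaultsubcluster:
        resources:
          requests:
            memory: "8Gi"
            cpu: 2
          limits:
            memory: "8Gi"
            cpu: 2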

  4. Vertica runs optimally in Kubernetes if there is only one Vertica pod per node. As a best practice, set subclusters.defaultsubcluster.affinity to ensure that a single node does not serve two Vertica pods.

    Append the following to the overrides.yaml:

    ...
    subclusters:
      defaultsubcluster:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                - key: app.kubernetes.io/name
                  operator: In
                  values:
                  - vertica
              topologyKey: "kubernetes.io/hostname"
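
    If your cluster has fewer host nodes than Vertica pods, the required rule above leaves some pods unscheduled. The standard Kubernetes podAntiAffinity API also offers a softer preferredDuringSchedulingIgnoredDuringExecution variant that spreads pods across nodes when possible but still schedules them when it is not. The following sketch assumes that the chart passes the affinity value through to the pods unchanged:

    ...
    subclusters:
      defaultsubcluster:
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                  - key: app.kubernetes.io/name
                    operator: In
                    values:
                    - vertica
                topologyKey: "kubernetes.io/hostname"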
  5. Optional. In many circumstances, external client applications need to connect to your Kubernetes cluster. Client access is controlled by the subclusters.defaultsubcluster.service.type setting. The Vertica Helm chart's default configuration sets this value to ClusterIP, which provides a stable IP and port that is accessible only from within the Kubernetes cluster. For external client access, set this value to NodePort.

    Append the following to overrides.yaml to enable external access on a port. If you do not provide a port number, Kubernetes assigns a port in the 30000 - 32767 range automatically. As a best practice, let Kubernetes assign the port number to avoid potential port collisions:

    ...
    subclusters:
      defaultsubcluster:
        service:
          type: NodePort

    If you choose to use the NodePort service type, you must create a firewall rule that allows TCP connections on the assigned external port. To get the assigned port number, use the kubectl get svc command to view the PORT(S) column. In the following example, port 31978 is assigned to the NodePort service:

    $ kubectl get svc
    NAME                                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
    cluster-vertica-defaultsubcluster      NodePort    10.20.30.40     <none>        5433:31978/TCP,5444:30612/TCP   12h
  6. Confirm that the final helm upgrade command matches the command displayed below, then execute it:

    $ helm upgrade my-release vertica-charts/vertica \
        --set db.storage.local.storageClass=storageClass-name \
        -f overrides.yaml

    To confirm your updates without applying the changes, add the --dry-run flag to the end of the helm upgrade command.

  7. To confirm that the values were set properly, enter the following command:

    $ helm get values my-release
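
    Assuming the overrides from the preceding steps, helm get values prints only the user-supplied values. A trimmed, illustrative example of the output:

    USER-SUPPLIED VALUES:
    db:
      storage:
        local:
          storageClass: storageClass-name
    subclusters:
      defaultsubcluster:
        resources:
          limits:
            cpu: 32
            memory: 96Gi
          requests:
            cpu: 32
            memory: 96Gi
        service:
          type: NodePort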