Using Helm

    Available in Classic

    Helm is a Kubernetes package manager that consists of two parts: the client (helm) and the server (Tiller).

    This guide is based on Helm v2.

    Install Helm

    Before using Helm, install kubectl and configure your kubeconfig.

    Refer to the following document to install Helm for your OS.

    • https://v2.helm.sh/docs/using_helm/#installing-helm

    Example: macOS

    $ brew install helm@2
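
    You can verify the installed client before initializing Helm by printing the client version. This is an optional check, and the exact version string depends on the release Homebrew installed.

    Check the Helm client version

    $ helm version --client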
    

    To use Helm, you need to deploy Tiller, and Tiller requires a ClusterRoleBinding that grants it the necessary permissions.

    Add cluster role binding

    $ kubectl --kubeconfig=$KUBE_CONFIG create clusterrolebinding tiller-cluster-admin \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:default
    
    clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-admin created
    

    After adding the ClusterRoleBinding, initialize Helm to deploy Tiller.

    Initialize Helm

    $ helm --kubeconfig=$KUBE_CONFIG init
    ...
    
    Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
    
    Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
    To prevent this, run `helm init` with the --tiller-tls-verify flag.
    For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
    Happy Helming!
    

    After executing the init command, you can see that the tiller-deploy pod is deployed. If the STATUS is Running, Helm is ready to use.

    Check tiller-deploy pod

    $ kubectl --kubeconfig=$KUBE_CONFIG get pods -n kube-system -w
    NAME                                            READY   STATUS    RESTARTS   AGE
    ...
    tiller-deploy-845cffcd48-llds2                  1/1     Running   0          18h
    ...
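
    You can also confirm that the client can reach Tiller by printing both versions; this is an optional check, and the exact version strings will vary with your installation.

    Check the Helm client and Tiller versions

    $ helm --kubeconfig=$KUBE_CONFIG version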
    

    Integrate with NAS by installing NFS Client Provisioner using Helm

    Since BlockStorage CSI is installed in your cluster by default, a StorageClass is created automatically. You can also create a NAS-backed StorageClass by using the nfs-client-provisioner Helm chart.
    First, create a NAS volume that uses the NFS protocol.

    Note

    When you create a NAS volume, all the worker node servers in your cluster must be added to the ACL.

    Once a NAS volume is created, check the IP address and path information in the mount information.


    The IP and path information is used to install a Helm chart.

    Example

    • __NAS_IP__: 10.250.48.16
    • __NAS_PATH__: /n000075_nkstest3

    Install nfs-client-provisioner

    In the following command, replace __NAS_IP__ and __NAS_PATH__ with your IP and path information.

    $ helm --kubeconfig=$KUBE_CONFIG install --name storage stable/nfs-client-provisioner \
    --set nfs.server=__NAS_IP__ \
    --set nfs.path=__NAS_PATH__
    
    NAME:   storage
    LAST DEPLOYED: Tue Feb 19 16:56:50 2019
    NAMESPACE: default
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1/ClusterRole
    NAME                                   AGE
    storage-nfs-client-provisioner-runner  1s
    
    ==> v1/ClusterRoleBinding
    run-storage-nfs-client-provisioner  0s
    
    ==> v1/Role
    leader-locking-storage-nfs-client-provisioner  0s
    
    ==> v1/RoleBinding
    leader-locking-storage-nfs-client-provisioner  0s
    
    ==> v1/Deployment
    storage-nfs-client-provisioner  0s
    
    ==> v1/Pod(related)
    
    NAME                                             READY  STATUS             RESTARTS  AGE
    storage-nfs-client-provisioner-6f4b47749d-x9cn2  0/1    ContainerCreating  0         0s
    
    ==> v1/StorageClass
    
    NAME        AGE
    nfs-client  1s
    
    ==> v1/ServiceAccount
    storage-nfs-client-provisioner  1s
    

    Check StorageClass

    $ kubectl --kubeconfig=$KUBE_CONFIG get storageclass
    NAME                   PROVISIONER                                    AGE
    nfs-client (default)   cluster.local/storage-nfs-client-provisioner   16s
    

    After the above chart is installed, if a Helm package you install includes a PVC, a PV is automatically created on the integrated NAS through the nfs-client StorageClass.
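
    For example, a minimal PVC that requests storage from the nfs-client StorageClass could look like the following sketch; the claim name my-nas-pvc and the 1Gi size are illustrative. When such a claim is created, nfs-client-provisioner provisions a PV backed by a subdirectory of the NAS volume.

    Example PVC using the nfs-client StorageClass

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-nas-pvc            # illustrative name
    spec:
      storageClassName: nfs-client
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi            # illustrative size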

    Caution

    When you attempt to stop a worker node that has the NAS mounted through nfs-client-provisioner, the node may end up in the stop-failed state.
    This is an OS bug that occurs when I/O is in progress on the mounted NAS.
    We therefore recommend referring to the Node Maintenance Guide and draining the node before stopping it.
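
    As a minimal sketch of that drain step, the following command evicts pods from a node before you stop it. <node-name> is a placeholder, and the flags shown are the ones available in kubectl releases from the Helm v2 era.

    Drain a node before stopping it

    $ kubectl --kubeconfig=$KUBE_CONFIG drain <node-name> --ignore-daemonsets --delete-local-data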

    Install applications with Helm

    This section describes how to install and access the Jenkins application by using Helm.

    Execute the following command to install Jenkins.

    Prerequisite

    The Jenkins chart creates a PersistentVolumeClaim, so a StorageClass must be available in your cluster. This example uses the nfs-client StorageClass created above.

    Install Jenkins

    $ helm --kubeconfig=$KUBE_CONFIG install --name=ci stable/jenkins
    NAME:   ci
    LAST DEPLOYED: Wed Feb 20 15:38:07 2019
    NAMESPACE: default
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1/ConfigMap
    NAME              AGE
    ci-jenkins        0s
    ci-jenkins-tests  0s
    
    ==> v1/PersistentVolumeClaim
    ci-jenkins  0s
    
    ==> v1/Service
    ci-jenkins-agent  0s
    ci-jenkins        0s
    
    ==> v1/Deployment
    ci-jenkins  0s
    
    ==> v1/Pod(related)
    
    NAME                         READY  STATUS   RESTARTS  AGE
    ci-jenkins-7fb57bf7bb-t8dd9  0/1    Pending  0         0s
    
    ==> v1/Secret
    
    NAME        AGE
    ci-jenkins  0s
    
    
    NOTES:
    1. Get your 'admin' user password by running:
      printf $(kubectl get secret --namespace default ci-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
    2. Get the Jenkins URL to visit by running these commands in the same shell:
      NOTE: It may take a few minutes for the LoadBalancer IP to be available.
            You can watch the status of by running 'kubectl get svc --namespace default -w ci-jenkins'
      export SERVICE_IP=$(kubectl get svc --namespace default ci-jenkins --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
      echo http://$SERVICE_IP:8080/login
    
    3. Login with the password from step 1 and the username: admin
    
    For more information on running Jenkins on Kubernetes, visit:
    https://cloud.google.com/solutions/jenkins-on-container-engine
    
    • Username: admin
    • Password: retrieve it from the Secret as shown below.

    After Jenkins is installed, the installation output (NOTES) explains how to retrieve the account information. You can also check the PVC and PV that were created.

    Check PVC and PV

    $ kubectl --kubeconfig=$KUBE_CONFIG get pvc
    NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
    ci-jenkins   Bound    pvc-1548887b-34da-11e9-89a3-f220cd2fe758   10Gi       RWO            nfs-client           23s
    
    $ kubectl --kubeconfig=$KUBE_CONFIG get pv
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS      REASON   AGE
    pvc-1548887b-34da-11e9-89a3-f220cd2fe758   10Gi       RWO            Delete           Bound    default/ci-jenkins   nfs-client                 23s
    

    Get your admin password with secret

    $ kubectl --kubeconfig=$KUBE_CONFIG get secret --namespace default ci-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode;echo
    Oq307Rj2Yu
    

    Now you can use the kubectl port-forward command on your local machine to access Jenkins.

    Access Jenkins from your local machine

    $ export POD_NAME=$(kubectl --kubeconfig=$KUBE_CONFIG get pods -l "app.kubernetes.io/name=jenkins" -o jsonpath="{.items[0].metadata.name}"); echo $POD_NAME; kubectl --kubeconfig=$KUBE_CONFIG port-forward $POD_NAME 18080:8080
    Forwarding from [::1]:18080 -> 8080
    Forwarding from 127.0.0.1:18080 -> 8080
    

    Execute the above command and open the following link in your browser; the Jenkins login page appears.

    • http://localhost:18080


    Log in with your Jenkins account information to access the Jenkins home page.


    Install Prometheus and Grafana to monitor clusters

    This section describes how to install Prometheus, a monitoring system, and Grafana, an analytics and visualization platform, and use them to monitor your clusters.
    For more information on Prometheus and Grafana, refer to the following official websites.

    • https://prometheus.io
    • https://grafana.com


    Install Prometheus

    Create a namespace for monitoring.

    Create a namespace

    $ kubectl --kubeconfig=$KUBE_CONFIG create namespace pg
    

    Install Prometheus with the helm command.

    Install Prometheus

    $ helm --kubeconfig=$KUBE_CONFIG install --name prometheus stable/prometheus --version 6.7.4 --namespace pg
    NAME:   prometheus
    LAST DEPLOYED: Thu Feb 28 11:24:30 2019
    NAMESPACE: pg
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1/Pod(related)
    NAME                                            READY  STATUS             RESTARTS  AGE
    prometheus-node-exporter-pdrq7                  0/1    ContainerCreating  0         1s
    prometheus-alertmanager-7b945bb544-87knh        0/2    ContainerCreating  0         1s
    prometheus-kube-state-metrics-86996f7fff-tfm92  0/1    Pending            0         1s
    prometheus-pushgateway-b9477487f-42bhh          0/1    Pending            0         1s
    prometheus-server-6f9d569489-q75mx              0/2    Pending            0         1s
    
    ==> v1/PersistentVolumeClaim
    
    NAME                     AGE
    prometheus-alertmanager  1s
    prometheus-server        1s
    
    ==> v1beta1/ClusterRole
    prometheus-kube-state-metrics  1s
    prometheus-server              1s
    
    ==> v1beta1/ClusterRoleBinding
    prometheus-kube-state-metrics  1s
    prometheus-server              1s
    
    ==> v1/Service
    prometheus-alertmanager        1s
    prometheus-kube-state-metrics  1s
    prometheus-node-exporter       1s
    prometheus-pushgateway         1s
    prometheus-server              1s
    
    ==> v1beta1/DaemonSet
    prometheus-node-exporter  1s
    
    ==> v1/ConfigMap
    prometheus-alertmanager  1s
    prometheus-server        1s
    
    ==> v1/ServiceAccount
    prometheus-alertmanager        1s
    prometheus-kube-state-metrics  1s
    prometheus-node-exporter       1s
    prometheus-pushgateway         1s
    prometheus-server              1s
    
    ==> v1beta1/Deployment
    prometheus-alertmanager        1s
    prometheus-kube-state-metrics  1s
    prometheus-pushgateway         1s
    prometheus-server              1s
    
    
    NOTES:
    The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
    prometheus-server.pg.svc.cluster.local
    
    
    Get the Prometheus server URL by running these commands in the same shell:
      export POD_NAME=$(kubectl get pods --namespace pg -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
      kubectl --namespace pg port-forward $POD_NAME 9090
    
    
    The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
    prometheus-alertmanager.pg.svc.cluster.local
    
    
    Get the Alertmanager URL by running these commands in the same shell:
      export POD_NAME=$(kubectl get pods --namespace pg -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
      kubectl --namespace pg port-forward $POD_NAME 9093
    
    
    The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
    prometheus-pushgateway.pg.svc.cluster.local
    
    
    Get the PushGateway URL by running these commands in the same shell:
      export POD_NAME=$(kubectl get pods --namespace pg -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
      kubectl --namespace pg port-forward $POD_NAME 9091
    
    For more information on running Prometheus, visit:
    https://prometheus.io/
    

    Access Prometheus from your local machine

    $ export POD_NAME=$(kubectl --kubeconfig=$KUBE_CONFIG get pods --namespace pg -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
    $ kubectl --kubeconfig=$KUBE_CONFIG --namespace pg port-forward $POD_NAME 9090
    

    Execute the above command, and open the following link in your browser to access Prometheus.

    • http://localhost:9090
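
    With the port-forward running, you can also query the Prometheus HTTP API directly from a terminal. As a quick optional check, the built-in up metric reports which scrape targets Prometheus can reach.

    Query the Prometheus API

    $ curl 'http://localhost:9090/api/v1/query?query=up'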

    Install Grafana

    Before installing Grafana, create values.yml as follows to integrate it with Prometheus.
    Set datasources.url to http://<prometheus-server-name>. In this document it is http://prometheus-server, because the Prometheus server Service created above is named prometheus-server.

    values.yml

    persistence:
      enabled: true
      accessModes:
        - ReadWriteOnce
      size: 5Gi

    datasources:
      datasources.yaml:
        apiVersion: 1
        datasources:
        - name: Prometheus
          type: prometheus
          url: http://prometheus-server
          access: proxy
          isDefault: true

    dashboards:
      kube-dash:
        gnetId: 6663
        revision: 1
        datasource: Prometheus
      kube-official-dash:
        gnetId: 2
        revision: 1
        datasource: Prometheus

    dashboardProviders:
      dashboardproviders.yaml:
        apiVersion: 1
        providers:
        - name: 'default'
          orgId: 1
          folder: ''
          type: file
          disableDeletion: false
          editable: true
          options:
            path: /var/lib/grafana/dashboards
    

    After creating the file, use the Helm command with the -f values.yml option to install Grafana.
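
    Optionally, you can first confirm that the Service name referenced by datasources.url exists; prometheus-server should appear in the list.

    Check the Prometheus server Service

    $ kubectl --kubeconfig=$KUBE_CONFIG get svc --namespace pg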

    Install Grafana with Helm

    $ helm --kubeconfig=$KUBE_CONFIG install --name grafana stable/grafana --version 1.11.6 -f values.yml --namespace pg
    NAME:   grafana
    LAST DEPLOYED: Thu Feb 28 14:38:24 2019
    NAMESPACE: pg
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1beta1/RoleBinding
    NAME     AGE
    grafana  0s
    
    ==> v1/Service
    grafana  0s
    
    ==> v1beta2/Deployment
    grafana  0s
    
    ==> v1/Pod(related)
    
    NAME                      READY  STATUS   RESTARTS  AGE
    grafana-76dbd66b77-d2dkl  0/1    Pending  0         0s
    
    ==> v1beta1/PodSecurityPolicy
    
    NAME     AGE
    grafana  0s
    
    ==> v1/Secret
    grafana  0s
    
    ==> v1/ConfigMap
    grafana                  0s
    grafana-dashboards-json  0s
    
    ==> v1/ClusterRole
    grafana-clusterrole  0s
    
    ==> v1/PersistentVolumeClaim
    grafana  0s
    
    ==> v1/ServiceAccount
    grafana  0s
    
    ==> v1/ClusterRoleBinding
    grafana-clusterrolebinding  0s
    
    ==> v1beta1/Role
    grafana  0s
    
    
    NOTES:
    1. Get your 'admin' user password by running:
    
       kubectl get secret --namespace pg grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
    
    2. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:
    
       grafana.pg.svc.cluster.local
    
       Get the Grafana URL to visit by running these commands in the same shell:
    
         export POD_NAME=$(kubectl get pods --namespace pg -l "app=grafana,component=" -o jsonpath="{.items[0].metadata.name}")
         kubectl --namespace pg port-forward $POD_NAME 3000
    
    3. Login with the password from step 1 and the username: admin
    

    Get your Grafana account password with secret

    $ kubectl --kubeconfig=$KUBE_CONFIG get secret --namespace pg grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
    

    Access Grafana from your local machine

    $ export POD_NAME=$(kubectl --kubeconfig=$KUBE_CONFIG get pods --namespace pg -l "app=grafana" -o jsonpath="{.items[0].metadata.name}")
    $ kubectl --kubeconfig=$KUBE_CONFIG --namespace pg port-forward $POD_NAME 3000
    

    After executing the commands above, open the following link and log in with your Grafana account information to access the Grafana Dashboard.

    • http://localhost:3000
    • Username: admin
    • Password: retrieve it from the Secret as shown above.

    Add Dashboards

    To monitor your clusters with the collected Prometheus data, add the following two Kubernetes dashboards.

    • https://grafana.com/dashboards/8588
    • https://grafana.com/dashboards/1621
    1. Click Create > Import on the Grafana page.

    2. Enter one of the dashboard links above and click Load.

    3. Select Prometheus as the data source and click Import.

    4. The dashboard appears with the imported data.

    5. Add the other dashboard in the same way.
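
    Alternatively, dashboards can be added declaratively at install time by extending the dashboards section of values.yml shown above. The following is only a sketch: the entry names are arbitrary, and the revision values are assumptions that should be checked against each dashboard page. After editing values.yml, the release could be updated with helm --kubeconfig=$KUBE_CONFIG upgrade grafana stable/grafana -f values.yml --namespace pg.

    Example dashboards entries for values.yml (sketch)

    dashboards:
      kubernetes-cluster:
        gnetId: 8588
        revision: 1        # assumption; check the dashboard page for the current revision
        datasource: Prometheus
      kubernetes-cluster-monitoring:
        gnetId: 1621
        revision: 1        # assumption
        datasource: Prometheus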

