Cluster Autoscaler user guide

    Before use

    Note

    The current beta version is available with the following restrictions:

    • Multiple node pools are not supported
    • Automatic upgrades of the Cluster Autoscaler version are not supported
    • Installation is supported only with Helm

    Kubernetes allows you to automatically scale clusters up or down with the following features:

    Horizontal Pod Autoscaler adjusts the number of necessary Pods based on the resources currently being used by Pods.

    Cluster Autoscaler adjusts the number of Nodes by comparing resources available for a cluster and those requested by Pods.

    These two Autoscalers enable Kubernetes to automatically adjust the number of Pods and Nodes according to the cluster's load.

    This document describes what Cluster Autoscaler is and how you can install and use the tool.

    Cluster Autoscaler

    Auto adjustment

    • Scales up Nodes when Pod resource requests exceed the resources available in the current cluster
    • Scales down Nodes when a Node remains underutilized for a certain period of time

    Cluster Autoscaler works based on Pod resource requests, not on the resources currently in use.
    Therefore, a high Pod load alone does not increase the number of Nodes if no resource requests are made.
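
    For example, you can compare the two views: the requests that Cluster Autoscaler evaluates appear under "Allocated resources" in kubectl describe node, while kubectl top node shows actual usage, which Cluster Autoscaler ignores. (The node name below is a placeholder.)

    $ kubectl --kubeconfig $KUBE_CONFIG describe node [node-name] | grep -A 6 "Allocated resources"
    $ kubectl --kubeconfig $KUBE_CONFIG top node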

    Node scaledown exceptions

    Cluster Autoscaler does not scale down underutilized nodes in the following cases:

    • Pods that are not managed by a controller (e.g., Deployment, StatefulSet) are running on the node
    • Pods that use local storage are running on the node
    • Pods cannot be moved to another node
    • Pods have the annotation "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" (see the example below)
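
    For example, to prevent a specific Pod from being evicted during a scale-down, you can set this annotation directly (the Pod name below is a placeholder):

    $ kubectl --kubeconfig $KUBE_CONFIG annotate pod [pod-name] cluster-autoscaler.kubernetes.io/safe-to-evict="false"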

    For more information, see the official Cluster Autoscaler documentation.

    Application

    Install Helm

    Currently, Cluster Autoscaler can only be installed with Helm. If you don't have Helm installed, install it as described in Install Helm.

    Add Ncloud repository

    To install Cluster Autoscaler, add the Ncloud repository.

    $ helm repo add ncloud https://navercloudplatformdeveloper.github.io/helm-charts
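
    After adding the repository, you can refresh the local chart index and check that the chart is visible. Note that the search syntax depends on your Helm version (helm search repo ncloud in Helm 3, helm search ncloud in Helm 2):

    $ helm repo update
    $ helm search ncloud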
    

    Install Cluster Autoscaler

    Note

    This guide describes how to install Cluster Autoscaler in the Classic environment. For the VPC environment, please see the Ncloud Kubernetes Service (VPC) guide.

    Caution

    The Cluster Autoscaler provided in this guide is only available for use in the Classic environment of Ncloud Kubernetes Service.

    To install Cluster Autoscaler, the --version parameter is required; it must correspond to the Kubernetes version of the current cluster.
    Use the following table to find the Cluster Autoscaler version for your Kubernetes version.

    Kubernetes Version    CA Version
    1.17.11               Not supported
    1.12.7                1.12.7-beta-202004200000

    You can also use the --set option to specify installation options.
    Currently, the following options are available for installation with Helm.

    Option    Description                Default
    min       Minimum number of nodes    1
    max       Maximum number of nodes    3

    $ helm --kubeconfig=$KUBE_CONFIG install ncloud/autoscaler \
    --set min=1 \
    --set max=3 \
    --version [Cluster Autoscaler Version]
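
    For example, for a Kubernetes 1.12.7 cluster, using the chart version from the table above, the command would look like this:

    $ helm --kubeconfig=$KUBE_CONFIG install ncloud/autoscaler \
    --set min=1 \
    --set max=3 \
    --version 1.12.7-beta-202004200000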
    

    Verify successful installation

    If your installation was successful, you can check the current Cluster Autoscaler status from a ConfigMap.

    $ kubectl --kubeconfig $KUBE_CONFIG get cm cluster-autoscaler-status -o yaml -n kube-system
    
    apiVersion: v1
    data:
      status: |+
        Cluster-autoscaler status at 2019-09-03 08:59:53.84165088 +0000 UTC:
        Cluster-wide:
          Health:      Healthy (ready=1 unready=0 notStarted=0 longNotStarted=0 registered=1 longUnregistered=0)
                       LastProbeTime:      2019-09-03 08:59:53.70167178 +0000 UTC m=+23.846174142
                       LastTransitionTime: 2019-09-03 08:59:43.520248394 +0000 UTC m=+13.664750787
          ScaleUp:     NoActivity (ready=1 registered=1)
                       LastProbeTime:      2019-09-03 08:59:53.70167178 +0000 UTC m=+23.846174142
                       LastTransitionTime: 2019-09-03 08:59:43.520248394 +0000 UTC m=+13.664750787
          ScaleDown:   NoCandidates (candidates=0)
                       LastProbeTime:      2019-09-03 08:59:53.70167178 +0000 UTC m=+23.846174142
                       LastTransitionTime: 2019-09-03 08:59:43.520248394 +0000 UTC m=+13.664750787
    
        NodeGroups:
          Name:        k8s-default-group
          Health:      Healthy (ready=1 unready=0 notStarted=0 longNotStarted=0 registered=1 longUnregistered=0 cloudProviderTarget=1 (minSize=1, maxSize=5))
                       LastProbeTime:      2019-09-03 08:59:53.70167178 +0000 UTC m=+23.846174142
                       LastTransitionTime: 2019-09-03 08:59:43.520248394 +0000 UTC m=+13.664750787
          ScaleUp:     NoActivity (ready=1 cloudProviderTarget=1)
                       LastProbeTime:      2019-09-03 08:59:53.70167178 +0000 UTC m=+23.846174142
                       LastTransitionTime: 2019-09-03 08:59:43.520248394 +0000 UTC m=+13.664750787
          ScaleDown:   NoCandidates (candidates=0)
                       LastProbeTime:      2019-09-03 08:59:53.70167178 +0000 UTC m=+23.846174142
                       LastTransitionTime: 2019-09-03 08:59:43.520248394 +0000 UTC m=+13.664750787
    
    kind: ConfigMap
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/last-updated: 2019-09-03 08:59:53.84165088 +0000
          UTC
      creationTimestamp: 2019-09-03T08:59:31Z
      name: cluster-autoscaler-status
      namespace: kube-system
      resourceVersion: "426558451"
      selfLink: /api/v1/namespaces/kube-system/configmaps/cluster-autoscaler-status
      uid: 248a8014-ce29-11e9-8a51-f220cd8c2e67
    

    Example

    In this example, we'll run a load test for the cluster with Horizontal Pod Autoscaler and Cluster Autoscaler installed, in order to check if the Pods and Nodes are scaled up.

    Prepare a cluster and check that Metrics Server is working normally

    Create a cluster with one Node.

    $ kubectl --kubeconfig $KUBE_CONFIG get nodes
    
    NAME             STATUS    ROLES     AGE       VERSION
    nks-worker-5uh   Ready     node      3h        v1.12.7
    

    Check whether Metrics Server is working normally.

    It may take a while for metrics to be collected.

    $ kubectl --kubeconfig $KUBE_CONFIG top node
    
    NAME             CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%
    nks-worker-5uh   94m          4%        1330Mi          36%
    

    See the Application section above to install Cluster Autoscaler.

    HPA configuration and load test

    Following the page below, enable Horizontal Pod Autoscaler (HPA) and run a load test, working from the "Run & expose php-apache server" step through the "Increase load" step.

    Horizontal Pod Autoscaler walkthrough
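
    For reference, the relevant steps are roughly as follows for Kubernetes 1.12 (the image and resource values follow the walkthrough of that time; run the load generator in a separate terminal):

    $ kubectl --kubeconfig $KUBE_CONFIG run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80
    $ kubectl --kubeconfig $KUBE_CONFIG autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
    $ kubectl --kubeconfig $KUBE_CONFIG run -it --rm load-generator --image=busybox /bin/sh
    / # while true; do wget -q -O- http://php-apache; done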

    After running the load test, you can verify that the Pods have been automatically scaled up, as in the results below.

    Since the total Pod resource requests are still less than the resources available on the cluster's Nodes, the Nodes are not scaled up yet.

    Note

    The result depends on the Node specifications chosen when the cluster was created.

    $ kubectl --kubeconfig $KUBE_CONFIG get hpa
    
    NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    php-apache   Deployment/php-apache   46%/50%   1         10        5          28m
    
    $ kubectl --kubeconfig $KUBE_CONFIG get pods
    
    NAME                               READY     STATUS    RESTARTS   AGE
    php-apache-f67db78b-2lvww          1/1       Running   0          28m
    php-apache-f67db78b-hbc86          1/1       Running   0          2m
    php-apache-f67db78b-mppcl          1/1       Running   0          2m
    php-apache-f67db78b-pctkv          1/1       Running   0          2m
    php-apache-f67db78b-wr5dh          1/1       Running   0          2m
    

    Change HPA configuration

    Change the target CPU utilization percentage of Horizontal Pod Autoscaler from 50 to 20.

    $ kubectl --kubeconfig $KUBE_CONFIG patch hpa php-apache --patch '{"spec":{"targetCPUUtilizationPercentage":20}}'
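
    You can confirm that the new threshold has been applied by checking the TARGETS column:

    $ kubectl --kubeconfig $KUBE_CONFIG get hpa php-apache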
    

    Since the Pod resource requests now exceed the resources available on the Node, the newly created Pods cannot be scheduled and remain in the Pending state.

    $ kubectl --kubeconfig $KUBE_CONFIG get pods
    
    NAME                               READY     STATUS    RESTARTS   AGE
    php-apache-f67db78b-nz6zq          0/1       Pending   0          3m
    php-apache-f67db78b-x2gr9          0/1       Pending   0          3m
    php-apache-f67db78b-2lvww          1/1       Running   0          41m
    php-apache-f67db78b-hbc86          1/1       Running   0          15m
    php-apache-f67db78b-mppcl          1/1       Running   0          15m
    php-apache-f67db78b-p2q6r          1/1       Running   0          3m
    php-apache-f67db78b-pctkv          1/1       Running   0          15m
    php-apache-f67db78b-r8whp          1/1       Running   0          3m
    php-apache-f67db78b-sjrh7          1/1       Running   0          3m
    php-apache-f67db78b-wr5dh          1/1       Running   0          15m
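
    To see why a Pod stays in the Pending state, describe one of the pending Pods from the output above; the Events section typically shows a FailedScheduling message such as "Insufficient cpu".

    $ kubectl --kubeconfig $KUBE_CONFIG describe pod php-apache-f67db78b-nz6zq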
    

    Scale up Nodes

    Following the rule that Nodes are scaled up when Pod resource requests exceed the resources available in the current cluster, Cluster Autoscaler determines the required number of Nodes, "cloudProviderTarget", based on the currently pending Pod requests. In this example, one more Node is needed, so Cluster Autoscaler adjusts the total number of Nodes to 2.

    You can check "cloudProviderTarget" using a ConfigMap.

    $ kubectl --kubeconfig $KUBE_CONFIG get cm cluster-autoscaler-status -o yaml -n kube-system
    
    apiVersion: v1
    data:
      status: |+
        Cluster-autoscaler status at 2019-09-03 09:53:35.841671859 +0000 UTC:
        Cluster-wide:
          Health:      Healthy (ready=1 unready=0 notStarted=0 longNotStarted=0 registered=1 longUnregistered=0)
                       LastProbeTime:      2019-09-03 09:53:35.645663163 +0000 UTC m=+276.303106480
                       LastTransitionTime: 2019-09-03 09:49:29.11931241 +0000 UTC m=+29.776755835
          ScaleUp:     InProgress (ready=1 registered=1)
                       LastProbeTime:      2019-09-03 09:53:35.645663163 +0000 UTC m=+276.303106480
                       LastTransitionTime: 2019-09-03 09:50:20.028647773 +0000 UTC m=+80.686091197
          ScaleDown:   NoCandidates (candidates=0)
                       LastProbeTime:      2019-09-03 09:53:35.645663163 +0000 UTC m=+276.303106480
                       LastTransitionTime: 2019-09-03 09:49:29.11931241 +0000 UTC m=+29.776755835
    
        NodeGroups:
          Name:        k8s-default-group
          Health:      Healthy (ready=1 unready=0 notStarted=0 longNotStarted=0 registered=1 longUnregistered=0 cloudProviderTarget=2 (minSize=1, maxSize=5))
                       LastProbeTime:      2019-09-03 09:53:35.645663163 +0000 UTC m=+276.303106480
                       LastTransitionTime: 2019-09-03 09:49:29.11931241 +0000 UTC m=+29.776755835
          ScaleUp:     InProgress (ready=1 cloudProviderTarget=2)
                       LastProbeTime:      2019-09-03 09:53:35.645663163 +0000 UTC m=+276.303106480
                       LastTransitionTime: 2019-09-03 09:50:20.028647773 +0000 UTC m=+80.686091197
          ScaleDown:   NoCandidates (candidates=0)
                       LastProbeTime:      2019-09-03 09:53:35.645663163 +0000 UTC m=+276.303106480
                       LastTransitionTime: 2019-09-03 09:49:29.11931241 +0000 UTC m=+29.776755835
    
    kind: ConfigMap
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/last-updated: 2019-09-03 09:53:35.841671859 +0000
          UTC
      creationTimestamp: 2019-09-03T08:59:31Z
      name: cluster-autoscaler-status
      namespace: kube-system
      resourceVersion: "426684212"
      selfLink: /api/v1/namespaces/kube-system/configmaps/cluster-autoscaler-status
      uid: 248a8014-ce29-11e9-8a51-f220cd8c2e67
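
    If you only need the target Node count, you can, for example, filter the status field instead of reading the full ConfigMap:

    $ kubectl --kubeconfig $KUBE_CONFIG get cm cluster-autoscaler-status -n kube-system -o jsonpath='{.data.status}' | grep cloudProviderTarget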
    

    Once the process is completed, you can confirm that a Node was added.

    $ kubectl --kubeconfig $KUBE_CONFIG get nodes
    
    NAME             STATUS    ROLES     AGE       VERSION
    nks-worker-5uh   Ready     node      3h        v1.12.7
    nks-worker-5ad   Ready     node      32s       v1.12.7
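
    Conversely, when the load test ends and the number of Pods drops back down, Cluster Autoscaler removes the extra Node once it has been underutilized for the scale-down delay (typically around 10 minutes with the default settings). You can watch the Nodes change with:

    $ kubectl --kubeconfig $KUBE_CONFIG get nodes -w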
    
