Available in VPC
You can check the list of node pools within a cluster and perform management tasks on individual node pools, such as viewing their details, deleting them, upgrading them, and changing the number of worker nodes.
View the list of node pools
List of all node pools
To view the list of all node pools created within the service, navigate to Services > Containers > Ncloud Kubernetes Service > Node pools in the VPC environment of the NAVER Cloud Platform console.
List of node pools within individual clusters
To view the list of node pools within each cluster:
- From the VPC environment on the NAVER Cloud Platform console, navigate to Services > Containers > Ncloud Kubernetes Service.
- From the list of clusters, click the row of the cluster whose node pools you want to check.
View node pool details
To view the details of a node pool, click its name in the list of node pools.
Add node pools
To add a new node pool to a cluster:
- Click the individual cluster row from the list of clusters.
- In the cluster details tab, click [Add] in the node pool area.
- Enter the node pool name, server image name, server type, number of nodes, Kubernetes label, taint, subnet, and Node IAM Role, and then click [Next].
- After reviewing the node pool settings, click [Create].
Change the number of worker nodes in node pools
Change the number of worker nodes manually
You can manually change the number of worker nodes in a cluster node pool, or set it to be scaled automatically using the Cluster Autoscaler.
To manually change the number of worker nodes in a cluster node pool:
- Click the individual cluster row from the list of clusters.
- Under the cluster details tab, click the name of the node pool you want to edit from the list of node pools.
- On the node pool details page, click [Edit].
- In the settings popup window, click Not Set and enter the number of worker nodes.
- Click [Edit].
Remove specific worker nodes
When reducing the number of nodes in a node pool, nodes are removed in the following order of priority.
- Stopped nodes
- Nodes with earlier creation dates
To remove a specific node from a cluster node pool:
- Click the individual cluster row from the list of clusters.
- Under the cluster details tab, click the name of the node pool from the list of node pools.
- On the node pool details page, click [Delete] for the target node.
Alternatively, you can stop the target node and edit the number of nodes.
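Before removing a node through the console, you may want to evict its pods gracefully so workloads are rescheduled onto other nodes first. A minimal sketch using kubectl; the node name below is a hypothetical example, and `$KUBE_CONFIG` is assumed to point at your cluster's kubeconfig file:

```shell
# Cordon and drain the target node so its pods are evicted gracefully.
# "nks-pool-w-1" is a hypothetical node name; replace it with the node to remove.
kubectl --kubeconfig $KUBE_CONFIG drain nks-pool-w-1 \
  --ignore-daemonsets \
  --delete-emptydir-data
```

Once the drain completes, deleting the node from the node pool no longer disrupts running pods.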
Automatically scale the number with the Cluster Autoscaler
To automatically scale the number of worker nodes using Cluster Autoscaler, see Using Cluster Autoscaler.
Edit the Kubernetes Label and Taint of a node pool
To edit the Kubernetes label and taint of a node pool:
- Click the individual cluster row from the list of clusters.
- Under the cluster details tab, click the name of the node pool you want to edit from the list of node pools.
- On the node pool details page, click [Edit] below the Kubernetes Label or Taint list.
- Change the Kubernetes Label or Taint, and then click [OK].
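After the change is applied, you can verify the label and taint from the Kubernetes API. A sketch assuming kubectl access via `$KUBE_CONFIG`; the node name is a hypothetical example:

```shell
# Show the labels of all nodes.
kubectl --kubeconfig $KUBE_CONFIG get nodes --show-labels

# Show the taints of a specific node ("nks-pool-w-1" is a hypothetical name).
kubectl --kubeconfig $KUBE_CONFIG get node nks-pool-w-1 \
  -o jsonpath='{.spec.taints}'
```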
Delete node pools
To delete a cluster node pool:
- Click the individual cluster row from the list of clusters.
- Under the cluster details tab, click the name of the node pool you want to delete from the list of node pools.
- On the node pool details page, click [Delete].
- In the confirmation popup window, enter the node pool name, and then click [Delete].
Identify node pools through node labels
Each node carries a label identifying the node pool it belongs to, displayed in the form "ncloud.com/nks-nodepool: {NodePool name}." To view node labels, use the "--show-labels" option. You can use this label to identify node pools and deploy pods according to each node pool's specifications.
To check the node labels, run the following command.
$ kubectl --kubeconfig $KUBE_CONFIG get nodes --show-labels
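You can also filter nodes by node pool with a label selector, or pin pods to a specific node pool with a nodeSelector. A sketch assuming a hypothetical node pool named "my-pool":

```shell
# List only the nodes that belong to the node pool "my-pool" (hypothetical name).
kubectl --kubeconfig $KUBE_CONFIG get nodes -l ncloud.com/nks-nodepool=my-pool

# Deploy a pod that is scheduled only onto that node pool via nodeSelector.
cat <<EOF | kubectl --kubeconfig $KUBE_CONFIG apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pool-pinned-pod
spec:
  nodeSelector:
    ncloud.com/nks-nodepool: my-pool
  containers:
  - name: app
    image: nginx
EOF
```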
Upgrade node pools
To upgrade a node pool's version, you can specify the number of max-surge-upgrade nodes and max-unavailable-upgrade nodes. A node pool version upgrade proceeds as follows.
- New nodes running the new version, up to the number of max-surge-upgrade nodes, are created, and the upgrade waits until they are registered.
- The node to be replaced is configured so that no more pods can be scheduled to it, and its existing pods are relocated to other nodes.
- The nodes to be replaced are removed.
- The actions above are repeated until all the nodes are replaced with the newer version.
Check before an upgrade
Before upgrading a node pool, review the following items that may affect your service.
- Changes in the new version: To check if the changes in the new version may affect your service, see Kubernetes changelog.
- Version skew policy: To check the version compatibility among clusters and their components, see Version Skew Policy.
- Admission Webhook: If there are webhooks present in the cluster, an upgrade may stall. To take necessary measures before upgrading, see Dynamic Admission Control.
- Secure available servers: For a stable upgrade, secure as many available servers as the number of max-surge-upgrade nodes.
- Check changes to node information: Upgrading a node pool initializes or changes existing node-related information. Before performing the upgrade, check information such as local volume details, node names, IP addresses, and labels.
When upgrading a node pool, worker nodes are replaced. As a result, existing node IPs and settings are not retained, and existing information cannot be reused or reassigned after the upgrade. To minimize the impact on your service, check the relevant information and settings before upgrading.
Note the following to configure the details related to the upgrade.
- PodDisruptionBudget: To maintain the pods in service at a ratio or number you want even during the cluster upgrade, see Specifying a Disruption Budget.
- Readiness Probe: See Define readiness probes to adjust the settings so that only pods in a serviceable state are accessible through Ncloud Kubernetes Service resources when pods are redeployed during node pools replacement.
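For instance, a PodDisruptionBudget can keep a minimum number of replicas in service while nodes are drained during the upgrade. A minimal sketch, assuming a workload whose pods carry the hypothetical label app=my-app:

```shell
# Require at least 2 pods with label app=my-app to remain available
# during voluntary disruptions such as node drains.
# "my-app-pdb" and "app=my-app" are hypothetical names.
kubectl --kubeconfig $KUBE_CONFIG create poddisruptionbudget my-app-pdb \
  --selector=app=my-app \
  --min-available=2
```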
How to upgrade
To upgrade a node pool:
- Click the row of the cluster with the node pools to upgrade from the list of clusters.
- In the cluster details tab, click [Upgrade] under the name of the node pool to upgrade.
- Set the number of max-surge-upgrade nodes and max-unavailable-upgrade nodes before performing the upgrade.
- Max surge upgrade nodes: the number of nodes that can be added beyond the node pool size during an upgrade. The default value is 1, and you can specify a minimum of 0 to a maximum of 5.
- Max unavailable upgrade nodes: the number of nodes that can be unavailable during the upgrade. The default value is 0, and you can specify a maximum of 5.
During an upgrade, the number of nodes can fall below the existing node pool size by up to the number of max-unavailable-upgrade nodes, and rise above it by up to the number of max-surge-upgrade nodes. The number of nodes replaced simultaneously is the sum of the two values.
Examples of setting the number of nodes
- Example 1: Node pool size is 5, the max-surge-upgrade node number is 1, and the max-unavailable-upgrade node number is 0.
One more node than the node pool size can be created, so there can be up to 6 nodes.
The node count cannot fall below the existing node pool size.
Since nodes are replaced one by one, the upgrade is relatively slow but stable.
- Example 2: Node pool size is 5, the max-surge-upgrade node number is 5, and the max-unavailable-upgrade node number is 0.
Up to 5 more nodes than the node pool size can be created, so there can be up to 10 nodes.
The node count cannot fall below the existing node pool size.
Since many nodes are added at once, the upgrade is relatively fast while remaining stable.
- Example 3: Node pool size is 5, the max-surge-upgrade node number is 0, and the max-unavailable-upgrade node number is 5.
The node count can fall below the existing node pool size, down to as few as 0 nodes.
The node count cannot rise above the existing node pool size.
Since many nodes can be replaced at once, the upgrade is fast, but the cluster may become unstable.