Access and manage Cluster

Available in VPC

Once cluster creation is complete, you can view the list of clusters on the Ncloud Kubernetes Service dashboard and access the clusters. From the cluster list, you can perform tasks on an individual cluster, such as viewing cluster details, resetting the kubeconfig file, and deleting the cluster.

From the VPC environment of the NAVER Cloud Platform console, click the menu icon > Services > Containers > Ncloud Kubernetes Service to see the list of created clusters.

Accessing cluster

You can access the created cluster using the kubectl command.

To access a cluster using the kubectl command, follow these steps:

  1. From the VPC environment of the NAVER Cloud Platform console, navigate to the menu icon > Services > Containers > Ncloud Kubernetes Service.

  2. Click the row of the cluster you want to access from the cluster list and set up the configuration file according to its authentication method.

    • Admin authentication: Click the [Download] button to download the cluster's configuration file.
    • IAM authentication: Click the [IAM authentication guide] button to install ncp-iam-authenticator and create the kubeconfig file.
    Note

    Admin authentication is only supported for clusters created before February 13, 2022.

  3. Run the kubectl command with the --kubeconfig option, specifying the path of the downloaded configuration file, as shown below:

    $ kubectl --kubeconfig "Configuration file" get nodes
    
    Note

    You can also merge the configuration file into the $HOME/.kube/config path manually; a command sketch follows the example output below.

  4. To set an environment variable for the configuration file path according to your OS, see the following examples:

    • Setting the environment variable $KUBE_CONFIG in Mac or Linux OS
    $ export KUBE_CONFIG="${HOME}/Downloads/kubeconfig-1865.yaml"
    $ echo $KUBE_CONFIG
    /Users/user/Downloads/kubeconfig-1865.yaml
    
    $ kubectl --kubeconfig $KUBE_CONFIG get nodes
    
    • Setting the environment variable $KUBE_CONFIG in Windows PowerShell
    > $KUBE_CONFIG=$HOME+"\Downloads\kubeconfig-1865.yaml"
    > $KUBE_CONFIG
    C:\Users\NAVER\Downloads\kubeconfig-1865.yaml
    > kubectl --kubeconfig $KUBE_CONFIG get nodes
    
    • Setting the environment variable $KUBE_CONFIG in Windows command prompt
    > SET KUBE_CONFIG=%USERPROFILE%\Downloads\kubeconfig-1865.yaml
    > kubectl --kubeconfig %KUBE_CONFIG% get nodes
    

If it is correctly configured, the result will look like the following example:

NAME                  STATUS   ROLES   AGE
nks-pool-0000-w-001   Ready    node    5h22m
nks-pool-0000-w-002   Ready    node    5h22m
nks-pool-0000-w-003   Ready    node    5h22m
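
For clusters that use IAM authentication, the kubeconfig file is generated with ncp-iam-authenticator instead of being downloaded. The following is only a minimal sketch; the region code, cluster UUID, and file names are placeholders, and the exact options may differ by ncp-iam-authenticator version, so follow the [IAM authentication guide] for the authoritative steps.

# Create a kubeconfig for an IAM-authenticated cluster (replace the region code and cluster UUID)
$ ncp-iam-authenticator create-kubeconfig --region KR --clusterUuid <cluster-uuid> > kubeconfig-nks.yaml
$ kubectl --kubeconfig kubeconfig-nks.yaml get nodes

You can also merge a downloaded configuration file into $HOME/.kube/config so that kubectl uses it without the --kubeconfig option. The sketch below assumes $HOME/.kube/config already exists and reuses the example file name from above; back up the original before overwriting it.

# Merge the example configuration file into the default kubeconfig
$ cp $HOME/.kube/config $HOME/.kube/config.bak
$ KUBECONFIG=$HOME/.kube/config:$HOME/Downloads/kubeconfig-1865.yaml kubectl config view --flatten > /tmp/merged-kubeconfig
$ mv /tmp/merged-kubeconfig $HOME/.kube/config
$ kubectl get nodes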

Viewing cluster details

To view the details, click an individual cluster row from the cluster list.

  • To see the guide on how to access the cluster, click the [View guide] button. For more information about the contents of the guide, see Accessing cluster.
  • You can click the [Edit] button on the right side of Audit log to change its activation status.
    Once Audit log is enabled, logs are collected by Cloud Log Analytics. Note that you must subscribe to the Cloud Log Analytics service before you can enable it. (See the CLA user guide.)

Resetting the kubeconfig file

You can reset the kubeconfig configuration file. Once you reset it, the existing file can no longer be used; you must download and use the new configuration file created by the reset.

To reset the kubeconfig configuration file, follow these steps:

  1. Click the individual cluster row from the cluster list.
  2. From the Cluster details tab, click the [Reset] button.
  3. Enter the name of the cluster whose authentication file will be reset, and click the [Reset] button.
Note

The kubeconfig file can only be reset for clusters that support Admin authentication.

Deleting clusters

To delete clusters, follow these steps:

  1. Click the row of the cluster you want to delete from the cluster list.
  2. Click the [Delete] button on the upper left of the cluster list.
  3. Enter the cluster's name in the confirmation popup window and click the [Delete] button.

Using Cluster Autoscaler

To automatically scale the number of worker nodes using Cluster Autoscaler, see Use Cluster Autoscaler.

Cluster upgrade

You can upgrade a cluster to replace the internally managed control plane (master) with a new version. The upgraded version is applied to node pools added after the upgrade. Viewing, editing, or deleting resources may be temporarily unavailable during the upgrade.

Pre-upgrade check

Before upgrading the cluster, review the following items, which may affect your service:

  • Changes in the new version: Refer to Kubernetes changelog and check if the changes in the new version may affect your service.
  • Version skew policy: To check the version compatibility between clusters and their components, see Version Skew Policy.
  • Admission Webhook: If admission webhooks are present in the cluster, the upgrade may become stuck. Refer to Dynamic Admission Control and take the necessary measures before upgrading.
  • Secure available servers: Secure 3 or more available servers for a stable upgrade.
  • Resource backup/recovery: kube-system resources such as the nodelocaldns and coredns ConfigMaps, as well as StorageClass, are not preserved during the upgrade and are replaced with new versions. If you have edited any of these resources, back them up and reapply them after the upgrade is complete (see the command sketch after this list).
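
The commands below are a minimal sketch of two of the checks above: listing the admission webhooks registered in the cluster, and backing up the kube-system ConfigMaps and StorageClasses that are replaced during the upgrade. The backup file names are arbitrary, and the nodelocaldns ConfigMap name can vary, so confirm the names in your cluster first.

# Check for admission webhooks that could block the upgrade
$ kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations

# Confirm the exact ConfigMap names used in your cluster, then back them up
$ kubectl -n kube-system get configmap
$ kubectl -n kube-system get configmap coredns -o yaml > coredns-backup.yaml
$ kubectl -n kube-system get configmap node-local-dns -o yaml > nodelocaldns-backup.yaml
$ kubectl get storageclass -o yaml > storageclass-backup.yaml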

To set details related to the upgrade, see the following:

  • PodDisruptionBudget: You can keep the number or ratio of pods you want in service even during the cluster upgrade. See Specifying a Disruption Budget.
  • Readiness Probe: You can adjust the settings so that only pods in a serviceable state receive traffic through Kubernetes Service resources when pods are redeployed during node pool replacement. See Define readiness probes. (A sketch of both settings follows this list.)
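
As a rough illustration of the two settings above, the sketch below creates a PodDisruptionBudget that keeps at least two pods of a hypothetical Deployment labeled app=my-app available while nodes are drained, and applies a Deployment manifest whose readiness probe ensures that Service traffic only reaches pods answering an HTTP health check. The names, image, port, and path are assumptions; adapt them to your workload.

# Keep at least 2 pods with the label app=my-app available during disruptions
$ kubectl create poddisruptionbudget my-app-pdb --selector=app=my-app --min-available=2

# Apply a Deployment whose readiness probe gates Service traffic
$ kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25
        ports:
        - containerPort: 80
        readinessProbe:            # traffic is routed only after this probe succeeds
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
EOF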

How to upgrade

To upgrade a cluster, follow these steps:

  1. Click the row of the cluster you want to upgrade from the cluster list.
  2. From the Cluster details tab, click [Upgrade].
  3. Select the version to upgrade to from the settings popup window and enter the cluster's name.
  4. Click [Upgrade].
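
Once the upgrade is complete, a simple sanity check is to confirm that the control plane reports the new version and to see which version each node is running; node pools added after the upgrade should report the new version. The kubeconfig path below reuses the example environment variable set earlier.

# Check the control plane (server) version and the version reported by each node
$ kubectl --kubeconfig $KUBE_CONFIG version
$ kubectl --kubeconfig $KUBE_CONFIG get nodes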

Configuring IP ACLs for Control Plane

You can restrict access to the Kubernetes Control Plane based on public IP addresses.

In the Ncloud Kubernetes Service console, select the cluster whose IP Access Control List (IP ACL) you want to edit, and click Endpoint > [Edit IP ACL].

Default action

If the public IP of a client attempting to access the Control Plane does not match any allow or deny rule registered in the IP Access Control List (ACL), access is handled according to the default action.

IP ACL (Access Control List)

You can set IP ACLs for the Kubernetes Control Plane based on public IP addresses. You can register up to 20 rules.

  • Access source: Enter an IPv4-based CIDR block.
  • Action: Select whether to allow or deny access from IPs within the access source CIDR.
  • Memo: Write a note about the rule being entered.

Example

Case 1) Blocking all access from public IPs and accessing the Kubernetes Control Plane only from VM instances in the VPC

  • Set default action to deny.
  • No rules are registered in IP ACL.

If set as above, all access using public IPs is blocked. Access to the Control Plane from VM instances within the VPC network does not use a public IP and is therefore not affected by the IP ACL.

Case 2) Blocking access from a specific public IP (143.248.142.77) and allowing access from all other public IPs

  • Set default action to allow.
  • Set IP ACL access source to 143.248.142.77/32 and action to deny.

If set as above, access from 143.248.142.77 is denied, while access from all other public IPs is allowed by the default action.
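
As a rough way to verify the rules, call the API server from the client whose public IP you want to test, reusing the example kubeconfig from earlier. A client allowed by the IP ACL receives a normal response, while a denied client typically fails with a connection error or timeout before reaching the API server.

# Run from the client whose public IP you want to test
$ kubectl --kubeconfig $KUBE_CONFIG get --raw /version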