Cluster access and management
Available in VPC
Once cluster creation is complete, you can view the list of clusters on the Ncloud Kubernetes Service dashboard and access them. From the cluster list, you can perform tasks on an individual cluster, such as viewing cluster details, resetting the configuration file, and deleting the cluster.
From the VPC environment of the NAVER Cloud Platform console, click the Services > Container > Ncloud Kubernetes Service menus in order to see the list of created clusters.
Access cluster
You can access the created cluster using the kubectl command, as described below.
From the VPC environment of the NAVER Cloud Platform console, click the Services > Container > Ncloud Kubernetes Service menus in order.
Click the row of the cluster you want to access from the cluster list, and set up the configuration file according to its authentication method.
- Admin authentication: Click the [Download] button to download the cluster's configuration file.
- IAM authentication: Click the [IAM authentication guide] button to install `ncp-iam-authenticator`, and then create the kubeconfig file.

Specify the downloaded configuration file's path with the `--kubeconfig` option of the `kubectl` command, and run the command as below.

```
$ kubectl --kubeconfig "Configuration file" get nodes
```
Note: You can also manually add the configuration file to the path `$HOME/.kube/config`. Refer to the code shown below and set the environment variable according to your OS.
- Set the environment variable `$KUBE_CONFIG` in Mac or Linux OS

```
$ export KUBE_CONFIG="${HOME}/Downloads/kubeconfig-1865.yaml"
$ echo $KUBE_CONFIG
/Users/azamara/Downloads/kubeconfig-1865.yaml
$ kubectl --kubeconfig $KUBE_CONFIG get nodes
```

- Set the environment variable `$KUBE_CONFIG` in Windows PowerShell

```
> $KUBE_CONFIG=$HOME+"\Downloads\kubeconfig-1865.yaml"
> $KUBE_CONFIG
C:\Users\NAVER\Downloads\kubeconfig-1865.yaml
> kubectl --kubeconfig $KUBE_CONFIG get nodes
```

- Set the environment variable `KUBE_CONFIG` in the Windows command prompt

```
> SET KUBE_CONFIG=%USERPROFILE%\Downloads\kubeconfig-1865.yaml
> kubectl --kubeconfig %KUBE_CONFIG% get nodes
```
The result will be shown as in the example below if it is correctly configured.

```
NAME                  STATUS   ROLES   AGE
nks-pool-0000-w-001   Ready    node    5h22m
nks-pool-0000-w-002   Ready    node    5h22m
nks-pool-0000-w-003   Ready    node    5h22m
```
View cluster details
Click an individual cluster row from the cluster list to view the details.
- Click the [View guide] button to see the guide on how to access the cluster. For details on what the guide covers, see Access cluster.
- You can click the [Edit] button on the right side of Audit log to change its activation status.
Once Audit log is enabled, logs are collected through Cloud Log Analytics. However, you must first subscribe to the Cloud Log Analytics service to enable it (refer to the CLA Guide).
Reset the kubeconfig file
You can reset the kubeconfig configuration file. Once it is reset, the existing file can no longer be used; you must download and use the new configuration file created by the reset.
The following describes how to reset the kubeconfig configuration file.
- Click the individual cluster row from the cluster list.
- From the Cluster details tab, click the [Reset] button.
- Enter the cluster's name where the authentication file is to be reset, and then click the [Reset] button.
Delete cluster
The following shows how to delete a cluster.
- Click the row of the cluster you want to delete from the cluster list.
- Click the [Delete] button on the upper left of the cluster list.
- Enter the cluster's name in the confirmation pop-up window, and then click the [Delete] button.
Use of Cluster Autoscaler
Using Cluster Autoscaler allows you to automatically scale clusters up or down. The Autoscaler automatically increases the number of nodes if the cluster currently doesn't have enough resources compared to the amount of pod resources requested by users. It also reduces the number of nodes if a specific node's usage is maintained at a low level for a certain amount of time.
The following describes how to use the Cluster Autoscaler.
- Click the individual cluster row from the cluster list.
- Click the name of the nodepool to which you want to apply the Autoscaler from the nodepool list displayed under the Cluster details tab.
- Click the [Edit] button.
- From the Settings pop-up window, click the [Settings] button, and then specify Minimum number of nodes and Maximum number of nodes.
- Click the [Edit] button.
Take the following into account when using the Autoscaler.
- It takes between one and five minutes to start or stop the Autoscaler.
- The number of nodes can't be changed manually while the Autoscaler is in use; to change it manually, set the feature to Not set.
- The Autoscaler is only applied to the nodepools with the feature set in the cluster.
- Cluster Autoscaler does not scale down underutilized nodes in the following cases.
  - When pods are not controlled by a controller object, such as a Deployment or StatefulSet
  - When local storage is specified
  - When pods can't be moved to another node
  - When the annotation `"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"` is set
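These scale-down blockers can be checked mechanically. The following is a minimal Python sketch, assuming pods are represented as plain dicts that mirror Pod manifests; a real check would inspect live API objects, and conditions such as pod affinity or PodDisruptionBudgets are omitted here.

```python
def blocks_scale_down(pod):
    """Return True if this pod (a dict mirroring a Pod manifest) would
    prevent Cluster Autoscaler from draining and removing its node."""
    metadata = pod.get("metadata", {})
    annotations = metadata.get("annotations", {})
    # Explicit opt-out annotation
    if annotations.get("cluster-autoscaler.kubernetes.io/safe-to-evict") == "false":
        return True
    # Pods not managed by a controller (Deployment, StatefulSet, ...) have
    # no ownerReferences and can't be safely recreated elsewhere
    if not metadata.get("ownerReferences"):
        return True
    # Pods using local storage can't be moved to another node
    for volume in pod.get("spec", {}).get("volumes", []):
        if "emptyDir" in volume or "hostPath" in volume:
            return True
    return False

# Hypothetical example pods
managed_pod = {
    "metadata": {"ownerReferences": [{"kind": "ReplicaSet", "name": "web-6d4"}]},
    "spec": {"volumes": [{"name": "cfg", "configMap": {"name": "web-cfg"}}]},
}
pinned_pod = {
    "metadata": {
        "ownerReferences": [{"kind": "StatefulSet", "name": "db"}],
        "annotations": {"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"},
    },
    "spec": {},
}
print(blocks_scale_down(managed_pod))  # False
print(blocks_scale_down(pinned_pod))   # True
```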
For more information on the Autoscaler's characteristics, refer to Autoscaler FAQ.
Check proper Autoscaler operation
Run the command shown below to check if the Autoscaler works properly.
```
$ kubectl --kubeconfig $KUBE_CONFIG get cm cluster-autoscaler-status -o yaml -n kube-system
```
An example of the result when it works properly is as shown below.
```
$ kubectl --kubeconfig $KUBE_CONFIG get cm cluster-autoscaler-status -o yaml -n kube-system
apiVersion: v1
data:
  status: |+
    Cluster-autoscaler status at 2019-09-03 08:59:53.84165088 +0000 UTC:
    Cluster-wide:
      Health: Healthy (ready=1 unready=0 notStarted=0 longNotStarted=0 registered=1 longUnregistered=0)
        LastProbeTime: 2019-09-03 08:59:53.70167178 +0000 UTC m=+23.846174142
        LastTransitionTime: 2019-09-03 08:59:43.520248394 +0000 UTC m=+13.664750787
      ScaleUp: NoActivity (ready=1 registered=1)
        LastProbeTime: 2019-09-03 08:59:53.70167178 +0000 UTC m=+23.846174142
        LastTransitionTime: 2019-09-03 08:59:43.520248394 +0000 UTC m=+13.664750787
      ScaleDown: NoCandidates (candidates=0)
        LastProbeTime: 2019-09-03 08:59:53.70167178 +0000 UTC m=+23.846174142
        LastTransitionTime: 2019-09-03 08:59:43.520248394 +0000 UTC m=+13.664750787
    NodeGroups:
      Name: k8s-default-group
      Health: Healthy (ready=1 unready=0 notStarted=0 longNotStarted=0 registered=1 longUnregistered=0 cloudProviderTarget=1 (minSize=1, maxSize=5))
        LastProbeTime: 2019-09-03 08:59:53.70167178 +0000 UTC m=+23.846174142
        LastTransitionTime: 2019-09-03 08:59:43.520248394 +0000 UTC m=+13.664750787
      ScaleUp: NoActivity (ready=1 cloudProviderTarget=1)
        LastProbeTime: 2019-09-03 08:59:53.70167178 +0000 UTC m=+23.846174142
        LastTransitionTime: 2019-09-03 08:59:43.520248394 +0000 UTC m=+13.664750787
      ScaleDown: NoCandidates (candidates=0)
        LastProbeTime: 2019-09-03 08:59:53.70167178 +0000 UTC m=+23.846174142
        LastTransitionTime: 2019-09-03 08:59:43.520248394 +0000 UTC m=+13.664750787
kind: ConfigMap
metadata:
  annotations:
    cluster-autoscaler.kubernetes.io/last-updated: 2019-09-03 08:59:53.84165088 +0000 UTC
  creationTimestamp: 2019-09-03T08:59:31Z
  name: cluster-autoscaler-status
  namespace: kube-system
  resourceVersion: "426558451"
  selfLink: /api/v1/namespaces/kube-system/configmaps/cluster-autoscaler-status
  uid: 248a8014-ce29-11e9-8a51-f220cd8c2e67
```
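The status text above can also be checked programmatically, for example to alert on an unhealthy cluster. The following is a minimal Python sketch; the field names follow the example output above, and the ConfigMap would normally be fetched with kubectl or a Kubernetes client rather than embedded as a string.

```python
import re

def parse_autoscaler_status(text):
    """Extract the cluster-wide Health, ScaleUp, and ScaleDown states
    from the plain-text status stored in the cluster-autoscaler-status
    ConfigMap (format as in the example output above)."""
    result = {}
    for field in ("Health", "ScaleUp", "ScaleDown"):
        match = re.search(rf"^\s*{field}:\s+(\w+)", text, re.MULTILINE)
        result[field] = match.group(1) if match else None
    return result

# Abbreviated sample of the status text shown above
sample = """\
Cluster-wide:
  Health: Healthy (ready=1 unready=0)
  ScaleUp: NoActivity (ready=1 registered=1)
  ScaleDown: NoCandidates (candidates=0)
"""
print(parse_autoscaler_status(sample))
# {'Health': 'Healthy', 'ScaleUp': 'NoActivity', 'ScaleDown': 'NoCandidates'}
```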
Cluster upgrade
You can upgrade a cluster to replace the managed master (control plane) with a new version. The post-upgrade version is applied to nodepools added after the upgrade. Viewing, modifying, or deleting resources may be temporarily unavailable during the upgrade.
Pre-upgrade check
See the following and check elements that may affect the service before upgrading the cluster.
- Changes in new version: Refer to Kubernetes changelog and check if the changes in the new version may affect your service.
- Admission webhook: If there are webhooks present in the cluster, the upgrade may stall. Refer to Dynamic Admission Control and take the necessary measures before upgrading.
- Secure available servers: Secure three or more available servers for stability of the upgrade.
You can set details related to the upgrade by referencing the following.
- PodDisruptionBudget: You can maintain the pods in service at a ratio or number you want, even during the cluster upgrade. Refer to Specifying a Disruption Budget for your Application.
- Readiness probe: You can adjust the settings so that only the pods in a serviceable state are accessible through Ncloud Kubernetes Service resources when pods are redeployed during the nodepool replacement. Refer to Define readiness probes.
How to upgrade
The following describes how to upgrade a cluster.
- Click the cluster row you want to upgrade from the cluster list.
- From the Cluster details tab, click the [Upgrade] button.
- Select the version to upgrade to from the Settings pop-up window, and enter the cluster's name.
- Click the [Upgrade] button.
Configure IP ACLs for Control Plane
You can restrict access to the Kubernetes Control Plane based on public IP addresses.
In the Kubernetes Console, select the cluster whose IP Access Control List (IP ACL) you want to edit, and click the [Endpoint] > [Edit IP ACL] button.
Default action
If the public IP of a client attempting to access the Control Plane is neither allowed nor denied by the rules registered in the IP ACL (Access Control List), access is filtered according to the default action.
IP ACL (Access Control List)
You can set IP ACLs for the Kubernetes Control Plane based on public IP addresses. You can register up to 20 rules.
- Access source: Enter an IPv4-based CIDR block.
- Action: Select how to handle access from IPs within the access source CIDR.
- Memo: You can write a memo about the rule.
Example
Case 1) Blocking all public IP access and accessing the Kubernetes Control Plane only from VM instances in the VPC
- Set Default action to deny.
- No rules are registered in IP ACL.
With the settings above, all access via public IP is blocked. When accessing the Control Plane from a VM instance within the VPC network, the public IP is not used, so the IP ACL does not apply.
Case 2) Blocking access from a specific public IP (143.248.142.77) while allowing access from all other public IPs
- Set Default action to allow.
- Register a rule in the IP ACL with access source 143.248.142.77/32 and action deny.
With the settings above, access from 143.248.142.77 is denied, while access from all other public IPs is allowed by the default action.
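The two cases above can be modeled as a small rule-evaluation function. The following is a minimal Python sketch, assuming the first rule whose CIDR contains the client IP wins and the default action applies when no rule matches; the service's actual rule-precedence behavior is not specified here.

```python
import ipaddress

def evaluate_acl(client_ip, rules, default_action):
    """rules: list of (cidr, action) pairs; returns "allow" or "deny".
    Assumption: the first matching CIDR wins; otherwise the default
    action is applied, as described for the IP ACL above."""
    addr = ipaddress.ip_address(client_ip)
    for cidr, action in rules:
        if addr in ipaddress.ip_network(cidr):
            return action
    return default_action

# Case 1: default deny, no rules -> all public IP access is blocked
print(evaluate_acl("203.0.113.5", [], "deny"))             # deny

# Case 2: default allow, one deny rule for 143.248.142.77/32
rules = [("143.248.142.77/32", "deny")]
print(evaluate_acl("143.248.142.77", rules, "allow"))      # deny
print(evaluate_acl("198.51.100.10", rules, "allow"))       # allow
```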