Troubleshoot networking issues


Available in VPC

You may encounter the following problems when using Ncloud Kubernetes Service. Check the cause and solution for each problem and take the appropriate action.

A target group exists that you didn't create

A target group that you did not create exists.

Cause

If spec.defaultBackend is not specified in the Ingress manifest mapped to the ALB, a target group targeting port 80 of the worker node group is created. Such target groups are created intentionally and do not affect the ALB's operation.

Solution

This is intended behavior, as explained in the cause section, so no action is required.
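
If you do want to control the default backend explicitly rather than rely on the port-80 default, spec.defaultBackend can be set in the Ingress manifest. A minimal sketch, with illustrative resource names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress           # illustrative name
spec:
  # Requests that match no rule go to this Service instead of
  # the automatically created port-80 worker node target group
  defaultBackend:
    service:
      name: example-service       # illustrative Service name
      port:
        number: 8080
```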

NetworkPolicy errors

NetworkPolicy is not operating properly.

Cause

When you use the Cilium CNI, some NetworkPolicy features are known to be unsupported.

Solution

Do not use the NetworkPolicy features that are known to be unsupported when using the Cilium CNI.

Cilium's Connectivity Test failed

Cilium fails the connectivity test.

Cause

Cilium provides a connectivity test set for checking the network status. If you run the test set as is on Ncloud Kubernetes Service, the following two tests always fail. This is a known issue that occurs when node-local-dns is bound to a link-local IP.

Solution

For proper DNS resolution, you must add the IP range used by node-local-dns to the toCIDR field of the CiliumNetworkPolicy.

# pod-to-a-allowed-cnp
  egress:
  - toPorts:
    - ports:
      - port: "53"
        protocol: ANY
    toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toCIDR:
    - 169.254.25.10/32

# pod-to-external-fqdn-allow-google-cnp
  egress:
  - toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: '*'
    toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toCIDR:
    - 169.254.25.10/32
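
For reference, the egress snippets above are fragments of a CiliumNetworkPolicy and sit under spec in a full manifest. A minimal sketch for the first test, where the endpointSelector labels are illustrative:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: pod-to-a-allowed-cnp
spec:
  endpointSelector:
    matchLabels:
      name: pod-to-a              # illustrative selector
  egress:
  - toPorts:
    - ports:
      - port: "53"
        protocol: ANY
    toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    # Add the node-local-dns link-local IP so DNS resolution works
    toCIDR:
    - 169.254.25.10/32
```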

Pending status persisting after creating LoadBalancer-type service resource

I have created a service resource of the LoadBalancer type, but it remains in the Pending status.

Cause

This problem occurs because an External-IP cannot be assigned when the Load Balancer subnet selected during cluster creation has been deleted or has no IP addresses available for assignment.

Solution

You must change the default Load Balancer subnet, or select a different Load Balancer subnet.

To change the default Load Balancer subnet, follow these steps:

  1. Run the following command to check that the configmap named ncloud-config exists in the kube-system Namespace.

    $ kubectl --kubeconfig $KUBE_CONFIG get configmap ncloud-config -n kube-system
    NAME            DATA   AGE
    ncloud-config   9      131m
    
  2. Run the following command to view the contents of the configmap, then check and edit each field as needed.

    $ kubectl --kubeconfig $KUBE_CONFIG get configmap ncloud-config -n kube-system -o yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ncloud-config
      namespace: kube-system
    data:
      acgNo: "12345"
      apiUrl: https://nks.apigw.ntruss.com
      basePath: /ncloud-api/v1
      lbPublicSubnetNo: "98765"
      lbSubnetNo: "12345"
      regionCode: KR
      regionNo: "11"
      vpcNo: "12345"
      zoneNo: "110"
    
    • data.acgNo: enter the instance ID of the ACG assigned to the eth0 interface of the worker node
    • data.apiUrl: enter https://nks.apigw.ntruss.com
    • data.basePath: enter /ncloud-api/v1
    • data.lbPublicSubnetNo: enter the SubnetID of the public Load Balancer subnet in the VPC to which the worker node is assigned
    • data.lbSubnetNo: enter the SubnetID of the private Load Balancer subnet in the VPC to which the worker node is assigned
    • data.regionCode: enter the Region code where the worker node is located (for example, "FKR")
    • data.regionNo: enter the Region number of the worker node (for example, "11")
    • data.vpcNo: enter the ID of the VPC to which the worker node is assigned
    • data.zoneNo: enter the Zone number where the worker node is located (for example, "110")
  3. Run the following command to change the Load Balancer subnet:

    $ kubectl --kubeconfig $KUBE_CONFIG -n kube-system patch configmap ncloud-config --type='json' -p='[{"op":"replace", "path":"/data/lbSubnetNo", "value": "94465"}]'
    
    • This is an example command using 94465 as a subnet ID.
    • Use the SubnetID of the Load Balancer subnet in the VPC where the Kubernetes worker node is created. If you use an invalid SubnetID, Load Balancer cannot be properly created after the subnet is changed.
  4. After the change is completed, run the following command to check whether the change has been applied to the configmap.

    $ kubectl --kubeconfig $KUBE_CONFIG get configmap ncloud-config -n kube-system -o yaml
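
In the output, data.lbSubnetNo should now show the new subnet ID, for example:

```yaml
data:
  lbSubnetNo: "94465"
```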
    

Errors in Application Load Balancer (ALB) integrated with Ingress

Application Load Balancer (ALB) integrated with Ingress is not created normally, or doesn't work.

Cause

  • ALB Ingress Controller pods may not work, or there may be internal errors.
  • If no ALB Ingress Controller exists, ALB cannot be created.
  • If the Ingress manifest has an error, the ALB or related resources may not be created.
  • If you change the created ALB through the console or API, a problem may occur.
  • If any worker node in the cluster is not in the Operating status, a problem may occur.

Solution

View resources and logs
Check the status of the ALB created through ALB Ingress Controller. The commands to view the related resources and logs are as follows:

Note
  • The cluster whose hypervisor is KVM includes ALB Ingress Controller by default.

  • The controller is not exposed to users, so you must check the errors by viewing the Ingress resources.

$ kubectl --kubeconfig $KUBE_CONFIG logs -n kube-system -l app.kubernetes.io/name=alb-ingress-controller
$ kubectl --kubeconfig $KUBE_CONFIG describe ingress [ingress-name]

No ALB Ingress Controller
If no ALB Ingress Controller exists, ALB cannot be created. See ALB Ingress Controller setting guide to install ALB Ingress Controller in the cluster.

Check ALB Ingress Controller version
Creating a public Load Balancer through a Public LB Subnet requires ALB Ingress Controller v0.8.0 or higher. Check the version of the controller you are using and install the latest version.

Ingress Manifest errors
If the Ingress manifest has an error, the ALB or related resources may not be created. Check the pod logs of ALB Ingress Controller to identify the cause.

  • If the rule inside Manifest is inconsistent with the rule of the created ALB
  • If the service name or port written in the Manifest rule is incorrect
  • If the rules are in an incorrect order (The topmost rule is applied first.)
  • If an invalid annotation is used
    • Check if there is any typo or incorrect symbol when you set annotation
    • Check whether you use an annotation supported by the Ingress Controller version

When changing ALB through the console or API
If you change the created ALB through the console or API, a problem may occur. An ALB created through ALB Ingress Controller is regularly synchronized with the manifest of the Ingress registered in the Kubernetes cluster. It is recommended to avoid changing the ALB through the console or API. If you have changed the ALB, reapply the manifest to synchronize its status.

Cluster's worker node status problem
If any worker node in the cluster is not in the Operating status, the ALB may not operate properly. Check the status of the worker nodes and try again.

When creating Public LB
For a cluster that has no Public LB Subnet registered, you cannot create a public Load Balancer. To create a public Load Balancer in Ncloud Kubernetes Service, register a Public LB Subnet in the cluster.
To create a Public LB Subnet, see Subnet Management. Register the Public LB Subnet in the cluster and try again.

When creating Private LB

  • Check whether Private LB Subnet is registered in the Kubernetes cluster.
  • Check whether you are using the following annotation. For more information, see Set ALB Ingress Controller.
    alb.ingress.kubernetes.io/network-type: "true"
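
For reference, this annotation belongs under metadata.annotations in the Ingress manifest. A minimal sketch with illustrative names, using the annotation value as given above; check the Set ALB Ingress Controller guide for the values supported by your controller version:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-private-ingress   # illustrative name
  annotations:
    # Annotation value as given in this guide; verify against the
    # Set ALB Ingress Controller guide for your controller version
    alb.ingress.kubernetes.io/network-type: "true"
spec:
  defaultBackend:
    service:
      name: example-service       # illustrative Service name
      port:
        number: 80
```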

Occurrence of packet drop in the cluster after changing kernel parameters

Packet drop occurs in the cluster after I have changed kernel parameters.

Cause

When Cilium, the CNI of Ncloud Kubernetes Service, is started, rp_filter is dynamically disabled. If the changed parameters are inconsistent with the Cilium settings, packet drop may occur.

Solution

Before you change and apply the sysctl.conf file, you must first set the rp_filter parameter net.ipv4.conf.all.rp_filter to 0.
When you apply the sysctl.conf file to change specific kernel parameters of the worker node, be careful not to change the rp_filter setting. If you are unsure, contact Customer service on the NAVER Cloud Platform portal.
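
For example, a sysctl.conf excerpt that keeps rp_filter consistent with Cilium; the second parameter is only an illustrative placeholder for whatever change you intend to make:

```
# /etc/sysctl.conf (excerpt)
# Keep rp_filter disabled so it stays consistent with Cilium's settings
net.ipv4.conf.all.rp_filter = 0
# Illustrative example of another parameter you might be changing
net.core.somaxconn = 1024
```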

Changing Ingress certificates failed

I cannot change the certificate of Application LoadBalancer created through Ingress.
Changing the certificate of Application LoadBalancer created through Ingress fails.
Changes in the certificate of Application LoadBalancer created through Ingress are not applied.

Cause

  • If you set the annotation incorrectly, a problem may occur.
  • If the certificate is not available, a problem may occur.
  • If you change the certificate through the console, the certificate is not changed properly.
  • If the ALB Ingress Controller version does not support the multiple certificate annotation, a problem may occur.

Solution

For an Application Load Balancer created through Ingress, it is recommended to avoid making changes through the console. Change certificates through annotations, and check the annotations you have set.

  • Check whether there are any typos and whether symbols are used properly in the annotation
  • ALB Ingress Controller supports multiple certificates in version 0.8.0 or higher. Check the version of ALB Ingress Controller you are using.
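
For example, a certificate is typically switched by editing the certificate annotation in the Ingress manifest and reapplying it. The annotation key below is an assumption for illustration only; check the Set ALB Ingress Controller guide for the exact keys supported by your controller version:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress           # illustrative name
  annotations:
    # Assumed annotation key, shown for illustration; verify the
    # exact key in the Set ALB Ingress Controller guide
    alb.ingress.kubernetes.io/ssl-certificate-no: "1234"
```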

Incorrect Client IP

Client IP is incorrect. I want to view the actual user's Client IP.

Cause

Kubernetes performs SNAT when forwarding requests to a pod in the cluster. In this process, the client's actual IP may be changed.

Solution

Load Balancers that can be created through Ncloud Kubernetes Service are NetworkLoadBalancer (NLB), NetworkProxyLoadBalancer (NPLB), and ApplicationLoadBalancer (ALB). The client IP can be viewed in different ways depending on the Load Balancer type.

  • ALB: because this type uses the HTTP/HTTPS protocol, you can view the client IP through the X-Forwarded-For header in the application.
  • NPLB: you can enable proxy-protocol to view the client IP. You must enter related annotation in the service details and have the proxy-protocol settings enabled in the application. For example, if you are using nginx-ingress-controller, you must have the proxy-protocol setting of nginx-ingress-controller enabled.
  • NLB: the client IP can be displayed in the Load Balancer, but you must first configure the cluster. In the service details, set "externalTrafficPolicy" to "Local" to be able to view the client IP. For more information on the "externalTrafficPolicy" setting, see the official Kubernetes documentation.
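
For the NLB case, the setting goes in the Service manifest. A minimal sketch, where the names and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-nlb               # illustrative name
spec:
  type: LoadBalancer
  # Preserve the client source IP instead of applying SNAT;
  # traffic is only routed to pods on the receiving node
  externalTrafficPolicy: Local
  selector:
    app: example                  # illustrative selector
  ports:
  - port: 80
    targetPort: 8080
```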

Changing the target group's settings when creating ALB

I cannot change the target group's settings when creating ALB.

Cause

An error may occur because the related settings are not configured in the Service designated in the ALB rules.

Solution

You can change the settings through the details of the Service set as the Ingress's backend. Using annotations, you can set the load balancing algorithm and health check of the target group created through the Service.
For example, the following setting performs load balancing through the least-connection algorithm for the target groups created through the "naver" Service.

apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/algorithm-type: "least-connection"
  name: naver

For annotations available in Service and Ingress, see Setting ALB Ingress Controller.
