GPU nodes
Available in VPC
GPU nodes can be used as worker nodes in Ncloud Kubernetes Service.
Caution
- GPU nodes can only be used in the KR region.
- A cluster made up exclusively of GPU nodes is subject to usage restrictions: the default objects provided by Ncloud Kubernetes Service require a general (non-GPU) node.
- To use GPU nodes, add a GPU node pool to a cluster that already has a regular node pool.
Deploy NVIDIA GPU device plugin
Once a GPU node reaches the Running state in the cluster, deploy the NVIDIA device plugin by running the following commands:
1. Install Helm
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 \
&& chmod 700 get_helm.sh \
&& ./get_helm.sh
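You can verify the installation by checking the Helm client version:
helm version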
2. Add the nvidia-device-plugin Helm repository
helm repo add nvdp https://nvidia.github.io/k8s-device-plugin \
&& helm repo update
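To confirm the repository was added, you can search it for the device plugin chart:
helm search repo nvdp/nvidia-device-plugin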
3. Install the NVIDIA device plugin
helm install --generate-name nvdp/nvidia-device-plugin
To deploy the plugin only to the GPU node pool, use a node selector:
helm install --generate-name nvdp/nvidia-device-plugin --set nodeSelector."ncloud\.com/nks-nodepool"=[GPU node pool name]
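Once the release is installed, a quick check, assuming kubectl is configured for this cluster, is to confirm that the plugin DaemonSet is running and that the GPU node advertises nvidia.com/gpu among its allocatable resources:
kubectl get daemonset --all-namespaces | grep nvidia-device-plugin
kubectl describe node [GPU node name] | grep nvidia.com/gpu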
Note
The resources provided by NVIDIA are subject to change. Check the official NVIDIA and Kubernetes websites for the latest information.
Use device plugin
Note
- On GPU nodes, you can upgrade the NVIDIA SMI/CUDA driver; you can check the currently installed versions as shown below.
- You can upgrade the driver using a runfile. For detailed instructions, see Reinstall and upgrade GPU driver/CUDA.
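Before upgrading, you can check the driver and CUDA versions currently installed on a GPU node by running nvidia-smi on that node:
nvidia-smi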
Kubernetes provides a device plugin framework that lets pods access specialized hardware features such as GPUs.
For more information on using device plugins, see How to use device plugins on the official Kubernetes website.
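As an illustrative sketch (the pod name and image tag below are examples, not values from this guide), the following manifest requests one GPU through the nvidia.com/gpu resource exposed by the device plugin:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test                                 # example name
spec:
  restartPolicy: Never
  containers:
  - name: cuda-container
    image: nvidia/cuda:12.4.1-base-ubuntu22.04   # example CUDA base image
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1                        # schedules the pod onto a node with a free GPU
EOF
After the pod completes, running kubectl logs gpu-test should print the nvidia-smi output from the GPU node.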