Let's create an example pod that runs a small GPU computation to make sure everything is working as expected.

$ cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vectoradd
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-vectoradd
    image: "nvidia/samples:vectoradd-cuda11.6.0-ubuntu18.04"
    resources:
      limits:
        nvidia.com/gpu: 1
EOF
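The sample container exits as soon as the vector-add test finishes, so the pod should reach the Completed state shortly after it is scheduled. One way to confirm that before reading the logs is to watch its status (press Ctrl-C once STATUS shows Completed):

$ kubectl get pod cuda-vectoradd --watch

Then inspect the logs: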
$ kubectl logs pod/cuda-vectoradd
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done

If you see Test PASSED in the output, you can be confident that GPU compute is set up correctly on your Kubernetes cluster.
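Besides the pod logs, you can also confirm that the node itself advertises the GPU resource to the scheduler. This assumes the NVIDIA device plugin is running; the node name below is a placeholder for one of your GPU nodes. Both the Capacity and Allocatable sections of the node description should list nvidia.com/gpu:

$ kubectl describe node <gpu-node-name> | grep nvidia.com/gpu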

Next, clean up the pod.

$ kubectl delete pod cuda-vectoradd
pod "cuda-vectoradd" deleted