[Bug 1249541] New: Issues while deploying the nvidia gpu operator following the official steps from: https://documentation.suse.com/cloudnative/rke2/latest/en/advanced.html#_operator_installation
bugzilla_noreply at suse.com
Fri Sep 12 13:00:30 UTC 2025
https://bugzilla.suse.com/show_bug.cgi?id=1249541
Bug ID: 1249541
Summary: Issues while deploying the nvidia gpu operator
following the official steps from:
https://documentation.suse.com/cloudnative/rke2/latest/en/advanced.html#_operator_installation
Classification: SUSE AI
Product: SUSE AI Application Containers
Version: unspecified
Hardware: x86-64
OS: SLES 15
Status: NEW
Severity: Normal
Priority: P5 - None
Component: documentation
Assignee: tbazant at suse.com
Reporter: tapas.nandi at suse.com
QA Contact: ai-maintainers at lists.suse.com
Target Milestone: ---
Found By: ---
Blocker: ---
While following the official documentation
"https://documentation.suse.com/cloudnative/rke2/latest/en/advanced.html#_operator_installation"
to install the GPU operator on RKE2 v1.32.8+rke2r1, we hit an issue where the
nvidia-container-toolkit-daemonset pod fails to run with the following error:
level=error msg="error running nvidia-toolkit: unable to setup runtime: unable
to restart containerd: unable to dial: dial unix
/runtime/sock-dir/containerd.sock: connect: no such file or directory"
===============================
On investigation, this happens despite the containerd socket being specified
during deployment, as below:
toolkit:
  env:
    - name: CONTAINERD_SOCKET
      value: /run/k3s/containerd/containerd.sock
This was tested on three different clusters.
The workaround is to manually edit the daemonset and set the correct hostPath
under volumes:
kubectl edit ds -n gpu-operator nvidia-container-toolkit-daemonset
----------------------------
Before:
- hostPath:
    path: /run/containerd
    type: ""
  name: containerd-socket
----------------------------
After:
- hostPath:
    path: /run/k3s/containerd
    type: ""
  name: containerd-socket
----------------------------
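The same change can also be applied non-interactively with a strategic merge
patch; this is a sketch that assumes the volume is named containerd-socket as
shown above (the patch triggers a rollout of the daemonset pods):
kubectl patch ds nvidia-container-toolkit-daemonset -n gpu-operator \
  --type strategic \
  -p '{"spec":{"template":{"spec":{"volumes":[{"name":"containerd-socket","hostPath":{"path":"/run/k3s/containerd"}}]}}}}'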
After this change, the pods start successfully.
Before:
suse-ai-n1:~ # kubectl get pods -n gpu-operator | grep nvidia-container-toolkit
nvidia-container-toolkit-daemonset-bjqsl   0/1   CrashLoopBackOff   12 (41s ago)   44m
nvidia-container-toolkit-daemonset-d5ktb   0/1   CrashLoopBackOff   12 (30s ago)   44m
nvidia-container-toolkit-daemonset-sj826   0/1   CrashLoopBackOff   12 (45s ago)   44m
After:
suse-ai-n1:~ # kubectl get pods -n gpu-operator | grep nvidia-container-toolkit
nvidia-container-toolkit-daemonset-s6qjg   1/1   Running            0              81s
nvidia-container-toolkit-daemonset-twtcs   1/1   Running            0              79s
nvidia-container-toolkit-daemonset-xhf7b   1/1   Running            0              82s
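For a scriptable verification instead of grepping the pod list, something like
the following should also work (sketch):
kubectl rollout status ds/nvidia-container-toolkit-daemonset -n gpu-operator --timeout=120s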
============================
Suggestion:
Add the workaround to the documentation, for example:
In case the deployment of the NVIDIA container toolkit fails and the toolkit
daemonset does not start, follow the steps below to resolve this:
Manually edit the daemonset and set the correct hostPath under volumes:
kubectl edit ds -n gpu-operator nvidia-container-toolkit-daemonset
----------------------------
Before:
- hostPath:
    path: /run/containerd
    type: ""
  name: containerd-socket
----------------------------
After:
- hostPath:
    path: /run/k3s/containerd
    type: ""
  name: containerd-socket