<html>
    <head>
      <base href="https://bugzilla.suse.com/" />
    </head>
    <body><table border="1" cellspacing="0" cellpadding="8">
        <tr>
          <th>Bug ID</th>
          <td><a class="bz_bug_link 
          bz_status_NEW "
   title="NEW - Issues while deploying the nvidia gpu operator following the official steps from: https://documentation.suse.com/cloudnative/rke2/latest/en/advanced.html#_operator_installation"
   href="https://bugzilla.suse.com/show_bug.cgi?id=1249541">1249541</a>
          </td>
        </tr>

        <tr>
          <th>Summary</th>
          <td>Issues while deploying the nvidia gpu operator following the official steps from: https://documentation.suse.com/cloudnative/rke2/latest/en/advanced.html#_operator_installation
          </td>
        </tr>

        <tr>
          <th>Classification</th>
          <td>SUSE AI
          </td>
        </tr>

        <tr>
          <th>Product</th>
          <td>SUSE AI Application Containers
          </td>
        </tr>

        <tr>
          <th>Version</th>
          <td>unspecified
          </td>
        </tr>

        <tr>
          <th>Hardware</th>
          <td>x86-64
          </td>
        </tr>

        <tr>
          <th>OS</th>
          <td>SLES 15
          </td>
        </tr>

        <tr>
          <th>Status</th>
          <td>NEW
          </td>
        </tr>

        <tr>
          <th>Severity</th>
          <td>Normal
          </td>
        </tr>

        <tr>
          <th>Priority</th>
          <td>P5 - None
          </td>
        </tr>

        <tr>
          <th>Component</th>
          <td>documentation
          </td>
        </tr>

        <tr>
          <th>Assignee</th>
          <td>tbazant@suse.com
          </td>
        </tr>

        <tr>
          <th>Reporter</th>
          <td>tapas.nandi@suse.com
          </td>
        </tr>

        <tr>
          <th>QA Contact</th>
          <td>ai-maintainers@lists.suse.com
          </td>
        </tr>

        <tr>
          <th>Target Milestone</th>
          <td>---
          </td>
        </tr>

        <tr>
          <th>Found By</th>
          <td>---
          </td>
        </tr>

        <tr>
          <th>Blocker</th>
          <td>---
          </td>
        </tr></table>
        <p>
          <div>
          <pre>While following the official documentation
"<a href="https://documentation.suse.com/cloudnative/rke2/latest/en/advanced.html#_operator_installation">https://documentation.suse.com/cloudnative/rke2/latest/en/advanced.html#_operator_installation</a>"
to install the GPU operator on RKE2 v1.32.8+rke2r1, we hit an issue where
the nvidia-container-toolkit-daemonset pod fails to run with the following
error:

"level=error msg="error running nvidia-toolkit: unable to setup runtime: unable
to restart containerd: unable to dial: dial unix
/runtime/sock-dir/containerd.sock: connect: no such file or directory""
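
(Note: RKE2 ships its own embedded containerd, so on RKE2 nodes the socket
lives under /run/k3s/containerd rather than the default /run/containerd that
the toolkit falls back to. This can be checked directly on any cluster node,
run as root:

ls -l /run/k3s/containerd/containerd.sock

while /run/containerd/containerd.sock typically does not exist there.)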

===============================
On investigation, the error occurs even though the containerd socket was
explicitly specified during deployment as below:
    toolkit:
      env:
      - name: CONTAINERD_SOCKET
        value: /run/k3s/containerd/containerd.sock
===============================
This was reproduced on 3 different clusters.
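
For reference, the override above was supplied at install time roughly as
follows (a sketch, assuming a Helm-based install as in the linked
documentation and that the nvidia Helm repository is already added; the
values file and release names are illustrative):

cat > gpu-operator-values.yaml <<'EOF'
toolkit:
  env:
  - name: CONTAINERD_SOCKET
    value: /run/k3s/containerd/containerd.sock
EOF

helm upgrade --install gpu-operator nvidia/gpu-operator \
  -n gpu-operator --create-namespace \
  -f gpu-operator-values.yaml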

The workaround is to manually edit the daemonset and set the correct
hostPath under volumes:

kubectl edit ds -n gpu-operator nvidia-container-toolkit-daemonset

----------------------------
Before:
      - hostPath:
          path: /run/containerd
          type: ""
        name: containerd-socket
----------------------------

After:
      - hostPath:
          path: /run/k3s/containerd
          type: ""
        name: containerd-socket

----------------------------
After this change, the pods start successfully.

Before:
suse-ai-n1:~ # kubectl get pods -n gpu-operator | grep nvidia-container-toolkit
nvidia-container-toolkit-daemonset-bjqsl   0/1   CrashLoopBackOff   12 (41s ago)   44m
nvidia-container-toolkit-daemonset-d5ktb   0/1   CrashLoopBackOff   12 (30s ago)   44m
nvidia-container-toolkit-daemonset-sj826   0/1   CrashLoopBackOff   12 (45s ago)   44m

After:
suse-ai-n1:~ # kubectl get pods -n gpu-operator | grep nvidia-container-toolkit
nvidia-container-toolkit-daemonset-s6qjg   1/1   Running   0   81s
nvidia-container-toolkit-daemonset-twtcs   1/1   Running   0   79s
nvidia-container-toolkit-daemonset-xhf7b   1/1   Running   0   82s
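
For scripted environments, the same fix could also be applied
non-interactively with a strategic merge patch instead of kubectl edit (a
sketch; it assumes the volume is named containerd-socket, as shown in the
daemonset above):

kubectl -n gpu-operator patch ds nvidia-container-toolkit-daemonset \
  --type strategic \
  -p '{"spec":{"template":{"spec":{"volumes":[{"name":"containerd-socket","hostPath":{"path":"/run/k3s/containerd"}}]}}}}'

kubectl -n gpu-operator rollout status ds nvidia-container-toolkit-daemonset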

============================
Suggestion:
Add the following workaround to the documentation:

If the deployment of the NVIDIA container toolkit fails and the toolkit
daemonset does not start, follow the steps below to resolve it:

Manually edit the daemonset and set the correct hostPath under volumes:

kubectl edit ds -n gpu-operator nvidia-container-toolkit-daemonset

----------------------------
Before:
      - hostPath:
          path: /run/containerd
          type: ""
        name: containerd-socket
----------------------------

After:
      - hostPath:
          path: /run/k3s/containerd
          type: ""
        name: containerd-socket</pre>
          </div>
        </p>
    </body>
</html>