<div class="container">
    <h1>Security update for slurm</h1>

    <table class="table table-striped table-bordered">
        <tbody>
        <tr>
            <th>Announcement ID:</th>
            <td>SUSE-SU-2025:01751-1</td>
        </tr>
        <tr>
            <th>Release Date:</th>
            <td>2025-05-29T12:53:41Z</td>
        </tr>
        
        <tr>
            <th>Rating:</th>
            <td>important</td>
        </tr>
        <tr>
            <th>References:</th>
            <td>
                <ul>
                    
                        <li style="display: inline;">
                            <a href="https://bugzilla.suse.com/show_bug.cgi?id=1243666">bsc#1243666</a>
                        </li>
                    
                    
                </ul>
            </td>
        </tr>
        
            <tr>
                <th>
                    Cross-References:
                </th>
                <td>
                    <ul>
                    
                        <li style="display: inline;">
                            <a href="https://www.suse.com/security/cve/CVE-2025-43904.html">CVE-2025-43904</a>
                        </li>
                    
                    </ul>
                </td>
            </tr>
            <tr>
                <th>CVSS scores:</th>
                <td>
                    <ul class="list-group">
                        
                            <li class="list-group-item">
                                <span class="cvss-reference">CVE-2025-43904</span>
                                <span class="cvss-source">
                                    (
                                    
                                        SUSE
                                    
                                    ):
                                </span>
                                <span class="cvss-score">8.5</span>
                                <span class="cvss-vector">CVSS:4.0/AV:L/AC:L/AT:N/PR:L/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N</span>
                            </li>
                        
                            <li class="list-group-item">
                                <span class="cvss-reference">CVE-2025-43904</span>
                                <span class="cvss-source">
                                    (
                                    
                                        SUSE
                                    
                                    ):
                                </span>
                                <span class="cvss-score">7.8</span>
                                <span class="cvss-vector">CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H</span>
                            </li>
                        
                    </ul>
                </td>
            </tr>
        
        <tr>
            <th>Affected Products:</th>
            <td>
                <ul class="list-group">
                    
                        <li class="list-group-item">HPC Module 15-SP7</li>
                    
                        <li class="list-group-item">SUSE Linux Enterprise Desktop 15 SP7</li>
                    
                        <li class="list-group-item">SUSE Linux Enterprise Real Time 15 SP7</li>
                    
                        <li class="list-group-item">SUSE Linux Enterprise Server 15 SP7</li>
                    
                        <li class="list-group-item">SUSE Linux Enterprise Server for SAP Applications 15 SP7</li>
                    
                        <li class="list-group-item">SUSE Package Hub 15 15-SP7</li>
                    
                </ul>
            </td>
        </tr>
        </tbody>
    </table>

    <p>An update that solves one vulnerability can now be installed.</p>

    


    
        <h2>Description:</h2>
    
    <p>This update for slurm fixes the following issues:</p>
<p>Update to version 24.11.5.</p>
<p>Security issues fixed:</p>
<ul>
<li>CVE-2025-43904: An issue with permission handling for Coordinators within the accounting system allowed Coordinators
  to promote a user to Administrator (bsc#1243666). (A sketch for auditing admin levels follows this list.)</li>
</ul>
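<p>As a follow-up check, administrators may want to audit accounting admin
levels after installing the update. The following is a minimal sketch, not
part of the official advisory; <code>someuser</code> is a placeholder name, and
the exact output depends on the slurmdbd configuration:</p>
<pre><code># List every user's admin level; unexpected "Administrator"
# entries may indicate prior abuse of the fixed escalation.
sacctmgr show user format=User,AdminLevel

# Demote a user that should not be an administrator
# ("someuser" is a placeholder).
sacctmgr modify user where name=someuser set adminlevel=None
</code></pre>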
<p>Other changes and issues fixed:</p>
<ul>
<li>
<p>Changes from version 24.11.5</p>
</li>
<li>
<p>Return error to <code>scontrol</code> reboot on bad nodelists.</p>
</li>
<li><code>slurmrestd</code> - Report an error when QOS resolution fails for
    v0.0.40 endpoints.</li>
<li><code>slurmrestd</code> - Report an error when QOS resolution fails for
    v0.0.41 endpoints.</li>
<li><code>slurmrestd</code> - Report an error when QOS resolution fails for
    v0.0.42 endpoints.</li>
<li><code>data_parser/v0.0.42</code> - Added <code>+inline_enums</code> flag, which
    modifies the output when generating the OpenAPI specification.
    Enum arrays are no longer defined in their own schema with
    references (<code>$ref</code>) to them; instead, they are dumped
    inline.</li>
<li>Fix binding error with <code>tres-bind map/mask</code> on partial node
    allocations.</li>
<li>Fix <code>stepmgr</code> enabled steps being able to request features.</li>
<li>Reject step creation if requested feature is not available
    in job.</li>
<li><code>slurmd</code> - Restrict listening for new incoming RPC requests
    further into startup.</li>
<li><code>slurmd</code> - Avoid <code>auth/slurm</code> related hangs of CLI commands
    during startup and shutdown.</li>
<li><code>slurmctld</code> - Restrict processing new incoming RPC requests
    further into startup. Stop processing requests sooner during
    shutdown.</li>
<li><code>slurmctld</code> - Avoid <code>auth/slurm</code> related hangs of CLI commands
    during startup and shutdown.</li>
<li><code>slurmctld</code> - Avoid race condition during shutdown or
    reconfigure that could result in a crash due to delayed
    processing of a connection while plugins are unloaded.</li>
<li>Fix small memory leak when getting the job list from the database.</li>
<li>Fix incorrect printing of <code>%</code> escape characters when printing
    stdio fields for jobs.</li>
<li>Fix padding parsing when printing stdio fields for jobs.</li>
<li>Fix printing <code>%A</code> array job id when expanding patterns.</li>
<li>Fix reservations causing jobs to be held for <code>Bad Constraints</code>.</li>
<li><code>switch/hpe_slingshot</code> - Prevent potential segfault on failed
    curl request to the fabric manager.</li>
<li>Fix printing incorrect array job id when expanding stdio file
    names. The <code>%A</code> will now be substituted by the correct value.</li>
<li><code>switch/hpe_slingshot</code> - Fix VNI range not updating on slurmctld
    restart or reconfigure.</li>
<li>Fix steps not being created when using certain combinations of
    <code>-c</code> and <code>-n</code> lower than the job's requested resources, when
    using stepmgr and nodes are configured with
    <code>CPUs == Sockets*CoresPerSocket</code>.</li>
<li>Permit configuring the number of retry attempts to destroy CXI
    service via the new <code>destroy_retries</code> <code>SwitchParameters</code> option.</li>
<li>Do not reset <code>memory.high</code> and <code>memory.swap.max</code> during slurmd
    startup or reconfigure, as <code>slurmd</code> never actually modifies
    these values.</li>
<li>Fix reconfigure failure of slurmd when it has been started
    manually and the <code>CoreSpecLimits</code> have been removed from
    <code>slurm.conf</code>.</li>
<li>Set or reset CoreSpec limits when slurmd is reconfigured and
    it was started with systemd.</li>
<li><code>switch/hpe_slingshot</code> - Make sure the slurmctld can free
    step VNIs after the controller restarts or reconfigures while
    the job is running.</li>
<li>
<p>Fix backup <code>slurmctld</code> failure on 2nd takeover.</p>
</li>
<li>
<p>Changes from version 24.11.4</p>
</li>
<li>
<p><code>slurmctld</code>,<code>slurmrestd</code> - Avoid possible race condition that
    could have caused process to crash when listener socket was
    closed while accepting a new connection.</p>
</li>
<li><code>slurmrestd</code> - Avoid race condition that could have resulted
    in address logged for a UNIX socket to be incorrect.</li>
<li><code>slurmrestd</code> - Fix parameters in OpenAPI specification for the
    following endpoints to have <code>job_id</code> field:
    <code>GET /slurm/v0.0.40/jobs/state/
    GET /slurm/v0.0.41/jobs/state/
    GET /slurm/v0.0.42/jobs/state/
    GET /slurm/v0.0.43/jobs/state/</code></li>
<li><code>slurmd</code> - Fix tracking of thread counts that could cause
    incoming connections to be ignored after a burst of simultaneous
    incoming connections that triggers the delayed response logic.</li>
<li>Avoid unnecessary <code>SRUN_TIMEOUT</code> forwarding to <code>stepmgr</code>.</li>
<li>Fix jobs being scheduled on higher weighted powered down nodes.</li>
<li>Fix how backfill scheduler filters nodes from the available
    nodes based on exclusive user and <code>mcs_label</code> requirements.</li>
<li><code>acct_gather_energy/{gpu,ipmi}</code> - Fix potential energy
    consumption adjustment calculation underflow.</li>
<li><code>acct_gather_energy/ipmi</code> - Fix regression introduced in 24.05.5
    (which introduced the new way of preserving energy measurements
    through slurmd restarts) when <code>EnergyIPMICalcAdjustment=yes</code>.</li>
<li>Prevent <code>slurmctld</code> deadlock in the assoc mgr.</li>
<li>Fix memory leak when <code>RestrictedCoresPerGPU</code> is enabled.</li>
<li>Fix preemptor jobs not starting execution due to an incorrect
    calculation of accounting policy limits.</li>
<li>Fix certain job requests that were incorrectly denied with
    node configuration unavailable error.</li>
<li><code>slurmd</code> - Avoid crash when slurmd has a communications
    failure with <code>slurmstepd</code>.</li>
<li>Fix memory leak when parsing yaml input.</li>
<li>Prevent <code>slurmctld</code> from showing error message about <code>PreemptMode=GANG</code>
    being a cluster-wide option for <code>scontrol update part</code> calls
    that don&#x27;t attempt to modify partition PreemptMode.</li>
<li>Fix setting <code>GANG</code> preemption on partition when updating
    <code>PreemptMode</code> with <code>scontrol</code>.</li>
<li>Fix <code>CoreSpec</code> and <code>MemSpec</code> limits not being removed
    from previously configured slurmd.</li>
<li>Avoid race condition that could lead to a deadlock when <code>slurmd</code>,
    <code>slurmstepd</code>, <code>slurmctld</code>, <code>slurmrestd</code> or <code>sackd</code> have a fatal
    event.</li>
<li>Fix jobs using <code>--ntasks-per-node</code> and <code>--mem</code> staying pending
    forever when the requested memory divided by the number of CPUs
    surpasses the configured <code>MaxMemPerCPU</code>.</li>
<li><code>slurmd</code> - Fix address logged upon new incoming RPC connection
    from <code>INVALID</code> to IP address.</li>
<li>Fix memory leak when retrieving reservations. This affects
    <code>scontrol</code>, <code>sinfo</code>, <code>sview</code>, and the following <code>slurmrestd</code>
    endpoints:
    <code>GET /slurm/{any_data_parser}/reservation/{reservation_name}</code>
    <code>GET /slurm/{any_data_parser}/reservations</code></li>
<li>Log a warning instead of a <code>debugflags=conmgr</code> gated log when
    deferring new incoming connections once the number of active
    connections exceeds <code>conmgr_max_connections</code>.</li>
<li>Avoid race condition that could result in the worker thread pool
    not activating all threads at once after a reconfigure, resulting
    in lower utilization of available CPU threads until enough
    internal activity wakes up all threads in the worker pool.</li>
<li>Avoid theoretical race condition that could result in new
    incoming RPC socket connections being ignored after reconfigure.</li>
<li><code>slurmd</code> - Avoid race condition that could result in a state
    where new incoming RPC connections will always be ignored.</li>
<li>Add <code>ReconfigFlags=KeepNodeStateFuture</code> to restore saved <code>FUTURE</code>
    node state on restart and reconfigure instead of reverting to
    <code>FUTURE</code> state. This will be made the default in 25.05.</li>
<li>Fix case where hetjob submit would cause <code>slurmctld</code> to crash.</li>
<li>Fix jobs using <code>--cpus-per-gpu</code> and <code>--mem</code> staying pending forever
    when the requested memory divided by the number of CPUs surpasses
    the configured <code>MaxMemPerCPU</code>.</li>
<li>Enforce that jobs using <code>--mem</code> and several <code>--*-per-*</code> options
    do not violate the <code>MaxMemPerCPU</code> in place.</li>
<li><code>slurmctld</code> - Fix use-cases of jobs incorrectly pending held
    when <code>--prefer</code> features are not initially satisfied.</li>
<li><code>slurmctld</code> - Fix jobs incorrectly held when <code>--prefer</code> not
    satisfied in some use-cases.</li>
<li>
<p>Ensure <code>RestrictedCoresPerGPU</code> and <code>CoreSpecCount</code> don&#x27;t overlap.</p>
</li>
<li>
<p>Changes from version 24.11.3</p>
</li>
<li>
<p>Fix database cluster ID generation not being random.</p>
</li>
<li>Fix a regression in which <code>slurmd -G</code> gave no output.</li>
<li>Fix a long-standing crash in <code>slurmctld</code> after updating a
    reservation with an empty nodelist. The crash could occur
    after restarting slurmctld, or if downing/draining a node
    in the reservation with the <code>REPLACE</code> or <code>REPLACE_DOWN</code> flag.</li>
<li>Avoid changing process name to "<code>watch</code>" from the original daemon
    name, which could potentially break some monitoring scripts.</li>
<li>Avoid <code>slurmctld</code> being killed by <code>SIGALRM</code> due to race condition
    at startup.</li>
<li>Fix race condition in slurmrestd that resulted in "<code>Requested
    data_parser plugin does not support OpenAPI plugin</code>" error being
    returned for valid endpoints.</li>
<li>Fix race between <code>task/cgroup</code> CPUset and <code>jobacct_gather/cgroup</code>.
    The former was removing the pid from the <code>task_X</code> cgroup directory,
    causing memory limits to not be applied.</li>
<li>If multiple partitions are requested, set the <code>SLURM_JOB_PARTITION</code>
    output environment variable to the partition in which the job is
    running for <code>salloc</code> and <code>srun</code> in order to match the documentation
    and the behavior of <code>sbatch</code>.</li>
<li><code>srun</code> - Fix wrongly constructed <code>SLURM_CPU_BIND</code> env variable
    that could get propagated to downward srun calls in certain MPI
    environments, causing launch failures.</li>
<li>Don&#x27;t print misleading errors for stepmgr enabled steps.</li>
<li><code>slurmrestd</code> - Avoid connection to slurmdbd for the following
    endpoints:
    <code>GET /slurm/v0.0.41/jobs
    GET /slurm/v0.0.41/job/{job_id}</code></li>
<li><code>slurmrestd</code> - Avoid connection to slurmdbd for the following
    endpoints:
    <code>GET /slurm/v0.0.40/jobs
    GET /slurm/v0.0.40/job/{job_id}</code></li>
<li><code>slurmrestd</code> - Fix possible memory leak when parsing arrays with
    <code>data_parser/v0.0.40</code>.</li>
<li><code>slurmrestd</code> - Fix possible memory leak when parsing arrays with
    <code>data_parser/v0.0.41</code>.</li>
<li>
<p><code>slurmrestd</code> - Fix possible memory leak when parsing arrays with
    <code>data_parser/v0.0.42</code>.</p>
</li>
<li>
<p>Changes from version 24.11.2</p>
</li>
<li>
<p>Fix segfault when submitting <code>--test-only</code> jobs that can
    preempt.</p>
</li>
<li>Fix regression introduced in 23.11 that prevented the
    following flags from being added to a reservation on an
    update: <code>DAILY</code>, <code>HOURLY</code>, <code>WEEKLY</code>, <code>WEEKDAY</code>, and <code>WEEKEND</code>.</li>
<li>Fix crash and issues when evaluating a job&#x27;s suitability to run
    on nodes that already have suspended jobs.</li>
<li><code>slurmctld</code> will ensure that healthy nodes are not reported as
    <code>UnavailableNodes</code> in job reason codes.</li>
<li>Fix handling of jobs submitted to a current reservation with
    flags <code>OVERLAP,FLEX</code> or <code>OVERLAP,ANY_NODES</code> when it overlaps nodes
    with a future maintenance reservation. When a job submission
    had a time limit that overlapped with the future maintenance
    reservation, it was rejected. Now the job is accepted but
    stays pending with the reason "<code>ReqNodeNotAvail, Reserved for
    maintenance</code>".</li>
<li><code>pam_slurm_adopt</code> - Avoid errors when explicitly setting some
    arguments to the default value.</li>
<li>Fix QOS preemption with <code>PreemptMode=SUSPEND</code>.</li>
<li><code>slurmdbd</code> - When changing a user&#x27;s name, update the lineage
    at the same time.</li>
<li>Fix regression in 24.11 in which <code>burst_buffer.lua</code> does not
    inherit the <code>SLURM_CONF</code> environment variable from <code>slurmctld</code> and
    fails to run if slurm.conf is in a non-standard location.</li>
<li>Fix memory leak in slurmctld if <code>select/linear</code> and the
    <code>PreemptParameters=reclaim_licenses</code> options are both set in
    <code>slurm.conf</code>.  Regression in 24.11.1.</li>
<li>Prevent running jobs that requested multiple partitions from
    potentially being set to the wrong partition on restart.</li>
<li><code>switch/hpe_slingshot</code> - Fix compatibility with newer cxi
    drivers, specifically when specifying <code>disable_rdzv_get</code>.</li>
<li>Add <code>ABORT_ON_FATAL</code> environment variable to capture a backtrace
    from any <code>fatal()</code> message (see the sketch after this list).</li>
<li>Fix printing invalid address in rate limiting log statement.</li>
<li><code>sched/backfill</code> - Fix node state <code>PLANNED</code> not being cleared from
    fully allocated nodes during a backfill cycle.</li>
<li><code>select/cons_tres</code> - Fix future planning of jobs with
    <code>bf_licenses</code>.</li>
<li>Prevent redundant "<code>on_data returned rc: Rate limit exceeded,
    please retry momentarily</code>" error message from being printed in
    slurmctld logs.</li>
<li>Fix loading non-default QOS on pending jobs from pre-24.11
    state.</li>
<li>Fix pending jobs displaying <code>QOS=(null)</code> when not explicitly
    requesting a QOS.</li>
<li>Fix segfault caused by a job record with no <code>job_resrcs</code>.</li>
<li>Fix failing <code>sacctmgr delete/modify/show</code> account operations
    with <code>where</code> clauses.</li>
<li>Fix regression in 24.11 in which Slurm daemons started
    catching the <code>SIGTSTP</code>, <code>SIGTTIN</code> and <code>SIGUSR1</code> signals and
    ignoring them, while before they were not ignored. This
    also caused slurmctld to be unable to shut down after a
    <code>SIGTSTP</code>, because slurmscriptd caught the signal and stopped
    while slurmctld ignored it. Unify and fix these situations and
    restore the previous behavior for these signals.</li>
<li>Document that <code>SIGQUIT</code> is no longer ignored by <code>slurmctld</code>,
    <code>slurmdbd</code>, and <code>slurmd</code> in 24.11. As of 24.11.0rc1, <code>SIGQUIT</code> is
    identical to <code>SIGINT</code> and <code>SIGTERM</code> for these daemons, but this
    change was not documented.</li>
<li>Fix the scheduler not considering nodes marked for reboot
    without the ASAP flag.</li>
<li>Remove the <code>boot^</code> state on unexpected node reboot after return
    to service.</li>
<li>Do not allow new jobs to start on a node which is being
    rebooted with the flag <code>nextstate=resume</code>.</li>
<li>Prevent a lower priority job from running after cancelling an ASAP
    reboot.</li>
<li>Fix srun jobs starting on <code>nextstate=resume</code> rebooting nodes.</li>
</ul>
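<p>Regarding the <code>ABORT_ON_FATAL</code> item above, here is a minimal
sketch of how a backtrace could be captured with it. The sketch assumes a
systemd-coredump setup; the daemon invocation and core-inspection command
are illustrative, not prescribed by this advisory:</p>
<pre><code># Run the daemon in the foreground with ABORT_ON_FATAL set, so any
# fatal() message aborts the process and produces a core dump.
ABORT_ON_FATAL=1 slurmctld -D

# After a fatal() fires, inspect the core dump, e.g. with
# systemd-coredump tooling:
coredumpctl gdb slurmctld
</code></pre>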



    

    <h2>Patch Instructions:</h2>
    <p>
        To install this SUSE update, use the SUSE-recommended
        installation methods, such as YaST online_update or "zypper patch".<br/>

        Alternatively, you can run the command listed for your product
        (a verification sketch follows the list):
    </p>
    <ul class="list-group">
        
            <li class="list-group-item">
                HPC Module 15-SP7
                
                    
                        <br/>
                        <code>zypper in -t patch SUSE-SLE-Module-HPC-15-SP7-2025-1751=1</code>
                    
                    
                
            </li>
        
            <li class="list-group-item">
                SUSE Package Hub 15 15-SP7
                
                    
                        <br/>
                        <code>zypper in -t patch SUSE-SLE-Module-Packagehub-Subpackages-15-SP7-2025-1751=1</code>
                    
                    
                
            </li>
        
    </ul>
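    <p>After patching, the installed version can be checked against the
    package list below. A minimal verification sketch for the HPC Module,
    assuming <code>zypper</code> and <code>rpm</code> are available on the host:</p>
    <pre><code># Apply the patch for HPC Module 15-SP7.
zypper in -t patch SUSE-SLE-Module-HPC-15-SP7-2025-1751=1

# Confirm the updated package is installed.
rpm -q slurm    # expect version 24.11.5-150700.3.3.1
</code></pre>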

    <h2>Package List:</h2>
    <ul>
        
            
                <li>
                    HPC Module 15-SP7 (aarch64 x86_64)
                    <ul>
                        
                            <li>slurm-sql-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-lua-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-rest-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-cray-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-auth-none-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-munge-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-slurmdbd-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-torque-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-lua-24.11.5-150700.3.3.1</li>
                        
                            <li>libnss_slurm2-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-node-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-slurmdbd-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-pam_slurm-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-debugsource-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-sview-24.11.5-150700.3.3.1</li>
                        
                            <li>libpmi0-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-node-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>perl-slurm-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-sql-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-torque-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-plugins-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>libpmi0-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-cray-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>libnss_slurm2-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-pam_slurm-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-munge-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-plugins-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>perl-slurm-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>libslurm42-24.11.5-150700.3.3.1</li>
                        
                            <li>libslurm42-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-devel-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-auth-none-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-rest-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-sview-debuginfo-24.11.5-150700.3.3.1</li>
                        
                    </ul>
                </li>
            
                <li>
                    HPC Module 15-SP7 (noarch)
                    <ul>
                        
                            <li>slurm-doc-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-config-man-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-webdoc-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-config-24.11.5-150700.3.3.1</li>
                        
                    </ul>
                </li>
            
        
            
                <li>
                    SUSE Package Hub 15 15-SP7 (ppc64le s390x)
                    <ul>
                        
                            <li>slurm-sql-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-lua-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-rest-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-cray-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-auth-none-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-munge-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-slurmdbd-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-torque-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-lua-24.11.5-150700.3.3.1</li>
                        
                            <li>libnss_slurm2-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-node-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-slurmdbd-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-pam_slurm-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-debugsource-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-sview-24.11.5-150700.3.3.1</li>
                        
                            <li>libpmi0-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-node-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>perl-slurm-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-sql-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-torque-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-plugins-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>libpmi0-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-cray-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>libnss_slurm2-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-pam_slurm-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-munge-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-plugins-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>perl-slurm-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-hdf5-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-devel-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-auth-none-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-hdf5-debuginfo-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-rest-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-sview-debuginfo-24.11.5-150700.3.3.1</li>
                        
                    </ul>
                </li>
            
                <li>
                    SUSE Package Hub 15 15-SP7 (noarch)
                    <ul>
                        
                            <li>slurm-doc-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-config-man-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-config-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-sjstat-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-openlava-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-seff-24.11.5-150700.3.3.1</li>
                        
                            <li>slurm-webdoc-24.11.5-150700.3.3.1</li>
                        
                    </ul>
                </li>
            
        
    </ul>

    
        <h2>References:</h2>
        <ul>
            
                
                    <li>
                        <a href="https://www.suse.com/security/cve/CVE-2025-43904.html">https://www.suse.com/security/cve/CVE-2025-43904.html</a>
                    </li>
                
            
                
                    <li>
                        <a href="https://bugzilla.suse.com/show_bug.cgi?id=1243666">https://bugzilla.suse.com/show_bug.cgi?id=1243666</a>
                    </li>
                
            
        </ul>
    
</div>