SUSE-RU-2022:0154-1: moderate: Recommended update for ceph-csi, csi-external-attacher, csi-external-provisioner, csi-external-resizer, csi-external-snapshotter, csi-node-driver-registrar, rook

sle-updates at lists.suse.com
Mon Jan 24 11:18:51 UTC 2022


   SUSE Recommended Update: Recommended update for ceph-csi, csi-external-attacher, csi-external-provisioner, csi-external-resizer, csi-external-snapshotter, csi-node-driver-registrar, rook
______________________________________________________________________________

Announcement ID:    SUSE-RU-2022:0154-1
Rating:             moderate
References:         
Affected Products:
                    SUSE Enterprise Storage 7
______________________________________________________________________________

   An update that has 0 recommended fixes can now be installed.

Description:

   This update for ceph-csi, csi-external-attacher, csi-external-provisioner,
   csi-external-resizer, csi-external-snapshotter, csi-node-driver-registrar,
   rook fixes the following issues:

   - Update to 3.4.0
     Features:
       Beta: The following features have been promoted from Alpha to Beta
       (a usage sketch follows this entry):
        * Snapshot creation and deletion
        * Volume restore from snapshot
        * Volume clone support
        * Volume/PV Metrics of File Mode Volume
        * Volume/PV Metrics of Block Mode Volume

       Alpha:
        * rbd-nbd volume mounter
     Enhancements:
      * Restore RBD snapshot to a different Pool
     * Snapshot schedule support for RBD mirrored PVC
     * Mirroring support for thick PVC
     * Multi-Tenant support for vault encryption
     * AmazonMetadata KMS provider support
     * rbd-nbd volume healer support
     * Locking enhancement for improving POD deletion performance
     * Improvements in lock handling for snap and clone operations
     * Better thick provisioning support
     * Create CephFS subvolume with VolumeNamePrefix
     * CephFS Subvolume path addition in PV object
     * Consumption of go-ceph APIs for various CephFS controller and node
       operations.
     * Resize of the RBD encrypted volume
     * Better error handling for GRPC
     * Golang profiling support for debugging
     * Updated Kubernetes sidecar versions to the latest release
     * Kubernetes dependency update to v1.21.2
      * Create storageclass and secrets using helm charts in CI/E2E
     * Expansion of RBD encrypted volumes
     * Update and addition of new static golang tools
     * Kubernetes v1.21 support
     * Unit tests for SecretsKMS
     * Test for Vault with ServiceAccount per Tenant
     * E2E for user secret based metadata encryption
     * Update rook.sh and Ceph cluster version in E2E
     * Added RBD test for testing sc, secret via helm
     * Update feature gates setting from minikube.sh
     * Add CephFS test for sc, secret via helm
     * Add e2e for static PVC without imageFeature parameter
     * Make use of snapshot v1 API and client sets in e2e tests
     * Validate thick-provisioned PVC-PVC cloning
     * Adding retry support for various e2e failure scenarios
     * Refactor KMS configuration and usage
   - Removed patch ceph-csi-locking.patch (merged upstream)
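
   With snapshot creation and volume restore now Beta, a minimal usage
   sketch follows; only the snapshot.storage.k8s.io/v1 API shape is taken
   as given, while the resource, snapshot class, and StorageClass names
   are hypothetical:

      apiVersion: snapshot.storage.k8s.io/v1
      kind: VolumeSnapshot
      metadata:
        name: rbd-pvc-snapshot                    # hypothetical name
      spec:
        volumeSnapshotClassName: csi-rbdplugin-snapclass  # hypothetical
        source:
          persistentVolumeClaimName: rbd-pvc              # hypothetical
      ---
      # Restore the snapshot into a new PVC via dataSource:
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: rbd-pvc-restore                     # hypothetical name
      spec:
        storageClassName: rook-ceph-block         # hypothetical class
        dataSource:
          name: rbd-pvc-snapshot
          kind: VolumeSnapshot
          apiGroup: snapshot.storage.k8s.io
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi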

   - Update to v3.3.0
     * Feature
       * Add command line arguments to configure leader election options
         (#313, @RaunakShah)
       * Adds mappings for PV access modes to new CSI access modes:
         SINGLE_NODE_SINGLE_WRITER and SINGLE_NODE_MULTI_WRITER. (#308,
         @chrishenzie)
       * Updates Kubernetes dependencies to v1.22.0 (#321, @chrishenzie) [SIG
         Storage]
     * Bug or Regression
        * Fix a bug where the controller could panic when it receives a
          DeletedFinalStateUnknown deletion event. (#304, @Jiawei0227)
     * Other (Cleanup or Flake)
       * Updates container-storage-interface dependency to v1.5.0 (#312,
         @chrishenzie)
       * Reuse the same gRPC CSI client for all CSI driver calls (#318,
         @yeya24)

   - Update to v3.2.1
   - Get rid of vendoring
   - Update version of go to 1.16

   - Update to v3.0.2

   - Update version to 3.0.0
     * Feature
       * Add command line arguments to configure leader election options
         (#643, @RaunakShah)
       * Adds mappings for PV access modes to new CSI access modes:
         SINGLE_NODE_SINGLE_WRITER and SINGLE_NODE_MULTI_WRITER. (#630,
         @chrishenzie)
        * The provisioner sidecar now has an argument called
          controller-publish-readonly which sets the CSI PV spec's readOnly
          field based on the PVC access mode. If this flag is set to true
          and the PVC access mode contains only the ROX access mode, the
          controller automatically sets
          PersistentVolume.spec.CSIPersistentVolumeSource.readOnly to true.
          (#469, @humblec) A sketch of this behavior follows this entry.
       * Updates Kubernetes dependencies to v1.22.0 (#660, @chrishenzie) [SIG
         Storage]
       * Updates container-storage-interface dependency to v1.5.0 (#644,
         @chrishenzie)
     * Bug or Regression
        * Fix a bug that prevented using block device mode when storage
          capacity tracking mode is enabled. (#635, @bells17)
       * Fix a data race in cloning protection controller (#651, @tksm)
       * Fix capacity information updates when topology changes. Only
         affected central deployment and network attached storage, not
         deployment on each node. This broke in v2.2.0 as part of a bug fix
         for capacity informer handling. (#617, @bai3shuo4)
       * Fix env name from POD_NAMESPACE to NAMESPACE for
         capacity-ownerref-level option. (#636, @bells17)
       * Fixed reporting of metrics when a migratable CSI driver is used.
         (#620, @jsafrane)
        * PVs newly provisioned with CSI Migration enabled will have the
          "provisioned-by" annotation set to the in-tree provisioner name
          instead of the CSI provisioner (#646, @wongma7)
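
   A minimal sketch of the controller-publish-readonly flag in use,
   assuming a csi-provisioner sidecar container and a ROX-only PVC;
   everything except the flag itself is illustrative:

      # Sidecar container fragment (Deployment/StatefulSet):
      - name: csi-provisioner
        image: k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0
        args:
          - "--csi-address=$(ADDRESS)"
          - "--controller-publish-readonly=true"
      ---
      # A PVC whose only access mode is ROX; with the flag enabled, the
      # resulting PV gets spec.csi.readOnly set to true:
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: rox-claim                  # hypothetical name
      spec:
        accessModes:
          - ReadOnlyMany
        storageClassName: my-csi-sc      # hypothetical class
        resources:
          requests:
            storage: 1Gi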

   - Update version to 2.2.2
   - Get rid of vendoring
   - Use go 1.16 for building

   - Update version to 2.0.4

   - Update to version 1.3.0
     * Other (Cleanup or Flake)
       * Updates Kubernetes dependencies to v1.22.0 (#165, @chrishenzie) [SIG
         Storage]
       * Updates container-storage-interface dependency to v1.5.0 (#156,
         @chrishenzie)
     * Feature
       * Adds mappings for PV access modes to new CSI access modes:
         SINGLE_NODE_SINGLE_WRITER and SINGLE_NODE_MULTI_WRITER. (#151,
         @chrishenzie)
        * leader-election-lease-duration, leader-election-renew-deadline,
          and leader-election-retry-period were added as command line
          arguments to configure leader election options (#158,
          @RaunakShah); a sketch of these flags follows this entry.
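
   A sketch of wiring the new leader-election flags into the csi-resizer
   sidecar; the flag names come from the changelog above, while the values
   and the surrounding manifest fragment are illustrative:

      - name: csi-resizer
        image: k8s.gcr.io/sig-storage/csi-resizer:v1.3.0
        args:
          - "--csi-address=$(ADDRESS)"
          - "--leader-election"
          - "--leader-election-lease-duration=15s"   # illustrative values
          - "--leader-election-renew-deadline=10s"
          - "--leader-election-retry-period=5s"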

   - Update to version 1.2.0
   - Get rid of vendoring
   - Push go version to 1.16

   - Update to version 1.0.1

   - Update to version 4.2.0
      * Feature
        * Snapshot APIs
          * The namespace of the referenced VolumeSnapshot is printed when
            printing a VolumeSnapshotContent. (#535, @tsmetana)
        * Snapshot Controller
          * retry-interval-start and retry-interval-max arguments are added
            to the common controller; they control the retry interval of
            failed volume snapshot creation and deletion. These values set
            the ratelimiter for the snapshot and content queues. (#530,
            @humblec)
          * Add command line arguments leader-election-lease-duration,
            leader-election-renew-deadline, and leader-election-retry-period
            to configure leader election options for the snapshot
            controller. (#575, @bertinatto)
          * Adds an operations_in_flight metric for determining the number
            of snapshot operations in progress. (#519, @ggriffiths)
          * Introduced "SnapshotCreated" and "SnapshotReady" events. (#540,
            @rexagod)
        * CSI Snapshotter Sidecar
          * retry-interval-start and retry-interval-max arguments are added
            to the csi-snapshotter sidecar; they control the retry interval
            of failed volume snapshot creation and deletion. These values
            set the ratelimiter for the volumesnapshotcontent queue. (#308,
            @humblec) A sketch of these flags follows this entry.
          * Add command line arguments leader-election-lease-duration,
            leader-election-renew-deadline, and leader-election-retry-period
            to configure leader election options for the CSI snapshotter
            sidecar. (#538, @RaunakShah)
      * Bug or Regression
        * Snapshot Controller
          * Add process_start_time_seconds metric (#569, @saikat-royc)
          * Adds the leader election health check for the snapshot
            controller at /healthz/leader-election (#573, @ggriffiths)
          * Remove kube-system namespace verification during startup and
            instead list volumes across all namespaces (#515, @mauriciopoppe)
      * Other (Cleanup or Flake)
        * Updates Kubernetes dependencies to v1.22.0 (#570, @chrishenzie)
          [SIG Storage]
        * Updates csi-lib-utils dependency to v0.10.0 (#574, @chrishenzie)
        * Updates container-storage-interface dependency to v1.5.0 (#532,
          @chrishenzie)
      * Snapshot Validation Webhook
        * Changed the webhook image from distroless/base to
          distroless/static. (#550, @WanzenBug)
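
   A sketch of the new retry-interval and leader-election flags on the
   csi-snapshotter sidecar; the flag names are from the changelog above,
   the values and manifest fragment are illustrative:

      - name: csi-snapshotter
        image: k8s.gcr.io/sig-storage/csi-snapshotter:v4.2.0
        args:
          - "--csi-address=$(ADDRESS)"
          - "--leader-election"
          - "--leader-election-lease-duration=15s"   # illustrative values
          - "--leader-election-renew-deadline=10s"
          - "--leader-election-retry-period=5s"
          - "--retry-interval-start=1s"   # initial backoff for failed
          - "--retry-interval-max=5m"     # create/delete, and its cap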

   - Update to version 4.1.1
   - Get rid of vendoring
   - Update go-version to 1.16

   - Update to version 3.0.2

   - Update to version 2.3.0
     * Dockerfile.Windows args changed to ADDON_IMAGE and BASE_IMAGE (#146,
       @mauriciopoppe)
     * Updates Kubernetes dependencies to v1.22.0 (#159, @chrishenzie) [SIG
       Storage]
     * Updates csi-lib-utils dependency to v0.10.0 (#160, @chrishenzie)
      * New running mode: the kubelet-registration-probe mode checks
        whether node-driver-registrar kubelet plugin registration
        succeeded (#152, @mauriciopoppe); a probe sketch follows this
        entry.
     * Updates container-storage-interface dependency to v1.5.0 (#151,
       @chrishenzie)
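
   A sketch of the new mode used as a liveness probe on the
   node-driver-registrar container, assuming the registration path is
   passed the same way as in the upstream example manifests; the timing
   values are illustrative:

      livenessProbe:
        exec:
          command:
            - /csi-node-driver-registrar
            - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
            - --mode=kubelet-registration-probe
        initialDelaySeconds: 30
        timeoutSeconds: 15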

   - Update to version 2.2.0

     * Updated runtime (Go 1.16) and dependencies (#136, @pohly)

     * Update image and tag names for Windows to have separate parameters for
       nanoserver and servercore (#111, @jingxu97)
    - Update to v1.7.7. Rook v1.7.7 is a patch release limited in scope and
     focusing on small feature additions and bug fixes to the Ceph operator.
     * docs: Support ephemeral volumes with Ceph CSI RBD and CephFS driver
       (#9055, @humblec)
     * core: Allow downgrade of all daemons consistently (#9098, @travisn)
     * core: Reconcile once instead of multiple times after the cluster CR is
       edited (#9091, @leseb)
     * nfs: Add pool setting CR option (#9040, @leseb)
     * ceph: Trigger 'CephMonQuorumLost' alert when mon quorum is down
       (#9068, @aruniiird)
     * rgw: Updated livenessProbe and readinessProbe (#9080, @satoru-takeuchi)
     * mgr: Do not set the balancer mode on pacific (#9063, @leseb)
     * helm: Add appVersion property to the charts (#9051, @travisn)
     * rgw: Read tls secret hint for insecure tls (#9020, @leseb)
     * ceph: Ability to set labels on the crash collector (#9044, @leseb)
     * core: Treat cluster as not existing if the cleanup policy is set
       (#9041, @travisn)
     * docs: Document failover and failback scenarios for applications
       (#8411, @Yuggupta27)
     * ceph: Update endpoint with IP for external RGW server (#9010, @thotz)
    - Combined gomod.patch and gosum.patch into vendor.patch
        * Patches the module files to match the SUSE build environment

    - Update to v1.7.6. Rook v1.7.6 is a patch release limited in scope and
     focusing on small feature additions and bug fixes to the Ceph operator.
      * core: only merge stderr on error (#8995, @leseb)
     * nfs: remove RADOS options from CephNFS and use .nfs pool (#8501,
       @josephsawaya)
     * csi: fix comment for the provisioner and clusterID (#8990, @Madhu-1)
     * mon: Enable mon failover for the arbiter in stretch mode (#8984,
       @travisn)
     * monitoring: fixing the queries for alerts 'CephMgrIsAbsent' and
       'CephMgrIsMissingReplicas' (#8985, @aruniiird)
     * osd: fix kms auto-detection when full TLS (#8867, @leseb)
     * csi: add affinity to csi version check job (#8965, @Rakshith-R)
     * pool: remove default value for pool compression (#8966, @leseb)
     * monitoring: handle empty ceph_version in ceph_mon_metadata to avoid
       raising misleading alert (#8947, @GowthamShanmugam)
     * osd: print the c-v output when inventory command fails (#8971, @leseb)
     * helm: remove chart content not in common.yaml (#8884, @BlaineEXE)
     * rgw: replace period update --commit with function (#8911, @BlaineEXE)
     * rgw: fixing ClientID of log-collector for RGW instance (#8889,
       @parth-gr)
     * mon: run ceph commands to mon with timeout (#8939, @leseb)
     * osd: do not hide errors (#8933, @leseb)
     * rgw: use trace logs for RGW admin HTTP info (#8937, @BlaineEXE)

    - Update to v1.7.5. Rook v1.7.5 is a patch release limited in scope and
     focusing on small feature additions and bug fixes to the Ceph operator.
     * Update csi sidecar references to the latest versions (#8820, @humblec)
     * No longer install the VolumeReplication CRDs from Rook (#8845,
       @travisn)
     * Initialize rbd block pool after creation (#8923, @Rakshith-R)
     * Close stdoutPipe for the discovery daemon (#8917, @subhamkrai)
     * Add documentation to recover a pod from a lost node (#8742,
       @subhamkrai)
     * Increasing the auto-resolvable alerts delay to 15m (#8896, @aruniiird)
     * Change CephAbsentMgr to use 'up' query (#8882, @aruniiird)
     * Adding 'namespace' field to the needed ceph queries (#8901, @aruniiird)
     * Update period if period does not exist (#8828, @BlaineEXE)
     * Do not fail on KMS keys deletion (#8868, @leseb)
     * Do not build all the multus args to remote exec cmd (#8860, @leseb)
     * Fix external script when passing monitoring list (#8807, @leseb)
     * Use insecure TLS for bucket health check (#8712, @leseb)
     * Add PVC privileges to the rook-ceph-purge-osd service account (#8833,
       @ashangit)
     * Fix the example of local PVC-based cluster (#8846, @satoru-takeuchi)
     * Add signal handling for log collector (#8806, @leseb)
     * Prometheus rules format changes (#8774, @aruniiird)
     * Add namespace to ceph node down query (#8793, @aruniiird)

    - Added gomod.patch and gosum.patch
        * Patches the module files to match the SUSE build environment

    - Update to v1.7.4. Rook v1.7.4 is a patch release limited in scope and
     focusing on small feature additions and bug fixes to the Ceph operator.
    * Add missing error type check to exec (#8751, @BlaineEXE)
    * Raise minimum supported version of Ceph-CSI to v3.3.0 (#8803, @humblec)
    * Set the Ceph v16.2.6 release as the default version (#8743, @leseb)
    * Pass region to newS3agent() (#8766, @thotz)
    * Remove unnecessary CephFS provisioner permission (#8739, @Madhu-1)
    * Configurable csi provisioner replica count (#8801, @Madhu-1)
    * Allow setting the default storageclass for a filesystem in the helm
      chart (#8771, @kubealex)
    * Retry object health check if creation fails (#8708, @BlaineEXE)
    * Use the admin socket for the mgr liveness probe (#8721, @jmolmo)
    * Correct the CephFS mirroring documentation (#8732, @leseb)
    * Reconcile OSD PDBs if allowed disruption is 0 (#8698, @sp98)
    * Add peer spec migration to upgrade doc (#8435, @BlaineEXE)
    * Fix lvm osd db device check (#8267, @lyind)
    * Refactor documentation to simplify for the Ceph provider (#8693,
      @travisn)
    * Emphasize unit tests in the development guide (#8685, @BlaineEXE)
    - Update to v1.7.3. Rook Ceph v1.7.3 is a patch release limited in scope
     and focusing on small feature additions and bug fixes.
    * Cassandra and NFS have moved to their own repos. All improvements in
      this repo starting from this release will only be for the Ceph storage
      provider. (#8619, @BlaineEXE)
    * Image list for offline installation can be found in images.txt (#8596,
      @subhamkrai)
    * Add networking.k8s.io/v1 Ingress chart compatibility (#8666, @hall)
    * Modify the log info when ok to continue fails (#8675, @subhamkrai)
    * Print the output on errors from ceph-volume (#8670, @leseb)
    * Add quota and capabilities configuration for CephObjectStore users
      (#8211, @thotz)
    * Fix pool deletion when uninstalling a multus cluster configuration
      (#8659, @leseb)
    * Use node externalIP if no internalIP defined (#8653, @JrCs)
    * Fix CephOSDCriticallyFull and CephOSDNearFull monitoring alert queries
      (#8668, @Muyan0828)
    * Fix CephMonQuorumAtRisk monitoring alert query (#8652, @anmolsachan)
    * Allow an even number of mons (#8636, @travisn)
    * Create a pod disruption budget for the Ceph mgr deployment when two
      mgrs are requested (#8593, @parth-gr)
    * Fix error message in UpdateNodeStatus (#8629, @hiroyaonoe)
    * Avoid multiple reconciles of ceph cluster due to the ipv4 default
      setting (#8638, @leseb)
    * Avoid duplicate ownerReferences (#8615, @YZ775)
    * Auto grow OSDs size on PVCs based on prometheus metrics (#8078,
      @parth-gr)
    * External cluster configuration script fixed for backward compatibility
      with python2 (#8623, @aruniiird)
    * Fix vault kv secret engine auto-detection (#8618, @leseb)
    * Add ClusterID and PoolID mappings between local and peer cluster
      (#8626, @sp98)
    * Set the filesystem status when mirroring is not enabled (#8609,
      @travisn)
    - Update to v1.7.2. Rook v1.7.2 is a patch release limited in scope and
     focusing on small feature additions and bug fixes.
     * Ceph
       * Merge toleration for osd/prepareOSD pod if specified both places
         (#8566, @subhamkrai)
       * Fix panic when recreating the csidriver object (#8582, @Madhu-1)
       * Build with latest golang v1.16.7 (#8540, @BlaineEXE)
       * Do not check ok-to-stop when OSDs are in CLBO (#8583, @leseb)
       * Convert util.NewSet() to sets.NewString() (#8584, @parth-gr)
       * Add support for update() from lib-bucket-provisioner (#8514, @thotz)
       * Signal handling with context (#8441, @leseb)
       * Make storage device config nullable (#8552, @BlaineEXE)
       * Allow K8s version check on prerelease versions (#8561, @subhamkrai)
        * Add permissions to rook-ceph-mgr role for osd removal in rook
          orchestrator (#8568, @josephsawaya)
       * Use serviceAccountName as the key in ceph csi templates (#8546,
         @humblec)
       * Consolidate the calls to set mon config (#8590, @travisn)
     * NFS
       * Upgrade nfs-ganesha to 3.5 version (#8534, @kam1kaze)
    - Update to v1.7.1. Rook v1.7.1 is a patch release limited in scope and
     focusing on small feature additions and bug fixes.
     * Ceph
       * Update Ceph CSI version to v3.4.0 (#8425, @Madhu-1)
       * Add ability to specify the CA bundle for RGW (#8492, @degorenko)
       * Remove unused mon timeout cli flags (#8489, @leseb)
       * Add an option to enable/disable merge all placement (#8381,
         @subhamkrai)
       * Refuse to failover the arbiter mon on stretch clusters (#8520,
         @travisn)
       * Improve topology example of cluster on local pvc (#8491,
         @satoru-takeuchi)
    - Update to v1.7.0. This is a minor release with features primarily for
      the Ceph operator.
      * K8s Version Support: Kubernetes supported versions are 1.11 and
        newer.
      * Upgrade Guides: If you are running a previous Rook version, please
        see the corresponding storage provider upgrade guide.
      * Breaking Changes (Ceph): Clusters with multiple filesystems will
        need to update their Ceph version to Pacific. The operator
        configuration option ROOK_ALLOW_MULTIPLE_FILESYSTEMS has been
        removed in favor of simply verifying that the Ceph version is at
        least Pacific, where multiple filesystems are fully supported.
      * Features (Ceph):
        * Official Ceph images are now being published to quay.io. To pick
          up the latest version of Ceph, the image field of the CephCluster
          spec must be updated to point to quay. See the example cluster.
        * Add support for creating Hybrid Storage Pools.
          * A hybrid storage pool creates a CRUSH rule that chooses the
            primary OSD from high performance devices (ssd, nvme, etc.) and
            the remaining OSDs from low performance devices (hdd).
          * See the design and Ceph docs for more details; a pool sketch
            follows this entry.
        * Add support for CephFS mirroring peer configuration. See the
          configuration for more details.
        * Add support for Kubernetes TLS secrets for referencing the TLS
          certs needed by the Ceph RGW server.
        * Stretch clusters are considered stable.
          * Ceph v16.2.5 or greater is required for stretch clusters.
        * The use of peer secret names in CephRBDMirror is deprecated.
          Please use the CephBlockPool CR to configure peer secret names
          and import peers. See the mirroring section of the CephBlockPool
          spec for more details.
        * Add user data protection when deleting Rook-Ceph Custom
          Resources. See the design for detailed information.
          * A CephCluster will not be deleted if any other Rook-Ceph custom
            resources reference it, on the assumption that they are using
            the underlying Ceph cluster.
          * A CephObjectStore will not be deleted if a bucket is present.
            In addition to protection from deletion when users have data in
            the store, this implicitly protects these resources from being
            deleted while a referencing ObjectBucketClaim is present.
      * Features (Cassandra):
        * CRDs converted from v1beta1 to v1.
          * Schema is generated from the internal types for more complete
            validation.
          * Minimum K8s version for the v1 CRDs is K8s 1.16.
      * Features (NFS):
        * CRDs converted from v1beta1 to v1.
          * Schema is generated from the internal types for more complete
            validation.
          * Minimum K8s version for the v1 CRDs is K8s 1.16.
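
    A sketch of a hybrid storage pool, assuming the hybridStorage fields
    under replicated in the CephBlockPool CR as described in the Rook v1.7
    design; the pool name and device classes are illustrative:

       apiVersion: ceph.rook.io/v1
       kind: CephBlockPool
       metadata:
         name: hybrid-pool              # hypothetical name
         namespace: rook-ceph
       spec:
         failureDomain: host
         replicated:
           size: 3
           hybridStorage:
             primaryDeviceClass: ssd    # primary OSD from fast devices
             secondaryDeviceClass: hdd  # remaining replicas on slow devices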

    - Update to v1.6.10. Rook v1.6.10 is a patch release limited in scope and
     focusing on small feature additions and bug fixes.
     * Ceph
       * Reconcile OSD PDB if allowed disruptions are 0 (#8698)
       * Merge tolerations for the OSDs if specified in both all and osd
         placement (#8630)
       * External cluster script compatibility with python2 (#8623)
       * Do not check ok-to-stop when OSDs are in CLBO (#8583)
       * Fix panic when recreating the csidriver object (#8582)

    - Update to v1.6.9. Rook v1.6.9 is a patch release limited in scope and
     focusing on small feature additions and bug fixes.
     * Ceph
       * Make storage device config nullable (#8552)
       * Build with latest golang v1.16.7 (#8540)
       * Refuse to failover the arbiter mon on stretch clusters (#8520)
       * Add an option to enable/disable merge all placement (#8381)
       * Update ancillary monitoring resources (#8406)
       * Updated mon health check goroutine for reconfiguring patch values
         (#8370)
       * Releases for v1.6 are now based on Github actions instead of Jenkins
         (#8525 #8564)

    - Update to v1.6.8. Rook v1.6.8 is a patch release limited in scope and
     focusing on small feature additions and bug fixes.
     * Ceph
       * Re-enable lvm mode for OSDs on disks. See details to know if your
         OSDs are affected by unexpected partitions (#8319)
       * Update test to watch for v1 cronjob instead of v1beta1 (#8356)
       * Update PodDisruptionBudget from v1beta1 to v1 (#7977)
       * Add support for tls certs via k8s tls secrets for rgw (#8243)
       * Create correct ClusterRoleBinding for helm chart in namespace other
         than rook-ceph (#8344)
       * If two mgrs, ensure services are reconciled with the cluster (#8330)
       * Proxy rbd commands when multus is enabled (#8339)
       * Proxy ceph command when multus is configured (#8272)
       * Ensure OSD keyring exists at OSD pod start (#8155)
       * Add an example of a pvc-based ceph cluster on bare metal (#7969)
       * Mount /dev for the OSD daemon on lv-backed pvc (#8304)
       * Add ceph cluster context for lib bucket provisioning reconcile
         (#8310)
       * Create PDBs for all rgw and cephfs (#8301)
       * Always rehydrate the access and secret keys (#8286)
       * Fix PDB of RGW instances (#8274)
       * Ability to disable pool mirroring (#8215)
        * Fetch the rgw port from the CephObjectStore for the OBC (#8244)
        * Enable debug logging for the adminops client when the log level
          is debug (#8208)
       * Update blockPoolChannel before starting the mirror monitoring (#8222)
       * Scaling down nfs deployment was failing (#8250)

    - Removed update-tarball.sh (the _service file will be used instead)

    - Update to v1.6.7. Rook v1.6.7 is a patch release limited in scope and
     focusing on small feature additions and bug fixes.
     * Ceph
       * Ignore atari partitions for OSDs when scanning disks. This is a
         partial fix for multiple OSDs being created unexpectedly per disk,
         causing OSD corruption. See details to know if your OSDs are
         affected (#8195)
        * Update CSIDriver object from v1beta1 to v1 (#8029)
       * Retry cluster reconcile immediately after cancellation (#8237)
       * Avoid operator resource over-usage when configuring RGW pools and
         memory limits are applied (#8238)
       * Remove k8s.io/kubernetes as a code dependency (#7913)
       * Silence harmless errors if the operator is still initializing (#8227)
       * If MDS resource limits are not set, assign mds_cache_memory_limit =
         resource requests * 0.8 (#8180)
       * Do not require rgw instances spec for external clusters (#8219)
       * Add tls support to external rgw endpoint (#8092)
       * Stop overwriting shared livenessProbe when overridden (#8206)
       * Update cluster-on-pvc example for proper OSD scheduling (#8199)
    - Update to v1.6.6. Rook v1.6.6 is a patch release limited in scope and
     focusing on small feature additions and bug fixes.
     * Ceph
       * Update csi sidecar images to latest release (#8125)
       * Update csi node-driver-registrar to latest release (#8190)
       * Evict a mon if colocated with another mon (#8181)
       * Enable logging in legacy LVM OSD daemons (#8175)
       * Do not leak key encryption key to the log (#8173)
       * Read and validate CSI params in a goroutine (#8140)
       * Only require rgw-admin-ops user when an RGW endpoint is provided
         (#8164)
       * Avoid unnecessary OSD restarts when multus is configured (#8142)
       * Use cacert if no client cert/key are present for OSD encryption with
         Vault (#8157)
       * Mons in stretch cluster should be assigned to a node when using
         dataDirHostPath (#8147)
       * Support cronjob v1 for newer versions of K8s to avoid deprecated
         v1beta1 (#8114)
        * Initialise httpclient for the bucketchecker and objectstoreuser
          (#8139)
       * Activate osd container should use correct host path for config
         (#8137)
       * Set device class for already present osd deployments (#8134)
       * No need for --force when creating filesystem (#8130)
       * Expose enableCSIHostNetwork correctly in the helm chart (#8074)
       * Add RBAC for mgr to create service monitor (#8118)
       * Update operator internal controller runtime and k8s reference
         version (#8087)
    - Update to v1.6.5. Rook v1.6.5 is a patch release limited in scope and
     focusing on small feature additions and bug fixes. We are happy to
     announce the availability of a Helm chart to configure the CephCluster
     CR. Please try it out and share feedback! We would like to declare it
     stable in v1.7.
     * Ceph
       * Experimental Helm chart for CephClusters (#7778)
       * Disable insecure global id if no insecure clients are detected. If
         insecure clients are still required, see these instructions. (#7746)
       * Enable host networking by default in the CSI driver due to issues
         with client IO hangs when the driver restarts (#8102)
       * Add a disaster recovery guide for an accidentally deleted
         CephCluster CR (#8040)
       * Do not fail prepareOSD job if devices are not passed (#8098)
       * Ensure MDS and RGW are upgraded anytime the ceph image changes
         (#8060)
       * External cluster config enables v1 address type when enabling v2
         (#8083)
       * Create object pools in parallel for faster object store reconcile
         (#8082)
       * Fix detection of delete event reconciliation (#8086)
       * Use RGW admin API for s3 user management (#7998)
    - Update to v1.6.4. Rook v1.6.4 is a patch release limited in scope and
     focusing on small feature additions and bug fixes.
     * Ceph
        * Support for separate tolerations and affinities for the rbd and
          cephfs CSI drivers (#8006); a config sketch follows this entry.
       * Update ceph version to 15.2.13 (#8004)
       * External cluster upgrades fix for CRD schema (#8042)
       * Build with golang 1.16 instead of 1.15 (#7945)
       * Retry starting CSI drivers on initial failure (#8020)
       * During uninstall stop monitoring rbd mirroring before cleanup (#8031)
       * Update the backend path for RGW transit engine (#8008)
       * If reducing mon count only remove one extra mon per health check
         (#8011)
       * Parse radosgw-admin json properly for internal commands (#8000)
       * Expand OSD PVCs only if the underlying storage class allow expansion
         (#8001)
       * Allow the operator log level to be changed dynamically (#7976)
       * Pin experimental volume replication to release-v0.1 branch (#7985)
       * Remove '--site-name' arg when creating bootstrap peer token (#7986)
       * Do not configure external metric endpoint if not present (#7974)
       * Helm chart to allow multiple filesystems (#7930)
       * Rehydrate the bootstrap peer token secret on monitor changes (#7935)
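
    A config sketch for the separate tolerations, assuming they are read
    from the rook-ceph-operator-config ConfigMap via per-driver keys such
    as CSI_RBD_PLUGIN_TOLERATIONS and CSI_CEPHFS_PLUGIN_TOLERATIONS; the
    taint keys are hypothetical:

       apiVersion: v1
       kind: ConfigMap
       metadata:
         name: rook-ceph-operator-config
         namespace: rook-ceph
       data:
         CSI_RBD_PLUGIN_TOLERATIONS: |
           - key: storage-node          # hypothetical taint key
             operator: Exists
             effect: NoSchedule
         CSI_CEPHFS_PLUGIN_TOLERATIONS: |
           - key: cephfs-node           # hypothetical taint key
             operator: Exists
             effect: NoSchedule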
    - Update to v1.6.3. Rook v1.6.3 is a patch release limited in scope and
     focusing on small feature additions and bug fixes.
     * Ceph
       * Ensure correct devices are started for OSDs after node restart
         (#7951)
       * Write reconcile results to events on the CephCluster CR (#7222)
       * Updated dashboard ingress example for networking v1 (#7933)
       * Remove obsolete gateway type setting in object store CRD (#7919)
       * Support specifying only public network or only cluster network or
         both (#7546)
       * Generate same operator deployment for OKD as OCP (#7898)
       * Ensure correct hostpath lock for OSD integrity (#7886)
       * Improve resilience of mon failover if operator is restarted during
         failover (#7884)
       * Disallow overriding the liveness probe handler function (#7889)
       * Actively update the service endpoint for external mgr (#7875)
       * Remove obsolete CSI statefulset template path vars from K8s 1.13
         (#7877)
       * Create crash collector pods after mon secret created (#7867)
       * OSD controller only updates PDBs during node drains instead of any
         OSD down event (#7726)
       * Allow heap dump generation when logCollector sidecar is not running
         (#7847)
       * Add nullable to object gateway settings (#7857)
    - Update to v1.6.2. Rook v1.6.2 is a patch release limited in scope and
     focusing on small feature additions and bug fixes.
     * Ceph
       * Set base Ceph operator image and example deployments to v16.2.2
         (#7829)
       * Update snapshot APIs from v1beta1 to v1 (#7711)
       * Documentation for creating static PVs (#7782)
       * Allow setting primary-affinity for the OSD (#7807)
       * Remove unneeded debug log statements (#7526)
       * Preserve volume claim template annotations during upgrade (#7835)
       * Allow re-creating erasure coded pool with different settings (#7820)
       * Double mon failover timeout during a node drain (#7801)
       * Remove unused volumesource schema from CephCluster CRD (#7813)
       * Set the device class on raw mode osds (#7815)
       * External cluster schema fix to allow not setting mons (#7789)
       * Add phase to the CephFilesystem CRD (#7752)
       * Generate full schema for volumeClaimTemplates in the CephCluster CRD
         (#7631)
       * Automate upgrades for the MDS daemon to properly scale down and
         scale up (#7445)
       * Add Vault KMS support for object stores (#7385)
       * Ensure object store endpoint is initialized when creating an object
         user (#7633)
       * Support for OBC operations when RGW is configured with TLS (#7764)
       * Preserve the OSD topology affinity during upgrade for clusters on
         PVCs (#7759)
       * Unify timeouts for various Ceph commands (#7719)
       * Allow setting annotations on RGW service (#7598)
       * Expand PVC size of mon daemons if requested (#7715)
    - Update to v1.6.1. Rook v1.6.1 is a patch release limited in scope and
     focusing on small feature additions and bug fixes.
     * Ceph
       * Disable host networking by default in the CSI plugin with option to
         enable (#7356)
       * Fix the schema for erasure-coded pools so replication size is not
         required (#7662)
       * Improve node watcher for adding new OSDs (#7568)
       * Operator base image updated to v16.2.1 (#7713)
       * Deployment examples updated to Ceph v15.2.11 (#7733)
       * Update Ceph-CSI to v3.3.1 (#7724)
       * Allow any device class for the OSDs in a pool instead of restricting
         the schema (#7718)
       * Fix metadata OSDs for Ceph Pacific (#7703)
       * Allow setting the initial CRUSH weight for an OSD (#7472)
       * Fix object store health check in case SSL is enabled (#7331)
       * Upgrades now ensure latest config flags are set for MDS and RGW
         (#7681)
       * Suppress noisy RGW log entry for radosgw-admin commands (#7663)
   - Update to v1.6.0
     * Major Themes: v1.6.0 is a minor release with features primarily for
       the Ceph operator.
     * K8s Version Support: Kubernetes supported versions are 1.11 and
       newer.
     * Upgrade Guides: If you are running a previous Rook version, please
       see the corresponding storage provider upgrade guide.
     * Breaking Changes
       * Removed Storage Providers: Each storage provider is unique and
         requires time and attention to properly develop and support. After
         much discussion with the community, we have decided to remove
         three storage providers from Rook in order to focus our efforts on
         storage providers that have active community support. See the
         project status for more information. These storage providers have
         been removed:
         * CockroachDB
         * EdgeFS
         * YugabyteDB
       * Ceph: Support for creating OSDs via Drive Groups was removed.
         Please refer to the Ceph upgrade guide for migration instructions.
     * Features (Ceph)
       * Ceph Pacific (v16) support, including features such as:
         * Multiple Ceph Filesystems
         * Networking dual stack
         * CephFilesystemMirror CRD to support mirroring of CephFS volumes
           with Pacific
       * Ceph CSI Driver
         * CSI v3.3.0 driver enabled by default
         * Volume Replication Controller for improved RBD replication
           support
         * Multus support
         * GRPC metrics disabled by default
       * Ceph RGW
         * Extended the support of vault KMS configuration
         * Scale with multiple daemons in a single deployment instead of a
           separate deployment for each rgw daemon
       * OSDs: LVM is no longer used to provision OSDs as of Nautilus
         14.2.14, Octopus 15.2.9, and Pacific 16.2.0, simplifying the OSDs
         on raw devices, except for encrypted OSDs and multiple OSDs per
         device. More efficient updates for multiple OSDs at the same time
         (in the same failure domain) speed up upgrades for larger Ceph
         clusters.
       * Multiple Ceph mgr daemons are supported for stretch clusters and
         other clusters where HA of the mgr is critical (set count: 2 under
         mgr in the CephCluster CR, as sketched below).
       * Pod Disruption Budgets (PDBs) are enabled by default for Mon, RGW,
         MDS, and OSD daemons. See the disruption management settings.
       * Monitor failover can be disabled, for scenarios where maintenance
         is planned and automatic mon failover is not desired.
       * CephClient CRD has been converted to use the controller-runtime
         library.
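
    A sketch of requesting two mgr daemons in the CephCluster CR, as noted
    above; the image tag and host path are illustrative:

       apiVersion: ceph.rook.io/v1
       kind: CephCluster
       metadata:
         name: rook-ceph
         namespace: rook-ceph
       spec:
         cephVersion:
           image: quay.io/ceph/ceph:v16.2.6   # official images on quay.io
         dataDirHostPath: /var/lib/rook
         mon:
           count: 3
         mgr:
           count: 2   # second mgr for HA; a standby takes over on failure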


Patch Instructions:

   To install this SUSE Recommended Update use the SUSE recommended installation methods
   like YaST online_update or "zypper patch".

   Alternatively you can run the command listed for your product:

   - SUSE Enterprise Storage 7:

      zypper in -t patch SUSE-Storage-7-2022-154=1



Package List:

   - SUSE Enterprise Storage 7 (noarch):

      rook-ceph-helm-charts-1.7.7+git0.4ec49a23b-3.24.3
      rook-k8s-yaml-1.7.7+git0.4ec49a23b-3.24.3

