From Michael.Bukva at suse.com Sun Jul 2 20:38:37 2017
From: Michael.Bukva at suse.com (Michael Bukva)
Date: Mon, 3 Jul 2017 02:38:37 +0000
Subject: [Deepsea-users] Antw: Re: Add management of disks/devices that already have partitions
In-Reply-To: <5956644E0200001C00104FB1@prv-mh.provo.novell.com>
References: <5955EF040200001C00104EFA@prv-mh.provo.novell.com> <5955FE500200001C002E9ABE@prv-mh.provo.novell.com> <5407412.xEQ2N0ij9I@fury.home> <5956644E0200001C00104FB1@prv-mh.provo.novell.com>
Message-ID:

[ Disclaimer: deepsea & Ceph newbie ;) ]

How can I tell that deepsea has ignored, as opposed to "failed to detect", certain disks? That sub-problem is actually *the* problem for relative newbies, IMO.

Doing Stuff to disks aggressively is often not a good idea (!). Urging an admin to do Stuff to disks just at the moment they are under pressure to press "deploy" is a less bad idea that comes with added complexities, as discussed well on this thread.

A third option is simply to tell folks what you did. A "silent ignore" of disks is not that.

Naïve or not, I hope "documenting ignores" for clarity is a low-impact enhancement for deepsea relative to other approaches.

Regards,
-MB

From: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Martin Weiss
Sent: Saturday, 1 July 2017 12:46 AM
To: deepsea-users at lists.suse.com
Subject: Re: [Deepsea-users] Antw: Re: Add management of disks/devices that already have partitions

Thanks - commented there!
Martin

On 30.06.2017 at 12:30, Eric Jackson wrote:

The issue is https://github.com/SUSE/DeepSea/issues/259, but we have not had the time to write anything for this.

On Friday, June 30, 2017 01:31:28 AM Martin Weiss wrote:

As far as I have understood deepsea, it ignores all disks/devices that already have partitions. This causes additional effort to clean these disks manually before they can show up in profiles.

Would it be possible to enhance deepsea "not" to ignore these disks? IMO it would be great to discover them and add them to the proposals - maybe in a separate section of the files like "already partitioned". Then, in a further step, these disks could be specified/moved in the profiles by an administrator, which would then allow deepsea to clear/use them during the deployment stage in case they are specified to be an OSD filesystem/journal/wal/db... (deepsea could clean the disks before putting OSD data on them). IMO such a feature would allow more complete disk management with deepsea...

Thoughts?

Fine until somebody uses this feature to wipe their data for which they have no backup. Automating disk-zapping/filesystem-wiping is a tricky business.

Is the risk higher if we provide a framework around this within deepsea, or if the customer uses manual steps? IMO, if we had this support in deepsea we could add checks and verifications, make it easier and safer for the admin, and ensure he is not deleting the OS or existing OSD data..

Martin
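One low-tech way to answer Michael's question today, not a DeepSea feature but a manual comparison of what a minion has against what the generated proposals mention (the minion name below is hypothetical; the proposal path matches the one that appears later in this archive):

  # devices that made it into a generated proposal for one minion
  grep -ho '/dev/disk/by-id/[^": ]*' \
      /srv/pillar/ceph/proposals/profile-*/stack/default/ceph/minions/node1.example.com.yml | sort -u

  # whole disks the minion actually has (partition entries filtered out)
  salt 'node1.example.com' cmd.run 'ls /dev/disk/by-id/ | grep -v part'

  # anything present on the minion but absent from the proposals was skipped, silently, today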
From lgrimmer at suse.com Tue Jul 4 07:36:29 2017
From: lgrimmer at suse.com (Lenz Grimmer)
Date: Tue, 4 Jul 2017 15:36:29 +0200
Subject: [Deepsea-users] Official DeepSea package repositories
Message-ID:

Hi,

I'm currently working on updating the upstream openATTIC 3.x installation documentation to reflect the requirements of having a Ceph cluster that has been deployed using DeepSea.

Which is the official/up-to-date package repository on OBS that users of the upstream packages should be using for installation?

A search for "DeepSea" on OBS yields the following likely candidates:

https://build.opensuse.org/package/show/home:swiftgist/deepsea
https://build.opensuse.org/package/show/filesystems:ceph/deepsea
https://build.opensuse.org/package/show/filesystems:ceph:luminous/deepsea

The package in Eric's "swiftgist" home project (which used to be the default location) is at version 0.7.13, while the one in the ceph:luminous project is currently at 0.7.15. The latter also contains a 0.7.11 tarball, but that one does not seem to be used by the RPM spec file. The one in filesystems:ceph/deepsea is at version 0.7.11, too.

I assume that for openATTIC 3.x and Ceph Luminous, the package in filesystems:ceph:luminous/deepsea is the correct one to use, correct?

Thanks,
Lenz

--
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
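A quick way to compare what each of those candidate projects currently ships, assuming the osc command-line client for OBS is installed and configured (a sketch, not an official procedure):

  osc ls home:swiftgist deepsea
  osc ls filesystems:ceph deepsea
  osc ls filesystems:ceph:luminous deepsea

Each listing shows the tarball and spec file kept in that project, which is usually enough to read off the packaged version.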
From tserong at suse.com Tue Jul 4 20:38:47 2017
From: tserong at suse.com (Tim Serong)
Date: Wed, 5 Jul 2017 12:38:47 +1000
Subject: [Deepsea-users] Official DeepSea package repositories
In-Reply-To:
References:
Message-ID: <83da0e66-5cc2-feac-20e6-ae42829ddd26@suse.com>

On 07/04/2017 11:36 PM, Lenz Grimmer wrote:
> Hi,
>
> I'm currently working on updating the upstream openATTIC 3.x
> installation documentation to reflect the requirements of having a Ceph
> cluster that has been deployed using DeepSea.
>
> Which is the official/up-to-date package repository on OBS that users of
> the upstream packages should be using for installation?
>
> A search for "DeepSea" on OBS yields the following likely candidates:
>
> https://build.opensuse.org/package/show/home:swiftgist/deepsea
> https://build.opensuse.org/package/show/filesystems:ceph/deepsea
> https://build.opensuse.org/package/show/filesystems:ceph:luminous/deepsea
>
> The package in Eric's "swiftgist" home project (which used to be the
> default location) is at version 0.7.13, while the one in the
> ceph:luminous project is currently at 0.7.15. The latter also contains a
> 0.7.11 tarball, but that one does not seem to be used by the RPM spec file.

That tarball just needs deleting.

> The one in filesystems:ceph/deepsea is at version 0.7.11, too.
>
> I assume that for openATTIC 3.x and Ceph Luminous, the package in
> filesystems:ceph:luminous/deepsea is the correct one to use, correct?

Short version: yes.

Long version: Right now, filesystems:ceph:luminous/deepsea is a link to filesystems:ceph/deepsea. Both should thus be the latest and greatest, which really means filesystems:ceph:luminous/deepsea needs to be submitted to filesystems:ceph/deepsea to make that true.

In future, we may end up with something like the following:

- filesystems:ceph:jewel/deepsea (for deepsea version 0.6.x)
- filesystems:ceph:luminous/deepsea (for deepsea version 0.7.x)
- filesystems:ceph:m[whatever]/deepsea (etc.)

...with the latest version always being a link to filesystems:ceph/deepsea. This makes sense if we're maintaining multiple codestreams of deepsea. If it turns out that we don't actually need to do that, and can just maintain a single codestream, we can drop the structure and only use filesystems:ceph/deepsea.

Regards,

Tim
--
Tim Serong
Senior Clustering Engineer
SUSE
tserong at suse.com

From jfajerski at suse.com Wed Jul 5 01:57:08 2017
From: jfajerski at suse.com (Jan Fajerski)
Date: Wed, 5 Jul 2017 09:57:08 +0200
Subject: [Deepsea-users] Official DeepSea package repositories
In-Reply-To: <83da0e66-5cc2-feac-20e6-ae42829ddd26@suse.com>
References: <83da0e66-5cc2-feac-20e6-ae42829ddd26@suse.com>
Message-ID: <20170705075708.noyfct7vbgdvq7lq@jf_suse_laptop>

On Wed, Jul 05, 2017 at 12:38:47PM +1000, Tim Serong wrote:
>On 07/04/2017 11:36 PM, Lenz Grimmer wrote:
>> I assume that for openATTIC 3.x and Ceph Luminous, the package in
>> filesystems:ceph:luminous/deepsea is the correct one to use, correct?
>
>Short version: yes.
>
>Long version: Right now, filesystems:ceph:luminous/deepsea is a link to
>filesystems:ceph/deepsea. Both should thus be the latest and greatest,
>which really means filesystems:ceph:luminous/deepsea needs to be
>submitted to filesystems:ceph/deepsea to make that true.
>
>In future, we may end up with something like the following:
>
>- filesystems:ceph:jewel/deepsea (for deepsea version 0.6.x)
>- filesystems:ceph:luminous/deepsea (for deepsea version 0.7.x)
>- filesystems:ceph:m[whatever]/deepsea (etc.)

Seconded!

>
>...with the latest version always being a link to
>filesystems:ceph/deepsea. This makes sense if we're maintaining
>multiple codestreams of deepsea. If it turns out that we don't actually
>need to do that, and can just maintain a single codestream, we can drop
>the structure and only use filesystems:ceph/deepsea.

I think we already have two codestreams for jewel and luminous/master. I have a service file for deepsea... will chat with Eric today. So the packaging should become more formalised soon.

>
>Regards,
>
>Tim
--
Jan Fajerski
Engineer Enterprise Storage
SUSE Linux GmbH,
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
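To put the answer above into practice, a sketch of consuming that repository with zypper; the openSUSE_Leap_42.3 subdirectory is an assumption, so adjust it to whatever distribution target the repository actually publishes:

  zypper addrepo https://download.opensuse.org/repositories/filesystems:/ceph:/luminous/openSUSE_Leap_42.3/filesystems:ceph:luminous.repo
  zypper refresh
  zypper install deepsea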
From elezar at suse.com Thu Jul 6 07:40:43 2017
From: elezar at suse.com (Eugene Lezar)
Date: Thu, 06 Jul 2017 07:40:43 -0600
Subject: [Deepsea-users] Importing of ceph-deploy clusters (eg. for upgrading SES 3 to SES 5)
References: <595E59FB02000055000FF2AE@prv1-mh.provo.novell.com>
Message-ID: <595E59FB02000055000FF2AE@prv1-mh.provo.novell.com>

Hello,

Should this already be available with deepsea version 0.7.15-1.3? From "https://build.opensuse.org/package/show/home:swiftgist/deepsea" I see:

..
- Initial support for importing ceph-deploy clusters
...

However, this does not seem to be the case yet.

Thanks,
Eugene

From smueller at suse.com Thu Jul 6 03:40:58 2017
From: smueller at suse.com (Stephan Müller)
Date: Thu, 06 Jul 2017 11:40:58 +0200
Subject: [Deepsea-users] [ses-users] Trying out migrations
In-Reply-To: <2025010.7cllH2ST15@fury.home>
References: <2025010.7cllH2ST15@fury.home>
Message-ID: <1499334058.5251.70.camel@suse.com>

Hi,

I tried to migrate from filestore to bluestore according to this mail, but without success. After that I tried step 1 from this mail, http://mailman.suse.de/mlarch/SuSE/ses-users/2017/ses-users.2017.05/msg00201.html, and successfully put my cluster into an error state it hasn't recovered from for over half an hour (no change at all in the output of ceph -s). I will describe below where the problems started, following your mail.

On Thursday, 29.06.2017 at 10:41 -0400, Eric Jackson wrote:
> Hello all,
>   For everyone wanting to try out the migration functionality with DeepSea
> 0.7.15 (and soon node by node with 0.7.16), here is a primitive workflow. I
> would suggest using the new proposal runner to create multiple configurations.
>
> 1) Start with a fresh cluster

My cluster wasn't completely fresh, as I had used it for one or two weeks, but only to create pools and RBDs with no files on them.

> 2) Run "salt '*' osd.report"
>    The result should agree that your current configuration is active

They were.

> 3) Create a new hardware profile
>    e.g. salt-run proposal.populate nvme-ssd=True ratio=3 name=bluestore+waldb
>    I used this for servers with 6 SSDs and 2 NVMe devices.

I did. It returned {} when I ran it the first time. Later I ran it again and got a config as the return value, but also this warning message:

"[WARNING ] not overwriting existing proposal stargazer-sle-1.oa.suse.de"

> 4) Edit your policy.cfg, comment out the existing profile and add the new
>    profile. I commented out these lines:
>
> profile-2Intel745GB-6INTEL372GB-1/cluster/*.sls
> profile-2Disk2GB-1/stack/default/ceph/minions/data*.yml
>
> and added
>
> profile-bluestore+waldb/cluster/blueshark[4-8]*.sls
> profile-bluestore+waldb/stack/default/ceph/minions/blueshark[4-8]*.yml

I did the same.

> 5) Refresh the pillar (e.g. salt-run state.orch ceph.stage.2)

I got the following error:

"profiles_populated: ['There are no files under the profiles directory. Probably an issue with the discovery stage.']
[ERROR ] No highstate or sls specified, no execution made"
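A quick sanity check for the likely cause of that message before re-running stage 2: the profile directories named in policy.cfg should exist and contain the per-minion .sls and .yml files (paths taken from the policy.cfg lines quoted above):

  ls /srv/pillar/ceph/proposals/
  ls /srv/pillar/ceph/proposals/profile-bluestore+waldb/cluster/
  ls /srv/pillar/ceph/proposals/profile-bluestore+waldb/stack/default/ceph/minions/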
Then I started running step 3 again, which gave the warning seen under step 3, but I continued with step 5 and got the same error. Then I started to copy the missing files from the old, commented-out profile. That worked but didn't change anything (if I had really worked with DeepSea before, I would have known that).

> 6) Run "salt '*' osd.report"
>    Depending on the configuration changes between the existing and new
>    profile, expect to see some list of devices
>
> 7) Run the migration
>    salt-run state.orch ceph.migrate.osds
>    Unfortunately, no progress is returned. Running 'ceph osd tree' will give
>    an indication of which server and OSD is currently reconfiguring.

To run this you first have to run:

salt-run disengage.safety

After I ran this, I did step 1 from the other mail I mentioned (because nothing had changed), and my cluster ended up in an unrecoverable error state (at least for me).

> 8) Run "salt '*' osd.report" when complete. The expectation is that
>    everything is converted. If any devices did not succeed, check
>    /var/log/salt/minion for commands related to that device. All commands
>    (e.g. sgdisk, prepare, activate, etc.) are logged.
>
> With our working wip-osd branch, Stage 3 will be able to correct broken OSDs
> under some conditions. I expect this to be available with 0.7.16.
>
> ----
> Another comment about the current strategy: Stage 3 will only add an OSD and
> the migrate will only reconfigure an OSD. I can see the argument that Stage 3
> should just make the cluster the way the admin wants it.
>
> Any thoughts on this either way would be appreciated.

My final question is: why did step 3 not succeed, leaving me clueless about what's wrong? :/

--
Stephan Müller
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
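For readers trying to follow along, here is the command sequence from Eric's workflow plus the safety step Stephan mentions, condensed into one list (profile name as used in this thread; treat it as a sketch, not an official procedure):

  salt '*' osd.report                      # 2) confirm the current configuration is active
  salt-run proposal.populate nvme-ssd=True ratio=3 name=bluestore+waldb    # 3) new hardware profile
  # 4) edit policy.cfg: swap the old profile lines for the profile-bluestore+waldb/... lines
  salt-run state.orch ceph.stage.2         # 5) refresh the pillar
  salt '*' osd.report                      # 6) list the devices that will change
  salt-run disengage.safety                # required before the migration, per Stephan
  salt-run state.orch ceph.migrate.osds    # 7) run the migration
  salt '*' osd.report                      # 8) verify the conversion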
From bo.jin at suse.com Wed Jul 12 02:59:01 2017
From: bo.jin at suse.com (bjin)
Date: Wed, 12 Jul 2017 10:59:01 +0200
Subject: [Deepsea-users] discovery output not ok
Message-ID: <7b6660b3-b398-0706-5baf-d4474a06fcca@suse.com>

Hi,

I have 4 OSD nodes, each with 2 SATA disks and 1 SSD.

After stage.discover I got this profile, which I want to use (SATA for OSD, SSD for journal). But with this profile I got an error that no 3rd partition could be created.

After deleting the osds section it worked, thanks to Martin Weiss' hint. Any thoughts?

/srv/pillar/ceph/proposals/profile-1INTEL111GB-2WDC931GB-2/stack/default/ceph/minions
# cat sesnode3.suse.home.yml
storage:
  data+journals:
  - /dev/disk/by-id/ata-WDC_WD10EFRX-68FYTN0_WD-WCC4J3KTLJUE: /dev/disk/by-id/ata-INTEL_SSDSC2KW120H6_CVLT648505MG120GGN
  - /dev/disk/by-id/ata-WDC_WD10EFRX-68FYTN0_WD-WCC4J3AY0ZS5: /dev/disk/by-id/ata-INTEL_SSDSC2KW120H6_CVLT648505MG120GGN

osds:

 - /dev/disk/by-id/ata-INTEL_SSDSC2KW120H6_CVLT648505MG120GGN

--
Bo Jin
Sales Engineer
SUSE
Mobile: +41 79 2586688
Merkurstrasse 14
Postfach 14
8953 Dietikon
Schweiz

From jfajerski at suse.com Wed Jul 12 03:45:02 2017
From: jfajerski at suse.com (Jan Fajerski)
Date: Wed, 12 Jul 2017 11:45:02 +0200
Subject: [Deepsea-users] discovery output not ok
In-Reply-To: <7b6660b3-b398-0706-5baf-d4474a06fcca@suse.com>
References: <7b6660b3-b398-0706-5baf-d4474a06fcca@suse.com>
Message-ID: <20170712094502.ipqq5omah5aguspm@jf_suse_laptop>

Hi,

currently DeepSea still uses the old proposal code. Your report certainly sounds like a bug, but we will probably not fix it. The new proposal code is already in DeepSea, just not yet used by default.

You can have a look at the help text of the new proposal runner by running

salt-run proposal.help

Have a look at what is proposed by running

salt-run proposal.peek

and to write the proposal for use in stage 3 run

salt-run proposal.populate

hth,
Jan

On Wed, Jul 12, 2017 at 10:59:01AM +0200, bjin wrote:
>Hi,
>
>I have 4 OSD nodes, each with 2 SATA disks and 1 SSD.
>
>After stage.discover I got this profile, which I want to use (SATA for
>OSD, SSD for journal). But with this profile I got an error that no
>3rd partition could be created.
>
>After deleting the osds section it worked, thanks to Martin Weiss' hint.
>Any thoughts?
>
>/srv/pillar/ceph/proposals/profile-1INTEL111GB-2WDC931GB-2/stack/default/ceph/minions
># cat sesnode3.suse.home.yml
>storage:
>  data+journals:
>  - /dev/disk/by-id/ata-WDC_WD10EFRX-68FYTN0_WD-WCC4J3KTLJUE: /dev/disk/by-id/ata-INTEL_SSDSC2KW120H6_CVLT648505MG120GGN
>  - /dev/disk/by-id/ata-WDC_WD10EFRX-68FYTN0_WD-WCC4J3AY0ZS5: /dev/disk/by-id/ata-INTEL_SSDSC2KW120H6_CVLT648505MG120GGN
>
>osds:
>
> - /dev/disk/by-id/ata-INTEL_SSDSC2KW120H6_CVLT648505MG120GGN

--
Jan Fajerski
Engineer Enterprise Storage
SUSE Linux GmbH,
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)

From bo.jin at suse.com Wed Jul 12 07:27:39 2017
From: bo.jin at suse.com (bjin)
Date: Wed, 12 Jul 2017 15:27:39 +0200
Subject: [Deepsea-users] discovery output not ok
In-Reply-To: <4064328.b9j51Ua5NG@fury.home>
References: <7b6660b3-b398-0706-5baf-d4474a06fcca@suse.com> <4064328.b9j51Ua5NG@fury.home>
Message-ID:

You are right. The two blanks in front of osds were accidentally removed by me in the email; in the yml file it is correct.

On 07/12/2017 02:30 PM, Eric Jackson wrote:
> On Wednesday, July 12, 2017 10:59:01 AM bjin wrote:
>> Hi,
>>
>> I have 4 OSD nodes, each with 2 SATA disks and 1 SSD.
>>
>> After stage.discover I got this profile, which I want to use (SATA for
>> OSD, SSD for journal). But with this profile I got an error that no 3rd
>> partition could be created.
>>
>> After deleting the osds section it worked, thanks to Martin Weiss' hint.
>> Any thoughts?
>>
> Is the pasting of the yaml file accurate? Are there no spaces in front of the
> osds keyword? There should be two to be at the same level as data+journals.
>
>> /srv/pillar/ceph/proposals/profile-1INTEL111GB-2WDC931GB-2/stack/default/ceph/minions
>> # cat sesnode3.suse.home.yml
>> storage:
>>   data+journals:
>>   - /dev/disk/by-id/ata-WDC_WD10EFRX-68FYTN0_WD-WCC4J3KTLJUE: /dev/disk/by-id/ata-INTEL_SSDSC2KW120H6_CVLT648505MG120GGN
>>   - /dev/disk/by-id/ata-WDC_WD10EFRX-68FYTN0_WD-WCC4J3AY0ZS5: /dev/disk/by-id/ata-INTEL_SSDSC2KW120H6_CVLT648505MG120GGN
>>
>> osds:
>>
>> - /dev/disk/by-id/ata-INTEL_SSDSC2KW120H6_CVLT648505MG120GGN

--
Bo Jin
Sales Engineer
SUSE
Mobile: +41 79 2586688
Merkurstrasse 14
Postfach 14
8953 Dietikon
Schweiz
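For reference, a sketch of the layout Eric describes, with osds indented two spaces so that it sits inside storage: at the same level as data+journals (device IDs shortened here for readability):

  # cat sesnode3.suse.home.yml
  storage:
    data+journals:
    - /dev/disk/by-id/ata-WDC_WD10EFRX-...: /dev/disk/by-id/ata-INTEL_SSDSC2KW120H6_...
    - /dev/disk/by-id/ata-WDC_WD10EFRX-...: /dev/disk/by-id/ata-INTEL_SSDSC2KW120H6_...
    osds:
    - /dev/disk/by-id/ata-INTEL_SSDSC2KW120H6_...

Note that even with the indentation right, the generated profile listed the journal SSD under osds as well; Bo reported that the partition error disappeared once that osds entry was removed.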
From jfajerski at suse.com Thu Jul 13 03:31:00 2017
From: jfajerski at suse.com (Jan Fajerski)
Date: Thu, 13 Jul 2017 11:31:00 +0200
Subject: [Deepsea-users] discovery output not ok
In-Reply-To: <7b6660b3-b398-0706-5baf-d4474a06fcca@suse.com>
References: <7b6660b3-b398-0706-5baf-d4474a06fcca@suse.com>
Message-ID: <20170713093100.ngln2f5khbkfsrsd@jf_suse_laptop>

On Wed, Jul 12, 2017 at 10:59:01AM +0200, bjin wrote:
>Hi,
>
>I have 4 OSD nodes, each with 2 SATA disks and 1 SSD.
>
>After stage.discover I got this profile, which I want to use (SATA for
>OSD, SSD for journal). But with this profile I got an error that no
>3rd partition could be created.
>
>After deleting the osds section it worked, thanks to Martin Weiss' hint.
>Any thoughts?
>
>/srv/pillar/ceph/proposals/profile-1INTEL111GB-2WDC931GB-2/stack/default/ceph/minions
># cat sesnode3.suse.home.yml
>storage:
>  data+journals:
>  - /dev/disk/by-id/ata-WDC_WD10EFRX-68FYTN0_WD-WCC4J3KTLJUE: /dev/disk/by-id/ata-INTEL_SSDSC2KW120H6_CVLT648505MG120GGN
>  - /dev/disk/by-id/ata-WDC_WD10EFRX-68FYTN0_WD-WCC4J3AY0ZS5: /dev/disk/by-id/ata-INTEL_SSDSC2KW120H6_CVLT648505MG120GGN
>
>osds:
>
> - /dev/disk/by-id/ata-INTEL_SSDSC2KW120H6_CVLT648505MG120GGN

Was this profile created like that by stage 1? If so, could you please rerun this proposal stage with debugging enabled? Add

log_level: debug

to /etc/salt/master and rerun stage 1. The log will be in /var/log/salt/master.

--
Jan Fajerski
Engineer Enterprise Storage
SUSE Linux GmbH,
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
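For anyone reproducing Jan's suggestion, a minimal sketch; the salt-master restart and the ceph.stage.1 orchestration name are assumptions based on how the other stages are invoked elsewhere in this archive:

  echo 'log_level: debug' >> /etc/salt/master    # or edit the existing log_level entry
  systemctl restart salt-master                  # pick up the new log level
  salt-run state.orch ceph.stage.1               # rerun the discovery/proposal stage
  less /var/log/salt/master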