From lmorris at suse.com  Sat Apr 8 21:26:44 2017
From: lmorris at suse.com (Larry Morris)
Date: Sun, 9 Apr 2017 03:26:44 +0000
Subject: [Deepsea-users] DeepSea 0.7.5
In-Reply-To: <1867369.YNWVSCb3nu@ruby>
References: <1867369.YNWVSCb3nu@ruby>
Message-ID: <52570FA7-60BB-4317-B117-FF7EA7D7B7F9@suse.com>

Thanks Eric

Larry

> On Apr 8, 2017, at 5:40 AM, Eric Jackson wrote:
>
> Hello all,
>   DeepSea 0.7.5 has been released.  No significant new features, but a
> substantial number of fixes, cleanup and improvements.  The CHANGELOG is
> listed below:
>
> - Fix bugs for ceph.purge, disengage.safety
> - Skip unassigned service orchestrations
> - Add pylintrc and associated bootstrap script
> - Fix ganesha ordering, restart, validation
> - Fix permissions, encoding of runners, modules
> - Add various unit tests - filequeue, push
> - Improve comment handling in policy.cfg
> - Add shared keys for mds, rgw
> - Correct building, dependencies on openSUSE
> - Fix certificate of origin, url in contributing.md
> - Change Stage 0 ordering
> - Support DEV_ENV flag
> - Enable openATTIC rpcd, systemd services
> - Add cephservices runner, module - renamed cephprocesses
> - Fix eauth for cherrypy configuration
> - Change cephfs pools initial pg from 256 to 128
> - Rewrite cephdisks to handle raid controllers, support lspci
> - Support multiple public, cluster networks
> - Various python improvements, remove unnecessary methods
>
> The rpm is available from
> https://build.opensuse.org/package/show/home:swiftgist/deepsea
>
> Eric
> _______________________________________________
> Deepsea-users mailing list
> Deepsea-users at lists.suse.com
> http://lists.suse.com/mailman/listinfo/deepsea-users

From jfajerski at suse.com  Mon Apr 10 06:36:22 2017
From: jfajerski at suse.com (Jan Fajerski)
Date: Mon, 10 Apr 2017 14:36:22 +0200
Subject: [Deepsea-users] custom ratio proposals and filters
Message-ID: <20170410123622.zj45yrzce5nuzyul@jf_suse_laptop>

Pulling this to the deepsea ml:

> > I'll send an email to the deepsea-users list once it is complete enough
> > for people to test it. So far filtering is only based on size. Options
> > are data and journal to pass different sizes for the data and journal
> > drives. They can either contain a number (for exact selection) or a
> > range.
> > What other attributes do you have in mind?
>
> I might want to select a particular vendor's SSDs to be data devices,
> combined with another vendor's NVMe for the journal (or, in the future,
> WAL/metadata), and apply custom ratios to that while I'm at it.

The basic idea works like that. Drives are filtered and then the custom
ratio is applied. The decision to use size filters for now was mostly
motivated by the assumption that this will provide enough filtering power
on a per-host basis. I.e. I thought a host with two sets of SSDs that are
the same size, but are intended for a different use, is sufficiently
unlikely.
Though I think it's no problem to add a vendor filter too.

> Or use that kind of SSDs directly, while those are meant as journals for
> my spinners. And maybe set some other BlueStore/XFS attributes
> differently too based on those.
>
> Or maybe my setup is simpler - everything that is rotational:1 is a data
> disk, and everything that isn't is a journal.

The idea is that all these setups can be achieved with the right size
filters.
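To make the filter-then-ratio flow described above concrete, here is a
minimal illustrative sketch in Python. It is not DeepSea's actual proposal
code; the drive dictionaries, the field names and the ratio handling are
assumptions made purely for illustration.

    # Illustrative only -- not DeepSea's implementation.
    # Each drive is assumed to look like:
    #   {'device': '/dev/sdb', 'size_gb': 1863, 'rotational': True}

    def size_filter(spec):
        """Accept an exact size ('1863') or a range ('1000-2000') in GB."""
        low, _, high = str(spec).partition('-')
        high = high or low
        return lambda size: int(low) <= size <= int(high)

    def propose(drives, data, journal=None, ratio=5):
        """Pick data/journal drives by size filter, then pair them ratio:1."""
        data_ok = size_filter(data)
        data_drives = [d for d in drives if data_ok(d['size_gb'])]
        if journal is None:
            # Standalone OSDs: each matching drive carries its own journal.
            return {d['device']: {} for d in data_drives}
        journal_ok = size_filter(journal)
        journal_drives = [d for d in drives
                          if journal_ok(d['size_gb']) and d not in data_drives]
        # Assumes there are enough journal drives for the requested ratio.
        return {d['device']: {'journal': journal_drives[i // ratio]['device']}
                for i, d in enumerate(data_drives)}

For example, propose(drives, data='1000-2000', journal='200-400', ratio=5)
would pair every five large drives with one small journal drive. A vendor
filter, as discussed above, would just be one more predicate applied in the
same list comprehensions.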
Currently built in is the assumption that a rotational drive can only ever
be a data device or a standalone OSD, SSDs can be journal drives for
spinners and NVMEs can

--
Jan Fajerski
Engineer Enterprise Storage
SUSE Linux GmbH,
GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)

From lmb at suse.com  Mon Apr 10 07:15:47 2017
From: lmb at suse.com (Lars Marowsky-Bree)
Date: Mon, 10 Apr 2017 15:15:47 +0200
Subject: [Deepsea-users] custom ratio proposals and filters
In-Reply-To: <20170410123622.zj45yrzce5nuzyul@jf_suse_laptop>
References: <20170410123622.zj45yrzce5nuzyul@jf_suse_laptop>
Message-ID: <20170410131547.wm7rcgeawlubqvm6@suse.com>

On 2017-04-10T14:36:22, Jan Fajerski wrote:

> The basic idea works like that. Drives are filtered and then the custom
> ratio is applied. The decision to use size filters for now was mostly
> motivated by the assumption that this will provide enough filtering power
> on a per-host basis. I.e. I thought a host with two sets of SSDs that are
> the same size, but are intended for a different use, is sufficiently
> unlikely.
> Though I think it's no problem to add a vendor filter too.

It's a good start, yes! But I'm pretty sure size filtering is neither
sufficient nor the most intuitive way of handling this.

E.g., a policy based on attributes that directly describe what I want,
instead of on factors that merely imply it at present, seems preferable.
Someone who starts with an "Oh, I know how large this drive is" and then
has to adjust it because the new revision 3 months later is 1GB
larger ... ;-)

If they can filter on arbitrary attributes, size would be one of them and
all would be well.

Regards,
    Lars

--
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde

From abonilla at suse.com  Tue Apr 11 08:16:12 2017
From: abonilla at suse.com (Alejandro Bonilla)
Date: Tue, 11 Apr 2017 14:16:12 +0000
Subject: [Deepsea-users] Recreation of a broken OSD
In-Reply-To: <4734465.EGusFN8fiK@ruby>
References: <58ECE2900200001C002DBB1F@prv-mh.provo.novell.com>
 <58ECE2900200001C002DBB1F@prv-mh.provo.novell.com>
 <4734465.EGusFN8fiK@ruby>
Message-ID: <85E35509-FC63-4191-A42D-D4951AE03FF4@suse.com>

> On Apr 11, 2017, at 10:06 AM, Eric Jackson wrote:
>
> On Tuesday, April 11, 2017 06:05:03 AM Martin Weiss wrote:
>> Hi *,
>>
>> is there a way to "re-create" an OSD via DeepSea that does not work
>> (start) anymore - i.e. due to a corrupted XFS filesystem?
>>
>> Thanks,
>> Martin
>
> We don't have the surgical removal yet.  That is, the removal of a single
> OSD with journal from a single state file.
>
> If you do remove the OSD manually, though, including the journal, then
> rerunning Stage 3 will add it back.  Is this a stand-alone OSD or a
> separate journal?
> Do you need the commands to remove the OSD from Ceph and wipe the drive?

Yes please. OSD removal, and hopefully the best magic to find that OSD's
journal partition. And how to rerun the stages so the new disk can be put
in place.

I assume the old entry for that disk may still reside in the disk list, but
would be "ignored" because it's no longer there?
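Until the "surgical" removal mentioned above exists in DeepSea, the manual
path is the generic upstream Ceph procedure. The following is a sketch
only, using the standard commands of that era (Jewel/Kraken); the OSD id,
device names and journal partition number are placeholders that must be
adapted to the actual cluster:

    ceph-disk list                        # shows data/journal pairing per OSD
    ceph osd out 12                       # drain the failed OSD
    systemctl stop ceph-osd@12.service    # on the OSD node
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12
    umount /var/lib/ceph/osd/ceph-12      # if still mounted
    sgdisk --zap-all /dev/sdX             # wipe the data disk
    sgdisk --delete=2 /dev/nvme0n1        # drop the old journal partition, if separate

With the disk and the journal partition cleared, rerunning Stage 3
(salt-run state.orch ceph.stage.3) should redeploy the OSD, as Eric
describes above.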
>
> Eric
> _______________________________________________
> Deepsea-users mailing list
> Deepsea-users at lists.suse.com
> http://lists.suse.com/mailman/listinfo/deepsea-users

From joao at suse.de  Sun Apr 23 07:50:55 2017
From: joao at suse.de (Joao Eduardo Luis)
Date: Sun, 23 Apr 2017 14:50:55 +0100
Subject: [Deepsea-users] DeepSea 0.7.6
In-Reply-To: <2582038.WBrbzbZtji@ruby>
References: <2582038.WBrbzbZtji@ruby>
Message-ID:

On 04/22/2017 02:43 PM, Eric Jackson wrote:
> Hello everyone,
>   DeepSea 0.7.6 has been released.  The notable feature is the rolling
> upgrade.  With a running Ceph cluster, an admin can gracefully upgrade
> the OS, Salt and Ceph.  See my previous email about specifics.  The
> CHANGELOG is listed below:
>
> - Rolling upgrade

I must say I'm a bit surprised seeing this today, just a single day after
you gave a "heads up about a large PR to implement the rolling upgrade".

Has this behemoth of a PR, with "81 commits and 55 files [...] Several of
these [... being ...] new" been properly peer-reviewed and discussed?

I ask this because upon checking the PR on github, as well as the merge
commit, I see no discussion or reviews/Reviewed-by.

It feels strange seeing such a large PR, with roughly 1.4k added lines, to
be announced one day and merged the very next; especially without seeing
any sort of involvement from anyone else (beside the authors).

Additionally, I'm inclined to presume there were no other set of eyes on
the PR due to commits such as

3dd3fe474df036ed4322b4d03da9d57934ac3baa

which fixes a 'typo', and could have been squashed with the previous commit

7ca4dabbd2311a04eb39b03c2b29343970f7e476

(which, in this case, would have reduced the number of commits in the
patch set).

And I have this feeling that many more like them are out there,
considering some of the commit messages.

  -Joao

From jschmid at suse.de  Sun Apr 23 20:20:44 2017
From: jschmid at suse.de (Joshua Schmid)
Date: Mon, 24 Apr 2017 04:20:44 +0200
Subject: [Deepsea-users] DeepSea 0.7.6
In-Reply-To:
References: <2582038.WBrbzbZtji@ruby>
Message-ID: <6f9c5bdf-f33f-a9fc-fcca-018883ebdc4b@suse.de>

On 23/04/2017 15:50, Joao Eduardo Luis wrote:
> On 04/22/2017 02:43 PM, Eric Jackson wrote:
>> Hello everyone,
>>   DeepSea 0.7.6 has been released.  The notable feature is the rolling
>> upgrade.  With a running Ceph cluster, an admin can gracefully upgrade
>> the OS, Salt and Ceph.  See my previous email about specifics.  The
>> CHANGELOG is listed below:
>>
>> - Rolling upgrade
>
> I must say I'm a bit surprised seeing this today, just a single day
> after you gave a "heads up about a large PR to implement the rolling
> upgrade".
>
> Has this behemoth of a PR, with "81 commits and 55 files [...] Several
> of these [... being ...] new" been properly peer-reviewed and discussed?
>
> I ask this because upon checking the PR on github, as well as the merge
> commit, I see no discussion or reviews/Reviewed-by.
>
> It feels strange seeing such a large PR, with roughly 1.4k added lines,
> to be announced one day and merged the very next; especially without
> seeing any sort of involvement from anyone else (beside the authors).

The original commit #43 [https://github.com/SUSE/DeepSea/pull/43] has
quite some comments and was reviewed by Blain. It has been open for a
while now and plenty of discussions with various people have been held.
The fashion in which the Final PR #222 was merged doesn't look as well
reviewed, but it actually is only copy+addendum of #43/162.
> Additionally, I'm inclined to presume there were no other set of eyes on
> the PR due to commits such as
>
> 3dd3fe474df036ed4322b4d03da9d57934ac3baa
>
> which fixes a 'typo', and could have been squashed with the previous commit
>
> 7ca4dabbd2311a04eb39b03c2b29343970f7e476
>
> (which, in this case, would have reduced the number of commits in the
> patch set).

+1. I should've squashed them before.
We haven't talked about a PR squashing habit in deepsea, yet. But I'm in
favor of enforcing such a rule.

> And I have this feeling that many more like them are out there,
> considering some of the commit messages.
>
> -Joao
> _______________________________________________
> Deepsea-users mailing list
> Deepsea-users at lists.suse.com
> http://lists.suse.com/mailman/listinfo/deepsea-users

--
Freundliche Grüße - Kind regards,
Joshua Schmid
SUSE Enterprise Storage
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nürnberg
------------------------------------------------------------
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Jennifer Guild,
Dilip Upmanyu, Graham Norton, HRB 21284 (AG Nürnberg)
------------------------------------------------------------

From joao at suse.de  Mon Apr 24 02:27:08 2017
From: joao at suse.de (Joao Eduardo Luis)
Date: Mon, 24 Apr 2017 09:27:08 +0100
Subject: [Deepsea-users] DeepSea 0.7.6
In-Reply-To: <6f9c5bdf-f33f-a9fc-fcca-018883ebdc4b@suse.de>
References: <2582038.WBrbzbZtji@ruby>
 <6f9c5bdf-f33f-a9fc-fcca-018883ebdc4b@suse.de>
Message-ID:

On 04/24/2017 03:20 AM, Joshua Schmid wrote:
> On 23/04/2017 15:50, Joao Eduardo Luis wrote:
>> On 04/22/2017 02:43 PM, Eric Jackson wrote:
>>> Hello everyone,
>>>   DeepSea 0.7.6 has been released.  The notable feature is the rolling
>>> upgrade.  With a running Ceph cluster, an admin can gracefully upgrade
>>> the OS, Salt and Ceph.  See my previous email about specifics.  The
>>> CHANGELOG is listed below:
>>>
>>> - Rolling upgrade
>>
>> I must say I'm a bit surprised seeing this today, just a single day
>> after you gave a "heads up about a large PR to implement the rolling
>> upgrade".
>>
>> Has this behemoth of a PR, with "81 commits and 55 files [...] Several
>> of these [... being ...] new" been properly peer-reviewed and discussed?
>>
>> I ask this because upon checking the PR on github, as well as the merge
>> commit, I see no discussion or reviews/Reviewed-by.
>>
>> It feels strange seeing such a large PR, with roughly 1.4k added lines,
>> to be announced one day and merged the very next; especially without
>> seeing any sort of involvement from anyone else (beside the authors).
>
> The original commit #43 [https://github.com/SUSE/DeepSea/pull/43] has
> quite some comments and was reviewed by Blain. It has been open for a
> while now and plenty of discussions with various people have been held.
> The fashion in which the Final PR #222 was merged doesn't look as well
> reviewed, but it actually is only copy+addendum of #43/162.

I apologize for the noise then. It was not obvious such a discussion had
been had.

In the past, I've found that in cases like these it's useful to have a
pointer to the PR where the review process happened. If not for anything
else, at least to allow the community around a project to have a sense
of transparency. Not everyone follows development, but everyone likes to
know where things are coming from when major milestones are reached.
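The squashing discussed in this exchange is the standard interactive-rebase
workflow rather than anything DeepSea-specific; a generic sketch, with the
commit count and branch name as placeholders:

    git rebase -i HEAD~2    # mark the typo-fix commit as 'fixup' (or 'squash')
    git push --force-with-lease origin my-feature-branch

Done before the pull request is merged, this collapses the fix into its
parent commit and keeps every commit in the history buildable, which is
what makes git-bisect usable later.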
>> Additionally, I'm inclined to presume there were no other set of eyes on
>> the PR due to commits such as
>>
>> 3dd3fe474df036ed4322b4d03da9d57934ac3baa
>>
>> which fixes a 'typo', and could have been squashed with the previous
>> commit
>>
>> 7ca4dabbd2311a04eb39b03c2b29343970f7e476
>>
>> (which, in this case, would have reduced the number of commits in the
>> patch set).
>
> +1. I should've squashed them before.
> We haven't talked about a PR squashing habit in deepsea, yet. But I'm in
> favor of enforcing such a rule.

Call me biased, but I believe this should not require a rule to be
enforced - it should be common sense. Squashing makes reading the
history easier, and allows you to review individual patches. If people
don't squash, you're forced to review the whole diff.

Besides, not squashing patches like the one I mentioned leads to
git-bisect being completely useless, because some of the patches won't
even compile/run/wtv.

  -Joao

From jschmid at suse.de  Mon Apr 24 03:07:34 2017
From: jschmid at suse.de (Joshua Schmid)
Date: Mon, 24 Apr 2017 11:07:34 +0200
Subject: [Deepsea-users] DeepSea 0.7.6
In-Reply-To:
References: <2582038.WBrbzbZtji@ruby>
 <6f9c5bdf-f33f-a9fc-fcca-018883ebdc4b@suse.de>
Message-ID:

On 24/04/2017 10:27, Joao Eduardo Luis wrote:
> On 04/24/2017 03:20 AM, Joshua Schmid wrote:
>> On 23/04/2017 15:50, Joao Eduardo Luis wrote:
>>> On 04/22/2017 02:43 PM, Eric Jackson wrote:
>>>> Hello everyone,
>>>>   DeepSea 0.7.6 has been released.  The notable feature is the rolling
>>>> upgrade.  With a running Ceph cluster, an admin can gracefully upgrade
>>>> the OS, Salt and Ceph.  See my previous email about specifics.  The
>>>> CHANGELOG is listed below:
>>>>
>>>> - Rolling upgrade
>>>
>>> I must say I'm a bit surprised seeing this today, just a single day
>>> after you gave a "heads up about a large PR to implement the rolling
>>> upgrade".
>>>
>>> Has this behemoth of a PR, with "81 commits and 55 files [...] Several
>>> of these [... being ...] new" been properly peer-reviewed and discussed?
>>>
>>> I ask this because upon checking the PR on github, as well as the merge
>>> commit, I see no discussion or reviews/Reviewed-by.
>>>
>>> It feels strange seeing such a large PR, with roughly 1.4k added lines,
>>> to be announced one day and merged the very next; especially without
>>> seeing any sort of involvement from anyone else (beside the authors).
>>
>> The original commit #43 [https://github.com/SUSE/DeepSea/pull/43] has
>> quite some comments and was reviewed by Blain. It has been open for a
>> while now and plenty of discussions with various people have been held.
>> The fashion in which the Final PR #222 was merged doesn't look as well
>> reviewed, but it actually is only copy+addendum of #43/162.
>
> I apologize for the noise then. It was not obvious such a discussion had
> been had.
>
> In the past, I've found that in cases like these it's useful to have a
> pointer to the PR where the review process happened. If not for anything
> else, at least to allow the community around a project to have a sense
> of transparency. Not everyone follows development, but everyone likes to
> know where things are coming from when major milestones are reached.

the description of #222 points to #43 and #162 :)

https://github.com/SUSE/DeepSea/pull/222#issue-223528515

I think we had a discussion on using high-level & non-internal labels for
milestones, just like ceph upstream does.
Couldn't find it in the archives right now, but that would've helped to
clarify the goals a bit more. Reopening the discussion hereby.

>>> Additionally, I'm inclined to presume there were no other set of eyes on
>>> the PR due to commits such as
>>>
>>> 3dd3fe474df036ed4322b4d03da9d57934ac3baa
>>>
>>> which fixes a 'typo', and could have been squashed with the previous
>>> commit
>>>
>>> 7ca4dabbd2311a04eb39b03c2b29343970f7e476
>>>
>>> (which, in this case, would have reduced the number of commits in the
>>> patch set).
>>
>> +1. I should've squashed them before.
>> We haven't talked about a PR squashing habit in deepsea, yet. But I'm in
>> favor of enforcing such a rule.
>
> Call me biased, but I believe this should not require a rule to be
> enforced - it should be common sense. Squashing makes reading the
> history easier, and allows you to review individual patches. If people
> don't squash, you're forced to review the whole diff.
>
> Besides, not squashing patches like the one I mentioned leads to
> git-bisect being completely useless, because some of the patches won't
> even compile/run/wtv.
>
> -Joao
> _______________________________________________
> Deepsea-users mailing list
> Deepsea-users at lists.suse.com
> http://lists.suse.com/mailman/listinfo/deepsea-users

--
Freundliche Grüße - Kind regards,
Joshua Schmid
SUSE Enterprise Storage
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nürnberg
------------------------------------------------------------
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Jennifer Guild,
Dilip Upmanyu, Graham Norton, HRB 21284 (AG Nürnberg)
------------------------------------------------------------

From joao at suse.de  Mon Apr 24 04:01:21 2017
From: joao at suse.de (Joao Eduardo Luis)
Date: Mon, 24 Apr 2017 11:01:21 +0100
Subject: [Deepsea-users] DeepSea 0.7.6
In-Reply-To:
References: <2582038.WBrbzbZtji@ruby>
 <6f9c5bdf-f33f-a9fc-fcca-018883ebdc4b@suse.de>
Message-ID: <141cad37-8695-cefc-2b5b-8decbf310f5c@suse.de>

On 04/24/2017 10:07 AM, Joshua Schmid wrote:
> On 24/04/2017 10:27, Joao Eduardo Luis wrote:
>> On 04/24/2017 03:20 AM, Joshua Schmid wrote:
>>> On 23/04/2017 15:50, Joao Eduardo Luis wrote:
>>>> On 04/22/2017 02:43 PM, Eric Jackson wrote:
>>>>> Hello everyone,
>>>>>   DeepSea 0.7.6 has been released.  The notable feature is the rolling
>>>>> upgrade.  With a running Ceph cluster, an admin can gracefully upgrade
>>>>> the OS, Salt and Ceph.  See my previous email about specifics.  The
>>>>> CHANGELOG is listed below:
>>>>>
>>>>> - Rolling upgrade
>>>>
>>>> I must say I'm a bit surprised seeing this today, just a single day
>>>> after you gave a "heads up about a large PR to implement the rolling
>>>> upgrade".
>>>>
>>>> Has this behemoth of a PR, with "81 commits and 55 files [...] Several
>>>> of these [... being ...] new" been properly peer-reviewed and discussed?
>>>>
>>>> I ask this because upon checking the PR on github, as well as the merge
>>>> commit, I see no discussion or reviews/Reviewed-by.
>>>>
>>>> It feels strange seeing such a large PR, with roughly 1.4k added lines,
>>>> to be announced one day and merged the very next; especially without
>>>> seeing any sort of involvement from anyone else (beside the authors).
>>>
>>> The original commit #43 [https://github.com/SUSE/DeepSea/pull/43] has
>>> quite some comments and was reviewed by Blain.
>>> It has been open for a while now and plenty of discussions with various
>>> people have been held.
>>> The fashion in which the Final PR #222 was merged doesn't look as well
>>> reviewed, but it actually is only copy+addendum of #43/162.
>>
>> I apologize for the noise then. It was not obvious such a discussion had
>> been had.
>>
>> In the past, I've found that in cases like these it's useful to have a
>> pointer to the PR where the review process happened. If not for anything
>> else, at least to allow the community around a project to have a sense
>> of transparency. Not everyone follows development, but everyone likes to
>> know where things are coming from when major milestones are reached.
>
> the description of #222 points to #43 and #162 :)
>
> https://github.com/SUSE/DeepSea/pull/222#issue-223528515

Oh well... I promise I read the description before making noise, but
somehow I overlooked that. My bad.

(I will still point out that neither the 'heads up' email, nor the release
email, points the consumers to any of those PRs though. I actually had to
go on github and search for the merge that seemed it could be the one.)

  -Joao

From Martin.Weiss at suse.com  Wed Apr 26 23:44:45 2017
From: Martin.Weiss at suse.com (Martin Weiss)
Date: Wed, 26 Apr 2017 23:44:45 -0600
Subject: [Deepsea-users] Using deepSea to deploy OSDs with Bluestore?
References: <590193690200001C0010149E@prv-mh.provo.novell.com>
Message-ID: <590193690200001C0010149E@prv-mh.provo.novell.com>

Hi *,

Is this possible already?

In case yes - where can I specify which OSD should be "classic / XFS" and
which OSD should be Bluestore based?

I would also be interested if there are specific settings possible on how
Bluestore should be "created" (devices or partitions for data, metadata,
rocksdb or whatever makes sense in case of Bluestore ;-)).

Thanks,
Martin

From jfajerski at suse.com  Thu Apr 27 01:25:12 2017
From: jfajerski at suse.com (Jan Fajerski)
Date: Thu, 27 Apr 2017 09:25:12 +0200
Subject: [Deepsea-users] Using deepSea to deploy OSDs with Bluestore?
In-Reply-To: <590193690200001C0010149E@prv-mh.provo.novell.com>
References: <590193690200001C0010149E@prv-mh.provo.novell.com>
 <590193690200001C0010149E@prv-mh.provo.novell.com>
Message-ID: <20170427072511.gxxvf344mxnwom2o@jf_suse_laptop>

Hi Martin,
we are currently working on Bluestore, in the proposal process and in the
OSD deployment process. Some of the discussion leaked into the dmcrypt
thread: https://github.com/SUSE/DeepSea/issues/62

On Wed, Apr 26, 2017 at 11:44:45PM -0600, Martin Weiss wrote:
>Hi *,
>
>Is this possible already?
>
>In case yes - where can I specify which OSD should be "classic / XFS" and
>which OSD should be Bluestore based?
>
>I would also be interested if there are specific settings possible on how
>Bluestore should be "created" (devices or partitions for data, metadata,
>rocksdb or whatever makes sense in case of Bluestore ;-)).
>
>Thanks,
>Martin
>_______________________________________________
>Deepsea-users mailing list
>Deepsea-users at lists.suse.com
>http://lists.suse.com/mailman/listinfo/deepsea-users

--
Jan Fajerski
Engineer Enterprise Storage
SUSE Linux GmbH,
GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)

From vtheile at suse.com  Thu Apr 27 07:47:19 2017
From: vtheile at suse.com (Volker Theile)
Date: Thu, 27 Apr 2017 15:47:19 +0200
Subject: [Deepsea-users] Get rid of onlyif, is there a better solution?
Message-ID:

Hello all,

I'm currently improving the openATTIC orchestration in DeepSea, especially
the stage 5 state. There are a lot of onlyif requisites, e.g.

openattic nop:
  test.nop

{% if 'openattic' not in salt['pillar.get']('roles') %}

stop openattic-systemd:
  service.dead:
    - name: openattic-systemd
    - enable: False
    - onlyif:
      - "which oaconfig >/dev/null 2>&1"

remove openattic database:
  cmd.run:
    - names:
      - "su - postgres -c 'dropdb openattic'"
      - "su - postgres -c 'dropuser openattic'"
    - onlyif:
      - "which oaconfig >/dev/null 2>&1"

...

I also found this in ganesha and other states:

https://github.com/votdev/DeepSea/blob/wip-openattic/srv/salt/ceph/rescind/openattic/default.sls
https://github.com/SUSE/DeepSea/blob/master/srv/salt/ceph/rescind/ganesha/default.sls#L16
https://github.com/SUSE/DeepSea/blob/master/srv/salt/ceph/rescind/igw/lrbd/default.sls

All of the above examples can be broken down to something like "do this if
package xyz is installed". So I'm asking myself whether there is a better
solution. I have something in mind:

{% if 'openattic' in grains['pkg'] %}

stop openattic-systemd:
  service.dead:
    - name: openattic-systemd
    - enable: False

remove openattic database:
  cmd.run:
    - names:
      - "su - postgres -c 'dropdb openattic'"
      - "su - postgres -c 'dropuser openattic'"

{% endif %}

The grain 'pkg' must be populated via a custom grain module. Is this a good
idea? Are there any other solutions that are better?

I know there are discussions about something like the example below, which
also looks promising and much better than testing whether an executable or
something else exists:

stop openattic-systemd:
  service.dead:
    - name: openattic-systemd
    - enable: False
    - onlyif:
      - pkg.is_installed:
        - name: openattic

Sadly this seems not to be possible with Salt at the moment.

Please tell me your thoughts or give me a hint on how to solve this more
elegantly.

Regards
Volker

--
Volker Theile
Software Engineer | openATTIC
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
Phone: +49 173 5876879
E-Mail: vtheile at suse.com

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: OpenPGP digital signature
URL:

From jfajerski at suse.com  Fri Apr 28 09:42:02 2017
From: jfajerski at suse.com (Jan Fajerski)
Date: Fri, 28 Apr 2017 17:42:02 +0200
Subject: [Deepsea-users] New storage profile proposals
Message-ID: <20170428154201.rcbb63eingn3uvfb@jf_suse_laptop>

Hi *,

I have pushed the basic functionality of the new proposal process to this
branch: https://github.com/SUSE/DeepSea/tree/propose-custom-ratios. I would
like to encourage everyone to play with it a bit and share their
experience. Most of the implementation is in the proposal runner and can be
used without it interfering with the way things work currently. That being
said, this new proposal process is not fit for production usage just yet.

To get the new runner and module, run the following on the master:

git clone https://github.com/SUSE/deepsea
cd deepsea
git checkout propose-custom-ratios
make install
salt '*' state.apply ceph.sync

Now you can run 'salt-run proposal.peek' and the runner will return a
storage profile proposal. This proposal can be influenced by a number of
parameters. I have tried to document everything; the documentation can be
called up via 'salt-run proposal.help'. Bear in mind that this is still
very much under development, so the docs might be incomplete or just not
bring the point across very well.
Also I don't do a lot of parameter validation just yet.

After you have played around a bit with the peek method, you can write out
the proposal using 'salt-run proposal.populate'. This will write the
storage profile files to /srv/pillar/ceph/proposal/profile-default/*. The
proposal.populate method can be run multiple times. It will not overwrite
existing files, but with the help of the 'target' parameter you can create
proposals for groups of minions (or even single minions).

The idea is that the proposal runner is run multiple times to create
storage profiles for every kind of OSD the cluster will have. E.g. a
cluster might have some machines with NVMe drives and SSDs and some
machines with only spinners. To keep the runner fairly simple, such a setup
would be tackled by two runs of the proposal runner: one for the nvme/ssd
storage profiles and one for the standalone profiles.

I hope this all makes some kind of sense to anyone but me. I haven't yet
had a chance to gain some distance from the code. I have discussed quite a
few details with Eric, so he might be someone who can help get people
unstuck (sorry Eric ;), since I will be out the coming week.

Best,
Jan
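As a worked example of the two-run workflow Jan describes, the invocations
below show roughly how a mixed cluster might be handled. The keyword
arguments (data, journal, ratio, target) and the target globs are taken
from the descriptions in this thread and are assumptions to be checked
against 'salt-run proposal.help' rather than authoritative syntax:

    # First run: hosts with NVMe journals in front of SSD data drives.
    salt-run proposal.peek data=800 journal=400 ratio=5 target='ssd-host*'
    salt-run proposal.populate data=800 journal=400 ratio=5 target='ssd-host*'

    # Second run: spinner-only hosts become standalone OSDs.
    salt-run proposal.populate data=1000-4000 target='hdd-host*'

Each populate run writes its profile files under /srv/pillar/ceph/proposal/
without overwriting what an earlier run created, so the per-hardware-group
profiles accumulate into one cluster-wide proposal.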