From jfajerski at suse.com Mon Nov 6 13:55:57 2017 From: jfajerski at suse.com (Jan Fajerski) Date: Mon, 6 Nov 2017 21:55:57 +0100 Subject: [Deepsea-users] Deploying on 3 networks with DeepSea Message-ID: <20171106205437.hijzvrnq7goovvtx@jf_suse_laptop> Hi list, does the following scenario ring any alarm bells with anyone? The cluster has a salt-master, MONs and OSDs. All nodes have 3 networks: a management network and public- and cluster-network. All nodes have a hostname that resolves via DNS to the ip on the management network. The ips on cluster- and public-network don't correspond to DNS names. The MONs derive their IDs from the short hostname, which would resolve to the ip addresses on the management network if one would attempt that. The cluster deployed and operates fine. Is there any conceivable disadvantage with this setup? For example: Should there be dns names for the public-network ip addresses? Does DeepSea have any issue (or will have issues in the future) with running on the management network (the salt master has access to the public-network)? Imho this setup should work fine, but I might be missing something. Best, Jan -- Jan Fajerski Engineer Enterprise Storage SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) From rdias at suse.com Mon Nov 6 14:00:38 2017 From: rdias at suse.com (Ricardo Dias) Date: Mon, 6 Nov 2017 14:00:38 -0700 Subject: [Deepsea-users] Deploying on 3 networks with DeepSea In-Reply-To: <20171106205437.hijzvrnq7goovvtx@jf_suse_laptop> References: <20171106205437.hijzvrnq7goovvtx@jf_suse_laptop> Message-ID: > On 6 Nov 2017, at 13:55, Jan Fajerski wrote: > > Hi list, > does the following scenario ring any alarm bells with anyone? > The cluster has a salt-master, MONs and OSDs. All nodes have 3 networks: a management network and public- and cluster-network. All nodes have a hostname that resolves via DNS to the ip on the management network. 
The ips on cluster- and public-network don't correspond to DNS names. > The MONs derive their IDs from the short hostname, which would resolve to the ip addresses on the management network if one would attempt that. > The cluster deployed and operates fine. Is there any conceivable disadvantage with this setup? For example: > Should there be dns names for the public-network ip addresses? > Does DeepSea have any issue (or will have issues in the future) with running on the management network (the salt master has access to the public-network)? The only thing I?m seeing is that the public network might not be used at all because the MONs will bind to the management network, and the OSDs will connect to the MONs using the management network. If the management network has a lower bandwidth than the public network, then it might be a problem. > > Imho this setup should work fine, but I might be missing something. > Best, > Jan > > -- > Jan Fajerski > Engineer Enterprise Storage > SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, > HRB 21284 (AG N?rnberg) > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 529 bytes Desc: Message signed with OpenPGP URL: From jfajerski at suse.com Mon Nov 6 14:17:25 2017 From: jfajerski at suse.com (Jan Fajerski) Date: Mon, 6 Nov 2017 22:17:25 +0100 Subject: [Deepsea-users] [ses-users] Deploying on 3 networks with DeepSea In-Reply-To: References: <20171106205437.hijzvrnq7goovvtx@jf_suse_laptop> Message-ID: <20171106211724.45l5i2ewefyxxqbr@jf_suse_laptop> On Mon, Nov 06, 2017 at 02:00:38PM -0700, Ricardo Dias wrote: > > >> On 6 Nov 2017, at 13:55, Jan Fajerski wrote: >> >> Hi list, >> does the following scenario ring any alarm bells with anyone? 
>> The cluster has a salt-master, MONs and OSDs. All nodes have 3 networks: a management network and public- and cluster-network. All nodes have a hostname that resolves via DNS to the ip on the management network. The ips on cluster- and public-network don't correspond to DNS names. >> The MONs derive their IDs from the short hostname, which would resolve to the ip addresses on the management network if one would attempt that. >> The cluster deployed and operates fine. Is there any conceivable disadvantage with this setup? For example: >> Should there be dns names for the public-network ip addresses? >> Does DeepSea have any issue (or will have issues in the future) with running on the management network (the salt master has access to the public-network)? > >The only thing I?m seeing is that the public network might not be used at all because the MONs will bind to the management network, and the OSDs will connect to the MONs using the management network. > >If the management network has a lower bandwidth than the public network, then it might be a problem. ceph.conf contains the mon_host settings with the appropriate ip addresses on the public network. So I'd assume the MONs use the correct network. Iiuc the mon_initial_members list contains only a list of MON ids and has nothing to do with networking. > >> >> Imho this setup should work fine, but I might be missing something. 
>> Best, >> Jan >> >> -- >> Jan Fajerski >> Engineer Enterprise Storage >> SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, >> HRB 21284 (AG N?rnberg) >> _______________________________________________ >> Deepsea-users mailing list >> Deepsea-users at lists.suse.com >> http://lists.suse.com/mailman/listinfo/deepsea-users > -- Jan Fajerski Engineer Enterprise Storage SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, HRB 21284 (AG N?rnberg) From Martin.Weiss at suse.com Mon Nov 6 14:30:42 2017 From: Martin.Weiss at suse.com (Martin Weiss) Date: Mon, 06 Nov 2017 14:30:42 -0700 Subject: [Deepsea-users] Antw: [ses-users] Deploying on 3 networks with DeepSea In-Reply-To: <20171106205437.hijzvrnq7goovvtx@jf_suse_laptop> References: <20171106205437.hijzvrnq7goovvtx@jf_suse_laptop> Message-ID: <5A00D4820200001C0030261A@prv-mh.provo.novell.com> Do we know if any SES/ceph service or installation / configuration routine relies on "proper" DNS setup (forward and reverse lookup for public network addresses of all servers in the cluster) or does ceph use IP addresses only for sure? What can / will / might happen in case the "mon name" resolves via DNS to the admin network address and not the public network address? Thanks Martin Hi list, does the following scenario ring any alarm bells with anyone? The cluster has a salt-master, MONs and OSDs. All nodes have 3 networks: a management network and public- and cluster-network. All nodes have a hostname that resolves via DNS to the ip on the management network. The ips on cluster- and public-network don't correspond to DNS names. The MONs derive their IDs from the short hostname, which would resolve to the ip addresses on the management network if one would attempt that. The cluster deployed and operates fine. Is there any conceivable disadvantage with this setup? For example: Should there be dns names for the public-network ip addresses? 
Does DeepSea have any issue (or will have issues in the future) with running on the management network (the salt master has access to the public-network)? Imho this setup should work fine, but I might be missing something. Best, Jan -- Jan Fajerski Engineer Enterprise Storage SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, HRB 21284 (AG N?rnberg) -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdias at suse.com Mon Nov 6 14:34:17 2017 From: rdias at suse.com (Ricardo Dias) Date: Mon, 6 Nov 2017 14:34:17 -0700 Subject: [Deepsea-users] [ses-users] Deploying on 3 networks with DeepSea In-Reply-To: <20171106211724.45l5i2ewefyxxqbr@jf_suse_laptop> References: <20171106205437.hijzvrnq7goovvtx@jf_suse_laptop> <20171106211724.45l5i2ewefyxxqbr@jf_suse_laptop> Message-ID: <9003CFE9-FD39-4708-908C-021BB37FDD37@suse.com> > On 6 Nov 2017, at 14:17, Jan Fajerski wrote: > > On Mon, Nov 06, 2017 at 02:00:38PM -0700, Ricardo Dias wrote: >> >> >>> On 6 Nov 2017, at 13:55, Jan Fajerski wrote: >>> >>> Hi list, >>> does the following scenario ring any alarm bells with anyone? >>> The cluster has a salt-master, MONs and OSDs. All nodes have 3 networks: a management network and public- and cluster-network. All nodes have a hostname that resolves via DNS to the ip on the management network. The ips on cluster- and public-network don't correspond to DNS names. >>> The MONs derive their IDs from the short hostname, which would resolve to the ip addresses on the management network if one would attempt that. >>> The cluster deployed and operates fine. Is there any conceivable disadvantage with this setup? For example: >>> Should there be dns names for the public-network ip addresses? >>> Does DeepSea have any issue (or will have issues in the future) with running on the management network (the salt master has access to the public-network)? 
>> >> The only thing I'm seeing is that the public network might not be used at all because the MONs will bind to the management network, and the OSDs will connect to the MONs using the management network. >> >> If the management network has a lower bandwidth than the public network, then it might be a problem. > ceph.conf contains the mon_host settings with the appropriate ip addresses on the public network. So I'd assume the MONs use the correct network. IIUC the mon_initial_members list contains only a list of MON ids and has nothing to do with networking. Was ceph.conf generated by DeepSea or was it tweaked manually? I'm asking this because with the network setup you described, I was expecting DeepSea to fill the public cluster setting in ceph.conf with the management network. >> >>> >>> Imho this setup should work fine, but I might be missing something. >>> Best, >>> Jan >>> >>> -- >>> Jan Fajerski >>> Engineer Enterprise Storage >>> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, >>> HRB 21284 (AG Nürnberg) >>> _______________________________________________ >>> Deepsea-users mailing list >>> Deepsea-users at lists.suse.com >>> http://lists.suse.com/mailman/listinfo/deepsea-users >> > > > > -- > Jan Fajerski > Engineer Enterprise Storage > SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, > HRB 21284 (AG Nürnberg) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 529 bytes Desc: Message signed with OpenPGP URL: From jfajerski at suse.com Mon Nov 6 14:59:23 2017 From: jfajerski at suse.com (Jan Fajerski) Date: Mon, 6 Nov 2017 22:59:23 +0100 Subject: [Deepsea-users] [ses-users] Deploying on 3 networks with DeepSea In-Reply-To: <9003CFE9-FD39-4708-908C-021BB37FDD37@suse.com> References: <20171106205437.hijzvrnq7goovvtx@jf_suse_laptop> <20171106211724.45l5i2ewefyxxqbr@jf_suse_laptop> <9003CFE9-FD39-4708-908C-021BB37FDD37@suse.com> Message-ID: <20171106215923.2ev573rdyel7qpfq@jf_suse_laptop> On Mon, Nov 06, 2017 at 02:34:17PM -0700, Ricardo Dias wrote: > > >> On 6 Nov 2017, at 14:17, Jan Fajerski wrote: >> >> On Mon, Nov 06, 2017 at 02:00:38PM -0700, Ricardo Dias wrote: >>> >>> >>>> On 6 Nov 2017, at 13:55, Jan Fajerski wrote: >>>> >>>> Hi list, >>>> does the following scenario ring any alarm bells with anyone? >>>> The cluster has a salt-master, MONs and OSDs. All nodes have 3 networks: a management network and public- and cluster-network. All nodes have a hostname that resolves via DNS to the ip on the management network. The ips on cluster- and public-network don't correspond to DNS names. >>>> The MONs derive their IDs from the short hostname, which would resolve to the ip addresses on the management network if one would attempt that. >>>> The cluster deployed and operates fine. Is there any conceivable disadvantage with this setup? For example: >>>> Should there be dns names for the public-network ip addresses? >>>> Does DeepSea have any issue (or will have issues in the future) with running on the management network (the salt master has access to the public-network)? >>> >>> The only thing I?m seeing is that the public network might not be used at all because the MONs will bind to the management network, and the OSDs will connect to the MONs using the management network. 
>>> >>> If the management network has a lower bandwidth than the public network, then it might be a problem. >> ceph.conf contains the mon_host settings with the appropriate ip addresses on the public network. So I'd assume the MONs use the correct network. Iiuc the mon_initial_members list contains only a list of MON ids and has nothing to do with networking. > >Was ceph.conf generated by DeepSea or it was tweaked manually? >I?m asking this because with the network setup you described, I was expecting DeepSea to fill the public cluster setting in ceph.conf with the management network. ceph.conf was generated by DeepSea but the public and cluster network was set to the correct network after stage 1 in the pillar. I.e. DeepSea generated the correct ceph.conf after being told what networks to use. > >>> >>>> >>>> Imho this setup should work fine, but I might be missing something. >>>> Best, >>>> Jan >>>> >>>> -- >>>> Jan Fajerski >>>> Engineer Enterprise Storage >>>> SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, >>>> HRB 21284 (AG N?rnberg) >>>> _______________________________________________ >>>> Deepsea-users mailing list >>>> Deepsea-users at lists.suse.com >>>> http://lists.suse.com/mailman/listinfo/deepsea-users >>> >> >> >> >> -- >> Jan Fajerski >> Engineer Enterprise Storage >> SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, >> HRB 21284 (AG N?rnberg) > -- Jan Fajerski Engineer Enterprise Storage SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, HRB 21284 (AG N?rnberg) From rdias at suse.com Mon Nov 6 15:02:21 2017 From: rdias at suse.com (Ricardo Dias) Date: Mon, 6 Nov 2017 15:02:21 -0700 Subject: [Deepsea-users] [ses-users] Deploying on 3 networks with DeepSea In-Reply-To: <20171106215923.2ev573rdyel7qpfq@jf_suse_laptop> References: <20171106205437.hijzvrnq7goovvtx@jf_suse_laptop> <20171106211724.45l5i2ewefyxxqbr@jf_suse_laptop> <9003CFE9-FD39-4708-908C-021BB37FDD37@suse.com> 
<20171106215923.2ev573rdyel7qpfq@jf_suse_laptop> Message-ID: > On 6 Nov 2017, at 14:59, Jan Fajerski wrote: > > On Mon, Nov 06, 2017 at 02:34:17PM -0700, Ricardo Dias wrote: >> >> >>> On 6 Nov 2017, at 14:17, Jan Fajerski wrote: >>> >>> On Mon, Nov 06, 2017 at 02:00:38PM -0700, Ricardo Dias wrote: >>>> >>>> >>>>> On 6 Nov 2017, at 13:55, Jan Fajerski wrote: >>>>> >>>>> Hi list, >>>>> does the following scenario ring any alarm bells with anyone? >>>>> The cluster has a salt-master, MONs and OSDs. All nodes have 3 networks: a management network and public- and cluster-network. All nodes have a hostname that resolves via DNS to the ip on the management network. The ips on cluster- and public-network don't correspond to DNS names. >>>>> The MONs derive their IDs from the short hostname, which would resolve to the ip addresses on the management network if one would attempt that. >>>>> The cluster deployed and operates fine. Is there any conceivable disadvantage with this setup? For example: >>>>> Should there be dns names for the public-network ip addresses? >>>>> Does DeepSea have any issue (or will have issues in the future) with running on the management network (the salt master has access to the public-network)? >>>> >>>> The only thing I?m seeing is that the public network might not be used at all because the MONs will bind to the management network, and the OSDs will connect to the MONs using the management network. >>>> >>>> If the management network has a lower bandwidth than the public network, then it might be a problem. >>> ceph.conf contains the mon_host settings with the appropriate ip addresses on the public network. So I'd assume the MONs use the correct network. Iiuc the mon_initial_members list contains only a list of MON ids and has nothing to do with networking. >> >> Was ceph.conf generated by DeepSea or it was tweaked manually? 
>> I?m asking this because with the network setup you described, I was expecting DeepSea to fill the public cluster setting in ceph.conf with the management network. > ceph.conf was generated by DeepSea but the public and cluster network was set to the correct network after stage 1 in the pillar. I.e. DeepSea generated the correct ceph.conf after being told what networks to use. Ah, in that case everything should be fine from what I understand. >> >>>> >>>>> >>>>> Imho this setup should work fine, but I might be missing something. >>>>> Best, >>>>> Jan >>>>> >>>>> -- >>>>> Jan Fajerski >>>>> Engineer Enterprise Storage >>>>> SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, >>>>> HRB 21284 (AG N?rnberg) >>>>> _______________________________________________ >>>>> Deepsea-users mailing list >>>>> Deepsea-users at lists.suse.com >>>>> http://lists.suse.com/mailman/listinfo/deepsea-users >>>> >>> >>> >>> >>> -- >>> Jan Fajerski >>> Engineer Enterprise Storage >>> SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, >>> HRB 21284 (AG N?rnberg) >> > > > > -- > Jan Fajerski > Engineer Enterprise Storage > SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, > HRB 21284 (AG N?rnberg) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 529 bytes Desc: Message signed with OpenPGP URL: From Robert.Grosschopff at suse.com Wed Nov 8 03:02:19 2017 From: Robert.Grosschopff at suse.com (Robert Grosschopff) Date: Wed, 8 Nov 2017 10:02:19 +0000 Subject: [Deepsea-users] Deepsea "Missing Storage Attribute" Message-ID: <230308B6-0AD4-4DCF-9543-3961F8B4EC31@suse.com> Hi, a partner is about to install SES 5 to conduct a PoC. After they noticed that SSDs are treated differently i.e. 
not used as OSDs they purged the cluster:
- salt-run disengage.safety
- salt-run state.orch ceph.purge

After going through salt-run proposal.help they figured out that they will have to set "standalone=True". Running the deepsea stages again gives them a "Missing Storage Attribute" error. This happens regardless of whether they use the new profile or the generated profile-default. Has anybody come across this error? Any hints?

Robert

From Robert.Grosschopff at suse.com Wed Nov 8 03:22:09 2017 From: Robert.Grosschopff at suse.com (Robert Grosschopff) Date: Wed, 8 Nov 2017 10:22:09 +0000 Subject: [Deepsea-users] Deepsea "Missing Storage Attribute" Message-ID: Please ignore. Problem solved. Typing error in policy.cfg -----Original Message----- From: on behalf of Robert Grosschopff Reply-To: Discussions about the DeepSea management framework for Ceph Date: Wednesday, 8. November 2017 at 11:02 To: "deepsea-users at lists.suse.com" Subject: [Deepsea-users] Deepsea "Missing Storage Attribute" Hi, a partner is about to install SES 5 to conduct a PoC. After they noticed that SSDs are treated differently i.e. not used as OSDs they purged the cluster:
- salt-run disengage.safety
- salt-run state.orch ceph.purge

After going through salt-run proposal.help they figured out that they will have to set "standalone=True". Running the deepsea stages again gives them a "Missing Storage Attribute" error. This happens regardless of whether they use the new profile or the generated profile-default. Has anybody come across this error? Any hints? 
Robert _______________________________________________ Deepsea-users mailing list Deepsea-users at lists.suse.com http://lists.suse.com/mailman/listinfo/deepsea-users From Robert.Grosschopff at suse.com Wed Nov 8 04:08:27 2017 From: Robert.Grosschopff at suse.com (Robert Grosschopff) Date: Wed, 8 Nov 2017 11:08:27 +0000 Subject: [Deepsea-users] Deepsea "Missing Storage Attribute" In-Reply-To: <2176348.mGWZHbIAMT@fury.home> References: <230308B6-0AD4-4DCF-9543-3961F8B4EC31@suse.com> <2176348.mGWZHbIAMT@fury.home> Message-ID: Hi Eric, partner had "fat fingers" and added an additional letter to one of the minion names. Thanks Robert -----Original Message----- From: on behalf of Eric Jackson Reply-To: Discussions about the DeepSea management framework for Ceph Date: Wednesday, 8. November 2017 at 11:48 To: "deepsea-users at lists.suse.com" Subject: Re: [Deepsea-users] Deepsea "Missing Storage Attribute" Hi Robert, What profile lines do you have in your policy.cfg? I assume that the error message contains the names of the minions and the pathnames to check. If you find the issue, rerun Stage 2 and the validation for Stage 3 will succeed. Eric On Wednesday, November 08, 2017 10:02:19 AM Robert Grosschopff wrote: > Hi, > > a partner is about to install SES 5 to conduct a PoC. > > After they noticed that SSDs are treated differently i.e. not used as OSDs > they purged the cluster - salt-run disengage.safety > - salt-run state.orch ceph.purge > > After going through salt-run proposal.help they figured out that they will > have to set "standalone=True". > > Running the deepsea stages again gives them an "Missing Storage Attribute" > error. This happens regardless whether they use the new profile or the > generated profile-default. > > Has anybody come across this error ? > Any hints ? 
> > Robert > > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users From lgrimmer at suse.com Thu Nov 16 10:48:12 2017 From: lgrimmer at suse.com (Lenz Grimmer) Date: Thu, 16 Nov 2017 18:48:12 +0100 Subject: [Deepsea-users] DeepSea on CentOS 7 Message-ID: <6af86462-9e94-5474-8c15-972831a4b7de@suse.com> Hi, thanks to Ricardo Dias, we now have DeepSea running on CentOS 7 - thank you! Demo: https://asciinema.org/a/147812 RPM packages: https://copr.fedorainfracloud.org/coprs/rjdias/home/packages/ Let us know how it works for you. Lenz -- SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany) GF:Felix Imend?rffer,Jane Smithard,Graham Norton,HRB 21284 (AG N?rnberg) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP digital signature URL: From joao at suse.de Thu Nov 16 10:52:28 2017 From: joao at suse.de (Joao Eduardo Luis) Date: Thu, 16 Nov 2017 17:52:28 +0000 Subject: [Deepsea-users] DeepSea on CentOS 7 In-Reply-To: <6af86462-9e94-5474-8c15-972831a4b7de@suse.com> References: <6af86462-9e94-5474-8c15-972831a4b7de@suse.com> Message-ID: On 11/16/2017 05:48 PM, Lenz Grimmer wrote: > Hi, > > thanks to Ricardo Dias, we now have DeepSea running on CentOS 7 - thank you! > > Demo: https://asciinema.org/a/147812 > > RPM packages: https://copr.fedorainfracloud.org/coprs/rjdias/home/packages/ > > Let us know how it works for you. Congrats! 
This should definitely also be sent to ceph-users :) -Joao From lgrimmer at suse.com Fri Nov 17 01:23:59 2017 From: lgrimmer at suse.com (Lenz Grimmer) Date: Fri, 17 Nov 2017 09:23:59 +0100 Subject: [Deepsea-users] DeepSea on CentOS 7 In-Reply-To: References: <6af86462-9e94-5474-8c15-972831a4b7de@suse.com> Message-ID: <55018cd1-b5b6-cdd1-1805-df9b6ca74ee9@suse.com> On 11/16/2017 06:52 PM, Joao Eduardo Luis wrote: > This should definitely also be sent to ceph-users :) Yes, I'll take care of that right away. And openattic-users, too! Lenz -- SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany) GF:Felix Imend?rffer,Jane Smithard,Graham Norton,HRB 21284 (AG N?rnberg) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP digital signature URL: From rdias at suse.com Fri Nov 17 08:20:05 2017 From: rdias at suse.com (Ricardo Dias) Date: Fri, 17 Nov 2017 15:20:05 +0000 Subject: [Deepsea-users] DeepSea on CentOS 7 In-Reply-To: <6af86462-9e94-5474-8c15-972831a4b7de@suse.com> References: <6af86462-9e94-5474-8c15-972831a4b7de@suse.com> Message-ID: <1510932005.28420.32.camel@suse.com> Hi, Just a few more details about this... The support for CentOS 7 is still experimental and it's not included yet in the upstream project. The sources can be found in https://github.com/rjfd/DeepSea/tree/wip-centos I've prepared a Vagrantfile to help with the testing. This vagrant setup provisions 4 VMs with CentOS 7 and installs the necessary repos for installing DeepSea. It also installs the latest Salt version, deploys the salt-minions in all VMs, and installs the DeepSea RPM that is available in: https://copr.fedorainfracloud.org/coprs/rjdias/home/packages/ You can check the provision steps inside the Vagrantfile. 
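The Vagrant setup reads its options from a settings.yml file created by copying the settings.sample.yml shipped in the repo. A minimal sketch showing the one option discussed in this thread — any other keys and their defaults are assumptions to be checked against settings.sample.yml:

```yaml
# settings.yml -- minimal sketch; start from settings.sample.yml and adjust.
# When deploy_ceph is false (the default), vagrant only provisions the VMs
# and you run the DeepSea stages by hand as shown in the walkthrough.
deploy_ceph: false
```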
The Vagrantfile is available in the following repo: https://github.com/rjfd/vagrant-deepsea-centos.git

Clone the above repo, and then create a settings.yml file based on the settings.sample.yml that is in the repo. In settings.yml you can specify whether you want vagrant to also deploy the Ceph cluster using DeepSea or whether you want to run those steps manually. The option is called "deploy_ceph". By default, vagrant will not deploy the Ceph cluster automatically (what would be the fun of that!). When you are ready, run:

$ vagrant up

After vagrant is finished with the provisioning of the VMs, access the VM called "salt":

$ vagrant ssh salt

Switch to root:

$ sudo su -

Then run the following instructions to deploy a Ceph cluster and additional services like RGW, NFS, etc.:

$ deepsea stage run ceph.stage.prep
$ deepsea stage run ceph.stage.discovery
# copy the auto-generated policy.cfg that vagrant stored in /tmp
# and check its contents to understand which services are installed
# on which nodes
$ cp /tmp/policy.cfg /srv/pillar/ceph/proposals
$ deepsea stage run ceph.stage.configure
# the env variable DEV_ENV is necessary because we are deploying
# Ceph on fewer than 4 storage nodes. In this setup only node1, node2,
# and node3 will store OSDs.
$ DEV_ENV=true deepsea stage run ceph.stage.deploy
$ DEV_ENV=true deepsea stage run ceph.stage.service

After the above instructions you can check the status of your cluster:

$ ceph -s

I've prepared another demo where I run exactly the above steps. The demo is available here: https://asciinema.org/a/147996

Thanks, Ricardo Dias

On Thu, 2017-11-16 at 18:48 +0100, Lenz Grimmer wrote: > Hi, > > thanks to Ricardo Dias, we now have DeepSea running on CentOS 7 - > thank you! > > Demo: https://asciinema.org/a/147812 > > RPM packages: https://copr.fedorainfracloud.org/coprs/rjdias/home/packages/ > > Let us know how it works for you. 
> > Lenz > > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users -- Ricardo Dias Senior Software Engineer - Storage Team SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, HRB 21284 (AG N?rnberg) From ncutler at suse.cz Fri Nov 24 00:58:10 2017 From: ncutler at suse.cz (Nathan Cutler) Date: Fri, 24 Nov 2017 08:58:10 +0100 Subject: [Deepsea-users] OBS project Message-ID: <13ad464f-4dfb-8c07-4951-d761aec70d50@suse.cz> Now that Mimic development has started, I have created a filesystems:ceph:mimic project in OBS and will soon have a successful ceph build there. Since Factory/Tumbleweed is supposed to be more-or-less on the cutting edge, I would like to migrate filesystems:ceph/ceph to the new Mimic build ASAP. Will this have any adverse effect on the DeepSea build in filesystems:ceph/deepsea? Nathan From jfajerski at suse.com Fri Nov 24 01:34:51 2017 From: jfajerski at suse.com (Jan Fajerski) Date: Fri, 24 Nov 2017 09:34:51 +0100 Subject: [Deepsea-users] OBS project In-Reply-To: <13ad464f-4dfb-8c07-4951-d761aec70d50@suse.cz> References: <13ad464f-4dfb-8c07-4951-d761aec70d50@suse.cz> Message-ID: <20171124083451.dwau3d6325ztvhjr@jf_suse_laptop> On Fri, Nov 24, 2017 at 08:58:10AM +0100, Nathan Cutler wrote: >Now that Mimic development has started, I have created a >filesystems:ceph:mimic project in OBS and will soon have a successful >ceph build there. > >Since Factory/Tumbleweed is supposed to be more-or-less on the cutting >edge, I would like to migrate filesystems:ceph/ceph to the new Mimic >build ASAP. Will this have any adverse effect on the DeepSea build in >filesystems:ceph/deepsea? For the build: no I don't think so. Though I guess installation from that one project won't work anymore? Wasn't the approach so far that filesystems:ceph mirrored the current latest version (now to be filesystems:ceph:mimic)? Is this going to change now? 
Jan > >Nathan >_______________________________________________ >Deepsea-users mailing list >Deepsea-users at lists.suse.com >http://lists.suse.com/mailman/listinfo/deepsea-users From ncutler at suse.cz Fri Nov 24 01:46:12 2017 From: ncutler at suse.cz (Nathan Cutler) Date: Fri, 24 Nov 2017 09:46:12 +0100 Subject: [Deepsea-users] OBS project In-Reply-To: <20171124083451.dwau3d6325ztvhjr@jf_suse_laptop> References: <13ad464f-4dfb-8c07-4951-d761aec70d50@suse.cz> <20171124083451.dwau3d6325ztvhjr@jf_suse_laptop> Message-ID: <294f8f32-f8db-9fdb-b2f2-1e09140faf47@suse.cz> >> Since Factory/Tumbleweed is supposed to be more-or-less on the cutting >> edge, I would like to migrate filesystems:ceph/ceph to the new Mimic >> build ASAP. Will this have any adverse effect on the DeepSea build in >> filesystems:ceph/deepsea? > For the build: no I don't think so. Though I guess installation from > that one project won't work anymore? Right. This change will cause problems for anyone who might be relying on filesystems:ceph containing RPMs built from the "stable" branch of deepsea (I think it's called "SES5") together with corresponding ceph RPMs. After I make this change, the ceph build in filesystems:ceph will be intended to work with deepsea RPMs built from the master branch. > Wasn't the approach so far that filesystems:ceph mirrored the current > latest version (now to be filesystems:ceph:mimic)? Is this going to > change now? That has always been the approach, since filesystems:ceph is the devel project for Factory/Tumbleweed. Nathan From rdias at suse.com Fri Nov 24 01:47:29 2017 From: rdias at suse.com (Ricardo Dias) Date: Fri, 24 Nov 2017 08:47:29 +0000 Subject: [Deepsea-users] Backporting PRs Message-ID: <1511513249.28420.84.camel@suse.com> Hi, I've looked into the number of PRs that have the "backport" label in github and I found that there are a lot of them. Do all of those PRs really need to be backported? 
Can we have a meeting to review all the PRs that are currently labeled to be backported and make sure that we are not backporting more than we should? Thanks, -- Ricardo Dias Senior Software Engineer - Storage Team SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) From jfajerski at suse.com Fri Nov 24 02:00:18 2017 From: jfajerski at suse.com (Jan Fajerski) Date: Fri, 24 Nov 2017 10:00:18 +0100 Subject: [Deepsea-users] OBS project In-Reply-To: <294f8f32-f8db-9fdb-b2f2-1e09140faf47@suse.cz> References: <13ad464f-4dfb-8c07-4951-d761aec70d50@suse.cz> <20171124083451.dwau3d6325ztvhjr@jf_suse_laptop> <294f8f32-f8db-9fdb-b2f2-1e09140faf47@suse.cz> Message-ID: <20171124090018.4uujpoggxk7gmxga@jf_suse_laptop> On Fri, Nov 24, 2017 at 09:46:12AM +0100, Nathan Cutler wrote: >>>Since Factory/Tumbleweed is supposed to be more-or-less on the >>>cutting edge, I would like to migrate filesystems:ceph/ceph to the >>>new Mimic build ASAP. Will this have any adverse effect on the >>>DeepSea build in >>>filesystems:ceph/deepsea? >>For the build: no I don't think so. Though I guess installation from >>that one project won't work anymore? > >Right. This change will cause problems for anyone who might be relying >on filesystems:ceph containing RPMs built from the "stable" branch of >deepsea (I think it's called "SES5") together with corresponding ceph >RPMs. After I make this change, the ceph build in filesystems:ceph >will be intended to work with deepsea RPMs built from the master >branch. Right. DeepSea's SES5 branch should move to filesystems:ceph:luminous of course. I'll bring this up in next week's standup. Until then I think the SES5 branch should work reasonably with the current ceph devel project. > >>Wasn't the approach so far that filesystems:ceph mirrored the >>current latest version (now to be filesystems:ceph:mimic)? Is this >>going to change now? 
> >That has always been the approach, since filesystems:ceph is the devel >project for Factory/Tumbleweed. > >Nathan >_______________________________________________ >Deepsea-users mailing list >Deepsea-users at lists.suse.com >http://lists.suse.com/mailman/listinfo/deepsea-users -- Jan Fajerski Engineer Enterprise Storage SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) From rdias at suse.com Mon Nov 27 02:39:02 2017 From: rdias at suse.com (Ricardo Dias) Date: Mon, 27 Nov 2017 09:39:02 +0000 Subject: [Deepsea-users] DeepSea version number management and packaging Message-ID: <1511775542.28420.130.camel@suse.com> Hi list, I opened a PR (https://github.com/SUSE/DeepSea/pull/817) with a new proposal for managing the version number in deepsea. The reasons behind this proposal are: - support for retrieving the deepsea version number through salt, or the deepsea-cli, independently of the installation method - support for different distros (non-RPM-based ones, for instance) With the above PR, the version number is moved from the "deepsea.spec" file to its own file called "version.txt". This "version.txt" is the only location where the version is stored inside the git repo. The "deepsea.spec" file was renamed to "deepsea.spec.in" and works like a template for the generation of the concrete "deepsea.spec" file. The generation of the concrete "deepsea.spec" file is done along with the tarball generation, implemented as a Makefile target. The tarball generation in the Makefile computes the final version number as the composition of: - the string inside version.txt - the OFFSET that corresponds to the number of commits since the last commit that changed the "version.txt" file - the HEAD commit hash The final version number takes the form: VERSION+git.OFFSET.HASH The final tarball includes the final version number in version.txt and the concrete "deepsea.spec" file with the final version number as well. 
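The version computation described above could be sketched like this in Python — a hypothetical illustration of the Makefile logic, not the actual DeepSea implementation (function names are mine):

```python
import subprocess

def compose_version(base, offset, sha):
    """Build the VERSION+git.OFFSET.HASH string, e.g. 0.8.4+git.14.b1a7f7f."""
    return "{0}+git.{1}.{2}".format(base, offset, sha)

def git_version(version_file="version.txt"):
    """Derive the full version from a git checkout (requires git in PATH)."""
    def git(*args):
        return subprocess.check_output(("git",) + args).decode().strip()
    base = open(version_file).read().strip()
    # last commit that touched version.txt, i.e. the last version bump
    last_bump = git("rev-list", "-1", "HEAD", "--", version_file)
    # number of commits made since that bump
    offset = git("rev-list", "--count", last_bump + "..HEAD")
    sha = git("rev-parse", "--short", "HEAD")
    return compose_version(base, offset, sha)

print(compose_version("0.8.4", 14, "b1a7f7f"))  # 0.8.4+git.14.b1a7f7f
```

`compose_version` is pure string assembly; only `git_version` needs a repository checkout to run.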
The resulting tarball can be used to package deepsea in different Linux distributions. To help with the packaging of DeepSea in OBS, I also replaced the _service-based process with a bash script called "checkin.sh" that knows how to clone the DeepSea git repo, generate the tarball using "make tarball", and extract the "deepsea.spec" from the tarball. I branched the deepsea package in OBS where I made the above changes. The package URL is: https://build.opensuse.org/package/show/home:rjdias:branches:filesystems:ceph/deepsea There is a file README.txt with the instructions on how to use "checkin.sh". In summary, with all of the above in place, the procedure to release a new DeepSea version and package will be: 1) commit to git a change to the version number in version.txt (version bump) 2) check out the OBS package: "osc co project_name deepsea" 3) run "checkin.sh"; you can pass an optional argument to check in the code from a different repo or branch 4) run "osc vc" to update the "deepsea.changes" 5) run "osc ci --noservice" to commit the OBS package files into OBS The important takeaway is that the tarball generation and version management are independent of the Linux distro, and therefore will not limit our work when supporting a new distro in the future. Thoughts from anyone? Thanks, -- Ricardo Dias Senior Software Engineer - Storage Team SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) From jschmid at suse.de Mon Nov 27 06:27:27 2017 From: jschmid at suse.de (Joshua Schmid) Date: Mon, 27 Nov 2017 14:27:27 +0100 Subject: [Deepsea-users] Backporting PRs In-Reply-To: <1511513249.28420.84.camel@suse.com> References: <1511513249.28420.84.camel@suse.com> Message-ID: <20171127132727.4627rgbak4yicmkc@g127.suse.de> Ricardo Dias wrote on Fri, 24. Nov 08:47: > Hi, Hi, sorry for the late reply, I was completely blocked the last couple of days... 
> > I've looked into the number of PRs that have the "backport" label in > github and I found that there are a lot of them. > > Do all of those PRs really need to be backported? > > Can we have a meeting to review all the PRs that are currently labeled > to be backported and make sure that we are not backporting more than we > should? As the SES5 branch maintainer, I'd be up for a discussion/review on what to backport. I'd propose we extend the Deepsea standup tomorrow? Anyone is welcome to join. > > Thanks, > -- > Ricardo Dias > Senior Software Engineer - Storage Team > SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, > HRB 21284 > (AG N?rnberg) > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users -- Joshua Schmid Software Engineer SUSE Enterprise Storage From rdias at suse.com Mon Nov 27 06:30:19 2017 From: rdias at suse.com (Ricardo Dias) Date: Mon, 27 Nov 2017 13:30:19 +0000 Subject: [Deepsea-users] Backporting PRs In-Reply-To: <20171127132727.4627rgbak4yicmkc@g127.suse.de> References: <1511513249.28420.84.camel@suse.com> <20171127132727.4627rgbak4yicmkc@g127.suse.de> Message-ID: <1511789419.28420.133.camel@suse.com> On Mon, 2017-11-27 at 14:27 +0100, Joshua Schmid wrote: > Ricardo Dias wrote on Fri, 24. Nov 08:47: > > Hi, > > Hi, > sorry for the late reply, I was completely blocked the last couple of > days.. > > > > I've looked into the number of PRs that have the "backport" label > > in > > github and I found that there are a lot of them. > > > > Do all of those PRs really need to be backported? > > > > Can we have a meeting to review all the PRs that are currently > > labeled > > to be backported and make sure that we are not backporting more > > than we > > should? > > As the SES5 branch maintainer, I'd be up for a discussion/review on > what to backport. > I'd propose we extend the Deepsea standup tomorrow? 
Anyone is welcome > to join. > Sounds good to me! > > > > Thanks, > > -- > > Ricardo Dias > > Senior Software Engineer - Storage Team > > SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham > > Norton, > > HRB 21284 > > (AG N?rnberg) > > _______________________________________________ > > Deepsea-users mailing list > > Deepsea-users at lists.suse.com > > http://lists.suse.com/mailman/listinfo/deepsea-users > > -- Ricardo Dias Senior Software Engineer - Storage Team SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, HRB 21284 (AG N?rnberg) From lgrimmer at suse.com Mon Nov 27 06:43:40 2017 From: lgrimmer at suse.com (Lenz Grimmer) Date: Mon, 27 Nov 2017 14:43:40 +0100 Subject: [Deepsea-users] DeepSea version number management and packaging In-Reply-To: <1511775542.28420.130.camel@suse.com> References: <1511775542.28420.130.camel@suse.com> Message-ID: <2a3f1e97-9955-299b-92e6-96ed6db1f13a@suse.com> Hi Ricardo, thanks for pursuing this. On 11/27/2017 10:39 AM, Ricardo Dias wrote: > I opened a PR (https://github.com/SUSE/DeepSea/pull/817) with a new > proposal for managing the version number in deepsea. > The reasons behind this proposal are: > - the support for retrieving the deepsea version number through salt, > or the deepsea-cli, independently of the installation method > - the support for different distros (non-rpm based for instance) > > With the above PR the version number is moved from the "deepsea.spec" > file to it's own file called "version.txt". This "version.txt" is the > only location where the version is stored inside the git repo. > > The "deepsea.spec" file was renamed to "deepsea.spec.in" and works like > a template for the generation of the concrete "deepsea.spec" file. > The generation of the concrete "deepsea.spec" file is done along with > the tarball generation implemented as a Makefile target. 
Glad you adopted my approach here :) > The tarball generation in the Makefile computes the final version > number as the composition of: > - the string inside version.txt > - the OFFSET that corresponds to the number of commits since the last > commit that changed the "version.txt" file > - the HEAD commit hash > The final version number takes the form: VERSION+git.OFFSET.HASH > The final tarball includes the final version number in the version.txt > and the concrete "deepsea.spec" file with the final version number as > well. I'm still not convinced that adding git hashes to the actual version number is a good idea, but that's likely a bikeshed painting discussion :) > The result tarball can be used to package deepsea in different linux > distributions. > > To help in the packaging of DeepSea in OBS, I also changed the _service > based process in favor of a bash script called "checkin.sh" that knows > how to clone the DeepSea git repo, generate the tarball using "make > tarball", and extract the "deepsea.spec" from the tarball. > I branched the deepsea package in OBS where I made the above changes. > The package URL is: > https://build.opensuse.org/package/show/home:rjdias:branches:filesystem > s:ceph/deepsea > > There is a file README.txt with the instructions on how to use > "checkin.sh". 
> > In summary with all of what was described above the procedure to > release a new DeepSea version and package will be: > > 1) commit to git a change to the version number in version.txt (version > bump) > 2) checkout the OBS package "osc co project_name deepsea" > 3) run "checkin.sh", you can pass optional argument to checkin the > code from a different repo or branch > 4) run "osc vc" to update the "deepsea.changes" > 5) run "osc ci --noservice" to commit the OBS package files into OBS > > The important takeaway is that the tarball generation and version > management is independent of the linux distro, and therefore will not > limit our work when supporting a new distro in the future. I like the approach of having one "pristine source archive" that is built as part of the release process which then is used as the basis for all other release package builds. In the end, this is more about process than implementation, e.g. when to bump up the version number or how it should be formatted. As long as it's repeatable and there is a direct correlation between the tarball release and the corresponding git revision that was used for the build, I'm fine. Lenz -- SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany) GF:Felix Imend?rffer,Jane Smithard,Graham Norton,HRB 21284 (AG N?rnberg) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP digital signature URL: From joao at suse.de Mon Nov 27 07:05:45 2017 From: joao at suse.de (Joao Eduardo Luis) Date: Mon, 27 Nov 2017 14:05:45 +0000 Subject: [Deepsea-users] DeepSea version number management and packaging In-Reply-To: <2a3f1e97-9955-299b-92e6-96ed6db1f13a@suse.com> References: <1511775542.28420.130.camel@suse.com> <2a3f1e97-9955-299b-92e6-96ed6db1f13a@suse.com> Message-ID: <37b9e76d-9621-d77f-3a3b-3dc6faedd423@suse.de> On 11/27/2017 01:43 PM, Lenz Grimmer wrote: > Hi Ricardo, > > thanks for pursuing this. 
> > On 11/27/2017 10:39 AM, Ricardo Dias wrote: > >> I opened a PR (https://github.com/SUSE/DeepSea/pull/817) with a new >> proposal for managing the version number in deepsea. >> The reasons behind this proposal are: >> - the support for retrieving the deepsea version number through salt, >> or the deepsea-cli, independently of the installation method >> - the support for different distros (non-rpm based for instance) >> >> With the above PR the version number is moved from the "deepsea.spec" >> file to it's own file called "version.txt". This "version.txt" is the >> only location where the version is stored inside the git repo. >> >> The "deepsea.spec" file was renamed to "deepsea.spec.in" and works like >> a template for the generation of the concrete "deepsea.spec" file. >> The generation of the concrete "deepsea.spec" file is done along with >> the tarball generation implemented as a Makefile target. > > Glad you adopted my approach here :) > >> The tarball generation in the Makefile computes the final version >> number as the composition of: >> - the string inside version.txt >> - the OFFSET that corresponds to the number of commits since the last >> commit that changed the "version.txt" file >> - the HEAD commit hash >> The final version number takes the form: VERSION+git.OFFSET.HASH >> The final tarball includes the final version number in the version.txt >> and the concrete "deepsea.spec" file with the final version number as >> well. > > I'm still not convinced that adding git hashes to the actual version > number is a good idea, but that's likely a bikeshed painting discussion :) For versions *we* release, this is somewhat pointless, as long as we tag the release and build from it. Having the git sha in the version however becomes really useful when figuring which version is being run by the user. It may or may not be the version we are releasing. Users won't necessarily be running a tagged version. 
They may be running from a package generated by us on OBS, or from something they built locally, or from their own clone on OBS, or whatever else. It's important to know how far they have diverged, and which commit they are on, when diagnosing issues. It may even be useful for downstreams, to efficiently check whether a consumer of a given package is running the vanilla or a patched version, simply by comparing their git hash to whatever is the hash for that version. -Joao From rdias at suse.com Tue Nov 28 04:20:21 2017 From: rdias at suse.com (Ricardo Dias) Date: Tue, 28 Nov 2017 11:20:21 +0000 Subject: [Deepsea-users] Python3 support Message-ID: <1511868021.4922.28.camel@suse.com> Hi all, Just to inform you that in the very near future the Salt version available in openSUSE 42.3 will be upgraded to the python3 build of Salt. This means that DeepSea will soon stop working in openSUSE 42.3, since it depends on python2 to work correctly. We should start adding support for python3 so that we will be ready when the time comes. Contributions are welcome! 
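Such a port usually starts with source-compatible idioms that run on both interpreters; a generic example of the kind of change involved (not taken from the DeepSea codebase):

```python
from __future__ import absolute_import, division, print_function

def to_text(data, encoding="utf-8"):
    """Return a text string on both Python 2 and 3, decoding bytes if needed."""
    if isinstance(data, bytes):
        return data.decode(encoding)
    return data

# print() as a function and true division now behave identically on both
print(to_text(b"7 / 2 ="), 7 / 2)
```

With the `__future__` imports at the top of each module, `7 / 2` yields `3.5` and `print` is a function on Python 2 as well, which removes two of the most common porting pitfalls.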
Thanks, -- Ricardo Dias Senior Software Engineer - Storage Team SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, HRB 21284 (AG N?rnberg) From lgrimmer at suse.com Tue Nov 28 07:30:41 2017 From: lgrimmer at suse.com (Lenz Grimmer) Date: Tue, 28 Nov 2017 15:30:41 +0100 Subject: [Deepsea-users] DeepSea version number management and packaging In-Reply-To: <37b9e76d-9621-d77f-3a3b-3dc6faedd423@suse.de> References: <1511775542.28420.130.camel@suse.com> <2a3f1e97-9955-299b-92e6-96ed6db1f13a@suse.com> <37b9e76d-9621-d77f-3a3b-3dc6faedd423@suse.de> Message-ID: <855e58e6-c2b9-6d2e-f1c5-e1908270e31c@suse.com> Hi Joao, On 11/27/2017 03:05 PM, Joao Eduardo Luis wrote: >> I'm still not convinced that adding git hashes to the actual version >> number is a good idea, but that's likely a bikeshed painting >> discussion :) > > For versions *we* release, this is somewhat pointless, as long as we tag > the release and build from it. Which should be established practice, yes. For each public/official release, there should be a corresponding git tag, as well as a tarball containing the sources relating to that revision. > Having the git sha in the version however becomes really useful when > figuring which version is being run by the user. It may or may not be > the version we are releasing. But does it have to be part of the actual version number? The git hash does not really add any direct value to users just curious about which version they're using. Can't the git hash that particular build is based on be stored in a file like version.txt? This would have the benefit of other tools being able to query the version number without having to go through the packaging system. > Users won't necessarily be running a tagged version. They may be running > from a package generated by us on OBS, or from something they built > locally, or from their own clone on OBS, or whatever else. 
It's > important to know how far they diverted, and on which commit they are at > when diagnosing issues. The approach of storing the git revision in version.txt of course only works if you have a standardized process for building the "source tarball" that is the basis for building actual RPM package. If you have tools that perform a direct "git-archive" (like OBS), this does not work, and the only option for storing the git revision is as part of the archive's name, which is somewhat brittle and hard to use from within the application. To share how we do this in openATTIC: we maintain the version number in version.txt. It actually points to the *next* public release version, so e.g. as soon as oA 3.6.0 has been released, we bump up the version number in version.txt to 3.6.1, to indicate that this is the version we're currently working on: $ cat version.txt [package] VERSION = 3.6.1 For creating a tarball package, we have a script named "make_dist.py", which is capable of building both "release" tarballs as well as "snapshot" tarballs from a given git revision/branch. When building a source distribution tarball, version.txt is updated with some additional information before it is added to the tarball: [package] VERSION = 3.6.1 STATE = snapshot REV = af5d766e3036b3c439aa668428342bb8cd0528de BUILDDATE = 201711271141 The variables are hopefully self-explanatory; STATE could be either "snapshot" or "release" (which indicates an official release build). The script also updates the version number in the RPM spec file included in the tarball, so it matches the version string of the source tarball. When building a snapshot, the version number of the tarball is extended with a time stamp, e.g. openattic-3.6.1~201711271141.tar.bz2. This ensures that you can easily update from one snapshot to the next one. 
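The augmented version.txt and snapshot naming Lenz describes could be mimicked roughly like this — a simplified sketch of the idea, not openATTIC's actual make_dist.py:

```python
import time

def snapshot_version(base, builddate=None):
    """BASE~BUILDDATE, e.g. 3.6.1~201711271141; rpm/dpkg sort it *before* 3.6.1."""
    return "{0}~{1}".format(base, builddate or time.strftime("%Y%m%d%H%M"))

def render_version_txt(base, state, rev, builddate):
    """Render the augmented version.txt block placed into the source tarball."""
    return ("[package]\n"
            "VERSION = {0}\n"
            "STATE = {1}\n"
            "REV = {2}\n"
            "BUILDDATE = {3}\n").format(base, state, rev, builddate)

print(snapshot_version("3.6.1", "201711271141"))  # 3.6.1~201711271141
```

A release build would pass STATE = "release" and drop the `~BUILDDATE` suffix, so the final package version sorts newer than every snapshot that preceded it.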
Both RPM and dpkg understand this versioning scheme and are also aware of the fact that "3.6.1" (the final release) is actually newer than "3.6.1~" (a snapshot of the 3.6.1 development). This approach has served us very well so far, and also makes it possible to query oA for its version number via the REST API, for example. > It may even be useful for downstreams, to efficiently check whether a > consumer of a given package is running the vanilla or a patched version, > simply by comparing their git hash to whatever is the hash for that > version. Absolutely, agreed - but only if all patches applied to this package are actually tracked in a git repo, and not applied by RPM at build time... (but our approach using version.txt is also prone to that). Lenz -- SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany) GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP digital signature URL: From lgrimmer at suse.com Tue Nov 28 07:50:06 2017 From: lgrimmer at suse.com (Lenz Grimmer) Date: Tue, 28 Nov 2017 15:50:06 +0100 Subject: [Deepsea-users] Basic node patch/update management Message-ID: <5389187d-9414-812d-9a24-d9b95bde616c@suse.com> Hi, as far as I know, we currently perform package updates on all nodes as part of stage 0, correct? What is the approach for keeping all cluster nodes up to date and patched at run time, after the cluster has been deployed? I wonder if we could provide some basic functionality to the admin to achieve the following: - obtain a status overview of the current patch/update level for all or selected nodes (e.g. a list of all pending updates per node?) - apply patches/updates on all or selected nodes only As a first step, it would be nice to have an easy CLI command for this. 
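Such a per-node overview could be prototyped on top of Salt's existing `pkg.list_upgrades` function by post-processing its return data; the aggregation below is a hypothetical sketch, fed with a hard-coded result dict instead of a live `salt '*' pkg.list_upgrades` call:

```python
def updates_overview(pending):
    """Summarize {minion: {package: candidate_version}} into report lines."""
    lines = []
    for minion in sorted(pending):
        pkgs = pending[minion]
        status = "up to date" if not pkgs else "{0} update(s): {1}".format(
            len(pkgs), ", ".join(sorted(pkgs)))
        lines.append("{0}: {1}".format(minion, status))
    return lines

# Example data shaped like the JSON output of pkg.list_upgrades per minion
example = {
    "ceph-01": {"ceph": "12.2.2", "salt-minion": "2016.11.4"},
    "ceph-02": {},
}
for line in updates_overview(example):
    print(line)
```

Wrapped in a Salt runner, the same logic could serve both a CLI command and, later, a REST endpoint.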
Going forward, having the same functionality as part of the REST API would be golden :) Thoughts? Or does this already exist and I'm just not aware of it? Lenz -- SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany) GF:Felix Imend?rffer,Jane Smithard,Graham Norton,HRB 21284 (AG N?rnberg) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP digital signature URL: From vtheile at suse.com Tue Nov 28 08:17:42 2017 From: vtheile at suse.com (Volker Theile) Date: Tue, 28 Nov 2017 16:17:42 +0100 Subject: [Deepsea-users] Basic node patch/update management In-Reply-To: <5389187d-9414-812d-9a24-d9b95bde616c@suse.com> References: <5389187d-9414-812d-9a24-d9b95bde616c@suse.com> Message-ID: <3e8fb3bd-f2dd-61c7-c951-c7addaa25f3c@suse.com> Am 28.11.2017 um 15:50 schrieb Lenz Grimmer: > Hi, > > as far as I know, we currently perform package updates on all nodes as > part of stage 0, correct? > > What is the approach for keeping all cluster nodes up to date and > patched at run time, after the cluster has been deployed? I use the following commands: # salt '*' pkg.refresh_db # salt '*' pkg.upgrade > > I wonder if we could provide some basic functionality to the admin to > achieve the following: > > - obtain a status overview of the current patch/update level for all > or selected nodes (e.g. a list of all pending updates per node?) > - apply patches/updates on all or selected nodes only > > At a first step, it would be nice to have an easy CLI command for this. > > Going forward, having the same functionality as part of the REST API > would be golden :) > > Thoughts? Or does this already exist and I'm just not aware of it? 
> > Lenz > > > > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users Volker -- Volker Theile Software Engineer | openATTIC SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, HRB 21284 (AG N?rnberg) Phone: +49 173 5876879 E-Mail: vtheile at suse.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: OpenPGP digital signature URL: From jschmid at suse.de Tue Nov 28 09:47:03 2017 From: jschmid at suse.de (Joshua Schmid) Date: Tue, 28 Nov 2017 17:47:03 +0100 Subject: [Deepsea-users] Basic node patch/update management In-Reply-To: <5389187d-9414-812d-9a24-d9b95bde616c@suse.com> References: <5389187d-9414-812d-9a24-d9b95bde616c@suse.com> Message-ID: <20171128164703.v3clg2y4xi4o3vwv@g127.suse.de> Lenz Grimmer wrote on Tue, 28. Nov 14:50: > Hi, > > as far as I know, we currently perform package updates on all nodes as > part of stage 0, correct? > > What is the approach for keeping all cluster nodes up to date and > patched at run time, after the cluster has been deployed? > > I wonder if we could provide some basic functionality to the admin to > achieve the following: > > - obtain a status overview of the current patch/update level for all There is: 'zypper lu' (list-updates) 'zypper lp' (list-patches) 'zypper pchk' (patch-check) and corresponding action-takers > or selected nodes (e.g. a list of all pending updates per node?) > - apply patches/updates on all or selected nodes only > > At a first step, it would be nice to have an easy CLI command for this. > > Going forward, having the same functionality as part of the REST API > would be golden :) > > Thoughts? Or does this already exist and I'm just not aware of it? 
> > Lenz > > -- > SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany) > GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg) > > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users -- Joshua Schmid Software Engineer SUSE Enterprise Storage From lgrimmer at suse.com Tue Nov 28 10:03:05 2017 From: lgrimmer at suse.com (Lenz Grimmer) Date: Tue, 28 Nov 2017 18:03:05 +0100 Subject: [Deepsea-users] Basic node patch/update management In-Reply-To: <3e8fb3bd-f2dd-61c7-c951-c7addaa25f3c@suse.com> References: <5389187d-9414-812d-9a24-d9b95bde616c@suse.com> <3e8fb3bd-f2dd-61c7-c951-c7addaa25f3c@suse.com> Message-ID: <4f96147c-5aff-74b0-3ed6-4a77ce16edaa@suse.com> Hi Volker, On 11/28/2017 04:17 PM, Volker Theile wrote: >> What is the approach for keeping all cluster nodes up to date and >> patched at run time, after the cluster has been deployed? > > I use the following commands: > > # salt '*' pkg.refresh_db > # salt '*' pkg.upgrade Right, that's the generic Salt approach. But I wonder if we need to take some extra caution here, e.g. when it comes to the order in which nodes are upgraded? I'm not sure if it's safe to do this in parallel, as it might result in unexpected service restarts on all nodes at the same time. Salt by itself does not have any knowledge of the various roles of a Ceph cluster and their dependencies. Or maybe that's not really a valid concern? Lenz -- SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany) GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP digital signature URL: From lgrimmer at suse.com Tue Nov 28 10:06:08 2017 From: lgrimmer at suse.com (Lenz Grimmer) Date: Tue, 28 Nov 2017 18:06:08 +0100 Subject: [Deepsea-users] Basic node patch/update management In-Reply-To: <20171128164703.v3clg2y4xi4o3vwv@g127.suse.de> References: <5389187d-9414-812d-9a24-d9b95bde616c@suse.com> <20171128164703.v3clg2y4xi4o3vwv@g127.suse.de> Message-ID: Hi Joshua, thanks for your feedback! On 11/28/2017 05:47 PM, Joshua Schmid wrote: >> - obtain a status overview of the current patch/update level for all > There is: > > 'zypper lu' (list-updates) > 'zypper lp' (list-patches) > 'zypper pchk' (patch-check) > > and corresponding action-takers Thanks, good to know. Is there any documentation that outlines the differences between these? The zypper docs aren't really helpful in explaining the differences between "updates" and "patches", for example. Would it make sense to "saltify" these actions for generating these lists and the corresponding update actions? Lenz -- SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany) GF:Felix Imend?rffer,Jane Smithard,Graham Norton,HRB 21284 (AG N?rnberg) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP digital signature URL: From lgrimmer at suse.com Wed Nov 29 09:51:54 2017 From: lgrimmer at suse.com (Lenz Grimmer) Date: Wed, 29 Nov 2017 17:51:54 +0100 Subject: [Deepsea-users] Running stage 0 fails with some errors ("rpm: -1: unknown option") Message-ID: Hi, just making sure it's not a user error before I submit a bug report about this... This is deepsea-0.7.35+git.0.b1a7f7f-5.1.noarch on an up to date openSUSE Leap 42.3 cluster (5 nodes), installed from OBS. When running Stage 0, I observe the following errors - are these already known? 
Thanks, Lenz ceph-01:/srv/pillar/ceph # salt-run state.orch ceph.stage.0 [WARNING ] /usr/lib/python2.7/site-packages/salt/client/__init__.py:705: DeprecationWarning: The target type should be passed using the 'tgt_type' argument instead of 'expr_form'. Support for using 'expr_form' will be removed in Salt Fluorine. deepsea_minions : valid master_minion : valid ceph_version : valid [ERROR ] Run failed on minions: ceph-01.fritz.box Failures: ceph-01.fritz.box: Data failed to compile: ---------- Rendering SLS 'base:ceph.updates.restart.default' failed: mapping values are not allowed here; line 9 --- [...] warning: module.run: - name: advise.reboot - running: 4.4.92-31 - installed: rpm: -1: unknown option <====================== - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31" reboot: cmd.run: - name: "shutdown -r now" [...] --- [WARNING ] All minions are ready [ERROR ] Run failed on minions: ceph-05.fritz.box, ceph-03.fritz.box, ceph-04.fritz.box, ceph-01.fritz.box, ceph-02.fritz.box Failures: ceph-05.fritz.box: Data failed to compile: ---------- Rendering SLS 'base:ceph.updates.restart.default' failed: mapping values are not allowed here; line 9 --- [...] warning: module.run: - name: advise.reboot - running: 4.4.92-31 - installed: rpm: -1: unknown option <====================== - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31" reboot: cmd.run: - name: "shutdown -r now" [...] --- ceph-03.fritz.box: Data failed to compile: ---------- Rendering SLS 'base:ceph.updates.restart.default' failed: mapping values are not allowed here; line 9 --- [...] warning: module.run: - name: advise.reboot - running: 4.4.92-31 - installed: rpm: -1: unknown option <====================== - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31" reboot: cmd.run: - name: "shutdown -r now" [...] --- ceph-04.fritz.box: Data failed to compile: ---------- Rendering SLS 'base:ceph.updates.restart.default' failed: mapping values are not allowed here; line 9 --- [...] 
warning:
  module.run:
    - name: advise.reboot
    - running: 4.4.92-31
    - installed: rpm: -1: unknown option    <======================
    - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"

reboot:
  cmd.run:
    - name: "shutdown -r now"
[...]
---
    ceph-01.fritz.box:
        Data failed to compile:
        ----------
        Rendering SLS 'base:ceph.updates.restart.default' failed:
        mapping values are not allowed here; line 9

---
[...]

warning:
  module.run:
    - name: advise.reboot
    - running: 4.4.92-31
    - installed: rpm: -1: unknown option    <======================
    - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"

reboot:
  cmd.run:
    - name: "shutdown -r now"
[...]
---
    ceph-02.fritz.box:
        Data failed to compile:
        ----------
        Rendering SLS 'base:ceph.updates.restart.default' failed:
        mapping values are not allowed here; line 9

---
[...]

warning:
  module.run:
    - name: advise.reboot
    - running: 4.4.92-31
    - installed: rpm: -1: unknown option    <======================
    - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"

reboot:
  cmd.run:
    - name: "shutdown -r now"
[...]
---

ceph-01.fritz.box_master:
Name: salt-api - Function: salt.state - Result: Changed Started: - 16:18:43.590921 Duration: 1201.393 ms
Name: sync master - Function: salt.state - Result: Changed Started: - 16:18:44.792415 Duration: 754.295 ms
Name: repo master - Function: salt.state - Result: Clean Started: - 16:18:45.546822 Duration: 454.632 ms
Name: prepare master - Function: salt.state - Result: Changed Started: - 16:18:46.001615 Duration: 2599.394 ms
Name: filequeue.remove - Function: salt.runner - Result: Clean Started: - 16:18:48.601128 Duration: 553.253 ms
----------
          ID: restart master
    Function: salt.state
      Result: False
     Comment: Run failed on minions: ceph-01.fritz.box
              Failures:
                  ceph-01.fritz.box:
                      Data failed to compile:
                      ----------
                      Rendering SLS 'base:ceph.updates.restart.default' failed:
                      mapping values are not allowed here; line 9

---
[...]

warning:
  module.run:
    - name: advise.reboot
    - running: 4.4.92-31
    - installed: rpm: -1: unknown option    <======================
    - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"

reboot:
  cmd.run:
    - name: "shutdown -r now"
[...]
---
     Started: 16:18:49.154541
    Duration: 302.971 ms
     Changes:
Name: filequeue.add - Function: salt.runner - Result: Changed Started: - 16:18:49.457627 Duration: 1243.244 ms
Name: minions.ready - Function: salt.runner - Result: Changed Started: - 16:18:50.700978 Duration: 1469.163 ms
Name: repo - Function: salt.state - Result: Clean Started: - 16:18:52.170259 Duration: 523.778 ms
Name: common packages - Function: salt.state - Result: Clean Started: - 16:18:52.694173 Duration: 1807.766 ms
Name: sync - Function: salt.state - Result: Changed Started: - 16:18:54.502060 Duration: 2560.375 ms
Name: mines - Function: salt.state - Result: Clean Started: - 16:18:57.062590 Duration: 1467.328 ms
Name: updates - Function: salt.state - Result: Changed Started: - 16:18:58.530032 Duration: 3384.731 ms
----------
          ID: restart
    Function: salt.state
      Result: False
     Comment: Run failed on minions: ceph-05.fritz.box, ceph-03.fritz.box, ceph-04.fritz.box, ceph-01.fritz.box, ceph-02.fritz.box
              Failures:
                  ceph-05.fritz.box:
                      Data failed to compile:
                      ----------
                      Rendering SLS 'base:ceph.updates.restart.default' failed:
                      mapping values are not allowed here; line 9

---
[...]

warning:
  module.run:
    - name: advise.reboot
    - running: 4.4.92-31
    - installed: rpm: -1: unknown option    <======================
    - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"

reboot:
  cmd.run:
    - name: "shutdown -r now"
[...]
---
                  ceph-03.fritz.box:
                      Data failed to compile:
                      ----------
                      Rendering SLS 'base:ceph.updates.restart.default' failed:
                      mapping values are not allowed here; line 9

---
[...]

warning:
  module.run:
    - name: advise.reboot
    - running: 4.4.92-31
    - installed: rpm: -1: unknown option    <======================
    - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"

reboot:
  cmd.run:
    - name: "shutdown -r now"
[...]
---
                  ceph-04.fritz.box:
                      Data failed to compile:
                      ----------
                      Rendering SLS 'base:ceph.updates.restart.default' failed:
                      mapping values are not allowed here; line 9

---
[...]

warning:
  module.run:
    - name: advise.reboot
    - running: 4.4.92-31
    - installed: rpm: -1: unknown option    <======================
    - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"

reboot:
  cmd.run:
    - name: "shutdown -r now"
[...]
---
                  ceph-01.fritz.box:
                      Data failed to compile:
                      ----------
                      Rendering SLS 'base:ceph.updates.restart.default' failed:
                      mapping values are not allowed here; line 9

---
[...]

warning:
  module.run:
    - name: advise.reboot
    - running: 4.4.92-31
    - installed: rpm: -1: unknown option    <======================
    - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"

reboot:
  cmd.run:
    - name: "shutdown -r now"
[...]
---
                  ceph-02.fritz.box:
                      Data failed to compile:
                      ----------
                      Rendering SLS 'base:ceph.updates.restart.default' failed:
                      mapping values are not allowed here; line 9

---
[...]

warning:
  module.run:
    - name: advise.reboot
    - running: 4.4.92-31
    - installed: rpm: -1: unknown option    <======================
    - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"

reboot:
  cmd.run:
    - name: "shutdown -r now"
[...]
---
     Started: 16:19:01.914876
    Duration: 547.539 ms
     Changes:

Summary for ceph-01.fritz.box_master
-------------
Succeeded: 12 (changed=7)
Failed:     2
-------------
Total states run:     14
Total run time:   18.870 s

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 195 bytes
Desc: OpenPGP digital signature
URL: 

From rdias at suse.com  Wed Nov 29 09:59:39 2017
From: rdias at suse.com (Ricardo Dias)
Date: Wed, 29 Nov 2017 16:59:39 +0000
Subject: [Deepsea-users] Running stage 0 fails with some errors ("rpm: -1: unknown option")
In-Reply-To: 
References: 
Message-ID: <1511974779.3686.9.camel@suse.com>

On Wed, 2017-11-29 at 17:51 +0100, Lenz Grimmer wrote:
> Hi,
>
> just making sure it's not a user error before I submit a bug report
> about this...
>
> This is deepsea-0.7.35+git.0.b1a7f7f-5.1.noarch on an up-to-date
> openSUSE Leap 42.3 cluster (5 nodes), installed from OBS.
>
> When running Stage 0, I observe the following errors - are these
> already known?

Yes, that is caused by running DeepSea with Salt 2017.7.2, which is the
default Salt version in openSUSE Leap 42.3.

Currently DeepSea only supports Salt 2016.11.4.

> Thanks,
>
> Lenz
>
> ceph-01:/srv/pillar/ceph # salt-run state.orch ceph.stage.0
> [WARNING ] /usr/lib/python2.7/site-packages/salt/client/__init__.py:705:
> DeprecationWarning: The target type should be passed using the
> 'tgt_type' argument instead of 'expr_form'. Support for using
> 'expr_form' will be removed in Salt Fluorine.
>
> deepsea_minions : valid
> master_minion : valid
> ceph_version : valid
> [ERROR ] Run failed on minions: ceph-01.fritz.box
> Failures:
>     ceph-01.fritz.box:
>         Data failed to compile:
>         ----------
>         Rendering SLS 'base:ceph.updates.restart.default' failed:
>         mapping values are not allowed here; line 9
>
> ---
> [...]
>
> warning:
>   module.run:
>     - name: advise.reboot
>     - running: 4.4.92-31
>     - installed: rpm: -1: unknown option    <======================
>     - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"
>
> reboot:
>   cmd.run:
>     - name: "shutdown -r now"
> [...]
> ---
>
> [WARNING ] All minions are ready
> [ERROR ] Run failed on minions: ceph-05.fritz.box, ceph-03.fritz.box,
> ceph-04.fritz.box, ceph-01.fritz.box, ceph-02.fritz.box
> Failures:
>     ceph-05.fritz.box:
>         Data failed to compile:
>         ----------
>         Rendering SLS 'base:ceph.updates.restart.default' failed:
>         mapping values are not allowed here; line 9
>
> ---
> [...]
>
> warning:
>   module.run:
>     - name: advise.reboot
>     - running: 4.4.92-31
>     - installed: rpm: -1: unknown option    <======================
>     - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"
>
> reboot:
>   cmd.run:
>     - name: "shutdown -r now"
> [...]
> ---
>     ceph-03.fritz.box:
>         Data failed to compile:
>         ----------
>         Rendering SLS 'base:ceph.updates.restart.default' failed:
>         mapping values are not allowed here; line 9
>
> ---
> [...]
>
> warning:
>   module.run:
>     - name: advise.reboot
>     - running: 4.4.92-31
>     - installed: rpm: -1: unknown option    <======================
>     - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"
>
> reboot:
>   cmd.run:
>     - name: "shutdown -r now"
> [...]
> ---
>     ceph-04.fritz.box:
>         Data failed to compile:
>         ----------
>         Rendering SLS 'base:ceph.updates.restart.default' failed:
>         mapping values are not allowed here; line 9
>
> ---
> [...]
>
> warning:
>   module.run:
>     - name: advise.reboot
>     - running: 4.4.92-31
>     - installed: rpm: -1: unknown option    <======================
>     - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"
>
> reboot:
>   cmd.run:
>     - name: "shutdown -r now"
> [...]
> ---
>     ceph-01.fritz.box:
>         Data failed to compile:
>         ----------
>         Rendering SLS 'base:ceph.updates.restart.default' failed:
>         mapping values are not allowed here; line 9
>
> ---
> [...]
>
> warning:
>   module.run:
>     - name: advise.reboot
>     - running: 4.4.92-31
>     - installed: rpm: -1: unknown option    <======================
>     - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"
>
> reboot:
>   cmd.run:
>     - name: "shutdown -r now"
> [...]
> ---
>     ceph-02.fritz.box:
>         Data failed to compile:
>         ----------
>         Rendering SLS 'base:ceph.updates.restart.default' failed:
>         mapping values are not allowed here; line 9
>
> ---
> [...]
>
> warning:
>   module.run:
>     - name: advise.reboot
>     - running: 4.4.92-31
>     - installed: rpm: -1: unknown option    <======================
>     - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"
>
> reboot:
>   cmd.run:
>     - name: "shutdown -r now"
> [...]
> ---
>
> ceph-01.fritz.box_master:
> Name: salt-api - Function: salt.state - Result: Changed Started: - 16:18:43.590921 Duration: 1201.393 ms
> Name: sync master - Function: salt.state - Result: Changed Started: - 16:18:44.792415 Duration: 754.295 ms
> Name: repo master - Function: salt.state - Result: Clean Started: - 16:18:45.546822 Duration: 454.632 ms
> Name: prepare master - Function: salt.state - Result: Changed Started: - 16:18:46.001615 Duration: 2599.394 ms
> Name: filequeue.remove - Function: salt.runner - Result: Clean Started: - 16:18:48.601128 Duration: 553.253 ms
> ----------
>           ID: restart master
>     Function: salt.state
>       Result: False
>      Comment: Run failed on minions: ceph-01.fritz.box
>               Failures:
>                   ceph-01.fritz.box:
>                       Data failed to compile:
>                       ----------
>                       Rendering SLS 'base:ceph.updates.restart.default' failed:
>                       mapping values are not allowed here; line 9
>
> ---
> [...]
>
> warning:
>   module.run:
>     - name: advise.reboot
>     - running: 4.4.92-31
>     - installed: rpm: -1: unknown option    <======================
>     - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"
>
> reboot:
>   cmd.run:
>     - name: "shutdown -r now"
> [...]
> ---
>      Started: 16:18:49.154541
>     Duration: 302.971 ms
>      Changes:
> Name: filequeue.add - Function: salt.runner - Result: Changed Started: - 16:18:49.457627 Duration: 1243.244 ms
> Name: minions.ready - Function: salt.runner - Result: Changed Started: - 16:18:50.700978 Duration: 1469.163 ms
> Name: repo - Function: salt.state - Result: Clean Started: - 16:18:52.170259 Duration: 523.778 ms
> Name: common packages - Function: salt.state - Result: Clean Started: - 16:18:52.694173 Duration: 1807.766 ms
> Name: sync - Function: salt.state - Result: Changed Started: - 16:18:54.502060 Duration: 2560.375 ms
> Name: mines - Function: salt.state - Result: Clean Started: - 16:18:57.062590 Duration: 1467.328 ms
> Name: updates - Function: salt.state - Result: Changed Started: - 16:18:58.530032 Duration: 3384.731 ms
> ----------
>           ID: restart
>     Function: salt.state
>       Result: False
>      Comment: Run failed on minions: ceph-05.fritz.box, ceph-03.fritz.box, ceph-04.fritz.box, ceph-01.fritz.box, ceph-02.fritz.box
>               Failures:
>                   ceph-05.fritz.box:
>                       Data failed to compile:
>                       ----------
>                       Rendering SLS 'base:ceph.updates.restart.default' failed:
>                       mapping values are not allowed here; line 9
>
> ---
> [...]
>
> warning:
>   module.run:
>     - name: advise.reboot
>     - running: 4.4.92-31
>     - installed: rpm: -1: unknown option    <======================
>     - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"
>
> reboot:
>   cmd.run:
>     - name: "shutdown -r now"
> [...]
> ---
>                   ceph-03.fritz.box:
>                       Data failed to compile:
>                       ----------
>                       Rendering SLS 'base:ceph.updates.restart.default' failed:
>                       mapping values are not allowed here; line 9
>
> ---
> [...]
>
> warning:
>   module.run:
>     - name: advise.reboot
>     - running: 4.4.92-31
>     - installed: rpm: -1: unknown option    <======================
>     - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"
>
> reboot:
>   cmd.run:
>     - name: "shutdown -r now"
> [...]
> ---
>                   ceph-04.fritz.box:
>                       Data failed to compile:
>                       ----------
>                       Rendering SLS 'base:ceph.updates.restart.default' failed:
>                       mapping values are not allowed here; line 9
>
> ---
> [...]
>
> warning:
>   module.run:
>     - name: advise.reboot
>     - running: 4.4.92-31
>     - installed: rpm: -1: unknown option    <======================
>     - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"
>
> reboot:
>   cmd.run:
>     - name: "shutdown -r now"
> [...]
> ---
>                   ceph-01.fritz.box:
>                       Data failed to compile:
>                       ----------
>                       Rendering SLS 'base:ceph.updates.restart.default' failed:
>                       mapping values are not allowed here; line 9
>
> ---
> [...]
>
> warning:
>   module.run:
>     - name: advise.reboot
>     - running: 4.4.92-31
>     - installed: rpm: -1: unknown option    <======================
>     - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"
>
> reboot:
>   cmd.run:
>     - name: "shutdown -r now"
> [...]
> ---
>                   ceph-02.fritz.box:
>                       Data failed to compile:
>                       ----------
>                       Rendering SLS 'base:ceph.updates.restart.default' failed:
>                       mapping values are not allowed here; line 9
>
> ---
> [...]
>
> warning:
>   module.run:
>     - name: advise.reboot
>     - running: 4.4.92-31
>     - installed: rpm: -1: unknown option    <======================
>     - unless: "echo rpm: -1: unknown option | grep -q 4.4.92-31"
>
> reboot:
>   cmd.run:
>     - name: "shutdown -r now"
> [...]
> ---
>      Started: 16:19:01.914876
>     Duration: 547.539 ms
>      Changes:
>
> Summary for ceph-01.fritz.box_master
> -------------
> Succeeded: 12 (changed=7)
> Failed:     2
> -------------
> Total states run:     14
> Total run time:   18.870 s
>
>
> _______________________________________________
> Deepsea-users mailing list
> Deepsea-users at lists.suse.com
> http://lists.suse.com/mailman/listinfo/deepsea-users

-- 
Ricardo Dias
Senior Software Engineer - Storage Team
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)

From lgrimmer at suse.com  Thu Nov 30 08:26:43 2017
From: lgrimmer at suse.com (Lenz Grimmer)
Date: Thu, 30 Nov 2017 16:26:43 +0100
Subject: [Deepsea-users] Running stage 0 fails with some errors ("rpm: -1: unknown option")
In-Reply-To: <1511974779.3686.9.camel@suse.com>
References: <1511974779.3686.9.camel@suse.com>
Message-ID: 

Hi Ricardo,

On 11/29/2017 05:59 PM, Ricardo Dias wrote:

> Yes, that is caused by running DeepSea with salt version 2017.7.2 that
> is the default version of salt in opensuse leap 42.3.
>
> Currently DeepSea only supports salt version 2016.11.4

Oh, so users on Leap can't actually use DeepSea at all? Or is there a
recommended way to downgrade Salt to the previous version?

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 195 bytes
Desc: OpenPGP digital signature
URL: 
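[Archive note] The "Rendering SLS ... mapping values are not allowed here" failure in this thread can be reproduced outside Salt entirely. The error text of a failed rpm call ("rpm: -1: unknown option") contains colons, so once it is interpolated unquoted into the rendered SLS the file is no longer valid YAML. The sketch below assumes PyYAML (which Salt itself depends on) and reconstructs the SLS fragment from the log above; it is not DeepSea's actual template code:

```python
# Reproduce the YAML parse error from the thread, then show that quoting
# the interpolated value is enough to make the document parse again.
import yaml

# Error text an rpm call emits when handed a flag it does not understand;
# note the embedded colons.
rpm_output = "rpm: -1: unknown option"

# SLS fragment as rendered in the failing run: the value lands unquoted.
broken_sls = """\
warning:
  module.run:
    - name: advise.reboot
    - running: 4.4.92-31
    - installed: %s
""" % rpm_output

try:
    yaml.safe_load(broken_sls)
    render_error = None
except yaml.YAMLError as exc:
    render_error = str(exc)

print("render failed:", render_error)  # contains "mapping values are not allowed here"

# The same fragment with the value quoted parses fine.
fixed_sls = broken_sls.replace(rpm_output, '"%s"' % rpm_output)
data = yaml.safe_load(fixed_sls)
print(data["warning"]["module.run"][-1])
```

The second colon in `- installed: rpm: -1: unknown option` is what trips the scanner: YAML sees a second mapping-value indicator inside a plain scalar, exactly the "line 9" complaint in the rendered `ceph.updates.restart.default` state.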