From dbyte at suse.com Fri Jan 6 13:10:35 2017 From: dbyte at suse.com (David Byte) Date: Fri, 6 Jan 2017 20:10:35 +0000 Subject: [Deepsea-users] Antw: Re: Fwd: Detecting disks by their properties rather than by their path In-Reply-To: <585BD4A10200001C002C7627@prv-mh.provo.novell.com> References: <543124068.5941150.1481233886382.JavaMail.zimbra@redhat.com> <1ab0549b-fe84-e004-cc11-722bf82b9626@suse.com> <3974773.GTqgBxxajX@ruby> <585BD4A10200001C002C7627@prv-mh.provo.novell.com> Message-ID: <18494FC5-D417-44BF-BDAB-1415BA240062@suse.com> I completed a version and then failed to upload it to my github (github.com/dmbyte/SES-scripts). You are looking for the do-osds.sh. I'll keep looking and see if I can find the finished version and upload it to my github. You can take a look at what's there and maybe figure out the continuation in the meantime. David Byte Sr. Technology Strategist Alliances and SUSE Embedded dbyte at suse.com 918.528.4422 From: on behalf of Martin Weiss Reply-To: Discussions about the DeepSea management framework for Ceph Date: Thursday, December 22, 2016 at 6:26 AM To: "deepsea-users at lists.suse.com" Subject: [Deepsea-users] Antw: Re: Fwd: Detecting disks by their properties rather than by their path Cool thing - it would be great if you could share this script so that we can give it a try in different deployments ;-) Just one additional point - in case the servers have multiple HBAs, building the OSDs and also, later on, the crush map might require knowing which disk is connected to which HBA on which port. Thanks, Martin I wrote a script about 6 months ago that uses lsblk output to compare the configuration of storage nodes. This includes device manufacturer, model, rotational speed and maybe a few other attributes. This is enough on most systems to get you pretty close. I've been doing some additional work on a script that also picks up the connection type and speed (12G SAS for example). With all of this data, it becomes easy to group the disks by type, and if the ratios are within bounds, you could simply say: use group B as journals for group A. Group C should be in its own bucket, etc. David Byte Sr. Technical Strategist IHV Alliances and Embedded SUSE Sent from my iPhone. Typos are Apple's fault. > On Dec 21, 2016, at 6:31 AM, Eric Jackson wrote: > > Hi Lenz, > As Lars mentioned, this has been an ongoing conversation. Currently, > DeepSea is using the long list, but at least it's generated by Salt. At some > point, the list does need to exist on the minion during the creation of OSDs. > Now, that could be a Salt module instead of the Salt pillar, effectively > generating the list dynamically. > > I only have a couple of concerns with no solution at this time. How do I > trivially know what models I have? I most likely cannot say simply Samsung or > Intel, but will need to collect the alphanumeric model numbers of every drive > on all nodes. Ideally, I'd like to believe that it's only a few at a given > time in the life of a Ceph cluster, but over the course of a couple of years, > I could see that list becoming rather lengthy as well, with replacements and > upgrades. > Another concern is debugging. While a hardcoded list of devices in the Salt > pillar is not elegant, there are no moving parts. If a device in the list is > not an OSD, the debugging question is currently simply "how did ceph-disk fail?"
Using a Salt > module that accepts various filters to return a list may not be seen as an > improvement over a static file when trying to determine why a drive is not an > OSD. > The last issue I haven't had enough time to think about entirely, but I > wonder if temporary failure conditions will create any unintended side effects. > Currently, the Salt pillar is authoritative. If the list becomes ephemeral > and changes between runs (not due to actual intended changes, but more like > flaky hardware), does that make any other operations more difficult? > > What would you think of a filter-based solution that generates the list as a > static file for each storage node? This would mimic the current behavior of > keyrings in DeepSea. It's a two-step process. The keyrings are generated and > stored on the Salt master. The second step includes adding the keyring to > Ceph and distributing it to the correct roles. That keeps debugging simple > and avoids regenerating keyrings unnecessarily. I think this addresses the > second and third issue above. Additionally, the static file may provide a > history via version control. That could prove useful for sysadmin teams > knowing the previously detected hardware (e.g. did somebody change the filter > for this node or did they change the hardware?). > > These lists would still feed into the generation of hardware profiles, unless > the goal is not only to include a filter, but to have the admin provide a desired > profile. Maybe only adding the filter would be a sufficient first step? I > believe the default filter would be all available devices, as it is now. > > Eric > > > >> On Wednesday, December 21, 2016 09:46:48 AM Lenz Grimmer wrote: >> Hi all, >> >> I stumbled over this idea on the ceph-ansible mailing list and found it >> quite interesting. Is there some merit in being able to select disks >> based on this kind of information? How does DeepSea handle this? >> >> Lenz >> >> >> -------- Forwarded Message -------- >> Subject: [Ceph-ansible] Detecting disks by their properties rather than >> by their path >> Date: Thu, 8 Dec 2016 16:51:26 -0500 (EST) >> From: Erwan Velu >> To: Ceph-ansible at lists.ceph.com, lgrimmer at suse.de >> >> Hi list, >> >> This is my first contribution to ceph-ansible and I wanted to share my >> idea with you and get preliminary feedback on it. >> The first time I saw ceph-ansible, I was surprised by the way disks are >> defined: naming disks per node by their path, like "/dev/sdx". >> >> With my low-level background this is a big issue, as: >> - if you have 20-30 disks per node, that makes a serious list to maintain >> - it doesn't guarantee which device you select: is /dev/sdx a USB key, >> an SSD or a rotational drive? >> - the name can change over time: inserting a new disk or rebooting can lead >> to a different name >> >> >> My first approach was to use /dev/disk/by-id/ paths, but that triggers >> the following issues: >> - you still need a serious list of devices, which is even longer ... >> - the names integrate serial numbers or hex strings, making them difficult to >> maintain; every device is different >> >> I ended up with the following idea: >> - Why shouldn't we choose the disks by their features and let a module >> find the associated path? >> >> The syntax looks like: >> "{'samsung_journal_disks': {'model': 'SAMSUNG MZ7LN512', 'rotational': >> '0', 'count': 2, 'ceph_type': 'journal' }}" >> >> With this syntax, you can select disks by model, vendor, size, SSD or >> not.
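(As an illustration only - this is neither Erwan's Ansible module nor David's do-osds.sh, just a sketch of where such filter values come from - the attributes these filters match on can be read directly on any node. The lsblk column list below is standard util-linux; the sysfs paths apply to SCSI/SATA disks:

  # one line per disk type: vendor, model, rotational flag, transport, size
  lsblk -d -n -o VENDOR,MODEL,ROTA,TRAN,SIZE | sort | uniq -c

  # the same 'model' / 'rotational' values, read via sysfs
  for d in /sys/block/sd?; do
      printf '%s: model=%s rotational=%s\n' "$(basename "$d")" \
          "$(cat "$d"/device/model)" "$(cat "$d"/queue/rotational)"
  done

Wrapping the first command in salt '*' cmd.run '...' gives a quick per-node inventory that can be compared across storage nodes, much like the script David describes above.)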
>> Then you can specify that you want more of that type by specifying the 'count' >> number: having 10 disks of the same type doesn't make the definition longer. >> It is then possible to define which disks are "journals" or "data" by >> defining the ceph_type attribute. >> >> If that definition matches the actual system, the module returns the >> associated /dev/disk/by-id/ paths, like: >> samsung_journal_disks_000 : >> /dev/disk/by-id/scsi-36848f690e68a50001e428e511e4a6c20 >> samsung_journal_disks_001 : >> /dev/disk/by-id/scsi-36848f690e68a50001e428e521e55c62b >> >> The real benefit of that is that the disk paths become the result of a search >> and not a starting point. The disk path has very little value then. >> >> Ceph-ansible will be able to select which disks should be used for what: >> this part is under work. >> >> I wrote some documentation about it; you can get an overview here: >> https://gist.github.com/ErwanAliasr1/b19c62fac061f4f924f587b1973cf0ea >> >> All this work can be found in https://github.com/ceph/ceph-ansible/pull/1128 >> >> I didn't detail everything in this first mail to avoid being too verbose. >> >> I'd love to get your feedback on this idea: >> - does it solve issues you have already had? >> - could it be useful for you? >> >> Cheers, >> Erwan >> _______________________________________________ >> Ceph-ansible mailing list >> Ceph-ansible at lists.ceph.com >> http://lists.ceph.com/listinfo.cgi/ceph-ansible-ceph.com > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users _______________________________________________ Deepsea-users mailing list Deepsea-users at lists.suse.com http://lists.suse.com/mailman/listinfo/deepsea-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From bo.jin at suse.com Mon Jan 9 04:51:28 2017 From: bo.jin at suse.com (Bo Jin) Date: Mon, 9 Jan 2017 12:51:28 +0100 Subject: [Deepsea-users] install calamari Message-ID: <8c3f8fa5-f07b-e4a9-cfd7-a2e6e720fab3@suse.com> Hi, I deployed a ceph cluster using ses4 deepsea. Now I want to install calamari on the master server. I did it, but if I open the calamari web interface it still asks me to use ceph-deploy on all cluster nodes in order to get information about the cluster and nodes. But I thought that if calamari is using salt, and deepsea has already deployed salt to the cluster nodes and minions, why shouldn't calamari just pick up the minion information? Next question is: I installed openattic on a separate node which is not the salt master. It works so far except for the "node" tab in the openattic UI. Someone told me that for the node view openattic must be running on the salt master. So how should openattic and calamari co-exist in such an environment? Or what is the best practice for having calamari running in parallel to openattic? Thanks -- Bo Jin Sales Engineer SUSE Linux Mobile: +41792586688 bo.jin at suse.com www.suse.com From tserong at suse.com Mon Jan 9 19:58:13 2017 From: tserong at suse.com (Tim Serong) Date: Tue, 10 Jan 2017 13:58:13 +1100 Subject: [Deepsea-users] install calamari In-Reply-To: <8c3f8fa5-f07b-e4a9-cfd7-a2e6e720fab3@suse.com> References: <8c3f8fa5-f07b-e4a9-cfd7-a2e6e720fab3@suse.com> Message-ID: On 01/09/2017 10:51 PM, Bo Jin wrote: > Hi, > I deployed a ceph cluster using ses4 deepsea. Now I want to install > calamari on the master server.
I did it but if I open calamari web > interface it still asks me to use ceph-deploy to all cluster nodes in > order to get information about cluster and nodes. > > But I thought if calamari is using salt and deepsea already deployed > salt to the cluster nodes and minions why shouldn't calamari just pick > up the minion information? Calamari includes some salt state files which set up a scheduled job on the ceph nodes to check cluster status. That's how it knows there's a cluster, i.e. it's not enough for the minions to exist, if calamari's ceph.heartbeat function isn't running properly, it won't realise the cluster is there. But, this should have all been set up automatically. Here's some things to check: 1) Make sure the salt state files included in the calamari-server package haven't been edited or mangled by something else. Run `rpm -q --verify calamari-server` - if this gives no output, you're good. If it shows any of the files in /srv/salt or /srv/reactor as having been changed somehow, that might be the source of the problem. 2) Run `salt '*' state.highstate` on the master and see if that fixes it (maybe the salt state included with calamari-server simply wasn't applied yet somehow?) 3) Make sure the ceph client admin keyring (/etc/ceph/ceph.client.admin.keyring) is installed on all the MON nodes . Calamari's ceph.heartbeat function won't give proper cluster state if this is not present. 4) Try `salt '*' ceph.get_heartbeats --output json` on the master and see if you get a blob of cluster status information. > Next question is: I installed openattic on a separate node which is not > salt master. It works so far excep "node" tab in openattic UI. Someone > told me that for node view openattic must be running on salt master. > So how should openattic and calamari co-exist in a such environment? Or > what is a best practice to have calamari running in parallel to openattic? You could give salt-master a second IP address, and tweak /etc/apache2/conf.d/calamari.conf so that calamari is only available on that IP address. For example, on one of my test systems: - eth0 is 192.168.12.225 - eth0:0 is 192.168.12.226 - I edited /etc/apache2/conf.d/calamari.conf and changed to - Restart apache, and openATTIC is accessible on http://192.168.12.225/, calamari on http://192.168.12.226/ Regards, Tim -- Tim Serong Senior Clustering Engineer SUSE tserong at suse.com From bo.jin at suse.com Tue Jan 10 05:33:59 2017 From: bo.jin at suse.com (Bo Jin) Date: Tue, 10 Jan 2017 13:33:59 +0100 Subject: [Deepsea-users] install calamari In-Reply-To: References: <8c3f8fa5-f07b-e4a9-cfd7-a2e6e720fab3@suse.com> Message-ID: <6ca2b95f-bedb-50fe-2eaf-2923d4b1e673@suse.com> Problem solved. You are right I didn't have /etc/ceph/ceph.client.admin.keyring on the mon nodes. So I modified the policy.cfg and redeployed stage 2 and 3. Then calamari started to show contents. Thanks Tim Bo On 01/10/2017 03:58 AM, Tim Serong wrote: > On 01/09/2017 10:51 PM, Bo Jin wrote: >> Hi, >> I deployed a ceph cluster using ses4 deepsea. Now I want to install >> calamari on the master server. I did it but if I open calamari web >> interface it still asks me to use ceph-deploy to all cluster nodes in >> order to get information about cluster and nodes. >> >> But I thought if calamari is using salt and deepsea already deployed >> salt to the cluster nodes and minions why shouldn't calamari just pick >> up the minion information? 
> > Calamari includes some salt state files which set up a scheduled job on > the ceph nodes to check cluster status. That's how it knows there's a > cluster, i.e. it's not enough for the minions to exist, if calamari's > ceph.heartbeat function isn't running properly, it won't realise the > cluster is there. But, this should have all been set up automatically. > > Here's some things to check: > > 1) Make sure the salt state files included in the calamari-server > package haven't been edited or mangled by something else. Run `rpm -q > --verify calamari-server` - if this gives no output, you're good. If it > shows any of the files in /srv/salt or /srv/reactor as having been > changed somehow, that might be the source of the problem. > > 2) Run `salt '*' state.highstate` on the master and see if that fixes it > (maybe the salt state included with calamari-server simply wasn't > applied yet somehow?) > > 3) Make sure the ceph client admin keyring > (/etc/ceph/ceph.client.admin.keyring) is installed on all the MON nodes > . Calamari's ceph.heartbeat function won't give proper cluster state if > this is not present. > > 4) Try `salt '*' ceph.get_heartbeats --output json` on the master and > see if you get a blob of cluster status information. > >> Next question is: I installed openattic on a separate node which is not >> salt master. It works so far excep "node" tab in openattic UI. Someone >> told me that for node view openattic must be running on salt master. >> So how should openattic and calamari co-exist in a such environment? Or >> what is a best practice to have calamari running in parallel to openattic? > > You could give salt-master a second IP address, and tweak > /etc/apache2/conf.d/calamari.conf so that calamari is only available on > that IP address. For example, on one of my test systems: > > - eth0 is 192.168.12.225 > - eth0:0 is 192.168.12.226 > - I edited /etc/apache2/conf.d/calamari.conf and changed > to > - Restart apache, and openATTIC is accessible on http://192.168.12.225/, > calamari on http://192.168.12.226/ > > Regards, > > Tim > -- Bo Jin Sales Engineer SUSE Linux Mobile: +41792586688 bo.jin at suse.com www.suse.com From bo.jin at suse.com Tue Jan 10 13:48:36 2017 From: bo.jin at suse.com (Bo Jin) Date: Tue, 10 Jan 2017 21:48:36 +0100 Subject: [Deepsea-users] remove osd Message-ID: Hi, What is the correct policy if e.g. I have 5 cluster nodes but I don't want node1 being used for osd (storage role) but only use node1 for being master. How should I define it in policy.cfg? node names convention: sesnode[12345] # Cluster assignment cluster-ceph/cluster/*.sls # Hardware Profile profile-*-1/cluster/sesnode[2345]*.sls profile-*-1/stack/default/ceph/minions/*yml # Common configuration config/stack/default/global.yml config/stack/default/ceph/cluster.yml # Role assignment role-master/cluster/sesnode1.sls role-admin/cluster/ses*.sls role-mon/cluster/sesnode[234]*.sls role-mon/stack/default/ceph/minions/sesnode[234]*.yml Or should I better define a line for storage role. role-storage/cluster/sesnode[2345].sls And last question: If I want to get rid of one existing osd node should I 1. modify policy.cfg and re-run the stages? or 2. just use command salt "sesnode3*" state.sls ceph.rescind.storage.terminate but would next time if I run stage.3 again re-deploy the osd to sesnode3? ? 
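(A quick way to see which minions currently hold the storage role before choosing either option - this assumes DeepSea publishes the assigned roles in the pillar under a 'roles' key, which salt '*' pillar.items will confirm or refute - is pillar targeting:

  # list the minions that currently carry the storage role
  salt -I 'roles:storage' test.ping

  # or inspect the roles assigned to a single node
  salt 'sesnode3*' pillar.get roles
)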
Thanks -- Bo Jin Sales Engineer SUSE Linux Mobile: +41792586688 bo.jin at suse.com www.suse.com From bo.jin at suse.com Tue Jan 10 13:57:03 2017 From: bo.jin at suse.com (Bo Jin) Date: Tue, 10 Jan 2017 21:57:03 +0100 Subject: [Deepsea-users] install calamari In-Reply-To: References: <8c3f8fa5-f07b-e4a9-cfd7-a2e6e720fab3@suse.com> Message-ID: <84ccaed6-47b1-927d-a469-9651159acda1@suse.com> Hi Tim, Thanks for your advice. now I have openattic and calamari both running on master node. I did the change in calamari.conf as you suggested. Great. Bo On 01/10/2017 03:58 AM, Tim Serong wrote: > On 01/09/2017 10:51 PM, Bo Jin wrote: >> Hi, >> I deployed a ceph cluster using ses4 deepsea. Now I want to install >> calamari on the master server. I did it but if I open calamari web >> interface it still asks me to use ceph-deploy to all cluster nodes in >> order to get information about cluster and nodes. >> >> But I thought if calamari is using salt and deepsea already deployed >> salt to the cluster nodes and minions why shouldn't calamari just pick >> up the minion information? > > Calamari includes some salt state files which set up a scheduled job on > the ceph nodes to check cluster status. That's how it knows there's a > cluster, i.e. it's not enough for the minions to exist, if calamari's > ceph.heartbeat function isn't running properly, it won't realise the > cluster is there. But, this should have all been set up automatically. > > Here's some things to check: > > 1) Make sure the salt state files included in the calamari-server > package haven't been edited or mangled by something else. Run `rpm -q > --verify calamari-server` - if this gives no output, you're good. If it > shows any of the files in /srv/salt or /srv/reactor as having been > changed somehow, that might be the source of the problem. > > 2) Run `salt '*' state.highstate` on the master and see if that fixes it > (maybe the salt state included with calamari-server simply wasn't > applied yet somehow?) > > 3) Make sure the ceph client admin keyring > (/etc/ceph/ceph.client.admin.keyring) is installed on all the MON nodes > . Calamari's ceph.heartbeat function won't give proper cluster state if > this is not present. > > 4) Try `salt '*' ceph.get_heartbeats --output json` on the master and > see if you get a blob of cluster status information. > >> Next question is: I installed openattic on a separate node which is not >> salt master. It works so far excep "node" tab in openattic UI. Someone >> told me that for node view openattic must be running on salt master. >> So how should openattic and calamari co-exist in a such environment? Or >> what is a best practice to have calamari running in parallel to openattic? > > You could give salt-master a second IP address, and tweak > /etc/apache2/conf.d/calamari.conf so that calamari is only available on > that IP address. For example, on one of my test systems: > > - eth0 is 192.168.12.225 > - eth0:0 is 192.168.12.226 > - I edited /etc/apache2/conf.d/calamari.conf and changed > to > - Restart apache, and openATTIC is accessible on http://192.168.12.225/, > calamari on http://192.168.12.226/ > > Regards, > > Tim > -- Bo Jin Sales Engineer SUSE Linux Mobile: +41792586688 bo.jin at suse.com www.suse.com From bo.jin at suse.com Wed Jan 11 00:07:51 2017 From: bo.jin at suse.com (Bo Jin) Date: Wed, 11 Jan 2017 08:07:51 +0100 Subject: [Deepsea-users] remove osd In-Reply-To: <3554774.yjOq4KGC9N@ruby> References: <3554774.yjOq4KGC9N@ruby> Message-ID: Hi Eric, Thanks for your answer. see below. 
On 01/10/2017 11:26 PM, Eric Jackson wrote: > Hi Bo, > > On Tuesday, January 10, 2017 09:48:36 PM Bo Jin wrote: >> Hi, >> What is the correct policy if e.g. >> I have 5 cluster nodes but I don't want node1 being used for osd >> (storage role) but only use node1 for being master. How should I define >> it in policy.cfg? >> >> node names convention: sesnode[12345] >> >> # Cluster assignment >> cluster-ceph/cluster/*.sls >> # Hardware Profile >> profile-*-1/cluster/sesnode[2345]*.sls >> profile-*-1/stack/default/ceph/minions/*yml >> # Common configuration >> config/stack/default/global.yml >> config/stack/default/ceph/cluster.yml >> # Role assignment >> role-master/cluster/sesnode1.sls >> role-admin/cluster/ses*.sls >> role-mon/cluster/sesnode[234]*.sls >> role-mon/stack/default/ceph/minions/sesnode[234]*.yml >> > > The above is fine. Rerun the stages 2-5 for the removal. Quick question: is > this with the SES product DeepSea 0.6.10? Or are you using DeepSea master > 0.7.1? The 0.6.10 works fine for removals and I need to merge that particular > fix back to master yet. > I'm using deepsea-0.6.10-1.3.noarch >> Or should I better define a line for storage role. >> >> role-storage/cluster/sesnode[2345].sls > > There's no direct storage role in the policy.cfg. I wanted you to be able to > use the "storage" role when issuing Salt commands, but also give you the > flexibility to assign or customize different hardware profiles for groups of > hardware. That rack has this media that I want to work as dedicated OSDs, but > the other rack has separated journals. Once assigned, both racks are storage > and doing anything on either does the "right" thing considering the profile. > > The role-storage directory under proposals probably doesn't help here. I'll > get that removed. > >> >> And last question: >> If I want to get rid of one existing osd node should I >> 1. modify policy.cfg and re-run the stages? or >> 2. just use command salt "sesnode3*" state.sls >> ceph.rescind.storage.terminate but would next time if I run stage.3 >> again re-deploy the osd to sesnode3? > > The short answer for nearly every change to your cluster is > > 1) Modify your policy.cfg > 2) Rerun Stages 2-5 (0-5 if adding new hardware or you really don't want to > think about it.) > > If you know the subcommands and are aware of a couple of dependencies (e.g. > you need Stage 2 to update/refresh the pillar), then you could run those Salt > commands directly. > > Now, the good news: the subcommands are completely dependent on the Salt > pillar as well. If you notice, I went a little paranoid on adding Jinja > conditionals to every rescind sls file. I was fearful somebody might run > > salt '*' state.apply ceph.rescind.storage > So if I want to remove one particular osd.id like ceph osd rm osd.2 how should I use salt to accomplish it? Can I pass an argument to salt "sesnode2*" state.sls ceph.remove.storage osd.2 ? I see the pillar for instance for /srv/pillar/ceph/proposals/profile-2Disk100GB-1/stack/default/ceph/minions/sesnode1.mydomain.sls go.home.yml storage: data+journals: [] osds: - /dev/vdb - /dev/vdc But why are both hdd still listed here even I excluded this node in my policy.cfg? After updating policy.cfg I re-run stage 2 and 3. > and effectively delete their storage. Take a look at > /srv/salt/ceph/rescind/storage/default.sls. Notice the conditional after the > storage.nop. If that minion is still assigned the storage role, nothing is > executed. 
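(Spelled out as commands, the workflow Eric recommends above looks roughly like the following sketch; the policy.cfg location is an assumption based on the proposal paths mentioned elsewhere in this thread:

  # 1) adjust the role/profile lines for the node in question
  vi /srv/pillar/ceph/proposals/policy.cfg

  # 2) rerun the orchestration; Stage 2 refreshes the pillar, the later
  #    stages apply the new role assignment
  salt-run state.orch ceph.stage.2
  salt-run state.orch ceph.stage.3
  salt-run state.orch ceph.stage.4
  salt-run state.orch ceph.stage.5

Because the rescind state files carry the Jinja guard Eric describes, a minion that still holds the storage role in the pillar is left alone even if ceph.rescind.storage is applied to it directly.)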
> > So, you can do the higher level Stage orchestrations or apply the state files > and DeepSea will carry out your intention. > >> ? >> Thanks >> >> >> _______________________________________________ >> Deepsea-users mailing list >> Deepsea-users at lists.suse.com >> http://lists.suse.com/mailman/listinfo/deepsea-users -- Bo Jin Sales Engineer SUSE Linux Mobile: +41792586688 bo.jin at suse.com www.suse.com From Robert.Grosschopff at suse.com Fri Jan 13 05:29:47 2017 From: Robert.Grosschopff at suse.com (Robert Grosschopff) Date: Fri, 13 Jan 2017 12:29:47 +0000 Subject: [Deepsea-users] Stage 1 Message-ID: <2CB0C4B5-6095-4526-BC8C-8C4686D24574@suse.com> For some reason or other one of my minions is not discovered. The respective files (ses4-4.local.site.sls, ?) are missing thus preventing the cluster from coming up. ?sudo salt ?*? test.ping? sees all minions. Thanks Robert From ejackson at suse.com Fri Jan 13 08:21:56 2017 From: ejackson at suse.com (Eric Jackson) Date: Fri, 13 Jan 2017 10:21:56 -0500 Subject: [Deepsea-users] Stage 1 In-Reply-To: <2CB0C4B5-6095-4526-BC8C-8C4686D24574@suse.com> References: <2CB0C4B5-6095-4526-BC8C-8C4686D24574@suse.com> Message-ID: <32989236.ULVbX0sjI4@ruby> Hi Robert, Do you think the minion wasn't responding initially? If salt '*' test.ping salt '*' pillar.items is working, then rerun Stage 0 and Stage 1. I use vagrant with many VMs and on occasion, I'll have one not respond during the initial setup and have to manually intervene. Eric On Friday, January 13, 2017 12:29:47 PM Robert Grosschopff wrote: > For some reason or other one of my minions is not discovered. The respective > files (ses4-4.local.site.sls, ?) are missing thus preventing the cluster > from coming up. > ?sudo salt ?*? test.ping? sees all minions. > > Thanks > Robert > > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 473 bytes Desc: This is a digitally signed message part. URL: From robert.grosschopff at suse.com Mon Jan 16 02:23:55 2017 From: robert.grosschopff at suse.com (Robert Grosschopff (SUSE)) Date: Mon, 16 Jan 2017 10:23:55 +0100 Subject: [Deepsea-users] Stage 1 In-Reply-To: <32989236.ULVbX0sjI4@ruby> References: <2CB0C4B5-6095-4526-BC8C-8C4686D24574@suse.com> <32989236.ULVbX0sjI4@ruby> Message-ID: <1484558635.4015.17.camel@suse.com> Hi Eric, test.ping and pillar.items work fine. Rerunning stage 0 fails for one node. cephadm at salt:~> sudo salt-run state.orch ceph.stage.0 master_minion : valid ceph_version : valid None ########################################################### The salt-run command reports when all minions complete. The command may appear to hang. Interrupting (e.g. Ctrl-C) does not stop the command. In another terminal, try 'salt-run jobs.active' or 'salt-run state.event pretty=True' to see progress. 
########################################################### False True [WARNING ] All minions are ready True [WARNING ] Output from salt state not highstate [ERROR ] Run failed on minions: ses4-4.local.site Failures: ses4-4.local.site: The minion function caused an exception: Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/salt/minion.py", line 1071, in _thread_return return_data = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/salt/modules/state.py", line 760, in sls ret = st_.state.call_high(high_) File "/usr/lib/python2.7/site-packages/salt/state.py", line 2156, in call_high ret = dict(list(disabled.items()) + list(self.call_chunks(chunks).items())) File "/usr/lib/python2.7/site-packages/salt/state.py", line 1688, in call_chunks running = self.call_chunk(low, running, chunks) File "/usr/lib/python2.7/site-packages/salt/state.py", line 2042, in call_chunk self.event(running[tag], len(chunks), fire_event=low.get('fire_event')) File "/usr/lib/python2.7/site-packages/salt/state.py", line 1836, in event [self.jid, self.opts['id'], str(chunk_ret['name'])], 'state_result' KeyError: 'name' On Fri, 2017-01-13 at 10:21 -0500, Eric Jackson wrote: > Hi Robert, > Do you think the minion wasn't responding initially? If > > salt '*' test.ping > salt '*' pillar.items > > is working, then rerun Stage 0 and Stage 1. I use vagrant with many VMs and > on occasion, I'll have one not respond during the initial setup and have to > manually intervene. > > Eric > > On Friday, January 13, 2017 12:29:47 PM Robert Grosschopff wrote: > > For some reason or other one of my minions is not discovered. The respective > > files (ses4-4.local.site.sls, ?) are missing thus preventing the cluster > > from coming up. > > > ?sudo salt ?*? test.ping? sees all minions. > > > > Thanks > > Robert > > > > _______________________________________________ > > Deepsea-users mailing list > > Deepsea-users at lists.suse.com > > http://lists.suse.com/mailman/listinfo/deepsea-users > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users From jschmid at suse.de Mon Jan 16 02:35:13 2017 From: jschmid at suse.de (Joshua Schmid) Date: Mon, 16 Jan 2017 10:35:13 +0100 Subject: [Deepsea-users] Stage 1 In-Reply-To: <1484558635.4015.17.camel@suse.com> References: <2CB0C4B5-6095-4526-BC8C-8C4686D24574@suse.com> <32989236.ULVbX0sjI4@ruby> <1484558635.4015.17.camel@suse.com> Message-ID: <92a289ed-4ff0-dbe3-fd41-95c23afcb378@suse.de> Hey Robert, On 01/16/2017 10:23 AM, Robert Grosschopff (SUSE) wrote: > Hi Eric, > > test.ping and pillar.items work fine. > > Rerunning stage 0 fails for one node. > > > cephadm at salt:~> sudo salt-run state.orch ceph.stage.0 > master_minion : valid > ceph_version : valid > None > > ########################################################### > The salt-run command reports when all minions complete. > The command may appear to hang. Interrupting (e.g. Ctrl-C) > does not stop the command. > > In another terminal, try 'salt-run jobs.active' or > 'salt-run state.event pretty=True' to see progress. 
> ########################################################### > > False > True > [WARNING ] All minions are ready > True > [WARNING ] Output from salt state not highstate > [ERROR ] Run failed on minions: ses4-4.local.site > Failures: > ses4-4.local.site: > The minion function caused an exception: Traceback (most recent > call last): > File "/usr/lib/python2.7/site-packages/salt/minion.py", line 1071, > in _thread_return > return_data = func(*args, **kwargs) > File "/usr/lib/python2.7/site-packages/salt/modules/state.py", > line 760, in sls > ret = st_.state.call_high(high_) > File "/usr/lib/python2.7/site-packages/salt/state.py", line 2156, > in call_high > ret = dict(list(disabled.items()) + > list(self.call_chunks(chunks).items())) > File "/usr/lib/python2.7/site-packages/salt/state.py", line 1688, > in call_chunks > running = self.call_chunk(low, running, chunks) > File "/usr/lib/python2.7/site-packages/salt/state.py", line 2042, > in call_chunk > self.event(running[tag], len(chunks), > fire_event=low.get('fire_event')) > File "/usr/lib/python2.7/site-packages/salt/state.py", line 1836, > in event > [self.jid, self.opts['id'], str(chunk_ret['name'])], > 'state_result' > KeyError: 'name' That looks like a version missmatch to me.. Could you ensure that you have salt installed from the ses4 repo on all nodes including the master? [..] >>> >>> Thanks >>> Robert >>> -- Freundliche Gr??e - Kind regards, Joshua Schmid SUSE Enterprise Storage SUSE Linux GmbH - Maxfeldstr. 5 - 90409 N?rnberg -------------------------------------------------------------------------------------------------------------------- SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu, Graham Norton, HRB 21284 (AG N?rnberg) -------------------------------------------------------------------------------------------------------------------- From Robert.Grosschopff at suse.com Mon Jan 16 03:13:04 2017 From: Robert.Grosschopff at suse.com (Robert Grosschopff) Date: Mon, 16 Jan 2017 10:13:04 +0000 Subject: [Deepsea-users] Stage 1 In-Reply-To: <92a289ed-4ff0-dbe3-fd41-95c23afcb378@suse.de> References: <2CB0C4B5-6095-4526-BC8C-8C4686D24574@suse.com> <32989236.ULVbX0sjI4@ruby> <1484558635.4015.17.camel@suse.com> <92a289ed-4ff0-dbe3-fd41-95c23afcb378@suse.de> Message-ID: <1025CA04-49B2-45B6-AF16-EE2C0C4C1372@suse.com> Hi Joshua, package salt-minion and deepsea are all installed using the SES4.0 repository ? Information for package salt-minion: ------------------------------------ Repository : SES4.0 Name : salt-minion Version : 2015.8.7-17.1 Arch : x86_64 Vendor : SUSE LLC Support Level : Level 3 Installed Size : 30.0 KiB Installed : Yes Status : up-to-date Summary : The client component for Saltstack Description : Salt minion is queried and controlled from the master. Listens to the salt master and execute the commands. Thanks Robert On 16/01/2017, 10:35, "deepsea-users-bounces at lists.suse.com on behalf of Joshua Schmid" wrote: Hey Robert, On 01/16/2017 10:23 AM, Robert Grosschopff (SUSE) wrote: > Hi Eric, > > test.ping and pillar.items work fine. > > Rerunning stage 0 fails for one node. > > > cephadm at salt:~> sudo salt-run state.orch ceph.stage.0 > master_minion : valid > ceph_version : valid > None > > ########################################################### > The salt-run command reports when all minions complete. > The command may appear to hang. Interrupting (e.g. Ctrl-C) > does not stop the command. 
> > In another terminal, try 'salt-run jobs.active' or > 'salt-run state.event pretty=True' to see progress. > ########################################################### > > False > True > [WARNING ] All minions are ready > True > [WARNING ] Output from salt state not highstate > [ERROR ] Run failed on minions: ses4-4.local.site > Failures: > ses4-4.local.site: > The minion function caused an exception: Traceback (most recent > call last): > File "/usr/lib/python2.7/site-packages/salt/minion.py", line 1071, > in _thread_return > return_data = func(*args, **kwargs) > File "/usr/lib/python2.7/site-packages/salt/modules/state.py", > line 760, in sls > ret = st_.state.call_high(high_) > File "/usr/lib/python2.7/site-packages/salt/state.py", line 2156, > in call_high > ret = dict(list(disabled.items()) + > list(self.call_chunks(chunks).items())) > File "/usr/lib/python2.7/site-packages/salt/state.py", line 1688, > in call_chunks > running = self.call_chunk(low, running, chunks) > File "/usr/lib/python2.7/site-packages/salt/state.py", line 2042, > in call_chunk > self.event(running[tag], len(chunks), > fire_event=low.get('fire_event')) > File "/usr/lib/python2.7/site-packages/salt/state.py", line 1836, > in event > [self.jid, self.opts['id'], str(chunk_ret['name'])], > 'state_result' > KeyError: 'name' That looks like a version missmatch to me.. Could you ensure that you have salt installed from the ses4 repo on all nodes including the master? [..] >>> >>> Thanks >>> Robert >>> -- Freundliche Gr??e - Kind regards, Joshua Schmid SUSE Enterprise Storage SUSE Linux GmbH - Maxfeldstr. 5 - 90409 N?rnberg -------------------------------------------------------------------------------------------------------------------- SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu, Graham Norton, HRB 21284 (AG N?rnberg) -------------------------------------------------------------------------------------------------------------------- _______________________________________________ Deepsea-users mailing list Deepsea-users at lists.suse.com http://lists.suse.com/mailman/listinfo/deepsea-users From Supriti.Singh at suse.com Mon Jan 16 03:38:41 2017 From: Supriti.Singh at suse.com (Supriti Singh) Date: Mon, 16 Jan 2017 11:38:41 +0100 Subject: [Deepsea-users] Stage 1 In-Reply-To: <1484558635.4015.17.camel@suse.com> References: <2CB0C4B5-6095-4526-BC8C-8C4686D24574@suse.com> <32989236.ULVbX0sjI4@ruby> <1484558635.4015.17.camel@suse.com> Message-ID: <587CB0C102000042001C0DFC@smtp.nue.novell.com> There is a similar issue reported here https://github.com/SUSE/DeepSea/issues/15 It seems zypper failed on the minion, and salt did not report it correctly. ------ Supriti Singh??SUSE Linux GmbH, GF: Felix Imend??rffer, Jane Smithard, Graham Norton, HRB 21284 (AG N??rnberg) >>> "Robert Grosschopff (SUSE)" 01/16/17 10:24 AM >>> Hi Eric, test.ping and pillar.items work fine. Rerunning stage 0 fails for one node. cephadm at salt:~> sudo salt-run state.orch ceph.stage.0 master_minion : valid ceph_version : valid None ########################################################### The salt-run command reports when all minions complete. The command may appear to hang. Interrupting (e.g. Ctrl-C) does not stop the command. In another terminal, try 'salt-run jobs.active' or 'salt-run state.event pretty=True' to see progress. 
########################################################### False True [WARNING ] All minions are ready True [WARNING ] Output from salt state not highstate [ERROR ] Run failed on minions: ses4-4.local.site Failures: ses4-4.local.site: The minion function caused an exception: Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/salt/minion.py", line 1071, in _thread_return return_data = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/salt/modules/state.py", line 760, in sls ret = st_.state.call_high(high_) File "/usr/lib/python2.7/site-packages/salt/state.py", line 2156, in call_high ret = dict(list(disabled.items()) + list(self.call_chunks(chunks).items())) File "/usr/lib/python2.7/site-packages/salt/state.py", line 1688, in call_chunks running = self.call_chunk(low, running, chunks) File "/usr/lib/python2.7/site-packages/salt/state.py", line 2042, in call_chunk self.event(running[tag], len(chunks), fire_event=low.get('fire_event')) File "/usr/lib/python2.7/site-packages/salt/state.py", line 1836, in event [self.jid, self.opts['id'], str(chunk_ret['name'])], 'state_result' KeyError: 'name' On Fri, 2017-01-13 at 10:21 -0500, Eric Jackson wrote: > Hi Robert, > Do you think the minion wasn't responding initially? If > > salt '*' test.ping > salt '*' pillar.items > > is working, then rerun Stage 0 and Stage 1. I use vagrant with many VMs and > on occasion, I'll have one not respond during the initial setup and have to > manually intervene. > > Eric > > On Friday, January 13, 2017 12:29:47 PM Robert Grosschopff wrote: > > For some reason or other one of my minions is not discovered. The respective > > files (ses4-4.local.site.sls, ?) are missing thus preventing the cluster > > from coming up. > > > ?sudo salt ?*? test.ping? sees all minions. > > > > Thanks > > Robert > > > > _______________________________________________ > > Deepsea-users mailing list > > Deepsea-users at lists.suse.com > > http://lists.suse.com/mailman/listinfo/deepsea-users > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users _______________________________________________ Deepsea-users mailing list Deepsea-users at lists.suse.com http://lists.suse.com/mailman/listinfo/deepsea-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Grosschopff at suse.com Tue Jan 17 05:07:23 2017 From: Robert.Grosschopff at suse.com (Robert Grosschopff) Date: Tue, 17 Jan 2017 12:07:23 +0000 Subject: [Deepsea-users] Stage 1 In-Reply-To: <587CB0C102000042001C0DFC@smtp2.provo.novell.com> References: <2CB0C4B5-6095-4526-BC8C-8C4686D24574@suse.com> <32989236.ULVbX0sjI4@ruby> <1484558635.4015.17.camel@suse.com> <587CB0C102000042001C0DFC@smtp2.provo.novell.com> Message-ID: Prepared another VM to act as an OSD, installed/enabled/started salt-minion.service, accepted the key and then ran stage.0 and stage.1 . Still no joy ? . Apart from ses4-[4|5].local.site.[yml|sls] no other files used by salt are created for the 4th or the 5th OSD. On 16/01/2017, 11:38, "Supriti Singh" wrote: There is a similar issue reported here https://github.com/SUSE/DeepSea/issues/15 It seems zypper failed on the minion, and salt did not report it correctly. ------ Supriti Singh SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, HRB 21284 (AG N?rnberg) >>> "Robert Grosschopff (SUSE)" 01/16/17 10:24 AM >>> Hi Eric, test.ping and pillar.items work fine. 
Rerunning stage 0 fails for one node. cephadm at salt:~> sudo salt-run state.orch ceph.stage.0 master_minion : valid ceph_version : valid None ########################################################### The salt-run command reports when all minions complete. The command may appear to hang. Interrupting (e.g. Ctrl-C) does not stop the command. In another terminal, try 'salt-run jobs.active' or 'salt-run state.event pretty=True' to see progress. ########################################################### False True [WARNING ] All minions are ready True [WARNING ] Output from salt state not highstate [ERROR ] Run failed on minions: ses4-4.local.site Failures: ses4-4.local.site: The minion function caused an exception: Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/salt/minion.py", line 1071, in _thread_return return_data = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/salt/modules/state.py", line 760, in sls ret = st_.state.call_high(high_) File "/usr/lib/python2.7/site-packages/salt/state.py", line 2156, in call_high ret = dict(list(disabled.items()) + list(self.call_chunks(chunks).items())) File "/usr/lib/python2.7/site-packages/salt/state.py", line 1688, in call_chunks running = self.call_chunk(low, running, chunks) File "/usr/lib/python2.7/site-packages/salt/state.py", line 2042, in call_chunk self.event(running[tag], len(chunks), fire_event=low.get('fire_event')) File "/usr/lib/python2.7/site-packages/salt/state.py", line 1836, in event [self.jid, self.opts['id'], str(chunk_ret['name'])], 'state_result' KeyError: 'name' On Fri, 2017-01-13 at 10:21 -0500, Eric Jackson wrote: > Hi Robert, > Do you think the minion wasn't responding initially? If > > salt '*' test.ping > salt '*' pillar.items > > is working, then rerun Stage 0 and Stage 1. I use vagrant with many VMs and > on occasion, I'll have one not respond during the initial setup and have to > manually intervene. > > Eric > > On Friday, January 13, 2017 12:29:47 PM Robert Grosschopff wrote: > > For some reason or other one of my minions is not discovered. The respective > > files (ses4-4.local.site.sls, ?) are missing thus preventing the cluster > > from coming up. > > > ?sudo salt ?*? test.ping? sees all minions. > > > > Thanks > > Robert > > > > _______________________________________________ > > Deepsea-users mailing list > > Deepsea-users at lists.suse.com > > http://lists.suse.com/mailman/listinfo/deepsea-users > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users _______________________________________________ Deepsea-users mailing list Deepsea-users at lists.suse.com http://lists.suse.com/mailman/listinfo/deepsea-users From jschmid at suse.de Tue Jan 17 05:15:01 2017 From: jschmid at suse.de (Joshua Schmid) Date: Tue, 17 Jan 2017 13:15:01 +0100 Subject: [Deepsea-users] Stage 1 In-Reply-To: References: <2CB0C4B5-6095-4526-BC8C-8C4686D24574@suse.com> <32989236.ULVbX0sjI4@ruby> <1484558635.4015.17.camel@suse.com> <587CB0C102000042001C0DFC@smtp2.provo.novell.com> Message-ID: <5c8555ab-8357-24ba-9ebe-b7f4be47c354@suse.de> The errors that you are getting are not very descriptive. Could you increase the loglevel to -> 'log_level: debug' in /etc/salt/minion and upload it? thanks On 01/17/2017 01:07 PM, Robert Grosschopff wrote: > Prepared another VM to act as an OSD, installed/enabled/started salt-minion.service, accepted the key and then ran stage.0 and stage.1 . > Still no joy ? . 
> Apart from ses4-[4|5].local.site.[yml|sls] no other files used by salt are created for the 4th or the 5th OSD. > [..] -- Freundliche Gr??e - Kind regards, Joshua Schmid SUSE Enterprise Storage SUSE Linux GmbH - Maxfeldstr. 5 - 90409 N?rnberg -------------------------------------------------------------------------------------------------------------------- SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu, Graham Norton, HRB 21284 (AG N?rnberg) -------------------------------------------------------------------------------------------------------------------- From jfajerski at suse.com Thu Jan 19 03:17:34 2017 From: jfajerski at suse.com (Jan Fajerski) Date: Thu, 19 Jan 2017 11:17:34 +0100 Subject: [Deepsea-users] Deepsea dependency on salt-minion? In-Reply-To: <58809D180200001C002CB77D@prv-mh.provo.novell.com> References: <58809D180200001C002CB77D@prv-mh.provo.novell.com> Message-ID: <20170119101734.tuqsl44545wdj5im@jf_suse_laptop> On Thu, Jan 19, 2017 at 03:03:52AM -0700, Martin Weiss wrote: > Hi *, > I had expected that Deepsea needs to be installed on the salt-master - > but have seen that there is a dependency on salt-minion. > Any idea why we have this dependency? Yes DeepSea needs a minion on the master machine. This is most importantly used for key management. > (there are customers that do not want to have the salt-master to be a > salt-minion at the same point in time) Did the customer mention why they have an issue with that? > Thanks, > Martin >_______________________________________________ >Deepsea-users mailing list >Deepsea-users at lists.suse.com >http://lists.suse.com/mailman/listinfo/deepsea-users -- Jan Fajerski Engineer Enterprise Storage SUSE Linux GmbH jfajerski at suse.com From jfajerski at suse.com Thu Jan 19 04:12:28 2017 From: jfajerski at suse.com (Jan Fajerski) Date: Thu, 19 Jan 2017 12:12:28 +0100 Subject: [Deepsea-users] Antw: Re: Deepsea dependency on salt-minion? In-Reply-To: <5880A5B40200001C002CB7AB@prv-mh.provo.novell.com> References: <58809D180200001C002CB77D@prv-mh.provo.novell.com> <20170119101734.tuqsl44545wdj5im@jf_suse_laptop> <5880A5B40200001C002CB7AB@prv-mh.provo.novell.com> Message-ID: <20170119111228.l5zeekigmsfyifnp@jf_suse_laptop> On Thu, Jan 19, 2017 at 03:40:36AM -0700, Martin Weiss wrote: > If you do a "mistake" in targeting - you end up with "killing" the > master. > This is also the reason why SUMA servers per default to not patch or > configure "themselves".. > Could you give more details why we need a minion on the salt-master for > "key management"? Is this just for the ceph-keys or for ssh keys etc? > Salt should also be able to do any file management on remote minions > without requiring a minion on the master... (even getting the keys from > an other "remote" minion.) Talking about cephx keys only. What we gain is that we never leak keys to minions that have more privileges then the daemon on that host needs. I.e. the admin key is only needed on the master (and admin nodes of course) but not on OSDs for example. Otherwise one needs a privileged key on, say an OSD node to authorize the OSD key. So its not an issue of managing files but the way salt manages files in interaction with the cephx tools. > Martin > On Thu, Jan 19, 2017 at 03:03:52AM -0700, Martin Weiss wrote: > > Hi *, > > I had expected that Deepsea needs to be installed on the > salt-master - > > but have seen that there is a dependency on salt-minion. > > Any idea why we have this dependency? 
> Yes DeepSea needs a minion on the master machine. This is most > importantly used > for key management. > > (there are customers that do not want to have the salt-master to be > a > > salt-minion at the same point in time) > Did the customer mention why they have an issue with that? > > Thanks, > > Martin > >_______________________________________________ > >Deepsea-users mailing list > >Deepsea-users at lists.suse.com > >[1]http://lists.suse.com/mailman/listinfo/deepsea-users > -- > Jan Fajerski > Engineer Enterprise Storage > SUSE Linux GmbH > jfajerski at suse.com > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > [2]http://lists.suse.com/mailman/listinfo/deepsea-users > >References > > 1. http://lists.suse.com/mailman/listinfo/deepsea-users > 2. http://lists.suse.com/mailman/listinfo/deepsea-users >_______________________________________________ >Deepsea-users mailing list >Deepsea-users at lists.suse.com >http://lists.suse.com/mailman/listinfo/deepsea-users -- Jan Fajerski Engineer Enterprise Storage SUSE Linux GmbH jfajerski at suse.com From Martin.Weiss at suse.com Thu Jan 19 04:25:04 2017 From: Martin.Weiss at suse.com (Martin Weiss) Date: Thu, 19 Jan 2017 04:25:04 -0700 Subject: [Deepsea-users] Antw: Re: Antw: Re: Deepsea dependency on salt-minion? In-Reply-To: <20170119111228.l5zeekigmsfyifnp@jf_suse_laptop> References: <58809D180200001C002CB77D@prv-mh.provo.novell.com> <20170119101734.tuqsl44545wdj5im@jf_suse_laptop> <5880A5B40200001C002CB7AB@prv-mh.provo.novell.com> <20170119111228.l5zeekigmsfyifnp@jf_suse_laptop> Message-ID: <5880B0200200001C002CB809@prv-mh.provo.novell.com> > On Thu, Jan 19, 2017 at 03:40:36AM ?0700, Martin Weiss wrote: >> If you do a "mistake" in targeting ? you end up with "killing" the >> master. >> This is also the reason why SUMA servers per default to not patch or >> configure "themselves".. >> Could you give more details why we need a minion on the salt?master for >> "key management"? Is this just for the ceph?keys or for ssh keys etc? >> Salt should also be able to do any file management on remote minions >> without requiring a minion on the master... (even getting the keys from >> an other "remote" minion.) > Talking about cephx keys only. What we gain is that we never leak keys to > minions that have more privileges then the daemon on that host needs. I.e. > the > admin key is only needed on the master (and admin nodes of course) but not > on > OSDs for example. So where are these keys created? Do we create the on a server that has a minion and ceph installed? Keep in mind that customers are also not going to install ceph on their SUSE Manager server.. I would have expected that we create the keys via salt on a minion that has the ceph packages installed during initial cluster creation and then copy this to the master and use it as pilar for other minions.. Seems that is done different.. > Otherwise one needs a privileged key on, say an OSD node to authorize the > OSD > key. Salt has more or less "root" access - so what is the problem with using a special admin-key for ceph administration via salt? > So its not an issue of managing files but the way salt manages files in > interaction with the cephx tools. Ok - I have too less knowledge, here... but I can foresee problems in SUMA environments... >> Martin >> On Thu, Jan 19, 2017 at 03:03:52AM ?0700, Martin Weiss wrote: >> > Hi *, >> > I had expected that Deepsea needs to be installed on the >> salt?master ? 
>> > but have seen that there is a dependency on salt?minion. >> > Any idea why we have this dependency? >> Yes DeepSea needs a minion on the master machine. This is most >> importantly used >> for key management. >> > (there are customers that do not want to have the salt?master to be >> a >> > salt?minion at the same point in time) >> Did the customer mention why they have an issue with that? >> > Thanks, >> > Martin >> >_______________________________________________ >> >Deepsea?users mailing list >> >Deepsea?users at lists.suse.com >> >[1]http://lists.suse.com/mailman/listinfo/deepsea?users >> ?? >> Jan Fajerski >> Engineer Enterprise Storage >> SUSE Linux GmbH >> jfajerski at suse.com >> _______________________________________________ >> Deepsea?users mailing list >> Deepsea?users at lists.suse.com >> [2]http://lists.suse.com/mailman/listinfo/deepsea?users >> >>References >> >> 1. http://lists.suse.com/mailman/listinfo/deepsea?users >> 2. http://lists.suse.com/mailman/listinfo/deepsea?users > >>_______________________________________________ >>Deepsea?users mailing list >>Deepsea?users at lists.suse.com >>http://lists.suse.com/mailman/listinfo/deepsea?users > > > ?? > Jan Fajerski > Engineer Enterprise Storage > SUSE Linux GmbH > jfajerski at suse.com > _______________________________________________ > Deepsea?users mailing list > Deepsea?users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea?users From loic.devulder at mpsa.com Thu Jan 19 07:42:32 2017 From: loic.devulder at mpsa.com (LOIC DEVULDER) Date: Thu, 19 Jan 2017 14:42:32 +0000 Subject: [Deepsea-users] WARNING message with DeepSea on dmidecode command Message-ID: <3CBFA7CC2505A74B9C172B35128B88637EDAC8B1@YLAV4460.INETPSA.com> Hi all, I'm installing a fresh Ceph (SES4) cluster using DeepSea and I have a warning on the dmidecode command and I'm not sure if it's "normal" or not. The WARNING message is not pretty to see :-) ylal8020:~ # salt-run state.orch ceph.stage.prep [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate. [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate. [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate. master_minion : valid ceph_version : valid [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate. [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate. [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate. [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate. 
[WARNING ] All minions are ready ylal8020.inetpsa.com_master: Name: sync master - Function: salt.state - Result: Changed Started: - 15:17:53.184729 Duration: 661.644 ms Name: repo master - Function: salt.state - Result: Clean Started: - 15:17:53.846636 Duration: 437.289 ms Name: prepare master - Function: salt.state - Result: Changed Started: - 15:17:54.284157 Duration: 2182.235 ms Name: filequeue.remove - Function: salt.runner - Result: Clean Started: - 15:17:56.466615 Duration: 1200.11 ms Name: restart master - Function: salt.state - Result: Clean Started: - 15:17:57.666954 Duration: 826.666 ms Name: filequeue.add - Function: salt.runner - Result: Changed Started: - 15:17:58.493936 Duration: 622.81 ms Name: minions.ready - Function: salt.runner - Result: Changed Started: - 15:17:59.117152 Duration: 866.052 ms Name: begin - Function: salt.state - Result: Changed Started: - 15:17:59.983454 Duration: 3357.955 ms Name: sync - Function: salt.state - Result: Changed Started: - 15:18:03.342006 Duration: 875.244 ms Name: mines - Function: salt.state - Result: Changed Started: - 15:18:04.217517 Duration: 1185.075 ms Name: repo - Function: salt.state - Result: Clean Started: - 15:18:05.402991 Duration: 15170.784 ms Name: common packages - Function: salt.state - Result: Clean Started: - 15:18:20.574028 Duration: 2494.467 ms Name: updates - Function: salt.state - Result: Changed Started: - 15:18:23.068804 Duration: 2659.983 ms Name: restart - Function: salt.state - Result: Clean Started: - 15:18:25.729027 Duration: 977.038 ms Name: complete - Function: salt.state - Result: Changed Started: - 15:18:26.706312 Duration: 332.819 ms Summary for ylal8020.inetpsa.com_master ------------- Succeeded: 15 (changed=9) Failed: 0 ------------- Total states run: 15 Total run time: 33.850 s Regards / Cordialement, ___________________________________________________________________ PSA Groupe Lo?c Devulder (loic.devulder at mpsa.com) Senior Linux System Engineer / Linux HPC Specialist DF/DDCE/ISTA/DSEP/ULES - Linux Team BESSONCOURT / EXTENSION RIVE DROITE / B19 Internal postal address: SX.BES.15 Phone Incident - Level 3: 22 94 39 Phone Incident - Level 4: 22 92 40 Office: +33 (0)9 66 66 69 06 (27 69 06) Mobile: +33 (0)6 87 72 47 31 ___________________________________________________________________ This message may contain confidential information. If you are not the intended recipient, please advise the sender immediately and delete this message. For further information on confidentiality and the risks inherent in electronic communication see http://disclaimer.psa-peugeot-citroen.com. From loic.devulder at mpsa.com Fri Jan 20 01:23:02 2017 From: loic.devulder at mpsa.com (LOIC DEVULDER) Date: Fri, 20 Jan 2017 08:23:02 +0000 Subject: [Deepsea-users] WARNING message with DeepSea on dmidecode command In-Reply-To: <4397409.T4JhPd4Rbp@ruby> References: <3CBFA7CC2505A74B9C172B35128B88637EDAC8B1@YLAV4460.INETPSA.com> <4397409.T4JhPd4Rbp@ruby> Message-ID: <3CBFA7CC2505A74B9C172B35128B88637EDACDF3@YLAV4460.INETPSA.com> Hi Eric, Thanks for your explanation :-) I tried your suggestion with logging but even with info loglevel I have the warning message (now with info messages too): ylal8020:~ # salt-run -l info state.orch ceph.stage.configure [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate. [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate. 
[INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "default/None/cluster.yml": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" ... As this warning message is not a problem for me I will let as this. Regards / Cordialement, ___________________________________________________________________ PSA Groupe Lo?c Devulder (loic.devulder at mpsa.com) Senior Linux System Engineer / Linux HPC Specialist DF/DDCE/ISTA/DSEP/ULES - Linux Team BESSONCOURT / EXTENSION RIVE DROITE / B19 Internal postal address: SX.BES.15 Phone Incident - Level 3: 22 94 39 Phone Incident - Level 4: 22 92 40 Office: +33 (0)9 66 66 69 06 (27 69 06) Mobile: +33 (0)6 87 72 47 31 ___________________________________________________________________ This message may contain confidential information. If you are not the intended recipient, please advise the sender immediately and delete this message. For further information on confidentiality and the risks inherent in electronic communication see http://disclaimer.psa-peugeot-citroen.com. > -----Message d'origine----- > De?: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users- > bounces at lists.suse.com] De la part de Eric Jackson Envoy??: jeudi 19 > janvier 2017 22:39 ??: Discussions about the DeepSea management > framework for Ceph Objet?: Re: > [Deepsea-users] WARNING message with DeepSea on dmidecode command > > >>> Real sender address / Reelle adresse d expedition : > >>> deepsea-users-bounces at lists.suse.com <<< > > ********************************************************************** > Hi Loic, > Unfortunately, I'll call it "normal". I have seen it as well but > only on real hardware IIRC. When I checked into it, the message is > incorrect since the user could run dmidecode. > The real solution is to track this down and fix the check correctly > within Salt itself. In the meantime, if you are trying DeepSea > repeatedly and the message has crossed the annoyance threshold, > running with logging set to info might be acceptable. You would miss > out on the "All minions are ready" > message, but that may not be an issue for your current situation. > Two ways to set the log level. Either > /etc/salt/master.d/logging.conf (or /etc/salt/master itself) or > specify at the command line > > salt-run -l info state.orch ceph.stage.prep > > I am hopeful that this has been addressed in Salt 2016.11, but I > have not checked yet. > > Eric > > On Thursday, January 19, 2017 02:42:32 PM LOIC DEVULDER wrote: > > Hi all, > > > > I'm installing a fresh Ceph (SES4) cluster using DeepSea and I have > > a warning on the dmidecode command and I'm not sure if it's "normal" > > or > not. 
> > The WARNING message is not pretty to see :-) > > > > ylal8020:~ # salt-run state.orch ceph.stage.prep [WARNING ] Although > > 'dmidecode' was found in path, the current user cannot execute it. > > Grains output might not be accurate. [WARNING ] Although 'dmidecode' > > was found in path, the current user cannot execute it. Grains output > > might not be accurate. [WARNING ] Although 'dmidecode' was found in > > path, the current user cannot execute it. Grains output might not be > > accurate. master_minion : valid > > ceph_version : valid > > [WARNING ] Although 'dmidecode' was found in path, the current user > > cannot execute it. Grains output might not be accurate. [WARNING ] > > Although 'dmidecode' was found in path, the current user cannot > > execute it. Grains output might not be accurate. [WARNING ] Although > > 'dmidecode' was found in path, the current user cannot execute it. > > Grains output might not be accurate. [WARNING ] Although 'dmidecode' > > was found in path, the current user cannot execute it. Grains output > > might not be accurate. [WARNING ] All minions are ready > > ylal8020.inetpsa.com_master: > > Name: sync master - Function: salt.state - Result: Changed Started: > > - > > 15:17:53.184729 Duration: 661.644 ms Name: repo master - Function: > > salt.state - Result: Clean Started: - 15:17:53.846636 Duration: > > 437.289 ms > > Name: prepare master - Function: salt.state - Result: Changed Started: > > - > > 15:17:54.284157 Duration: 2182.235 ms Name: filequeue.remove - Function: > > salt.runner - Result: Clean Started: - 15:17:56.466615 Duration: > > 1200.11 ms > > Name: restart master - Function: salt.state - Result: Clean Started: > > - > > 15:17:57.666954 Duration: 826.666 ms Name: filequeue.add - Function: > > salt.runner - Result: Changed Started: - 15:17:58.493936 Duration: > > 622.81 ms Name: minions.ready - Function: salt.runner - Result: > > Changed Started: - > > 15:17:59.117152 Duration: 866.052 ms Name: begin - Function: > > salt.state - > > Result: Changed Started: - 15:17:59.983454 Duration: 3357.955 ms Name: > > sync > > - Function: salt.state - Result: Changed Started: - 15:18:03.342006 > > Duration: 875.244 ms Name: mines - Function: salt.state - Result: > > Changed > > Started: - 15:18:04.217517 Duration: 1185.075 ms Name: repo - Function: > > salt.state - Result: Clean Started: - 15:18:05.402991 Duration: > > 15170.784 ms Name: common packages - Function: salt.state - Result: > > Clean Started: - > > 15:18:20.574028 Duration: 2494.467 ms Name: updates - Function: > > salt.state > > - Result: Changed Started: - 15:18:23.068804 Duration: 2659.983 ms Name: > > restart - Function: salt.state - Result: Clean Started: - > > 15:18:25.729027 > > Duration: 977.038 ms Name: complete - Function: salt.state - Result: > > Changed Started: - 15:18:26.706312 Duration: 332.819 ms > > > > Summary for ylal8020.inetpsa.com_master > > ------------- > > Succeeded: 15 (changed=9) > > Failed: 0 > > ------------- > > Total states run: 15 > > Total run time: 33.850 s > > > > > > Regards / Cordialement, > > ___________________________________________________________________ > > PSA Groupe > > Lo?c Devulder (loic.devulder at mpsa.com) Senior Linux System Engineer > > / Linux HPC Specialist DF/DDCE/ISTA/DSEP/ULES - Linux Team > > BESSONCOURT / EXTENSION RIVE DROITE / B19 Internal postal address: > > SX.BES.15 Phone Incident - Level 3: 22 94 39 Phone Incident - Level > > 4: 22 92 40 > > Office: +33 (0)9 66 66 69 06 (27 69 06) > > Mobile: +33 (0)6 87 72 47 31 > > 
___________________________________________________________________ > > > > This message may contain confidential information. If you are not > > the intended recipient, please advise the sender immediately and > > delete this message. For further information on confidentiality and > > the risks inherent in electronic communication see > > http://disclaimer.psa-peugeot- > citroen.com. > > > > _______________________________________________ > > Deepsea-users mailing list > > Deepsea-users at lists.suse.com > > http://lists.suse.com/mailman/listinfo/deepsea-users From swamireddy at gmail.com Fri Jan 20 01:24:04 2017 From: swamireddy at gmail.com (M Ranga Swami Reddy) Date: Fri, 20 Jan 2017 13:54:04 +0530 Subject: [Deepsea-users] DeepSea for Ubuntu Message-ID: Hello, Is there any progress to port current code to Ubuntu ? Please update. Thanks Swami -------------- next part -------------- An HTML attachment was scrubbed... URL: From loic.devulder at mpsa.com Fri Jan 20 02:38:05 2017 From: loic.devulder at mpsa.com (LOIC DEVULDER) Date: Fri, 20 Jan 2017 09:38:05 +0000 Subject: [Deepsea-users] Error during Stage 3 (deploy) of DeepSea Message-ID: <3CBFA7CC2505A74B9C172B35128B88637EDACEB8@YLAV4460.INETPSA.com> Hi all! I have a strange behaviour with the Stage 3 of DeepSea (I'm pretty sure I've correctly read the SES4 manual :-)). I have an error in the storage part: ylal8020:~ # salt-run -l info state.orch ceph.stage.deploy [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate. [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate. [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "default/None/cluster.yml": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "default/None/ceph_conf.yml": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "default/None/minions/ylal8020.inetpsa.com_master.yml": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "global.yml": Can't parse as a valid yaml dictionary [INFO ] Ignoring pillar stack template "": can't find from root dir 
"/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "None/cluster.yml": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "None/ceph_conf.yml": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "None/minions/ylal8020.inetpsa.com_master.yml": can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Loading fresh modules for state activity [INFO ] Fetching file from saltenv 'base', ** skipped ** latest already in cache u'salt://ceph/stage/deploy/init.sls' [INFO ] Fetching file from saltenv 'base', ** skipped ** latest already in cache u'salt://ceph/stage/deploy/default.sls' [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate. firewall : ['enabled on minion ylxl0080.inetpsa.com', 'enabled on minion ylal8300.inetpsa.com', 'enabled on minion ylxl0050.inetpsa.com', 'enabled on minion ylal8020.inetpsa.com', 'enabled on minion ylal8290.inetpsa.com', 'enabled on minion ylxl0060.inetpsa.com', 'enabled on minion ylxl0070.inetpsa.com', 'enabled on minion ylal8030.inetpsa.com'] [INFO ] Runner completed: 20170120094422738754 [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate. fsid : valid public_network : valid public_interface : valid cluster_network : valid cluster_interface : valid monitors : valid master_role : valid mon_host : valid mon_initial_members : valid time_server : valid fqdn : valid storage : ['Storage nodes ylxl0080.inetpsa.com,ylxl0050.inetpsa.com,ylxl0060.inetpsa.com,ylxl0070.inetpsa.com missing storage attribute. Check /srv/pillar/ceph/stack/ceph/minions/*.yml and /srv/pillar/ceph/stack/default/ceph/minions/*.yml'] [INFO ] Runner completed: 20170120094423541025 [INFO ] Running state [Fail on Warning is True] at time 09:44:24.218000 [INFO ] Executing state salt.state for Fail on Warning is True [ERROR ] No highstate or sls specified, no execution made [INFO ] Completed state [Fail on Warning is True] at time 09:44:24.219177 ylal8020.inetpsa.com_master: ---------- ID: ready check failed Function: salt.state Name: Fail on Warning is True Result: False Comment: No highstate or sls specified, no execution made Started: 09:44:24.218000 Duration: 1.177 ms Changes: Summary for ylal8020.inetpsa.com_master ------------ Succeeded: 0 Failed: 1 ------------ Total states run: 1 Total run time: 1.177 ms [WARNING ] Could not write out jid file for job 20170120094421330276. Retrying. [WARNING ] Could not write out jid file for job 20170120094421330276. Retrying. [WARNING ] Could not write out jid file for job 20170120094421330276. Retrying. [WARNING ] Could not write out jid file for job 20170120094421330276. Retrying. 
[WARNING ] Could not write out jid file for job 20170120094421330276. Retrying. [ERROR ] prep_jid could not store a jid after 5 tries. [ERROR ] Could not store job cache info. Job details for this run may be unavailable. [INFO ] Runner completed: 20170120094421330276 Re-executing the Stage 2 doesn't do anythings. I saw a role-storage directory that was empty, I tried to create the sls files inside it but no change (I re-executed the Stage 2 after the change). Is someone has an idea of what can I do? Regards / Cordialement, ___________________________________________________________________ PSA Groupe Lo?c Devulder (loic.devulder at mpsa.com) Senior Linux System Engineer / Linux HPC Specialist DF/DDCE/ISTA/DSEP/ULES - Linux Team BESSONCOURT / EXTENSION RIVE DROITE / B19 Internal postal address: SX.BES.15 Phone Incident - Level 3: 22 94 39 Phone Incident - Level 4: 22 92 40 Office: +33 (0)9 66 66 69 06 (27 69 06) Mobile: +33 (0)6 87 72 47 31 ___________________________________________________________________ This message may contain confidential information. If you are not the intended recipient, please advise the sender immediately and delete this message. For further information on confidentiality and the risks inherent in electronic communication see http://disclaimer.psa-peugeot-citroen.com. From jschmid at suse.de Fri Jan 20 03:02:05 2017 From: jschmid at suse.de (Joshua Schmid) Date: Fri, 20 Jan 2017 11:02:05 +0100 Subject: [Deepsea-users] Error during Stage 3 (deploy) of DeepSea In-Reply-To: <3CBFA7CC2505A74B9C172B35128B88637EDACEB8@YLAV4460.INETPSA.com> References: <3CBFA7CC2505A74B9C172B35128B88637EDACEB8@YLAV4460.INETPSA.com> Message-ID: <56389cd8-410e-6306-c0af-46df23e99716@suse.de> Hey Loic, On 01/20/2017 10:38 AM, LOIC DEVULDER wrote: [..] > firewall : ['enabled on minion ylxl0080.inetpsa.com', 'enabled on minion ylal8300.inetpsa.com', 'enabled on minion ylxl0050.inetpsa.com', 'enabled on minion ylal8020.inetpsa.com', 'enabled on minion ylal8290.inetpsa.com', 'enabled on minion ylxl0060.inetpsa.com', 'enabled on minion ylxl0070.inetpsa.com', 'enabled on minion ylal8030.inetpsa.com'] > [INFO ] Runner completed: 20170120094422738754 That's not the root cause, but you might disable the firewall. > [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate. > fsid : valid > public_network : valid > public_interface : valid > cluster_network : valid > cluster_interface : valid > monitors : valid > master_role : valid > mon_host : valid > mon_initial_members : valid > time_server : valid > fqdn : valid > storage : ['Storage nodes ylxl0080.inetpsa.com,ylxl0050.inetpsa.com,ylxl0060.inetpsa.com,ylxl0070.inetpsa.com missing storage attribute. 
Check /srv/pillar/ceph/stack/ceph/minions/*.yml and /srv/pillar/ceph/stack/default/ceph/minions/*.yml']
> [INFO ] Runner completed: 20170120094423541025
> [INFO ] Running state [Fail on Warning is True] at time 09:44:24.218000
> [INFO ] Executing state salt.state for Fail on Warning is True
> [ERROR ] No highstate or sls specified, no execution made
> [INFO ] Completed state [Fail on Warning is True] at time 09:44:24.219177
> ylal8020.inetpsa.com_master:
> ----------
> ID: ready check failed
> Function: salt.state
> Name: Fail on Warning is True
> Result: False
> Comment: No highstate or sls specified, no execution made
> Started: 09:44:24.218000
> Duration: 1.177 ms
> Changes:
>
> Summary for ylal8020.inetpsa.com_master
> ------------
> Succeeded: 0
> Failed: 1
> ------------
> Total states run: 1
> Total run time: 1.177 ms
> [WARNING ] Could not write out jid file for job 20170120094421330276. Retrying.
> [WARNING ] Could not write out jid file for job 20170120094421330276. Retrying.
> [WARNING ] Could not write out jid file for job 20170120094421330276. Retrying.
> [WARNING ] Could not write out jid file for job 20170120094421330276. Retrying.
> [WARNING ] Could not write out jid file for job 20170120094421330276. Retrying.
> [ERROR ] prep_jid could not store a jid after 5 tries.
> [ERROR ] Could not store job cache info. Job details for this run may be unavailable.
> [INFO ] Runner completed: 20170120094421330276

Could you provide us your policy.cfg? And what's the output of:

ls /srv/pillar/ceph/proposals/profile-*/**/

>
> Re-executing the Stage 2 doesn't do anythings.
>
> I saw a role-storage directory that was empty, I tried to create the sls files inside it but no change (I re-executed the Stage 2 after the change).

The `profile-*` directories indicate the suggested disks used for cluster deployment, e.g. profile-1-DISK_MODELNAME-5-DISK_MODELNAME/. The role-storage directory exists mainly for internal targeting. What you should look for is:

ls /srv/pillar/ceph/proposals/profile-*/**/

There should be a `cluster` and a `stack` directory.

The `cluster` directory consists of .sls files whose content is the minion's assigned role. The content should look like this:

roles:
- storage

In the `stack` directory you will find some sub-directories -> [default/ceph/minions]. Check if you find .yml files with content similar to:

storage:
  data+journals: []
  osds:
  - /dev/disk/by-id/scsi-UUID_OF_THE_DISKS
  - /dev/disk/by-id/scsi-UUID_OF_THE_DISKS

>
>
> Is someone has an idea of what can I do?
>
> Regards / Cordialement,
> ___________________________________________________________________
> PSA Groupe
> Loïc Devulder (loic.devulder at mpsa.com)
> Senior Linux System Engineer / Linux HPC Specialist
> DF/DDCE/ISTA/DSEP/ULES - Linux Team
> BESSONCOURT / EXTENSION RIVE DROITE / B19
> Internal postal address: SX.BES.15
> Phone Incident - Level 3: 22 94 39
> Phone Incident - Level 4: 22 92 40
> Office: +33 (0)9 66 66 69 06 (27 69 06)
> Mobile: +33 (0)6 87 72 47 31
> ___________________________________________________________________
>
> This message may contain confidential information. If you are not the intended recipient, please advise the sender immediately and delete this message. For further information on confidentiality and the risks inherent in electronic communication see http://disclaimer.psa-peugeot-citroen.com.
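For illustration, a filled-in storage .yml for a node with SSD journals could look like the sketch below. The device names are placeholders and the exact mapping syntax is whatever Stage 1 (discovery) proposed for that node, so treat this purely as a reading aid rather than something to type in by hand:

storage:
  data+journals:
  - /dev/disk/by-id/scsi-DATA_DISK_1: /dev/disk/by-id/scsi-JOURNAL_SSD-part1
  - /dev/disk/by-id/scsi-DATA_DISK_2: /dev/disk/by-id/scsi-JOURNAL_SSD-part2
  osds:
  - /dev/disk/by-id/scsi-DATA_DISK_3

An empty data+journals list, as in the default shown above, simply means each listed OSD keeps its journal on its own data disk.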
>
> _______________________________________________
> Deepsea-users mailing list
> Deepsea-users at lists.suse.com
> http://lists.suse.com/mailman/listinfo/deepsea-users
>
--
Freundliche Grüße - Kind regards,
Joshua Schmid
SUSE Enterprise Storage
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nürnberg
--------------------------------------------------------------------------------------------------------------------
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu, Graham Norton, HRB 21284 (AG Nürnberg)
--------------------------------------------------------------------------------------------------------------------

From loic.devulder at mpsa.com Fri Jan 20 06:55:29 2017
From: loic.devulder at mpsa.com (LOIC DEVULDER)
Date: Fri, 20 Jan 2017 13:55:29 +0000
Subject: [Deepsea-users] Error during Stage 3 (deploy) of DeepSea
In-Reply-To: <56389cd8-410e-6306-c0af-46df23e99716@suse.de>
References: <3CBFA7CC2505A74B9C172B35128B88637EDACEB8@YLAV4460.INETPSA.com> <56389cd8-410e-6306-c0af-46df23e99716@suse.de>
Message-ID: <3CBFA7CC2505A74B9C172B35128B88637EDAD151@YLAV4460.INETPSA.com>

Hi Joshua,

Thanks for your help: after reading your explanation I found that I actually hadn't read the documentation correctly :-(
I had missed putting the stack directory of the hardware profiles in the policy.cfg file, and so the disk information was missing.

Now it's better, all parts are validated. But I ran into another issue:
ylal8020:~ # salt-run state.orch ceph.stage.deploy
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
firewall : ['enabled on minion ylal8300.inetpsa.com', 'enabled on minion ylxl0080.inetpsa.com', 'enabled on minion ylxl0050.inetpsa.com', 'enabled on minion ylal8020.inetpsa.com', 'enabled on minion ylal8290.inetpsa.com', 'enabled on minion ylxl0060.inetpsa.com', 'enabled on minion ylxl0070.inetpsa.com', 'enabled on minion ylal8030.inetpsa.com']
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
fsid : valid public_network : valid public_interface : valid cluster_network : valid cluster_interface : valid monitors : valid storage : valid master_role : valid mon_host : valid mon_initial_members : valid time_server : valid fqdn : valid [ERROR ] No highstate or sls specified, no execution made ylal8020.inetpsa.com_master: ---------- ID: ready check failed Function: salt.state Name: Fail on Warning is True Result: False Comment: No highstate or sls specified, no execution made Started: 14:48:20.293455 Duration: 0.967 ms Changes: Summary for ylal8020.inetpsa.com_master ------------ Succeeded: 0 Failed: 1 ------------ Total states run: 1 Total run time: 0.967 ms I'm reading the documentation again to see where I failed :-) Regards / Cordialement, ___________________________________________________________________ PSA Groupe Lo?c Devulder (loic.devulder at mpsa.com) Senior Linux System Engineer / Linux HPC Specialist DF/DDCE/ISTA/DSEP/ULES - Linux Team BESSONCOURT / EXTENSION RIVE DROITE / B19 Internal postal address: SX.BES.15 Phone Incident - Level 3: 22 94 39 Phone Incident - Level 4: 22 92 40 Office: +33 (0)9 66 66 69 06 (27 69 06) Mobile: +33 (0)6 87 72 47 31 ___________________________________________________________________ This message may contain confidential information. If you are not the intended recipient, please advise the sender immediately and delete this message. For further information on confidentiality and the risks inherent in electronic communication see http://disclaimer.psa-peugeot-citroen.com. > -----Message d'origine----- > De?: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users- > bounces at lists.suse.com] De la part de Joshua Schmid > Envoy??: vendredi 20 janvier 2017 11:02 > ??: deepsea-users at lists.suse.com > Objet?: Re: [Deepsea-users] Error during Stage 3 (deploy) of DeepSea > > >>> Real sender address / Reelle adresse d expedition : > >>> deepsea-users-bounces at lists.suse.com <<< > > ********************************************************************** > Hey Loic, > > On 01/20/2017 10:38 AM, LOIC DEVULDER wrote: > [..] > > firewall : ['enabled on minion ylxl0080.inetpsa.com', > 'enabled on minion ylal8300.inetpsa.com', 'enabled on minion > ylxl0050.inetpsa.com', 'enabled on minion ylal8020.inetpsa.com', 'enabled > on minion ylal8290.inetpsa.com', 'enabled on minion ylxl0060.inetpsa.com', > 'enabled on minion ylxl0070.inetpsa.com', 'enabled on minion > ylal8030.inetpsa.com'] > > [INFO ] Runner completed: 20170120094422738754 > > That's not the root cause, but you might disable the firewall. > > > [WARNING ] Although 'dmidecode' was found in path, the current user > cannot execute it. Grains output might not be accurate. > > fsid : valid > > public_network : valid > > public_interface : valid > > cluster_network : valid > > cluster_interface : valid > > monitors : valid > > master_role : valid > > mon_host : valid > > mon_initial_members : valid > > time_server : valid > > fqdn : valid > > storage : ['Storage nodes > ylxl0080.inetpsa.com,ylxl0050.inetpsa.com,ylxl0060.inetpsa.com,ylxl0070.in > etpsa.com missing storage attribute. 
Check > /srv/pillar/ceph/stack/ceph/minions/*.yml and > /srv/pillar/ceph/stack/default/ceph/minions/*.yml'] > > [INFO ] Runner completed: 20170120094423541025 > > [INFO ] Running state [Fail on Warning is True] at time > 09:44:24.218000 > > [INFO ] Executing state salt.state for Fail on Warning is True > > [ERROR ] No highstate or sls specified, no execution made > > [INFO ] Completed state [Fail on Warning is True] at time > 09:44:24.219177 > > ylal8020.inetpsa.com_master: > > ---------- > > ID: ready check failed > > Function: salt.state > > Name: Fail on Warning is True > > Result: False > > Comment: No highstate or sls specified, no execution made > > Started: 09:44:24.218000 > > Duration: 1.177 ms > > Changes: > > > > Summary for ylal8020.inetpsa.com_master > > ------------ > > Succeeded: 0 > > Failed: 1 > > ------------ > > Total states run: 1 > > Total run time: 1.177 ms > > [WARNING ] Could not write out jid file for job 20170120094421330276. > Retrying. > > [WARNING ] Could not write out jid file for job 20170120094421330276. > Retrying. > > [WARNING ] Could not write out jid file for job 20170120094421330276. > Retrying. > > [WARNING ] Could not write out jid file for job 20170120094421330276. > Retrying. > > [WARNING ] Could not write out jid file for job 20170120094421330276. > Retrying. > > [ERROR ] prep_jid could not store a jid after 5 tries. > > [ERROR ] Could not store job cache info. Job details for this run may > be unavailable. > > [INFO ] Runner completed: 20170120094421330276 > > > Could you provide us your policy.cfg? > > and whats the output of: > > ls /srv/pillar/ceph/proposals/profile-*/**/ > > > > > Re-executing the Stage 2 doesn't do anythings. > > > > I saw a role-storage directory that was empty, I tried to create the sls > files inside it but no change (I re-executed the Stage 2 after the > change). > > > The `profile-*` directory indicates the suggested disks used for cluster > deployment. e.g profile-1-DISK_MODELNAME-5-DISK_MODELNAME/ > > The role-storage exists mainly for internal targeting. What you should > look for is: > > ls /srv/pillar/ceph/proposals/profile-*/**/ > > there should be a `cluster` and a `stack` directory. > > the `cluster` directory consists of .sls files with a content if > it's assigned role. The content should look like this: > > > roles: > - storage > > In the `stack` directory you will find some sub-directories -> > [default/ceph/minions]. > Check if you find .yml files with a similar content of: > > storage: > data+journals: [] > osds: > - /dev/disk/by-id/scsi-UUID_OF_THE_DISKS > - /dev/disk/by-id/scsi-UUID_OF_THE_DISKS > > > > > > Is someone has an idea of what can I do? > > > > Regards / Cordialement, > > ___________________________________________________________________ > > PSA Groupe > > Lo?c Devulder (loic.devulder at mpsa.com) Senior Linux System Engineer / > > Linux HPC Specialist DF/DDCE/ISTA/DSEP/ULES - Linux Team BESSONCOURT / > > EXTENSION RIVE DROITE / B19 Internal postal address: SX.BES.15 Phone > > Incident - Level 3: 22 94 39 Phone Incident - Level 4: 22 92 40 > > Office: +33 (0)9 66 66 69 06 (27 69 06) > > Mobile: +33 (0)6 87 72 47 31 > > ___________________________________________________________________ > > > > This message may contain confidential information. If you are not the > intended recipient, please advise the sender immediately and delete this > message. For further information on confidentiality and the risks inherent > in electronic communication see http://disclaimer.psa-peugeot-citroen.com. 
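The fix described above comes down to listing both parts of the chosen hardware profile in policy.cfg. A minimal sketch, with placeholder profile name and minion globs (the real names come from what Stage 1 wrote under /srv/pillar/ceph/proposals), might look like:

cluster-ceph/cluster/*.sls
config/stack/default/global.yml
config/stack/default/ceph/cluster.yml
role-master/cluster/MASTER_MINION*.sls
role-admin/cluster/MASTER_MINION*.sls
role-mon/cluster/MON_MINIONS*.sls
# both profile lines are needed; the stack line is what carries the
# storage attribute (osds / data+journals) into the pillar
profile-PROFILENAME/cluster/*.sls
profile-PROFILENAME/stack/default/ceph/minions/*.yml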
> > > > _______________________________________________ > > Deepsea-users mailing list > > Deepsea-users at lists.suse.com > > http://lists.suse.com/mailman/listinfo/deepsea-users > > > > -- > Freundliche Gr??e - Kind regards, > Joshua Schmid > SUSE Enterprise Storage > SUSE Linux GmbH - Maxfeldstr. 5 - 90409 N?rnberg > -------------------------------------------------------------------------- > ------------------------------------------ > SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Jennifer Guild, > Dilip Upmanyu, Graham Norton, HRB 21284 (AG N?rnberg) > -------------------------------------------------------------------------- > ------------------------------------------ > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users From loic.devulder at mpsa.com Fri Jan 20 07:01:58 2017 From: loic.devulder at mpsa.com (LOIC DEVULDER) Date: Fri, 20 Jan 2017 14:01:58 +0000 Subject: [Deepsea-users] WARNING message with DeepSea on dmidecode command In-Reply-To: <64565999.EsnP8FyQyV@ruby> References: <3CBFA7CC2505A74B9C172B35128B88637EDAC8B1@YLAV4460.INETPSA.com> <4397409.T4JhPd4Rbp@ruby> <3CBFA7CC2505A74B9C172B35128B88637EDACDF3@YLAV4460.INETPSA.com> <64565999.EsnP8FyQyV@ruby> Message-ID: <3CBFA7CC2505A74B9C172B35128B88637EDAD187@YLAV4460.INETPSA.com> No problem -:) And yes I find all the log level descriptions in the config file. Regards / Cordialement, ___________________________________________________________________ PSA Groupe Lo?c Devulder (loic.devulder at mpsa.com) Senior Linux System Engineer / Linux HPC Specialist DF/DDCE/ISTA/DSEP/ULES - Linux Team BESSONCOURT / EXTENSION RIVE DROITE / B19 Internal postal address: SX.BES.15 Phone Incident - Level 3: 22 94 39 Phone Incident - Level 4: 22 92 40 Office: +33 (0)9 66 66 69 06 (27 69 06) Mobile: +33 (0)6 87 72 47 31 ___________________________________________________________________ This message may contain confidential information. If you are not the intended recipient, please advise the sender immediately and delete this message. For further information on confidentiality and the risks inherent in electronic communication see http://disclaimer.psa-peugeot-citroen.com. > -----Message d'origine----- > De?: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users- > bounces at lists.suse.com] De la part de Eric Jackson Envoy??: vendredi > 20 janvier 2017 14:21 ??: deepsea-users at lists.suse.com Objet?: Re: > [Deepsea-users] WARNING message with DeepSea on dmidecode command > > >>> Real sender address / Reelle adresse d expedition : > >>> deepsea-users-bounces at lists.suse.com <<< > > ********************************************************************** > Hi Loic, > Sorry, brain was in reverse. :) I should have said use > > salt-run -l error state.orch ceph.stage.configure > > because INFO includes WARNING which is why you only received > additional messages. BTW, these levels are listed in /etc/salt/master > with log_level. > > Eric > > On Friday, January 20, 2017 08:23:02 AM LOIC DEVULDER wrote: > > Hi Eric, > > > > Thanks for your explanation :-) > > > > I tried your suggestion with logging but even with info loglevel I > > have the warning message (now with info messages too): ylal8020:~ # > > salt-run -l info state.orch ceph.stage.configure [WARNING ] Although > > 'dmidecode' was found in path, the current user cannot execute it. > > Grains output might not be accurate. 
[WARNING ] Although 'dmidecode' > > was found in path, the current user cannot execute it. Grains > > output might not be accurate. [INFO ] Ignoring pillar stack template > "": > > can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring > > pillar stack template "": can't find from root dir > "/srv/pillar/ceph/stack" > > [INFO ] Ignoring pillar stack template "": can't find from root dir > > "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": > > can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring > > pillar stack template "default/None/cluster.yml": can't find from > > root > dir > > "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": > > can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring > > pillar stack template "": can't find from root dir > "/srv/pillar/ceph/stack" > > [INFO ] Ignoring pillar stack template "": can't find from root dir > > "/srv/pillar/ceph/stack" [INFO ] Ignoring pillar stack template "": > > can't find from root dir "/srv/pillar/ceph/stack" [INFO ] Ignoring > > pillar stack template "": can't find from root dir > "/srv/pillar/ceph/stack" > > ... > > > > As this warning message is not a problem for me I will let as this. > > > > Regards / Cordialement, > > ___________________________________________________________________ > > PSA Groupe > > Lo?c Devulder (loic.devulder at mpsa.com) Senior Linux System Engineer > > / Linux HPC Specialist DF/DDCE/ISTA/DSEP/ULES - Linux Team > > BESSONCOURT / EXTENSION RIVE DROITE / B19 Internal postal > > address: SX.BES.15 Phone Incident - Level 3: 22 94 39 Phone Incident > > - Level 4: 22 92 40 Office: +33 (0)9 66 66 69 06 (27 69 06) > > Mobile: +33 (0)6 87 72 47 31 > > ___________________________________________________________________ > > > > This message may contain confidential information. If you are not > > the intended recipient, please advise the sender immediately and > > delete this message. For further information on confidentiality and > > the risks inherent in electronic communication see > > http://disclaimer.psa-peugeot- > citroen.com. > > > -----Message d'origine----- > > > De : deepsea-users-bounces at lists.suse.com [mailto:deepsea-users- > > > bounces at lists.suse.com] De la part de Eric Jackson Envoy? : jeudi > > > 19 janvier 2017 22:39 ? : Discussions about the DeepSea management > > > framework for Ceph Objet : Re: > > > [Deepsea-users] WARNING message with DeepSea on dmidecode command > > > > > > >>> Real sender address / Reelle adresse d expedition : > > > >>> deepsea-users-bounces at lists.suse.com <<< > > > > > > ****************************************************************** > > > ** > > > ** > > > Hi Loic, > > > > > > Unfortunately, I'll call it "normal". I have seen it as well > > > but > > > > > > only on real hardware IIRC. When I checked into it, the message > > > is incorrect since the user could run dmidecode. > > > > > > The real solution is to track this down and fix the check > > > correctly > > > > > > within Salt itself. In the meantime, if you are trying DeepSea > > > repeatedly and the message has crossed the annoyance threshold, > > > running with logging set to info might be acceptable. You would > > > miss out on the "All minions are ready" > > > message, but that may not be an issue for your current situation. > > > > > > Two ways to set the log level. 
Either > > > > > > /etc/salt/master.d/logging.conf (or /etc/salt/master itself) or > > > specify at the command line > > > > > > salt-run -l info state.orch ceph.stage.prep > > > > > > I am hopeful that this has been addressed in Salt 2016.11, but I > > > > > > have not checked yet. > > > > > > Eric > > > > > > On Thursday, January 19, 2017 02:42:32 PM LOIC DEVULDER wrote: > > > > Hi all, > > > > > > > > I'm installing a fresh Ceph (SES4) cluster using DeepSea and I > > > > have a warning on the dmidecode command and I'm not sure if it's > "normal" > > > > or > > > > > > not. > > > > > > > The WARNING message is not pretty to see :-) > > > > > > > > ylal8020:~ # salt-run state.orch ceph.stage.prep [WARNING ] > > > > Although 'dmidecode' was found in path, the current user cannot > execute it. > > > > Grains output might not be accurate. [WARNING ] Although 'dmidecode' > > > > was found in path, the current user cannot execute it. Grains > > > > output might not be accurate. [WARNING ] Although 'dmidecode' > > > > was found in path, the current user cannot execute it. Grains > > > > output > might not be > > > > accurate. master_minion : valid > > > > ceph_version : valid > > > > [WARNING ] Although 'dmidecode' was found in path, the current > > > > user cannot execute it. Grains output might not be accurate. > > > > [WARNING ] Although 'dmidecode' was found in path, the current > > > > user cannot execute it. Grains output might not be accurate. > > > > [WARNING ] Although 'dmidecode' was found in path, the current > > > > user > cannot execute it. > > > > Grains output might not be accurate. [WARNING ] Although 'dmidecode' > > > > was found in path, the current user cannot execute it. Grains > > > > output might not be accurate. [WARNING ] All minions are ready > > > > > > > > ylal8020.inetpsa.com_master: > > > > Name: sync master - Function: salt.state - Result: Changed > Started: > > > > - > > > > 15:17:53.184729 Duration: 661.644 ms Name: repo master - Function: > > > > salt.state - Result: Clean Started: - 15:17:53.846636 Duration: > > > > 437.289 ms > > > > Name: prepare master - Function: salt.state - Result: Changed > Started: > > > > - > > > > 15:17:54.284157 Duration: 2182.235 ms Name: filequeue.remove - > Function: > > > > salt.runner - Result: Clean Started: - 15:17:56.466615 Duration: > > > > 1200.11 ms > > > > Name: restart master - Function: salt.state - Result: Clean Started: > > > > - > > > > 15:17:57.666954 Duration: 826.666 ms Name: filequeue.add - Function: > > > > salt.runner - Result: Changed Started: - 15:17:58.493936 Duration: > > > > 622.81 ms Name: minions.ready - Function: salt.runner - Result: > > > > Changed Started: - > > > > 15:17:59.117152 Duration: 866.052 ms Name: begin - Function: > > > > salt.state - > > > > Result: Changed Started: - 15:17:59.983454 Duration: 3357.955 ms > Name: > > > > sync > > > > - Function: salt.state - Result: Changed Started: - > > > > 15:18:03.342006 > > > > Duration: 875.244 ms Name: mines - Function: salt.state - Result: > > > > Changed > > > > Started: - 15:18:04.217517 Duration: 1185.075 ms Name: repo - > Function: > > > > salt.state - Result: Clean Started: - 15:18:05.402991 Duration: > > > > 15170.784 ms Name: common packages - Function: salt.state - Result: > > > > Clean Started: - > > > > 15:18:20.574028 Duration: 2494.467 ms Name: updates - Function: > > > > salt.state > > > > - Result: Changed Started: - 15:18:23.068804 Duration: 2659.983 > > > > ms > Name: > > > > restart - Function: salt.state - Result: Clean Started: 
- > > > > 15:18:25.729027 > > > > Duration: 977.038 ms Name: complete - Function: salt.state - Result: > > > > Changed Started: - 15:18:26.706312 Duration: 332.819 ms > > > > > > > > Summary for ylal8020.inetpsa.com_master > > > > ------------- > > > > Succeeded: 15 (changed=9) > > > > Failed: 0 > > > > ------------- > > > > Total states run: 15 > > > > Total run time: 33.850 s > > > > > > > > > > > > Regards / Cordialement, > > > > ________________________________________________________________ > > > > __ > > > > _ > > > > PSA Groupe > > > > Lo?c Devulder (loic.devulder at mpsa.com) Senior Linux System > > > > Engineer / Linux HPC Specialist DF/DDCE/ISTA/DSEP/ULES - Linux > > > > Team BESSONCOURT / EXTENSION RIVE DROITE / B19 Internal postal > address: > > > > SX.BES.15 Phone Incident - Level 3: 22 94 39 Phone Incident - > > > > Level > > > > 4: 22 92 40 > > > > Office: +33 (0)9 66 66 69 06 (27 69 06) > > > > Mobile: +33 (0)6 87 72 47 31 > > > > ________________________________________________________________ > > > > __ > > > > _ > > > > > > > > This message may contain confidential information. If you are > > > > not the intended recipient, please advise the sender immediately > > > > and delete this message. For further information on > > > > confidentiality and the risks inherent in electronic > > > > communication see http://disclaimer.psa-peugeot-> > > > > citroen.com. > > > > > > > _______________________________________________ > > > > Deepsea-users mailing list > > > > Deepsea-users at lists.suse.com > > > > http://lists.suse.com/mailman/listinfo/deepsea-users > > > > _______________________________________________ > > Deepsea-users mailing list > > Deepsea-users at lists.suse.com > > http://lists.suse.com/mailman/listinfo/deepsea-users From jschmid at suse.de Fri Jan 20 07:15:43 2017 From: jschmid at suse.de (Joshua Schmid) Date: Fri, 20 Jan 2017 15:15:43 +0100 Subject: [Deepsea-users] Error during Stage 3 (deploy) of DeepSea In-Reply-To: <3CBFA7CC2505A74B9C172B35128B88637EDAD151@YLAV4460.INETPSA.com> References: <3CBFA7CC2505A74B9C172B35128B88637EDACEB8@YLAV4460.INETPSA.com> <56389cd8-410e-6306-c0af-46df23e99716@suse.de> <3CBFA7CC2505A74B9C172B35128B88637EDAD151@YLAV4460.INETPSA.com> Message-ID: <8c9a90cf-c668-1845-e94e-105012bb1fc1@suse.de> On 01/20/2017 02:55 PM, LOIC DEVULDER wrote: > Hi Joshua, > > Thanks for your help: after reading you explanation I found that finally I didn't read the documentation correctly :-( > I missed to put the stack directory of the hardware profiles in the policy.cfg file and so the disks informations were missing. > > Now it's better, all parts are valided. But I run into another issue: > ylal8020:~ # salt-run state.orch ceph.stage.deploy > [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate. > [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate. > [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate. > firewall : ['enabled on minion ylal8300.inetpsa.com', 'enabled on minion ylxl0080.inetpsa.com', 'enabled on minion ylxl0050.inetpsa.com', 'enabled on minion ylal8020.inetpsa.com', 'enabled on minion ylal8290.inetpsa.com', 'enabled on minion ylxl0060.inetpsa.com', 'enabled on minion ylxl0070.inetpsa.com', 'enabled on minion ylal8030.inetpsa.com'] > [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. 
Grains output might not be accurate. > fsid : valid > public_network : valid > public_interface : valid > cluster_network : valid > cluster_interface : valid > monitors : valid > storage : valid > master_role : valid > mon_host : valid > mon_initial_members : valid > time_server : valid > fqdn : valid > [ERROR ] No highstate or sls specified, no execution made > ylal8020.inetpsa.com_master: > ---------- > ID: ready check failed > Function: salt.state > Name: Fail on Warning is True > Result: False > Comment: No highstate or sls specified, no execution made > Started: 14:48:20.293455 > Duration: 0.967 ms > Changes: > > Summary for ylal8020.inetpsa.com_master > ------------ > Succeeded: 0 > Failed: 1 > ------------ > Total states run: 1 > Total run time: 0.967 ms > > I'm reading the documentation again to see where I failed :-) The last piece you are missing is `firewall`. The error message is a bit shadowed due to the cluttered logs. As Eric mentioned in you other mail: salt-run -l error state.orch ceph.stage.configure because INFO includes WARNING which is why you only received additional messages. BTW, these levels are listed in /etc/salt/master with log_level. try to disable the firewall with: utilizing salt -> salt "*" cmd.run "systemctl stop SuSEfirewall2" hth -- Freundliche Gr??e - Kind regards, Joshua Schmid SUSE Enterprise Storage SUSE Linux GmbH - Maxfeldstr. 5 - 90409 N?rnberg -------------------------------------------------------------------------------------------------------------------- SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu, Graham Norton, HRB 21284 (AG N?rnberg) -------------------------------------------------------------------------------------------------------------------- From loic.devulder at mpsa.com Fri Jan 20 07:31:17 2017 From: loic.devulder at mpsa.com (LOIC DEVULDER) Date: Fri, 20 Jan 2017 14:31:17 +0000 Subject: [Deepsea-users] Error during Stage 3 (deploy) of DeepSea In-Reply-To: <8c9a90cf-c668-1845-e94e-105012bb1fc1@suse.de> References: <3CBFA7CC2505A74B9C172B35128B88637EDACEB8@YLAV4460.INETPSA.com> <56389cd8-410e-6306-c0af-46df23e99716@suse.de> <3CBFA7CC2505A74B9C172B35128B88637EDAD151@YLAV4460.INETPSA.com> <8c9a90cf-c668-1845-e94e-105012bb1fc1@suse.de> Message-ID: <3CBFA7CC2505A74B9C172B35128B88637EDAD1D7@YLAV4460.INETPSA.com> Yes, it's the firewall that causes problem. Now all works well :-) I configured the firewall to authorize salt, but it's seems that the "salt" service is not known on the minions. This is because the service declaration is in the salt-master package. I will try to manually add the port in the /etc/sysconfig/SuSEfirewall2 file. I only have a few problem with ntp: I will open a new thread for this little issue. Regards / Cordialement, ___________________________________________________________________ PSA Groupe Lo?c Devulder (loic.devulder at mpsa.com) Senior Linux System Engineer / Linux HPC Specialist DF/DDCE/ISTA/DSEP/ULES - Linux Team BESSONCOURT / EXTENSION RIVE DROITE / B19 Internal postal address: SX.BES.15 Phone Incident - Level 3: 22 94 39 Phone Incident - Level 4: 22 92 40 Office: +33 (0)9 66 66 69 06 (27 69 06) Mobile: +33 (0)6 87 72 47 31 ___________________________________________________________________ This message may contain confidential information. If you are not the intended recipient, please advise the sender immediately and delete this message. 
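If the firewall has to stay enabled, an alternative to stopping SuSEfirewall2 is to open the required ports on every node. A rough sketch for /etc/sysconfig/SuSEfirewall2 follows; the port list is an assumption (4505/4506 for the Salt bus, 6789 for the Ceph monitors, 6800:7300 as the default OSD port range) and should be adapted to the actual setup:

FW_SERVICES_EXT_TCP="4505 4506 6789 6800:7300"

After changing the file, something like  salt '*' cmd.run 'systemctl restart SuSEfirewall2'  run from the master applies it everywhere.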
For further information on confidentiality and the risks inherent in electronic communication see http://disclaimer.psa-peugeot-citroen.com. > -----Message d'origine----- > De?: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users- > bounces at lists.suse.com] De la part de Joshua Schmid > Envoy??: vendredi 20 janvier 2017 15:16 > ??: deepsea-users at lists.suse.com > Objet?: Re: [Deepsea-users] Error during Stage 3 (deploy) of DeepSea > > >>> Real sender address / Reelle adresse d expedition : > >>> deepsea-users-bounces at lists.suse.com <<< > > ********************************************************************** > > > On 01/20/2017 02:55 PM, LOIC DEVULDER wrote: > > Hi Joshua, > > > > Thanks for your help: after reading you explanation I found that > > finally I didn't read the documentation correctly :-( I missed to put > the stack directory of the hardware profiles in the policy.cfg file and so > the disks informations were missing. > > > > Now it's better, all parts are valided. But I run into another issue: > > ylal8020:~ # salt-run state.orch ceph.stage.deploy [WARNING ] Although > > 'dmidecode' was found in path, the current user cannot execute it. > Grains output might not be accurate. > > [WARNING ] Although 'dmidecode' was found in path, the current user > cannot execute it. Grains output might not be accurate. > > [WARNING ] Although 'dmidecode' was found in path, the current user > cannot execute it. Grains output might not be accurate. > > firewall : ['enabled on minion ylal8300.inetpsa.com', > 'enabled on minion ylxl0080.inetpsa.com', 'enabled on minion > ylxl0050.inetpsa.com', 'enabled on minion ylal8020.inetpsa.com', 'enabled > on minion ylal8290.inetpsa.com', 'enabled on minion ylxl0060.inetpsa.com', > 'enabled on minion ylxl0070.inetpsa.com', 'enabled on minion > ylal8030.inetpsa.com'] > > [WARNING ] Although 'dmidecode' was found in path, the current user > cannot execute it. Grains output might not be accurate. > > fsid : valid > > public_network : valid > > public_interface : valid > > cluster_network : valid > > cluster_interface : valid > > monitors : valid > > storage : valid > > master_role : valid > > mon_host : valid > > mon_initial_members : valid > > time_server : valid > > fqdn : valid > > [ERROR ] No highstate or sls specified, no execution made > > ylal8020.inetpsa.com_master: > > ---------- > > ID: ready check failed > > Function: salt.state > > Name: Fail on Warning is True > > Result: False > > Comment: No highstate or sls specified, no execution made > > Started: 14:48:20.293455 > > Duration: 0.967 ms > > Changes: > > > > Summary for ylal8020.inetpsa.com_master > > ------------ > > Succeeded: 0 > > Failed: 1 > > ------------ > > Total states run: 1 > > Total run time: 0.967 ms > > > > I'm reading the documentation again to see where I failed :-) > > The last piece you are missing is `firewall`. > The error message is a bit shadowed due to the cluttered logs. As Eric > mentioned in you other mail: > > salt-run -l error state.orch ceph.stage.configure > > because INFO includes WARNING which is why you only received additional > messages. BTW, these levels are listed in /etc/salt/master with > log_level. > > > try to disable the firewall with: > > utilizing salt -> salt "*" cmd.run "systemctl stop SuSEfirewall2" > > hth > > > -- > Freundliche Gr??e - Kind regards, > Joshua Schmid > SUSE Enterprise Storage > SUSE Linux GmbH - Maxfeldstr. 
5 - 90409 Nürnberg
> --------------------------------------------------------------------------
> ------------------------------------------
> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Jennifer Guild,
> Dilip Upmanyu, Graham Norton, HRB 21284 (AG Nürnberg)
> --------------------------------------------------------------------------
> ------------------------------------------
> _______________________________________________
> Deepsea-users mailing list
> Deepsea-users at lists.suse.com
> http://lists.suse.com/mailman/listinfo/deepsea-users

From loic.devulder at mpsa.com Fri Jan 20 08:16:13 2017
From: loic.devulder at mpsa.com (LOIC DEVULDER)
Date: Fri, 20 Jan 2017 15:16:13 +0000
Subject: [Deepsea-users] NTP configuration
Message-ID: <3CBFA7CC2505A74B9C172B35128B88637EDAD27E@YLAV4460.INETPSA.com>

Hi,

During my tests with DeepSea I ran into a little problem: I haven't been able to remove the NTP configuration.

OK, I know: why would I want to do this? Simply because I already have NTP configured on my servers (we have a custom NTP config in my company).

I tried to remove these lines from the /srv/pillar/ceph/proposals/config/stack/default/global.yml file:
ylal8020:/srv/pillar # cat ceph/proposals/config/stack/default/global.yml
time_server: '{{ pillar.get("master_minion") }}'
time_service: ntp

But I ran into a weird issue while trying to execute the configuration stage:
ylal8020:/srv/pillar # salt-run state.orch ceph.stage.configure
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
ylal8020.inetpsa.com_master: Name: push.proposal - Function: salt.runner - Result: Changed Started: - 15:53:58.352226 Duration: 563.82 ms Name: refresh_pillar1 - Function: salt.state - Result: Changed Started: - 15:53:58.916733 Duration: 589.218 ms Name: configure.cluster - Function: salt.runner - Result: Changed Started: - 15:53:59.506662 Duration: 1003.844 ms Name: refresh_pillar2 - Function: salt.state - Result: Changed Started: - 15:54:00.511544 Duration: 662.566 ms Name: admin key - Function: salt.state - Result: Clean Started: - 15:54:01.174305 Duration: 455.286 ms Name: mon key - Function: salt.state - Result: Clean Started: - 15:54:01.629844 Duration: 396.696 ms Name: osd key - Function: salt.state - Result: Clean Started: - 15:54:02.026768 Duration: 391.508 ms Name: igw key - Function: salt.state - Result: Clean Started: - 15:54:02.418500 Duration: 1192.624 ms Name: mds key - Function: salt.state - Result: Clean Started: - 15:54:03.611366 Duration: 1172.492 ms Name: rgw key - Function: salt.state - Result: Clean Started: - 15:54:04.784086 Duration: 1193.912 ms Name: openattic key - Function: salt.state - Result: Clean Started: - 15:54:05.978226 Duration: 393.879 ms Name: igw config - Function: salt.state - Result: Clean Started: - 15:54:06.372340 Duration: 1183.398 ms Summary for ylal8020.inetpsa.com_master ------------- Succeeded: 12 (changed=4) Failed: 0 ------------- Total states run: 12 Total run time: 9.199 s Ok I know there is no direct error but the pillar.items is not good, some items are missing: ylal8020:/srv/pillar # salt '*' pillar.items ylal8300.inetpsa.com: ---------- benchmark: ---------- default-collection: simple.yml job-file-directory: /run/cephfs_bench_jobs log-file-directory: /var/log/cephfs_bench_logs work-directory: /run/cephfs_bench cluster: ceph master_minion: ylal8020.inetpsa.com mon_host: mon_initial_members: - ylal8290 - ylal8030 - ylal8300 roles: - mon ylal8030.inetpsa.com: ---------- benchmark: ---------- default-collection: simple.yml job-file-directory: /run/cephfs_bench_jobs log-file-directory: /var/log/cephfs_bench_logs work-directory: /run/cephfs_bench cluster: ceph master_minion: ylal8020.inetpsa.com mon_host: mon_initial_members: - ylal8290 - ylal8030 - ylal8300 roles: - mon ylal8020.inetpsa.com: ---------- benchmark: ---------- default-collection: simple.yml job-file-directory: /run/cephfs_bench_jobs log-file-directory: /var/log/cephfs_bench_logs work-directory: /run/cephfs_bench cluster: ceph master_minion: ylal8020.inetpsa.com mon_host: mon_initial_members: - ylal8290 - ylal8030 - ylal8300 roles: - master - admin ylxl0060.inetpsa.com: ---------- benchmark: ---------- default-collection: simple.yml job-file-directory: /run/cephfs_bench_jobs log-file-directory: /var/log/cephfs_bench_logs work-directory: /run/cephfs_bench cluster: ceph master_minion: ylal8020.inetpsa.com mon_host: mon_initial_members: - ylal8290 - ylal8030 - ylal8300 roles: - storage ylxl0050.inetpsa.com: ---------- benchmark: ---------- default-collection: simple.yml job-file-directory: /run/cephfs_bench_jobs log-file-directory: /var/log/cephfs_bench_logs work-directory: /run/cephfs_bench cluster: ceph master_minion: ylal8020.inetpsa.com mon_host: mon_initial_members: - ylal8290 - ylal8030 - ylal8300 roles: - storage ylal8290.inetpsa.com: ---------- benchmark: ---------- default-collection: simple.yml job-file-directory: /run/cephfs_bench_jobs log-file-directory: /var/log/cephfs_bench_logs work-directory: /run/cephfs_bench cluster: ceph master_minion: ylal8020.inetpsa.com 
mon_host: mon_initial_members: - ylal8290 - ylal8030 - ylal8300 roles: - mon ylxl0080.inetpsa.com: ---------- benchmark: ---------- default-collection: simple.yml job-file-directory: /run/cephfs_bench_jobs log-file-directory: /var/log/cephfs_bench_logs work-directory: /run/cephfs_bench cluster: ceph master_minion: ylal8020.inetpsa.com mon_host: mon_initial_members: - ylal8290 - ylal8030 - ylal8300 roles: - storage ylxl0070.inetpsa.com: ---------- benchmark: ---------- default-collection: simple.yml job-file-directory: /run/cephfs_bench_jobs log-file-directory: /var/log/cephfs_bench_logs work-directory: /run/cephfs_bench cluster: ceph master_minion: ylal8020.inetpsa.com mon_host: mon_initial_members: - ylal8290 - ylal8030 - ylal8300 roles: - storage So my "simple" question is: how I can configure global.yml to not let DeepSea configure NTP? Regards / Cordialement, ___________________________________________________________________ PSA Groupe Lo?c Devulder (loic.devulder at mpsa.com) Senior Linux System Engineer / Linux HPC Specialist DF/DDCE/ISTA/DSEP/ULES - Linux Team BESSONCOURT / EXTENSION RIVE DROITE / B19 Internal postal address: SX.BES.15 Phone Incident - Level 3: 22 94 39 Phone Incident - Level 4: 22 92 40 Office: +33 (0)9 66 66 69 06 (27 69 06) Mobile: +33 (0)6 87 72 47 31 ___________________________________________________________________ This message may contain confidential information. If you are not the intended recipient, please advise the sender immediately and delete this message. For further information on confidentiality and the risks inherent in electronic communication see http://disclaimer.psa-peugeot-citroen.com. From Martin.Weiss at suse.com Mon Jan 23 00:25:01 2017 From: Martin.Weiss at suse.com (Martin Weiss) Date: Mon, 23 Jan 2017 00:25:01 -0700 Subject: [Deepsea-users] Antw: Re: Integration of DeepSea in a ceph-deploy based installation In-Reply-To: <1604394.1GZ50RZMij@ruby> References: <5880D81E0200001C002CB8DE@prv-mh.provo.novell.com> <5880DDEC0200001C002CB8F4@prv-mh.provo.novell.com> <5880DDEC0200001C002CB8F4@prv-mh.provo.novell.com> <1604394.1GZ50RZMij@ruby> Message-ID: <5885BDDD0200001C002CBEC5@prv-mh.provo.novell.com> > On Thursday, January 19, 2017 07:40:28 AM Martin Weiss wrote: >> Hi *, >> >> in case I want to add DeepSea to an existing SES deployment ? how can I do >> that without "destroying" or "changing" or "re?deploying" anything in the >> existing cluster? (in case that is already possible) >> >> Thanks, >> Martin > Hi Martin, > I have not gone down this path yet. Technically, you can run Stage 1 > without causing any harm to your existing cluster. It's running a discovery > > and only writing files to the master. The next step would be the issue. > How comfortable do you feel in creating a policy.cfg that matches the > existing environment? What about enhancing deepsea with creating the policy.cfg automatically? In case I want to integrated deepsea into an existing environment - I believe this is a must have.. > This also implies verifying/creating/selecting the > hardware profile for the various storage nodes. For a simple cluster of > stand? > alone OSDs and a ceph cluster of only monitors and mds, you could quite > literally be done in minutes. Not enough knowledge and experience so far - and I am not sure what might happen in case there is a mistake in the policy.cfg.. > For a cluster of separate journals and other additional services such as > iSCSI or RGW, you would need to roll up your sleeves. 
At this point, if you
> have such a cluster, I do not think it would be worth your time. This is the
> harder task of accurately recording all the data and journal devices as well
> as centralizing configurations back onto the Salt master.

I have heard that we want to deprecate ceph-deploy in one of the next releases and replace it with deepsea. With that in mind I believe we have to deliver a migration path from "ceph-deploy" to "deepsea"..

And yes - I agree - at the moment where deepsea is not yet on par with ceph-deploy this might not be worth the effort... But we need to start building the proper solution before deprecating ceph-deploy ;-).

Oh - and btw. - this would also be an added value for customers migrating from non-SLES based Ceph deployments to SES..

> What happens if you try and something goes wrong? I don't know. My
> personal paranoia level would be high enough that I would skip using any of
> the stage orchestration commands (i.e. salt-run state.orch ceph.stage.3) and
> run the individual steps on the individual nodes. It would be a bit tedious
> for a migration, but much easier to recover.

Agreed - this adds to my wish of an automatically generated policy.cfg for an existing cluster.

> In the longer term, we do have an existing card in
> https://github.com/SUSE/DeepSea/projects/2, but have not pursued it yet. I
> think the above would need to be automated for creating a policy.cfg with
> accurate hardware profiles and somehow verifiable. Also, I do not know if this
> could be a generic solution for any Ceph cluster.

So for the moment I understand the status as:

1. Recommendation: Do not use DeepSea for existing ceph-deploy based clusters.
2. Inventory can be done with DeepSea
3. policy.cfg can be created manually
4. ?

-> In case I build the policy.cfg manually and correctly - what would happen if I go through the next steps of DeepSea, then? Will this kill / overwrite anything already existing?

Thanks,
Martin

> Eric

From loic.devulder at mpsa.com Mon Jan 23 01:13:38 2017
From: loic.devulder at mpsa.com (LOIC DEVULDER)
Date: Mon, 23 Jan 2017 08:13:38 +0000
Subject: [Deepsea-users] NTP configuration
In-Reply-To: <55002178.YIbAjLZtEa@ruby>
References: <3CBFA7CC2505A74B9C172B35128B88637EDAD27E@YLAV4460.INETPSA.com> <55002178.YIbAjLZtEa@ruby>
Message-ID: <3CBFA7CC2505A74B9C172B35128B88637EDBCB04@YLAV4460.INETPSA.com>

Hi,

Thanks Eric, that's what I need!

Maybe it could be a good idea to add this wiki link at the beginning of the DeepSea installation method paragraph in the SES documentation, to avoid this kind of dumb question :-)

Regards / Cordialement,
___________________________________________________________________
PSA Groupe
Loïc Devulder (loic.devulder at mpsa.com)
Senior Linux System Engineer / Linux HPC Specialist
DF/DDCE/ISTA/DSEP/ULES - Linux Team
BESSONCOURT / EXTENSION RIVE DROITE / B19
Internal postal address: SX.BES.15
Phone Incident - Level 3: 22 94 39
Phone Incident - Level 4: 22 92 40
Office: +33 (0)9 66 66 69 06 (27 69 06)
Mobile: +33 (0)6 87 72 47 31
___________________________________________________________________

This message may contain confidential information. If you are not the intended recipient, please advise the sender immediately and delete this message. For further information on confidentiality and the risks inherent in electronic communication see http://disclaimer.psa-peugeot-citroen.com.
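[Editorial note on the policy.cfg discussion above: a policy.cfg is essentially a list of file globs that pick, from the proposals Stage 1 writes under /srv/pillar/ceph/proposals on the Salt master, which role, hardware-profile and configuration files feed the pillar. A hand-written sketch for a small cluster like the one in this thread (ylal8020 as master/admin, ylal8290/ylal8030/ylal8300 as mons, ylxl00* as storage) might look roughly like the lines below. The path patterns are illustrative, recalled from the SES documentation rather than taken from this thread, and must be checked against the proposal directories Stage 1 actually generated.]

    # /srv/pillar/ceph/proposals/policy.cfg -- illustrative sketch only
    cluster-ceph/cluster/*.sls
    config/stack/default/global.yml
    config/stack/default/ceph/cluster.yml
    role-master/cluster/ylal8020*.sls
    role-admin/cluster/ylal8020*.sls
    role-mon/cluster/ylal8290*.sls
    role-mon/cluster/ylal8030*.sls
    role-mon/cluster/ylal8300*.sls
    # <profile name> is whatever hardware profile Stage 1 proposed for the storage nodes
    profile-<profile name>/cluster/ylxl00*.sls
    profile-<profile name>/stack/default/ceph/minions/ylxl00*.yml

The configure stage (ceph.stage.configure, shown earlier in this thread) then merges whatever these globs select into the pillar reported by salt '*' pillar.items.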
> -----Message d'origine----- > De?: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users- > bounces at lists.suse.com] De la part de Eric Jackson Envoy??: vendredi > 20 janvier 2017 20:53 ??: Discussions about the DeepSea management > framework for Ceph Objet?: Re: > [Deepsea-users] NTP configuration > > >>> Real sender address / Reelle adresse d expedition : > >>> deepsea-users-bounces at lists.suse.com <<< > > ********************************************************************** > Hi Loic, > The short answer is to tell DeepSea to do something else which > includes "do nothing". Check the first example here > https://github.com/SUSE/DeepSea/wiki/customize. I used ntp. > > Salt is not fond of absence or empty configurations. As many > defaults as we tried to put in, state files need at least a no-op. > The strategy throughout DeepSea is everything can be overridden since > I cannot predict what would need to be customized at a site. > > Eric > > > On Friday, January 20, 2017 03:16:13 PM LOIC DEVULDER wrote: > > Hi, > > > > During my tests with DeepSea I ran into a little problem: I can't be > > able to remove the NTP configuration. > > > > Ok I know: why should I want to do this? Simply because I already > > have NTP configured on my servers (we have a custom NTP config in my > company). > > > > I try to remove these lines from the > > /srv/pillar/ceph/proposals/config/stack/default/global.yml file: > > ylal8020:/srv/pillar # cat > > ceph/proposals/config/stack/default/global.yml > > time_server: '{{ pillar.get("master_minion") }}' > > time_service: ntp > > > > But I ran into a weird issue while trying to execute the > > configuration > > stage: ylal8020:/srv/pillar # salt-run state.orch > > ceph.stage.configure [WARNING ] Although 'dmidecode' was found in > > path, the current user cannot execute it. Grains output might not be > > accurate. [WARNING ] Although 'dmidecode' was found in path, the > > current user cannot execute it. Grains output might not be accurate. > > [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. > > Grains output might not be accurate. [WARNING ] Although 'dmidecode' > > was found in path, the current user cannot execute it. Grains output > might not be accurate. 
> > ylal8020.inetpsa.com_master: > > Name: push.proposal - Function: salt.runner - Result: Changed > > Started: - > > 15:53:58.352226 Duration: 563.82 ms Name: refresh_pillar1 - Function: > > salt.state - Result: Changed Started: - 15:53:58.916733 Duration: > > 589.218 ms Name: configure.cluster - Function: salt.runner - Result: > > Changed > > Started: - 15:53:59.506662 Duration: 1003.844 ms Name: > > refresh_pillar2 > > - > > Function: salt.state - Result: Changed Started: - 15:54:00.511544 > Duration: > > 662.566 ms Name: admin key - Function: salt.state - Result: Clean > Started: > > - 15:54:01.174305 Duration: 455.286 ms Name: mon key - Function: > > salt.state > > - Result: Clean Started: - 15:54:01.629844 Duration: 396.696 ms Name: > > osd key - Function: salt.state - Result: Clean Started: - > > 15:54:02.026768 > > Duration: 391.508 ms Name: igw key - Function: salt.state - Result: > > Clean > > Started: - 15:54:02.418500 Duration: 1192.624 ms Name: mds key - > Function: > > salt.state - Result: Clean Started: - 15:54:03.611366 Duration: > > 1172.492 ms > > Name: rgw key - Function: salt.state - Result: Clean Started: - > > 15:54:04.784086 Duration: 1193.912 ms Name: openattic key - Function: > > salt.state - Result: Clean Started: - 15:54:05.978226 Duration: > > 393.879 ms > > Name: igw config - Function: salt.state - Result: Clean Started: - > > 15:54:06.372340 Duration: 1183.398 ms > > > > Summary for ylal8020.inetpsa.com_master > > ------------- > > Succeeded: 12 (changed=4) > > Failed: 0 > > ------------- > > Total states run: 12 > > Total run time: 9.199 s > > > > Ok I know there is no direct error but the pillar.items is not good, > > some items are missing: ylal8020:/srv/pillar # salt '*' pillar.items > > ylal8300.inetpsa.com: > > ---------- > > benchmark: > > ---------- > > default-collection: > > simple.yml > > job-file-directory: > > /run/cephfs_bench_jobs > > log-file-directory: > > /var/log/cephfs_bench_logs > > work-directory: > > /run/cephfs_bench > > cluster: > > ceph > > master_minion: > > ylal8020.inetpsa.com > > mon_host: > > mon_initial_members: > > - ylal8290 > > - ylal8030 > > - ylal8300 > > roles: > > - mon > > ylal8030.inetpsa.com: > > ---------- > > benchmark: > > ---------- > > default-collection: > > simple.yml > > job-file-directory: > > /run/cephfs_bench_jobs > > log-file-directory: > > /var/log/cephfs_bench_logs > > work-directory: > > /run/cephfs_bench > > cluster: > > ceph > > master_minion: > > ylal8020.inetpsa.com > > mon_host: > > mon_initial_members: > > - ylal8290 > > - ylal8030 > > - ylal8300 > > roles: > > - mon > > ylal8020.inetpsa.com: > > ---------- > > benchmark: > > ---------- > > default-collection: > > simple.yml > > job-file-directory: > > /run/cephfs_bench_jobs > > log-file-directory: > > /var/log/cephfs_bench_logs > > work-directory: > > /run/cephfs_bench > > cluster: > > ceph > > master_minion: > > ylal8020.inetpsa.com > > mon_host: > > mon_initial_members: > > - ylal8290 > > - ylal8030 > > - ylal8300 > > roles: > > - master > > - admin > > ylxl0060.inetpsa.com: > > ---------- > > benchmark: > > ---------- > > default-collection: > > simple.yml > > job-file-directory: > > /run/cephfs_bench_jobs > > log-file-directory: > > /var/log/cephfs_bench_logs > > work-directory: > > /run/cephfs_bench > > cluster: > > ceph > > master_minion: > > ylal8020.inetpsa.com > > mon_host: > > mon_initial_members: > > - ylal8290 > > - ylal8030 > > - ylal8300 > > roles: > > - storage > > ylxl0050.inetpsa.com: > > ---------- > > benchmark: > > ---------- > 
> default-collection: > > simple.yml > > job-file-directory: > > /run/cephfs_bench_jobs > > log-file-directory: > > /var/log/cephfs_bench_logs > > work-directory: > > /run/cephfs_bench > > cluster: > > ceph > > master_minion: > > ylal8020.inetpsa.com > > mon_host: > > mon_initial_members: > > - ylal8290 > > - ylal8030 > > - ylal8300 > > roles: > > - storage > > ylal8290.inetpsa.com: > > ---------- > > benchmark: > > ---------- > > default-collection: > > simple.yml > > job-file-directory: > > /run/cephfs_bench_jobs > > log-file-directory: > > /var/log/cephfs_bench_logs > > work-directory: > > /run/cephfs_bench > > cluster: > > ceph > > master_minion: > > ylal8020.inetpsa.com > > mon_host: > > mon_initial_members: > > - ylal8290 > > - ylal8030 > > - ylal8300 > > roles: > > - mon > > ylxl0080.inetpsa.com: > > ---------- > > benchmark: > > ---------- > > default-collection: > > simple.yml > > job-file-directory: > > /run/cephfs_bench_jobs > > log-file-directory: > > /var/log/cephfs_bench_logs > > work-directory: > > /run/cephfs_bench > > cluster: > > ceph > > master_minion: > > ylal8020.inetpsa.com > > mon_host: > > mon_initial_members: > > - ylal8290 > > - ylal8030 > > - ylal8300 > > roles: > > - storage > > ylxl0070.inetpsa.com: > > ---------- > > benchmark: > > ---------- > > default-collection: > > simple.yml > > job-file-directory: > > /run/cephfs_bench_jobs > > log-file-directory: > > /var/log/cephfs_bench_logs > > work-directory: > > /run/cephfs_bench > > cluster: > > ceph > > master_minion: > > ylal8020.inetpsa.com > > mon_host: > > mon_initial_members: > > - ylal8290 > > - ylal8030 > > - ylal8300 > > roles: > > - storage > > > > So my "simple" question is: how I can configure global.yml to not > > let DeepSea configure NTP? > > > > Regards / Cordialement, > > ___________________________________________________________________ > > PSA Groupe > > Lo?c Devulder (loic.devulder at mpsa.com) Senior Linux System Engineer > > / Linux HPC Specialist DF/DDCE/ISTA/DSEP/ULES - Linux Team > > BESSONCOURT / EXTENSION RIVE DROITE / B19 Internal postal address: > > SX.BES.15 Phone Incident - Level 3: 22 94 39 Phone Incident - Level > > 4: 22 92 40 > > Office: +33 (0)9 66 66 69 06 (27 69 06) > > Mobile: +33 (0)6 87 72 47 31 > > ___________________________________________________________________ > > > > This message may contain confidential information. If you are not > > the intended recipient, please advise the sender immediately and > > delete this message. For further information on confidentiality and > > the risks inherent in electronic communication see > > http://disclaimer.psa-peugeot- > citroen.com. > > > > _______________________________________________ > > Deepsea-users mailing list > > Deepsea-users at lists.suse.com > > http://lists.suse.com/mailman/listinfo/deepsea-users From ejackson at suse.com Mon Jan 23 03:58:15 2017 From: ejackson at suse.com (Eric Jackson) Date: Mon, 23 Jan 2017 05:58:15 -0500 Subject: [Deepsea-users] Antw: Re: Integration of DeepSea in a ceph-deploy based installation In-Reply-To: <5885BDDD0200001C002CBEC5@prv-mh.provo.novell.com> References: <5880D81E0200001C002CB8DE@prv-mh.provo.novell.com> <1604394.1GZ50RZMij@ruby> <5885BDDD0200001C002CBEC5@prv-mh.provo.novell.com> Message-ID: <2069930.8tyYCd1lu8@ruby> On Monday, January 23, 2017 12:25:01 AM Martin Weiss wrote: > > On Thursday, January 19, 2017 07:40:28 AM Martin Weiss wrote: > > I have heard that we want to deprecate ceph-deploy in one of the next > releases and replace it with deepsea. 
With that in mind I believe we have > to deliver a migration path from "ceph-deploy" to "deepsea".. > > Ad yes - I agree - at the moment where deepsea is not yet on paar with > ceph-deploy this might not be worth the effort... But we need to start > building the proper solution before deprecating ceph-deploy ;-). To my knowledge, the only two items ceph-deploy does that DeepSea does not is support dm-crypt and provide a command line editor of ceph.conf. The former is in the todo list and I am still trying to understand the requirements of the latter. > > Oh - and btw. - this would also be an added value for customers migrating > from NON SLES bases Ceph deployments to SES.. > > What happens if you try and something goes wrong? I don't know. My > > > > personal paranoia level would be high enough that I would skip using any > > of > > the stage orchestration commands (i.e. salt?run state.orch ceph.stage.3) > > and run the individual steps on the individual nodes. It would be a bit > > tedious > > > > for a migration, but much easier to recover. > > Agreed - this adds to my wish of an automatically generated policy.cfg for > an existing cluster. > > In the longer term, we do have an existing card in > > > > https://github.com/SUSE/DeepSea/projects/2, but have not pursued it yet. > > I > > think the above would need to be automated for creating a policy.cfg with > > accurate hardware profiles and somehow verifiable. Also, I do not know if > > this > > could be a generic solution for any Ceph cluster. > > So for the moment I understand this status: > > 1. Recommendation: Do not use DeepSea for existing ceph-deploy based > clusters. 2. Inventory can be done with DeepSea > 3. policy.cfg can be created manually > 4. ? > > -> In case I build the policy.cfg manually and correct - what would happen > if I go through the next steps of DeepSea, then? Will this kill / overwrite > anything already existing? > If you did build your policy.cfg identically to the existing environment along with centralizing any customized configurations (e.g. ceph.conf, rgw.conf, etc.), then nothing would happen. Running Stage 3 and Stage 4 would check that all roles and services are as they should be. > Thanks, > Martin > > > Eric > > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 473 bytes Desc: This is a digitally signed message part. URL: From lgrimmer at suse.com Mon Jan 23 04:08:04 2017 From: lgrimmer at suse.com (Lenz Grimmer) Date: Mon, 23 Jan 2017 12:08:04 +0100 Subject: [Deepsea-users] Configure centralized logging? Message-ID: <539f63c6-d113-322d-56ba-922550804580@suse.com> Hi, is there a way to instruct DeepSea to configure all nodes in a cluster to perform remote logging to a central syslog/log host by default? If not, I would like to submit this as an enhancement request - this would make it much easier to process logs on a single instance for errors and failures. It should be possible for an admin to define a "log host" (the admin node by default?) that all nodes should send their logs to, instead of keeping them on the local file system. Thanks, Lenz -- SUSE Linux GmbH - Maxfeldstr. 
5 - 90409 Nuernberg (Germany)
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 181 bytes
Desc: OpenPGP digital signature
URL: 

From swamireddy at gmail.com Mon Jan 23 04:20:26 2017
From: swamireddy at gmail.com (M Ranga Swami Reddy)
Date: Mon, 23 Jan 2017 16:50:26 +0530
Subject: [Deepsea-users] DeepSea for Ubuntu
In-Reply-To: <2227634.dlsvgTyjO7@ruby>
References: <2227634.dlsvgTyjO7@ruby>
Message-ID: 

Hi Eric - Can I use the current source repo for Ubuntu deployment?

Thanks
Swami

On Mon, Jan 23, 2017 at 4:34 PM, Eric Jackson wrote:
> On Friday, January 20, 2017 01:54:04 PM M Ranga Swami Reddy wrote:
> > Hello,
> > Is there any progress to port current code to Ubuntu ?
>
> Little. The in-progress PR https://github.com/SUSE/DeepSea/pull/73 has the
> hook to call lshw instead of hwinfo.
>
> The good news is that everything that had been in progress prior to supporting
> other distros is wrapping up. This is next on the list:
> https://github.com/SUSE/DeepSea/projects/1
>
> > Please update.
> >
> > Thanks
> > Swami
>
> _______________________________________________
> Deepsea-users mailing list
> Deepsea-users at lists.suse.com
> http://lists.suse.com/mailman/listinfo/deepsea-users
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From loic.devulder at mpsa.com Mon Jan 23 08:40:04 2017
From: loic.devulder at mpsa.com (LOIC DEVULDER)
Date: Mon, 23 Jan 2017 15:40:04 +0000
Subject: [Deepsea-users] NTP configuration
References: <3CBFA7CC2505A74B9C172B35128B88637EDAD27E@YLAV4460.INETPSA.com> <55002178.YIbAjLZtEa@ruby>
Message-ID: <3CBFA7CC2505A74B9C172B35128B88637EDBD0F9@YLAV4460.INETPSA.com>

Hi again,

After reading the validate.py file I found another method to disable the NTP configuration from DeepSea: I can set time_service to disabled in the global.yml file:

ylal8020:/srv/pillar/ceph/proposals # cat config/stack/default/global.yml
time_service: disabled

It's better because I can see that time_server is disabled when I run stage 3, but I have an error with sntp. DeepSea tries to execute an sntp command with no hostname and so it fails:

ylal8020:/srv/pillar/ceph/proposals # salt-run state.orch ceph.stage.deploy
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
firewall : disabled
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
fsid : valid
public_network : valid
public_interface : valid
cluster_network : valid
cluster_interface : valid
monitors : valid
storage : valid
master_role : valid
mon_host : valid
mon_initial_members : valid
time_server : disabled
fqdn : valid
[ERROR ] Run failed on minions: ylxl0080.inetpsa.com, ylal8300.inetpsa.com, ylxl0050.inetpsa.com, ylal8020.inetpsa.com, ylal8290.inetpsa.com, ylxl0060.inetpsa.com, ylxl0070.inetpsa.com, ylal8030.inetpsa.com
Failures:
  ylxl0080.inetpsa.com:
    Name: ntp - Function: pkg.installed - Result: Clean Started: - 15:48:10.924640 Duration: 711.651 ms
    ----------
    ID: sync time
    Function: cmd.run
    Name: sntp -S -c
    Result: False
    Comment: Command "sntp -S -c " run
    Started: 15:48:11.637698
    Duration: 59.368 ms
    Changes:
      ----------
      pid: 16982
      retcode: 1
      stderr: /usr/sbin/sntp: The 'concurrent' option requires an argument.
        sntp - standard Simple Network Time Protocol client program - Ver. 4.2.8p9
        Usage: sntp [ - [] | --[{=| }] ]... \ [ hostname-or-IP ...]
        Try 'sntp --help' for more information.
      stdout:

  Summary for ylxl0080.inetpsa.com
  ------------
  Succeeded: 1 (changed=1)
  Failed: 1
  ------------
  Total states run: 2
  Total run time: 771.019 ms

  ylal8300.inetpsa.com:
    Name: ntp - Function: pkg.installed - Result: Clean Started: - 15:48:10.856634 Duration: 817.354 ms
    ----------
    ID: sync time
    Function: cmd.run
    Name: sntp -S -c
    Result: False
    Comment: Command "sntp -S -c " run
    Started: 15:48:11.675171
    Duration: 55.126 ms
    Changes:
      ----------
      pid: 2658
      retcode: 1
      stderr: /usr/sbin/sntp: The 'concurrent' option requires an argument.
        sntp - standard Simple Network Time Protocol client program - Ver. 4.2.8p9
        Usage: sntp [ - [] | --[{=| }] ]... \ [ hostname-or-IP ...]
        Try 'sntp --help' for more information.
      stdout:

We can see that time_service has disappeared and has been replaced by "time_server: disabled". According to validate.py the time_server value is set to disabled when time_service is disabled, so it seems to be normal. I tried to find how DeepSea executes this cmd.run but my Salt knowledge seems too low :-(

I was able to bypass this error with Eric's information from the wiki. I added the disabled.sls file and "time_init: disabled" in the /srv/pillar/ceph/proposals/config/stack/default/global.yml file and it works.

Is there a way to add a test that does not execute this cmd.run when time_service/time_server is set to disabled? It's not a big deal if this is not possible, as adding the disabled.sls file is not too complicated, but it could be easier from a sysadmin point of view :-)

Regards / Cordialement,
___________________________________________________________________
PSA Groupe
Loïc Devulder (loic.devulder at mpsa.com)
Senior Linux System Engineer / Linux HPC Specialist
DF/DDCE/ISTA/DSEP/ULES - Linux Team
BESSONCOURT / EXTENSION RIVE DROITE / B19
Internal postal address: SX.BES.15
Phone Incident - Level 3: 22 94 39
Phone Incident - Level 4: 22 92 40
Office: +33 (0)9 66 66 69 06 (27 69 06)
Mobile: +33 (0)6 87 72 47 31
___________________________________________________________________

This message may contain confidential information. If you are not the intended recipient, please advise the sender immediately and delete this message. For further information on confidentiality and the risks inherent in electronic communication see http://disclaimer.psa-peugeot-citroen.com.
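[Editorial note: to make the workaround described above concrete, it amounts to a pillar override in global.yml plus a no-op state for DeepSea's time handling. A minimal sketch follows; the thread confirms the pillar key and the file name "disabled.sls", but the directory for the state file and its exact contents are assumptions (any state that does nothing, such as Salt's test.nop, should satisfy it).]

    # /srv/pillar/ceph/proposals/config/stack/default/global.yml
    # added alongside or instead of the default time_server/time_service entries
    time_init: disabled

    # /srv/salt/ceph/time/disabled.sls -- directory assumed, content not shown in the thread
    keep existing ntp configuration:
      test.nop: []

With those two pieces in place Loïc reports that the stages run cleanly, so the empty-hostname sntp call shown above should no longer be attempted.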
> -----Message d'origine----- > De?: LOIC DEVULDER - U329683 > Envoy??: lundi 23 janvier 2017 09:14 > ??: Discussions about the DeepSea management framework for Ceph users at lists.suse.com> > Objet?: RE: [Deepsea-users] NTP configuration > > Hi, > > Thanks Eric, that's what I need! > > Maybe could it be a good idea to add this wiki link at the beginning of > the DeepSea installation method paragraph in the SES documentation, to > avoid this kind of dumb question :-) > > > Regards / Cordialement, > ___________________________________________________________________ > PSA Groupe > Lo?c Devulder (loic.devulder at mpsa.com) > Senior Linux System Engineer / Linux HPC Specialist DF/DDCE/ISTA/DSEP/ULES > - Linux Team BESSONCOURT / EXTENSION RIVE DROITE / B19 Internal postal > address: SX.BES.15 Phone Incident - Level 3: 22 94 39 Phone Incident - > Level 4: 22 92 40 > Office: +33 (0)9 66 66 69 06 (27 69 06) > Mobile: +33 (0)6 87 72 47 31 > ___________________________________________________________________ > > This message may contain confidential information. If you are not the > intended recipient, please advise the sender immediately and delete this > message. For further information on confidentiality and the risks inherent > in electronic communication see http://disclaimer.psa-peugeot-citroen.com. > > > -----Message d'origine----- > > De?: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users- > > bounces at lists.suse.com] De la part de Eric Jackson Envoy??: vendredi > > 20 janvier 2017 20:53 ??: Discussions about the DeepSea management > > framework for Ceph Objet?: Re: > > [Deepsea-users] NTP configuration > > > > >>> Real sender address / Reelle adresse d expedition : > > >>> deepsea-users-bounces at lists.suse.com <<< > > > > ********************************************************************** > > Hi Loic, > > The short answer is to tell DeepSea to do something else which > > includes "do nothing". Check the first example here > > https://github.com/SUSE/DeepSea/wiki/customize. I used ntp. > > > > Salt is not fond of absence or empty configurations. As many > > defaults as we tried to put in, state files need at least a no-op. > > The strategy throughout DeepSea is everything can be overridden since > > I cannot predict what would need to be customized at a site. > > > > Eric > > > > > > On Friday, January 20, 2017 03:16:13 PM LOIC DEVULDER wrote: > > > Hi, > > > > > > During my tests with DeepSea I ran into a little problem: I can't be > > > able to remove the NTP configuration. > > > > > > Ok I know: why should I want to do this? Simply because I already > > > have NTP configured on my servers (we have a custom NTP config in my > > company). > > > > > > I try to remove these lines from the > > > /srv/pillar/ceph/proposals/config/stack/default/global.yml file: > > > ylal8020:/srv/pillar # cat > > > ceph/proposals/config/stack/default/global.yml > > > time_server: '{{ pillar.get("master_minion") }}' > > > time_service: ntp > > > > > > But I ran into a weird issue while trying to execute the > > > configuration > > > stage: ylal8020:/srv/pillar # salt-run state.orch > > > ceph.stage.configure [WARNING ] Although 'dmidecode' was found in > > > path, the current user cannot execute it. Grains output might not be > > > accurate. [WARNING ] Although 'dmidecode' was found in path, the > > > current user cannot execute it. Grains output might not be accurate. > > > [WARNING ] Although 'dmidecode' was found in path, the current user > cannot execute it. 
> > > Grains output might not be accurate. [WARNING ] Although 'dmidecode' > > > was found in path, the current user cannot execute it. Grains output > > might not be accurate. > > > ylal8020.inetpsa.com_master: > > > Name: push.proposal - Function: salt.runner - Result: Changed > > > Started: - > > > 15:53:58.352226 Duration: 563.82 ms Name: refresh_pillar1 - Function: > > > salt.state - Result: Changed Started: - 15:53:58.916733 Duration: > > > 589.218 ms Name: configure.cluster - Function: salt.runner - Result: > > > Changed > > > Started: - 15:53:59.506662 Duration: 1003.844 ms Name: > > > refresh_pillar2 > > > - > > > Function: salt.state - Result: Changed Started: - 15:54:00.511544 > > Duration: > > > 662.566 ms Name: admin key - Function: salt.state - Result: Clean > > Started: > > > - 15:54:01.174305 Duration: 455.286 ms Name: mon key - Function: > > > salt.state > > > - Result: Clean Started: - 15:54:01.629844 Duration: 396.696 ms Name: > > > osd key - Function: salt.state - Result: Clean Started: - > > > 15:54:02.026768 > > > Duration: 391.508 ms Name: igw key - Function: salt.state - Result: > > > Clean > > > Started: - 15:54:02.418500 Duration: 1192.624 ms Name: mds key - > > Function: > > > salt.state - Result: Clean Started: - 15:54:03.611366 Duration: > > > 1172.492 ms > > > Name: rgw key - Function: salt.state - Result: Clean Started: - > > > 15:54:04.784086 Duration: 1193.912 ms Name: openattic key - Function: > > > salt.state - Result: Clean Started: - 15:54:05.978226 Duration: > > > 393.879 ms > > > Name: igw config - Function: salt.state - Result: Clean Started: - > > > 15:54:06.372340 Duration: 1183.398 ms > > > > > > Summary for ylal8020.inetpsa.com_master > > > ------------- > > > Succeeded: 12 (changed=4) > > > Failed: 0 > > > ------------- > > > Total states run: 12 > > > Total run time: 9.199 s > > > > > > Ok I know there is no direct error but the pillar.items is not good, > > > some items are missing: ylal8020:/srv/pillar # salt '*' pillar.items > > > ylal8300.inetpsa.com: > > > ---------- > > > benchmark: > > > ---------- > > > default-collection: > > > simple.yml > > > job-file-directory: > > > /run/cephfs_bench_jobs > > > log-file-directory: > > > /var/log/cephfs_bench_logs > > > work-directory: > > > /run/cephfs_bench > > > cluster: > > > ceph > > > master_minion: > > > ylal8020.inetpsa.com > > > mon_host: > > > mon_initial_members: > > > - ylal8290 > > > - ylal8030 > > > - ylal8300 > > > roles: > > > - mon > > > ylal8030.inetpsa.com: > > > ---------- > > > benchmark: > > > ---------- > > > default-collection: > > > simple.yml > > > job-file-directory: > > > /run/cephfs_bench_jobs > > > log-file-directory: > > > /var/log/cephfs_bench_logs > > > work-directory: > > > /run/cephfs_bench > > > cluster: > > > ceph > > > master_minion: > > > ylal8020.inetpsa.com > > > mon_host: > > > mon_initial_members: > > > - ylal8290 > > > - ylal8030 > > > - ylal8300 > > > roles: > > > - mon > > > ylal8020.inetpsa.com: > > > ---------- > > > benchmark: > > > ---------- > > > default-collection: > > > simple.yml > > > job-file-directory: > > > /run/cephfs_bench_jobs > > > log-file-directory: > > > /var/log/cephfs_bench_logs > > > work-directory: > > > /run/cephfs_bench > > > cluster: > > > ceph > > > master_minion: > > > ylal8020.inetpsa.com > > > mon_host: > > > mon_initial_members: > > > - ylal8290 > > > - ylal8030 > > > - ylal8300 > > > roles: > > > - master > > > - admin > > > ylxl0060.inetpsa.com: > > > ---------- > > > benchmark: > > > ---------- > > > default-collection: 
> > > simple.yml > > > job-file-directory: > > > /run/cephfs_bench_jobs > > > log-file-directory: > > > /var/log/cephfs_bench_logs > > > work-directory: > > > /run/cephfs_bench > > > cluster: > > > ceph > > > master_minion: > > > ylal8020.inetpsa.com > > > mon_host: > > > mon_initial_members: > > > - ylal8290 > > > - ylal8030 > > > - ylal8300 > > > roles: > > > - storage > > > ylxl0050.inetpsa.com: > > > ---------- > > > benchmark: > > > ---------- > > > default-collection: > > > simple.yml > > > job-file-directory: > > > /run/cephfs_bench_jobs > > > log-file-directory: > > > /var/log/cephfs_bench_logs > > > work-directory: > > > /run/cephfs_bench > > > cluster: > > > ceph > > > master_minion: > > > ylal8020.inetpsa.com > > > mon_host: > > > mon_initial_members: > > > - ylal8290 > > > - ylal8030 > > > - ylal8300 > > > roles: > > > - storage > > > ylal8290.inetpsa.com: > > > ---------- > > > benchmark: > > > ---------- > > > default-collection: > > > simple.yml > > > job-file-directory: > > > /run/cephfs_bench_jobs > > > log-file-directory: > > > /var/log/cephfs_bench_logs > > > work-directory: > > > /run/cephfs_bench > > > cluster: > > > ceph > > > master_minion: > > > ylal8020.inetpsa.com > > > mon_host: > > > mon_initial_members: > > > - ylal8290 > > > - ylal8030 > > > - ylal8300 > > > roles: > > > - mon > > > ylxl0080.inetpsa.com: > > > ---------- > > > benchmark: > > > ---------- > > > default-collection: > > > simple.yml > > > job-file-directory: > > > /run/cephfs_bench_jobs > > > log-file-directory: > > > /var/log/cephfs_bench_logs > > > work-directory: > > > /run/cephfs_bench > > > cluster: > > > ceph > > > master_minion: > > > ylal8020.inetpsa.com > > > mon_host: > > > mon_initial_members: > > > - ylal8290 > > > - ylal8030 > > > - ylal8300 > > > roles: > > > - storage > > > ylxl0070.inetpsa.com: > > > ---------- > > > benchmark: > > > ---------- > > > default-collection: > > > simple.yml > > > job-file-directory: > > > /run/cephfs_bench_jobs > > > log-file-directory: > > > /var/log/cephfs_bench_logs > > > work-directory: > > > /run/cephfs_bench > > > cluster: > > > ceph > > > master_minion: > > > ylal8020.inetpsa.com > > > mon_host: > > > mon_initial_members: > > > - ylal8290 > > > - ylal8030 > > > - ylal8300 > > > roles: > > > - storage > > > > > > So my "simple" question is: how I can configure global.yml to not > > > let DeepSea configure NTP? > > > > > > Regards / Cordialement, > > > ___________________________________________________________________ > > > PSA Groupe > > > Lo?c Devulder (loic.devulder at mpsa.com) Senior Linux System Engineer > > > / Linux HPC Specialist DF/DDCE/ISTA/DSEP/ULES - Linux Team > > > BESSONCOURT / EXTENSION RIVE DROITE / B19 Internal postal address: > > > SX.BES.15 Phone Incident - Level 3: 22 94 39 Phone Incident - Level > > > 4: 22 92 40 > > > Office: +33 (0)9 66 66 69 06 (27 69 06) > > > Mobile: +33 (0)6 87 72 47 31 > > > ___________________________________________________________________ > > > > > > This message may contain confidential information. If you are not > > > the intended recipient, please advise the sender immediately and > > > delete this message. For further information on confidentiality and > > > the risks inherent in electronic communication see > > > http://disclaimer.psa-peugeot- > > citroen.com. 
> > > > > > _______________________________________________ > > > Deepsea-users mailing list > > > Deepsea-users at lists.suse.com > > > http://lists.suse.com/mailman/listinfo/deepsea-users From Martin.Weiss at suse.com Tue Jan 24 00:30:25 2017 From: Martin.Weiss at suse.com (Martin Weiss) Date: Tue, 24 Jan 2017 00:30:25 -0700 Subject: [Deepsea-users] Antw: Re: Antw: Re: Integration of DeepSea in a ceph-deploy based installation In-Reply-To: <2069930.8tyYCd1lu8@ruby> References: <5880D81E0200001C002CB8DE@prv-mh.provo.novell.com> <1604394.1GZ50RZMij@ruby> <5885BDDD0200001C002CBEC5@prv-mh.provo.novell.com> <2069930.8tyYCd1lu8@ruby> Message-ID: <588710A10200001C002CC4EC@prv-mh.provo.novell.com> How can we verify if the policy.cfg is matching the current deployment without changing anything? Is there some sort of a "dry-run"? Regarding DeepSea vs. ceph-deploy - if we are on par or even have more functionality in DeepSea I believe the only thing that we need to add to DeepSea is a "migration method" for existing ceph-deploy based clusters before we can move away from ceph-deploy. So getting the inventory, building the policy.cfg and then comparing DeepSea with the existing deployment with a "dry-run" would be nice to have "automated" or in a script... Btw. I do not need a "command line editor for ceph.conf".. Oh - and we need proper integration of DeepSea into existing Salt deployments i.e. in case the customer has SUSE Manager - DeepSea needs to integrate, there.. Thanks Martin On Monday, January 23, 2017 12:25:01 AM Martin Weiss wrote: > > On Thursday, January 19, 2017 07:40:28 AM Martin Weiss wrote: > > I have heard that we want to deprecate ceph-deploy in one of the next > releases and replace it with deepsea. With that in mind I believe we have > to deliver a migration path from "ceph-deploy" to "deepsea".. > > Ad yes - I agree - at the moment where deepsea is not yet on paar with > ceph-deploy this might not be worth the effort... But we need to start > building the proper solution before deprecating ceph-deploy ;-). To my knowledge, the only two items ceph-deploy does that DeepSea does not is support dm-crypt and provide a command line editor of ceph.conf. The former is in the todo list and I am still trying to understand the requirements of the latter. > > Oh - and btw. - this would also be an added value for customers migrating > from NON SLES bases Ceph deployments to SES.. > > What happens if you try and something goes wrong? I don't know. My > > > > personal paranoia level would be high enough that I would skip using any > > of > > the stage orchestration commands (i.e. salt?run state.orch ceph.stage.3) > > and run the individual steps on the individual nodes. It would be a bit > > tedious > > > > for a migration, but much easier to recover. > > Agreed - this adds to my wish of an automatically generated policy.cfg for > an existing cluster. > > In the longer term, we do have an existing card in > > > > https://github.com/SUSE/DeepSea/projects/2, but have not pursued it yet. > > I > > think the above would need to be automated for creating a policy.cfg with > > accurate hardware profiles and somehow verifiable. Also, I do not know if > > this > > could be a generic solution for any Ceph cluster. > > So for the moment I understand this status: > > 1. Recommendation: Do not use DeepSea for existing ceph-deploy based > clusters. 2. Inventory can be done with DeepSea > 3. policy.cfg can be created manually > 4. ? 
> > -> In case I build the policy.cfg manually and correct - what would happen > if I go through the next steps of DeepSea, then? Will this kill / overwrite > anything already existing? > If you did build your policy.cfg identically to the existing environment along with centralizing any customized configurations (e.g. ceph.conf, rgw.conf, etc.), then nothing would happen. Running Stage 3 and Stage 4 would check that all roles and services are as they should be. > Thanks, > Martin > > > Eric > > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose.betancourt at suse.com Tue Jan 24 08:50:02 2017 From: jose.betancourt at suse.com (Jose Betancourt) Date: Tue, 24 Jan 2017 15:50:02 +0000 Subject: [Deepsea-users] How to "undo" a complete installation using deepsea In-Reply-To: <3357045.UujxmkCVYX@ruby> References: <8178666E014FC54090086AA4BC65AC442E8844F8@prvxmb03.microfocus.com> <3357045.UujxmkCVYX@ruby> Message-ID: <8178666E014FC54090086AA4BC65AC442E884806@prvxmb03.microfocus.com> Hi Eric, Thank you for your quick turnaround. I did the deepsea installation following section 4.2 of the SES4 installation guide via zypper. If you have the rpm version of the packages that you mentioned below, let me know their location and I'll give them a try. Thanks again, Jos? Betancourt Linux Architecture - IHV Alliances & Embedded Systems jose.betancourt at suse.com +1.908.672.2719 -----Original Message----- From: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Eric Jackson Sent: Tuesday, January 24, 2017 5:36 AM To: Discussions about the DeepSea management framework for Ceph Subject: Re: [Deepsea-users] How to "undo" a complete installation using deepsea Hi Jos?, On Tuesday, January 24, 2017 05:24:23 AM Jose Betancourt wrote: > Hello Everyone, > > I have a small setup with 6 physical devices. I've been using deepsea > and probably ran salt-run state.orch ceph.stage.configure and > ceph.stage.deploy too many times and it's now complaining that I have > too few monitors and storage nodes. I'm curious, but to your question... > > What I would like to do is to be able to roll back to a point where I can > start over. With ceph-deploy, I have the option of running ceph-deploy > purge, purgedata and forgetkeys and I can pretty much start again > (this is a lab environment). > > Is there an equivalent salt-run invocation to basically "reset" the > salt-run steps so that I can start at stage 1 again? I have added ceph.purge in https://github.com/SUSE/DeepSea/pull/78. I completed this Saturday and it effectively removes the Ceph cluster and resets DeepSea. I am waiting for feedback from another developer, but you are welcome to try it. The commands would be salt-run disengage.safety salt-run state.orch ceph.purge and optionally salt 'admin*' purge.proposals That third step can be included in the default ceph.purge. That's one of the questions I was hoping for feedback. (The default removes the cluster and most of the pillar configuration and allows you to start at Stage 2. If your Stage 1 will remain the same, then removing the proposals doesn't really add anything.) Are you cloning from github or working from an rpm? If the latter, I will do another release of master as soon as this branch is merged. Eric > > Best, > > Jos? 
Betancourt > Linux Architecture - IHV Alliances & Embedded Systems > jose.betancourt at suse.com > +1.908.672.2719 From jfajerski at suse.com Tue Jan 24 08:55:38 2017 From: jfajerski at suse.com (Jan Fajerski) Date: Tue, 24 Jan 2017 16:55:38 +0100 Subject: [Deepsea-users] How to "undo" a complete installation using deepsea In-Reply-To: <8178666E014FC54090086AA4BC65AC442E884806@prvxmb03.microfocus.com> References: <8178666E014FC54090086AA4BC65AC442E8844F8@prvxmb03.microfocus.com> <3357045.UujxmkCVYX@ruby> <8178666E014FC54090086AA4BC65AC442E884806@prvxmb03.microfocus.com> Message-ID: <20170124155538.mbr3cct22f7c6ph5@jf_suse_laptop> On Tue, Jan 24, 2017 at 03:50:02PM +0000, Jose Betancourt wrote: >Hi Eric, > >Thank you for your quick turnaround. I did the deepsea installation following section 4.2 of the SES4 installation guide via zypper. If you have the rpm version of the packages that you mentioned below, let me know their location and I'll give them a try. I happen to have that branch as a rpm: https://build.opensuse.org/package/show/home:jfajerski/deepsea > >Thanks again, > > >Jos? Betancourt >Linux Architecture - IHV Alliances & Embedded Systems >jose.betancourt at suse.com >+1.908.672.2719 > > > >-----Original Message----- >From: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Eric Jackson >Sent: Tuesday, January 24, 2017 5:36 AM >To: Discussions about the DeepSea management framework for Ceph >Subject: Re: [Deepsea-users] How to "undo" a complete installation using deepsea > >Hi Jos?, > >On Tuesday, January 24, 2017 05:24:23 AM Jose Betancourt wrote: >> Hello Everyone, >> >> I have a small setup with 6 physical devices. I've been using deepsea >> and probably ran salt-run state.orch ceph.stage.configure and >> ceph.stage.deploy too many times and it's now complaining that I have >> too few monitors and storage nodes. > >I'm curious, but to your question... >> >> What I would like to do is to be able to roll back to a point where I can >> start over. With ceph-deploy, I have the option of running ceph-deploy >> purge, purgedata and forgetkeys and I can pretty much start again >> (this is a lab environment). >> >> Is there an equivalent salt-run invocation to basically "reset" the >> salt-run steps so that I can start at stage 1 again? > >I have added ceph.purge in https://github.com/SUSE/DeepSea/pull/78. I completed this Saturday and it effectively removes the Ceph cluster and resets DeepSea. I am waiting for feedback from another developer, but you are welcome to try it. > >The commands would be > >salt-run disengage.safety >salt-run state.orch ceph.purge > >and optionally > >salt 'admin*' purge.proposals > >That third step can be included in the default ceph.purge. That's one of the questions I was hoping for feedback. (The default removes the cluster and most of the pillar configuration and allows you to start at Stage 2. If your Stage 1 will remain the same, then removing the proposals doesn't really add >anything.) > >Are you cloning from github or working from an rpm? If the latter, I will do another release of master as soon as this branch is merged. > >Eric > >> >> Best, >> >> Jos? 
Betancourt >> Linux Architecture - IHV Alliances & Embedded Systems >> jose.betancourt at suse.com >> +1.908.672.2719 >_______________________________________________ >Deepsea-users mailing list >Deepsea-users at lists.suse.com >http://lists.suse.com/mailman/listinfo/deepsea-users -- Jan Fajerski Engineer Enterprise Storage SUSE Linux GmbH jfajerski at suse.com From jfajerski at suse.com Tue Jan 24 09:05:12 2017 From: jfajerski at suse.com (Jan Fajerski) Date: Tue, 24 Jan 2017 17:05:12 +0100 Subject: [Deepsea-users] How to "undo" a complete installation using deepsea In-Reply-To: <20170124155538.mbr3cct22f7c6ph5@jf_suse_laptop> References: <8178666E014FC54090086AA4BC65AC442E8844F8@prvxmb03.microfocus.com> <3357045.UujxmkCVYX@ruby> <8178666E014FC54090086AA4BC65AC442E884806@prvxmb03.microfocus.com> <20170124155538.mbr3cct22f7c6ph5@jf_suse_laptop> Message-ID: <20170124160512.c3basugkm5jzyhhg@jf_suse_laptop> On Tue, Jan 24, 2017 at 04:55:38PM +0100, Jan Fajerski wrote: >On Tue, Jan 24, 2017 at 03:50:02PM +0000, Jose Betancourt wrote: >>Hi Eric, >> >>Thank you for your quick turnaround. I did the deepsea installation following section 4.2 of the SES4 installation guide via zypper. If you have the rpm version of the packages that you mentioned below, let me know their location and I'll give them a try. >I happen to have that branch as a rpm: >https://build.opensuse.org/package/show/home:jfajerski/deepsea Be aware though that it'll only be there temporally. >> >>Thanks again, >> >> >>Jos? Betancourt >>Linux Architecture - IHV Alliances & Embedded Systems >>jose.betancourt at suse.com >>+1.908.672.2719 >> >> >> >>-----Original Message----- >>From: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Eric Jackson >>Sent: Tuesday, January 24, 2017 5:36 AM >>To: Discussions about the DeepSea management framework for Ceph >>Subject: Re: [Deepsea-users] How to "undo" a complete installation using deepsea >> >>Hi Jos?, >> >>On Tuesday, January 24, 2017 05:24:23 AM Jose Betancourt wrote: >>>Hello Everyone, >>> >>>I have a small setup with 6 physical devices. I've been using deepsea >>>and probably ran salt-run state.orch ceph.stage.configure and >>>ceph.stage.deploy too many times and it's now complaining that I have >>>too few monitors and storage nodes. >> >>I'm curious, but to your question... >>> >>>What I would like to do is to be able to roll back to a point where I can >>>start over. With ceph-deploy, I have the option of running ceph-deploy >>>purge, purgedata and forgetkeys and I can pretty much start again >>>(this is a lab environment). >>> >>>Is there an equivalent salt-run invocation to basically "reset" the >>>salt-run steps so that I can start at stage 1 again? >> >>I have added ceph.purge in https://github.com/SUSE/DeepSea/pull/78. I completed this Saturday and it effectively removes the Ceph cluster and resets DeepSea. I am waiting for feedback from another developer, but you are welcome to try it. >> >>The commands would be >> >>salt-run disengage.safety >>salt-run state.orch ceph.purge >> >>and optionally >> >>salt 'admin*' purge.proposals >> >>That third step can be included in the default ceph.purge. That's one of the questions I was hoping for feedback. (The default removes the cluster and most of the pillar configuration and allows you to start at Stage 2. If your Stage 1 will remain the same, then removing the proposals doesn't really add >>anything.) >> >>Are you cloning from github or working from an rpm? 
If the latter, I will do another release of master as soon as this branch is merged. >> >>Eric >> >>> >>>Best, >>> >>>Jos? Betancourt >>>Linux Architecture - IHV Alliances & Embedded Systems >>>jose.betancourt at suse.com >>>+1.908.672.2719 >>_______________________________________________ >>Deepsea-users mailing list >>Deepsea-users at lists.suse.com >>http://lists.suse.com/mailman/listinfo/deepsea-users > >-- >Jan Fajerski >Engineer Enterprise Storage >SUSE Linux GmbH >jfajerski at suse.com -- Jan Fajerski Engineer Enterprise Storage SUSE Linux GmbH jfajerski at suse.com From jose.betancourt at suse.com Tue Jan 24 09:06:27 2017 From: jose.betancourt at suse.com (Jose Betancourt) Date: Tue, 24 Jan 2017 16:06:27 +0000 Subject: [Deepsea-users] How to "undo" a complete installation using deepsea In-Reply-To: <20170124160512.c3basugkm5jzyhhg@jf_suse_laptop> References: <8178666E014FC54090086AA4BC65AC442E8844F8@prvxmb03.microfocus.com> <3357045.UujxmkCVYX@ruby> <8178666E014FC54090086AA4BC65AC442E884806@prvxmb03.microfocus.com> <20170124155538.mbr3cct22f7c6ph5@jf_suse_laptop> <20170124160512.c3basugkm5jzyhhg@jf_suse_laptop> Message-ID: <8178666E014FC54090086AA4BC65AC442E8858A0@prvxmb03.microfocus.com> OK. Will download shortly. Thx, Jose -----Original Message----- From: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Jan Fajerski Sent: Tuesday, January 24, 2017 11:05 AM To: deepsea-users at lists.suse.com Subject: Re: [Deepsea-users] How to "undo" a complete installation using deepsea On Tue, Jan 24, 2017 at 04:55:38PM +0100, Jan Fajerski wrote: >On Tue, Jan 24, 2017 at 03:50:02PM +0000, Jose Betancourt wrote: >>Hi Eric, >> >>Thank you for your quick turnaround. I did the deepsea installation following section 4.2 of the SES4 installation guide via zypper. If you have the rpm version of the packages that you mentioned below, let me know their location and I'll give them a try. >I happen to have that branch as a rpm: >https://build.opensuse.org/package/show/home:jfajerski/deepsea Be aware though that it'll only be there temporally. >> >>Thanks again, >> >> >>Jos? Betancourt >>Linux Architecture - IHV Alliances & Embedded Systems >>jose.betancourt at suse.com >>+1.908.672.2719 >> >> >> >>-----Original Message----- >>From: deepsea-users-bounces at lists.suse.com >>[mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Eric >>Jackson >>Sent: Tuesday, January 24, 2017 5:36 AM >>To: Discussions about the DeepSea management framework for Ceph >> >>Subject: Re: [Deepsea-users] How to "undo" a complete installation >>using deepsea >> >>Hi Jos?, >> >>On Tuesday, January 24, 2017 05:24:23 AM Jose Betancourt wrote: >>>Hello Everyone, >>> >>>I have a small setup with 6 physical devices. I've been using >>>deepsea and probably ran salt-run state.orch ceph.stage.configure and >>>ceph.stage.deploy too many times and it's now complaining that I have >>>too few monitors and storage nodes. >> >>I'm curious, but to your question... >>> >>>What I would like to do is to be able to roll back to a point where I can >>>start over. With ceph-deploy, I have the option of running ceph-deploy >>>purge, purgedata and forgetkeys and I can pretty much start again >>>(this is a lab environment). >>> >>>Is there an equivalent salt-run invocation to basically "reset" the >>>salt-run steps so that I can start at stage 1 again? >> >>I have added ceph.purge in https://github.com/SUSE/DeepSea/pull/78. 
I completed this Saturday and it effectively removes the Ceph cluster and resets DeepSea. I am waiting for feedback from another developer, but you are welcome to try it. >> >>The commands would be >> >>salt-run disengage.safety >>salt-run state.orch ceph.purge >> >>and optionally >> >>salt 'admin*' purge.proposals >> >>That third step can be included in the default ceph.purge. That's one >>of the questions I was hoping for feedback. (The default removes the >>cluster and most of the pillar configuration and allows you to start >>at Stage 2. If your Stage 1 will remain the same, then removing the >>proposals doesn't really add >>anything.) >> >>Are you cloning from github or working from an rpm? If the latter, I will do another release of master as soon as this branch is merged. >> >>Eric >> >>> >>>Best, >>> >>>Jos? Betancourt >>>Linux Architecture - IHV Alliances & Embedded Systems >>>jose.betancourt at suse.com >>>+1.908.672.2719 >>_______________________________________________ >>Deepsea-users mailing list >>Deepsea-users at lists.suse.com >>http://lists.suse.com/mailman/listinfo/deepsea-users > >-- >Jan Fajerski >Engineer Enterprise Storage >SUSE Linux GmbH >jfajerski at suse.com -- Jan Fajerski Engineer Enterprise Storage SUSE Linux GmbH jfajerski at suse.com _______________________________________________ Deepsea-users mailing list Deepsea-users at lists.suse.com http://lists.suse.com/mailman/listinfo/deepsea-users From jose.betancourt at suse.com Tue Jan 24 09:17:07 2017 From: jose.betancourt at suse.com (Jose Betancourt) Date: Tue, 24 Jan 2017 16:17:07 +0000 Subject: [Deepsea-users] How to "undo" a complete installation using deepsea In-Reply-To: <20170124160512.c3basugkm5jzyhhg@jf_suse_laptop> References: <8178666E014FC54090086AA4BC65AC442E8844F8@prvxmb03.microfocus.com> <3357045.UujxmkCVYX@ruby> <8178666E014FC54090086AA4BC65AC442E884806@prvxmb03.microfocus.com> <20170124155538.mbr3cct22f7c6ph5@jf_suse_laptop> <20170124160512.c3basugkm5jzyhhg@jf_suse_laptop> Message-ID: <8178666E014FC54090086AA4BC65AC442E8858E4@prvxmb03.microfocus.com> I downloaded the rpm. The disengage safety portion ran OK. When I tried the ceph.purge, it returned a number of errors like "Module function purge.configuration is not available" and "Module function retry.pkill is not available". I installed the deepsea-0.7.1-1.1.noarch.rpm. Am I missing some other package? Jose -----Original Message ----- From: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Jan Fajerski Sent: Tuesday, January 24, 2017 11:05 AM To: deepsea-users at lists.suse.com Subject: Re: [Deepsea-users] How to "undo" a complete installation using deepsea On Tue, Jan 24, 2017 at 04:55:38PM +0100, Jan Fajerski wrote: >On Tue, Jan 24, 2017 at 03:50:02PM +0000, Jose Betancourt wrote: >>Hi Eric, >> >>Thank you for your quick turnaround. I did the deepsea installation following section 4.2 of the SES4 installation guide via zypper. If you have the rpm version of the packages that you mentioned below, let me know their location and I'll give them a try. >I happen to have that branch as a rpm: >https://build.opensuse.org/package/show/home:jfajerski/deepsea Be aware though that it'll only be there temporally. >> >>Thanks again, >> >> >>Jos? 
Betancourt >>Linux Architecture - IHV Alliances & Embedded Systems >>jose.betancourt at suse.com >>+1.908.672.2719 >> >> >> >>-----Original Message----- >>From: deepsea-users-bounces at lists.suse.com >>[mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Eric >>Jackson >>Sent: Tuesday, January 24, 2017 5:36 AM >>To: Discussions about the DeepSea management framework for Ceph >> >>Subject: Re: [Deepsea-users] How to "undo" a complete installation >>using deepsea >> >>Hi Jos?, >> >>On Tuesday, January 24, 2017 05:24:23 AM Jose Betancourt wrote: >>>Hello Everyone, >>> >>>I have a small setup with 6 physical devices. I've been using >>>deepsea and probably ran salt-run state.orch ceph.stage.configure and >>>ceph.stage.deploy too many times and it's now complaining that I have >>>too few monitors and storage nodes. >> >>I'm curious, but to your question... >>> >>>What I would like to do is to be able to roll back to a point where I can >>>start over. With ceph-deploy, I have the option of running ceph-deploy >>>purge, purgedata and forgetkeys and I can pretty much start again >>>(this is a lab environment). >>> >>>Is there an equivalent salt-run invocation to basically "reset" the >>>salt-run steps so that I can start at stage 1 again? >> >>I have added ceph.purge in https://github.com/SUSE/DeepSea/pull/78. I completed this Saturday and it effectively removes the Ceph cluster and resets DeepSea. I am waiting for feedback from another developer, but you are welcome to try it. >> >>The commands would be >> >>salt-run disengage.safety >>salt-run state.orch ceph.purge >> >>and optionally >> >>salt 'admin*' purge.proposals >> >>That third step can be included in the default ceph.purge. That's one >>of the questions I was hoping for feedback. (The default removes the >>cluster and most of the pillar configuration and allows you to start >>at Stage 2. If your Stage 1 will remain the same, then removing the >>proposals doesn't really add >>anything.) >> >>Are you cloning from github or working from an rpm? If the latter, I will do another release of master as soon as this branch is merged. >> >>Eric >> >>> >>>Best, >>> >>>Jos? Betancourt >>>Linux Architecture - IHV Alliances & Embedded Systems >>>jose.betancourt at suse.com >>>+1.908.672.2719 >>_______________________________________________ >>Deepsea-users mailing list >>Deepsea-users at lists.suse.com >>http://lists.suse.com/mailman/listinfo/deepsea-users > >-- >Jan Fajerski >Engineer Enterprise Storage >SUSE Linux GmbH >jfajerski at suse.com -- Jan Fajerski Engineer Enterprise Storage SUSE Linux GmbH jfajerski at suse.com _______________________________________________ Deepsea-users mailing list Deepsea-users at lists.suse.com http://lists.suse.com/mailman/listinfo/deepsea-users From jose.betancourt at suse.com Tue Jan 24 09:26:46 2017 From: jose.betancourt at suse.com (Jose Betancourt) Date: Tue, 24 Jan 2017 16:26:46 +0000 Subject: [Deepsea-users] How to "undo" a complete installation using deepsea In-Reply-To: <1503090.GixP4uj9vd@ruby> References: <8178666E014FC54090086AA4BC65AC442E8844F8@prvxmb03.microfocus.com> <20170124160512.c3basugkm5jzyhhg@jf_suse_laptop> <8178666E014FC54090086AA4BC65AC442E8858E4@prvxmb03.microfocus.com> <1503090.GixP4uj9vd@ruby> Message-ID: <8178666E014FC54090086AA4BC65AC442E885915@prvxmb03.microfocus.com> That worked. Thank You. 
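[Editorial note: pulling the commands from this thread together, the reset sequence on a disposable lab cluster running the development branch that ships ceph.purge (PR https://github.com/SUSE/DeepSea/pull/78) looks like the following. The module sync step is the fix Eric gives in the quoted reply below, and purge.proposals remains optional.]

    # make Salt aware of the newly installed DeepSea modules (normally done in Stage 0)
    salt '*' saltutil.sync_modules

    # lift the safety interlock, then remove the cluster and reset DeepSea
    salt-run disengage.safety
    salt-run state.orch ceph.purge

    # optionally also drop the generated proposals so Stage 1 starts clean
    salt 'admin*' purge.proposals

Note that, as Eric mentions later in the thread, ceph.purge exists only on the 0.7 development line and is not planned for the 0.6.x (SES4) branch.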
-----Original Message----- From: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Eric Jackson Sent: Tuesday, January 24, 2017 11:23 AM To: Discussions about the DeepSea management framework for Ceph Subject: Re: [Deepsea-users] How to "undo" a complete installation using deepsea Salt needs to be told about the new modules. That normally happens in Stage 0. Run salt '*' saltutil.sync_modules On Tuesday, January 24, 2017 04:17:07 PM Jose Betancourt wrote: > I downloaded the rpm. The disengage safety portion ran OK. > > When I tried the ceph.purge, it returned a number of errors like > "Module function purge.configuration is not available" and "Module > function retry.pkill is not available". > > I installed the deepsea-0.7.1-1.1.noarch.rpm. Am I missing some other > package? > > Jose > > -----Original Message ----- > From: deepsea-users-bounces at lists.suse.com > [mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Jan > Fajerski > Sent: Tuesday, January 24, 2017 11:05 AM > To: deepsea-users at lists.suse.com > Subject: Re: [Deepsea-users] How to "undo" a complete installation > using deepsea On Tue, Jan 24, 2017 at 04:55:38PM +0100, Jan Fajerski > wrote: > >On Tue, Jan 24, 2017 at 03:50:02PM +0000, Jose Betancourt wrote: > >>Hi Eric, > >> > >>Thank you for your quick turnaround. I did the deepsea installation > >>following section 4.2 of the SES4 installation guide via zypper. If you > >>have the rpm version of the packages that you mentioned below, let > >>me know their location and I'll give them a try.> > >I happen to have that branch as a rpm: > >https://build.opensuse.org/package/show/home:jfajerski/deepsea > > Be aware though that it'll only be there temporally. > > >>Thanks again, > >> > >> > >>Jos? Betancourt > >>Linux Architecture - IHV Alliances & Embedded Systems > >>jose.betancourt at suse.com > >>+1.908.672.2719 > >> > >> > >> > >>-----Original Message----- > >>From: deepsea-users-bounces at lists.suse.com > >>[mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Eric > >>Jackson > >>Sent: Tuesday, January 24, 2017 5:36 AM > >>To: Discussions about the DeepSea management framework for Ceph > >> > >>Subject: Re: [Deepsea-users] How to "undo" a complete installation > >>using deepsea > >> > >>Hi Jos?, > >> > >>On Tuesday, January 24, 2017 05:24:23 AM Jose Betancourt wrote: > >>>Hello Everyone, > >>> > >>>I have a small setup with 6 physical devices. I've been using > >>>deepsea and probably ran salt-run state.orch ceph.stage.configure > >>>and ceph.stage.deploy too many times and it's now complaining that > >>>I have too few monitors and storage nodes. > >> > >>I'm curious, but to your question... > >> > >>>What I would like to do is to be able to roll back to a point where I can > >>>start over. With ceph-deploy, I have the option of running ceph-deploy > >>>purge, purgedata and forgetkeys and I can pretty much start again > >>>(this is a lab environment). > >>> > >>>Is there an equivalent salt-run invocation to basically "reset" the > >>>salt-run steps so that I can start at stage 1 again? > >> > >>I have added ceph.purge in https://github.com/SUSE/DeepSea/pull/78. > >>I completed this Saturday and it effectively removes the Ceph > >>cluster and resets DeepSea. I am waiting for feedback from another > >>developer, but you are welcome to try it. 
> >> > >>The commands would be > >> > >>salt-run disengage.safety > >>salt-run state.orch ceph.purge > >> > >>and optionally > >> > >>salt 'admin*' purge.proposals > >> > >>That third step can be included in the default ceph.purge. That's > >>one of the questions I was hoping for feedback. (The default > >>removes the cluster and most of the pillar configuration and allows > >>you to start at Stage 2. If your Stage 1 will remain the same, then > >>removing the proposals doesn't really add > >>anything.) > >> > >>Are you cloning from github or working from an rpm? If the latter, > >>I will do another release of master as soon as this branch is merged. > >> > >>Eric > >> > >>>Best, > >>> > >>>Jos? Betancourt > >>>Linux Architecture - IHV Alliances & Embedded Systems > >>>jose.betancourt at suse.com > >>>+1.908.672.2719 > >> > >>_______________________________________________ > >>Deepsea-users mailing list > >>Deepsea-users at lists.suse.com > >>http://lists.suse.com/mailman/listinfo/deepsea-users > > > >-- > >Jan Fajerski > >Engineer Enterprise Storage > >SUSE Linux GmbH > >jfajerski at suse.com > > -- > Jan Fajerski > Engineer Enterprise Storage > SUSE Linux GmbH > jfajerski at suse.com > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users From jose.betancourt at suse.com Tue Jan 24 11:44:20 2017 From: jose.betancourt at suse.com (Jose Betancourt) Date: Tue, 24 Jan 2017 18:44:20 +0000 Subject: [Deepsea-users] How to "undo" a complete installation using deepsea In-Reply-To: <2785756.D6B7pLDEAP@ruby> References: <8178666E014FC54090086AA4BC65AC442E8844F8@prvxmb03.microfocus.com> <1503090.GixP4uj9vd@ruby> <8178666E014FC54090086AA4BC65AC442E885915@prvxmb03.microfocus.com> <2785756.D6B7pLDEAP@ruby> Message-ID: <8178666E014FC54090086AA4BC65AC442E885A42@prvxmb03.microfocus.com> Yes it did. I'm starting the re-build again. I will most likely end up posting another inquiry around yml files for the disk proposal for osds and journals. Thanks again. Jose -----Original Message----- From: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Eric Jackson Sent: Tuesday, January 24, 2017 1:42 PM To: Discussions about the DeepSea management framework for Ceph Subject: Re: [Deepsea-users] How to "undo" a complete installation using deepsea Did the ceph.purge work as you had hoped? On Tuesday, January 24, 2017 04:26:46 PM Jose Betancourt wrote: > That worked. Thank You. > > -----Original Message----- > From: deepsea-users-bounces at lists.suse.com > [mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Eric > Jackson > Sent: Tuesday, January 24, 2017 11:23 AM > To: Discussions about the DeepSea management framework for Ceph > Subject: Re: [Deepsea-users] How to > "undo" a complete installation using deepsea > > Salt needs to be told about the new modules. That normally happens in > Stage 0. 
Run > > salt '*' saltutil.sync_modules > From jose.betancourt at suse.com Tue Jan 24 12:03:34 2017 From: jose.betancourt at suse.com (Jose Betancourt) Date: Tue, 24 Jan 2017 19:03:34 +0000 Subject: [Deepsea-users] How to "undo" a complete installation using deepsea In-Reply-To: <9586804.Myh3rHQYdf@ruby> References: <8178666E014FC54090086AA4BC65AC442E8844F8@prvxmb03.microfocus.com> <2785756.D6B7pLDEAP@ruby> <8178666E014FC54090086AA4BC65AC442E885A42@prvxmb03.microfocus.com> <9586804.Myh3rHQYdf@ruby> Message-ID: <8178666E014FC54090086AA4BC65AC442E885A75@prvxmb03.microfocus.com> I am currently testing on my own mini lab with six Intel NUCs. However, if the setup worked, the next step would have been to potentially use and demonstrate with some gear that we have at two different Executive Briefing Centers (EBCs). If it makes sense to wait on the EBC gear and stick with ceph-deploy, that's fine for now. Jos? Betancourt Linux Architecture - IHV Alliances & Embedded Systems jose.betancourt at suse.com +1.908.672.2719 -----Original Message----- From: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Eric Jackson Sent: Tuesday, January 24, 2017 1:53 PM To: Discussions about the DeepSea management framework for Ceph Subject: Re: [Deepsea-users] How to "undo" a complete installation using deepsea Tangent question: Are you just exploring/familiarizing yourself with DeepSea or working on this for a specific customer? The reason I ask is that SES4 will be 0.6.12 (MR in progress) and that master is 0.7. I haven't planned to add the ceph.purge on the 0.6.x branch. On Tuesday, January 24, 2017 06:44:20 PM Jose Betancourt wrote: > Yes it did. I'm starting the re-build again. I will most likely end up > posting another inquiry around yml files for the disk proposal for > osds and journals. > > Thanks again. > > Jose > > -----Original Message----- > From: deepsea-users-bounces at lists.suse.com > [mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Eric > Jackson > Sent: Tuesday, January 24, 2017 1:42 PM > To: Discussions about the DeepSea management framework for Ceph > Subject: Re: [Deepsea-users] How to > "undo" a complete installation using deepsea > > Did the ceph.purge work as you had hoped? > > On Tuesday, January 24, 2017 04:26:46 PM Jose Betancourt wrote: > > That worked. Thank You. > > > > -----Original Message----- > > From: deepsea-users-bounces at lists.suse.com > > [mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Eric > > Jackson > > Sent: Tuesday, January 24, 2017 11:23 AM > > To: Discussions about the DeepSea management framework for Ceph > > Subject: Re: [Deepsea-users] How to > > "undo" a complete installation using deepsea > > > > Salt needs to be told about the new modules. That normally happens > > in Stage 0. Run > > > > salt '*' saltutil.sync_modules > > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users From jose.betancourt at suse.com Tue Jan 24 13:35:57 2017 From: jose.betancourt at suse.com (Jose Betancourt) Date: Tue, 24 Jan 2017 20:35:57 +0000 Subject: [Deepsea-users] osd and journal profile inquiries In-Reply-To: <3764553.ub7kbXWK9P@ruby> References: <8178666E014FC54090086AA4BC65AC442E885AC2@prvxmb03.microfocus.com> <3764553.ub7kbXWK9P@ruby> Message-ID: <8178666E014FC54090086AA4BC65AC442E885B3D@prvxmb03.microfocus.com> Hi Eric, As a point of education for everyone; 1. 
Does the listing order matter? Meaning, should I have a data, journal, data, journal pattern?

2. And assuming one journal for multiple OSDs, should I just use the same data, journal pattern?

Thanks,
José

-----Original Message-----
From: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Eric Jackson
Sent: Tuesday, January 24, 2017 3:22 PM
To: Discussions about the DeepSea management framework for Ceph
Subject: Re: [Deepsea-users] osd and journal profile inquiries

Hi José,
  The file would look like this:

storage:
  data+journals:
  - /dev/disk/by-id/ata-ST1000LM014-1EJ164_W771X1G9: /dev/disk/by-id/usb-SanDisk_Ultra_Fit_4C530001060612116591-0:0
  osds: []

For some notes: I didn't plan on usb journals so I skip removable media. (Nothing like having a usb stick or drive attached to a server which is suddenly discovered and added to your Ceph cluster.) In your scenario, this is a little painful to go and edit all six files.

Another note: We do have a card on the list https://github.com/SUSE/DeepSea/projects/2 to allow custom profiles which could be run after Stage 1. The original idea here was to allow custom hardware profiles that are not recommended (e.g. a 10:1 ratio for a single SSD journal). I may have to add an option for removable media. :)

Last note: there is nothing special about the hardware profile name. The name is simply chosen in hopes that you, the administrator, can identify which hardware is which. Stage 1 will not overwrite the sls and yml files under proposals. However, you can still create your own profile rather than edit the existing one. Either 'rsync -a' or 'cp -rp' from any of the profile directories to a name of your own choosing. Edit as you wish. Include your directory in your policy.cfg.

Hope that helps.

Eric

On Tuesday, January 24, 2017 07:43:27 PM Jose Betancourt wrote:
> Hello Everyone,
>
> Working on my own mini-lab performing an SES installation with DeepSea
> and Salt per
> https://www.suse.com/documentation/ses-4/book_storage_admin/data/ceph_install_stack.html
>
> I have six identical Intel NUCs. From a storage configuration:
>
> - M.2 250GB for the OS
>
> - 1TB HDD to be used as OSD
>
> - 64GB USB stick to be used as "journal"
>
> I successfully used that configuration in the past using ceph-deploy.
>
> I completed ceph.stage.0 and ceph.stage.1 successfully.
>
> When stage 1 completes, I have the osd proposal (for lack of a better
> word) under /srv/pillar/ceph/proposals/profile-*/stack/default/ceph/minions
> as a yml file.
>
> Using one of the nodes as an example [physical machine is called:
> phynode22]
>
> storage:
>   data+journals: []
>   osds:
>   - /dev/disk/by-id/ata-ST1000LM014-1EJ164_W771X1G9
>
> The OSD entry accurately selected the 1TB drive by-id.
>
> The question is: How do I add a different disk for the journal? In
> my particular case, I would like to add
> /dev/disk/by-id/usb-SanDisk_Ultra_Fit_4C530001060612116591-0:0
>
> What is the correct syntax? And is there something that I can do
> prior to executing ceph.stage.0 and ceph.stage.1 to properly identify
> the different drives?
>
> Best,
>
>
> José Betancourt
> Linux Architecture - IHV Alliances & Embedded Systems
> jose.betancourt at suse.com
> +1.908.672.2719
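Regarding the second question above (one journal device serving several OSDs), the profile would presumably just repeat the journal device for each data disk it serves. The snippet below only illustrates the shape of such an entry; the device IDs are hypothetical placeholders, and this thread does not confirm how DeepSea partitions a shared journal device:

storage:
  data+journals:
  - /dev/disk/by-id/ata-DATA_DISK_1: /dev/disk/by-id/usb-SHARED_JOURNAL
  - /dev/disk/by-id/ata-DATA_DISK_2: /dev/disk/by-id/usb-SHARED_JOURNAL
  osds: []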
From Martin.Weiss at suse.com  Wed Jan 25 00:25:01 2017
From: Martin.Weiss at suse.com (Martin Weiss)
Date: Wed, 25 Jan 2017 00:25:01 -0700
Subject: [Deepsea-users] Antw: Re: Antw: Re: Antw: Re: Integration of DeepSea in a ceph-deploy based installation
In-Reply-To: <2614616.Gh5PHuo1qj@ruby>
References: <5880D81E0200001C002CB8DE@prv-mh.provo.novell.com> <2069930.8tyYCd1lu8@ruby> <588710A10200001C002CC4EC@prv-mh.provo.novell.com> <2614616.Gh5PHuo1qj@ruby>
Message-ID: <588860DD0200001C002CC982@prv-mh.provo.novell.com>

> On Tuesday, January 24, 2017 12:30:25 AM Martin Weiss wrote:
>> How can we verify if the policy.cfg is matching the current deployment
>> without changing anything? Is there some sort of a "dry-run"?
>
> I'll say this is unexplored. Salt has two different methods of dry runs, but I
> do not know if either would give the desired result without trying it.
>
Ok - as long as we have this functionality before we drop / deprecate ceph-deploy I am fine ;-). And - will give that a try once I find some time..

>>
>> Regarding DeepSea vs. ceph-deploy - if we are on par or even have more
>> functionality in DeepSea, I believe the only thing that we need to add to
>> DeepSea is a "migration method" for existing ceph-deploy based clusters
>> before we can move away from ceph-deploy. So getting the inventory,
>> building the policy.cfg and then comparing DeepSea with the existing
>> deployment with a "dry-run" would be nice to have "automated" or in a
>> script... Btw. I do not need a "command line editor for ceph.conf"..
>
> Good to know about ceph.conf. I'll say I may have to recruit someone for
> assimilating a ceph-deploy cluster. Otherwise, it's a little lower on the
> list.

We might not need a direct command line editor for ceph.conf - but we do need a way to adjust any ceph.conf using Salt / DeepSea. (Not sure if that is already included.)

>
>> Oh - and we need proper integration of DeepSea into existing Salt
>> deployments, i.e. in case the customer has SUSE Manager, DeepSea needs to
>> integrate there..
>>
>> Thanks
>> Martin
>>

From jschmid at suse.de  Tue Jan 31 05:12:02 2017
From: jschmid at suse.de (Joshua Schmid)
Date: Tue, 31 Jan 2017 13:12:02 +0100
Subject: [Deepsea-users] Choosing unittesting frameworks
Message-ID: <576b01e2-f40a-442d-3b7a-f9f072f51692@suse.de>

Hey list,

first of all I'd like to get the terminology straight.

We use 'tox', which serves as our virtualenv manager and allows us:

* checking your package installs correctly with different Python
versions and interpreters
* running your tests in each of the environments, configuring your test
tool of choice
* acting as a frontend to Continuous Integration servers, greatly
reducing boilerplate and merging CI and shell-based testing.

tox internally executes 'pytest', which currently acts (for us) as a
test collector that scans directories for 'test_*' files or methods
starting with 'test_'.

pytest could be replaced with nosetests, but after reading some articles
pytest seems to have more functionality and a bigger user-base.

pytest does not solely act as a test collector and runner, but is
also a full-fledged unittest framework.

As we are moving towards more unit testing, we should agree _now_ on one
framework. I for example picked the native python 'unittest' library
because it's widespread and has a rather complete documentation.

Abhi otoh went for 'pytest'.
After some reading I see some benefits in also switching to it.

Benefits such as:

* writing setup_functions for specific blocks of tests rather than for
an entire test class.
* less boilerplate: a plain assert vs. self.assertTrue(), for example

We also will use mocking quite extensively. We have only one option
here: Mock, which is a standalone lib in py2.7 but was adopted in the
native unittest lib in py3.x. (There is also a thin wrapper for pytest
called pytest-mocker but this is out of scope here I guess.)

I don't have a strong opinion because I basically just started reading..
If anyone has a profound python background, please speak up now :)

I currently lean towards pytest + mock.

Thoughts?

--
Freundliche Grüße - Kind regards,

Joshua Schmid

SUSE Enterprise Storage
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nürnberg
--------------------------------------------------------------------------------------------------------------------
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu, Graham Norton, HRB 21284 (AG Nürnberg)
--------------------------------------------------------------------------------------------------------------------

From ncutler at suse.cz  Tue Jan 31 06:06:01 2017
From: ncutler at suse.cz (Nathan Cutler)
Date: Tue, 31 Jan 2017 14:06:01 +0100
Subject: [Deepsea-users] Choosing unittesting frameworks
In-Reply-To: <576b01e2-f40a-442d-3b7a-f9f072f51692@suse.de>
References: <576b01e2-f40a-442d-3b7a-f9f072f51692@suse.de>
Message-ID:

I asked Loic, here's his reply:

(01:55:08 PM) loicd: pytest is *wwwaaaaaaay* better
(02:01:38 PM) smithfarm: can I quote you? ;-)
(02:01:47 PM) loicd: yes
(02:02:18 PM) loicd: seriously, there is nothing but pytest nowadays

On 01/31/2017 01:12 PM, Joshua Schmid wrote:
> Hey list,
>
> first of all I'd like to get the terminology straight.
>
> We use 'tox', which serves as our virtualenv manager and allows us:
>
> * checking your package installs correctly with different Python
> versions and interpreters
> * running your tests in each of the environments, configuring your test
> tool of choice
> * acting as a frontend to Continuous Integration servers, greatly
> reducing boilerplate and merging CI and shell-based testing.
>
> tox internally executes 'pytest', which currently acts (for us) as a
> test collector that scans directories for 'test_*' files or methods
> starting with 'test_'.
>
> pytest could be replaced with nosetests, but after reading some articles
> pytest seems to have more functionality and a bigger user-base.
>
> pytest does not solely act as a test collector and runner, but is
> also a full-fledged unittest framework.
>
> As we are moving towards more unit testing, we should agree _now_ on one
> framework. I for example picked the native python 'unittest' library
> because it's widespread and has a rather complete documentation.
>
> Abhi otoh went for 'pytest'. After some reading I see some benefits in
> also switching to it.
>
> Benefits such as:
>
> * writing setup_functions for specific blocks of tests rather than for
> an entire test class.
> * less boilerplate: a plain assert vs. self.assertTrue(), for example
>
>
> We also will use mocking quite extensively. We have only one option
> here: Mock, which is a standalone lib in py2.7 but was adopted in the
> native unittest lib in py3.x. (There is also a thin wrapper for pytest
> called pytest-mocker but this is out of scope here I guess.)
>
> I don't have a strong opinion because I basically just started reading..
> If anyone has a profound python background, please speak up now :) > > I currently lean towards pytest + mock. > > Thoughts? > -- Nathan Cutler Software Engineer Distributed Storage SUSE LINUX, s.r.o. Tel.: +420 284 084 037 From abhishek at suse.com Tue Jan 31 06:21:56 2017 From: abhishek at suse.com (Abhishek L) Date: Tue, 31 Jan 2017 14:21:56 +0100 Subject: [Deepsea-users] Choosing unittesting frameworks In-Reply-To: References: <576b01e2-f40a-442d-3b7a-f9f072f51692@suse.de> Message-ID: <87o9ynuzyj.fsf@suse.com> Nathan Cutler writes: > I asked Loic, here's his reply: > > (01:55:08 PM) loicd: pytest is *wwwaaaaaaay* better > (02:01:38 PM) smithfarm: can I quote you? ;-) > (02:01:47 PM) loicd: yes > (02:02:18 PM) loicd: seriously, there is nothing but pytest nowadays yay! > > On 01/31/2017 01:12 PM, Joshua Schmid wrote: >> Hey list, >> >> first of all I'd like to get the terminology straight. >> >> We use 'tox' serves us as a virtualenv manager and allows us: >> >> * checking your package installs correctly with different Python >> versions and interpreters >> * running your tests in each of the environments, configuring your test >> tool of choice >> * acting as a frontend to Continuous Integration servers, greatly >> reducing boilerplate and merging CI and shell-based testing. >> >> tox internally executes 'pytest' which acts currently (for us) as a >> testcollector that scans directories for 'test_*' files or methods >> starting with 'test_'. >> >> pytest could be replaced with nosetests but after reading some articles >> pytest seem to have more functionality and a bigger user-base. >> >> pytest does not solely act act as a testcollector and runner, but is >> also a full-fledged unittest framework. >> >> as we are moving towards more unittesting, we should agree _now_ on one >> framework. I for example picked the native python 'unittest' library >> because it's widespread and has a rather complete documentation. >> >> Abhi otoh went for 'pytest'. After some reading I see some benefits in >> also switching to it. >> >> benefits suchs as: >> >> * writing setup_functions for specific blocks of tests rather than for >> an entire test class. >> * less boilerplate assert vs self.asserTrue(e.g) This was pretty much my motivation for using pytest style tests as well, using just assert, and minimal boilerplate, no need to write the main function to just call unittest.main etc. Also their fixtures _look_ neat compared to the unittest. >> >> >> we also will use mocking quite extensively. We have only one options >> here: Mock, which is a standalone lib in py2.7 but was adopted in the >> native unittest lib in py3.x. (There is also a thin wrapper for pytest >> called pytest-mocker but this is out of scope here I guess.) Yeah I also expect we'll have to use mock heavily, and mock seems good enough and well documented. There is a module called salttesting which salt upstream uses[1], if we find at some stage that mocking is not sufficient for a few salt calls (not sure, but maybe __pillar__, __salt__ calls have some magic around them) we may consider adding this dependency. But again as I understand this wouldn't modify any existing tests in the framework we choose. [1]: https://docs.saltstack.com/en/latest/topics/tutorials/writing_tests.html >> >> I don't have a strong opinion because I basically just started reading.. >> If anyone has a profound python background, please speak up now :) >> >> I currently lean towards pytest + mock. >> >> Thoughts? 
>> -- Abhishek Lekshmanan SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, HRB 21284 (AG N?rnberg) From jschmid at suse.de Tue Jan 31 06:28:22 2017 From: jschmid at suse.de (Joshua Schmid) Date: Tue, 31 Jan 2017 14:28:22 +0100 Subject: [Deepsea-users] Choosing unittesting frameworks In-Reply-To: References: <576b01e2-f40a-442d-3b7a-f9f072f51692@suse.de> Message-ID: <54d1a9f9-6d23-5651-53a3-cb73563a9376@suse.de> On 01/31/2017 02:06 PM, Nathan Cutler wrote: > I asked Loic, here's his reply: > > (01:55:08 PM) loicd: pytest is *wwwaaaaaaay* better > (02:01:38 PM) smithfarm: can I quote you? ;-) > (02:01:47 PM) loicd: yes > (02:02:18 PM) loicd: seriously, there is nothing but pytest nowadays that's a clear statement :) > > On 01/31/2017 01:12 PM, Joshua Schmid wrote: >> Hey list, >> >> first of all I'd like to get the terminology straight. >> >> We use 'tox' serves us as a virtualenv manager and allows us: >> >> * checking your package installs correctly with different Python >> versions and interpreters >> * running your tests in each of the environments, configuring your test >> tool of choice >> * acting as a frontend to Continuous Integration servers, greatly >> reducing boilerplate and merging CI and shell-based testing. >> >> tox internally executes 'pytest' which acts currently (for us) as a >> testcollector that scans directories for 'test_*' files or methods >> starting with 'test_'. >> >> pytest could be replaced with nosetests but after reading some articles >> pytest seem to have more functionality and a bigger user-base. >> >> pytest does not solely act act as a testcollector and runner, but is >> also a full-fledged unittest framework. >> >> as we are moving towards more unittesting, we should agree _now_ on one >> framework. I for example picked the native python 'unittest' library >> because it's widespread and has a rather complete documentation. >> >> Abhi otoh went for 'pytest'. After some reading I see some benefits in >> also switching to it. >> >> benefits suchs as: >> >> * writing setup_functions for specific blocks of tests rather than for >> an entire test class. >> * less boilerplate assert vs self.asserTrue(e.g) >> >> >> we also will use mocking quite extensively. We have only one options >> here: Mock, which is a standalone lib in py2.7 but was adopted in the >> native unittest lib in py3.x. (There is also a thin wrapper for pytest >> called pytest-mocker but this is out of scope here I guess.) >> >> I don't have a strong opinion because I basically just started reading.. >> If anyone has a profound python background, please speak up now :) >> >> I currently lean towards pytest + mock. >> >> Thoughts? >> > -- Freundliche Gr??e - Kind regards, Joshua Schmid SUSE Enterprise Storage SUSE Linux GmbH - Maxfeldstr. 
5 - 90409 Nürnberg
--------------------------------------------------------------------------------------------------------------------
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu, Graham Norton, HRB 21284 (AG Nürnberg)
--------------------------------------------------------------------------------------------------------------------

From lgrimmer at suse.com  Tue Jan 31 06:42:41 2017
From: lgrimmer at suse.com (Lenz Grimmer)
Date: Tue, 31 Jan 2017 14:42:41 +0100
Subject: [Deepsea-users] Choosing unittesting frameworks
In-Reply-To: <576b01e2-f40a-442d-3b7a-f9f072f51692@suse.de>
References: <576b01e2-f40a-442d-3b7a-f9f072f51692@suse.de>
Message-ID: <2fdcdab4-1675-ff5f-e7ce-35750d24d235@suse.com>

Hi,

On 01/31/2017 01:12 PM, Joshua Schmid wrote:
> we also will use mocking quite extensively. We have only one option
> here: Mock, which is a standalone lib in py2.7 but was adopted in the
> native unittest lib in py3.x. (There is also a thin wrapper for pytest
> called pytest-mocker but this is out of scope here I guess.)
>
> I don't have a strong opinion because I basically just started reading..
> If anyone has a profound python background, please speak up now :)
>
> I currently lean towards pytest + mock.

FWIW, we use the Django Test framework for unit testing in openATTIC, which is based on the "unittest" Python standard library module.

https://docs.djangoproject.com/en/dev/topics/testing/

This module is also a core component of our "gatling" REST API testing framework. We also make use of "mock" in many places.

Lenz

--
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 181 bytes
Desc: OpenPGP digital signature
URL:
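To make the pytest-versus-unittest comparison in this thread concrete, here is a minimal sketch of the proposed style: plain assert statements, a fixture instead of a per-class setUp, and Mock for stubbing out a shell call. The helper under test (list_disks) and its behaviour are hypothetical, not actual DeepSea code; on py2.7 the import would be 'import mock' instead of 'from unittest import mock'.

# test_disks.py -- pytest collects this automatically because of the test_* naming
from unittest import mock

import pytest


def list_disks(run_cmd):
    # Hypothetical helper under test: turns lsblk output into device paths.
    output = run_cmd("lsblk -n -o NAME")
    return ["/dev/" + name for name in output.split()]


@pytest.fixture
def fake_run_cmd():
    # Fixture instead of a per-class setUp: one canned lsblk answer for this block of tests.
    return mock.Mock(return_value="sda sdb")


def test_list_disks_prefixes_dev(fake_run_cmd):
    # Plain assert, no self.assertEqual boilerplate.
    assert list_disks(fake_run_cmd) == ["/dev/sda", "/dev/sdb"]


def test_list_disks_calls_lsblk_once(fake_run_cmd):
    list_disks(fake_run_cmd)
    fake_run_cmd.assert_called_once_with("lsblk -n -o NAME")

Running 'pytest' in the directory containing this file (or 'tox', with pytest configured as the test command) picks up both tests.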