From ncutler at suse.cz Thu Jun 1 05:12:29 2017 From: ncutler at suse.cz (Nathan Cutler) Date: Thu, 1 Jun 2017 13:12:29 +0200 Subject: [Deepsea-users] Fwd: [sepia] python setuptools is broken again "No module named six" In-Reply-To: <80a14874-c79c-4d39-2780-e61befa9e3d4@redhat.com> References: <80a14874-c79c-4d39-2780-e61befa9e3d4@redhat.com> Message-ID: <4b0256e4-5172-3c70-f432-ddc76c9a45be@suse.cz> JFYI (see below) tl;dr If Python suddenly starts complaining about "No module named six", the workaround is to manually install the python-six module in the environment first. -------- Forwarded Message -------- Subject: [sepia] python setuptools is broken again "No module named six" Date: Wed, 31 May 2017 20:22:33 -0700 From: Dan Mick To: sepia at lists.ceph.com , ncutler at suse.cz, ceph-devel python setuptools is broken again: https://github.com/pypa/setuptools/issues/1042 This probably breaks nearly everything. -- Dan Mick Red Hat, Inc. Ceph docs: http://ceph.com/docs _______________________________________________ Sepia mailing list Sepia at lists.ceph.com http://lists.ceph.com/listinfo.cgi/sepia-ceph.com From jfajerski at suse.com Thu Jun 1 10:30:32 2017 From: jfajerski at suse.com (Jan Fajerski) Date: Thu, 1 Jun 2017 18:30:32 +0200 Subject: [Deepsea-users] SES4, What is the correct process to remove cephfs? In-Reply-To: References: Message-ID: <20170601163031.ph4ahai46avw57tj@jf_suse_laptop> Hi Boyd, I came across this recently too and I'll add the functionality to DeepSea before SES5 is released. Until then you need to deactivate the MDS daemon(s) before removing the file system. The following steps should get you there: salt '*' cmd.run 'systemctl stop ceph-mds.target' # stop all mds daemons # then on the master run: ceph mds fail 0 # repeat this for every mds you have # now you can remove the file system with ceph fs rm cephfs --yes-i-really-mean-it # start up your mds daemons again salt '*' cmd.run 'systemctl start ceph-mds.target' # and you're ready to recreate your CephFS Hope that helps! Best, Jan On Tue, May 30, 2017 at 05:51:56PM +0000, Boyd Memmott wrote: > Hi All > > > I am somewhat new to the SES product. I have been experimenting with > cephfs and would like to remove and add again. But do not find > documentation on the process. I did install ceph with DeepSea. I > commented out the role-mds in policy.cfg and ran 'salt-run state.orch > ceph.stage.2' and 3. > > > Yet, 'ceph fs rm cephfs --yes-i-really-mean-it' returns 'Error EINVAL: > all MDS daemons must be inactive before removing filesystem' > > > Any suggestions would be appreciated. > > > Thank you > > Boyd Memmott > SLES L2 Support Engineer > > Email: boyd.memmott at suse.com > [1]SUSE > >References > > 1. http://www.suse.com/ >_______________________________________________ >Deepsea-users mailing list >Deepsea-users at lists.suse.com >http://lists.suse.com/mailman/listinfo/deepsea-users -- Jan Fajerski Engineer Enterprise Storage SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) From ejackson at suse.com Thu Jun 1 11:08:22 2017 From: ejackson at suse.com (Eric Jackson) Date: Thu, 01 Jun 2017 13:08:22 -0400 Subject: [Deepsea-users] SES4, What is the correct process to remove cephfs? In-Reply-To: <20170601163031.ph4ahai46avw57tj@jf_suse_laptop> References: <20170601163031.ph4ahai46avw57tj@jf_suse_laptop> Message-ID: <3497739.hqYnGrJDEi@ruby> Hi Boyd, Sorry for the delay, but your message was waiting for a moderator. Removing the role and adding it back should be sufficient as well.
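(For illustration only -- the exact path and glob depend on your own setup -- the policy.cfg line being toggled in this thread typically looks like:

role-mds/cluster/*.sls

Commenting it out with a leading # drops the MDS role assignment for the matching minions; restoring the line later adds the role back.)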
The general procedure for removing and re-adding any Ceph functionality is 1) comment out/edit the line in your policy.cfg to exclude the role assignment for the specific node 2) Run Stage 2-5 (Technically, you can run Stage 2 and Stage 5 if you *know* that you are not adding or migrating anything else.) 3) Uncomment/restore the previous policy.cfg 4) Run Stage 2-4 If you are looking for what steps are really performed, take a look in /srv/salt/ceph/remove and /srv/salt/ceph/rescind. The remove steps happen on the master node (i.e. typically need Ceph admin keyring access). The rescind steps happen on the specific node. Eric On Thursday, June 01, 2017 06:30:32 PM Jan Fajerski wrote: > Hi Boyd, > I came across this recently too and I'll add the functionality to DeepSea > before SES5 is released. > Until then you need to deactivate the MDS daemon(s) before removing the file > system. The following steps should get you there: > > salt '*' cmd.run 'systemctl stop ceph-mds.target' # stop all mds daemons > # then on the master run: > ceph mds fail 0 # repeat this for every mds you have > # now you can remove the file system with > ceph fs rm cephfs ?yes-i-really-mean-it > # start up your mds daemons again > salt '*' cmd.run 'systemctl start ceph-mds.target' > # and you're read to recreate your CephFS > > Hope that helps! > Best, > Jan > > On Tue, May 30, 2017 at 05:51:56PM +0000, Boyd Memmott wrote: > > Hi All > > > > > > I am somewhat new to SES product. I have been experimenting with > > cephfs and would like to remove and add again. But do not find > > documentation on the process. I did install ceph with deapsea. I > > commented out the role-mds in policy.cfg and ran ?salt-run state.orch > > ceph.stage.2? and 3. > > > > > > Yet, ?ceph fs rm cephfs ?yes-i-really-mean-it? returns ? Error EINVAL: > > all MDS deamons must be inactive before removing filesystem? > > > > > > Any suggestions would be appreciated. > > > > > > Thank you > > > > Boyd Memmott > > SLES L2 Support Engineer > > > > Email: boyd.memmott at suse.com > > [1]SUSE > > > >References > > > > 1. http://www.suse.com/ > > > >_______________________________________________ > >Deepsea-users mailing list > >Deepsea-users at lists.suse.com > >http://lists.suse.com/mailman/listinfo/deepsea-users -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 473 bytes Desc: This is a digitally signed message part. URL: From Robert.Grosschopff at suse.com Fri Jun 2 02:11:07 2017 From: Robert.Grosschopff at suse.com (Robert Grosschopff) Date: Fri, 2 Jun 2017 08:11:07 +0000 Subject: [Deepsea-users] SES4, What is the correct process to remove cephfs? Message-ID: stage.5 will remove services. You will have to run stage.2 to stage.5 Robert On 30.05.17, 19:51, "deepsea-users-bounces at lists.suse.com on behalf of Boyd Memmott" wrote: Hi All I am somewhat new to SES product. I have been experimenting with cephfs and would like to remove and add again. But do not find documentation on the process. I did install ceph with deapsea. I commented out the role-mds in policy.cfg and ran ?salt-run state.orch ceph.stage.2? and 3. Yet, ?ceph fs rm cephfs ?yes-i-really-mean-it? returns ? Error EINVAL: all MDS deamons must be inactive before removing filesystem? Any suggestions would be appreciated. 
Thank you Boyd Memmott SLES L2 Support Engineer Email: boyd.memmott at suse.com From Martin.Weiss at suse.com Fri Jun 2 02:21:02 2017 From: Martin.Weiss at suse.com (Martin Weiss) Date: Fri, 02 Jun 2017 02:21:02 -0600 Subject: [Deepsea-users] Antw: Re: SES4, What is the correct process to remove cephfs? In-Reply-To: References: Message-ID: <59311FEE0200001C002E4D34@prv-mh.provo.novell.com> But please be very careful with removing - that can also cause data loss if there are typos. Martin stage.5 will remove services. You will have to run stage.2 to stage.5 Robert On 30.05.17, 19:51, "deepsea-users-bounces at lists.suse.com on behalf of Boyd Memmott" wrote: Hi All I am somewhat new to the SES product. I have been experimenting with cephfs and would like to remove and add again. But do not find documentation on the process. I did install ceph with DeepSea. I commented out the role-mds in policy.cfg and ran 'salt-run state.orch ceph.stage.2' and 3. Yet, 'ceph fs rm cephfs --yes-i-really-mean-it' returns 'Error EINVAL: all MDS daemons must be inactive before removing filesystem' Any suggestions would be appreciated. Thank you Boyd Memmott SLES L2 Support Engineer Email: boyd.memmott at suse.com _______________________________________________ Deepsea-users mailing list Deepsea-users at lists.suse.com http://lists.suse.com/mailman/listinfo/deepsea-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From Boyd.Memmott at suse.com Fri Jun 9 12:02:50 2017 From: Boyd.Memmott at suse.com (Boyd Memmott) Date: Fri, 9 Jun 2017 18:02:50 +0000 Subject: [Deepsea-users] SES4, What is the correct process to remove cephfs? In-Reply-To: <20170601163031.ph4ahai46avw57tj@jf_suse_laptop> References: <20170601163031.ph4ahai46avw57tj@jf_suse_laptop> Message-ID: Thanks for the insights. I will try it again when I get time... Here is how I accomplish the task: 1- Stop mds services on the nodes running the service: The following nodes will be running the mds service. cat /srv/pillar/ceph/proposals/policy.cfg Look for: role-mds/cluster/*.sls ssh to each of these nodes and stop the service. systemctl stop ceph-mds.target 2- List the name of the cephfs filesystems: ceph fs ls name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ] Note: "data pools" is plural, so it may support more than one. 3- Remove the cephfs with the following command: ceph fs rm cephfs [--yes-i-really-mean-it] 4- Remove the corresponding pools: ceph osd pool ls ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it ceph osd pool ls Now start mds services on all nodes: systemctl start ceph-mds.target thanks Boyd Memmott SLES L2 Support Engineer Email: boyd.memmott at suse.com -----Original Message----- From: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Jan Fajerski Sent: Thursday, June 01, 2017 10:31 AM To: deepsea-users at lists.suse.com Subject: Re: [Deepsea-users] SES4, What is the correct process to remove cephfs? Hi Boyd, I came across this recently too and I'll add the functionality to DeepSea before SES5 is released. Until then you need to deactivate the MDS daemon(s) before removing the file system.
The following steps should get you there: salt '*' cmd.run 'systemctl stop ceph-mds.target' # stop all mds daemons # then on the master run: ceph mds fail 0 # repeat this for every mds you have # now you can remove the file system with ceph fs rm cephfs ?yes-i-really-mean-it # start up your mds daemons again salt '*' cmd.run 'systemctl start ceph-mds.target' # and you're read to recreate your CephFS Hope that helps! Best, Jan On Tue, May 30, 2017 at 05:51:56PM +0000, Boyd Memmott wrote: > Hi All > > > I am somewhat new to SES product. I have been experimenting with > cephfs and would like to remove and add again. But do not find > documentation on the process. I did install ceph with deapsea. I > commented out the role-mds in policy.cfg and ran ?salt-run state.orch > ceph.stage.2? and 3. > > > Yet, ?ceph fs rm cephfs ?yes-i-really-mean-it? returns ? Error EINVAL: > all MDS deamons must be inactive before removing filesystem? > > > Any suggestions would be appreciated. > > > Thank you > > Boyd Memmott > SLES L2 Support Engineer > > Email: boyd.memmott at suse.com > [1]SUSE > >References > > 1. http://www.suse.com/ >_______________________________________________ >Deepsea-users mailing list >Deepsea-users at lists.suse.com >http://lists.suse.com/mailman/listinfo/deepsea-users -- Jan Fajerski Engineer Enterprise Storage SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, HRB 21284 (AG N?rnberg) _______________________________________________ Deepsea-users mailing list Deepsea-users at lists.suse.com http://lists.suse.com/mailman/listinfo/deepsea-users From Boyd.Memmott at suse.com Fri Jun 9 12:05:40 2017 From: Boyd.Memmott at suse.com (Boyd Memmott) Date: Fri, 9 Jun 2017 18:05:40 +0000 Subject: [Deepsea-users] SES4, What is the correct process to remove cephfs? In-Reply-To: <3497739.hqYnGrJDEi@ruby> References: <20170601163031.ph4ahai46avw57tj@jf_suse_laptop> <3497739.hqYnGrJDEi@ruby> Message-ID: I was originally thinking down these lines. But did not connect the dots with Stage 5. Good information. thanks Boyd Memmott SLES L2?Support Engineer Email: boyd.memmott at suse.com -----Original Message----- From: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Eric Jackson Sent: Thursday, June 01, 2017 11:08 AM To: Discussions about the DeepSea management framework for Ceph Subject: Re: [Deepsea-users] SES4, What is the correct process to remove cephfs? Hi Boyd, Sorry for the delay, but your message was waiting for a moderator. Removing the role and adding it back should be sufficient as well. The general procedure for removing and re-adding any Ceph functionality is 1) comment out/edit the line in your policy.cfg to exclude the role assignment for the specific node 2) Run Stage 2-5 (Technically, you can run Stage 2 and Stage 5 if you *know* that you are not adding or migrating anything else.) 3) Uncomment/restore the previous policy.cfg 4) Run Stage 2-4 If you are looking for what steps are really performed, take a look in /srv/salt/ceph/remove and /srv/salt/ceph/rescind. The remove steps happen on the master node (i.e. typically need Ceph admin keyring access). The rescind steps happen on the specific node. Eric On Thursday, June 01, 2017 06:30:32 PM Jan Fajerski wrote: > Hi Boyd, > I came across this recently too and I'll add the functionality to > DeepSea before SES5 is released. > Until then you need to deactivate the MDS daemon(s) before removing > the file system. 
The following steps should get you there: > > salt '*' cmd.run 'systemctl stop ceph-mds.target' # stop all mds > daemons # then on the master run: > ceph mds fail 0 # repeat this for every mds you have # now you can > remove the file system with ceph fs rm cephfs ?yes-i-really-mean-it # > start up your mds daemons again salt '*' cmd.run 'systemctl start > ceph-mds.target' > # and you're read to recreate your CephFS > > Hope that helps! > Best, > Jan > > On Tue, May 30, 2017 at 05:51:56PM +0000, Boyd Memmott wrote: > > Hi All > > > > > > I am somewhat new to SES product. I have been experimenting with > > cephfs and would like to remove and add again. But do not find > > documentation on the process. I did install ceph with deapsea. I > > commented out the role-mds in policy.cfg and ran ?salt-run state.orch > > ceph.stage.2? and 3. > > > > > > Yet, ?ceph fs rm cephfs ?yes-i-really-mean-it? returns ? Error EINVAL: > > all MDS deamons must be inactive before removing filesystem? > > > > > > Any suggestions would be appreciated. > > > > > > Thank you > > > > Boyd Memmott > > SLES L2 Support Engineer > > > > Email: boyd.memmott at suse.com > > [1]SUSE > > > >References > > > > 1. http://www.suse.com/ > > > >_______________________________________________ > >Deepsea-users mailing list > >Deepsea-users at lists.suse.com > >http://lists.suse.com/mailman/listinfo/deepsea-users From Supriti.Singh at suse.com Wed Jun 14 09:44:26 2017 From: Supriti.Singh at suse.com (Supriti Singh) Date: Wed, 14 Jun 2017 17:44:26 +0200 Subject: [Deepsea-users] NFS Ganesha custom roles Message-ID: <594175FA02000042001D57D0@smtp.nue.novell.com> Hello, I am writing the mail to explain the logic behind introducing custom roles for NFS-Ganesha in Deepsea. Normal use case: ------------------------- Deepsea provides a default "Ganesha" role. This role uses the configuration from the /srv/salt/ceph/ganesha/files/ganesha.conf.j2. If you look at this file it defines both FSALs, cephfs and rgw. For cephfs, things are pretty simple. But for rgw, we also need to populate the FSAL block with user_id, access_key and secret_access_key. To do so we need a list of users. That should be provided in the file /srv/pillar/ceph/rgw.sls. Example rgw.sls: rgw_configurations: rgw: users: - { uid: "demo", name: "Demo", email: "demo at demo.nil" } - { uid: "demo1", name: "Demo1", email: "demo1 at demo.nil" } Ganesha will read this file and populate the RGW FSAL with users "demo" and "demo1". Custom roles: -------------------- In the default setup, we are running both the FSALs on the same ganesha server node. But there may be a case where user wants to run NFS-Ganesha + Ceph FSAL on one set of nodes and NFS-Ganesha + RGW FSAL on other set of nodes. With just the role "ganesha", its not possible. There is another case possible, where admin may want to run NFS-Ganesha + RGW with user1, and for other user2 on other node. To handle such cases we added support for custom roles. 1. NFS-Ganesha server with CephFS and RGW on different nodes * Assign custom roles, ganesha_cephfs and ganesha_rgw to nodes (in policy.cfg) * Add new conf file, ganesha_cephfs.conf.j2 and ganesha_rgw.conf.j2, with their respective FSALs. * Add keyring file, ganesha_cephfs.j2 and ganesha_rgw.j2 * Update rgw.sls to reflect new ganesha_configurations and rgw_configurations. 
* Run the stages upto 4 In this case rgw.sls: rgw_configurations: ganesha_rgw: users: - { uid: "demo", name: "Demo", email: "demo at demo.nil" } - { uid: "demo1", name: "Demo1", email: "demo1 at demo.nil" } ganesha_configurations: - ganesha_rgw: 2. NFS-Ganesha + Custom RGW: The other use case could be where admin wants to run NFS-Ganesha RGW FSAL but with different users on different nodes. Let say these different set of nodes are named as "silver" and "gold". So, we want only silver users to run on silver node. Similarly, only gold users to run on gold nodes. To do so, we define the following rgw.sls rgw.sls ----------- rgw_configurations: silver: users: - { uid: "demo", name: "Demo", email: "demo at demo.nil" } gold: users: - { uid: "demo", name: "Demo", email: "demo at demo.nil" } ganesha_configurations: - silver - gold * Assign custom roles,silver and gold to nodes (in policy.cfg) * Add new conf file, silver.conf.j2 and gold.conf.j2, with their respective FSALs. * Add keyring file, silver.j2 and gold.j2 * Update rgw.sls to reflect new ganesha_configurations and rgw_configurations. Similar to above example. * Run the stages upto 4 SIlver and gold are just names. Thanks, Supriti ------ Supriti Singh??SUSE Linux GmbH, GF: Felix Imend??rffer, Jane Smithard, Graham Norton, HRB 21284 (AG N??rnberg) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncutler at suse.cz Wed Jun 14 11:18:23 2017 From: ncutler at suse.cz (Nathan Cutler) Date: Wed, 14 Jun 2017 19:18:23 +0200 Subject: [Deepsea-users] NFS Ganesha custom roles In-Reply-To: <594175FA02000042001D57D0@smtp.nue.novell.com> References: <594175FA02000042001D57D0@smtp.nue.novell.com> Message-ID: <7f1e520a-01e9-1614-886c-1d2483306efb@suse.cz> I like this idea, I think. If I understand correctly, with the default "ganesha" role I can do cephfs FSAL by itself, or cephfs + rgw FSAL, but I cannot do rgw FSAL by itself. So I will need the custom role ganesha_rgw for that. On 06/14/2017 05:44 PM, Supriti Singh wrote: > Hello, > > I am writing the mail to explain the logic behind introducing custom > roles for NFS-Ganesha in Deepsea. > > Normal use case: > ------------------------- > Deepsea provides a default "Ganesha" role. This role uses the > configuration from the /srv/salt/ceph/ganesha/files/ganesha.conf.j2. If > you look at this file it defines both FSALs, cephfs and rgw. For cephfs, > things are pretty simple. But for rgw, we also need to populate the FSAL > block with user_id, access_key and secret_access_key. To do so we need a > list of users. That should be provided in the file /srv/pillar/ceph/rgw.sls. > > Example rgw.sls: > rgw_configurations: > rgw: > users: > - { uid: "demo", name: "Demo", email: "demo at demo.nil" } > - { uid: "demo1", name: "Demo1", email: "demo1 at demo.nil" } > > Ganesha will read this file and populate the RGW FSAL with users "demo" > and "demo1". > > Custom roles: > -------------------- > In the default setup, we are running both the FSALs on the same ganesha > server node. But there may be a case where user wants to run NFS-Ganesha > + Ceph FSAL on one set of nodes and NFS-Ganesha + RGW FSAL on other set > of nodes. With just the role "ganesha", its not possible. There is > another case possible, where admin may want to run NFS-Ganesha + RGW > with user1, and for other user2 on other node. To handle such cases we > added support for custom roles. > > 1. 
NFS-Ganesha server with CephFS and RGW on different nodes > > * Assign custom roles, ganesha_cephfs and ganesha_rgw to nodes (in > policy.cfg) > * Add new conf file, ganesha_cephfs.conf.j2 and ganesha_rgw.conf.j2, > with their respective FSALs. > * Add keyring file, ganesha_cephfs.j2 and ganesha_rgw.j2 > * Update rgw.sls to reflect new ganesha_configurations and > rgw_configurations. > * Run the stages upto 4 > > In this case rgw.sls: > > rgw_configurations: > ganesha_rgw: > users: > - { uid: "demo", name: "Demo", email: "demo at demo.nil" } > - { uid: "demo1", name: "Demo1", email: "demo1 at demo.nil" } > > > ganesha_configurations: > > - ganesha_rgw: > > > 2. NFS-Ganesha + Custom RGW: > > The other use case could be where admin wants to run NFS-Ganesha RGW > FSAL but with different users on different nodes. Let say these > different set of nodes are named as "silver" and "gold". > > So, we want only silver users to run on silver node. Similarly, only > gold users to run on gold nodes. To do so, we define the following rgw.sls > > > rgw.sls > > ----------- > > rgw_configurations: > silver: > users: > - { uid: "demo", name: "Demo", email: "demo at demo.nil" } > > gold: > users: > - { uid: "demo", name: "Demo", email: "demo at demo.nil" } > > > > ganesha_configurations: > > - silver > - gold > > * Assign custom roles,silver and gold to nodes (in policy.cfg) > * Add new conf file, silver.conf.j2 and gold.conf.j2, with their > respective FSALs. > * Add keyring file, silver.j2 and gold.j2 > * Update rgw.sls to reflect new ganesha_configurations and > rgw_configurations. Similar to above example. > * Run the stages upto 4 > > SIlver and gold are just names. > > > Thanks, > > Supriti > > > ------ > Supriti Singh > SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, > HRB 21284 (AG N?rnberg) > > > > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users > -- Nathan Cutler Software Engineer Distributed Storage SUSE LINUX, s.r.o. Tel.: +420 284 084 037 From Supriti.Singh at suse.com Wed Jun 14 13:38:48 2017 From: Supriti.Singh at suse.com (Supriti Singh) Date: Wed, 14 Jun 2017 21:38:48 +0200 Subject: [Deepsea-users] NFS Ganesha custom roles In-Reply-To: <7f1e520a-01e9-1614-886c-1d2483306efb@suse.cz> References: <594175FA02000042001D57D0@smtp.nue.novell.com> <7f1e520a-01e9-1614-886c-1d2483306efb@suse.cz> Message-ID: <5941ACE802000042001D581F@smtp.nue.novell.com> With default ganesha you have possible options: 1. Ganesha + RGW + Cephfs When both mds and rgw are defined. 2. Ganesha + RGW: If you don't have any mfs role define, but have a rgw, ganesha.conf will define export block for each user defined in rgw.sls under "rgw_configurations". 3. Ganesha + Cephfs Only mds role. The example ganesha_rgw and ganesha_cephfs is applicable when admin wants to run only Ganesha+Cephfs on one node and Ganesha +RGW on other. With defaul "ganesha" role its not possible. ------ Supriti Singh??SUSE Linux GmbH, GF: Felix Imend??rffer, Jane Smithard, Graham Norton, HRB 21284 (AG N??rnberg) >>> Nathan Cutler 06/14/17 7:18 PM >>> I like this idea, I think. If I understand correctly, with the default "ganesha" role I can do cephfs FSAL by itself, or cephfs + rgw FSAL, but I cannot do rgw FSAL by itself. So I will need the custom role ganesha_rgw for that. 
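(Illustrative sketch only, not taken from the thread: the host globs below are placeholders. Assigning the custom roles in policy.cfg would presumably follow the same role-NAME/cluster/GLOB.sls pattern used for the built-in roles, e.g.

role-ganesha_cephfs/cluster/ganesha-a*.sls    # minions that should serve the CephFS FSAL only
role-ganesha_rgw/cluster/ganesha-b*.sls       # minions that should serve the RGW FSAL only

so that one set of minions receives the CephFS-only gateway configuration and another the RGW-only one.)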
On 06/14/2017 05:44 PM, Supriti Singh wrote: > Hello, > > I am writing the mail to explain the logic behind introducing custom > roles for NFS-Ganesha in Deepsea. > > Normal use case: > ------------------------- > Deepsea provides a default "Ganesha" role. This role uses the > configuration from the /srv/salt/ceph/ganesha/files/ganesha.conf.j2. If > you look at this file it defines both FSALs, cephfs and rgw. For cephfs, > things are pretty simple. But for rgw, we also need to populate the FSAL > block with user_id, access_key and secret_access_key. To do so we need a > list of users. That should be provided in the file /srv/pillar/ceph/rgw.sls. > > Example rgw.sls: > rgw_configurations: > rgw: > users: > - { uid: "demo", name: "Demo", email: "demo at demo.nil" } > - { uid: "demo1", name: "Demo1", email: "demo1 at demo.nil" } > > Ganesha will read this file and populate the RGW FSAL with users "demo" > and "demo1". > > Custom roles: > -------------------- > In the default setup, we are running both the FSALs on the same ganesha > server node. But there may be a case where user wants to run NFS-Ganesha > + Ceph FSAL on one set of nodes and NFS-Ganesha + RGW FSAL on other set > of nodes. With just the role "ganesha", its not possible. There is > another case possible, where admin may want to run NFS-Ganesha + RGW > with user1, and for other user2 on other node. To handle such cases we > added support for custom roles. > > 1. NFS-Ganesha server with CephFS and RGW on different nodes > > * Assign custom roles, ganesha_cephfs and ganesha_rgw to nodes (in > policy.cfg) > * Add new conf file, ganesha_cephfs.conf.j2 and ganesha_rgw.conf.j2, > with their respective FSALs. > * Add keyring file, ganesha_cephfs.j2 and ganesha_rgw.j2 > * Update rgw.sls to reflect new ganesha_configurations and > rgw_configurations. > * Run the stages upto 4 > > In this case rgw.sls: > > rgw_configurations: > ganesha_rgw: > users: > - { uid: "demo", name: "Demo", email: "demo at demo.nil" } > - { uid: "demo1", name: "Demo1", email: "demo1 at demo.nil" } > > > ganesha_configurations: > > - ganesha_rgw: > > > 2. NFS-Ganesha + Custom RGW: > > The other use case could be where admin wants to run NFS-Ganesha RGW > FSAL but with different users on different nodes. Let say these > different set of nodes are named as "silver" and "gold". > > So, we want only silver users to run on silver node. Similarly, only > gold users to run on gold nodes. To do so, we define the following rgw.sls > > > rgw.sls > > ----------- > > rgw_configurations: > silver: > users: > - { uid: "demo", name: "Demo", email: "demo at demo.nil" } > > gold: > users: > - { uid: "demo", name: "Demo", email: "demo at demo.nil" } > > > > ganesha_configurations: > > - silver > - gold > > * Assign custom roles,silver and gold to nodes (in policy.cfg) > * Add new conf file, silver.conf.j2 and gold.con> * Update rgw.sls to reflect new ganesha_configurations and > rgw_configurations. Similar to above example. > * Run the stages upto 4 > > SIlver and gold are just names. > > > Thanks, > > Supriti > > > ------ > Supriti Singh > SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, > HRB 21284 (AG N?rnberg) > > > > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users > -- Nathan Cutler Software Engineer Distributed Storage SUSE LINUX, s.r.o. 
Tel.: +420 284 084 037 _______________________________________________ Deepsea-users mailing list Deepsea-users at lists.suse.com http://lists.suse.com/mailman/listinfo/deepsea-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From Rick.Ashford at suse.com Wed Jun 14 23:30:35 2017 From: Rick.Ashford at suse.com (Rick Ashford) Date: Thu, 15 Jun 2017 05:30:35 +0000 Subject: [Deepsea-users] Orch discovery not finding all disks Message-ID: <33AD790D-B0AD-49C2-B59B-68BEDBCAE9C5@suse.com> I'm installing SES4 via DeepSea and in the discovery stage it appears that the runner only discovers drives that are attached to the same controller as the root disk, which is leaving my profiles- section very underpopulated. Has anybody else seen behavior like this and/or know how to work around? Rick Ashford Sales Engineering Manager - West rick.ashford at suse.com (512)731-3035 From rdias at suse.com Thu Jun 15 00:53:58 2017 From: rdias at suse.com (Ricardo Dias) Date: Thu, 15 Jun 2017 07:53:58 +0100 Subject: [Deepsea-users] NFS Ganesha custom roles In-Reply-To: <5941ACE802000042001D581F@smtp.nue.novell.com> References: <594175FA02000042001D57D0@smtp.nue.novell.com> <7f1e520a-01e9-1614-886c-1d2483306efb@suse.cz> <5941ACE802000042001D581F@smtp.nue.novell.com> Message-ID: <9CE38985-1B27-45A1-8691-100B1267580B@suse.com> Hi, In a near future version of openATTIC, it will support the management of NFS-Ganesha services and exports using either CephFS or RGW storage backends. This feature already allows to manage different nfs-Ganesha services in different hosts, and also allows to create an export assigning it to a particular host/service. Some screenshots available here: https://tracker.openattic.org/plugins/servlet/mobile#issue/OP-2195 Ricardo Dias > On 14 Jun 2017, at 20:38, Supriti Singh wrote: > > With default ganesha you have possible options: > > 1. Ganesha + RGW + Cephfs > When both mds and rgw are defined. > > 2. Ganesha + RGW: > If you don't have any mfs role define, but have a rgw, ganesha.conf will define export block for each user defined in rgw.sls under "rgw_configurations". > > 3. Ganesha + Cephfs > Only mds role. > > The example ganesha_rgw and ganesha_cephfs is applicable when admin wants to run only Ganesha+Cephfs on one node and Ganesha +RGW on other. With defaul "ganesha" role its not possible. > > > > ------ > Supriti Singh > SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, > HRB 21284 (AG N?rnberg) > > >>> Nathan Cutler 06/14/17 7:18 PM >>> > I like this idea, I think. If I understand correctly, with the default > "ganesha" role I can do cephfs FSAL by itself, or cephfs + rgw FSAL, but > I cannot do rgw FSAL by itself. So I will need the custom role > ganesha_rgw for that. > > On 06/14/2017 05:44 PM, Supriti Singh wrote: > > Hello, > > > > I am writing the mail to explain the logic behind introducing custom > > roles for NFS-Ganesha in Deepsea. > > > > Normal use case: > > ------------------------- > > Deepsea provides a default "Ganesha" role. This role uses the > > configuration from the /srv/salt/ceph/ganesha/files/ganesha.conf.j2. If > > you look at this file it defines both FSALs, cephfs and rgw. For cephfs, > > things are pretty simple. But for rgw, we also need to populate the FSAL > > block with user_id, access_key and secret_access_key. To do so we need a > > list of users. That should be provided in the file /srv/pillar/ceph/rgw.sls. 
> > > > Example rgw.sls: > > rgw_configurations: > > rgw: > > users: > > - { uid: "demo", name: "Demo", email: "demo at demo.nil" } > > - { uid: "demo1", name: "Demo1", email: "demo1 at demo.nil" } > > > > Ganesha will read this file and populate the RGW FSAL with users "demo" > > and "demo1". > > > > Custom roles: > > -------------------- > > In the default setup, we are running both the FSALs on the same ganesha > > server node. But there may be a case where user wants to run NFS-Ganesha > > + Ceph FSAL on one set of nodes and NFS-Ganesha + RGW FSAL on other set > > of nodes. With just the role "ganesha", its not possible. There is > > another case possible, where admin may want to run NFS-Ganesha + RGW > > with user1, and for other user2 on other node. To handle such cases we > > added support for custom roles. > > > > 1. NFS-Ganesha server with CephFS and RGW on different nodes > > > > * Assign custom roles, ganesha_cephfs and ganesha_rgw to nodes (in > > policy.cfg) > > * Add new conf file, ganesha_cephfs.conf.j2 and ganesha_rgw.conf.j2, > > with their respective FSALs. > > * Add keyring file, ganesha_cephfs.j2 and ganesha_rgw.j2 > > * Update rgw.sls to reflect new ganesha_configurations and > > rgw_configurations. > > * Run the stages upto 4 > > > > In this case rgw.sls: > > > > rgw_configurations: > > ganesha_rgw: > > users: > > - { uid: "demo", name: "Demo", email: "demo at demo.nil" } > > - { uid: "demo1", name: "Demo1", email: "demo1 at demo.nil" } > > > > > > ganesha_configurations: > > > > - ganesha_rgw: > > > > > > 2. NFS-Ganesha + Custom RGW: > > > > The other use case could be where admin wants to run NFS-Ganesha RGW > > FSAL but with different users on different nodes. Let say these > > different set of nodes are named as "silver" and "gold". > > > > So, we want only silver users to run on silver node. Similarly, only > > gold users to run on gold nodes. To do so, we define the following rgw.sls > > > > > > rgw.sls > > > > ----------- > > > > rgw_configurations: > > silver: > > users: > > - { uid: "demo", name: "Demo", email: "demo at demo.nil" } > > > > gold: > > users: > > - { uid: "demo", name: "Demo", email: "demo at demo.nil" } > > > > > > > > ganesha_configurations: > > > > - silver > > - gold > > > > * Assign custom roles,silver and gold to nodes (in policy.cfg) > > * Add new conf file, silver.conf.j2 and gold.conf.j2, with their > > respective FSALs. > > * Add keyring file, silver.j2 and gold.j2 > > * Update rgw.sls to reflect new ganesha_configurations and > > rgw_configurations. Similar to above example. > > * Run the stages upto 4 > > > > SIlver and gold are just names. > > > > > > Thanks, > > > > Supriti > > > > > > ------ > > Supriti Singh > > SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, > > HRB 21284 (AG N?rnberg) > > > > > > > > _______________________________________________ > > Deepsea-users mailing list > > Deepsea-users at lists.suse.com > > http://lists.suse.com/mailman/listinfo/deepsea-users > > > > -- > Nathan Cutler > Software Engineer Distributed Storage > SUSE LINUX, s.r.o. > Tel.: +420 284 084 037 > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jfajerski at suse.com Thu Jun 15 01:19:14 2017 From: jfajerski at suse.com (Jan Fajerski) Date: Thu, 15 Jun 2017 09:19:14 +0200 Subject: [Deepsea-users] Orch discovery not finding all disks In-Reply-To: <33AD790D-B0AD-49C2-B59B-68BEDBCAE9C5@suse.com> References: <33AD790D-B0AD-49C2-B59B-68BEDBCAE9C5@suse.com> Message-ID: <20170615071914.r5anl2qt3xh7as66@jf_suse_laptop> Hi, I have never seen this behaviour. Could you please provide some more info about the hardware configuration? Disks, controllers and so on. Thanks, Jan On Thu, Jun 15, 2017 at 05:30:35AM +0000, Rick Ashford wrote: > I'm installing SES4 via DeepSea and in the discovery stage it appears that the runner only discovers drives that are attached to the same controller as the root disk, which is leaving my profiles- section very underpopulated. Has anybody else seen behavior like this and/or know how to work around? > >Rick Ashford >Sales Engineering Manager - West >rick.ashford at suse.com >(512)731-3035 >_______________________________________________ >Deepsea-users mailing list >Deepsea-users at lists.suse.com >http://lists.suse.com/mailman/listinfo/deepsea-users -- Jan Fajerski Engineer Enterprise Storage SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, HRB 21284 (AG N?rnberg) From dbyte at suse.com Thu Jun 15 04:24:48 2017 From: dbyte at suse.com (David Byte) Date: Thu, 15 Jun 2017 10:24:48 +0000 Subject: [Deepsea-users] Orch discovery not finding all disks In-Reply-To: <33AD790D-B0AD-49C2-B59B-68BEDBCAE9C5@suse.com> References: <33AD790D-B0AD-49C2-B59B-68BEDBCAE9C5@suse.com> Message-ID: <42D80E8B-25D2-4C30-A376-1C7E91094237@suse.com> Is this the HPE hardware? David Byte Sr. Technical Strategist IHV Alliances and Embedded SUSE Sent from my iPhone. Typos are Apple's fault. > On Jun 15, 2017, at 12:30 AM, Rick Ashford wrote: > > I'm installing SES4 via DeepSea and in the discovery stage it appears that the runner only discovers drives that are attached to the same controller as the root disk, which is leaving my profiles- section very underpopulated. Has anybody else seen behavior like this and/or know how to work around? > > Rick Ashford > Sales Engineering Manager - West > rick.ashford at suse.com > (512)731-3035 > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users From Rick.Ashford at suse.com Thu Jun 15 06:58:04 2017 From: Rick.Ashford at suse.com (Rick Ashford) Date: Thu, 15 Jun 2017 12:58:04 +0000 Subject: [Deepsea-users] Orch discovery not finding all disks In-Reply-To: <42D80E8B-25D2-4C30-A376-1C7E91094237@suse.com> References: <33AD790D-B0AD-49C2-B59B-68BEDBCAE9C5@suse.com>, <42D80E8B-25D2-4C30-A376-1C7E91094237@suse.com> Message-ID: <6ED7895E-22A4-4E2E-952F-8C332111C226@suse.com> Yes Rick Ashford Sales Engineering Manager - West rick.ashford at suse.com (512)731-3035 > On Jun 15, 2017, at 5:24 AM, David Byte wrote: > > Is this the HPE hardware? > > David Byte > Sr. Technical Strategist > IHV Alliances and Embedded > SUSE > > Sent from my iPhone. Typos are Apple's fault. > >> On Jun 15, 2017, at 12:30 AM, Rick Ashford wrote: >> >> I'm installing SES4 via DeepSea and in the discovery stage it appears that the runner only discovers drives that are attached to the same controller as the root disk, which is leaving my profiles- section very underpopulated. Has anybody else seen behavior like this and/or know how to work around? 
>> >> Rick Ashford >> Sales Engineering Manager - West >> rick.ashford at suse.com >> (512)731-3035 >> _______________________________________________ >> Deepsea-users mailing list >> Deepsea-users at lists.suse.com >> http://lists.suse.com/mailman/listinfo/deepsea-users > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users From suse at ash4d.com Thu Jun 15 08:35:59 2017 From: suse at ash4d.com (SUSE Archive) Date: Thu, 15 Jun 2017 09:35:59 -0500 Subject: [Deepsea-users] Orch discovery not finding all disks In-Reply-To: <20170615071914.r5anl2qt3xh7as66@jf_suse_laptop> References: <33AD790D-B0AD-49C2-B59B-68BEDBCAE9C5@suse.com> <20170615071914.r5anl2qt3xh7as66@jf_suse_laptop> Message-ID: <1497537359.22758.1.camel@ash4d.com> Jan, Here's a zip file with the supportconfig from all 5 of the boxes. Each of the boxes has a single ~500GB IDE drive on the HCI controller and 8 2TB drives on the HPSA (each is set up as single-drive RAID0). On ses1 (manual install) the root filesystem went to the single drive on the HCI controller, and it detected the 8 drives on the HPSA and added them to the profile. On the ses1 through ses6 systems (deployed via autoyast) the root filesystem was put on one of the 8 drives on the HPSA, and the discovery only created profiles for the single IDE drive. The ses5 system was pulled out due to hardware issues with the HPSA, hence the hole in the numbering. https://www.dropbox.com/s/7bqq5ffdc3x955f/seslog_diskmissing.zip?dl=0 On Thu, 2017-06-15 at 09:19 +0200, Jan Fajerski wrote: > Hi, > I have never seen this behaviour. Could you please provide some more > info about > the hardware configuration? > Disks, controllers and so on. > > Thanks, > Jan > > On Thu, Jun 15, 2017 at 05:30:35AM +0000, Rick Ashford wrote: > > I'm installing SES4 via DeepSea and in the discovery stage it > > appears that the runner only discovers drives that are attached to > > the same controller as the root disk, which is leaving my profiles- > > section very underpopulated. Has anybody else seen behavior > > like this and/or know how to work around? > > > > Rick Ashford > > Sales Engineering Manager - West > > rick.ashford at suse.com > > (512)731-3035 > > _______________________________________________ > > Deepsea-users mailing list > > Deepsea-users at lists.suse.com > > http://lists.suse.com/mailman/listinfo/deepsea-users > > From sseebergelverfeldt at suse.de Fri Jun 16 10:45:58 2017 From: sseebergelverfeldt at suse.de (Sven Seeberg) Date: Fri, 16 Jun 2017 18:45:58 +0200 Subject: [Deepsea-users] NFS Ganesha custom roles In-Reply-To: <9CE38985-1B27-45A1-8691-100B1267580B@suse.com> References: <594175FA02000042001D57D0@smtp.nue.novell.com> <7f1e520a-01e9-1614-886c-1d2483306efb@suse.cz> <5941ACE802000042001D581F@smtp.nue.novell.com> <9CE38985-1B27-45A1-8691-100B1267580B@suse.com> Message-ID: Hello Ricardo, On 15.06.2017 08:53, Ricardo Dias wrote: > In a near future version of openATTIC, it will support the management > of NFS-Ganesha services and exports using either CephFS or RGW storage > backends. does this "near future" concern SES 5? Cheers Sven -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From Supriti.Singh at suse.com Mon Jun 19 09:36:13 2017 From: Supriti.Singh at suse.com (Supriti Singh) Date: Mon, 19 Jun 2017 17:36:13 +0200 Subject: [Deepsea-users] NFS Ganesha custom roles In-Reply-To: <9CE38985-1B27-45A1-8691-100B1267580B@suse.com> References: <594175FA02000042001D57D0@smtp.nue.novell.com> <7f1e520a-01e9-1614-886c-1d2483306efb@suse.cz> <5941ACE802000042001D581F@smtp.nue.novell.com> <9CE38985-1B27-45A1-8691-100B1267580B@suse.com> Message-ID: <59480B8D02000042001D6250@smtp.nue.novell.com> For Openattic + RGW backend, will custom ganesha roles (in rgw.sls) be required to create different ganesha export files for different "rgw" users? ------ Supriti Singh??SUSE Linux GmbH, GF: Felix Imend??rffer, Jane Smithard, Graham Norton, HRB 21284 (AG N??rnberg) >>> Ricardo Dias 06/15/17 8:54 AM >>> Hi, In a near future version of openATTIC, it will support the management of NFS-Ganesha services and exports using either CephFS or RGW storage backends. This feature already allows to manage different nfs-Ganesha services in different hosts, and also allows to create an export assigning it to a particular host/service. Some screenshots available here: https://tracker.openattic.org/plugins/servlet/mobile#issue/OP-2195 Ricardo Dias On 14 Jun 2017, at 20:38, Supriti Singh wrote: With default ganesha you have possible options: 1. Ganesha + RGW + Cephfs When both mds and rgw are defined. 2. Ganesha + RGW: If you don't have any mfs role define, but have a rgw, ganesha.conf will define export block for each user defined in rgw.sls under "rgw_configurations". 3. Ganesha + Cephfs Only mds role. The example ganesha_rgw and ganesha_cephfs is applicable when admin wants to run only Ganesha+Cephfs on one node and Ganesha +RGW on other. With defaul "ganesha" role its not possible. ------ Supriti Singh SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, HRB 21284 (AG N?rnberg) >>> Nathan Cutler 06/14/17 7:18 PM >>> I like this idea, I think. If I understand correctly, with the default "ganesha" role I can do cephfs FSAL by itself, or cephfs + rgw FSAL, but I cannot do rgw FSAL by itself. So I will need the custom role ganesha_rgw for that. On 06/14/2017 05:44 PM, Supriti Singh wrote: > Hello, > > I am writing the mail to explain the logic behind introducing custom > roles for NFS-Ganesha in Deepsea. > > Normal use case: > ------------------------- > Deepsea provides a default "Ganesha" role. This role uses the > configuration from the /srv/salt/ceph/ganesha/files/ganesha.conf.j2. If > you look at this file it defines both FSALs, cephfs and rgw. For cephfs, > things are pretty simple. But for rgw, we also need to populate the FSAL > block with user_id, access_key and secret_access_key. To do so we need a > list of users. That should be provided in the file /srv/pillar/ceph/rgw.sls. > > Example rgw.sls: > rgw_configurations: > rgw: > users: > - { uid: "demo", name: "Demo", email: "demo at demo.nil" } > - { uid: "demo1", name: "Demo1", email: "demo1 at demo.nil" } > > Ganesha will read this file and populate the RGW FSAL with users "demo" > and "demo1". > > Custom roles: > -------------------- > In the default setup, we are running both the FSALs on the same ganesha > server node. But there may be a case where user wants to run NFS-Ganesha > + Ceph FSAL on one set of nodes and NFS-Ganesha + RGW FSAL on other set > of nodes. With just the role "ganesha", its not possible. 
There is > another case possible, where admin may want to run NFS-Ganesha + RGW > with user1, and for other user2 on other node. To handle such cases we > added support for custom roles. > > 1. NFS-Ganesha server with CephFS and RGW on different nodes > > * Assign custom roles, ganesha_cephfs and ganesha_rgw to nodes (in > policy.cfg) > * Add new conf file, ganesha_cephfs.conf.j2 and ganesha_rgw.conf.j2, > with their respective FSALs. > * Add keyring file, ganesha_cephfs.j2 and ganesha_rgw.j2 > * Update rgw.sls to reflect new ganesha_configurations and > rgw_configurations. > * Run the stages upto 4 > > In this case rgw.sls: > > rgw_configurations: > ganesha_rgw: > users: > - { uid: "demo", name: "Demo", email: "demo at demo.nil" } > - { uid: "demo1", name: "Demo1", email: "demo1 at demo.nil" } > > > ganesha_configurations: > > -> The other use case could be where admin wants to run NFS-Ganesha RGW > FSAL but with different users on different nodes. Let say these > different set of nodes are named as "silver" and "gold". > > So, we want only silver users to run on silver node. Similarly, only > gold users to run on gold nodes. To do so, we define the following rgw.sls > > > rgw.sls > > ----------- > > rgw_configurations: > silver: > users: > - { uid: "demo", name: "Demo", email: "demo at demo.nil" } > > gold: > users: > - { uid: "demo", name: "Demo", email: "demo at demo.nil" } > > > > ganesha_configurations: > > - silver > - gold > > * Assign custom roles,silver and gold to nodes (in policy.cfg) > * Add new conf file, silver.conf.j2 and gold.conf.j2, with their > respective FSALs. > * Add keyring file, silver.j2 and gold.j2 > * Update rgw.sls to reflect new ganesha_configurations and > rgw_configurations. Similar to above example. > * Run the stages upto 4 > > SIlver and gold are just names. > > > Thanks, > > Supriti > > > ------ > Supriti Singh > SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, > HRB 21284 (AG N?rnberg) > > > > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users > -- Nathan Cutler Software Engineer Distributed Storage SUSE LINUX, s.r.o. Tel.: +420 284 084 037 _______________________________________________ Deepsea-users mailing list Deepsea-users at lists.suse.com http://lists.suse.com/mailman/listinfo/deepsea-users _______________________________________________ Deepsea-users mailing list Deepsea-users at lists.suse.com http://lists.suse.com/mailman/listinfo/deepsea-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdias at suse.com Mon Jun 19 09:37:55 2017 From: rdias at suse.com (Ricardo Dias) Date: Mon, 19 Jun 2017 16:37:55 +0100 Subject: [Deepsea-users] NFS Ganesha custom roles In-Reply-To: <59480B8D02000042001D6250@smtp.nue.novell.com> References: <594175FA02000042001D57D0@smtp.nue.novell.com> <7f1e520a-01e9-1614-886c-1d2483306efb@suse.cz> <5941ACE802000042001D581F@smtp.nue.novell.com> <9CE38985-1B27-45A1-8691-100B1267580B@suse.com> <59480B8D02000042001D6250@smtp.nue.novell.com> Message-ID: <8a342930-0070-fee4-4a47-6db70f87b72e@suse.com> No, openATTIC does not rely on custom ganesha roles. On 19-06-2017 16:36, Supriti Singh wrote: > For Openattic + RGW backend, will custom ganesha roles (in rgw.sls) be required > to create different ganesha export files for different "rgw" users? 
> > > ------ > Supriti Singh > SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, > HRB 21284 (AG N?rnberg) > >>>> Ricardo Dias 06/15/17 8:54 AM >>> > > Hi, > > In a near future version of openATTIC, it will support the management of > NFS-Ganesha services and exports using either CephFS or RGW storage backends. > > This feature already allows to manage different nfs-Ganesha services in > different hosts, and also allows to create an export assigning it to a > particular host/service. > > Some screenshots available here: > https://tracker.openattic.org/plugins/servlet/mobile#issue/OP-2195 > > > Ricardo Dias > > On 14 Jun 2017, at 20:38, Supriti Singh > wrote: > > With default ganesha you have possible options: > > 1. Ganesha + RGW + Cephfs > When both mds and rgw are defined. > > 2. Ganesha + RGW: > If you don't have any mfs role define, but have a rgw, ganesha.conf will > define export block for each user defined in rgw.sls under > "rgw_configurations". > > 3. Ganesha + Cephfs > Only mds role. > > The example ganesha_rgw and ganesha_cephfs is applicable when admin wants to > run only Ganesha+Cephfs on one node and Ganesha +RGW on other. With defaul > "ganesha" role its not possible. > > > > ------ > Supriti Singh > SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, > HRB 21284 (AG N?rnberg) > > >>> Nathan Cutler > 06/14/17 7:18 PM >>> > I like this idea, I think. If I understand correctly, with the default > "ganesha" role I can do cephfs FSAL by itself, or cephfs + rgw FSAL, but > I cannot do rgw FSAL by itself. So I will need the custom role > ganesha_rgw for that. > > On 06/14/2017 05:44 PM, Supriti Singh wrote: > > Hello, > > > > I am writing the mail to explain the logic behind introducing custom > > roles for NFS-Ganesha in Deepsea. > > > > Normal use case: > > ------------------------- > > Deepsea provides a default "Ganesha" role. This role uses the > > configuration from the /srv/salt/ceph/ganesha/files/ganesha.conf.j2. If > > you look at this file it defines both FSALs, cephfs and rgw. For cephfs, > > things are pretty simple. But for rgw, we also need to populate the FSAL > > block with user_id, access_key and secret_access_key. To do so we need a > > list of users. That should be provided in the file /srv/pillar/ceph/rgw.sls. > > > > Example rgw.sls: > > rgw_configurations: > > rgw: > > users: > > - { uid: "demo", name: "Demo", email: "demo at demo.nil " } > > - { uid: "demo1", name: "Demo1", email: "demo1 at demo.nil " } > > > > Ganesha will read this file and populate the RGW FSAL with users "demo" > > and "demo1". > > > > Custom roles: > > -------------------- > > In the default setup, we are running both the FSALs on the same ganesha > > server node. But there may be a case where user wants to run NFS-Ganesha > > + Ceph FSAL on one set of nodes and NFS-Ganesha + RGW FSAL on other set > > of nodes. With just the role "ganesha", its not possible. There is > > another case possible, where admin may want to run NFS-Ganesha + RGW > > with user1, and for other user2 on other node. To handle such cases we > > added support for custom roles. > > > > 1. NFS-Ganesha server with CephFS and RGW on different nodes > > > > * Assign custom roles, ganesha_cephfs and ganesha_rgw to nodes (in > > policy.cfg) > > * Add new conf file, ganesha_cephfs.conf.j2 and ganesha_rgw.conf.j2, > > with their respective FSALs. 
> > * Add keyring file, ganesha_cephfs.j2 and ganesha_rgw.j2 > > * Update rgw.sls to reflect new ganesha_configurations and > > rgw_configurations. > > * Run the stages upto 4 > > > > In this case rgw.sls: > > > > rgw_configurations: > > ganesha_rgw: > > users: > > - { uid: "demo", name: "Demo", email: "demo at demo.nil " } > > - { uid: "demo1", name: "Demo1", email: "demo1 at demo.nil " } > > > > > > ganesha_configurations: > > > > - ganesha_rgw: > > > > > > 2. NFS-Ganesha + Custom RGW: > > > > The other use case could be where admin wants to run NFS-Ganesha RGW > > FSAL but with different users on different nodes. Let say these > > different set of nodes are named as "silver" and "gold". > > > > So, we want only silver users to run on silver node. Similarly, only > > gold users to run on gold nodes. To do so, we define the following rgw.sls > > > > > > rgw.sls > > > > ----------- > > > > rgw_configurations: > > silver: > > users: > > - { uid: "demo", name: "Demo", email: "demo at demo.nil " } > > > > gold: > > users: > > - { uid: "demo", name: "Demo", email: "demo at demo.nil " } > > > > > > > > ganesha_configurations: > > > > - silver > > - gold > > > > * Assign custom roles,silver and gold to nodes (in policy.cfg) > > * Add new conf file, silver.conf.j2 and gold.conf.j2, with their > > respective FSALs. > > * Add keyring file, silver.j2 and gold.j2 > > * Update rgw.sls to reflect new ganesha_configurations and > > rgw_configurations. Similar to above example. > > * Run the stages upto 4 > > > > SIlver and gold are just names. > > > > > > Thanks, > > > > Supriti > > > > > > ------ > > Supriti Singh > > SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, > > HRB 21284 (AG N?rnberg) > > > > > > > > _______________________________________________ > > Deepsea-users mailing list > > Deepsea-users at lists.suse.com > > http://lists.suse.com/mailman/listinfo/deepsea-users > > > > -- > Nathan Cutler > Software Engineer Distributed Storage > SUSE LINUX, s.r.o. > Tel.: +420 284 084 037 > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users > > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users > > > > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users > -- Ricardo Dias Senior Software Engineer - Storage Team SUSE Linux GmbH, GF: Felix Imend?rffer, Jane Smithard, Graham Norton, HRB 21284 (AG N?rnberg) From ncutler at suse.cz Thu Jun 22 01:37:14 2017 From: ncutler at suse.cz (Nathan Cutler) Date: Thu, 22 Jun 2017 09:37:14 +0200 Subject: [Deepsea-users] DeepSea integration testing Message-ID: <8dce8ded-4e80-59f8-aedd-bbf383f9cf2e@suse.cz> Last night I ran the "full" DeepSea integration suite (just six jobs for now, covering basic deployment of Ceph, CephFS, RGW, and NFS-Ganesha) for the first time using the code in DeepSea's master branch. The result: all green! http://167.114.236.223:8081/ubuntu-2017-06-21_21:55:29-deepsea:basic-ses5---basic-openstack/ If you're interested in how the tests work, click on the LOG icon (next to each test). If you don't have time to study the entire log, you can skip the setup bits and just read from "Running DeepSea Stage 0" until "Unwinding manager deepsea" (a small subset of the whole log file). 
Perhaps you'd like to see a failed test and learn how to debug it? It's easy! ;-) First, find a run that had a failed test - here's one: http://167.114.236.223:8081/ubuntu-2017-06-21_07:59:06-deepsea:basic-ses5---basic-openstack/ Then open the log of the failed test and search for "Traceback". The first python traceback indicates where the test failed. The root cause might be higher up, of course. Also, don't forget that you can run any of these tests outside of teuthology, though that implies you'll have to set up the Salt cluster yourself. See https://github.com/SUSE/DeepSea/blob/master/qa/README Nathan From jschmid at suse.de Thu Jun 22 02:31:26 2017 From: jschmid at suse.de (Joshua Schmid) Date: Thu, 22 Jun 2017 10:31:26 +0200 Subject: [Deepsea-users] DeepSea integration testing In-Reply-To: <8dce8ded-4e80-59f8-aedd-bbf383f9cf2e@suse.cz> References: <8dce8ded-4e80-59f8-aedd-bbf383f9cf2e@suse.cz> Message-ID: <20170622103126.596846cd@d155.suse.de> On Thu, 22 Jun 2017 09:37:14 +0200 Nathan Cutler wrote: Thanks Nathan, that's awesome news. Is it sufficient to put more tests under qa/suites/basic/ to extend the coverage in teuthology? > Last night I ran the "full" DeepSea integration suite (just six jobs > for now, covering basic deployment of Ceph, CephFS, RGW, and > NFS-Ganesha) for the first time using the code in DeepSea's master > branch. The result: all green! > > http://167.114.236.223:8081/ubuntu-2017-06-21_21:55:29-deepsea:basic-ses5---basic-openstack/ > > If you're interested in how the tests work, click on the LOG icon > (next to each test). If you don't have time to study the entire log, > you can skip the setup bits and just read from "Running DeepSea Stage > 0" until "Unwinding manager deepsea" (a small subset of the whole log > file). > > Perhaps you'd like to see a failed test and learn how to debug it? > It's easy! ;-) First, find a run that had a failed test - here's one: > http://167.114.236.223:8081/ubuntu-2017-06-21_07:59:06-deepsea:basic-ses5---basic-openstack/ > > Then open the log of the failed test and search for "Traceback". The > first python traceback indicates where the test failed. The root > cause might be higher up, of course. > > Also, don't forget that you can run any of these tests outside of > teuthology, though that implies you'll have to set up the Salt > cluster yourself. See > https://github.com/SUSE/DeepSea/blob/master/qa/README > > Nathan > _______________________________________________ > Deepsea-users mailing list > Deepsea-users at lists.suse.com > http://lists.suse.com/mailman/listinfo/deepsea-users From ncutler at suse.cz Thu Jun 22 03:45:21 2017 From: ncutler at suse.cz (Nathan Cutler) Date: Thu, 22 Jun 2017 11:45:21 +0200 Subject: [Deepsea-users] DeepSea integration testing In-Reply-To: <20170622103126.596846cd@d155.suse.de> References: <8dce8ded-4e80-59f8-aedd-bbf383f9cf2e@suse.cz> <20170622103126.596846cd@d155.suse.de> Message-ID: <2e6d896b-7250-131f-b662-decd8737f596@suse.cz> Hi Joshua: Thanks for your interest in the integration test suite. If you want to add to it, I would suggest writing the test first, make sure it works in your environment, and then ping me and/or Kyr (or bring it up at the DevOps meeting) to get it added to the CI. Now, to answer your question. > Is it sufficient to put more tests under qa/suites/basic/ to extend the > coverage in teuthology? Note that teuthology is not used for CI purposes. For CI we will need to program Jenkins to set up the Salt cluster and run the tests. 
From Martin.Weiss at suse.com Fri Jun 30 00:26:01 2017
From: Martin.Weiss at suse.com (Martin Weiss)
Date: Fri, 30 Jun 2017 00:26:01 -0600
Subject: [Deepsea-users] Add management of disks/devices that already have partitions
References: <5955EF040200001C00104EFA@prv-mh.provo.novell.com>
Message-ID: <5955EF040200001C00104EFA@prv-mh.provo.novell.com>

Hi *,

As far as I have understood DeepSea, it ignores all disks / devices that
already have partitions.

This causes additional effort to clean these disks manually before they
can show up in profiles.

Would it be possible to enhance DeepSea "not" to ignore these disks?

IMO it would be great to discover them and add them to the proposals -
maybe in a separate section of the files like "already partitioned".

Then, in a further step, these disks could be specified / moved in the
profiles by an administrator, which would then allow DeepSea to clear /
use them during the deployment stage in case they are specified to be an
OSD filesystem/journal/wal/db... (DeepSea could clean the disks before
putting OSD data on them).

IMO such a feature would allow more complete disk management with
DeepSea...

Thoughts?

Thanks
Martin
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ncutler at suse.cz Fri Jun 30 01:20:08 2017
From: ncutler at suse.cz (Nathan Cutler)
Date: Fri, 30 Jun 2017 09:20:08 +0200
Subject: [Deepsea-users] Add management of disks/devices that already have partitions
In-Reply-To: <5955EF040200001C00104EFA@prv-mh.provo.novell.com>
References: <5955EF040200001C00104EFA@prv-mh.provo.novell.com>
 <5955EF040200001C00104EFA@prv-mh.provo.novell.com>
Message-ID:

> As far as I have understood DeepSea, it ignores all disks / devices that
> already have partitions.
>
> This causes additional effort to clean these disks manually before they
> can show up in profiles.
>
> Would it be possible to enhance DeepSea "not" to ignore these disks?
>
> IMO it would be great to discover them and add them to the proposals -
> maybe in a separate section of the files like "already partitioned".
>
> Then, in a further step, these disks could be specified / moved in the
> profiles by an administrator, which would then allow DeepSea to clear /
> use them during the deployment stage in case they are specified to be an
> OSD filesystem/journal/wal/db... (DeepSea could clean the disks before
> putting OSD data on them).
>
> IMO such a feature would allow more complete disk management with
> DeepSea...
>
> Thoughts?

Fine until somebody uses this feature to wipe their data for which they
have no backup. Automating disk-zapping/filesystem-wiping is a tricky
business.

Nathan
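
For reference, the kind of manual cleanup being discussed looks roughly
like this (a sketch only, with a placeholder device name; every one of
these commands is destructive if pointed at the wrong disk):

lsblk /dev/sdX              # first make absolutely sure which disk this really is
wipefs --all /dev/sdX       # remove filesystem and partition-table signatures
sgdisk --zap-all /dev/sdX   # wipe GPT and MBR partition structures
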
From Martin.Weiss at suse.com Fri Jun 30 01:31:28 2017
From: Martin.Weiss at suse.com (Martin Weiss)
Date: Fri, 30 Jun 2017 01:31:28 -0600
Subject: [Deepsea-users] Antw: Re: Add management of disks/devices that already have partitions
In-Reply-To:
References: <5955EF040200001C00104EFA@prv-mh.provo.novell.com>
 <5955EF040200001C00104EFA@prv-mh.provo.novell.com>
Message-ID: <5955FE500200001C002E9ABE@prv-mh.provo.novell.com>

>> As far as I have understood DeepSea, it ignores all disks / devices that
>> already have partitions.
>>
>> This causes additional effort to clean these disks manually before they
>> can show up in profiles.
>>
>> Would it be possible to enhance DeepSea "not" to ignore these disks?
>>
>> IMO it would be great to discover them and add them to the proposals -
>> maybe in a separate section of the files like "already partitioned".
>>
>> Then, in a further step, these disks could be specified / moved in the
>> profiles by an administrator, which would then allow DeepSea to clear /
>> use them during the deployment stage in case they are specified to be an
>> OSD filesystem/journal/wal/db... (DeepSea could clean the disks before
>> putting OSD data on them).
>>
>> IMO such a feature would allow more complete disk management with
>> DeepSea...
>>
>> Thoughts?
>
> Fine until somebody uses this feature to wipe their data for which they
> have no backup. Automating disk-zapping/filesystem-wiping is a tricky
> business.

Is the risk higher if we provide a framework around this within DeepSea,
or if the customer uses manual steps?

IMO, if we had this support in DeepSea we could add checks and
verifications, make it easier and safer for the admin, and ensure he is
not deleting the OS or existing OSD data.

Martin

From ejackson at suse.com Fri Jun 30 05:30:36 2017
From: ejackson at suse.com (Eric Jackson)
Date: Fri, 30 Jun 2017 07:30:36 -0400
Subject: [Deepsea-users] Antw: Re: Add management of disks/devices that already have partitions
In-Reply-To: <5955FE500200001C002E9ABE@prv-mh.provo.novell.com>
References: <5955EF040200001C00104EFA@prv-mh.provo.novell.com>
 <5955FE500200001C002E9ABE@prv-mh.provo.novell.com>
Message-ID: <5407412.xEQ2N0ij9I@fury.home>

The issue is https://github.com/SUSE/DeepSea/issues/259, but we have not had
the time to write anything for this.

On Friday, June 30, 2017 01:31:28 AM Martin Weiss wrote:
> >> As far as I have understood DeepSea, it ignores all disks / devices that
> >> already have partitions.
> >>
> >> This causes additional effort to clean these disks manually before they
> >> can show up in profiles.
> >>
> >> Would it be possible to enhance DeepSea "not" to ignore these disks?
> >>
> >> IMO it would be great to discover them and add them to the proposals -
> >> maybe in a separate section of the files like "already partitioned".
> >>
> >> Then, in a further step, these disks could be specified / moved in the
> >> profiles by an administrator, which would then allow DeepSea to clear /
> >> use them during the deployment stage in case they are specified to be an
> >> OSD filesystem/journal/wal/db... (DeepSea could clean the disks before
> >> putting OSD data on them).
> >>
> >> IMO such a feature would allow more complete disk management with
> >> DeepSea...
> >>
> >> Thoughts?
> >
> > Fine until somebody uses this feature to wipe their data for which they
> > have no backup. Automating disk-zapping/filesystem-wiping is a tricky
> > business.
>
> Is the risk higher if we provide a framework around this within DeepSea,
> or if the customer uses manual steps?
>
> IMO, if we had this support in DeepSea we could add checks and
> verifications, make it easier and safer for the admin, and ensure he is
> not deleting the OS or existing OSD data.
>
> Martin
>
> _______________________________________________
> Deepsea-users mailing list
> Deepsea-users at lists.suse.com
> http://lists.suse.com/mailman/listinfo/deepsea-users
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 473 bytes
Desc: This is a digitally signed message part.
URL:

From Martin.Weiss at suse.com Fri Jun 30 08:46:23 2017
From: Martin.Weiss at suse.com (Martin Weiss)
Date: Fri, 30 Jun 2017 08:46:23 -0600
Subject: [Deepsea-users] Antw: Re: Add management of disks/devices that already have partitions
In-Reply-To: <5407412.xEQ2N0ij9I@fury.home>
References: <5955EF040200001C00104EFA@prv-mh.provo.novell.com>
 <5955FE500200001C002E9ABE@prv-mh.provo.novell.com>
 <5407412.xEQ2N0ij9I@fury.home>
Message-ID: <5956644E0200001C00104FB1@prv-mh.provo.novell.com>

Thanks - commented there!
Martin

> On 30.06.2017 at 12:30, Eric Jackson wrote:
>
> The issue is https://github.com/SUSE/DeepSea/issues/259, but we have not had
> the time to write anything for this.
>
> On Friday, June 30, 2017 01:31:28 AM Martin Weiss wrote:
>>>> As far as I have understood DeepSea, it ignores all disks / devices that
>>>> already have partitions.
>>>>
>>>> This causes additional effort to clean these disks manually before they
>>>> can show up in profiles.
>>>>
>>>> Would it be possible to enhance DeepSea "not" to ignore these disks?
>>>>
>>>> IMO it would be great to discover them and add them to the proposals -
>>>> maybe in a separate section of the files like "already partitioned".
>>>>
>>>> Then, in a further step, these disks could be specified / moved in the
>>>> profiles by an administrator, which would then allow DeepSea to clear /
>>>> use them during the deployment stage in case they are specified to be an
>>>> OSD filesystem/journal/wal/db... (DeepSea could clean the disks before
>>>> putting OSD data on them).
>>>>
>>>> IMO such a feature would allow more complete disk management with
>>>> DeepSea...
>>>>
>>>> Thoughts?
>>>
>>> Fine until somebody uses this feature to wipe their data for which they
>>> have no backup. Automating disk-zapping/filesystem-wiping is a tricky
>>> business.
>>
>> Is the risk higher if we provide a framework around this within DeepSea,
>> or if the customer uses manual steps?
>>
>> IMO, if we had this support in DeepSea we could add checks and
>> verifications, make it easier and safer for the admin, and ensure he is
>> not deleting the OS or existing OSD data.
>>
>> Martin
>>
>> _______________________________________________
>> Deepsea-users mailing list
>> Deepsea-users at lists.suse.com
>> http://lists.suse.com/mailman/listinfo/deepsea-users
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: