From jschmid at suse.de Tue May 2 06:24:42 2017
From: jschmid at suse.de (Joshua Schmid)
Date: Tue, 2 May 2017 14:24:42 +0200
Subject: [Deepsea-users] New storage profile proposals
In-Reply-To: <20170428154201.rcbb63eingn3uvfb@jf_suse_laptop>
References: <20170428154201.rcbb63eingn3uvfb@jf_suse_laptop>
Message-ID: 

On 04/28/2017 05:42 PM, Jan Fajerski wrote:
> Hi *,
> I have pushed the basic functionality of the new proposal process to this branch: https://github.com/SUSE/DeepSea/tree/propose-custom-ratios. I would like to encourage everyone to play with it a bit and share their experience.
>
> Most of the implementation is in the proposal runner and can be used without it interfering with the way things work currently. That being said, this new proposal process is not fit for production usage just yet.
>
> To get the new runner and module, run the following on the master:
>
>   git clone https://github.com/SUSE/deepsea
>   cd deepsea
>   git checkout propose-custom-ratios
>   make install
>   salt '*' state.apply ceph.sync
>
> Now you can run 'salt-run proposal.peek' and the runner will return a storage profile proposal. This proposal can be influenced by a number of parameters. I have tried to document everything, which can be called via 'salt-run proposal.help'. Bear in mind that this is still very much under development, so the docs might be incomplete or just not bring the point across very well. Also I don't do a lot of parameter validation just yet.

I think we are heading in the right direction with this. Using such a descriptive way to define profiles will add more flexibility.

After playing with it on two different systems I was looking for an easy way to find my journals (name, size, driver, type). There are lots of quick and easy ways of doing that of course, but I think it would be nice to give the user a short preview of the drives he is able to use. Re-using a stripped-down version of cephdisks.list is a natural fit, as it will correctly represent Salt's view of the disks (a rough sketch of such a runner follows at the end of this message):

salt-run proposal.candidates

/dev/disk/by-id/xxxx/:
    size: 500G
    type: ssd
/dev/disk/by-id/xxxy/:
    size: 2TB
    type: hdd
/dev/disk/by-id/xxyy/:
    size: 50G
    type: nvme

Having a direct view of what DeepSea knows about my system might also reduce frustration caused by wrong assumptions about which disks are present. Allowing a name/path to be passed to the 'journal' parameter would be important if someone wants to target a specific disk when more than one would match.

> After you have played around a bit with the peek method you can write out the proposal using 'salt-run proposal.populate'. This will write the storage profile files to /srv/pillar/ceph/proposal/profile-default/*.
>
> The proposal.populate method can be run multiple times. It will not overwrite existing files, but with the help of the 'target' parameter you can create proposals for groups of minions (or even single minions).

Tuning on the command line until you have the desired proposal is way better than modifying files over and over again, imho.

> The idea is that the proposal runner is run multiple times to create storage profiles for every kind of OSD the cluster will have. E.g. a cluster might have some machines with NVMe drives and SSDs and some machines with only spinners. To keep the runner fairly simple, such a setup would be tackled by two runs of the proposal runner, one for the nvme/ssd storage profiles and one for the standalone profiles.
>
> I hope this all makes some kind of sense to anyone but me. I haven't yet had a chance to gain some distance to the code. I have discussed quite a few details with Eric so he might be someone who can help get people unstuck (sorry Eric ;), since I will be out the coming week.

seems like a sane solution to me.

> Best,
> Jan
> _______________________________________________
> Deepsea-users mailing list
> Deepsea-users at lists.suse.com
> http://lists.suse.com/mailman/listinfo/deepsea-users

-- 
Freundliche Grüße - Kind regards,
Joshua Schmid
SUSE Enterprise Storage
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nürnberg
--------------------------------------------------------------------------------------------------------------------
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu, Graham Norton, HRB 21284 (AG Nürnberg)
--------------------------------------------------------------------------------------------------------------------
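To make the proposal.candidates idea above a bit more concrete, here is a minimal sketch of what such a runner function could look like. It is hypothetical: no such function exists in DeepSea at this point, and the cephdisks.list field names used here ('Device File', 'Capacity', 'rotational') are assumptions that may not match the real module.

    # Hypothetical sketch of the proposed "proposal.candidates" runner -- not
    # code that exists in DeepSea. It assumes cephdisks.list returns, per
    # minion, a list of dicts with 'Device File', 'Capacity' and 'rotational'
    # keys; the real field names may differ.
    import salt.client

    def candidates(target='*'):
        """Return a short per-minion preview of the disks Salt can see."""
        local = salt.client.LocalClient()
        preview = {}
        for minion, disks in local.cmd(target, 'cephdisks.list').items():
            preview[minion] = [{
                'device': disk.get('Device File'),
                'size': disk.get('Capacity'),
                # NVMe detection is deliberately left out of this sketch
                'type': 'ssd' if disk.get('rotational') == '0' else 'hdd',
            } for disk in (disks or [])]
        return preview

Dropped into the proposal runner, something like this could then be invoked as 'salt-run proposal.candidates' and produce output along the lines of the preview shown in the message above.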
From blaine.gardner at suse.com Thu May 4 12:43:23 2017
From: blaine.gardner at suse.com (Blaine Gardner)
Date: Thu, 04 May 2017 12:43:23 -0600
Subject: [Deepsea-users] Using SaltStack's NTP Formula
Message-ID: <1493923403.11628.1.camel@suse.com>

Hi all,

Just want to remind y'all about the status of the time sync/NTP feature rework. I would consider the feature done and ready to go in for about a week now, but I'm not going to pull the PR myself without getting at least 2 LGTMs from other devs.

https://github.com/SUSE/DeepSea/pull/207

Blaine

From vtheile at suse.com Fri May 5 02:15:17 2017
From: vtheile at suse.com (Volker Theile)
Date: Fri, 5 May 2017 10:15:17 +0200
Subject: [Deepsea-users] Problems deploying CherryPy
Message-ID: <51daa6db-23a7-103d-90ae-3b8efba6b421@suse.com>

Hi all,

I realized a problem while deploying CherryPy via DeepSea. When I execute

# salt '' state.apply ceph.cherrypy

the command hangs and does not return. The problem seems to be that the state restarts salt-master.service. Is this really necessary, isn't it enough to restart the salt-api.service?

Regards
Volker

-- 
Volker Theile
Software Engineer | openATTIC
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
Phone: +49 173 5876879
E-Mail: vtheile at suse.com

From tserong at suse.com Mon May 8 04:35:37 2017
From: tserong at suse.com (Tim Serong)
Date: Mon, 8 May 2017 20:35:37 +1000
Subject: [Deepsea-users] Problems deploying CherryPy
In-Reply-To: <2832228.fy3YIWnVP4@ruby>
References: <51daa6db-23a7-103d-90ae-3b8efba6b421@suse.com> <2832228.fy3YIWnVP4@ruby>
Message-ID: <4e95b15d-165f-64b2-d2a0-fa19b7794cd3@suse.com>

On 05/05/2017 10:11 PM, Eric Jackson wrote:
> It's the files/eauth.conf. If there's a dynamic way to poke the salt-master to get it to read that file without a restart, I'm all for it.

https://github.com/saltstack/salt/issues/570 (master/minion should accept a SIGHUP and reload config) has been open since Jan 25, 2012, so, probably not...

Tim

-- 
Tim Serong
Senior Clustering Engineer
SUSE
tserong at suse.com
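One workaround sometimes used for the "a state restarts the very master that is applying it" problem in this thread is to detach the restart from the calling process, for example via a transient systemd timer. The sketch below is purely illustrative and not part of DeepSea, and whether a detached restart is acceptable for the eauth.conf change discussed here is a separate question.

    # Illustrative only -- not DeepSea code. Schedules a detached restart of
    # salt-master.service via a transient systemd timer, so that the state run
    # which triggered it is not cut off while still reporting its results.
    import subprocess

    def deferred_master_restart(delay_seconds=10):
        """Ask systemd to restart salt-master a few seconds from now."""
        return subprocess.call([
            'systemd-run',
            '--on-active={}'.format(delay_seconds),
            'systemctl', 'restart', 'salt-master.service',
        ]) == 0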
From loic.devulder at mpsa.com Tue May 9 09:21:58 2017
From: loic.devulder at mpsa.com (LOIC DEVULDER - U329683)
Date: Tue, 9 May 2017 15:21:58 +0000
Subject: [Deepsea-users] Strange behaviour with ceph.stage.configure
Message-ID: 

Hi guys!

I have a strange behaviour on my Ceph cluster: I try to "simply" execute the configure stage with DeepSea and I get the following error with the admin key:

ylal8620:~ # salt-run state.orch ceph.stage.configure
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
[CRITICAL] No suitable gitfs provider module is installed.
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
[ERROR ] Run failed on minions: ylal8620.inetpsa.com
Failures:
    ylal8620.inetpsa.com:
        Data failed to compile:
    ----------
        Rendering SLS 'base:ceph.admin.key.default' failed: Jinja variable 'salt.utils.templates.AliasedLoader object' has no attribute 'keyring.secret'

ylal8620.inetpsa.com_master:
  Name: push.proposal - Function: salt.runner - Result: Changed Started: - 17:13:31.858716 Duration: 481.519 ms
  Name: refresh_pillar1 - Function: salt.state - Result: Changed Started: - 17:13:32.340671 Duration: 624.121 ms
  Name: configure.cluster - Function: salt.runner - Result: Changed Started: - 17:13:32.965248 Duration: 902.821 ms
  Name: refresh_pillar2 - Function: salt.state - Result: Changed Started: - 17:13:33.868575 Duration: 636.625 ms
----------
          ID: admin key
    Function: salt.state
      Result: False
     Comment: Run failed on minions: ylal8620.inetpsa.com
              Failures:
                  ylal8620.inetpsa.com:
                      Data failed to compile:
                  ----------
                      Rendering SLS 'base:ceph.admin.key.default' failed: Jinja variable 'salt.utils.templates.AliasedLoader object' has no attribute 'keyring.secret'
     Started: 17:13:34.505359
    Duration: 521.657 ms
     Changes:

Summary for ylal8620.inetpsa.com_master
------------
Succeeded: 4 (changed=4)
Failed:    1
------------
Total states run:     5
Total run time:   3.167 s

I don't understand what I'm doing wrong... The same thing executes well on my test cluster.
I have searched for keyring.secret (I know that it's a Python function) and the Salt configuration seems to be ok:

ylal8620:~ # grep -r keyring.secret /srv/*
Binary file /srv/modules/runners/populate.pyc matches
/srv/modules/runners/populate.py:    Track cluster name, writer, root directory and a keyring secret
/srv/modules/runners/populate.py:    The master role can access all keyring secrets
/srv/salt/ceph/admin/key/default.sls:        secret: {{ salt['keyring.secret'](keyring_file) }}
/srv/salt/ceph/igw/key/default-shared.sls:        secret: {{ salt['keyring.secret'](keyring_file) }}
/srv/salt/ceph/igw/key/default.sls:        secret: {{ salt['keyring.secret'](keyring_file) }}
/srv/salt/ceph/mds/key/default-shared.sls:        secret: {{ salt['keyring.secret'](keyring_file) }}
/srv/salt/ceph/mds/key/default.sls:        secret: {{ salt['keyring.secret'](keyring_file) }}
/srv/salt/ceph/mon/key/default.sls:        mon_secret: {{ salt['keyring.secret'](keyring_file) }}
/srv/salt/ceph/mon/key/default.sls:        admin_secret: {{ salt['keyring.secret'](admin_keyring) }}
/srv/salt/ceph/openattic/key/default.sls:        secret: {{ salt['keyring.secret'](keyring_file) }}
/srv/salt/ceph/osd/key/default.sls:        secret: {{ salt['keyring.secret'](keyring_file) }}
/srv/salt/ceph/rgw/key/default-shared.sls:        secret: {{ salt['keyring.secret'](keyring_file) }}
/srv/salt/ceph/rgw/key/default.sls:        secret: {{ salt['keyring.secret'](keyring_file) }}

ylal8620:~ # cat /srv/salt/ceph/admin/key/default.sls

{# The mon creation needs this key as well #}
{# Named the file the same as other components, there is only one keyring #}

{% set keyring_file = "/srv/salt/ceph/admin/cache/ceph.client.admin.keyring" %}
{{ keyring_file }}:
  file.managed:
    - source:
      - salt://ceph/admin/files/keyring.j2
    - template: jinja
    - user: salt
    - group: salt
    - mode: 600
    - makedirs: True
    - context:
        secret: {{ salt['keyring.secret'](keyring_file) }}
    - fire_event: True

ylal8620:~ # cat /srv/salt/ceph/admin/cache/ceph.client.admin.keyring
[client.admin]
        key = AQCk1plYAAAAABAAAUCiFWAcXJ3HCXizdojlag==
        caps mds = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"

ylal8620:~ # cat /etc/ceph/ceph.c
ceph.client.admin.keyring  ceph.client.openattic.keyring  ceph.conf  ceph.conf_new  ceph.conf_old

ylal8620:~ # cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
        key = AQCk1plYAAAAABAAAUCiFWAcXJ3HCXizdojlag==
        caps mds = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"

ylal8620:~ # ll /etc/ceph/
total 28
-rw------- 1 root      root      129 Feb  7 16:13 ceph.client.admin.keyring
-rw-rw---- 1 openattic openattic 111 Feb  7 17:43 ceph.client.openattic.keyring
-rw-r--r-- 1 root      root      785 May  9 16:12 ceph.conf
-rw-r--r-- 1 root      root      785 Feb 15 09:41 ceph.conf_new
-rw-r--r-- 1 root      root      336 Feb  9 14:05 ceph.conf_old
-rwxr-xr-x 1 root      root      658 Feb 14 14:47 osd_location.sh
-rwxr-xr-x 1 root      root       92 Feb  4 01:22 rbdmap

If anyone has an idea, it would be great for me!

Regards / Cordialement,
___________________________________________________________________
PSA Groupe
Loïc Devulder (loic.devulder at mpsa.com)
Senior Linux System Engineer / HPC Specialist
DF/DDCE/ISTA/DSEP/ULES - Linux Team
BESSONCOURT / EXTENSION RIVE DROITE / B19
Internal postal address: SX.BES.15
Phone Incident - Level 3: 22 94 39
Phone Incident - Level 4: 22 92 40
Office: +33 (0)9 66 66 69 06 (27 69 06)
Mobile: +33 (0)6 87 72 47 31
___________________________________________________________________

This message may contain confidential information. If you are not the intended recipient, please advise the sender immediately and delete this message. For further information on confidentiality and the risks inherent in electronic communication see http://disclaimer.psa-peugeot-citroen.com.
From loic.devulder at mpsa.com Wed May 10 01:26:13 2017
From: loic.devulder at mpsa.com (LOIC DEVULDER - U329683)
Date: Wed, 10 May 2017 07:26:13 +0000
Subject: [Deepsea-users] Strange behaviour with ceph.stage.configure
In-Reply-To: <2613619.Ec0PTZPJJn@ruby>
References: <2613619.Ec0PTZPJJn@ruby>
Message-ID: 

Hi Eric,

Thanks for your command and your explanation! And no problem for the delay, your response was quick :-)

We will try this; the reason is maybe that we added some minions in the Salt configuration (future new OSDs) and we didn't do Stage 0 before executing the configure stage. I was thinking that DeepSea would just make changes on the "old" nodes, not the new ones too, as I hadn't done Stage 0 for the new nodes. So it was purely a chair/keyboard interface problem :D

I will tell you if it's OK after the tests.

Regards / Cordialement,
___________________________________________________________________
PSA Groupe
Loïc Devulder (loic.devulder at mpsa.com)
Senior Linux System Engineer / HPC Specialist
DF/DDCE/ISTA/DSEP/ULES - Linux Team
BESSONCOURT / EXTENSION RIVE DROITE / B19
Internal postal address: SX.BES.15
Phone Incident - Level 3: 22 94 39
Phone Incident - Level 4: 22 92 40
Office: +33 (0)9 66 66 69 06 (27 69 06)
Mobile: +33 (0)6 87 72 47 31
___________________________________________________________________

This message may contain confidential information. If you are not the intended recipient, please advise the sender immediately and delete this message. For further information on confidentiality and the risks inherent in electronic communication see http://disclaimer.psa-peugeot-citroen.com.

> -----Original Message-----
> From: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users-bounces at lists.suse.com] On behalf of Eric Jackson
> Sent: Tuesday, 9 May 2017 23:48
> To: Discussions about the DeepSea management framework for Ceph
> Subject: Re: [Deepsea-users] Strange behaviour with ceph.stage.configure
>
> >>> Real sender address / Reelle adresse d expedition : deepsea-users-bounces at lists.suse.com <<<
> **********************************************************************
>
> Hi Loic,
>     The "has no attribute" is Salt's friendly description that it can't find the Salt module. The keyring.py lives in /srv/salt/_modules. The keyring.secret extracts the value of the "key" from a Ceph keyring. Nothing too exciting.
>
> Normally, you can call it with
>
>   salt 'admin.ceph' keyring.secret \
>     /srv/salt/ceph/admin/cache/ceph.client.admin.keyring   # That's one line
>
> assuming your master node is called 'admin.ceph'. Now, if you run the command, I expect you will get an error with
>
>   keyring.secret not available
>
> There are a few reasons this can happen. My first suggestion is to run the command to sync the modules to all minions. Now, this normally happens in Stage 0. The command is
>
>   salt '*' saltutil.sync_modules
>
> If you get only a list of minion names, then nothing needed to be copied. Otherwise, you will see a list of modules copied to each minion. (Rerun the command to see nothing needs to be copied.)
>
> Once copied, try the example from above. You should get a fairly long string of characters (i.e. the keyring secret).
>
> If that does not work, then something else is going on. At that point, let me see what the results are from the Salt command.
> If you are working, then the question is how you got here. Did Stage 0 run successfully? If the sync_modules did not run successfully, then I expect other things such as Salt mines will not work either, giving equally interesting errors.
>
> Sorry for the delay in answering... email troubles.
>
> Eric
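For readers who have not looked at DeepSea's custom modules: the snippet below is a simplified sketch of what a keyring.secret-style execution module does (read the 'key = ...' value out of a Ceph keyring file). It is illustrative only and may differ from the real keyring.py shipped in /srv/salt/_modules.

    # Simplified sketch of a keyring.secret-style execution module; the real
    # /srv/salt/_modules/keyring.py in DeepSea may differ.
    import os

    def secret(filename):
        """Return the 'key' value from a Ceph keyring file, or an empty string."""
        if not os.path.exists(filename):
            return ""
        with open(filename) as keyring:
            for line in keyring:
                if 'key' in line and '=' in line:
                    return line.split('=', 1)[1].strip()
        return ""

As Eric notes above, a module placed under /srv/salt/_modules only becomes callable as salt['keyring.secret'](...) after "salt '*' saltutil.sync_modules" (or Stage 0) has distributed it to the minions.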
From loic.devulder at mpsa.com Wed May 10 01:29:39 2017
From: loic.devulder at mpsa.com (LOIC DEVULDER - U329683)
Date: Wed, 10 May 2017 07:29:39 +0000
Subject: [Deepsea-users] Strange behaviour with ceph.stage.configure
References: <2613619.Ec0PTZPJJn@ruby>
Message-ID: 

I can add: it's OK now! Thanks.

Regards / Cordialement,
___________________________________________________________________
PSA Groupe
Loïc Devulder (loic.devulder at mpsa.com)
Senior Linux System Engineer / HPC Specialist
DF/DDCE/ISTA/DSEP/ULES - Linux Team
BESSONCOURT / EXTENSION RIVE DROITE / B19
Internal postal address: SX.BES.15
Phone Incident - Level 3: 22 94 39
Phone Incident - Level 4: 22 92 40
Office: +33 (0)9 66 66 69 06 (27 69 06)
Mobile: +33 (0)6 87 72 47 31
___________________________________________________________________

This message may contain confidential information. If you are not the intended recipient, please advise the sender immediately and delete this message. For further information on confidentiality and the risks inherent in electronic communication see http://disclaimer.psa-peugeot-citroen.com.
From tserong at suse.com Wed May 10 23:12:06 2017
From: tserong at suse.com (Tim Serong)
Date: Thu, 11 May 2017 15:12:06 +1000
Subject: [Deepsea-users] add arbitrary pieces to ceph.conf
Message-ID: <3718023f-db5b-5eee-f9e1-083291c04846@suse.com>

Hi All,

Forgive my ignorance - how do I use DeepSea to add an arbitrary extra piece to ceph.conf? Let's say, for example, I want this added:

  [mgr]
  mgr modules = fsstatus restful

Where do I put it? Do I edit /srv/salt/ceph/configuration/files/ceph.conf.j2? That works, BTW, but I imagine it will be overwritten next time I upgrade my deepsea package...

Thanks,

Tim
-- 
Tim Serong
Senior Clustering Engineer
SUSE
tserong at suse.com

From jfajerski at suse.com Thu May 11 01:10:25 2017
From: jfajerski at suse.com (Jan Fajerski)
Date: Thu, 11 May 2017 09:10:25 +0200
Subject: [Deepsea-users] add arbitrary pieces to ceph.conf
In-Reply-To: <3718023f-db5b-5eee-f9e1-083291c04846@suse.com>
References: <3718023f-db5b-5eee-f9e1-083291c04846@suse.com>
Message-ID: <20170511071025.wwpqdthj2hsouv25@jf_suse_laptop>

On Thu, May 11, 2017 at 03:12:06PM +1000, Tim Serong wrote:
>Hi All,
>
>Forgive my ignorance - how do I use DeepSea to add an arbitrary extra piece to ceph.conf? Let's say, for example, I want this added:
>
>  [mgr]
>  mgr modules = fsstatus restful
>
>Where do I put it? Do I edit /srv/salt/ceph/configuration/files/ceph.conf.j2? That works, BTW, but I imagine it will be overwritten next time I upgrade my deepsea package...

You are absolutely correct. Works but is not the right way. The current way is unfortunately a bit convoluted:

- add your own /srv/salt/ceph/configuration/files/ceph-custom.conf.j2
- add your own /srv/salt/ceph/configuration/files/custom.sls; this looks basically like the default.sls but uses your custom config
- add configuration_init: custom to the pillar (in /srv/pillar/ceph/stack/global.yml iirc)

So yeah, it's not straightforward. There are plans to make all this more comfortable: https://github.com/SUSE/DeepSea/issues/205. Feel free to add more requirements, opinions and such. I'd like to see this feature soon.

>
>Thanks,
>
>Tim

-- 
Jan Fajerski
Engineer Enterprise Storage
SUSE Linux GmbH,
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
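As a quick sanity check after following those steps, one can confirm from the master that the minions actually see the new pillar value; the command-line equivalent is simply "salt '*' pillar.get configuration_init". The helper below is only an illustration and is not part of DeepSea.

    # Illustrative helper, not DeepSea code: refresh the pillar and read back
    # configuration_init on the targeted minions.
    import salt.client

    def check_configuration_init(target='*'):
        local = salt.client.LocalClient()
        local.cmd(target, 'saltutil.refresh_pillar')
        return local.cmd(target, 'pillar.get', ['configuration_init'])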
From tserong at suse.com Thu May 25 04:48:50 2017
From: tserong at suse.com (Tim Serong)
Date: Thu, 25 May 2017 20:48:50 +1000
Subject: [Deepsea-users] RFC: Experimental import of existing cluster
Message-ID: 

Hi All,

We want DeepSea to be able to "import" existing ceph clusters, for example if you've used ceph-deploy in the past, and want to migrate to DeepSea. I've implemented what you might call a rough outline of this functionality at https://github.com/SUSE/DeepSea/commit/7a60715 (i.e. the implementation is incomplete), and would appreciate feedback on the approach in general.

There's more detail in the commit, but essentially you use it like so:

1) Install salt on every ceph node.
2) Run the first two DeepSea stages, plus my extra importer bit.
3) The importer checks what ceph services are running on all the minions (mon, mds, osd, rgw), and generates a policy.cfg reflecting the currently running cluster.
4) Now you can keep using DeepSea as usual.

WARNING: Do *NOT* try step 4 with the current code. You WILL BREAK YOUR CLUSTER. I'm just trying to get feedback on whether there are any large holes in the general shape of this thing (I had an earlier experiment where I was interrogating ceph.conf to find the MONs, then realised I'd end up having to talk to the cluster to ask it where the OSDs were, so I decided that checking for running services was going to be simpler).

Thanks,

Tim
-- 
Tim Serong
Senior Clustering Engineer
SUSE
tserong at suse.com

From cgardner at suse.com Thu May 25 07:19:00 2017
From: cgardner at suse.com (Craig Gardner)
Date: Thu, 25 May 2017 15:19:00 +0200
Subject: [Deepsea-users] RFC: Experimental import of existing cluster
In-Reply-To: 
References: 
Message-ID: <7C90CA4B-19D7-4F50-8DED-64162F0BD1A4@suse.com>
Oh, this looks very promising. Eric was quite worried that we wouldn't have enough time and too many other priorities to get to this. Thanks for looking into it, Tim.

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

From lmorris at suse.com Thu May 25 07:36:01 2017
From: lmorris at suse.com (Larry Morris)
Date: Thu, 25 May 2017 13:36:01 +0000
Subject: [Deepsea-users] RFC: Experimental import of existing cluster
In-Reply-To: <7C90CA4B-19D7-4F50-8DED-64162F0BD1A4@suse.com>
References: <7C90CA4B-19D7-4F50-8DED-64162F0BD1A4@suse.com>
Message-ID: <65144C5C80746A4EB8869FE3F027358D0637DD9B@prvxmb04.microfocus.com>

The PM was also worried that we would not have enough time to get to this. Thanks, Tim, for making the time to look more into this.

While I don't have the customer data I would like, it is a safe assumption that 1/3 to 1/2 of our current customers are running clusters that will need this type of process. Remember, it is all about making SES easier to use than any other Ceph system.
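To illustrate the "running service = role" idea from this thread, here is a hypothetical sketch of how a minion-side module could map active ceph systemd units to DeepSea-style roles. It is not the actual importer implementation, and the unit-prefix-to-role mapping shown is an assumption.

    # Hypothetical illustration of the "running service = role" idea from the
    # import RFC -- not the actual importer code. Intended to run on a minion
    # as a custom execution module.
    import subprocess

    UNIT_TO_ROLE = {
        'ceph-mon@': 'mon',
        'ceph-mds@': 'mds',
        'ceph-osd@': 'storage',
        'ceph-radosgw@': 'rgw',
    }

    def detected_roles():
        """Return the roles suggested by the active ceph units on this host."""
        output = subprocess.check_output(
            ['systemctl', 'list-units', '--type=service', '--state=active',
             '--no-legend', '--plain']).decode()
        roles = set()
        for line in output.splitlines():
            fields = line.split()
            unit = fields[0] if fields else ''
            for prefix, role in UNIT_TO_ROLE.items():
                if unit.startswith(prefix):
                    roles.add(role)
        return sorted(roles)

A runner on the master could then collect these per-minion role lists and emit the corresponding role-* lines for policy.cfg.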
From ncutler at suse.cz Sat May 27 00:59:44 2017
From: ncutler at suse.cz (Nathan Cutler)
Date: Sat, 27 May 2017 08:59:44 +0200
Subject: [Deepsea-users] RFC: Experimental import of existing cluster
In-Reply-To: 
References: 
Message-ID: <33d99b2d-f395-309a-6f8a-77aaf3b1a02e@suse.cz>

Hi Tim:

Your overall approach of "running service = DS role" is sound, and I like it.

Speaking as a user here... Before running it, I'd be concerned that DS might clobber something in ceph.conf. So I would make a backup of it and then, after running all the DS stages, I'd diff against the backup and analyze any changes that DS made.

Nathan

-- 
Nathan Cutler
Software Engineer Distributed Storage
SUSE LINUX, s.r.o.
Tel.: +420 284 084 037

From jschmid at suse.de Mon May 29 09:35:25 2017
From: jschmid at suse.de (Joshua Schmid)
Date: Mon, 29 May 2017 17:35:25 +0200
Subject: [Deepsea-users] Deepsea version 0.7.9.1
Message-ID: <20170529173525.4edd28c7@d155.suse.de>

Hey,

due to an issue in the 'osd.py' custom module that was introduced in the last version, we had to push two fixes and release a new minor version, 0.7.9.1. That includes:

- Internal vs external rep of disks #290
- byte to gb conversion #289

Thanks,
Joshua