[Deepsea-users] Strange behaviour with ceph.stage.configure

LOIC DEVULDER - U329683 loic.devulder at mpsa.com
Wed May 10 01:29:39 MDT 2017


I can add: it's OK now!

Thanks.

Regards / Cordialement,
___________________________________________________________________
PSA Groupe
Loïc Devulder (loic.devulder at mpsa.com)
Senior Linux System Engineer / HPC Specialist
DF/DDCE/ISTA/DSEP/ULES - Linux Team
BESSONCOURT / EXTENSION RIVE DROITE / B19
Internal postal address: SX.BES.15
Phone Incident - Level 3: 22 94 39
Phone Incident - Level 4: 22 92 40
Office: +33 (0)9 66 66 69 06 (27 69 06)
Mobile: +33 (0)6 87 72 47 31
___________________________________________________________________

This message may contain confidential information. If you are not the intended recipient, please advise the sender immediately and delete this message. For further information on confidentiality and the risks inherent in electronic communication see http://disclaimer.psa-peugeot-citroen.com.


> -----Original Message-----
> From: LOIC DEVULDER - U329683
> Sent: Wednesday, May 10, 2017 09:20
> To: Discussions about the DeepSea management framework for Ceph
> <deepsea-users at lists.suse.com>
> Subject: RE: [Deepsea-users] Strange behaviour with ceph.stage.configure
> 
> Hi Eric,
> 
> Thanks for your command and your explanation! And no problem about the
> delay, your response was quick :-)
> 
> We will try this. The likely reason is that we added some minions to the
> Salt configuration (future new OSDs) and we didn't run Stage 0 before
> executing the configure stage.
> I was thinking that DeepSea would only make changes on the "old" nodes,
> not on the new ones as well, since I hadn't run Stage 0 for them.
> So it was purely a chair/keyboard interface problem :D
> 
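> If I understand correctly, the sequence should simply be to run Stage 0
> first and then the configure stage again:
> 
> salt-run state.orch ceph.stage.0
> salt-run state.orch ceph.stage.configure
> 
> so that the custom modules get synced to the new minions before the SLS
> files are rendered.
> 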
> I will let you know if it's OK after the tests.
> 
> Regards / Cordialement,
> 
> > -----Original Message-----
> > From: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users-
> > bounces at lists.suse.com] On Behalf Of Eric Jackson
> > Sent: Tuesday, May 9, 2017 23:48
> > To: Discussions about the DeepSea management framework for Ceph
> > <deepsea-users at lists.suse.com>
> > Subject: Re: [Deepsea-users] Strange behaviour with ceph.stage.configure
> >
> > Hi Loic,
> >   The "has no attribute" is Salt's friendly description that it can't
> > find the Salt module.  The keyring.py lives in /srv/salt/_modules.
> > The keyring.secret extracts the value of the "key" from a Ceph keyring.
> > Nothing too exciting.
> >
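> > Roughly speaking -- this is only a sketch, not the exact code in
> > /srv/salt/_modules/keyring.py -- the module boils down to something like:
> >
> > # keyring.py (sketch): a custom Salt execution module
> > import os
> >
> > def secret(filename):
> >     """Return the base64 value after 'key =' in a Ceph keyring file."""
> >     if not os.path.exists(filename):
> >         return ""
> >     with open(filename) as keyring:
> >         for line in keyring:
> >             if line.strip().startswith("key"):
> >                 return line.split("=", 1)[1].strip()
> >     return ""
> >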
> >   Normally, you can call it with
> >
> > salt 'admin.ceph' keyring.secret \
> > /srv/salt/ceph/admin/cache/ceph.client.admin.keyring #That's one line
> >
> > assuming your master node is called 'admin.ceph'.  Now, if you run the
> > command, I expect you will get an error with
> >
> > keyring.secret not available
> >
> > There are a few reasons this can happen.  My first suggestion is to run
> > the command to sync the modules to all minions.  Normally, this happens
> > in Stage 0.  The command is
> >
> > salt '*' saltutil.sync_modules
> >
> > If you get only a list of minion names, then nothing needed to be copied.
> > Otherwise, you will see a list of modules copied to each minion.
> > (Rerun the command to confirm that nothing more needs to be copied.)
> >
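> > One way to double-check afterwards that the module really landed on a
> > minion (assuming the standard Salt sys module is available) is:
> >
> > salt 'admin.ceph' sys.doc keyring.secret
> >
> > which should print the function's docstring rather than an empty result.
> >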
> > Once copied, try the example from above.  You should get a fairly long
> > string of characters (i.e. the keyring secret).
> >
> > If that does not work, then something else is going on.  At that
> > point, let me see what the results are from the Salt command.
> >
> > If it is working now, then the question is how you got here.  Did Stage
> > 0 run successfully?  If sync_modules did not run successfully, then I
> > expect other things such as Salt mines will not work either, giving
> > equally interesting errors.
> >
> > Sorry for the delay in answering... email troubles.
> >
> > Eric
> >
> >
> > On Tuesday, May 09, 2017 03:21:58 PM LOIC DEVULDER - U329683 wrote:
> > > Hi guys!
> > >
> > > I'm seeing some strange behaviour on my Ceph cluster: I try to
> > > "simply" execute the configure stage with DeepSea and I get the
> > > following error about the admin key:
> > >
> > > ylal8620:~ # salt-run state.orch ceph.stage.configure
> > > [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
> > > [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
> > > [CRITICAL] No suitable gitfs provider module is installed.
> > > [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
> > > [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
> > > [ERROR   ] Run failed on minions: ylal8620.inetpsa.com
> > > Failures:
> > >     ylal8620.inetpsa.com:
> > >         Data failed to compile:
> > >     ----------
> > >         Rendering SLS 'base:ceph.admin.key.default' failed: Jinja variable 'salt.utils.templates.AliasedLoader object' has no attribute 'keyring.secret'
> > >
> > > ylal8620.inetpsa.com_master:
> > >   Name: push.proposal - Function: salt.runner - Result: Changed Started: - 17:13:31.858716 Duration: 481.519 ms
> > >   Name: refresh_pillar1 - Function: salt.state - Result: Changed Started: - 17:13:32.340671 Duration: 624.121 ms
> > >   Name: configure.cluster - Function: salt.runner - Result: Changed Started: - 17:13:32.965248 Duration: 902.821 ms
> > >   Name: refresh_pillar2 - Function: salt.state - Result: Changed Started: - 17:13:33.868575 Duration: 636.625 ms
> > > ----------
> > >           ID: admin key
> > >     Function: salt.state
> > >       Result: False
> > >      Comment: Run failed on minions: ylal8620.inetpsa.com
> > >               Failures:
> > >                   ylal8620.inetpsa.com:
> > >                       Data failed to compile:
> > >                   ----------
> > >                       Rendering SLS 'base:ceph.admin.key.default' failed: Jinja variable 'salt.utils.templates.AliasedLoader object' has no attribute 'keyring.secret'
> > >      Started: 17:13:34.505359
> > >     Duration: 521.657 ms
> > >      Changes:
> > >
> > > Summary for ylal8620.inetpsa.com_master
> > > ------------
> > > Succeeded: 4 (changed=4)
> > > Failed:    1
> > > ------------
> > > Total states run:     5
> > > Total run time:   3.167 s
> > > I don't understand what I'm doing wrong... The same thing works fine
> > > on my test cluster.
> > >
> > > I have searched for keyring.secret (I know that it's a Python
> > > function) and the Salt configuration seems to be OK:
> > >
> > > ylal8620:~ # grep -r keyring.secret /srv/*
> > > Binary file /srv/modules/runners/populate.pyc matches
> > > /srv/modules/runners/populate.py:        Track cluster name, writer, root directory and a keyring secret
> > > /srv/modules/runners/populate.py:        The master role can access all keyring secrets
> > > /srv/salt/ceph/admin/key/default.sls:      secret: {{ salt['keyring.secret'](keyring_file) }}
> > > /srv/salt/ceph/igw/key/default-shared.sls:      secret: {{ salt['keyring.secret'](keyring_file) }}
> > > /srv/salt/ceph/igw/key/default.sls:      secret: {{ salt['keyring.secret'](keyring_file) }}
> > > /srv/salt/ceph/mds/key/default-shared.sls:      secret: {{ salt['keyring.secret'](keyring_file) }}
> > > /srv/salt/ceph/mds/key/default.sls:      secret: {{ salt['keyring.secret'](keyring_file) }}
> > > /srv/salt/ceph/mon/key/default.sls:      mon_secret: {{ salt['keyring.secret'](keyring_file) }}
> > > /srv/salt/ceph/mon/key/default.sls:      admin_secret: {{ salt['keyring.secret'](admin_keyring) }}
> > > /srv/salt/ceph/openattic/key/default.sls:      secret: {{ salt['keyring.secret'](keyring_file) }}
> > > /srv/salt/ceph/osd/key/default.sls:      secret: {{ salt['keyring.secret'](keyring_file) }}
> > > /srv/salt/ceph/rgw/key/default-shared.sls:      secret: {{ salt['keyring.secret'](keyring_file) }}
> > > /srv/salt/ceph/rgw/key/default.sls:      secret: {{ salt['keyring.secret'](keyring_file) }}
> > >
> > > ylal8620:~ # cat /srv/salt/ceph/admin/key/default.sls
> > >
> > > {# The mon creation needs this key as well #}
> > > {# Named the file the same as other components, there is only one keyring #}
> > > {% set keyring_file = "/srv/salt/ceph/admin/cache/ceph.client.admin.keyring" %}
> > > {{ keyring_file }}:
> > >   file.managed:
> > >     - source:
> > >       - salt://ceph/admin/files/keyring.j2
> > >     - template: jinja
> > >     - user: salt
> > >     - group: salt
> > >     - mode: 600
> > >     - makedirs: True
> > >     - context:
> > >       secret: {{ salt['keyring.secret'](keyring_file) }}
> > >     - fire_event: True
> > >
> > > ylal8620:~ # cat /srv/salt/ceph/admin/cache/ceph.client.admin.keyring
> > > [client.admin]
> > > 	key = AQCk1plYAAAAABAAAUCiFWAcXJ3HCXizdojlag==
> > > 	caps mds = "allow *"
> > > 	caps mon = "allow *"
> > > 	caps osd = "allow *"
> > > ylal8620:~ # cat /etc/ceph/ceph.c
> > > ceph.client.admin.keyring      ceph.client.openattic.keyring  ceph.conf
> > > ceph.conf_new                  ceph.conf_old
> > > ylal8620:~ # cat /etc/ceph/ceph.client.admin.keyring
> > > [client.admin]
> > > 	key = AQCk1plYAAAAABAAAUCiFWAcXJ3HCXizdojlag==
> > > 	caps mds = "allow *"
> > > 	caps mon = "allow *"
> > > 	caps osd = "allow *"
> > > ylal8620:~ # ll /etc/ceph/
> > > total 28
> > > -rw------- 1 root      root      129 Feb  7 16:13 ceph.client.admin.keyring
> > > -rw-rw---- 1 openattic openattic 111 Feb  7 17:43 ceph.client.openattic.keyring
> > > -rw-r--r-- 1 root      root      785 May  9 16:12 ceph.conf
> > > -rw-r--r-- 1 root      root      785 Feb 15 09:41 ceph.conf_new
> > > -rw-r--r-- 1 root      root      336 Feb  9 14:05 ceph.conf_old
> > > -rwxr-xr-x 1 root      root      658 Feb 14 14:47 osd_location.sh
> > > -rwxr-xr-x 1 root      root       92 Feb  4 01:22 rbdmap
> > >
> > > If anyone has an idea, it would be great!
> > >
> > > Regards / Cordialement,
> > >
> > > _______________________________________________
> > > Deepsea-users mailing list
> > > Deepsea-users at lists.suse.com
> > > http://lists.suse.com/mailman/listinfo/deepsea-users

