[Deepsea-users] Error during Stage 3 (deploy) of DeepSea

LOIC DEVULDER loic.devulder at mpsa.com
Fri Jan 20 02:38:05 MST 2017


Hi all!

I'm seeing some strange behaviour with Stage 3 of DeepSea (I'm pretty sure I've read the SES4 manual correctly :-)).

I get an error in the storage part:
ylal8020:~ # salt-run -l info state.orch ceph.stage.deploy
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "default/None/cluster.yml": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "default/None/ceph_conf.yml": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "default/None/minions/ylal8020.inetpsa.com_master.yml": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "global.yml": Can't parse as a valid yaml dictionary
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "None/cluster.yml": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "None/ceph_conf.yml": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Ignoring pillar stack template "None/minions/ylal8020.inetpsa.com_master.yml": can't find from root dir "/srv/pillar/ceph/stack"
[INFO    ] Loading fresh modules for state activity
[INFO    ] Fetching file from saltenv 'base', ** skipped ** latest already in cache u'salt://ceph/stage/deploy/init.sls'
[INFO    ] Fetching file from saltenv 'base', ** skipped ** latest already in cache u'salt://ceph/stage/deploy/default.sls'
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
firewall                 : ['enabled on minion ylxl0080.inetpsa.com', 'enabled on minion ylal8300.inetpsa.com', 'enabled on minion ylxl0050.inetpsa.com', 'enabled on minion ylal8020.inetpsa.com', 'enabled on minion ylal8290.inetpsa.com', 'enabled on minion ylxl0060.inetpsa.com', 'enabled on minion ylxl0070.inetpsa.com', 'enabled on minion ylal8030.inetpsa.com']
[INFO    ] Runner completed: 20170120094422738754
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
fsid                     : valid
public_network           : valid
public_interface         : valid
cluster_network          : valid
cluster_interface        : valid
monitors                 : valid
master_role              : valid
mon_host                 : valid
mon_initial_members      : valid
time_server              : valid
fqdn                     : valid
storage                  : ['Storage nodes ylxl0080.inetpsa.com,ylxl0050.inetpsa.com,ylxl0060.inetpsa.com,ylxl0070.inetpsa.com missing storage attribute.  Check /srv/pillar/ceph/stack/ceph/minions/*.yml and /srv/pillar/ceph/stack/default/ceph/minions/*.yml']
[INFO    ] Runner completed: 20170120094423541025
[INFO    ] Running state [Fail on Warning is True] at time 09:44:24.218000
[INFO    ] Executing state salt.state for Fail on Warning is True
[ERROR   ] No highstate or sls specified, no execution made
[INFO    ] Completed state [Fail on Warning is True] at time 09:44:24.219177
ylal8020.inetpsa.com_master:
----------
          ID: ready check failed
    Function: salt.state
        Name: Fail on Warning is True
      Result: False
     Comment: No highstate or sls specified, no execution made
     Started: 09:44:24.218000
    Duration: 1.177 ms
     Changes:

Summary for ylal8020.inetpsa.com_master
------------
Succeeded: 0
Failed:    1
------------
Total states run:     1
Total run time:   1.177 ms
[WARNING ] Could not write out jid file for job 20170120094421330276. Retrying.
[WARNING ] Could not write out jid file for job 20170120094421330276. Retrying.
[WARNING ] Could not write out jid file for job 20170120094421330276. Retrying.
[WARNING ] Could not write out jid file for job 20170120094421330276. Retrying.
[WARNING ] Could not write out jid file for job 20170120094421330276. Retrying.
[ERROR   ] prep_jid could not store a jid after 5 tries.
[ERROR   ] Could not store job cache info. Job details for this run may be unavailable.
[INFO    ] Runner completed: 20170120094421330276

Re-executing Stage 2 doesn't change anything.

I noticed that the role-storage directory was empty, so I tried to create the sls files inside it, but nothing changed (I re-executed Stage 2 after the change).
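
If I read the SES4 manual correctly, the error about the missing storage attribute means each storage minion's file under /srv/pillar/ceph/stack/default/ceph/minions/ should end up with a storage section roughly like the one below. The device names are just examples from my nodes, not what DeepSea actually generated for me:

    storage:
      data+journals: []
      osds:
        - /dev/sdb
        - /dev/sdc

But I'm not sure whether I'm supposed to write this by hand or whether Stage 2 should generate it from the proposals.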

Does anyone have an idea of what I can do?

Regards / Cordialement,
___________________________________________________________________
PSA Groupe
Loïc Devulder (loic.devulder at mpsa.com)
Senior Linux System Engineer / Linux HPC Specialist
DF/DDCE/ISTA/DSEP/ULES - Linux Team
BESSONCOURT / EXTENSION RIVE DROITE / B19
Internal postal address: SX.BES.15
Phone Incident - Level 3: 22 94 39
Phone Incident - Level 4: 22 92 40
Office: +33 (0)9 66 66 69 06 (27 69 06)
Mobile: +33 (0)6 87 72 47 31
___________________________________________________________________
