[Deepsea-users] NTP configuration

LOIC DEVULDER loic.devulder at mpsa.com
Mon Jan 23 08:40:04 MST 2017


Hi again,

After reading the validate.py file I found another way to disable the NTP configuration from DeepSea: I can set time_service to disabled in the global.yml file:
ylal8020:/srv/pillar/ceph/proposals # cat config/stack/default/global.yml
time_service: disabled

It's better because I can see that time_server is disabled when I run stage 3, but I get an error with sntp. DeepSea tries to execute an sntp command with no hostname, so it fails:
ylal8020:/srv/pillar/ceph/proposals # salt-run state.orch ceph.stage.deploy
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
firewall                 : disabled
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
fsid                     : valid
public_network           : valid
public_interface         : valid
cluster_network          : valid
cluster_interface        : valid
monitors                 : valid
storage                  : valid
master_role              : valid
mon_host                 : valid
mon_initial_members      : valid
time_server              : disabled
fqdn                     : valid
[ERROR   ] Run failed on minions: ylxl0080.inetpsa.com, ylal8300.inetpsa.com, ylxl0050.inetpsa.com, ylal8020.inetpsa.com, ylal8290.inetpsa.com, ylxl0060.inetpsa.com, ylxl0070.inetpsa.com, ylal8030.inetpsa.com
Failures:
    ylxl0080.inetpsa.com:
      Name: ntp - Function: pkg.installed - Result: Clean Started: - 15:48:10.924640 Duration: 711.651 ms
    ----------
              ID: sync time
        Function: cmd.run
            Name: sntp -S -c
          Result: False
         Comment: Command "sntp -S -c " run
         Started: 15:48:11.637698
        Duration: 59.368 ms
         Changes:
                  ----------
                  pid:
                      16982
                  retcode:
                      1
                  stderr:
                      /usr/sbin/sntp: The 'concurrent' option requires an argument.
                      sntp - standard Simple Network Time Protocol client program - Ver. 4.2.8p9
                      Usage:  sntp [ -<flag> [<val>] | --<name>[{=| }<val>] ]... \
                                [ hostname-or-IP ...]
                      Try 'sntp --help' for more information.
                  stdout:

    Summary for ylxl0080.inetpsa.com
    ------------
    Succeeded: 1 (changed=1)
    Failed:    1
    ------------
    Total states run:     2
    Total run time: 771.019 ms
    ylal8300.inetpsa.com:
      Name: ntp - Function: pkg.installed - Result: Clean Started: - 15:48:10.856634 Duration: 817.354 ms
    ----------
              ID: sync time
        Function: cmd.run
            Name: sntp -S -c
          Result: False
         Comment: Command "sntp -S -c " run
         Started: 15:48:11.675171
        Duration: 55.126 ms
         Changes:
                  ----------
                  pid:
                      2658
                  retcode:
                      1
                  stderr:
                      /usr/sbin/sntp: The 'concurrent' option requires an argument.
                      sntp - standard Simple Network Time Protocol client program - Ver. 4.2.8p9
                      Usage:  sntp [ -<flag> [<val>] | --<name>[{=| }<val>] ]... \
                                [ hostname-or-IP ...]
                      Try 'sntp --help' for more information.
                  stdout:

We can see that time_service has disappeared and has been replaced by "time_server: disabled". According to validate.py the time_server value is set to disabled when time_service is disabled, so that seems to be normal. The failure itself is just sntp complaining that its -c option got no argument: with time_server empty, the rendered command is literally "sntp -S -c ".

I tried to find out how DeepSea executes this cmd.run, but my Salt knowledge seems too limited :-(

I was able to bypass this error with Eric's information from the wiki: I added the disabled.sls file and "time_init: disabled" in the /srv/pillar/ceph/proposals/config/stack/default/global.yml file, and it works.
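For anyone else hitting this, the override from the wiki boils down to a no-op state file plus the pillar switch. Roughly something like the following (the exact path under /srv/salt is my assumption based on the time_init naming; check the customize wiki page for the real location):

```yaml
# /srv/salt/ceph/time/disabled.sls  (path is my guess -- adapt to your tree)
# Salt wants at least one state to run, so give it a no-op
nothing to do for time sync:
  test.nop
```

Together with "time_init: disabled" in global.yml, DeepSea then runs this no-op instead of the sntp command.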

Is there a way to add a test that skips this cmd.run when time_service/time_server is set to disabled? It's not a big deal if this is not possible, as adding the disabled.sls file is not too complicated, but it would be easier from a sysadmin's point of view :-)
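As a sketch of what such a test could look like inside the state file (pure guesswork on my side about where DeepSea keeps this state; the pillar keys are the ones validate.py reports on):

```yaml
{# Skip the sntp run entirely when the time service is disabled #}
{% if salt['pillar.get']('time_service', 'ntp') != 'disabled' %}
sync time:
  cmd.run:
    - name: "sntp -S -c {{ salt['pillar.get']('time_server') }}"
{% endif %}
```

That way an empty/disabled time_server would never reach the shell at all.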

Regards / Cordialement,
___________________________________________________________________
PSA Groupe
Loïc Devulder (loic.devulder at mpsa.com)
Senior Linux System Engineer / Linux HPC Specialist
DF/DDCE/ISTA/DSEP/ULES - Linux Team
BESSONCOURT / EXTENSION RIVE DROITE / B19
Internal postal address: SX.BES.15
Phone Incident - Level 3: 22 94 39
Phone Incident - Level 4: 22 92 40
Office: +33 (0)9 66 66 69 06 (27 69 06)
Mobile: +33 (0)6 87 72 47 31
___________________________________________________________________

This message may contain confidential information. If you are not the intended recipient, please advise the sender immediately and delete this message. For further information on confidentiality and the risks inherent in electronic communication see http://disclaimer.psa-peugeot-citroen.com.


> -----Original Message-----
> From: LOIC DEVULDER - U329683
> Sent: Monday, January 23, 2017 09:14
> To: Discussions about the DeepSea management framework for Ceph
> <deepsea-users at lists.suse.com>
> Subject: RE: [Deepsea-users] NTP configuration
> 
> Hi,
> 
> Thanks Eric, that's what I need!
> 
> Maybe it could be a good idea to add this wiki link at the beginning of
> the DeepSea installation method paragraph in the SES documentation, to
> avoid this kind of dumb question :-)
> 
> 
> Regards / Cordialement,
> 
> > -----Original Message-----
> > From: deepsea-users-bounces at lists.suse.com
> > [mailto:deepsea-users-bounces at lists.suse.com] On behalf of Eric Jackson
> > Sent: Friday, January 20, 2017 20:53
> > To: Discussions about the DeepSea management framework for Ceph
> > <deepsea-users at lists.suse.com>
> > Subject: Re: [Deepsea-users] NTP configuration
> >
> > Hi Loic,
> >   The short answer is to tell DeepSea to do something else which
> > includes "do nothing".  Check the first example here
> > https://github.com/SUSE/DeepSea/wiki/customize.  I used ntp.
> >
> >   Salt is not fond of absence or empty configurations.  As many
> > defaults as we tried to put in, state files need at least a no-op.
> > The strategy throughout DeepSea is everything can be overridden since
> > I cannot predict what would need to be customized at a site.
> >
> > Eric
> >
> >
> > On Friday, January 20, 2017 03:16:13 PM LOIC DEVULDER wrote:
> > > Hi,
> > >
> > > During my tests with DeepSea I ran into a little problem: I am not
> > > able to remove the NTP configuration.
> > >
> > > Ok, I know: why would I want to do this? Simply because I already
> > > have NTP configured on my servers (we have a custom NTP config in my
> > > company).
> > >
> > > I tried to remove these lines from the
> > > /srv/pillar/ceph/proposals/config/stack/default/global.yml file:
> > > ylal8020:/srv/pillar # cat ceph/proposals/config/stack/default/global.yml
> > > time_server: '{{ pillar.get("master_minion") }}'
> > > time_service: ntp
> > >
> > > But I ran into a weird issue while trying to execute the configuration
> > > stage:
> > > ylal8020:/srv/pillar # salt-run state.orch ceph.stage.configure
> > > [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
> > > [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
> > > [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
> > > [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
> > > ylal8020.inetpsa.com_master:
> > >   Name: push.proposal - Function: salt.runner - Result: Changed Started: - 15:53:58.352226 Duration: 563.82 ms
> > >   Name: refresh_pillar1 - Function: salt.state - Result: Changed Started: - 15:53:58.916733 Duration: 589.218 ms
> > >   Name: configure.cluster - Function: salt.runner - Result: Changed Started: - 15:53:59.506662 Duration: 1003.844 ms
> > >   Name: refresh_pillar2 - Function: salt.state - Result: Changed Started: - 15:54:00.511544 Duration: 662.566 ms
> > >   Name: admin key - Function: salt.state - Result: Clean Started: - 15:54:01.174305 Duration: 455.286 ms
> > >   Name: mon key - Function: salt.state - Result: Clean Started: - 15:54:01.629844 Duration: 396.696 ms
> > >   Name: osd key - Function: salt.state - Result: Clean Started: - 15:54:02.026768 Duration: 391.508 ms
> > >   Name: igw key - Function: salt.state - Result: Clean Started: - 15:54:02.418500 Duration: 1192.624 ms
> > >   Name: mds key - Function: salt.state - Result: Clean Started: - 15:54:03.611366 Duration: 1172.492 ms
> > >   Name: rgw key - Function: salt.state - Result: Clean Started: - 15:54:04.784086 Duration: 1193.912 ms
> > >   Name: openattic key - Function: salt.state - Result: Clean Started: - 15:54:05.978226 Duration: 393.879 ms
> > >   Name: igw config - Function: salt.state - Result: Clean Started: - 15:54:06.372340 Duration: 1183.398 ms
> > >
> > > Summary for ylal8020.inetpsa.com_master
> > > -------------
> > > Succeeded: 12 (changed=4)
> > > Failed:     0
> > > -------------
> > > Total states run:     12
> > > Total run time:    9.199 s
> > >
> > > Ok, I know there is no direct error, but the pillar.items output is
> > > not good; some items are missing:
> > > ylal8020:/srv/pillar # salt '*' pillar.items
> > > ylal8300.inetpsa.com:
> > >     ----------
> > >     benchmark:
> > >         ----------
> > >         default-collection:
> > >             simple.yml
> > >         job-file-directory:
> > >             /run/cephfs_bench_jobs
> > >         log-file-directory:
> > >             /var/log/cephfs_bench_logs
> > >         work-directory:
> > >             /run/cephfs_bench
> > >     cluster:
> > >         ceph
> > >     master_minion:
> > >         ylal8020.inetpsa.com
> > >     mon_host:
> > >     mon_initial_members:
> > >         - ylal8290
> > >         - ylal8030
> > >         - ylal8300
> > >     roles:
> > >         - mon
> > > ylal8030.inetpsa.com:
> > >     ----------
> > >     benchmark:
> > >         ----------
> > >         default-collection:
> > >             simple.yml
> > >         job-file-directory:
> > >             /run/cephfs_bench_jobs
> > >         log-file-directory:
> > >             /var/log/cephfs_bench_logs
> > >         work-directory:
> > >             /run/cephfs_bench
> > >     cluster:
> > >         ceph
> > >     master_minion:
> > >         ylal8020.inetpsa.com
> > >     mon_host:
> > >     mon_initial_members:
> > >         - ylal8290
> > >         - ylal8030
> > >         - ylal8300
> > >     roles:
> > >         - mon
> > > ylal8020.inetpsa.com:
> > >     ----------
> > >     benchmark:
> > >         ----------
> > >         default-collection:
> > >             simple.yml
> > >         job-file-directory:
> > >             /run/cephfs_bench_jobs
> > >         log-file-directory:
> > >             /var/log/cephfs_bench_logs
> > >         work-directory:
> > >             /run/cephfs_bench
> > >     cluster:
> > >         ceph
> > >     master_minion:
> > >         ylal8020.inetpsa.com
> > >     mon_host:
> > >     mon_initial_members:
> > >         - ylal8290
> > >         - ylal8030
> > >         - ylal8300
> > >     roles:
> > >         - master
> > >         - admin
> > > ylxl0060.inetpsa.com:
> > >     ----------
> > >     benchmark:
> > >         ----------
> > >         default-collection:
> > >             simple.yml
> > >         job-file-directory:
> > >             /run/cephfs_bench_jobs
> > >         log-file-directory:
> > >             /var/log/cephfs_bench_logs
> > >         work-directory:
> > >             /run/cephfs_bench
> > >     cluster:
> > >         ceph
> > >     master_minion:
> > >         ylal8020.inetpsa.com
> > >     mon_host:
> > >     mon_initial_members:
> > >         - ylal8290
> > >         - ylal8030
> > >         - ylal8300
> > >     roles:
> > >         - storage
> > > ylxl0050.inetpsa.com:
> > >     ----------
> > >     benchmark:
> > >         ----------
> > >         default-collection:
> > >             simple.yml
> > >         job-file-directory:
> > >             /run/cephfs_bench_jobs
> > >         log-file-directory:
> > >             /var/log/cephfs_bench_logs
> > >         work-directory:
> > >             /run/cephfs_bench
> > >     cluster:
> > >         ceph
> > >     master_minion:
> > >         ylal8020.inetpsa.com
> > >     mon_host:
> > >     mon_initial_members:
> > >         - ylal8290
> > >         - ylal8030
> > >         - ylal8300
> > >     roles:
> > >         - storage
> > > ylal8290.inetpsa.com:
> > >     ----------
> > >     benchmark:
> > >         ----------
> > >         default-collection:
> > >             simple.yml
> > >         job-file-directory:
> > >             /run/cephfs_bench_jobs
> > >         log-file-directory:
> > >             /var/log/cephfs_bench_logs
> > >         work-directory:
> > >             /run/cephfs_bench
> > >     cluster:
> > >         ceph
> > >     master_minion:
> > >         ylal8020.inetpsa.com
> > >     mon_host:
> > >     mon_initial_members:
> > >         - ylal8290
> > >         - ylal8030
> > >         - ylal8300
> > >     roles:
> > >         - mon
> > > ylxl0080.inetpsa.com:
> > >     ----------
> > >     benchmark:
> > >         ----------
> > >         default-collection:
> > >             simple.yml
> > >         job-file-directory:
> > >             /run/cephfs_bench_jobs
> > >         log-file-directory:
> > >             /var/log/cephfs_bench_logs
> > >         work-directory:
> > >             /run/cephfs_bench
> > >     cluster:
> > >         ceph
> > >     master_minion:
> > >         ylal8020.inetpsa.com
> > >     mon_host:
> > >     mon_initial_members:
> > >         - ylal8290
> > >         - ylal8030
> > >         - ylal8300
> > >     roles:
> > >         - storage
> > > ylxl0070.inetpsa.com:
> > >     ----------
> > >     benchmark:
> > >         ----------
> > >         default-collection:
> > >             simple.yml
> > >         job-file-directory:
> > >             /run/cephfs_bench_jobs
> > >         log-file-directory:
> > >             /var/log/cephfs_bench_logs
> > >         work-directory:
> > >             /run/cephfs_bench
> > >     cluster:
> > >         ceph
> > >     master_minion:
> > >         ylal8020.inetpsa.com
> > >     mon_host:
> > >     mon_initial_members:
> > >         - ylal8290
> > >         - ylal8030
> > >         - ylal8300
> > >     roles:
> > >         - storage
> > >
> > > So my "simple" question is: how can I configure global.yml so that
> > > DeepSea does not configure NTP?
> > >
> > > Regards / Cordialement,
> > >
> > > _______________________________________________
> > > Deepsea-users mailing list
> > > Deepsea-users at lists.suse.com
> > > http://lists.suse.com/mailman/listinfo/deepsea-users

