[storage-beta] Question on SES5/salt and bluestore

Forghan, Armoun armoun.forghan at intel.com
Wed Jun 21 15:57:42 MDT 2017


Hi Eric,

Sorry for the long email, but we tried several things, details below. 
Questions: Shouldn't Salt pick up the profiles from policy.cfg in the proposals directory? We also don't understand why, when we run stage 1, it creates different profiles in multiple profile directories even though the hardware on the five OSD nodes is the same.
Could we get half an hour of your time in the next few days to get this resolved, please?

1) Per your recommendation, we copied all our OSD yml files (ceph1.yml, ceph2.yml, ... ceph5.yml) from their current path in the proposals directory to /srv/pillar/ceph/stack/ceph/minions/ceph1.yml, ceph2.yml, etc.
We reran stage 2 (no errors), but stage 3 gave us the same errors as before.

2) We modified our policy.cfg (see the bottom of this email) to use full paths, then reran stage 2, which failed because it could not find the yml files.

3) We went back to our original policy.cfg (with relative paths) and reran stages 2 and 3 with the proper yml files in the proposals/profile directory: stage 2 finished with no errors, but stage 3 gives us the same errors as before. We also did not get any warnings or errors about files not being found. Rerunning salt 'ceph*' pillar.get ceph:storage never shows us anything but the names of the minions (see the command sketch after this list). We have also run stage 3 with debug enabled; it produces a lot of output, but nothing that points us to the solution.

4) As a test, we also tried going back to stage 1 to rebuild the profiles directory. However, it seems that stage 1 rebuilds the profiles differently each time and creates multiple profile folders, even though the hardware has not changed and the disks are the same (7x HDD, 2x SSD/NVMe, 1x SSD boot). We did not run stage 2 or 3 after that, because these profiles do not reflect the bluestore yml files that we had to create by hand.
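
For reference, the sequence we have been running looks roughly like this (a sketch assuming the standard DeepSea orchestration names; 'ceph*' matches our five storage minions):

    # refresh the pillar from policy.cfg and the proposals tree
    salt-run state.orch ceph.stage.2

    # check whether the storage configuration actually reached the pillar
    salt 'ceph*' pillar.get ceph:storage
    # expected: the osds tree from the yml files; observed: only the minion names

    # deploy
    salt-run state.orch ceph.stage.3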


# Cluster assignment
cluster-ceph/cluster/*.sls
# Hardware Profile
/srv/pillar/ceph/proposals/profile-1Intel1490GB-7WDC3726GB-1/stack/default/ceph/minions/ceph*.yml
/srv/pillar/ceph/proposals/profile-1Intel1490GB-7WDC3726GB-1/cluster/ceph*.sls
# Common configuration
config/stack/default/global.yml
config/stack/default/ceph/cluster.yml
# Role assignment
role-master/cluster/*.sls
role-admin/cluster/*.sls
role-mon/cluster/ceph[123].sls
role-mon/stack/default/ceph/minions/ceph[123].yml
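
For comparison, the relative-path form of the two profile lines (a sketch, assuming policy.cfg entries are resolved relative to /srv/pillar/ceph/proposals) would be roughly:

    # Hardware Profile (relative paths)
    profile-1Intel1490GB-7WDC3726GB-1/stack/default/ceph/minions/ceph*.yml
    profile-1Intel1490GB-7WDC3726GB-1/cluster/ceph*.sls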



Thanks
af
Armoun Forghan
Non-volatile Memory Solutions Group (NSG)
Intel Corporation
Office: +1-480-552-8352
Cell: +1-602-284-4495


-----Original Message-----
From: Eric Jackson [mailto:ejackson at suse.com] 
Sent: Wednesday, June 21, 2017 10:26 AM
To: Forghan, Armoun <armoun.forghan at intel.com>
Cc: storage-beta at lists.suse.com
Subject: Re: [storage-beta] Question on SES5/salt and bluestore

Well, that's a problem.  Which file are you using for ceph1?  Is that referenced by the policy.cfg or the stack.cfg?  When you run Stage 2, are you getting any warnings about some files being ignored?

To make progress without diving too much into the above, add the contents of the yaml file to /srv/pillar/ceph/stack/ceph/minions/ceph1.DOMAIN.yml.  Run Stage 2 and rerun

salt 'ceph1*' pillar.get ceph:storage

You should see the same output as the yaml file.  If not, the next step is to turn on debugging on the Salt master and figure out where the error is.
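
One quick way to get that debug output (a rough sketch using standard Salt options, nothing DeepSea-specific) is to raise the log level when running the stage:

    salt-run --log-level=debug state.orch ceph.stage.2

or set "log_level: debug" in /etc/salt/master, restart the salt-master service, and watch /var/log/salt/master while the stage runs.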

On Wednesday, June 21, 2017 02:48:18 PM Forghan, Armoun wrote:
> Hi Eric, no worries, appreciate the help. Here is the output of
> salt 'ceph1*' pillar.get ceph:storage
> 
>     ceph1:
> 
> I assume the output should include the disks, but I don't know what is
> missing to fix it. Stage 2 runs without errors.
> 
> 
> Thanks
> af
> Armoun Forghan
> Non-volatile Memory Solutions Group (NSG) Intel Corporation
> Office: +1-480-552-8352
> Cell: +1-602-284-4495
> 
> -----Original Message-----
> From: Eric Jackson [mailto:ejackson at suse.com]
> Sent: Wednesday, June 21, 2017 6:13 AM
> To: storage-beta at lists.suse.com
> Cc: Forghan, Armoun <armoun.forghan at intel.com>
> Subject: Re: [storage-beta] Question on SES5/salt and bluestore
> 
> Hi Armoun,
>   I am sorry for the delayed response.  I moved houses over the 
> weekend and discovered that my new ISP is blocking port 25 outgoing.  
> My response to you is sitting in the outgoing queue of my other PC.  I 
> found a workaround finally.
> :)
> 
>   Since you had the same results, let's verify that Salt has the
> configuration.  What does the following return:
> 
> salt 'ceph1*' pillar.get ceph:storage
> 
>   Does that result match your yaml file?
> 
> Eric
> 
> On Friday, June 16, 2017 05:57:48 PM Forghan, Armoun wrote:
> > Hi Eric,
> > 
> > This is very helpful, thank you! I made the changes you recommended;
> > however, I still get the same errors in stage 3! ;(
> > 
> > BTW, stage 2 does check for syntax errors in the profile: the first
> > time I ran it, I didn't have the two-space indent in one of the files
> > and it complained, but once I fixed that, stage 2 was good.
> > 
> > 
> > Thanks
> > af
> > Armoun Forghan
> > Non-volatile Memory Solutions Group (NSG) Intel Corporation
> > Office: +1-480-552-8352
> > Cell: +1-602-284-4495
> > 
> > 
> > -----Original Message-----
> > From: Eric Jackson [mailto:ejackson at suse.com]
> > Sent: Thursday, June 15, 2017 3:29 PM
> > To: storage-beta at lists.suse.com
> > Cc: Forghan, Armoun <armoun.forghan at intel.com>
> > Subject: Re: [storage-beta] Question on SES5/salt and bluestore
> > 
> > On Thursday, June 15, 2017 08:49:28 PM Forghan, Armoun wrote:
> > > ceph:
> > > 
> > > storage:
> > >   osds:
> > >    /dev/disk/by-id/ata-WDC_WD4000FYYZ-01UL1B3_WD-WMC130F66YMK
> > >    /dev/disk/by-id/ata-WDC_WD4000FYYZ-01UL1B3_WD-WMC130F8J763
> > >    /dev/disk/by-id/ata-WDC_WD4000FYYZ-01UL1B3_WD-WMC130F0TRU8
> > >    /dev/disk/by-id/ata-WDC_WD4000FYYZ-01UL1B3_WD-WMC130F02W68
> > >    /dev/disk/by-id/ata-WDC_WD4000FYYZ-01UL1B3_WD-WMC130F1JHHZ
> > >    /dev/disk/by-id/ata-WDC_WD4000FYYZ-01UL1B3_WD-WMC130F5JZE8
> > >    /dev/disk/by-id/ata-WDC_WD4000FYYZ-01UL1B3_WD-WMC130FA4A28
> > >    wal:/dev/disk/by-id/nvme-SNVMe_INTEL_SSDPE2MD01PHFT5462000Q1P6JGN
> > >    db:/dev/disk/by-id/nvme-SNVMe_INTEL_SSDPE2MD01PHFT5462000Q1P6JGN
> > >    format:bluestore
> > 
> > I'm guessing you meant for each spinning drive to use the NVMe for
> > the wal and db.  Each OSD device path is a key, and currently we need
> > at least one attribute per OSD.
> > Here's a snippet:
> > 
> > ceph:
> >   storage:
> >     osds:
> >       /dev/disk/by-id/ata-WDC_WD4000FYYZ-01UL1B3_WD-WMC130F66YMK:
> >         wal: /dev/disk/by-id/nvme-SNVMe_INTEL_SSDPE2MD01PHFT5462000Q1P6JGN
> >         db: /dev/disk/by-id/nvme-SNVMe_INTEL_SSDPE2MD01PHFT5462000Q1P6JGN
> >         format: bluestore
> > 
> >       /dev/disk/by-id/ata-WDC_WD4000FYYZ-01UL1B3_WD-WMC130F8J763:
> >         wal: /dev/disk/by-id/nvme-SNVMe_INTEL_SSDPE2MD01PHFT5462000Q1P6JGN
> >         db: /dev/disk/by-id/nvme-SNVMe_INTEL_SSDPE2MD01PHFT5462000Q1P6JGN
> >         format: bluestore
> > 
> > I have attached a converted file.  BTW, make sure you use two-space
> > indents for each level.
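> > 
> > (A quick way to sanity check the yaml syntax locally, assuming PyYAML is
> > available on the admin node, is something like
> > 
> >     python -c 'import yaml,sys; print(yaml.safe_load(open(sys.argv[1])))' ceph1.yml
> > 
> > which either prints the parsed structure or raises an error that points
> > at the offending line.)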
> > 
> > We do have yaml validation, but I believe we need to extend it to
> > help catch some of these issues.  Let me know if this helps.
> > 
> > Eric
> > 
> > 

