[Deepsea-users] filestore to bluestore migration

Thomas Sundell thsundel at gmail.com
Fri Dec 8 07:50:20 MST 2017


Thanks, Eric, for your response; see inline comments.

On Fri, Dec 8, 2017 at 3:10 PM, Eric Jackson <ejackson at suse.com> wrote:
> Hi Thomas,
>   Were you running Stage 2 after making changes to the policy.cfg?  The reason
> I ask is that the behavior you describe (i.e. migration runs, but nothing
> changed) gives the impression that the migration "thinks" that you are already
> configured.
>
>   Try running
>
> salt -I roles:storage osd.report
>
> That will compare the existing configuration in the pillar against the
> configuration of the storage node.  If you see messages like
>
> All configured OSDs are active

salt -I roles:storage osd.report
osd1.ceph.mydomain.fi:
All configured OSDS are active
osd2.ceph.mydomain.fi:
All configured OSDS are active
osd3.ceph.mydomain.fi:
All configured OSDS are active
osd4.ceph.mydomain.fi:
All configured OSDS are active

>
> and a query of the pillar
>
> salt -I roles:storage pillar.get ceph
>

salt -I roles:storage pillar.get ceph
osd1.ceph.mydomain.fi:
osd2.ceph.mydomain.fi:
osd3.ceph.mydomain.fi:
osd4.ceph.mydomain.fi:
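
As you can see, nothing comes back for any of the nodes.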

> reflects the policy.cfg correctly, then you should be migrated to bluestore.
> To manually verify, pick a storage node and run
>
> cat /var/lib/ceph/osd/ceph-*/type
>

cat /var/lib/ceph/osd/ceph-*/type
filestore
filestore
filestore
filestore
filestore
filestore
filestore
filestore

We only have 4 storage nodes in this test cluster, and each of them has 8 OSDs.

> ***
> With respect to the migration, the commands to migrate one node are in
> /srv/salt/ceph/migrate/osds/default.sls.  Effectively,
>
> salt 'data1*' state.apply ceph.redeploy.osds
> salt 'admin*' state.apply ceph.remove.migrated
>
> The rest of the state file is waiting for a healthy cluster.  Notice that the
> cleanup is batched (i.e. the 'old' OSDs are not removed until the end).
>
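Ok, good to know. I assume that while the state is waiting for a
healthy cluster I can just follow along from the admin node with the
standard ceph commands, e.g.

ceph -s
ceph osd tree

(nothing DeepSea-specific, just to watch the rebalance finish).
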
> ***
> With respect to the profile-default and the one generated by the
> ceph.migrate.policy orchestration, either should be fine given some conditions.
>
> Here's a quick history: DeepSea originally tried to encode a useful name for a
> profile for a range of defaults.  The general issue is that most sites fell
> outside of the guessed defaults and admins were left hand-crafting their
> hardware profiles.
>
> The strategy now is to create 'profile-default' with a hardcoded 1 to 5 ratio
> or standalone OSDs depending on the hardware available.
>
> If you prefer the configuration provided by the profile-default, then feel free
> to use it.  The only caveat is to verify that devices have not been left off
> or used in ways not originally intended.  The purpose of the
> ceph.migrate.policy was to keep any manual modifications and only change
> the type.  The journal media would be used for the wal and db for bluestore.
>
> Feel free to experiment with `salt-run proposal.help` and `salt-run
> proposal.peek`.  Once you decide on what you really want, the migration will
> carry it out.
>

Another thing struck me: if the "old" profile was not correctly
formatted when they deployed the cluster, could "ceph.migrate.policy"
have created a "faulty" one? Here is a snippet of the pre-migration
yml:

storage:
  data+journals: []
  osds:
  - /dev/disk/by-id/ata-ST4000VN0001-1SF178_Z4F0PS2P

Because when I create a new proposal (salt-run proposal.populate
name=my7to1profile ratio=7), I see several attributes that are not in
the migrated profile, like db, db_size, wal and wal_size.
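
Roughly, in the populated proposal every device gets its own block of
attributes, something like this (the device IDs and sizes below are
made up, and I may not have the exact layout right, it's just to show
the extra per-device attributes):

storage:
  osds:
    /dev/disk/by-id/ata-ST4000VN0001-XXXXXXXX:
      format: bluestore
      db: /dev/disk/by-id/nvme-XXXXXXXX
      db_size: 500M
      wal: /dev/disk/by-id/nvme-XXXXXXXX
      wal_size: 500M

whereas the migrated profile only has the flat osds list shown above.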

One last thing: if I were to set the new my7to1profile in policy.cfg,
would "salt-run state.orch ceph.migrate.osds" magically migrate the
old setup to the new one without destroying our test data :) ?

Thomas

