[Deepsea-users] filestore to bluestore migration

Thomas Sundell thsundel at gmail.com
Mon Dec 11 05:26:28 MST 2017


Back at work.

On Fri, Dec 8, 2017 at 5:57 PM, Eric Jackson <ejackson at suse.com> wrote:

> It's okay to pull yaml files, or even parts of them, from the existing
> configuration, that is, the results from proposal.populate and from
> ceph.migrate.policy. As long as the policy.cfg points to the one profile
> you want, you can evolve your configuration at your own pace.
>
> I would suggest changing a yaml file and running through
>
> salt-run state.orch ceph.stage.2
> salt 'storage_minion*' pillar.get ceph
> salt 'storage_minion*' osd.report
>
> until you see what you want.  Then, run the migration
>
> salt-run state.orch ceph.migrate.osds
>
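
For anyone following along, a policy.cfg that pins a single storage
profile, as Eric describes, looks roughly like this (profile-default is
the stock name from proposal.populate; the exact paths vary per cluster):

    cluster-ceph/cluster/*.sls
    config/stack/default/global.yml
    config/stack/default/ceph/cluster.yml
    role-master/cluster/admin*.sls
    role-mon/cluster/mon*.sls
    profile-default/cluster/*.sls
    profile-default/stack/default/ceph/minions/*.yml
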
So I did the above and the migration began: I watched the OSD weights
drop to zero, the OSDs went down, and then they came back up again as
bluestore :) Five of them got migrated, but then I hit an error:

"Module function osd.redeploy threw an exception. Exception: Device
/dev/sda is not defined in pillar"

So I took a look at the osd*.yml files and noticed that the first disk
of every node was missing. I manually edited the files to add the
missing disk on each node (roughly like the sketch below) and reran the
steps above, but I'm still hitting the same error.
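
The entries I added mirror the existing ones; as far as I can tell the
migrated profile lists each disk under ceph:storage:osds with its
format, roughly like this (/dev/sda is the disk the error complains
about, the other device is just an example):

    ceph:
      storage:
        osds:
          /dev/sda:
            format: bluestore
          /dev/sdb:
            format: bluestore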

And now, after editing the files and running osd.report, it reports
"No OSD configured for" the disks I added manually.

Any ideas what I could try next?

Thanks again!

Thomas

