[Deepsea-users] remove osd
bo.jin at suse.com
Wed Jan 11 00:07:51 MST 2017
Thanks for your answer. See below.
On 01/10/2017 11:26 PM, Eric Jackson wrote:
> Hi Bo,
> On Tuesday, January 10, 2017 09:48:36 PM Bo Jin wrote:
>> What is the correct policy if, e.g., I have 5 cluster nodes but I don't want
>> node1 to be used for OSDs (storage role) and only want node1 to be the
>> master? How should I define that in policy.cfg?
>> Node naming convention: sesnode
>> # Cluster assignment
>> # Hardware Profile
>> # Common configuration
>> # Role assignment
> The above is fine. Rerun Stages 2-5 for the removal. Quick question: is this
> the SES product DeepSea 0.6.10, or are you using DeepSea master 0.7.1?
> 0.6.10 works fine for removals; I still need to merge that particular fix
> back to master.
I'm using deepsea-0.6.10-1.3.noarch
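For illustration, a policy.cfg that gives node1 only the master/admin roles and
assigns the hardware profile (and therefore storage) to the remaining nodes
might look roughly like this; the profile directory name is a placeholder, the
real one is generated by Stage 1 under /srv/pillar/ceph/proposals:

  # Cluster assignment
  cluster-ceph/cluster/sesnode*.sls
  # Hardware Profile
  profile-default/cluster/sesnode[2-5]*.sls
  profile-default/stack/default/ceph/minions/sesnode[2-5]*.yml
  # Common configuration
  config/stack/default/global.yml
  config/stack/default/ceph/cluster.yml
  # Role assignment
  role-master/cluster/sesnode1*.sls
  role-admin/cluster/sesnode1*.sls
  role-mon/cluster/sesnode[2-4]*.sls
  role-mon/stack/default/ceph/minions/sesnode[2-4]*.yml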
>> Or would it be better to define a separate line for the storage role?
> There's no direct storage role in the policy.cfg. I wanted you to be able to
> use the "storage" role when issuing Salt commands, but also to give you the
> flexibility to assign or customize different hardware profiles for groups of
> hardware: this rack has media that I want to use as dedicated OSDs, while the
> other rack has separate journals. Once assigned, both racks are storage, and
> doing anything on either does the "right" thing with respect to its profile.
> The role-storage directory under proposals probably doesn't help here. I'll
> get that removed.
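(As an aside, once a minion has the storage role in its pillar, it can be
targeted directly with Salt's pillar matching, for example something like:

  salt -I 'roles:storage' test.ping
  salt -I 'roles:storage' cmd.run 'lsblk'

which is what the "storage" role gives you even without a storage line in
policy.cfg.)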
>> And a last question:
>> If I want to get rid of one existing OSD node, should I
>> 1. modify policy.cfg and re-run the stages? or
>> 2. just use the command salt "sesnode3*" state.sls
>> ceph.rescind.storage.terminate? And would Stage 3, the next time I run it,
>> re-deploy the OSDs to sesnode3?
> The short answer for nearly every change to your cluster is
> 1) Modify your policy.cfg
> 2) Rerun Stages 2-5 (0-5 if you are adding new hardware or really don't want
> to think about it).
> If you know the subcommands and are aware of a couple of dependencies (e.g.
> you need Stage 2 to update/refresh the pillar), then you can run those Salt
> commands directly.
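In practice that means something like the following from the admin node after
editing policy.cfg (DeepSea's stage orchestrations are run via salt-run):

  salt-run state.orch ceph.stage.2   # refresh/update the pillar
  salt-run state.orch ceph.stage.3   # deploy according to the new pillar
  salt-run state.orch ceph.stage.4   # services, if any are configured
  salt-run state.orch ceph.stage.5   # remove roles no longer assigned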
> Now, the good news: the subcommands are completely dependent on the Salt
> pillar as well. If you look, you'll notice I went a little paranoid and added
> Jinja conditionals to every rescind sls file. I was afraid somebody might run
> salt '*' state.apply ceph.rescind.storage
So if I want to remove one particular OSD, e.g. osd.2 (ceph osd rm osd.2), how
should I use Salt to accomplish that?
Can I pass an argument to salt "sesnode2*" state.sls ceph.remove.storage?
I looked at the pillar for this node, and both hdds are still listed there.
Why are they still listed even though I excluded this node in my policy.cfg?
After updating policy.cfg I re-ran Stages 2 and 3.
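To double-check what a minion's pillar actually contains after a stage run, the
plain Salt pillar modules are enough, e.g.:

  salt '*' saltutil.refresh_pillar
  salt 'sesnode2*' pillar.items

The storage/OSD entries there come from the profile yml files selected in
policy.cfg.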
> and effectively delete their storage. Take a look at
> /srv/salt/ceph/rescind/storage/default.sls. Notice the conditional after the
> storage.nop: if that minion is still assigned the storage role, nothing is
> removed. So, you can do the higher level Stage orchestrations or apply the
> state files directly, and DeepSea will carry out your intention.
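The guard Eric describes follows roughly this pattern (a sketch of the idea,
not the literal contents of default.sls): the removal states only render when
the minion no longer carries the storage role in its pillar.

  include:
    - .nop

  {% if 'storage' not in salt['pillar.get']('roles', []) %}
  stop ceph osd services:
    cmd.run:
      - name: 'systemctl stop ceph-osd.target'
  {% endif %}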