[Deepsea-users] remove osd

Bo Jin bo.jin at suse.com
Wed Jan 11 00:07:51 MST 2017


Hi Eric,
Thanks for your answer; see below.

On 01/10/2017 11:26 PM, Eric Jackson wrote:
> Hi Bo,
>
> On Tuesday, January 10, 2017 09:48:36 PM Bo Jin wrote:
>> Hi,
>> What is the correct policy if e.g.
>> I have 5 cluster nodes but I don't want node1 being used for osd
>> (storage role) but only use node1 for being master. How should I define
>> it in policy.cfg?
>>
>> node names convention: sesnode[12345]
>>
>> # Cluster assignment
>> cluster-ceph/cluster/*.sls
>> # Hardware Profile
>> profile-*-1/cluster/sesnode[2345]*.sls
>> profile-*-1/stack/default/ceph/minions/*yml
>> # Common configuration
>> config/stack/default/global.yml
>> config/stack/default/ceph/cluster.yml
>> # Role assignment
>> role-master/cluster/sesnode1.sls
>> role-admin/cluster/ses*.sls
>> role-mon/cluster/sesnode[234]*.sls
>> role-mon/stack/default/ceph/minions/sesnode[234]*.yml
>>
>
> The above is fine.  Rerun the stages 2-5 for the removal.  Quick question: is
> this with the SES product DeepSea 0.6.10?  Or are you using DeepSea master
> 0.7.1?  The 0.6.10 works fine for removals, but I still need to merge that
> particular fix back to master.
>
I'm using deepsea-0.6.10-1.3.noarch
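
Just to confirm I have the mechanics right: after editing policy.cfg I rerun 
the stages from the master in order, i.e.

salt-run state.orch ceph.stage.2
salt-run state.orch ceph.stage.3
salt-run state.orch ceph.stage.4
salt-run state.orch ceph.stage.5

and that should pick up the removal, correct?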

>> Or should I better define a line for storage role.
>>
>> role-storage/cluster/sesnode[2345].sls
>
> There's no direct storage role in the policy.cfg.  I wanted you to be able to
> use the "storage" role when issuing Salt commands, but also give you the
> flexibility to assign or customize different hardware profiles for groups of
> hardware.  For example, one rack might have media that I want to act as
> dedicated OSDs, while the other rack has separate journals.  Once assigned,
> both racks are storage, and doing anything on either does the "right" thing
> according to its profile.
>
> The role-storage directory under proposals probably doesn't help here.  I'll
> get that removed.
>
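Understood, that makes sense. So a policy.cfg covering two racks with 
different layouts would simply carry two profile blocks, something like this 
(profile and node names made up for illustration):

profile-dedicated-1/cluster/rack1node*.sls
profile-dedicated-1/stack/default/ceph/minions/rack1node*.yml
profile-journals-1/cluster/rack2node*.sls
profile-journals-1/stack/default/ceph/minions/rack2node*.yml
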
>>
>> And last question:
>> If I want to get rid of one existing osd node should I
>> 1. modify policy.cfg and re-run the stages? or
>> 2. just use the command salt "sesnode3*" state.sls
>> ceph.rescind.storage.terminate, but would running stage.3 again then
>> re-deploy the osd to sesnode3?
>
> The short answer for nearly every change to your cluster is
>
> 1) Modify your policy.cfg
> 2) Rerun Stages 2-5 (0-5 if adding new hardware or you really don't want to
> think about it.)
>
> If you know the subcommands and are aware of a couple of dependencies (e.g.
> you need Stage 2 to update/refresh the pillar), then you could run those Salt
> commands directly.
>
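So for dropping sesnode3 the direct route would be roughly (using the state 
you mention below, after the pillar has been refreshed):

salt-run state.orch ceph.stage.2
salt 'sesnode3*' state.apply ceph.rescind.storage
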
> Now, the good news: the subcommands are completely dependent on the Salt
> pillar as well.  If you notice, I went a little paranoid on adding Jinja
> conditionals to every rescind sls file.  I was fearful somebody might run
>
> salt '*' state.apply ceph.rescind.storage
>
So if I want to remove one particular OSD, e.g. osd.2 (as in ceph osd rm 
osd.2), how should I use salt to accomplish it?
Can I pass an argument, like salt "sesnode2*" state.sls ceph.remove.storage 
osd.2 ?
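
At the moment I would fall back to the usual manual sequence with plain ceph 
commands, roughly:

ceph osd out osd.2
systemctl stop ceph-osd@2     # on the node hosting osd.2
ceph osd crush remove osd.2
ceph auth del osd.2
ceph osd rm osd.2

but it would be nicer to drive this through DeepSea/Salt.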

I see the pillar, for instance, for 
/srv/pillar/ceph/proposals/profile-2Disk100GB-1/stack/default/ceph/minions/sesnode1.mydomain.sls
go.home.yml
storage:
   data+journals: []
   osds:
   - /dev/vdb
   - /dev/vdc

But why are both HDDs still listed here even though I excluded this node in 
my policy.cfg? After updating policy.cfg I re-ran stages 2 and 3.
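
To double-check the live data I can also query the minion's pillar directly 
with plain Salt:

salt 'sesnode1*' saltutil.refresh_pillar
salt 'sesnode1*' pillar.get storage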

> and effectively delete their storage.  Take a look at
> /srv/salt/ceph/rescind/storage/default.sls.  Notice the conditional after the
> storage.nop.  If that minion is still assigned the storage role, nothing is
> executed.
>
> So, you can do the higher level Stage orchestrations or apply the state files
> and DeepSea will carry out your intention.
>
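Got it. So the guard in rescind/storage/default.sls is essentially of this 
shape, if I read you correctly (my own sketch, not the actual file contents):

nop:
  test.nop

{% if 'storage' not in salt['pillar.get']('roles', []) %}
include:
  - .terminate
{% endif %}
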
>> ?
>> Thanks
>>
>>
>> _______________________________________________
>> Deepsea-users mailing list
>> Deepsea-users at lists.suse.com
>> http://lists.suse.com/mailman/listinfo/deepsea-users

-- 
Bo Jin
Sales Engineer
SUSE Linux
Mobile: +41792586688
bo.jin at suse.com
www.suse.com

