[Deepsea-users] SES4, What is the correct process to remove cephfs?
Boyd.Memmott at suse.com
Fri Jun 9 12:02:50 MDT 2017
Thanks for the insights. I will try it again when I get time...
Here is how I accomplished the task:
1- Stop mds services on the nodes running the service:
The following nodes will be running the mds service.
ssh to each of these nodes and stop the service:
systemctl stop ceph-mds.target
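With several MDS nodes, the ssh-and-stop step can be looped. A sketch only: the hostnames `mds1`/`mds2` are placeholders, and `run=echo` makes the script print each command instead of executing it (set `run=` to run for real):

```shell
# Dry-run sketch: stop ceph-mds.target on each MDS node over ssh.
# mds1/mds2 are placeholder hostnames, not from this thread.
run=echo
for node in mds1 mds2; do
    $run ssh "$node" systemctl stop ceph-mds.target
done
```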
2- List the name of the cephfs filesystems:
ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
Note: "data pools" is plural, so a file system may have more than one data pool.
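It does: additional data pools can be attached with `ceph fs add_data_pool`. A dry-run sketch (the pool name `cephfs_data2` and the PG count are hypothetical; `run=echo` only prints the commands):

```shell
# Dry-run sketch: attach a second data pool to the file system.
# cephfs_data2 and the PG count (64) are example values.
run=echo
$run ceph osd pool create cephfs_data2 64
$run ceph fs add_data_pool cephfs cephfs_data2
$run ceph fs ls   # "data pools" would then list both pools
```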
3- Remove the cephfs with the following command:
ceph fs rm <filesystem name> [--yes-i-really-mean-it]
4- Remove the corresponding pools:
ceph osd pool ls
ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it
ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
ceph osd pool ls
Now start mds services on all nodes:
systemctl start ceph-mds.target
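The whole procedure above can be collected into one script. This is a sketch, not an official tool: `run=echo` makes it a dry run that prints each command, and the names assume the cephfs / cephfs_data / cephfs_metadata layout from step 2:

```shell
#!/bin/sh
# Dry-run sketch of the removal procedure above.
# Set run= (empty) to actually execute; run this on the admin node,
# and stop/start the MDS service on the MDS nodes themselves.
run=echo

# 1. Stop the MDS daemons (on each MDS node).
$run systemctl stop ceph-mds.target

# 2-3. Remove the file system.
$run ceph fs rm cephfs --yes-i-really-mean-it

# 4. Remove the backing pools (pool name is given twice as a safety check).
$run ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it
$run ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it

# Restart the MDS daemons.
$run systemctl start ceph-mds.target
```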
SLES L2 Support Engineer
Email: boyd.memmott at suse.com
From: deepsea-users-bounces at lists.suse.com [mailto:deepsea-users-bounces at lists.suse.com] On Behalf Of Jan Fajerski
Sent: Thursday, June 01, 2017 10:31 AM
To: deepsea-users at lists.suse.com
Subject: Re: [Deepsea-users] SES4, What is the correct process to remove cephfs?
I came across this recently too and I'll add the functionality to DeepSea before
SES5 is released.
Until then you need to deactivate the MDS daemon(s) before removing the file system. The following steps should get you there:
salt '*' cmd.run 'systemctl stop ceph-mds.target'   # stop all mds daemons
# then on the master run:
ceph mds fail 0   # repeat this for every mds you have
# now you can remove the file system with
ceph fs rm cephfs --yes-i-really-mean-it
# start up your mds daemons again
salt '*' cmd.run 'systemctl start ceph-mds.target'
# and you're ready to recreate your CephFS
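The "repeat this for every mds" step could be scripted roughly like this. A dry-run sketch: `max_mds=3` is a hypothetical rank count (on a live cluster it can be read from `ceph mds stat` or `ceph fs get cephfs`), and `run=echo` prints the commands instead of executing them:

```shell
# Dry-run sketch: fail every active MDS rank before `ceph fs rm`.
run=echo
max_mds=3   # hypothetical number of MDS ranks
rank=0
while [ "$rank" -lt "$max_mds" ]; do
    $run ceph mds fail "$rank"
    rank=$((rank + 1))
done
```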
Hope that helps!
On Tue, May 30, 2017 at 05:51:56PM +0000, Boyd Memmott wrote:
> Hi All
> I am somewhat new to SES product. I have been experimenting with
> cephfs and would like to remove and add again. But do not find
> documentation on the process. I did install ceph with DeepSea. I
> commented out the role-mds in policy.cfg and ran “salt-run state.orch
> ceph.stage.2” and 3.
> Yet, “ceph fs rm cephfs --yes-i-really-mean-it” returns “Error EINVAL:
> all MDS daemons must be inactive before removing filesystem”
> Any suggestions would be appreciated.
> Thank you
> Boyd Memmott
> SLES L2 Support Engineer
> Email: boyd.memmott at suse.com
>Deepsea-users mailing list
>Deepsea-users at lists.suse.com
Engineer Enterprise Storage
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)