[Deepsea-users] DeepSea 0.7.4

Supriti Singh Supriti.Singh at suse.com
Mon Feb 20 01:53:26 MST 2017


Adding to Eric's comment: 

The documentation for nfs-ganesha is present at Deepsea wiki: https://github.com/SUSE/DeepSea/wiki/NFS-Ganesha 


------
Supriti Singh, SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
 



>>> Eric Jackson <ejackson at suse.com> 02/17/17 9:36 PM >>>
Hello all,
  DeepSea 0.7.4 has been released.  A few changes, but one significant milestone:

- Add missing import
- Add kernel.replace 
  Only applies to SUSE and systems running the minimal kernel-default-base,
  but supports other systems.  This also generalizes the update step to use 
  Salt primitives and not call zypper specifically.
- Add Ganesha

  Before the caveats, note that DeepSea 0.7.4 is tested against Ceph 11.1 
with Ganesha 2.5.  Ganesha 2.4 does not support certain features (e.g. with 
2.5, no admin keyring is required for the cephfs FSAL).

  The RPM is available from
https://build.opensuse.org/package/show/home:swiftgist/deepsea

-------------------------------------------------------------
The remainder of this announcement is for those wanting quick hints about 
Ganesha.  The documentation is in progress.


* What is Ganesha?
  Ganesha is an NFS frontend to a variety of backends.  For Ceph, two backends 
are supported: cephfs and rgw.

* Why would I use Ganesha?  What about CephFS?
  If your clients support CephFS directly, then use CephFS.  However, a Linux 
host that does not have CephFS support can still connect to the same filesystem 
using NFS.

  For Rados Gateway, several S3 clients exist, but some users are more 
comfortable and familiar with a filesystem interface.  With NFS, the buckets 
and contents are presented as directories with files.

* How does this work in DeepSea?
  Assign the ganesha role to the desired minions in your policy.cfg.  
Also, include either the cephfs (mds) assignment, the radosgw (rgw) 
assignment, or both in your policy.cfg.  For rgw, new installations should 
uncomment the example left in /srv/pillar/ceph/stack/ceph/cluster.yml.  
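  As a sketch, the relevant policy.cfg lines might look like the following 
(the hostname globs are placeholders; adapt them to your minions):

```
# policy.cfg fragment -- sketch, globs are examples
role-mds/cluster/mds*.sls
role-rgw/cluster/rgw*.sls
role-ganesha/cluster/ganesha*.sls
```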

  Run Stages 2-4 as you normally would.  From an NFS client, run

mount ganesha1:/cephfs /mnt1
mount ganesha1:/demo /mnt2

  where ganesha1 is the name of the ganesha host and demo is the rgw user.

* Caveats:
  The mounts should work immediately.  If you get an error, raise the debug 
level of your Ganesha server (e.g. edit /etc/sysconfig/ganesha, then 
systemctl restart nfs-ganesha).
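  The debug bump can look like this (the OPTIONS variable name and the 
log path are assumptions; check the sysconfig template shipped with your 
package):

```
# /etc/sysconfig/ganesha -- sketch; variable name and paths are assumptions
OPTIONS="-L /var/log/ganesha/ganesha.log -N NIV_DEBUG"
```

  followed by systemctl restart nfs-ganesha.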
  If the first write results in ENOMEM, check that the VM hosting your 
monitor has enough RAM.  If the first write hangs, check that 

default.rgw.buckets.index
default.rgw.buckets.data

  are created.  (In a slow enough VM, this may take a moment.)  These seem to 
be the most common initial obstacles.
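  A small helper can make that check explicit.  This is a hedged sketch: 
the pool names come from the announcement above, and on a live cluster you 
would pipe in the output of ceph osd pool ls.

```shell
# Check that the rgw bucket pools exist before the first write.
# check_pools reads pool names on stdin, one per line; on a live
# cluster, run:  ceph osd pool ls | check_pools
check_pools() {
  pools=$(cat)
  for p in default.rgw.buckets.index default.rgw.buckets.data; do
    printf '%s\n' "$pools" | grep -qx "$p" || {
      echo "missing pool: $p"
      return 1
    }
  done
  echo "rgw bucket pools present"
}
```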

  WARNING: While both cephfs and rgw mounts work, the rgw mounts do not show 
up under 'df', but they do appear in the output of 'mount'.  

  Custom ganesha roles are supported.  This still needs documentation, but 
is currently working.  (Rather than using the label ganesha, create any 
labels you need with customized configurations; cf. ganesha_configurations.)
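  As a sketch (the key placement and the role names here are assumptions, 
based only on the ganesha_configurations hint above):

```
# /srv/pillar/ceph/stack/ceph/cluster.yml fragment -- sketch,
# role names are examples
ganesha_configurations:
  - silver
  - gold
```

  policy.cfg would then assign role-silver and role-gold lines instead of 
role-ganesha.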

  Lastly, using tools such as s3 from the libs3-2 package to create buckets 
and place objects results in different permissions and ownership than 
using the NFS mount with normal filesystem operations (e.g. root vs. 
nobody).

Eric

  


