* who is using nfs-ganesha and cephfs?
@ 2017-11-08 21:41 Sage Weil
  2017-11-08 21:52 ` Wyllys Ingersoll
                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Sage Weil @ 2017-11-08 21:41 UTC (permalink / raw)
  To: ceph-users, ceph-devel

Who is running nfs-ganesha's FSAL to export CephFS?  What has your 
experience been?

(We are working on building proper testing and support for this into 
Mimic, but the ganesha FSAL has been around for years.)

Thanks!
sage



* Re: who is using nfs-ganesha and cephfs?
  2017-11-08 21:41 who is using nfs-ganesha and cephfs? Sage Weil
@ 2017-11-08 21:52 ` Wyllys Ingersoll
  2017-11-08 21:53 ` [ceph-users] " Marc Roos
       [not found] ` <alpine.DEB.2.11.1711082140180.29217-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
  2 siblings, 0 replies; 14+ messages in thread
From: Wyllys Ingersoll @ 2017-11-08 21:52 UTC (permalink / raw)
  To: Sage Weil; +Cc: Ceph-User, Ceph Development

We use Ganesha (currently version 2.5.2) for exporting cephfs
subdirectories over NFS, but we have been using the VFS FSAL as
opposed to the ceph one, due to problems encountered in the past. I'm
not sure whether those past issues have been resolved. It would be
good to know with some confidence that the ceph FSAL is production
ready, so proper testing would be great.
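
For reference, the two setups amount to export blocks roughly like the
following (a sketch only; export IDs, paths and pseudo paths are placeholders,
and exact option names can differ between Ganesha versions):

# Hypothetical sketch: exporting the same CephFS subdirectory either through
# the VFS FSAL (over a kernel CephFS mount) or the CEPH FSAL (libcephfs).
EXPORT {
    Export_Id = 20;
    # kernel-mounted CephFS path on the NFS server
    Path = /mnt/cephfs/projects;
    Pseudo = /projects-vfs;
    Access_Type = RW;
    FSAL { Name = VFS; }
}

EXPORT {
    Export_Id = 21;
    # path inside CephFS itself
    Path = /projects;
    Pseudo = /projects-ceph;
    Access_Type = RW;
    FSAL {
        Name = CEPH;
        # User_Id = ganesha;  (cephx user, if not using the default)
    }
}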

Wyllys Ingersoll
Keeper Technology, LLC

On Wed, Nov 8, 2017 at 4:41 PM, Sage Weil <sweil@redhat.com> wrote:
> Who is running nfs-ganesha's FSAL to export CephFS?  What has your
> experience been?
>
> (We are working on building proper testing and support for this into
> Mimic, but the ganesha FSAL has been around for years.)
>
> Thanks!
> sage
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: who is using nfs-ganesha and cephfs?
       [not found] ` <alpine.DEB.2.11.1711082140180.29217-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
@ 2017-11-08 21:53   ` Marc Roos
  2017-11-08 22:42   ` Lincoln Bryant
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 14+ messages in thread
From: Marc Roos @ 2017-11-08 21:53 UTC (permalink / raw)
  To: ceph-devel, ceph-users, sweil

 

I'm running it in a test environment on CentOS 7, on a Luminous OSD node, 
with binaries from 
download.ceph.com::ceph/nfs-ganesha/rpm-V2.5-stable/luminous/x86_64/

and I'm seeing these segfaults:
Nov  6 17:41:34 c01 kernel: ganesha.nfsd[31113]: segfault at 0 ip 
00007fa80a151a43 sp 00007fa755ffa2f0 error 4 in 
libdbus-1.so.3.7.4[7fa80a12b000+46000]
Nov  6 17:41:34 c01 kernel: ganesha.nfsd[31113]: segfault at 0 ip 
00007fa80a151a43 sp 00007fa755ffa2f0 error 4 in 
libdbus-1.so.3.7.4[7fa80a12b000+46000]
Nov  6 17:42:16 c01 kernel: ganesha.nfsd[6839]: segfault at 8 ip 
00007fc97a5d3f98 sp 00007fc8c6ffc2f8 error 6 in 
libdbus-1.so.3.7.4[7fc97a5ac000+46000]
Nov  6 17:42:16 c01 kernel: ganesha.nfsd[6839]: segfault at 8 ip 
00007fc97a5d3f98 sp 00007fc8c6ffc2f8 error 6 in 
libdbus-1.so.3.7.4[7fc97a5ac000+46000]
Nov  6 17:47:47 c01 kernel: ganesha.nfsd[7662]: segfault at 4 ip 
00007f15e2afc060 sp 00007f152effc388 error 6 in 
libdbus-1.so.3.7.4[7f15e2ad6000+46000]
Nov  6 17:47:47 c01 kernel: ganesha.nfsd[7662]: segfault at 4 ip 
00007f15e2afc060 sp 00007f152effc388 error 6 in 
libdbus-1.so.3.7.4[7f15e2ad6000+46000]
Nov  6 17:52:25 c01 kernel: ganesha.nfsd[14415]: segfault at 88 ip 
00007f9258eed453 sp 00007f91a9ff2348 error 4 in 
libdbus-1.so.3.7.4[7f9258eda000+46000]
Nov  6 17:52:25 c01 kernel: ganesha.nfsd[14415]: segfault at 88 ip 
00007f9258eed453 sp 00007f91a9ff2348 error 4 in 
libdbus-1.so.3.7.4[7f9258eda000+46000]


I reported this here:
https://github.com/nfs-ganesha/nfs-ganesha/issues/215



-----Original Message-----
From: Sage Weil [mailto:sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org] 
Sent: Wednesday, 8 November 2017 22:42
To: ceph-users-Qp0mS5GaXlQ@public.gmane.org; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Subject: [ceph-users] who is using nfs-ganesha and cephfs?

Who is running nfs-ganesha's FSAL to export CephFS?  What has your 
experience been?

(We are working on building proper testing and support for this into 
Mimic, but the ganesha FSAL has been around for years.)

Thanks!
sage

_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


* Re: who is using nfs-ganesha and cephfs?
       [not found] ` <alpine.DEB.2.11.1711082140180.29217-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
  2017-11-08 21:53   ` Marc Roos
@ 2017-11-08 22:42   ` Lincoln Bryant
  2017-11-09  7:10   ` Wido den Hollander
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 14+ messages in thread
From: Lincoln Bryant @ 2017-11-08 22:42 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users-Qp0mS5GaXlQ

Hi Sage,

We have been running the Ganesha FSAL for a while (as far back as Hammer / Ganesha 2.2.0), primarily for uid/gid squashing.

Things are basically OK for our application, but we've seen the following weirdness*:
	- Sometimes there are duplicate entries when directories are listed: the same filename, with the same inode, shows up twice in 'ls'.
	- There can be considerable latency between new files being added to CephFS and those files becoming visible on our NFS clients. I understand this might be related to dentry caching. 
	- Occasionally, the Ganesha FSAL seems to max out at 100,000 caps claimed, which don't get released until the MDS is restarted.

*note: these issues are with Ganesha 2.2.0 and Hammer/Jewel, and have perhaps since been fixed upstream. 

(We've recently updated to Luminous / Ganesha 2.5.2, and will be happy to complain if any issues show up :))

Cheers,
Lincoln

> On Nov 8, 2017, at 3:41 PM, Sage Weil <sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> 
> Who is running nfs-ganesha's FSAL to export CephFS?  What has your 
> experience been?
> 
> (We are working on building proper testing and support for this into 
> Mimic, but the ganesha FSAL has been around for years.)
> 
> Thanks!
> sage
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


* Re: who is using nfs-ganesha and cephfs?
       [not found] ` <alpine.DEB.2.11.1711082140180.29217-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
  2017-11-08 21:53   ` Marc Roos
  2017-11-08 22:42   ` Lincoln Bryant
@ 2017-11-09  7:10   ` Wido den Hollander
  2017-11-09 10:04   ` Lars Marowsky-Bree
  2017-11-16  8:17   ` Rafael Lopez
  4 siblings, 0 replies; 14+ messages in thread
From: Wido den Hollander @ 2017-11-09  7:10 UTC (permalink / raw)
  To: ceph-users-Qp0mS5GaXlQ, Sage Weil, ceph-devel-u79uwXL29TY76Z2rM5mHXA


> On 8 November 2017 at 22:41, Sage Weil <sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> 
> 
> Who is running nfs-ganesha's FSAL to export CephFS?  What has your 
> experience been?
> 

A customer of mine is doing this. They are running Ubuntu, and my experience is that just getting Ganesha compiled can already be a pain.

Once it runs, it runs just fine. I don't hear many complaints from their side about Ganesha or NFS not working.

Wido

> (We are working on building proper testing and support for this into 
> Mimic, but the ganesha FSAL has been around for years.)
> 
> Thanks!
> sage
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


* Re: who is using nfs-ganesha and cephfs?
       [not found] ` <alpine.DEB.2.11.1711082140180.29217-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
                     ` (2 preceding siblings ...)
  2017-11-09  7:10   ` Wido den Hollander
@ 2017-11-09 10:04   ` Lars Marowsky-Bree
       [not found]     ` <20171109100440.yut3qjvejxwd7oz3-IBi9RG/b67k@public.gmane.org>
  2017-11-16  8:17   ` Rafael Lopez
  4 siblings, 1 reply; 14+ messages in thread
From: Lars Marowsky-Bree @ 2017-11-09 10:04 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users-Qp0mS5GaXlQ

On 2017-11-08T21:41:41, Sage Weil <sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:

> Who is running nfs-ganesha's FSAL to export CephFS?  What has your 
> experience been?
> 
> (We are working on building proper testing and support for this into 
> Mimic, but the ganesha FSAL has been around for years.)

We use it currently, and it works, but let's not discuss the performance
;-)

How else do you want to build this into Mimic?

Regards,
    Lars

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde


* Re: who is using nfs-ganesha and cephfs?
       [not found]     ` <20171109100440.yut3qjvejxwd7oz3-IBi9RG/b67k@public.gmane.org>
@ 2017-11-09 11:15       ` Supriti Singh
  2017-11-09 13:21         ` Supriti Singh
  0 siblings, 1 reply; 14+ messages in thread
From: Supriti Singh @ 2017-11-09 11:15 UTC (permalink / raw)
  To: sweil-H+wXaHxf7aLQT0dZR+AlfA, Lars Marowsky-Bree
  Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users-Qp0mS5GaXlQ


[-- Attachment #1.1: Type: text/plain, Size: 2379 bytes --]

Hi Sage,

As Lars mentioned, at SUSE we use ganesha 2.5.2/luminous. We did a preliminary performance comparison of the cephfs client
and the nfs-ganesha client. I have attached the results; they are aggregate bandwidth over 10 clients.

1. Test Setup:
We use fio to read/write to a single 5GB file per thread for 300 seconds. A single job (represented on the x-axis) is of
the form {number_of_worker_threads}rw_{block_size}_{op} (a rough sketch of such a job file follows below), where:
number_of_worker_threads: 1, 4, 8, 16
Block size: 4K, 64K, 1M, 4M, 8M
op: rw
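
A job file along these lines would reproduce one such data point (this is a
sketch reconstructed from the description above, not the exact file we used;
the mount path, ioengine and direct settings are assumptions):

# Hypothetical fio job for one {threads}rw_{bs}_{op} data point,
# here 4 worker threads with a 64K block size, mixed read/write.
# directory is the CephFS kernel mount, or the NFS mount of the export.
[global]
directory=/mnt/test
# single 5GB file per thread, run for 300 seconds
size=5g
runtime=300
time_based=1
ioengine=libaio
direct=1
group_reporting=1

[4rw_64k_rw]
rw=rw
bs=64k
numjobs=4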

 
2. NFS-Ganesha configuration:
Parameters set (other than default), roughly as in the config fragment below:
1. Graceless = True
2. MaxRPCSendBufferSize/MaxRPCRecvBufferSize set to the maximum value.
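
In ganesha.conf terms this amounts to something like the following (a sketch;
which block each option belongs in, and the actual maximum buffer value, are
from memory and may differ between 2.5 releases):

NFS_CORE_PARAM {
    # the actual maximum buffer value is version-dependent
    MaxRPCSendBufferSize = 1048576;
    MaxRPCRecvBufferSize = 1048576;
    # 2.5 default mentioned under observations:
    # Dispatch_Max_Reqs_Xprt = 512;
}

NFSV4 {
    Graceless = true;
}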

3. Observations:
- For a single thread (on each client) and 4k block size, the bandwidth is around 45% of cephfs.
- As the number of threads increases, performance drops. It could be related to the nfs-ganesha parameter
"Dispatch_Max_Reqs_Xprt", which defaults to 512. Note that this parameter is relevant only for v2.5.
- We ran with nfs-ganesha mdcache both enabled and disabled, but there was no significant improvement with caching.
Not sure, but it could be related to this issue: https://github.com/nfs-ganesha/nfs-ganesha/issues/223

The results are still preliminary, and with proper tuning of nfs-ganesha parameters they could probably be better.

Thanks,
Supriti 

------
Supriti Singh SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
 



>>> Lars Marowsky-Bree <lmb-IBi9RG/b67k@public.gmane.org> 11/09/17 11:07 AM >>>
On 2017-11-08T21:41:41, Sage Weil <sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:

> Who is running nfs-ganesha's FSAL to export CephFS?  What has your 
> experience been?
> 
> (We are working on building proper testing and support for this into 
> Mimic, but the ganesha FSAL has been around for years.)

We use it currently, and it works, but let's not discuss the performance
;-)

How else do you want to build this into Mimic?

Regards,
    Lars

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html




[-- Attachment #1.2: HTML --]
[-- Type: text/html, Size: 3384 bytes --]

[-- Attachment #2: NFS_Ganesha_vs_CephFS.ods --]
[-- Type: application/octet-stream, Size: 25399 bytes --]

[-- Attachment #3: Type: text/plain, Size: 178 bytes --]

_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


* Re: who is using nfs-ganesha and cephfs?
  2017-11-09 11:15       ` Supriti Singh
@ 2017-11-09 13:21         ` Supriti Singh
  2017-11-09 14:28           ` [ceph-users] " Jeff Layton
  0 siblings, 1 reply; 14+ messages in thread
From: Supriti Singh @ 2017-11-09 13:21 UTC (permalink / raw)
  To: sweil, Lars Marowsky-Bree; +Cc: ceph-users, ceph-devel

[-- Attachment #1: Type: text/plain, Size: 3128 bytes --]

The email was not delivered to ceph-devel@vger.kernel.org, so I am re-sending it. 

A few more details regarding the hardware and clients used in our benchmarking setup:
- The cephfs benchmarks were done using the kernel cephfs client. 
- NFS-Ganesha exports were mounted using NFS version 4. 
- A single nfs-ganesha server was used. 

Ceph and client setup:
- Each client node has 16 cores and 16 GB RAM.
- The MDS and the Ganesha server run on the same node. 
- Network interconnect between the client and ceph nodes is 40Gbit/s. 
- Ceph on 8 nodes (each node has 24 cores / 128 GB RAM): 
  - 5 OSD nodes
  - 3 MON/MDS nodes
  - 6 OSD daemons per node, Bluestore, SSD/NVMe journal


------
Supriti Singh SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
 



>>> Supriti Singh 11/09/17 12:15 PM >>>
Hi Sage,

As Lars mentioned, at SUSE, we use ganesha 2.5.2/luminous. We did a preliminary performance comparison of cephfs client
and nfs-ganesha client. I have attached the results. The results are aggregate bandwidth over 10 clients.

1. Test Setup:
We use fio to read/write to a single 5GB file per thread for 300 seconds. A single job (represented in x-axis) is of
type {number_of_worker_thread}rw_{block_size}_{op}, where, 
number_of_worker_threads: 1, 4, 8, 16
Block size: 4K,64K,1M,4M,8M
op: rw 

 
2. NFS-Ganesha configuration:
Parameters set (other than default):
1. Graceless = True
2. MaxRPCSendBufferSize/MaxRPCRecvBufferSize is set to max value.

3. Observations:
-  For single thread (on each client) and 4k block size, the b/w is around 45% of cephfs 
- As number of threads increases, the performance drops. It could be related to nfs-ganesha parameter
"Dispatch_Max_Reqs_Xprt", which defaults to 512. Note, this parameter is important only for v2.5. 
- We did run with both nfs-ganesha mdcache enabled/disabled. But there were no significant improvements with caching.
Not sure but it could be related to this issue: https://github.com/nfs-ganesha/nfs-ganesha/issues/223
  
The results are still preliminary, and I guess with proper tuning of nfs-ganesha parameters, it could be better.

Thanks,
Supriti 

------
Supriti Singh SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
 



>>> Lars Marowsky-Bree <lmb@suse.com> 11/09/17 11:07 AM >>>
On 2017-11-08T21:41:41, Sage Weil <sweil@redhat.com> wrote:

> Who is running nfs-ganesha's FSAL to export CephFS?  What has your 
> experience been?
> 
> (We are working on building proper testing and support for this into 
> Mimic, but the ganesha FSAL has been around for years.)

We use it currently, and it works, but let's not discuss the performance
;-)

How else do you want to build this into Mimic?

Regards,
    Lars

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html






[-- Attachment #2: NFS_Ganesha_vs_CephFS.ods --]
[-- Type: application/octet-stream, Size: 25399 bytes --]


* Re: [ceph-users] who is using nfs-ganesha and cephfs?
  2017-11-09 13:21         ` Supriti Singh
@ 2017-11-09 14:28           ` Jeff Layton
  0 siblings, 0 replies; 14+ messages in thread
From: Jeff Layton @ 2017-11-09 14:28 UTC (permalink / raw)
  To: Supriti Singh, sweil, Lars Marowsky-Bree; +Cc: ceph-devel, ceph-users

Ouch... yeah, the rotten performance is sad but not really surprising.

We add a lot of extra hops and data copies by going through ganesha.
Ganesha also uses the userland client libs, and those are organized
around the BCCL (Big Ceph Client Lock).

I think the only way we'll get decent performance over the long haul is
to get ganesha out of the data path. A flexfiles pNFS layout is something
of a natural fit on top of cephfs, and I imagine that would get us a lot
closer to the cephfs read/write numbers.

-- Jeff

On Thu, 2017-11-09 at 13:21 +0000, Supriti Singh wrote:
> The email was not delivered to ceph-devel@vger.kernel.org. So, re-sending it. 
> 
> Few more things regarding the hardware and clients used in our benchmarking setup:
> - The cephfs benchmark were done using kernel cephfs client. 
> - NFS-Ganesha was mounted using nfs version 4. 
> - Single nfs-ganesha server was used. 
> 
> Ceph and Client setup:
> - Each client node has 16 cores and 16 GB RAM.
> - MDS and Ganesha server is running on the same node. 
> - Network interconnect between client and ceph nodes is 40Gbit/s. 
> - Ceph on 8 nodes: (each node has 24 cores/128 GB RAM). 
>   - 5 OSD nodes
>   - 3 MON/MDS nodes
>   - 6 OSD daemons per node - Blustore - SSD/NVME journal 
> 
> 
> ------
> Supriti Singh SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
> HRB 21284 (AG Nürnberg)
>  
> 
> 
> 
> > > > Supriti Singh 11/09/17 12:15 PM >>>
> 
> Hi Sage,
> 
> As Lars mentioned, at SUSE, we use ganesha 2.5.2/luminous. We did a preliminary performance comparison of cephfs client
> and nfs-ganesha client. I have attached the results. The results are aggregate bandwidth over 10 clients.
> 
> 1. Test Setup:
> We use fio to read/write to a single 5GB file per thread for 300 seconds. A single job (represented in x-axis) is of
> type {number_of_worker_thread}rw_{block_size}_{op}, where, 
> number_of_worker_threads: 1, 4, 8, 16
> Block size: 4K,64K,1M,4M,8M
> op: rw 
> 
>  
> 2. NFS-Ganesha configuration:
> Parameters set (other than default):
> 1. Graceless = True
> 2. MaxRPCSendBufferSize/MaxRPCRecvBufferSize is set to max value.
> 
> 3. Observations:
> -  For single thread (on each client) and 4k block size, the b/w is around 45% of cephfs 
> - As number of threads increases, the performance drops. It could be related to nfs-ganesha parameter
> "Dispatch_Max_Reqs_Xprt", which defaults to 512. Note, this parameter is important only for v2.5. 
> - We did run with both nfs-ganesha mdcache enabled/disabled. But there were no significant improvements with caching.
> Not sure but it could be related to this issue: https://github.com/nfs-ganesha/nfs-ganesha/issues/223
>   
> The results are still preliminary, and I guess with proper tuning of nfs-ganesha parameters, it could be better.
> 
> Thanks,
> Supriti 
> 
> ------
> Supriti Singh SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
> HRB 21284 (AG Nürnberg)
>  
> 
> 
> 
> > > > Lars Marowsky-Bree <lmb@suse.com> 11/09/17 11:07 AM >>>
> 
> On 2017-11-08T21:41:41, Sage Weil <sweil@redhat.com> wrote:
> 
> > Who is running nfs-ganesha's FSAL to export CephFS?  What has your 
> > experience been?
> > 
> > (We are working on building proper testing and support for this into 
> > Mimic, but the ganesha FSAL has been around for years.)
> 
> We use it currently, and it works, but let's not discuss the performance
> ;-)
> 
> How else do you want to build this into Mimic?
> 
> Regards,
>     Lars
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 
Jeff Layton <jlayton@redhat.com>


* Re: who is using nfs-ganesha and cephfs?
       [not found] ` <alpine.DEB.2.11.1711082140180.29217-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
                     ` (3 preceding siblings ...)
  2017-11-09 10:04   ` Lars Marowsky-Bree
@ 2017-11-16  8:17   ` Rafael Lopez
  2017-11-16 16:29     ` [ceph-users] " Matt Benjamin
  4 siblings, 1 reply; 14+ messages in thread
From: Rafael Lopez @ 2017-11-16  8:17 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users-Qp0mS5GaXlQ


[-- Attachment #1.1: Type: text/plain, Size: 3526 bytes --]

We are running RHCS2.3 (jewel) with ganesha 2.4.2 and the cephfs FSAL, compiled
from the SRPM. We are experimenting with CTDB for controlling ganesha HA, since
we run Samba on the same servers.

We haven't done much functionality/stress testing, but at face value basic
file operations seem to work well.

In terms of performance, the last time I tested ganesha it seemed comparable to
ceph-fuse (on RHCS2.x/jewel; I think the luminous ceph-fuse is better), though I
haven't done rigorous metadata tests or multiple-client tests. Also, our
ganesha servers are quite small (e.g. 4 GB RAM, 1 core), as we are thus far only
serving cephfs natively. Here are some fio results:

The jobs, in order, are:
1. async 1M
2. sync 1M
3. async 4k
4. sync 4k
5. seq read 1M
6. rand read 4k

Ceph cluster is RHCS 2.3 (10.2.7)

CEPH-FUSE (10.2.x)
  WRITE: io=143652MB, aggrb=490328KB/s, minb=490328KB/s, maxb=490328KB/s,
mint=300002msec, maxt=300002msec
  WRITE: io=14341MB, aggrb=48947KB/s, minb=48947KB/s, maxb=48947KB/s,
mint=300018msec, maxt=300018msec
  WRITE: io=9808.2MB, aggrb=33478KB/s, minb=33478KB/s, maxb=33478KB/s,
mint=300001msec, maxt=300001msec
  WRITE: io=424476KB, aggrb=1414KB/s, minb=1414KB/s, maxb=1414KB/s,
mint=300003msec, maxt=300003ms
   READ: io=158069MB, aggrb=539527KB/s, minb=539527KB/s, maxb=539527KB/s,
mint=300008msec, maxt=300008msec
   READ: io=1881.2MB, aggrb=6420KB/s, minb=6420KB/s, maxb=6420KB/s,
mint=300001msec, maxt=300001msec

ganesha (nfs3)
  WRITE: io=157891MB, aggrb=538923KB/s, minb=538923KB/s, maxb=538923KB/s,
mint=300006msec, maxt=300006msec
  WRITE: io=38700MB, aggrb=132093KB/s, minb=132093KB/s, maxb=132093KB/s,
mint=300006msec, maxt=300006msec
  WRITE: io=3072.0MB, aggrb=10148KB/s, minb=10148KB/s, maxb=10148KB/s,
mint=309957msec, maxt=309957msec
  WRITE: io=397516KB, aggrb=1325KB/s, minb=1325KB/s, maxb=1325KB/s,
mint=300001msec, maxt=300001msec
   READ: io=82521MB, aggrb=281669KB/s, minb=281669KB/s, maxb=281669KB/s,
mint=300002msec, maxt=300002msec
   READ: io=1322.2MB, aggrb=4513KB/s, minb=4513KB/s, maxb=4513KB/s,
mint=300001msec, maxt=300001msec

cephfs kernel client
  WRITE: io=471041MB, aggrb=1568.8MB/s, minb=1568.8MB/s, maxb=1568.8MB/s,
mint=300394msec, maxt=300394msec
  WRITE: io=50005MB, aggrb=170680KB/s, minb=170680KB/s, maxb=170680KB/s,
mint=300006msec, maxt=300006msec
  WRITE: io=169092MB, aggrb=577166KB/s, minb=577166KB/s, maxb=577166KB/s,
mint=300000msec, maxt=300000msec
  WRITE: io=530548KB, aggrb=1768KB/s, minb=1768KB/s, maxb=1768KB/s,
mint=300003msec, maxt=300003msec
   READ: io=121501MB, aggrb=414720KB/s, minb=414720KB/s, maxb=414720KB/s,
mint=300002msec, maxt=300002msec
   READ: io=3264.6MB, aggrb=11142KB/s, minb=11142KB/s, maxb=11142KB/s,
mint=300001msec, maxt=300001msec

happy to share fio job file if anyone wants it.
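
For reference, a rough reconstruction of those six jobs would look something
like the sketch below. This is not the actual job file (sizes, the mount path,
and the ioengine/iodepth choices are placeholders):

# Hypothetical fio jobs approximating the six cases listed above; the
# stonewall option makes them run one after another.
[global]
directory=/mnt/export
size=20g
runtime=300
time_based=1
stonewall

[async_write_1m]
rw=write
bs=1m
ioengine=libaio
iodepth=16
direct=1

[sync_write_1m]
rw=write
bs=1m
ioengine=sync
fsync=1

# The async/sync 4k writes, 1M sequential reads and 4k random reads follow
# the same pattern with bs=4k, rw=read and rw=randread respectively.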


On 9 November 2017 at 08:41, Sage Weil <sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:

> Who is running nfs-ganesha's FSAL to export CephFS?  What has your
> experience been?
>
> (We are working on building proper testing and support for this into
> Mimic, but the ganesha FSAL has been around for years.)
>
> Thanks!
> sage
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>



-- 
*Rafael Lopez*
Research Devops Engineer
Monash University eResearch Centre

T: +61 3 9905 9118
M: +61 (0)427682670
E: rafael.lopez-sFFfwlTeHG43uPMLIKxrzw@public.gmane.org

[-- Attachment #1.2: Type: text/html, Size: 5935 bytes --]

[-- Attachment #2: Type: text/plain, Size: 178 bytes --]

_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


* Re: [ceph-users] who is using nfs-ganesha and cephfs?
  2017-11-16  8:17   ` Rafael Lopez
@ 2017-11-16 16:29     ` Matt Benjamin
  2017-11-17 13:27       ` Jeff Layton
  0 siblings, 1 reply; 14+ messages in thread
From: Matt Benjamin @ 2017-11-16 16:29 UTC (permalink / raw)
  To: Rafael Lopez; +Cc: Sage Weil, Ceph Development, ceph-users

Hi Rafael,

Thanks for taking the time to report your results.

The similarity to Ceph fuse performance is to be expected, because
both Ceph fuse and the nfs-ganesha FSAL driver use libcephfs, as Jeff
Layton noted.  It's worth noting that nfs-ganesha does not appear to
be adding i/o or metadata operation latency.

The interesting questions, pushing further on Jeff's point, I think are

1. libcephfs vs kernel cephfs performance delta, and in particular
2. the portion of that delta NOT accounted for by the direct OSD data
path available to the kernel mode ceph client--the latter can
eventually be made available to nfs-ganesha via pNFS as Jeff hinted,
but the former is potentially available for performance improvement

The topic of the big client lock is an old one.  I experimented with
removing it in 2014, in the api-concurrent branch at
git@github.com:linuxbox2/linuxbox-ceph.git.  I'm not confident that
just removing the client lock bottleneck will bring visible
improvements on its own, especially until MDS concurrency improvements
are in place, but it may be worth revisiting.

Matt

On Thu, Nov 16, 2017 at 3:17 AM, Rafael Lopez <rafael.lopez@monash.edu> wrote:
> We are running RHCS2.3 (jewel) with ganesha 2.4.2 and cephfs fsal, compiled
> from srpm. experimenting with CTDB for controlling ganesha HA since we run
> samba on same servers.
>
> Haven't done much functionality/stress testing but on face value basic stuff
> seems to work well (file operations).
>
> In terms of performance, last time I tested ganesha it seemed comparable to
> ceph-fuse (RHCS2.x/jewel, i think luminous ceph-fuse is better). Though I
> haven't done rigorous metadata tests or multiple client tests. Also our
> ganesha servers are quite small, as we are thus far only serving cephfs
> natively. eg 4G ram 1 core. Here are some FIO results:
>
> jobs in order are:
> 1. async 1M
> 2. sync 1M
> 3. async 4k
> 4. sync 4k
> 5. seq read 1M
> 6. rand read 4k
>
> Ceph cluster is RHCS 2.3 (10.2.7)
>
> CEPH-FUSE (10.2.x)
>   WRITE: io=143652MB, aggrb=490328KB/s, minb=490328KB/s, maxb=490328KB/s,
> mint=300002msec, maxt=300002msec
>   WRITE: io=14341MB, aggrb=48947KB/s, minb=48947KB/s, maxb=48947KB/s,
> mint=300018msec, maxt=300018msec
>   WRITE: io=9808.2MB, aggrb=33478KB/s, minb=33478KB/s, maxb=33478KB/s,
> mint=300001msec, maxt=300001msec
>   WRITE: io=424476KB, aggrb=1414KB/s, minb=1414KB/s, maxb=1414KB/s,
> mint=300003msec, maxt=300003ms
>    READ: io=158069MB, aggrb=539527KB/s, minb=539527KB/s, maxb=539527KB/s,
> mint=300008msec, maxt=300008msec
>    READ: io=1881.2MB, aggrb=6420KB/s, minb=6420KB/s, maxb=6420KB/s,
> mint=300001msec, maxt=300001msec
>
> ganesha (nfs3)
>   WRITE: io=157891MB, aggrb=538923KB/s, minb=538923KB/s, maxb=538923KB/s,
> mint=300006msec, maxt=300006msec
>   WRITE: io=38700MB, aggrb=132093KB/s, minb=132093KB/s, maxb=132093KB/s,
> mint=300006msec, maxt=300006msec
>   WRITE: io=3072.0MB, aggrb=10148KB/s, minb=10148KB/s, maxb=10148KB/s,
> mint=309957msec, maxt=309957msec
>   WRITE: io=397516KB, aggrb=1325KB/s, minb=1325KB/s, maxb=1325KB/s,
> mint=300001msec, maxt=300001msec
>    READ: io=82521MB, aggrb=281669KB/s, minb=281669KB/s, maxb=281669KB/s,
> mint=300002msec, maxt=300002msec
>    READ: io=1322.2MB, aggrb=4513KB/s, minb=4513KB/s, maxb=4513KB/s,
> mint=300001msec, maxt=300001msec
>
> cephfs kernel client
>   WRITE: io=471041MB, aggrb=1568.8MB/s, minb=1568.8MB/s, maxb=1568.8MB/s,
> mint=300394msec, maxt=300394msec
>   WRITE: io=50005MB, aggrb=170680KB/s, minb=170680KB/s, maxb=170680KB/s,
> mint=300006msec, maxt=300006msec
>   WRITE: io=169092MB, aggrb=577166KB/s, minb=577166KB/s, maxb=577166KB/s,
> mint=300000msec, maxt=300000msec
>   WRITE: io=530548KB, aggrb=1768KB/s, minb=1768KB/s, maxb=1768KB/s,
> mint=300003msec, maxt=300003msec
>    READ: io=121501MB, aggrb=414720KB/s, minb=414720KB/s, maxb=414720KB/s,
> mint=300002msec, maxt=300002msec
>    READ: io=3264.6MB, aggrb=11142KB/s, minb=11142KB/s, maxb=11142KB/s,
> mint=300001msec, maxt=300001msec
>
> happy to share fio job file if anyone wants it.
>
>
> On 9 November 2017 at 08:41, Sage Weil <sweil@redhat.com> wrote:
>>
>> Who is running nfs-ganesha's FSAL to export CephFS?  What has your
>> experience been?
>>
>> (We are working on building proper testing and support for this into
>> Mimic, but the ganesha FSAL has been around for years.)
>>
>> Thanks!
>> sage
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
>
>
>
> --
> Rafael Lopez
> Research Devops Engineer
> Monash University eResearch Centre
>
> T: +61 3 9905 9118
> M: +61 (0)427682670
> E: rafael.lopez@monash.edu
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309


* Re: [ceph-users] who is using nfs-ganesha and cephfs?
  2017-11-16 16:29     ` [ceph-users] " Matt Benjamin
@ 2017-11-17 13:27       ` Jeff Layton
  2017-11-17 14:09         ` Wyllys Ingersoll
  0 siblings, 1 reply; 14+ messages in thread
From: Jeff Layton @ 2017-11-17 13:27 UTC (permalink / raw)
  To: Matt Benjamin, Rafael Lopez; +Cc: Sage Weil, Ceph Development, ceph-users

FWIW, it might be interesting at some point to hack together a libcephfs
backend driver for fio. fio already has one for librbd, so I imagine it
wouldn't be too hard to do, and it would probably give us a better raw
comparison between the kernel client and libcephfs.
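
The core of such an engine would just wrap the libcephfs open/read/write
calls. A minimal standalone sketch of those calls (not a fio ioengine, with
error handling abbreviated; the file path is a placeholder, build with
-lcephfs) looks roughly like this:

/* Minimal libcephfs I/O sketch: the calls a fio backend would wrap. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <cephfs/libcephfs.h>

int main(void)
{
    struct ceph_mount_info *cmount;
    char buf[4096];
    int fd;

    memset(buf, 0xab, sizeof(buf));

    if (ceph_create(&cmount, NULL) < 0)      /* NULL = default client id */
        return 1;
    ceph_conf_read_file(cmount, NULL);       /* default ceph.conf search path */
    if (ceph_mount(cmount, "/") < 0) {       /* mount the CephFS root */
        ceph_release(cmount);
        return 1;
    }

    fd = ceph_open(cmount, "/fio-test.bin", O_CREAT | O_RDWR, 0644);
    if (fd >= 0) {
        /* pread/pwrite-style calls with explicit offsets */
        ceph_write(cmount, fd, buf, sizeof(buf), 0);
        ceph_read(cmount, fd, buf, sizeof(buf), 0);
        ceph_close(cmount, fd);
    }

    ceph_unmount(cmount);
    ceph_release(cmount);
    return 0;
}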

On Thu, 2017-11-16 at 11:29 -0500, Matt Benjamin wrote:
> Hi Rafael,
> 
> Thanks for taking the time to report your results.
> 
> The similarity to Ceph fuse performance is to be expected, because
> both Ceph fuse and the nfs-ganesha FSAL driver use libcephfs, as Jeff
> Layton noted.  It's worth noting that nfs-ganesha does not appear to
> be adding i/o or metadata operation latency.
> 
> The interesting questions, pushing further on Jeff's point, I think are
> 
> 1. libcephfs vs kernel cephfs performance delta, and in particular
> 2. the portion of that delta NOT accounted for by the direct OSD data
> path available to the kernel mode ceph client--the latter can
> eventually be made available to nfs-ganesha via pNFS as Jeff hinted,
> but the former is potentially available for performance improvement
> 
> The topic of the big client lock is an old one.  I experimented with
> removing it in 2014, branch api-concurrent here
> git@github.com:linuxbox2/linuxbox-ceph.git.  I'm not confident that
> just removing the client lock bottleneck will bring visible
> improvements, though, especially until MDS concurrency improvements
> are in place, but it may be worth revisiting.
> 
> Matt
> 
> On Thu, Nov 16, 2017 at 3:17 AM, Rafael Lopez <rafael.lopez@monash.edu> wrote:
> > We are running RHCS2.3 (jewel) with ganesha 2.4.2 and cephfs fsal, compiled
> > from srpm. experimenting with CTDB for controlling ganesha HA since we run
> > samba on same servers.
> > 
> > Haven't done much functionality/stress testing but on face value basic stuff
> > seems to work well (file operations).
> > 
> > In terms of performance, last time I tested ganesha it seemed comparable to
> > ceph-fuse (RHCS2.x/jewel, i think luminous ceph-fuse is better). Though I
> > haven't done rigorous metadata tests or multiple client tests. Also our
> > ganesha servers are quite small, as we are thus far only serving cephfs
> > natively. eg 4G ram 1 core. Here are some FIO results:
> > 
> > jobs in order are:
> > 1. async 1M
> > 2. sync 1M
> > 3. async 4k
> > 4. sync 4k
> > 5. seq read 1M
> > 6. rand read 4k
> > 
> > Ceph cluster is RHCS 2.3 (10.2.7)
> > 
> > CEPH-FUSE (10.2.x)
> >   WRITE: io=143652MB, aggrb=490328KB/s, minb=490328KB/s, maxb=490328KB/s,
> > mint=300002msec, maxt=300002msec
> >   WRITE: io=14341MB, aggrb=48947KB/s, minb=48947KB/s, maxb=48947KB/s,
> > mint=300018msec, maxt=300018msec
> >   WRITE: io=9808.2MB, aggrb=33478KB/s, minb=33478KB/s, maxb=33478KB/s,
> > mint=300001msec, maxt=300001msec
> >   WRITE: io=424476KB, aggrb=1414KB/s, minb=1414KB/s, maxb=1414KB/s,
> > mint=300003msec, maxt=300003ms
> >    READ: io=158069MB, aggrb=539527KB/s, minb=539527KB/s, maxb=539527KB/s,
> > mint=300008msec, maxt=300008msec
> >    READ: io=1881.2MB, aggrb=6420KB/s, minb=6420KB/s, maxb=6420KB/s,
> > mint=300001msec, maxt=300001msec
> > 
> > ganesha (nfs3)
> >   WRITE: io=157891MB, aggrb=538923KB/s, minb=538923KB/s, maxb=538923KB/s,
> > mint=300006msec, maxt=300006msec
> >   WRITE: io=38700MB, aggrb=132093KB/s, minb=132093KB/s, maxb=132093KB/s,
> > mint=300006msec, maxt=300006msec
> >   WRITE: io=3072.0MB, aggrb=10148KB/s, minb=10148KB/s, maxb=10148KB/s,
> > mint=309957msec, maxt=309957msec
> >   WRITE: io=397516KB, aggrb=1325KB/s, minb=1325KB/s, maxb=1325KB/s,
> > mint=300001msec, maxt=300001msec
> >    READ: io=82521MB, aggrb=281669KB/s, minb=281669KB/s, maxb=281669KB/s,
> > mint=300002msec, maxt=300002msec
> >    READ: io=1322.2MB, aggrb=4513KB/s, minb=4513KB/s, maxb=4513KB/s,
> > mint=300001msec, maxt=300001msec
> > 
> > cephfs kernel client
> >   WRITE: io=471041MB, aggrb=1568.8MB/s, minb=1568.8MB/s, maxb=1568.8MB/s,
> > mint=300394msec, maxt=300394msec
> >   WRITE: io=50005MB, aggrb=170680KB/s, minb=170680KB/s, maxb=170680KB/s,
> > mint=300006msec, maxt=300006msec
> >   WRITE: io=169092MB, aggrb=577166KB/s, minb=577166KB/s, maxb=577166KB/s,
> > mint=300000msec, maxt=300000msec
> >   WRITE: io=530548KB, aggrb=1768KB/s, minb=1768KB/s, maxb=1768KB/s,
> > mint=300003msec, maxt=300003msec
> >    READ: io=121501MB, aggrb=414720KB/s, minb=414720KB/s, maxb=414720KB/s,
> > mint=300002msec, maxt=300002msec
> >    READ: io=3264.6MB, aggrb=11142KB/s, minb=11142KB/s, maxb=11142KB/s,
> > mint=300001msec, maxt=300001msec
> > 
> > happy to share fio job file if anyone wants it.
> > 
> > 
> > On 9 November 2017 at 08:41, Sage Weil <sweil@redhat.com> wrote:
> > > 
> > > Who is running nfs-ganesha's FSAL to export CephFS?  What has your
> > > experience been?
> > > 
> > > (We are working on building proper testing and support for this into
> > > Mimic, but the ganesha FSAL has been around for years.)
> > > 
> > > Thanks!
> > > sage
> > > 
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> > > the body of a message to majordomo@vger.kernel.org
> > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > 
> > 
> > 
> > 
> > --
> > Rafael Lopez
> > Research Devops Engineer
> > Monash University eResearch Centre
> > 
> > T: +61 3 9905 9118
> > M: +61 (0)427682670
> > E: rafael.lopez@monash.edu
> > 
> > 
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > 
> 
> 
> 

-- 
Jeff Layton <jlayton@kernel.org>


* Re: [ceph-users] who is using nfs-ganesha and cephfs?
  2017-11-17 13:27       ` Jeff Layton
@ 2017-11-17 14:09         ` Wyllys Ingersoll
  0 siblings, 0 replies; 14+ messages in thread
From: Wyllys Ingersoll @ 2017-11-17 14:09 UTC (permalink / raw)
  To: Jeff Layton
  Cc: Matt Benjamin, Rafael Lopez, Sage Weil, Ceph Development, ceph-users

+100 to this idea.

On Fri, Nov 17, 2017 at 8:27 AM, Jeff Layton <jlayton@kernel.org> wrote:
> FWIW, it might be interesting at some point to hack together a libcephfs
> backend driver for fio. It already has one for librbd so I imagine it
> wouldn't be too hard to do, and would probably give us a better raw
> comparison between the kernel client and libcephfs.
>
> On Thu, 2017-11-16 at 11:29 -0500, Matt Benjamin wrote:
>> Hi Rafael,
>>
>> Thanks for taking the time to report your results.
>>
>> The similarity to Ceph fuse performance is to be expected, because
>> both Ceph fuse and the nfs-ganesha FSAL driver use libcephfs, as Jeff
>> Layton noted.  It's worth noting that nfs-ganesha does not appear to
>> be adding i/o or metadata operation latency.
>>
>> The interesting questions, pushing further on Jeff's point, I think are
>>
>> 1. libcephfs vs kernel cephfs performance delta, and in particular
>> 2. the portion of that delta NOT accounted for by the direct OSD data
>> path available to the kernel mode ceph client--the latter can
>> eventually be made available to nfs-ganesha via pNFS as Jeff hinted,
>> but the former is potentially available for performance improvement
>>
>> The topic of the big client lock is an old one.  I experimented with
>> removing it in 2014, branch api-concurrent here
>> git@github.com:linuxbox2/linuxbox-ceph.git.  I'm not confident that
>> just removing the client lock bottleneck will bring visible
>> improvements, though, especially until MDS concurrency improvements
>> are in place, but it may be worth revisiting.
>>
>> Matt
>>
>> On Thu, Nov 16, 2017 at 3:17 AM, Rafael Lopez <rafael.lopez@monash.edu> wrote:
>> > We are running RHCS2.3 (jewel) with ganesha 2.4.2 and cephfs fsal, compiled
>> > from srpm. experimenting with CTDB for controlling ganesha HA since we run
>> > samba on same servers.
>> >
>> > Haven't done much functionality/stress testing but on face value basic stuff
>> > seems to work well (file operations).
>> >
>> > In terms of performance, last time I tested ganesha it seemed comparable to
>> > ceph-fuse (RHCS2.x/jewel, i think luminous ceph-fuse is better). Though I
>> > haven't done rigorous metadata tests or multiple client tests. Also our
>> > ganesha servers are quite small, as we are thus far only serving cephfs
>> > natively. eg 4G ram 1 core. Here are some FIO results:
>> >
>> > jobs in order are:
>> > 1. async 1M
>> > 2. sync 1M
>> > 3. async 4k
>> > 4. sync 4k
>> > 5. seq read 1M
>> > 6. rand read 4k
>> >
>> > Ceph cluster is RHCS 2.3 (10.2.7)
>> >
>> > CEPH-FUSE (10.2.x)
>> >   WRITE: io=143652MB, aggrb=490328KB/s, minb=490328KB/s, maxb=490328KB/s,
>> > mint=300002msec, maxt=300002msec
>> >   WRITE: io=14341MB, aggrb=48947KB/s, minb=48947KB/s, maxb=48947KB/s,
>> > mint=300018msec, maxt=300018msec
>> >   WRITE: io=9808.2MB, aggrb=33478KB/s, minb=33478KB/s, maxb=33478KB/s,
>> > mint=300001msec, maxt=300001msec
>> >   WRITE: io=424476KB, aggrb=1414KB/s, minb=1414KB/s, maxb=1414KB/s,
>> > mint=300003msec, maxt=300003ms
>> >    READ: io=158069MB, aggrb=539527KB/s, minb=539527KB/s, maxb=539527KB/s,
>> > mint=300008msec, maxt=300008msec
>> >    READ: io=1881.2MB, aggrb=6420KB/s, minb=6420KB/s, maxb=6420KB/s,
>> > mint=300001msec, maxt=300001msec
>> >
>> > ganesha (nfs3)
>> >   WRITE: io=157891MB, aggrb=538923KB/s, minb=538923KB/s, maxb=538923KB/s,
>> > mint=300006msec, maxt=300006msec
>> >   WRITE: io=38700MB, aggrb=132093KB/s, minb=132093KB/s, maxb=132093KB/s,
>> > mint=300006msec, maxt=300006msec
>> >   WRITE: io=3072.0MB, aggrb=10148KB/s, minb=10148KB/s, maxb=10148KB/s,
>> > mint=309957msec, maxt=309957msec
>> >   WRITE: io=397516KB, aggrb=1325KB/s, minb=1325KB/s, maxb=1325KB/s,
>> > mint=300001msec, maxt=300001msec
>> >    READ: io=82521MB, aggrb=281669KB/s, minb=281669KB/s, maxb=281669KB/s,
>> > mint=300002msec, maxt=300002msec
>> >    READ: io=1322.2MB, aggrb=4513KB/s, minb=4513KB/s, maxb=4513KB/s,
>> > mint=300001msec, maxt=300001msec
>> >
>> > cephfs kernel client
>> >   WRITE: io=471041MB, aggrb=1568.8MB/s, minb=1568.8MB/s, maxb=1568.8MB/s,
>> > mint=300394msec, maxt=300394msec
>> >   WRITE: io=50005MB, aggrb=170680KB/s, minb=170680KB/s, maxb=170680KB/s,
>> > mint=300006msec, maxt=300006msec
>> >   WRITE: io=169092MB, aggrb=577166KB/s, minb=577166KB/s, maxb=577166KB/s,
>> > mint=300000msec, maxt=300000msec
>> >   WRITE: io=530548KB, aggrb=1768KB/s, minb=1768KB/s, maxb=1768KB/s,
>> > mint=300003msec, maxt=300003msec
>> >    READ: io=121501MB, aggrb=414720KB/s, minb=414720KB/s, maxb=414720KB/s,
>> > mint=300002msec, maxt=300002msec
>> >    READ: io=3264.6MB, aggrb=11142KB/s, minb=11142KB/s, maxb=11142KB/s,
>> > mint=300001msec, maxt=300001msec
>> >
>> > happy to share fio job file if anyone wants it.
>> >
>> >
>> > On 9 November 2017 at 08:41, Sage Weil <sweil@redhat.com> wrote:
>> > >
>> > > Who is running nfs-ganesha's FSAL to export CephFS?  What has your
>> > > experience been?
>> > >
>> > > (We are working on building proper testing and support for this into
>> > > Mimic, but the ganesha FSAL has been around for years.)
>> > >
>> > > Thanks!
>> > > sage
>> > >
>> > > --
>> > > To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> > > the body of a message to majordomo@vger.kernel.org
>> > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
>> >
>> >
>> >
>> >
>> > --
>> > Rafael Lopez
>> > Research Devops Engineer
>> > Monash University eResearch Centre
>> >
>> > T: +61 3 9905 9118
>> > M: +61 (0)427682670
>> > E: rafael.lopez@monash.edu
>> >
>> >
>> > _______________________________________________
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>>
>>
>>
>
> --
> Jeff Layton <jlayton@kernel.org>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
