* Fwd: State of play for RDMA on Luminous
From: Florian Haas @ 2017-08-28  8:42 UTC
To: ceph-devel

Hello everyone,

I'm taking the liberty of cross-posting this to the -devel list. I did
get a couple of off-list replies, but they were all of the "oh cool, I
want to know too" variety. :)

Looking forward to people's thoughts.

Cheers,
Florian


-------- Forwarded Message --------
Subject: State of play for RDMA on Luminous
Date: Wed, 23 Aug 2017 10:26:45 +0200
From: Florian Haas <florian@hastexo.com>
To: ceph-users@lists.ceph.com <ceph-users@lists.ceph.com>

Hello everyone,

I'm trying to get a handle on the current state of the async
messenger's RDMA transport in Luminous, and the information available
seems a little sparse (I've found
https://community.mellanox.com/docs/DOC-2693 and
https://community.mellanox.com/docs/DOC-2721, which are a great start
but don't look complete). So I'm kicking off this thread in the hope
of bringing interested parties and developers together.

Could someone in the know please confirm that the following assumptions
of mine are accurate:

- RDMA support for the async messenger is available in Luminous.

- You enable it globally by setting ms_type to "async+rdma" and by
giving the various ms_async_rdma* options appropriate values (most
importantly, ms_async_rdma_device_name); see the config sketch after
this list.

- You can also enable RDMA messaging for just the public or the
cluster network, via ms_public_type and ms_cluster_type.

- Users have to make a global async+rdma vs. async+posix decision for
each network. For example, if either ms_type or ms_public_type is set
to async+rdma on the cluster nodes, then a client configured with
ms_type = async+posix can't communicate with them.
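
To make those assumptions concrete, here is a minimal ceph.conf sketch
of what I believe the configuration would look like. The option names
are the ones mentioned above and in the Mellanox docs; the device name
"mlx5_0" is purely an illustrative assumption of mine:

  [global]
  # Use the RDMA transport of the async messenger everywhere.
  ms_type = async+rdma

  # RDMA device to bind to; "mlx5_0" is just an example name,
  # check the output of ibv_devices on your hosts.
  ms_async_rdma_device_name = mlx5_0

  # Alternatively, leave ms_type at its async+posix default and
  # enable RDMA on the cluster network only:
  #ms_cluster_type = async+rdma

If my fourth assumption holds, a client would then also need ms_type
(or ms_public_type) set to async+rdma to talk to such a cluster.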

Based on those assumptions, I have the following questions:

- What is the current state of RDMA support in kernel libceph? In other
words, is there currently a way to map RBDs, or mount CephFS, if a Ceph
cluster uses RDMA messaging?

- In case there is no such support in the kernel yet: What's the current
status of RDMA support (and testing) with regard to
  * libcephfs?
  * the Samba Ceph VFS?
  * nfs-ganesha?
  * tcmu-runner?

- In summary, if a user wants to access their Ceph cluster via a POSIX
filesystem or via iSCSI, is enabling the async messenger's RDMA
transport on the public network an option? Or would they have to
continue running on TCP/IP (possibly on IPoIB, if they already have
InfiniBand hardware) until the client libraries catch up? (See the
fallback sketch after this list.)

- And more broadly, if a user wants to reap the performance benefits
of RDMA, but not all of their prospective Ceph clients have InfiniBand
HCAs, what are their options? RoCE?
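
As for the fallback mentioned in the POSIX/iSCSI question above: it is
simply the default transport, which needs no RDMA-specific options at
all. A minimal client-side sketch, assuming the async+posix default:

  [global]
  # Plain TCP/IP messaging (the default). This works over an IPoIB
  # interface as well, so existing InfiniBand hardware can be used.
  ms_type = async+posix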

Thanks very much in advance for everyone's insight!

Cheers,
Florian


-- 
Please feel free to verify my identity:
https://keybase.io/fghaas
