* Re: 25G RDMA networking thoughts???
From: Sage Weil @ 2016-11-08 21:45 UTC
  To: LIU, Fei; +Cc: Haomai Wang, ceph-devel


On Wed, 9 Nov 2016, LIU, Fei wrote:
> Hi Sage,
>    Yes, totally understood. The 25G RDMA network for the Ceph cluster is
> built for internal testing. The Xio messenger and the Async messenger (but
> the latter can only support InfiniBand, right?) are the two options, and we
> are carefully evaluating both. But the most important goal in the end is to
> see how BlueStore works with RDMA to bring down the total latency for
> workloads like OLTP.
> 
> Hi Haomai,
>    Would you mind letting us know when the async messenger is going to
> support Ethernet, if it doesn't yet?

The default async backend is PosixStack, which is all TCP-based.  (And 
async is now the default messenger in kraken.)

sage
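
[Editor's note: for reference, a minimal ceph.conf sketch of how the
messenger backend Sage mentions is selected, added here for illustration
and not part of the original mail. Option names are as best recalled for
the kraken era and the device name is a hypothetical placeholder; check
your release's documentation.]

    [global]
    # default: AsyncMessenger over TCP (PosixStack)
    ms_type = async+posix

    # experimental AsyncMessenger RDMA backend:
    # ms_type = async+rdma
    # ms_async_rdma_device_name = mlx5_0   # hypothetical device name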

> 
>    Regards,
>    James
>       ------------------------------------------------------------------
> From:Sage Weil <sweil@redhat.com>
> Time:2016 Nov 8 (Tue) 13:19
> To:James <james.liu@alibaba-inc.com>
> Subject:Re: 25G RDMA networking thoughts???
> 
> [adding ceph-devel]
> 
> On Wed, 9 Nov 2016, LIU, Fei wrote:
> > Hi Sage,
> >    I was wondering whether you have any thoughts on building a 25G RDMA
> > network besides xio-messenger/async? Is there any guidance on building a
> > 25G RDMA network to better control the whole Ceph cluster's latency?
> 
> The only RDMA options right now are XioMessenger and AsyncMessenger's new 
> RDMA backend. Both are experimental, but we'd be very interested in 
> hearing about your experience.
> 
> I wouldn't assume that latency is network-related, though.  More often 
> than not we're finding it's the OSD backend or the OSD request internals 
> (e.g., request scheduling or peering) that's the culprit...
> 
> sage
> 
> 
> 
> 


* Re: 25G RDMA networking thoughts???
From: Haomai Wang @ 2016-11-09  3:33 UTC
  To: LIU, Fei; +Cc: Sage Weil, ceph-devel

On Wed, Nov 9, 2016 at 5:51 AM, LIU, Fei <james.liu@alibaba-inc.com> wrote:
> Hi Sage,
>    Thanks, and sorry for the confusion. What I am trying to say is async
> with RDMA over Ethernet. I understand that the async messenger supports
> TCP-based Ethernet well.
> Secondly, would it be possible to have data traffic among OSDs categorized
> into different service levels, to better provide QoS for the whole Ceph
> cluster service, within the 25G RDMA facilities?
> Thirdly, would it be possible to provide a unified network for both storage
> and compute under QoS control? We don't expect replication/recovery/backfill
> to have a bad impact on application latency.

Currently I have only tested RDMA over IB. I don't have an Ethernet RDMA
NIC on hand, so I don't know what would be needed for an Ethernet RDMA NIC...

>
>
>    Regards,
>    James
>
> ------------------------------------------------------------------
> From:Sage Weil <sweil@redhat.com>
> Time:2016 Nov 8 (Tue) 13:45
> To:James <james.liu@alibaba-inc.com>
> Cc:Haomai Wang <haomai@xsky.com>; ceph-devel <ceph-devel@vger.kernel.org>
> Subject:Re: 25G RDMA networking thoughts???
>
> On Wed, 9 Nov 2016, LIU, Fei wrote:
>> Hi Sage,
>>    Yes, totally understood. The 25G RDMA network for the Ceph cluster is
>> built for internal testing. The Xio messenger and the Async messenger (but
>> the latter can only support InfiniBand, right?) are the two options, and
>> we are carefully evaluating both. But the most important goal in the end
>> is to see how BlueStore works with RDMA to bring down the total latency
>> for workloads like OLTP.
>>
>> Hi Haomai,
>>    Would you mind letting us know when the async messenger is going to
>> support Ethernet, if it doesn't yet?
>
> The default async backend is PosixStack, which is all TCP-based.  (And
> async is now the default messenger in kraken.)
>
> sage
>
>>
>>    Regards,
>>    James
>>       ------------------------------------------------------------------
>> From:Sage Weil <sweil@redhat.com>
>> Time:2016 Nov 8 (Tue) 13:19
>> To:James <james.liu@alibaba-inc.com>
>> Subject:Re: 25G RDMA networking thoughts???
>>
>> [adding ceph-devel]
>>
>> On Wed, 9 Nov 2016, LIU, Fei wrote:
>> > Hi Sage,
>> >    I was wondering whether you have any thoughts on building a 25G RDMA
>> > network besides xio-messenger/async? Is there any guidance on building a
>> > 25G RDMA network to better control the whole Ceph cluster's latency?
>>
>> The only RDMA options right now are XioMessenger and AsyncMessenger's new
>> RDMA backend. Both are experimental, but we'd be very interested in
>> hearing about your experience.
>>
>> I wouldn't assume that latency is network-related, though.  More often
>> than not we're finding it's the OSD backend or the OSD request internals
>> (e.g., request scheduling or peering) that's the culprit...
>>
>> sage
>>
>>
>>
>>
>
>

