* Cluster Network & Public Network w.r.t XIO ?
@ 2015-07-31  3:21 kernel neophyte
  2015-07-31 14:49 ` Matt Benjamin
  0 siblings, 1 reply; 8+ messages in thread
From: kernel neophyte @ 2015-07-31  3:21 UTC (permalink / raw)
  To: vu, raju.kurunkad, ceph-devel

Hi Vu, Raju,

I am trying to bring up a Ceph cluster on a powerful Dell server with
two 40 GbE RoCEv2 NICs.

I have assigned one as my cluster network (I would prefer all OSD
communication to happen there) and the other as my public network.
This works fine in the simple messenger case (of course, no RDMA).

But when I try to bring this up on XIO, it gets complicated: how do I
specify two RDMA_LOCAL addresses, one for the cluster network and one
for the public network? Can I choose XIO for client-to-OSD
communication and simple for the cluster network?

Any thoughts?

Thanks,
Neo
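
For reference, the dual-network split that works in the simple-messenger case is usually expressed in ceph.conf along these lines (the subnets below are placeholders, not taken from this thread):

```ini
# Placeholder subnets; substitute the ranges your two NICs actually sit on.
[global]
public network  = 192.168.1.0/24   ; client <-> OSD (and MON) traffic
cluster network = 192.168.2.0/24   ; OSD <-> OSD replication and recovery
```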


* Re: Cluster Network & Public Network w.r.t XIO ?
  2015-07-31  3:21 Cluster Network & Public Network w.r.t XIO ? kernel neophyte
@ 2015-07-31 14:49 ` Matt Benjamin
  2015-07-31 16:20   ` kernel neophyte
  0 siblings, 1 reply; 8+ messages in thread
From: Matt Benjamin @ 2015-07-31 14:49 UTC (permalink / raw)
  To: kernel neophyte; +Cc: vu, raju kurunkad, ceph-devel, Marcus Watts

Hi Neo,

On our formerly-internal firefly-based branch, what we did was create additional Messenger
instances ad infinitum, which at least let you do this, but it's not what anybody wanted
upstream or long-term.  What's upstream now doesn't, IIRC, let you describe that.  The rdma_local
parameter, as you say, is insufficient (and actually a hack).

What we plan to do (and have in progress) is to extend the work Sage started on wip-address, which
will enable multi-homing and identify instances by their transport type(s).  We might put more
information there to help with future topologies.  An improved configuration language to let you
describe your desired network setup would be packaged with that.

The plan is that an improved situation might arrive as early as J.  If we need an interim method,
now would be a good time to start the discussion.

Matt

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: Cluster Network & Public Network w.r.t XIO ?
  2015-07-31 14:49 ` Matt Benjamin
@ 2015-07-31 16:20   ` kernel neophyte
  2015-07-31 17:04     ` Matt Benjamin
  0 siblings, 1 reply; 8+ messages in thread
From: kernel neophyte @ 2015-07-31 16:20 UTC (permalink / raw)
  To: Matt Benjamin; +Cc: vu, raju kurunkad, ceph-devel, Marcus Watts

On Fri, Jul 31, 2015 at 7:49 AM, Matt Benjamin <mbenjamin@redhat.com> wrote:
> Hi Neo,
>
> On our formerly-internal firefly-based branch, what we did was create additional Messenger
> instances ad infinitum, which at least let you do this, but it's not what anybody wanted for
> upstream or long-term.  What's upstream now doesn't let you IIRC describe that.  The rdma_local
> parameter like you say is insufficient (and actually a hack).
>
> What we plan to do (and have in progress) is extending work Sage started on wip-address, which
> will enable multi-homing and identify instances by their transport type(s).  We might put more
> information there to help with future topologies.  Improved configuration language to let you
> describe your desired network setup would be packaged with that.

This is awesome! Could you please point me to your WIP branch? Also,
please let me know if I can help speed up the dev/test process.

-Neo



* Re: Cluster Network & Public Network w.r.t XIO ?
  2015-07-31 16:20   ` kernel neophyte
@ 2015-07-31 17:04     ` Matt Benjamin
  2015-07-31 20:28       ` Marcus Watts
  0 siblings, 1 reply; 8+ messages in thread
From: Matt Benjamin @ 2015-07-31 17:04 UTC (permalink / raw)
  To: kernel neophyte; +Cc: vu, raju kurunkad, ceph-devel, Marcus Watts

Hi Neo,

I'm not sure of the state of the current WIP, but Marcus (CC'd) is coordinating it at the moment
from our end.  We'd certainly love help, so we'll check back in with updated branch
info -- maybe next week?

Matt



* Re: Cluster Network & Public Network w.r.t XIO ?
  2015-07-31 17:04     ` Matt Benjamin
@ 2015-07-31 20:28       ` Marcus Watts
  0 siblings, 0 replies; 8+ messages in thread
From: Marcus Watts @ 2015-07-31 20:28 UTC (permalink / raw)
  To: Matt Benjamin; +Cc: kernel neophyte, vu, raju kurunkad, ceph-devel

On Fri, Jul 31, 2015 at 01:04:12PM -0400, Matt Benjamin wrote:
> Hi Neo,
> 
> I'm not sure of the state of current WIP, but Marcus (CC'd) is coordinating atm
> from our end.  I'm sure we'd love help, so we'll check back in with updated branch
> info--maybe next week?
> 
> Matt

I have a branch that builds on what Sage did, but is specifically intended
to fix the XIO addressing problem.  I think I got most of the necessary
changes in; it builds, but doesn't run.  Besides fixing the "not running"
problem, it will also need a bit more smarts so connections can pick the
right address based on connection flavor.  I.e., definitely "work in progress".
It also needs to be updated to the latest code.

I think it's already somewhere public, but not obvious.  If it's not, I'll
make sure it is.  I'll post information later today on where it is.
I'll also work on getting it into the right place, but that may take a bit longer.
Finally, I'll also work on making it usable.

					-Marcus Watts


* Re: Cluster Network & Public Network w.r.t XIO ?
  2015-08-01  1:07 ` Sage Weil
@ 2015-08-03 16:43   ` kernel neophyte
  0 siblings, 0 replies; 8+ messages in thread
From: kernel neophyte @ 2015-08-03 16:43 UTC (permalink / raw)
  To: Sage Weil; +Cc: Marcus Watts, Matt Benjamin, vu, raju kurunkad, ceph-devel

Thanks Marcus & Matt. I will take a look at it this week.

-Neo



* Re: Cluster Network & Public Network w.r.t XIO ?
  2015-08-01  0:12 Marcus Watts
@ 2015-08-01  1:07 ` Sage Weil
  2015-08-03 16:43   ` kernel neophyte
  0 siblings, 1 reply; 8+ messages in thread
From: Sage Weil @ 2015-08-01  1:07 UTC (permalink / raw)
  To: Marcus Watts; +Cc: kernel neophyte, mbenjamin, vu, raju kurunkad, ceph-devel

On Fri, 31 Jul 2015, Marcus Watts wrote:
> I promised information on my copy of wip-address.  It's not in as
> good a shape as I promised-- got pulled onto something else, so
> the last commit I made left it not building.  And it will definitely
> need more past that.
> 
> So here's where it lives right now,
> repo
> git@github.com:linuxbox2/linuxbox-ceph.git
> branch
> xio-firefly-mpc1
> 
> This is definitely on my "todo" list, and I plan to work on it next week.
> 
> I also plan to push a copy of this or something much like it, as a wip
> branch to the main ceph git, as soon as practical.

As a first step I'd rebase on master, as any cleanup work done before
that will probably get shredded by the rebase conflicts.  This one is
tedious, unfortunately, since every encode needs that features arg.  :(

Thanks, Marcus!
sage


* Re: Cluster Network & Public Network w.r.t XIO ?
@ 2015-08-01  0:12 Marcus Watts
  2015-08-01  1:07 ` Sage Weil
  0 siblings, 1 reply; 8+ messages in thread
From: Marcus Watts @ 2015-08-01  0:12 UTC (permalink / raw)
  To: kernel neophyte; +Cc: mbenjamin, vu, raju kurunkad, ceph-devel

I promised information on my copy of wip-address.  It's not in as
good a shape as I promised -- I got pulled onto something else, so
the last commit I made left it not building.  And it will definitely
need more work beyond that.

So here's where it lives right now:
repo:   git@github.com:linuxbox2/linuxbox-ceph.git
branch: xio-firefly-mpc1

This is definitely on my "todo" list, and I plan to work on it next week.

I also plan to push a copy of this or something much like it, as a wip
branch to the main ceph git, as soon as practical.

						-Marcus Watts

