* Independent server management of Multicasting connections
@ 2011-11-21 14:15 dev-x2spCj9RiN0z5UmgcLIfJQ
       [not found] ` <20111121071551.14306gf6ogr9z8yf-x2spCj9RiN0z5UmgcLIfJQ@public.gmane.org>
  0 siblings, 1 reply; 15+ messages in thread
From: dev-x2spCj9RiN0z5UmgcLIfJQ @ 2011-11-21 14:15 UTC (permalink / raw)
  To: linux-rdma-u79uwXL29TY76Z2rM5mHXA

 From what I understand, and I may not understand correctly, to  
perform IB multicast between two nodes, I need to swap addresses and  
remote keys for the two nodes.  In the examples I have seen, this has  
been done via the RDMA CM directly between the two nodes.  Does it  
matter how this information is exchanged?  In my application I will  
have several nodes multicasting to several other nodes, and I want to  
manage the connections from an independent application.  What I would  
like to do is write this application (a server) so that each node  
would request a connection with another node and then be provided  
with the information it needs to multicast.  For example... Node 1  
would request a connection (from the server app) to Node 2, and Node 2  
would request a connection (again, from the server app) to Node 1.  The  
server app would provide Node 2's "credentials" to Node 1 and likewise  
to Node 2.  Is this even possible?


CD

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* RE: Independent server management of Multicasting connections
       [not found] ` <20111121071551.14306gf6ogr9z8yf-x2spCj9RiN0z5UmgcLIfJQ@public.gmane.org>
@ 2011-11-21 17:05   ` Hefty, Sean
       [not found]     ` <1828884A29C6694DAF28B7E6B8A8237316E9B9E9-P5GAC/sN6hmkrb+BlOpmy7fspsVTdybXVpNB7YpNyf8@public.gmane.org>
  0 siblings, 1 reply; 15+ messages in thread
From: Hefty, Sean @ 2011-11-21 17:05 UTC (permalink / raw)
  To: dev-x2spCj9RiN0z5UmgcLIfJQ, linux-rdma-u79uwXL29TY76Z2rM5mHXA

>  From what I understand, and I may not understand correctly, to
> perform IB multicast between two nodes, I need to swap addresses and
> remote keys for the two nodes.  In the examples I have seen this has

You only need to do this for unicast UD.  Multicast doesn't require exchanging addresses and qkeys, but see below.

> been done via RDMA CM directly between the two nodes.  Does it matter
> how this information is exchanged?  In my application I will have

For multicast, you need to 1. have the SA create the multicast group and 2. join the group.  To create the group, you need to either have the SA automatically create the group (if this is possible) or create it dynamically.  To create the group dynamically from user space, you should use the rdma_cm or ib_umad interfaces.  The rdma_cm is easier.

To join the group, you need to let the SA know that the node should receive multicast traffic, so that it can program the switches.  This is done through the rdma_cm (easy way) or using ib_umad (hard way that gets harder if you want to support multiple applications joining the same group from the same system).
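As a rough illustration, the create-then-join sequence might look like the following with the rdma_cm.  This is only a sketch, not mckey.c itself: error handling, cleanup, and the PD/CQ/QP setup step are omitted, and the addresses and timeout are placeholders.

```c
/* Sketch only: the rdma_cm create-and-join flow described above.
 * Error handling, cleanup and QP setup are omitted; addresses and
 * the timeout are placeholders. */
#include <rdma/rdma_cma.h>

static int join_group(struct sockaddr *src, struct sockaddr *mcast)
{
    struct rdma_event_channel *ch = rdma_create_event_channel();
    struct rdma_cm_id *id;
    struct rdma_cm_event *event;

    rdma_create_id(ch, &id, NULL, RDMA_PS_UDP);

    /* Resolving the destination binds the id to a local RDMA device. */
    rdma_resolve_addr(id, src, mcast, 2000 /* ms */);
    rdma_get_cm_event(ch, &event);   /* expect RDMA_CM_EVENT_ADDR_RESOLVED */
    rdma_ack_cm_event(event);

    /* ... allocate PD/CQ and create a UD QP on id (rdma_create_qp) ... */

    /* Sends the join request to the SA; this also creates the group
     * if it does not already exist. */
    rdma_join_multicast(id, mcast, NULL);
    rdma_get_cm_event(ch, &event);   /* expect RDMA_CM_EVENT_MULTICAST_JOIN */
    /* event->param.ud carries the group's qp_num, qkey and ah_attr. */
    rdma_ack_cm_event(event);
    return 0;
}
```

mckey.c, discussed later in this thread, follows essentially this asynchronous pattern.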

> several nodes multicasting to several other nodes and I want to manage
> the connections from an independent application.  What I would like to
> do is write this application, server, etc. so that each node would
> request a connection with another node and then be provided with the
> information it needs to multicast.  For example... Node 1 would
> request a connection (from the server app) to Node 2 and Node 2 would
> request a connection (again, from the server app) to Node 1.  The
> server app would provide Node 2's "credentials" to Node 1 and likewise
> to Node 2.  Is this even possible?

The SA basically does the work that you're describing for your server app.  Node 1 can ask the SA to create a multicast group.  Node 2 can ask to join that group.  Somehow node 2 needs to know what group node 1 created.

* RE: Independent server management of Multicasting connections
       [not found]     ` <1828884A29C6694DAF28B7E6B8A8237316E9B9E9-P5GAC/sN6hmkrb+BlOpmy7fspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2011-11-21 17:16       ` dev-x2spCj9RiN0z5UmgcLIfJQ
       [not found]         ` <20111121101650.162928ijxrgu49ky-x2spCj9RiN0z5UmgcLIfJQ@public.gmane.org>
  2011-11-21 19:05       ` Ira Weiny
                         ` (2 subsequent siblings)
  3 siblings, 1 reply; 15+ messages in thread
From: dev-x2spCj9RiN0z5UmgcLIfJQ @ 2011-11-21 17:16 UTC (permalink / raw)
  To: Hefty, Sean; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

SA = Subnet Administrator?


* RE: Independent server management of Multicasting connections
       [not found]         ` <20111121101650.162928ijxrgu49ky-x2spCj9RiN0z5UmgcLIfJQ@public.gmane.org>
@ 2011-11-21 17:53           ` Hefty, Sean
  0 siblings, 0 replies; 15+ messages in thread
From: Hefty, Sean @ 2011-11-21 17:53 UTC (permalink / raw)
  To: dev-x2spCj9RiN0z5UmgcLIfJQ; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

> SA = Subnet Administrator?

yes

* Re: Independent server management of Multicasting connections
       [not found]     ` <1828884A29C6694DAF28B7E6B8A8237316E9B9E9-P5GAC/sN6hmkrb+BlOpmy7fspsVTdybXVpNB7YpNyf8@public.gmane.org>
  2011-11-21 17:16       ` dev-x2spCj9RiN0z5UmgcLIfJQ
@ 2011-11-21 19:05       ` Ira Weiny
       [not found]         ` <20111121110529.5090464a.weiny2-i2BcT+NCU+M@public.gmane.org>
  2011-11-22 18:00       ` dev-x2spCj9RiN0z5UmgcLIfJQ
  2011-11-22 19:33       ` dev-x2spCj9RiN0z5UmgcLIfJQ
  3 siblings, 1 reply; 15+ messages in thread
From: Ira Weiny @ 2011-11-21 19:05 UTC (permalink / raw)
  To: Hefty, Sean; +Cc: dev-x2spCj9RiN0z5UmgcLIfJQ, linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Mon, 21 Nov 2011 09:05:22 -0800
"Hefty, Sean" <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org> wrote:

> >  From what I understand, and I may not understand correctly, to
> > perform IB multicast between two nodes, I need to swap addresses and
> > remote keys for the two nodes.  In the examples I have seen this has
> 
> You only need to do this for unicast UD.  Multicast doesn't require exchanging addresses and qkeys, but see below.
> 
> > been done via RDMA CM directly between the two nodes.  Does it matter
> > how this information is exchanged?  In my application I will have
> 
> For multicast, you need to 1. have the SA create the multicast group and 2. join the group.  To create the group, you need to either have the SA automatically create the group (if this is possible) or create it dynamically.  To create the group dynamically from user space, you should use the rdma_cm or ib_umad interfaces.  The rdma_cm is easier.
>

FWIW, OpenSM (current master branch) now has the ability to create multicast groups at startup, so 1 is possible.

See: http://www.spinics.net/lists/linux-rdma/msg10021.html

Ira
 

-- 
Ira Weiny
Member of Technical Staff
Lawrence Livermore National Lab
925-423-8008
weiny2-i2BcT+NCU+M@public.gmane.org

* Re: Independent server management of Multicasting connections
       [not found]         ` <20111121110529.5090464a.weiny2-i2BcT+NCU+M@public.gmane.org>
@ 2011-11-22 14:17           ` dev-x2spCj9RiN0z5UmgcLIfJQ
       [not found]             ` <20111122071700.10712l3tf30i64m4-6c4oY0qWgKW61VtY7fu8aA@public.gmane.org>
  0 siblings, 1 reply; 15+ messages in thread
From: dev-x2spCj9RiN0z5UmgcLIfJQ @ 2011-11-22 14:17 UTC (permalink / raw)
  To: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Where can I find out more on how to do this, i.e. how to use OpenSM  
to create multicast groups at startup?



* RE: Independent server management of Multicasting connections
       [not found]     ` <1828884A29C6694DAF28B7E6B8A8237316E9B9E9-P5GAC/sN6hmkrb+BlOpmy7fspsVTdybXVpNB7YpNyf8@public.gmane.org>
  2011-11-21 17:16       ` dev-x2spCj9RiN0z5UmgcLIfJQ
  2011-11-21 19:05       ` Ira Weiny
@ 2011-11-22 18:00       ` dev-x2spCj9RiN0z5UmgcLIfJQ
       [not found]         ` <20111122110030.112519gpho0ikta6-x2spCj9RiN0z5UmgcLIfJQ@public.gmane.org>
  2011-11-22 19:33       ` dev-x2spCj9RiN0z5UmgcLIfJQ
  3 siblings, 1 reply; 15+ messages in thread
From: dev-x2spCj9RiN0z5UmgcLIfJQ @ 2011-11-22 18:00 UTC (permalink / raw)
  To: Hefty, Sean; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

In reviewing the mckey.c source code, it looks like, in post_sends,  
a remote key and remote QPN are being assigned in the WR.  I am  
confused as to why this is done if remote keys etc. are not required.

Also, how do I multicast once the SA has created the group and I have  
joined it?

Thanks,
CD




* RE: Independent server management of Multicasting connections
       [not found]         ` <20111122110030.112519gpho0ikta6-x2spCj9RiN0z5UmgcLIfJQ@public.gmane.org>
@ 2011-11-22 18:18           ` Hefty, Sean
       [not found]             ` <1828884A29C6694DAF28B7E6B8A8237316E9BD23-P5GAC/sN6hmkrb+BlOpmy7fspsVTdybXVpNB7YpNyf8@public.gmane.org>
  0 siblings, 1 reply; 15+ messages in thread
From: Hefty, Sean @ 2011-11-22 18:18 UTC (permalink / raw)
  To: dev-x2spCj9RiN0z5UmgcLIfJQ; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

> In reviewing the mckey.c source code, it looks like, in post_sends,
> a remote key and remote QPN are being assigned in the WR.  I am
> confused as to why this is done if remote keys etc. are not required.

The qpn used for multicast traffic is 0xFFFFFF.  The qkey is part of the multicast group attributes and is specified by the creator of the multicast group.  The kernel rdma_cm returns these parameters to the user after joining the multicast group.  The qkey is discovered as part of the join process, and the qpn is hard-coded.
 
> Also, how do I multicast once the SA has created the group and I have
> joined it?

I'm not sure I understand this question, but posting a send with the multicast group as the destination sends the multicast message.  The send destination is indicated by the qpn, qkey, and address handle.  These are returned as part of the rdma_join_multicast response.

* RE: Independent server management of Multicasting connections
       [not found]             ` <1828884A29C6694DAF28B7E6B8A8237316E9BD23-P5GAC/sN6hmkrb+BlOpmy7fspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2011-11-22 18:27               ` dev-x2spCj9RiN0z5UmgcLIfJQ
       [not found]                 ` <20111122112723.153527yw26bz6n4b-x2spCj9RiN0z5UmgcLIfJQ@public.gmane.org>
  0 siblings, 1 reply; 15+ messages in thread
From: dev-x2spCj9RiN0z5UmgcLIfJQ @ 2011-11-22 18:27 UTC (permalink / raw)
  To: linux-rdma-u79uwXL29TY76Z2rM5mHXA



Quoting "Hefty, Sean" <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>:

>> In reviewing the mckey.c source code, it looks like, in post_sends,
>> a remote key and remote QPN are being assigned in the WR.  I am
>> confused as to why this is done if remote keys etc. are not required.
>
> The qpn used for multicast traffic is 0xFFFFFF.  The qkey is part of  
> the multicast group attributes and is specified by the creator of  
> the multicast group.  The kernel rdma_cm returns these parameters to  
> the user after joining the multicast group.  The qkey is discovered  
> as part of the join process, and the qpn is hard-coded.

Are there examples of how to create the multicast group using rdma_cm?


>> Also, how do I multicast once the SA has created the group and I have
>> joined it?
>
> I'm not sure I understand this question, but posting a send with the  
> multicast group as the destination sends the multicast message.  The  
> send destination is indicated by the qpn, qkey, and address handle.   
> These are returned as part of the rdma_join_multicast response.

Ok... I think I understand this part... I was just a bit confused  
about the destination of the msg.



* RE: Independent server management of Multicasting connections
       [not found]                 ` <20111122112723.153527yw26bz6n4b-x2spCj9RiN0z5UmgcLIfJQ@public.gmane.org>
@ 2011-11-22 18:47                   ` Hefty, Sean
  0 siblings, 0 replies; 15+ messages in thread
From: Hefty, Sean @ 2011-11-22 18:47 UTC (permalink / raw)
  To: dev-x2spCj9RiN0z5UmgcLIfJQ, linux-rdma-u79uwXL29TY76Z2rM5mHXA

> Are there examples of how to create the multicast group using rdma_cm?

rdma_join_multicast will create the group if it does not exist.

- Sean

* RE: Independent server management of Multicasting connections
       [not found]     ` <1828884A29C6694DAF28B7E6B8A8237316E9B9E9-P5GAC/sN6hmkrb+BlOpmy7fspsVTdybXVpNB7YpNyf8@public.gmane.org>
                         ` (2 preceding siblings ...)
  2011-11-22 18:00       ` dev-x2spCj9RiN0z5UmgcLIfJQ
@ 2011-11-22 19:33       ` dev-x2spCj9RiN0z5UmgcLIfJQ
       [not found]         ` <20111122123351.3062401wh8eggf8f-x2spCj9RiN0z5UmgcLIfJQ@public.gmane.org>
  3 siblings, 1 reply; 15+ messages in thread
From: dev-x2spCj9RiN0z5UmgcLIfJQ @ 2011-11-22 19:33 UTC (permalink / raw)
  To: Hefty, Sean; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

[These questions are related to my original questions in the post dated  
11/21 7:15 AM MST, subject: Independent server management of  
Multicasting connections]

The original post was to determine whether it was possible to have a  
server app that managed the data required to establish multicast IB  
communications between 2 or more nodes.  Each node would initialize  
itself as needed wrt IB, and each node would request from the server,  
as I now understand, the qpn, qkey and address handle for the  
multicast group it desired to communicate with.  The server, having  
created the multicast group dynamically through the SA, would return  
said data, and then the node would be able to begin posting multicast  
sends or receives.  Alternatively, if I understand correctly, I can  
create the multicast group on startup of opensmd.

In the rdma and multicast examples I have seen, each node sets up an  
rdma cm event channel.  The node then polls for events.

I had hoped to be able to avoid using the rdma_cm and avoid having to  
monitor an rdma_cm event channel.  What I think I would like to do is  
have each node of my sim initialize its side of the communication,  
which I think should include

rdma_bind_addr
rdma_resolve_addr
ibv_create_ah
rdma_join_multicast

then ibv_post_send/ibv_post_recv as required.

However, the rdma_* calls require an rdma_cm_id which I won't have if  
I don't use the rdma cm.

Can I bypass using the rdma_cm and the polling of the event channel?   
Or am I going to have to establish an event channel between my  
management server and each individual node?  On the other hand, if I  
can terminate the polling of the event channel once initialization is  
done, maybe I don't mind the rdma_cm....

Can I bypass the polling of the completion queue?... which would imply  
I am simply trusting the data arrived at its destination?

Sorry to ask so many questions.  Are there any good books on  
programming InfiniBand?

CD



* RE: Independent server management of Multicasting connections
       [not found]         ` <20111122123351.3062401wh8eggf8f-x2spCj9RiN0z5UmgcLIfJQ@public.gmane.org>
@ 2011-11-22 20:01           ` Hefty, Sean
       [not found]             ` <1828884A29C6694DAF28B7E6B8A8237316E9BDB4-P5GAC/sN6hmkrb+BlOpmy7fspsVTdybXVpNB7YpNyf8@public.gmane.org>
  0 siblings, 1 reply; 15+ messages in thread
From: Hefty, Sean @ 2011-11-22 20:01 UTC (permalink / raw)
  To: dev-x2spCj9RiN0z5UmgcLIfJQ; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

> The original post was to determine if it were possible to have server
> app that managed the data required to establish multicast IB
> communications between 2 or more nodes.  Each node would initialize
> itself as needed wrt IB and each node would request from the server,
> as I now understand, the qpn, qkey and address handle for the
> multicast group it desired to communicate with.  The server, having
> created dynamically through the SA, the multicast group, would return
> said data and then the node would be able to begin posting multicast
> sends, or receives.  Alternatively, if I understand correctly, I can
> create the multicast group on start up of the opensmd.

Every node that wants to participate in a multicast group must join the group.  This is usually done by having the node that wishes to join send a multicast join request directly to the SA.  (Note that the join request can also create the group.)  There are 2 interfaces available to applications that result in sending join requests: umad and the rdma_cm.

Node X cannot join a multicast group and pass its multicast address to Node Y.  Each join request must be for a specific node.  I believe it's architecturally possible for node X to send the join request on behalf of node Y, but I don't know if anyone has ever tried that or if the existing implementations support it.

Also, be aware that the SA manages multicast joins per node, not per request.  I.e., if node X joins a group twice, followed by one leave request, the node will be removed from the group.  The SA does not perform reference counting.  (It cannot distinguish between 2 separate requests and a single request that may have been retried.)

> In the rdma and multicast examples I have seen, each node sets up an
> rdma cm event channel.  The node then polls for events.

The rdma_cm also supports synchronous operation.  If an rdma_cm_id is created without an event channel, all calls will block until they complete.  Any results (e.g. communication parameters) are returned in the rdma_cm_id.  See rdma_client and rdma_server for examples of synchronous operation.  (Those establish a connection, but the same principles apply.)
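Sketched under the same caveats as before (librdmacm assumed, error handling and QP setup omitted), the synchronous variant avoids the event loop entirely:

```c
/* Sketch only: synchronous rdma_cm usage.  A NULL event channel makes
 * each call block until its operation completes, so no event polling
 * is needed.  Error handling and QP setup omitted. */
#include <rdma/rdma_cma.h>

static int sync_join(struct sockaddr *mcast)
{
    struct rdma_cm_id *id;

    rdma_create_id(NULL, &id, NULL, RDMA_PS_UDP);  /* NULL => synchronous */

    rdma_resolve_addr(id, NULL, mcast, 2000);  /* blocks until resolved */

    /* ... create a UD QP on id (rdma_create_qp) ... */

    rdma_join_multicast(id, mcast, NULL);      /* blocks until joined */
    /* Communication parameters are then available via the rdma_cm_id. */
    return 0;
}
```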

> I had hoped to be able to avoid using the rdma_cm and avoid having to
> monitor an rdma_cm event channel. What I think I would like to do is
> have each node of my sim initialize its side of the communication,
> which I think should include
> 
> rdma_bind_addr
> rdma_resolve_addr
> ibv_create_ah
> rdma_join_multicast

You should be able to eliminate rdma_bind_addr and pass the source address into rdma_resolve_addr.  If your IP routing tables will resolve a multicast address to an IPoIB device, you can eliminate the source address completely.  (rdma_resolve_addr calls rdma_bind_addr internally.)

The ibv_create_ah call must come after rdma_join_multicast, once you have the join response.

> then ibv_post_send/ibv_post_recv as required.
> 
> However, the rdma_* calls require an rdma_cm_id which I won't have if
> I don't use the rdma cm.

Correct.
 
> Can I bypass using the rdma cm and the polling of the event channel?
> Or perhaps am I going to have to establish an event channel between my
> management server and each individual node?  On the other hand, if I
> can terminate the polling of the event channel once initialization is
> done, maybe I don't mind the rdma cm....

See above to use synchronous operation and avoid polling the event channel.
 
> Can I bypass the polling of the completion queue?... which would imply
> I am simply trusting the data arrived at its destination?

You must poll the CQ to avoid overrunning the send queue.  Multicast is unreliable, so a successful completion simply means that the data was transmitted without error.  It does not guarantee that the receiver has it.  Using QP based communication isn't trivial...
 
> Sorry to ask so many questions.  Are there any good books on
> programming InfiniBand?

I'm not aware of any books, let alone good ones...
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 15+ messages in thread

* RE: Independent server management of Multicasting connections
       [not found]             ` <1828884A29C6694DAF28B7E6B8A8237316E9BDB4-P5GAC/sN6hmkrb+BlOpmy7fspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2011-11-22 20:05               ` dev-x2spCj9RiN0z5UmgcLIfJQ
  0 siblings, 0 replies; 15+ messages in thread
From: dev-x2spCj9RiN0z5UmgcLIfJQ @ 2011-11-22 20:05 UTC (permalink / raw)
  To: Hefty, Sean; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Thanks.
CD

Quoting "Hefty, Sean" <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>:

>> The original post was to determine if it were possible to have a server
>> app that managed the data required to establish multicast IB
>> communications between 2 or more nodes.  Each node would initialize
>> itself as needed wrt IB and each node would request from the server,
>> as I now understand, the qpn, qkey and address handle for the
>> multicast group it desired to communicate with.  The server, having
>> created dynamically through the SA, the multicast group, would return
>> said data and then the node would be able to begin posting multicast
>> sends, or receives.  Alternatively, if I understand correctly, I can
>> create the multicast group on start up of the opensmd.
>
> Every node that wants to participate in a multicast group must join  
> the group.  This is usually done by having the node that wishes to  
> join send a multicast join request directly to the SA.  (Note that  
> the join request can also create the group.)  There are 2 interfaces  
> available to applications that result in sending join requests: umad  
> and the rdma_cm.
>
> Node X cannot join a multicast group and pass its multicast address  
> to Node Y.  Each join request must be for a specific node.  I  
> believe it's architecturally possible for node X to send the join  
> request on behalf of node Y, I don't know if anyone has ever tried  
> that or if the existing implementations support that.
>
> Also, be aware that the SA manages multicast joins per node and not  
> per request.  I.e. If node X joins a group twice, followed by 1  
> leave request, the node will be removed from the group.  The SA does  
> not perform reference counting.  (It cannot distinguish between 2  
> separate requests, versus a single request that may have been  
> retried.)
>
>> In the rdma and multicast examples I have seen, each node sets up an
>> rdma cm event channel.  The node then polls for events.
>
> The rdma_cm also supports synchronous operation.  If an rdma_cm_id  
> is created without an event channel, all calls will block until they  
> complete.  Any results (e.g. communication parameters) are returned  
> in the rdma_cm_id.  See rdma_client and rdma_server for examples of  
> synchronous operation.  (Those establish a connection, but the same  
> principles apply.)
>
>> I had hoped to be able to avoid using the rdma_cm and avoid having to
>> monitor an rdma_cm event channel. What I think I would like to do is
>> have each node of my sim initialize its side of the communication,
>> which I think should include
>>
>> rdma_bind_addr
>> rdma_resolve_addr
>> ibv_create_ah
>> rdma_join_multicast
>
> You should be able to eliminate rdma_bind_addr and pass in the  
> source address into rdma_resolve_addr.  If your IP routing tables  
> will resolve a multicast address to an IPoIB device, you can  
> eliminate the source address completely.  (rdma_resolve_addr calls  
> rdma_bind_addr internally.)
>
> The ibv_create_ah must come after rdma_join_multicast, after you  
> have the join response.
>
>> then ibv_post_send/ibv_post_recv as required.
>>
>> However, the rdma_* calls require an rdma_cm_id which I won't have if
>> I don't use the rdma cm.
>
> correct
>
>> Can I bypass using the rdma cm and the polling of the event channel?
>> Or perhaps am I going to have to establish an event channel between my
>> management server and each individual node?  On the other hand, if I
>> can terminate the polling of the event channel once initialization is
>> done, maybe I don't mind the rdma cm....
>
> See above to use synchronous operation and avoid polling the event channel.
>
>> Can I bypass the polling of the completion queue?... which would imply
>> I am simply trusting the data arrived at its destination?
>
> You must poll the CQ to avoid overrunning the send queue.  Multicast  
> is unreliable, so a successful completion simply means that the data  
> was transmitted without error.  It does not guarantee that the  
> receiver has it.  Using QP based communication isn't trivial...
>
>> Sorry to ask so many questions.  Are there any good books on
>> programming InfiniBand?
>
> I'm not aware of any books, let alone good ones...




* Re: Independent server management of Multicasting connections
       [not found]             ` <20111122071700.10712l3tf30i64m4-6c4oY0qWgKW61VtY7fu8aA@public.gmane.org>
@ 2011-11-23 18:08               ` Ira Weiny
       [not found]                 ` <CAKzyTsyrQbczoQp8pQJt8prDfMDny6s6VxK_RvcOrgT_nXCUAg@mail.gmail.com>
  0 siblings, 1 reply; 15+ messages in thread
From: Ira Weiny @ 2011-11-23 18:08 UTC (permalink / raw)
  To: dev-x2spCj9RiN0z5UmgcLIfJQ; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Tue, 22 Nov 2011 06:17:00 -0800
"dev-x2spCj9RiN0z5UmgcLIfJQ@public.gmane.org" <dev-x2spCj9RiN0z5UmgcLIfJQ@public.gmane.org> wrote:

> Where can I find out more on how to do this, i.e. use opensm to create  
> mc groups at startup?

This is not in an official release yet.  But if you are willing, you can get the latest opensm code here: 
git://git.openfabrics.org/~alexnetes/opensm.git

Build it and check out the man page on how to configure your partition config file.

Let me know if the man page is insufficient documentation.

Ira

> 
> 
> Quoting Ira Weiny <weiny2-i2BcT+NCU+M@public.gmane.org>:
> 
> > On Mon, 21 Nov 2011 09:05:22 -0800
> > "Hefty, Sean" <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org> wrote:
> >
> >> >  From what I understand, and I may not understand correctly, to
> >> > perform IB Multicast, between two nodes, I need to swap addresses and
> >> > remote keys for the two nodes.  In the examples I have seen this has
> >>
> >> You only need to do this for unicast UD.  Multicast doesn't require  
> >> exchanging addresses and qkeys, but see below.
> >>
> >> > been done via RDMA CM directly between the two nodes.  Does it matter
> >> > how this information is exchanged?  In my application I will have
> >>
> >> For multicast, you need to 1. have the SA create the multicast  
> >> group and 2. join the group.  To create the group, you need to  
> >> either have the SA automatically create the group (if this is  
> >> possible) or create it dynamically.  To create the group  
> >> dynamically from user space, you should use the rdma_cm or ib_umad  
> >> interfaces.  The rdma_cm is easier.
> >>
> >
> > FWIW, OpenSM (current master branch) now has the ability to create  
> > multicast groups at startup, so 1 is possible.
> >
> > See: http://www.spinics.net/lists/linux-rdma/msg10021.html
> >
> > Ira
> >
> >> To join the group, you need to let the SA know that the node should  
> >> receive multicast traffic, so that it can program the switches.   
> >> This is done through the rdma_cm (easy way) or using ib_umad (hard  
> >> way that gets harder if you want to support multiple applications  
> >> joining the same group from the same system).
> >>
> >> > several nodes multicasting to several other nodes and I want to manage
> >> > the connections from a independent application.  What I would like to
> >> > do is write this application, server, etc. so that each node would
> >> > request connection with another node and then be provided with the
> >> > information it needs to multicast.  For example... Node 1 would
> >> > request a connection (from the server app) Node 2 and Node 2 would
> >> > request a connection (again, from the server app) to Node 1.  The
> >> > Server app would provide Node 2's "credentials" to Node 1 and likewise
> >> > to Node 2.  Is this even possible?
> >>
> >> The SA basically does the work that you're describing for your  
> >> server app.  Node 1 can ask the SA to create a multicast group.   
> >> Node 2 can ask to join that group.  Somehow node 2 needs to know  
> >> what group node 1 created.
> >
> >
> > --
> > Ira Weiny
> > Member of Technical Staff
> > Lawrence Livermore National Lab
> > 925-423-8008
> > weiny2-i2BcT+NCU+M@public.gmane.org
> >
> 
> 
> 


-- 
Ira Weiny
Member of Technical Staff
Lawrence Livermore National Lab
925-423-8008
weiny2-i2BcT+NCU+M@public.gmane.org

* Re: Independent server management of Multicasting connections
       [not found]                   ` <CAKzyTsyrQbczoQp8pQJt8prDfMDny6s6VxK_RvcOrgT_nXCUAg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2011-11-23 18:27                     ` Ira Weiny
  0 siblings, 0 replies; 15+ messages in thread
From: Ira Weiny @ 2011-11-23 18:27 UTC (permalink / raw)
  To: Hal Rosenstock
  Cc: dev-x2spCj9RiN0z5UmgcLIfJQ, linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Wed, 23 Nov 2011 10:12:37 -0800
Hal Rosenstock <hal.rosenstock-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:

> 
> 
> On Wed, Nov 23, 2011 at 1:08 PM, Ira Weiny <weiny2-i2BcT+NCU+M@public.gmane.org> wrote:
> On Tue, 22 Nov 2011 06:17:00 -0800
> "dev-x2spCj9RiN0z5UmgcLIfJQ@public.gmane.org" <dev-x2spCj9RiN0z5UmgcLIfJQ@public.gmane.org> wrote:
> 
> > Where can I find out more on how to do this, i.e. use opensm to create
> > mc groups at startup?
> 
> This is not in an official release yet.  But if you are willing, you can get the latest opensm code here:
> git://git.openfabrics.org/~alexnetes/opensm.git
> 
> Build it and check out the man page on how to configure your partition config file.
> 
> Let me know if the man page is insufficient documentation.
> 
> FWIW - Yes, but this only solves #1 and not #2, so is this really useful here?  The problem doesn't seem to be creation of the MC group, but rather using the RDMA CM APIs for this.
>

You're right, this does not solve his problem directly.  But I figured I would chime in, since someone is likely to run across this thread and wonder the same thing should they want OpenSM to create MC groups.  :-)

Sorry if I have derailed the conversation too far.

Ira
 
> -- Hal
> 
> 
> Ira
> 
> >
> >
> > Quoting Ira Weiny <weiny2-i2BcT+NCU+M@public.gmane.org>:
> >
> > > On Mon, 21 Nov 2011 09:05:22 -0800
> > > "Hefty, Sean" <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org> wrote:
> > >
> > >> >  From what I understand, and I may not understand correctly, to
> > >> > perform IB Multicast, between two nodes, I need to swap addresses and
> > >> > remote keys for the two nodes.  In the examples I have seen this has
> > >>
> > >> You only need to do this for unicast UD.  Multicast doesn't require
> > >> exchanging addresses and qkeys, but see below.
> > >>
> > >> > been done via RDMA CM directly between the two nodes.  Does it matter
> > >> > how this information is exchanged?  In my application I will have
> > >>
> > >> For multicast, you need to 1. have the SA create the multicast
> > >> group and 2. join the group.  To create the group, you need to
> > >> either have the SA automatically create the group (if this is
> > >> possible) or create it dynamically.  To create the group
> > >> dynamically from user space, you should use the rdma_cm or ib_umad
> > >> interfaces.  The rdma_cm is easier.
> > >>
> > >
> > > FWIW, OpenSM (current master branch) now has the ability to create
> > > multicast groups at startup, so 1 is possible.
> > >
> > > See: http://www.spinics.net/lists/linux-rdma/msg10021.html
> > >
> > > Ira
> > >
> > >> To join the group, you need to let the SA know that the node should
> > >> receive multicast traffic, so that it can program the switches.
> > >> This is done through the rdma_cm (easy way) or using ib_umad (hard
> > >> way that gets harder if you want to support multiple applications
> > >> joining the same group from the same system).
> > >>
> > >> > several nodes multicasting to several other nodes and I want to manage
> > >> > the connections from a independent application.  What I would like to
> > >> > do is write this application, server, etc. so that each node would
> > >> > request connection with another node and then be provided with the
> > >> > information it needs to multicast.  For example... Node 1 would
> > >> > request a connection (from the server app) Node 2 and Node 2 would
> > >> > request a connection (again, from the server app) to Node 1.  The
> > >> > Server app would provide Node 2's "credentials" to Node 1 and likewise
> > >> > to Node 2.  Is this even possible?
> > >>
> > >> The SA basically does the work that you're describing for your
> > >> server app.  Node 1 can ask the SA to create a multicast group.
> > >> Node 2 can ask to join that group.  Somehow node 2 needs to know
> > >> what group node 1 created.
> > >
> > >
> > > --
> > > Ira Weiny
> > > Member of Technical Staff
> > > Lawrence Livermore National Lab
> > > 925-423-8008
> > > weiny2-i2BcT+NCU+M@public.gmane.org
> > >
> >
> >
> >
> 
> 
> --
> Ira Weiny
> Member of Technical Staff
> Lawrence Livermore National Lab
> 925-423-8008
> weiny2-i2BcT+NCU+M@public.gmane.org
> 


-- 
Ira Weiny
Math Programmer/Computer Scientist
Lawrence Livermore National Lab
925-423-8008
weiny2-i2BcT+NCU+M@public.gmane.org

end of thread, other threads:[~2011-11-23 18:27 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-11-21 14:15 Independent server management of Multicasting connections dev-x2spCj9RiN0z5UmgcLIfJQ
     [not found] ` <20111121071551.14306gf6ogr9z8yf-x2spCj9RiN0z5UmgcLIfJQ@public.gmane.org>
2011-11-21 17:05   ` Hefty, Sean
     [not found]     ` <1828884A29C6694DAF28B7E6B8A8237316E9B9E9-P5GAC/sN6hmkrb+BlOpmy7fspsVTdybXVpNB7YpNyf8@public.gmane.org>
2011-11-21 17:16       ` dev-x2spCj9RiN0z5UmgcLIfJQ
     [not found]         ` <20111121101650.162928ijxrgu49ky-x2spCj9RiN0z5UmgcLIfJQ@public.gmane.org>
2011-11-21 17:53           ` Hefty, Sean
2011-11-21 19:05       ` Ira Weiny
     [not found]         ` <20111121110529.5090464a.weiny2-i2BcT+NCU+M@public.gmane.org>
2011-11-22 14:17           ` dev-x2spCj9RiN0z5UmgcLIfJQ
     [not found]             ` <20111122071700.10712l3tf30i64m4-6c4oY0qWgKW61VtY7fu8aA@public.gmane.org>
2011-11-23 18:08               ` Ira Weiny
     [not found]                 ` <CAKzyTsyrQbczoQp8pQJt8prDfMDny6s6VxK_RvcOrgT_nXCUAg@mail.gmail.com>
     [not found]                   ` <CAKzyTsyrQbczoQp8pQJt8prDfMDny6s6VxK_RvcOrgT_nXCUAg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2011-11-23 18:27                     ` Ira Weiny
2011-11-22 18:00       ` dev-x2spCj9RiN0z5UmgcLIfJQ
     [not found]         ` <20111122110030.112519gpho0ikta6-x2spCj9RiN0z5UmgcLIfJQ@public.gmane.org>
2011-11-22 18:18           ` Hefty, Sean
     [not found]             ` <1828884A29C6694DAF28B7E6B8A8237316E9BD23-P5GAC/sN6hmkrb+BlOpmy7fspsVTdybXVpNB7YpNyf8@public.gmane.org>
2011-11-22 18:27               ` dev-x2spCj9RiN0z5UmgcLIfJQ
     [not found]                 ` <20111122112723.153527yw26bz6n4b-x2spCj9RiN0z5UmgcLIfJQ@public.gmane.org>
2011-11-22 18:47                   ` Hefty, Sean
2011-11-22 19:33       ` dev-x2spCj9RiN0z5UmgcLIfJQ
     [not found]         ` <20111122123351.3062401wh8eggf8f-x2spCj9RiN0z5UmgcLIfJQ@public.gmane.org>
2011-11-22 20:01           ` Hefty, Sean
     [not found]             ` <1828884A29C6694DAF28B7E6B8A8237316E9BDB4-P5GAC/sN6hmkrb+BlOpmy7fspsVTdybXVpNB7YpNyf8@public.gmane.org>
2011-11-22 20:05               ` dev-x2spCj9RiN0z5UmgcLIfJQ
