* Re: IRC snippet
       [not found] <d05f15b3381a4827e4c55826ca5597d9242d222f.camel@redhat.com>
@ 2021-07-23  4:01 ` Geliang Tang
  2021-07-23  7:19   ` Paolo Abeni
  2021-07-23  9:02   ` Paolo Abeni
  0 siblings, 2 replies; 4+ messages in thread
From: Geliang Tang @ 2021-07-23  4:01 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thanks for your information, it's very helpful. And I have two more
questions below.

On Thu, Jul 22, 2021 at 7:23 PM Paolo Abeni <pabeni@redhat.com> wrote:
>
> Hello,
>
> moving here from IRC, to avoid disconnection issues...
> < geliang >   Yes, Netd has the similar thing
>   < geliang >   It can get the IP up/down event too
>   < geliang >   https://android.googlesource.com/platform/system/netd/
>   < geliang >   So I need to add a new flag full-mesh to netlink
>   < geliang >   and drop the notifiers
>   < geliang >   and add the addresses array in
> mptcp_pm_create_subflow_or_signal_addr
> < geliang >   I'll do these things, thanks Paolo
>
> Note that the array size should be limited by the
> current add_addr_signal_max (and thus should be max 8).
>
> Each array entry should be mptcp_addr_info, that is 24 bytes.
>
> 24*8 = 192
>
> The above *could* be a little too much for a stack-based local
> variable; anyhow, the kernel should tell you: if so, you will get a
> warning at run time like 'x function is using y bytes of stack'.
>
> In case of such a warning you could:
> - dynamically allocate such an array (and fail if the allocation fails)
> [or]
> - add such storage inside mptcp_pm_data
> [or]
> - something in between, e.g. use a local variable for a very limited
> number of addresses and then dynamically allocate if more are needed.
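
As an illustration of the last option above, a rough sketch could look
like this (ADDRS_ON_STACK is a made-up threshold, and
mptcp_pm_get_add_addr_signal_max() is assumed to be how the limit is
read; this is not actual tree code):

#define ADDRS_ON_STACK 2       /* made-up threshold, illustration only */

        struct mptcp_addr_info stack_addrs[ADDRS_ON_STACK];
        struct mptcp_addr_info *addrs = stack_addrs;
        unsigned int max = mptcp_pm_get_add_addr_signal_max(msk);

        if (max > ADDRS_ON_STACK) {
                /* allocate before taking any lock; fail if it fails */
                addrs = kcalloc(max, sizeof(*addrs), GFP_KERNEL);
                if (!addrs)
                        return;
        }

        /* ... fill addrs[] with up to 'max' entries, then create the
         * subflows from it ...
         */

        if (addrs != stack_addrs)
                kfree(addrs);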

If the endpoint's flag is subflow,fullmesh, we need to get all the remote
addresses. So we need to record all the received ADD_ADDRs. Should these
two old patches work?

mptcp: rename mptcp_pm_add_entry to mptcp_pm_anno_entry
mptcp: add add_list in mptcp_pm_data
https://patchwork.kernel.org/project/mptcp/patch/ae0d32922128771f09b230e5718774472a088207.1621002341.git.geliangtang@gmail.com/
https://patchwork.kernel.org/project/mptcp/patch/850406b0f49fa0a76b4825b36c55426a0d033d52.1621002341.git.geliangtang@gmail.com/

If we use these two patches, then we can connect all the remote addresses
like this in mptcp_pm_create_subflow_or_signal_addr:

if (local->flags & MPTCP_PM_ADDR_FLAG_FULLMESH) {
        struct mptcp_pm_add_entry *entry;

        list_for_each_entry(entry, &msk->pm.add_list, list)
                __mptcp_subflow_connect(sk, &local->addr, &entry->addr,
                                        local->flags, local->ifindex);
}

What do you think about this approach?

>
> I *think* the latter could be an interesting option.
>
> Finally note that similar changes will be required
> in mptcp_pm_nl_add_addr_received(), with the main difference
> that mptcp_pm_nl_add_addr_received will fill the array with all the
> known local addresses, while mptcp_pm_create_subflow_or_signal_addr()
> will fill it with remote addresses.

If the endpoint's flag is signal,fullmesh, we announce this address,
sending it to the peer with ADD_ADDR. The peer receives the ADD_ADDR in
mptcp_pm_nl_add_addr_received. It's easy to get all the local addresses
there, since they are on the local_addr_list.

But how can we know whether the received address is a fullmesh one or not
in mptcp_pm_nl_add_addr_received? The endpoint's flag is set on the other
side. In mptcp_pm_nl_add_addr_received, we can't get this flag.

Do you have a good way to do this?

Thanks,
-Geliang

>
> Cheers,
>
> Paolo
>


* Re: IRC snippet
  2021-07-23  4:01 ` IRC snippet Geliang Tang
@ 2021-07-23  7:19   ` Paolo Abeni
  2021-07-23 11:18     ` Geliang Tang
  2021-07-23  9:02   ` Paolo Abeni
  1 sibling, 1 reply; 4+ messages in thread
From: Paolo Abeni @ 2021-07-23  7:19 UTC (permalink / raw)
  To: Geliang Tang; +Cc: mptcp

On Fri, 2021-07-23 at 12:01 +0800, Geliang Tang wrote:
> Hi Paolo,
> 
> Thanks for your information, it's very helpful. And I have two more
> questions below.
> 
> On Thu, Jul 22, 2021 at 7:23 PM Paolo Abeni <pabeni@redhat.com> wrote:
> > Hello,
> > 
> > moving here from IRC, to avoid disconnection issues...
> > < geliang >   Yes, Netd has the similar thing
> >   < geliang >   It can get the IP up/down event too
> >   < geliang >   https://android.googlesource.com/platform/system/netd/
> >   < geliang >   So I need to add a new flag full-mesh to netlink
> >   < geliang >   and drop the notifiers
> >   < geliang >   and add the addresses array in
> > mptcp_pm_create_subflow_or_signal_addr
> > < geliang >   I'll do these things, thanks Paolo
> > 
> > Note that the array size should be limited by the
> > current add_addr_signal_max (and thus should be max 8).
> > 
> > Each array entry should be mptcp_addr_info, that is 24 bytes.
> > 
> > 24*8 = 192
> > 
> > The above *could* be a little too much for a stack-based local
> > variable; anyhow, the kernel should tell you: if so, you will get a
> > warning at run time like 'x function is using y bytes of stack'.
> > 
> > In case of such a warning you could:
> > - dynamically allocate such an array (and fail if the allocation fails)
> > [or]
> > - add such storage inside mptcp_pm_data
> > [or]
> > - something in between, e.g. use a local variable for a very limited
> > number of addresses and then dynamically allocate if more are needed.
> 
> If the endpoint's flag is subflow,fullmesh, we need to get all the remote
> addresses. So we need to record all the received ADD_ADDRs. 

For simplicity's sake, I think we could start using the remote_addr of
the established subflows, plus the one in pm.remote, if avail.
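
Something along these lines, as a sketch (reusing the remote_address()
helper and the MPTCP_PM_ADDR_MAX define already in pm_netlink.c;
duplicate addresses are not filtered here):

        struct mptcp_addr_info addrs[MPTCP_PM_ADDR_MAX];
        struct mptcp_subflow_context *subflow;
        int nr = 0;

        /* remote addresses of the established subflows */
        mptcp_for_each_subflow(msk, subflow) {
                struct sock *ssk = mptcp_subflow_tcp_sock(subflow);

                if (nr == MPTCP_PM_ADDR_MAX)
                        break;
                remote_address((struct sock_common *)ssk, &addrs[nr++]);
        }

        /* plus the announced one, if available */
        if (msk->pm.remote.family && nr < MPTCP_PM_ADDR_MAX)
                addrs[nr++] = msk->pm.remote;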

> Should these
> two old patches work?
> 
> mptcp: rename mptcp_pm_add_entry to mptcp_pm_anno_entry
> mptcp: add add_list in mptcp_pm_data
> https://patchwork.kernel.org/project/mptcp/patch/ae0d32922128771f09b230e5718774472a088207.1621002341.git.geliangtang@gmail.com/
> https://patchwork.kernel.org/project/mptcp/patch/850406b0f49fa0a76b4825b36c55426a0d033d52.1621002341.git.geliangtang@gmail.com/
> 
> If we use these two patches, then we can connect all the remote addresses
> like this in mptcp_pm_create_subflow_or_signal_addr:
> 
> if (local->flags & MPTCP_PM_ADDR_FLAG_FULLMESH) {
>         struct mptcp_pm_add_entry *entry;
> 
>         list_for_each_entry(entry, &msk->pm.add_list, list)
>                 __mptcp_subflow_connect(sk, &local->addr, &entry->addr,
>                                         local->flags, local->ifindex);
> }
> 
> What do you think about this approach?

We still need to copy the addresses into a temporary array - or refactor
the patches - as that list is under the pm lock and we must walk the
remote addresses without holding that lock: we can't
call __mptcp_subflow_connect under it.
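
e.g. roughly (assuming entry->addr is the mptcp_addr_info, as in your
snippet, and keeping the same __mptcp_subflow_connect() arguments):

        struct mptcp_addr_info addrs[MPTCP_PM_ADDR_MAX];
        struct mptcp_pm_add_entry *entry;
        int i, nr = 0;

        /* snapshot the remote addresses while holding the pm lock */
        spin_lock_bh(&msk->pm.lock);
        list_for_each_entry(entry, &msk->pm.add_list, list) {
                if (nr == MPTCP_PM_ADDR_MAX)
                        break;
                addrs[nr++] = entry->addr;
        }
        spin_unlock_bh(&msk->pm.lock);

        /* connect outside the lock */
        for (i = 0; i < nr; i++)
                __mptcp_subflow_connect(sk, &local->addr, &addrs[i],
                                        local->flags, local->ifindex);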

I think we should consider that approach only if the simpler one shows
unsolvable [including: too much new code needed] problems.

> > I *think* the latter could be an interesting option.
> > 
> > Finally note that similar changes will be required
> > in mptcp_pm_nl_add_addr_received(), with the main difference
> > that mptcp_pm_nl_add_addr_received will fill the array with all the
> > known local addresses, while mptcp_pm_create_subflow_or_signal_addr()
> > will fill it with remote addresses.
> 
> If the endpoint's flag is signal,fullmesh, we announce this address,
> sending it to the peer with ADD_ADDR. The peer receives the ADD_ADDR in
> mptcp_pm_nl_add_addr_received. It's easy to get all the local addresses
> there, since they are on the local_addr_list.

Uhmm... I think there is some misunderstanding here. A full-mesh topology
should _not_ rely on any additional actions being performed on the
peer side (e.g. taking action for the ADD_ADDR sent by the local side).
AFAICS the scheme I proposed doesn't require that.

The point is, when the client using 'signal,fullmesh' endpoints to
implement a full-mesh topology _receives_ an ADD_ADDR from the peer, it
should create a subflow for each 'signal,fullmesh' endpoint, as opposed
to the current behaviour where a single subflow, using the local
address selected by the routing, is created.

To do the above, the client, in mptcp_pm_nl_add_addr_received(), should
again copy the known local list into a temporary array,
because local_addr_list is RCU protected and we can't
call __mptcp_subflow_connect() in an RCU critical section.
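
Roughly (pm_nl_pernet, local_addr_list and MPTCP_PM_ADDR_MAX are the
existing pm_netlink.c names; the fullmesh flag check is the new part):

        struct pm_nl_pernet *pernet = net_generic(sock_net(sk), pm_nl_pernet_id);
        struct mptcp_addr_info locals[MPTCP_PM_ADDR_MAX];
        struct mptcp_pm_addr_entry *entry;
        int nr = 0;

        rcu_read_lock();
        list_for_each_entry_rcu(entry, &pernet->local_addr_list, list) {
                if (!(entry->flags & MPTCP_PM_ADDR_FLAG_FULLMESH))
                        continue;
                if (nr == MPTCP_PM_ADDR_MAX)
                        break;
                locals[nr++] = entry->addr;
        }
        rcu_read_unlock();

        /* then, outside the RCU section (and dropping the pm lock around
         * each call, as the existing code already does), call
         * __mptcp_subflow_connect() once per collected local address,
         * towards the just-received remote address.
         */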

> But how can we know whether the received address is a fullmesh one or not
> in mptcp_pm_nl_add_addr_received? The endpoint's flag is set on the other
> side. In mptcp_pm_nl_add_addr_received, we can't get this flag.

Hopefully the above clarifies/responds to your question. The main
point is that the full-mesh topology configuration is all on a single
side (e.g. the client one). The other side (the server) simply
announces additional addresses (if any). It's the client that takes any
required actions as needed (and as specified by the 'subflow,fullmesh'
endpoints).

Please let me know if the above is clear!

Thanks,

Paolo




* Re: IRC snippet
  2021-07-23  4:01 ` IRC snippet Geliang Tang
  2021-07-23  7:19   ` Paolo Abeni
@ 2021-07-23  9:02   ` Paolo Abeni
  1 sibling, 0 replies; 4+ messages in thread
From: Paolo Abeni @ 2021-07-23  9:02 UTC (permalink / raw)
  To: Geliang Tang; +Cc: mptcp

On Fri, 2021-07-23 at 12:01 +0800, Geliang Tang wrote:
> Hi Paolo,
> 
> Thanks for your information, it's very helpful. And I have two more
> questions below.
> 
> On Thu, Jul 22, 2021 at 7:23 PM Paolo Abeni <pabeni@redhat.com> wrote:
> > Hello,
> > 
> > moving here from IRC, to avoid disconnection issues...
> > < geliang >   Yes, Netd has the similar thing
> >   < geliang >   It can get the IP up/down event too
> >   < geliang >   https://android.googlesource.com/platform/system/netd/
> >   < geliang >   So I need to add a new flag full-mesh to netlink
> >   < geliang >   and drop the notifiers
> >   < geliang >   and add the addresses array in
> > mptcp_pm_create_subflow_or_signal_addr
> > < geliang >   I'll do these things, thanks Paolo
> > 
> > Note that the array size should be limited by the
> > current add_addr_signal_max (and thus should be max 8).
> > 
> > Each array entry should be mptcp_addr_info, that is 24 bytes.
> > 
> > 24*8 = 192
> > 
> > The above *could* be a little too much for a stack-based local
> > variable; anyhow, the kernel should tell you: if so, you will get a
> > warning at run time like 'x function is using y bytes of stack'.

Well, tcp_info is allocated on the stack, and tcp_info is 232 bytes
wide, so I think there is no problem keeping such an array on the
stack.
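
i.e. simply keeping something like this as a local variable, bounded by
the signaling limit discussed above (MPTCP_PM_ADDR_MAX is the existing
8-entry define in pm_netlink.c):

        /* at most 8 entries of 24 bytes each: 192 bytes, still smaller
         * than the 232-byte tcp_info mentioned above
         */
        struct mptcp_addr_info addrs[MPTCP_PM_ADDR_MAX];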

/P



* Re: IRC snippet
  2021-07-23  7:19   ` Paolo Abeni
@ 2021-07-23 11:18     ` Geliang Tang
  0 siblings, 0 replies; 4+ messages in thread
From: Geliang Tang @ 2021-07-23 11:18 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

On Fri, Jul 23, 2021 at 3:19 PM Paolo Abeni <pabeni@redhat.com> wrote:
>
> On Fri, 2021-07-23 at 12:01 +0800, Geliang Tang wrote:
> > Hi Paolo,
> >
> > Thanks for your information, it's very helpful. And I have two more
> > questions below.
> >
> > On Thu, Jul 22, 2021 at 7:23 PM Paolo Abeni <pabeni@redhat.com> wrote:
> > > Hello,
> > >
> > > moving here from IRC, to avoid disconnection issues...
> > > < geliang >   Yes, Netd has the similar thing
> > >   < geliang >   It can get the IP up/down event too
> > >   < geliang >   https://android.googlesource.com/platform/system/netd/
> > >   < geliang >   So I need to add a new flag full-mesh to netlink
> > >   < geliang >   and drop the notifiers
> > >   < geliang >   and add the addresses array in
> > > mptcp_pm_create_subflow_or_signal_addr
> > > < geliang >   I'll do these things, thanks Paolo
> > >
> > > Note that the array size should be limited by the
> > > current add_addr_signal_max (and thus should be max 8).
> > >
> > > Each array entry should be mptcp_addr_info, that is 24 bytes.
> > >
> > > 24*8 = 192
> > >
> > > The above *could* be a little too much for a stack-based local
> > > variable; anyhow, the kernel should tell you: if so, you will get a
> > > warning at run time like 'x function is using y bytes of stack'.
> > >
> > > In case of such a warning you could:
> > > - dynamically allocate such an array (and fail if the allocation fails)
> > > [or]
> > > - add such storage inside mptcp_pm_data
> > > [or]
> > > - something in between, e.g. use a local variable for a very limited
> > > number of addresses and then dynamically allocate if more are needed.
> >
> > If the endpoint's flag is subflow,fullmesh, we need to get all the remote
> > addresses. So we need to record all the received ADD_ADDRs.
>
> For simplicity's sake, I think we could start using the remote_addr of
> the established subflows, plus the one in pm.remote, if avail.
>
> > Should these
> > two old patches work?
> >
> > mptcp: rename mptcp_pm_add_entry to mptcp_pm_anno_entry
> > mptcp: add add_list in mptcp_pm_data
> > https://patchwork.kernel.org/project/mptcp/patch/ae0d32922128771f09b230e5718774472a088207.1621002341.git.geliangtang@gmail.com/
> > https://patchwork.kernel.org/project/mptcp/patch/850406b0f49fa0a76b4825b36c55426a0d033d52.1621002341.git.geliangtang@gmail.com/
> >
> > If we use these two patches, then we can connect all the remote addresses
> > like this in mptcp_pm_create_subflow_or_signal_addr:
> >
> > if (local->flags & MPTCP_PM_ADDR_FLAG_FULLMESH) {
> >         struct mptcp_pm_add_entry *entry;
> > 
> >         list_for_each_entry(entry, &msk->pm.add_list, list)
> >                 __mptcp_subflow_connect(sk, &local->addr, &entry->addr,
> >                                         local->flags, local->ifindex);
> > }
> >
> > What do you think about this approach?
>
> We still need to copy the addresses into a temporary array - or refactor
> the patches - as that list is under the pm lock and we must walk the
> remote addresses without holding that lock: we can't
> call __mptcp_subflow_connect under it.
>
> I think we should consider that approach only if the simpler one shows
> unsolvable [including: too much new code needed] problems.
>
> > > I *think* the latter could be an interesting option.
> > >
> > > Finally note that similar changes will be required
> > > in mptcp_pm_nl_add_addr_received(), with the main difference
> > > that mptcp_pm_nl_add_addr_received will fill the array with all the
> > > known local addresses, while mptcp_pm_create_subflow_or_signal_addr()
> > > will fill it with remote addresses.
> >
> > If the endpoint's flag is signal,fullmesh, we announce this address,
> > sending it to the peer with ADD_ADDR. The peer receives the ADD_ADDR in
> > mptcp_pm_nl_add_addr_received. It's easy to get all the local addresses
> > there, since they are on the local_addr_list.
>
> Uhmm... I think there is some misunderstanding here. A full-mesh topology
> should _not_ rely on any additional actions being performed on the
> peer side (e.g. taking action for the ADD_ADDR sent by the local side).
> AFAICS the scheme I proposed doesn't require that.
>
> The point is, when the client using 'signal,fullmesh' endpoints to
> implement a full-mesh topology _receives_ an ADD_ADDR from the peer, it
> should create a subflow for each 'signal,fullmesh' endpoint, as opposed
> to the current behaviour where a single subflow, using the local
> address selected by the routing, is created.
>
> To do the above, the client, in mptcp_pm_nl_add_addr_received(), should
> again copy the known local list into a temporary array,
> because local_addr_list is RCU protected and we can't
> call __mptcp_subflow_connect() in an RCU critical section.
>
> > But how can we know whether the received address is a fullmesh one or not
> > in mptcp_pm_nl_add_addr_received? The endpoint's flag is set on the other
> > side. In mptcp_pm_nl_add_addr_received, we can't get this flag.
>
> Hopefully the above clarifies/responds to your question. The main
> point is that the full-mesh topology configuration is all on a single
> side (e.g. the client one). The other side (the server) simply
> announces additional addresses (if any). It's the client that takes any
> required actions as needed (and as specified by the 'subflow,fullmesh'
> endpoints).
>
> Please let me know if the above is clear!

Thanks Paolo, it's much clearer now. I'll implement it soon and send
the code to you for review.

-Geliang

>
> Thanks,
>
> Paolo
>
>

