From: Paolo Abeni <pabeni@redhat.com>
To: Geliang Tang <geliangtang@gmail.com>, mptcp@lists.linux.dev
Subject: Re: [MPTCP][PATCH mptcp-next 0/9] fullmesh path manager support
Date: Fri, 16 Jul 2021 19:28:56 +0200 [thread overview]
Message-ID: <afe3a1366f84d584ce092381d02df6204b560d06.camel@redhat.com> (raw)
In-Reply-To: <cover.1625825505.git.geliangtang@gmail.com>
On Fri, 2021-07-09 at 19:04 +0800, Geliang Tang wrote:
> Implement the in-kernel fullmesh path manager like on the mptcp.org
> kernel.
>
> Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/193
Following up on yesterday's discussion in the public meeting, I skimmed
over this series.
I think this approach leads to quite a bit of duplicate code and
avoidable complexity.
I also think we could obtain a full-mesh topology with some not-too-
complex extensions to the current NL PM:
- add and manage a new per-endpoint flag, something like 'fullmesh'
- in mptcp_pm_create_subflow_or_signal_addr(), if such flag is set,
instead of:
	remote_address((struct sock_common *)sk, &remote);
fill a temporarily allocated array with all known remote addresses.
After releasing the PM lock, loop over such array and create a subflow
for each remote address from the given local.
Note that we could still use an array even for a non-'fullmesh'
endpoint: with a single entry corresponding to the primary MPC
subflow remote address.
- in mptcp_pm_nl_add_addr_received(), fill a temporarily allocated
array with all local addresses corresponding to fullmesh endpoints.
If such array is empty, keep the current behavior.
Otherwise, loop over such array and create a subflow for each local
address towards the given remote address.
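The array-filling step above could be sketched roughly as below. This is
a hypothetical userspace simulation of the proposed logic, not actual
kernel code; the struct pm_addr type and the fill_remote_addresses()
helper name are made up for illustration:

```c
#include <assert.h>

#define MAX_ADDRS 8

/* Hypothetical types/names, for illustration only. */
struct pm_addr {
	char ip[16];
	int fullmesh;	/* the proposed per-endpoint flag */
};

/* Build the temporary remote-address array:
 * - non-fullmesh endpoint: a single entry, the primary MPC subflow
 *   remote address (i.e. the current behavior)
 * - fullmesh endpoint: all known remote addresses
 * Returns the number of entries filled.
 */
static int fill_remote_addresses(const struct pm_addr *local,
				 const struct pm_addr *known, int n_known,
				 const struct pm_addr *mpc_remote,
				 struct pm_addr *out)
{
	int i, n = 0;

	if (!local->fullmesh) {
		out[n++] = *mpc_remote;
		return n;
	}

	for (i = 0; i < n_known && n < MAX_ADDRS; i++)
		out[n++] = known[i];

	return n;
}
```

The caller would then drop the PM lock and loop over the returned
array, creating one subflow per entry; the add_addr_received path
would do the symmetric thing with an array of local addresses.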
I hope that overall the above would require a limited amount of
changes. If so, I believe this way to be preferable:
- 1 in kernel path manager
- a simple one
- a single configuration interface
- should cover the full-mesh use case and possibly more
- the idea is - some far day - to use eBPF for more fancy stuff, if
needed.
WDYT?
Thanks!
Paolo
Thread overview: 12+ messages
2021-07-09 11:04 [MPTCP][PATCH mptcp-next 0/9] fullmesh path manager support Geliang Tang
2021-07-09 11:04 ` [MPTCP][PATCH mptcp-next 1/9] mptcp: add a new sysctl path_manager Geliang Tang
2021-07-09 11:04 ` [MPTCP][PATCH mptcp-next 2/9] mptcp: add fullmesh path manager Geliang Tang
2021-07-09 11:04 ` [MPTCP][PATCH mptcp-next 3/9] mptcp: add fullmesh worker Geliang Tang
2021-07-09 11:04 ` [MPTCP][PATCH mptcp-next 4/9] mptcp: register ipv4 addr notifier Geliang Tang
2021-07-09 11:04 ` [MPTCP][PATCH mptcp-next 5/9] mptcp: register ipv6 " Geliang Tang
2021-07-09 11:04 ` [MPTCP][PATCH mptcp-next 6/9] mptcp: add netdev up event handler Geliang Tang
2021-07-09 11:04 ` [MPTCP][PATCH mptcp-next 7/9] mptcp: add netdev down " Geliang Tang
2021-07-09 11:04 ` [MPTCP][PATCH mptcp-next 8/9] mptcp: add proc file mptcp_fullmesh Geliang Tang
2021-07-09 11:04 ` [MPTCP][PATCH mptcp-next 9/9] selftests: mptcp: add fullmesh testcases Geliang Tang
2021-07-13 0:04 ` [MPTCP][PATCH mptcp-next 0/9] fullmesh path manager support Mat Martineau
2021-07-16 17:28 ` Paolo Abeni [this message]