* [PATCH mptcp-next v4 00/10] BPF packet scheduler
@ 2022-05-31 9:09 Geliang Tang
2022-05-31 9:09 ` [PATCH mptcp-next v4 01/10] Squash to "mptcp: add struct mptcp_sched_ops" Geliang Tang
` (9 more replies)
0 siblings, 10 replies; 22+ messages in thread
From: Geliang Tang @ 2022-05-31 9:09 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
v4:
- merge "mptcp: move is_scheduled into mptcp_subflow_context"
- rename bpf_backup to bpf_bkup
- full patches of this series: https://github.com/geliangtang/mptcp_net-next
v3:
- use new BPF scheduler API:
- add backup scheduler
- add round-robin scheduler
- check bytes_sent of 'ss' output.
v2:
- Use new BPF scheduler API:
unsigned long (*get_subflow)(const struct mptcp_sock *msk, bool reinject,
struct mptcp_sched_data *data);
Geliang Tang (10):
Squash to "mptcp: add struct mptcp_sched_ops"
Squash to "mptcp: add sched in mptcp_sock"
Squash to "mptcp: add get_subflow wrappers"
Squash to "mptcp: add bpf_mptcp_sched_ops"
Squash to "selftests/bpf: add bpf_first scheduler"
Squash to "selftests/bpf: add bpf_first test"
selftests/bpf: add bpf_bkup scheduler
selftests/bpf: add bpf_backup test
selftests/bpf: add bpf_rr scheduler
selftests/bpf: add bpf_rr test
include/net/mptcp.h | 7 +-
net/mptcp/bpf.c | 18 ++--
net/mptcp/protocol.h | 1 +
net/mptcp/sched.c | 54 ++++++++---
tools/testing/selftests/bpf/bpf_tcp_helpers.h | 16 +++-
.../testing/selftests/bpf/prog_tests/mptcp.c | 89 ++++++++++++++++++-
.../selftests/bpf/progs/mptcp_bpf_bkup.c | 43 +++++++++
.../selftests/bpf/progs/mptcp_bpf_first.c | 5 +-
.../selftests/bpf/progs/mptcp_bpf_rr.c | 46 ++++++++++
9 files changed, 247 insertions(+), 32 deletions(-)
create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c
create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
--
2.34.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH mptcp-next v4 01/10] Squash to "mptcp: add struct mptcp_sched_ops"
2022-05-31 9:09 [PATCH mptcp-next v4 00/10] BPF packet scheduler Geliang Tang
@ 2022-05-31 9:09 ` Geliang Tang
2022-06-01 0:38 ` Mat Martineau
2022-05-31 9:09 ` [PATCH mptcp-next v4 02/10] Squash to "mptcp: add sched in mptcp_sock" Geliang Tang
` (8 subsequent siblings)
9 siblings, 1 reply; 22+ messages in thread
From: Geliang Tang @ 2022-05-31 9:09 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
Use bitmap instead of sock in struct mptcp_sched_data.
Please update the commit log:
'''
This patch defines struct mptcp_sched_ops, which has three struct members,
name, owner and list, and three function pointers, init, release and
get_subflow.
Add the scheduler registering, unregistering and finding functions to add,
delete and find a packet scheduler on the global list mptcp_sched_list.
The BPF scheduler function get_subflow() has a struct mptcp_sched_data
parameter, which contains an mptcp_subflow_context array. Add a new member,
scheduled, to mptcp_subflow_context, which will be set in the MPTCP
scheduler context when the scheduler picks this subflow to send data.
'''
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
include/net/mptcp.h | 7 ++++---
net/mptcp/protocol.h | 1 +
tools/testing/selftests/bpf/bpf_tcp_helpers.h | 11 ++++++++---
3 files changed, 13 insertions(+), 6 deletions(-)
diff --git a/include/net/mptcp.h b/include/net/mptcp.h
index 6456ea26e4c7..7af7fd48acc7 100644
--- a/include/net/mptcp.h
+++ b/include/net/mptcp.h
@@ -97,14 +97,15 @@ struct mptcp_out_options {
};
#define MPTCP_SCHED_NAME_MAX 16
+#define MPTCP_SUBFLOWS_MAX 8
struct mptcp_sched_data {
- struct sock *sock;
- bool call_again;
+ bool reinject;
+ struct mptcp_subflow_context *contexts[MPTCP_SUBFLOWS_MAX];
};
struct mptcp_sched_ops {
- void (*get_subflow)(const struct mptcp_sock *msk, bool reinject,
+ void (*get_subflow)(const struct mptcp_sock *msk,
struct mptcp_sched_data *data);
char name[MPTCP_SCHED_NAME_MAX];
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 8739794166d8..48c5261b7b15 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -469,6 +469,7 @@ struct mptcp_subflow_context {
valid_csum_seen : 1; /* at least one csum validated */
enum mptcp_data_avail data_avail;
bool mp_fail_response_expect;
+ bool scheduled;
u32 remote_nonce;
u64 thmac;
u32 local_nonce;
diff --git a/tools/testing/selftests/bpf/bpf_tcp_helpers.h b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
index aca4e3c6ac48..a705054f38c5 100644
--- a/tools/testing/selftests/bpf/bpf_tcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
@@ -231,10 +231,15 @@ extern __u32 tcp_slow_start(struct tcp_sock *tp, __u32 acked) __ksym;
extern void tcp_cong_avoid_ai(struct tcp_sock *tp, __u32 w, __u32 acked) __ksym;
#define MPTCP_SCHED_NAME_MAX 16
+#define MPTCP_SUBFLOWS_MAX 8
+
+struct mptcp_subflow_context {
+ bool scheduled;
+} __attribute__((preserve_access_index));
struct mptcp_sched_data {
- struct sock *sock;
- bool call_again;
+ bool reinject;
+ struct mptcp_subflow_context *contexts[MPTCP_SUBFLOWS_MAX];
};
struct mptcp_sched_ops {
@@ -243,7 +248,7 @@ struct mptcp_sched_ops {
void (*init)(const struct mptcp_sock *msk);
void (*release)(const struct mptcp_sock *msk);
- void (*get_subflow)(const struct mptcp_sock *msk, bool reinject,
+ void (*get_subflow)(const struct mptcp_sock *msk,
struct mptcp_sched_data *data);
void *owner;
};
--
2.34.1
* Re: [PATCH mptcp-next v4 01/10] Squash to "mptcp: add struct mptcp_sched_ops"
2022-05-31 9:09 ` [PATCH mptcp-next v4 01/10] Squash to "mptcp: add struct mptcp_sched_ops" Geliang Tang
@ 2022-06-01 0:38 ` Mat Martineau
0 siblings, 0 replies; 22+ messages in thread
From: Mat Martineau @ 2022-06-01 0:38 UTC (permalink / raw)
To: Geliang Tang; +Cc: mptcp
On Tue, 31 May 2022, Geliang Tang wrote:
> Use bitmap instead of sock in struct mptcp_sched_data.
>
> Please update the commit log:
>
> '''
> This patch defines struct mptcp_sched_ops, which has three struct members,
> name, owner and list, and three function pointers, init, release and
> get_subflow.
>
> Add the scheduler registering, unregistering and finding functions to add,
> delete and find a packet scheduler on the global list mptcp_sched_list.
>
> The BPF scheduler function get_subflow() has a struct mptcp_sched_data
> parameter, which contains a mptcp_subflow_context array. Add a new member
> scheduled for mptcp_subflow_context, which will be set in the MPTCP
> scheduler context when the scheduler picks this subflow to send data.
> '''
>
> Signed-off-by: Geliang Tang <geliang.tang@suse.com>
> ---
> include/net/mptcp.h | 7 ++++---
> net/mptcp/protocol.h | 1 +
> tools/testing/selftests/bpf/bpf_tcp_helpers.h | 11 ++++++++---
> 3 files changed, 13 insertions(+), 6 deletions(-)
>
> diff --git a/include/net/mptcp.h b/include/net/mptcp.h
> index 6456ea26e4c7..7af7fd48acc7 100644
> --- a/include/net/mptcp.h
> +++ b/include/net/mptcp.h
> @@ -97,14 +97,15 @@ struct mptcp_out_options {
> };
>
> #define MPTCP_SCHED_NAME_MAX 16
> +#define MPTCP_SUBFLOWS_MAX 8
>
> struct mptcp_sched_data {
> - struct sock *sock;
> - bool call_again;
> + bool reinject;
> + struct mptcp_subflow_context *contexts[MPTCP_SUBFLOWS_MAX];
Ok - the BPF verifier is able to handle this array? That's good news.
> };
>
> struct mptcp_sched_ops {
> - void (*get_subflow)(const struct mptcp_sock *msk, bool reinject,
> + void (*get_subflow)(const struct mptcp_sock *msk,
> struct mptcp_sched_data *data);
>
> char name[MPTCP_SCHED_NAME_MAX];
> diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
> index 8739794166d8..48c5261b7b15 100644
> --- a/net/mptcp/protocol.h
> +++ b/net/mptcp/protocol.h
> @@ -469,6 +469,7 @@ struct mptcp_subflow_context {
> valid_csum_seen : 1; /* at least one csum validated */
> enum mptcp_data_avail data_avail;
> bool mp_fail_response_expect;
> + bool scheduled;
> u32 remote_nonce;
> u64 thmac;
> u32 local_nonce;
> diff --git a/tools/testing/selftests/bpf/bpf_tcp_helpers.h b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
> index aca4e3c6ac48..a705054f38c5 100644
> --- a/tools/testing/selftests/bpf/bpf_tcp_helpers.h
> +++ b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
> @@ -231,10 +231,15 @@ extern __u32 tcp_slow_start(struct tcp_sock *tp, __u32 acked) __ksym;
> extern void tcp_cong_avoid_ai(struct tcp_sock *tp, __u32 w, __u32 acked) __ksym;
>
> #define MPTCP_SCHED_NAME_MAX 16
> +#define MPTCP_SUBFLOWS_MAX 8
> +
> +struct mptcp_subflow_context {
> + bool scheduled;
> +} __attribute__((preserve_access_index));
Ah, is this the array "magic"?
>
> struct mptcp_sched_data {
> - struct sock *sock;
> - bool call_again;
> + bool reinject;
> + struct mptcp_subflow_context *contexts[MPTCP_SUBFLOWS_MAX];
> };
>
> struct mptcp_sched_ops {
> @@ -243,7 +248,7 @@ struct mptcp_sched_ops {
> void (*init)(const struct mptcp_sock *msk);
> void (*release)(const struct mptcp_sock *msk);
>
> - void (*get_subflow)(const struct mptcp_sock *msk, bool reinject,
> + void (*get_subflow)(const struct mptcp_sock *msk,
> struct mptcp_sched_data *data);
> void *owner;
> };
> --
> 2.34.1
>
>
>
--
Mat Martineau
Intel
* [PATCH mptcp-next v4 02/10] Squash to "mptcp: add sched in mptcp_sock"
2022-05-31 9:09 [PATCH mptcp-next v4 00/10] BPF packet scheduler Geliang Tang
2022-05-31 9:09 ` [PATCH mptcp-next v4 01/10] Squash to "mptcp: add struct mptcp_sched_ops" Geliang Tang
@ 2022-05-31 9:09 ` Geliang Tang
2022-05-31 9:09 ` [PATCH mptcp-next v4 03/10] Squash to "mptcp: add get_subflow wrappers" Geliang Tang
` (7 subsequent siblings)
9 siblings, 0 replies; 22+ messages in thread
From: Geliang Tang @ 2022-05-31 9:09 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
No need to export sched in bpf_tcp_helpers.h, drop it.
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
tools/testing/selftests/bpf/bpf_tcp_helpers.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/bpf_tcp_helpers.h b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
index a705054f38c5..870deb5cf5ed 100644
--- a/tools/testing/selftests/bpf/bpf_tcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
@@ -258,7 +258,6 @@ struct mptcp_sock {
__u32 token;
struct sock *first;
- struct mptcp_sched_ops *sched;
char ca_name[TCP_CA_NAME_MAX];
} __attribute__((preserve_access_index));
--
2.34.1
* [PATCH mptcp-next v4 03/10] Squash to "mptcp: add get_subflow wrappers"
2022-05-31 9:09 [PATCH mptcp-next v4 00/10] BPF packet scheduler Geliang Tang
2022-05-31 9:09 ` [PATCH mptcp-next v4 01/10] Squash to "mptcp: add struct mptcp_sched_ops" Geliang Tang
2022-05-31 9:09 ` [PATCH mptcp-next v4 02/10] Squash to "mptcp: add sched in mptcp_sock" Geliang Tang
@ 2022-05-31 9:09 ` Geliang Tang
2022-06-01 0:59 ` Mat Martineau
2022-05-31 9:09 ` [PATCH mptcp-next v4 04/10] Squash to "mptcp: add bpf_mptcp_sched_ops" Geliang Tang
` (6 subsequent siblings)
9 siblings, 1 reply; 22+ messages in thread
From: Geliang Tang @ 2022-05-31 9:09 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
Please update the commit log:
'''
This patch defines two new wrappers, mptcp_sched_get_send() and
mptcp_sched_get_retrans(), which invoke get_subflow() of msk->sched.
Use them instead of calling mptcp_subflow_get_send() or
mptcp_subflow_get_retrans() directly.
Set the subflow pointers array in struct mptcp_sched_data before invoking
get_subflow(), so that it can be used in get_subflow() in the BPF context.
Check the subflow scheduled flags to test which subflow or subflows are
picked by the scheduler.
'''
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
net/mptcp/sched.c | 54 +++++++++++++++++++++++++++++++++++++----------
1 file changed, 43 insertions(+), 11 deletions(-)
diff --git a/net/mptcp/sched.c b/net/mptcp/sched.c
index 3ceb721e6489..613b7005938c 100644
--- a/net/mptcp/sched.c
+++ b/net/mptcp/sched.c
@@ -88,11 +88,25 @@ void mptcp_release_sched(struct mptcp_sock *msk)
bpf_module_put(sched, sched->owner);
}
-static int mptcp_sched_data_init(struct mptcp_sock *msk,
+static int mptcp_sched_data_init(struct mptcp_sock *msk, bool reinject,
struct mptcp_sched_data *data)
{
- data->sock = NULL;
- data->call_again = 0;
+ struct mptcp_subflow_context *subflow;
+ int i = 0;
+
+ data->reinject = reinject;
+
+ mptcp_for_each_subflow(msk, subflow) {
+ if (i == MPTCP_SUBFLOWS_MAX) {
+ pr_warn_once("too many subflows");
+ break;
+ }
+ WRITE_ONCE(subflow->scheduled, false);
+ data->contexts[i++] = subflow;
+ }
+
+ for (; i < MPTCP_SUBFLOWS_MAX; i++)
+ data->contexts[i] = NULL;
return 0;
}
@@ -100,6 +114,8 @@ static int mptcp_sched_data_init(struct mptcp_sock *msk,
struct sock *mptcp_sched_get_send(struct mptcp_sock *msk)
{
struct mptcp_sched_data data;
+ struct sock *ssk = NULL;
+ int i;
sock_owned_by_me((struct sock *)msk);
@@ -113,16 +129,25 @@ struct sock *mptcp_sched_get_send(struct mptcp_sock *msk)
if (!msk->sched)
return mptcp_subflow_get_send(msk);
- mptcp_sched_data_init(msk, &data);
- msk->sched->get_subflow(msk, false, &data);
+ mptcp_sched_data_init(msk, false, &data);
+ msk->sched->get_subflow(msk, &data);
+
+ for (i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
+ if (data.contexts[i] && READ_ONCE(data.contexts[i]->scheduled)) {
+ ssk = data.contexts[i]->tcp_sock;
+ msk->last_snd = ssk;
+ break;
+ }
+ }
- msk->last_snd = data.sock;
- return data.sock;
+ return ssk;
}
struct sock *mptcp_sched_get_retrans(struct mptcp_sock *msk)
{
struct mptcp_sched_data data;
+ struct sock *ssk = NULL;
+ int i;
sock_owned_by_me((const struct sock *)msk);
@@ -133,9 +158,16 @@ struct sock *mptcp_sched_get_retrans(struct mptcp_sock *msk)
if (!msk->sched)
return mptcp_subflow_get_retrans(msk);
- mptcp_sched_data_init(msk, &data);
- msk->sched->get_subflow(msk, true, &data);
+ mptcp_sched_data_init(msk, true, &data);
+ msk->sched->get_subflow(msk, &data);
+
+ for (i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
+ if (data.contexts[i] && READ_ONCE(data.contexts[i]->scheduled)) {
+ ssk = data.contexts[i]->tcp_sock;
+ msk->last_snd = ssk;
+ break;
+ }
+ }
- msk->last_snd = data.sock;
- return data.sock;
+ return ssk;
}
--
2.34.1
* Re: [PATCH mptcp-next v4 03/10] Squash to "mptcp: add get_subflow wrappers"
2022-05-31 9:09 ` [PATCH mptcp-next v4 03/10] Squash to "mptcp: add get_subflow wrappers" Geliang Tang
@ 2022-06-01 0:59 ` Mat Martineau
0 siblings, 0 replies; 22+ messages in thread
From: Mat Martineau @ 2022-06-01 0:59 UTC (permalink / raw)
To: Geliang Tang; +Cc: mptcp
On Tue, 31 May 2022, Geliang Tang wrote:
> Please update the commit log:
>
> '''
> This patch defines two new wrappers mptcp_sched_get_send() and
> mptcp_sched_get_retrans(), invoke get_subflow() of msk->sched in them.
> Use them instead of using mptcp_subflow_get_send() or
> mptcp_subflow_get_retrans() directly.
>
> Set the subflow pointers array in struct mptcp_sched_data before invoking
> get_subflow(), then it can be used in get_subflow() in the BPF contexts.
>
> Check the subflow scheduled flags to test which subflow or subflows are
> picked by the scheduler.
> '''
>
> Signed-off-by: Geliang Tang <geliang.tang@suse.com>
> ---
> net/mptcp/sched.c | 54 +++++++++++++++++++++++++++++++++++++----------
> 1 file changed, 43 insertions(+), 11 deletions(-)
>
> diff --git a/net/mptcp/sched.c b/net/mptcp/sched.c
> index 3ceb721e6489..613b7005938c 100644
> --- a/net/mptcp/sched.c
> +++ b/net/mptcp/sched.c
> @@ -88,11 +88,25 @@ void mptcp_release_sched(struct mptcp_sock *msk)
> bpf_module_put(sched, sched->owner);
> }
>
> -static int mptcp_sched_data_init(struct mptcp_sock *msk,
> +static int mptcp_sched_data_init(struct mptcp_sock *msk, bool reinject,
> struct mptcp_sched_data *data)
> {
> - data->sock = NULL;
> - data->call_again = 0;
> + struct mptcp_subflow_context *subflow;
> + int i = 0;
> +
> + data->reinject = reinject;
> +
> + mptcp_for_each_subflow(msk, subflow) {
> + if (i == MPTCP_SUBFLOWS_MAX) {
> + pr_warn_once("too many subflows");
> + break;
> + }
> + WRITE_ONCE(subflow->scheduled, false);
If subflow->scheduled is using READ_ONCE/WRITE_ONCE semantics, then
writing it directly from BPF code is going to be a problem. The code in
this patch set would work ok since all read and write access is under the
msk lock, but I think integrating with all of the transmit and retransmit
code (especially transmission in mptcp_subflow_process_delegated()) would
make it important to use WRITE_ONCE() to set subflow->scheduled.
I think that requires using a C helper function called from BPF to do
WRITE_ONCE(subflow->scheduled), or using a lock to order accesses. The
mptcp_data_lock is already used in mptcp_subflow_process_delegated() but
we probably don't want to add more locking to mptcp_sendmsg(). That makes
me think the helper function might be better - unless there's a generic
BPF technique for using WRITE_ONCE.
> + data->contexts[i++] = subflow;
> + }
> +
> + for (; i < MPTCP_SUBFLOWS_MAX; i++)
> + data->contexts[i] = NULL;
>
> return 0;
> }
> @@ -100,6 +114,8 @@ static int mptcp_sched_data_init(struct mptcp_sock *msk,
> struct sock *mptcp_sched_get_send(struct mptcp_sock *msk)
> {
> struct mptcp_sched_data data;
> + struct sock *ssk = NULL;
> + int i;
>
> sock_owned_by_me((struct sock *)msk);
>
> @@ -113,16 +129,25 @@ struct sock *mptcp_sched_get_send(struct mptcp_sock *msk)
> if (!msk->sched)
> return mptcp_subflow_get_send(msk);
>
> - mptcp_sched_data_init(msk, &data);
> - msk->sched->get_subflow(msk, false, &data);
> + mptcp_sched_data_init(msk, false, &data);
> + msk->sched->get_subflow(msk, &data);
> +
> + for (i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
> + if (data.contexts[i] && READ_ONCE(data.contexts[i]->scheduled)) {
> + ssk = data.contexts[i]->tcp_sock;
> + msk->last_snd = ssk;
> + break;
> + }
> + }
I think this is ok for a placeholder until more of the transmit
integration is done.
>
> - msk->last_snd = data.sock;
> - return data.sock;
> + return ssk;
> }
>
> struct sock *mptcp_sched_get_retrans(struct mptcp_sock *msk)
> {
> struct mptcp_sched_data data;
> + struct sock *ssk = NULL;
> + int i;
>
> sock_owned_by_me((const struct sock *)msk);
>
> @@ -133,9 +158,16 @@ struct sock *mptcp_sched_get_retrans(struct mptcp_sock *msk)
> if (!msk->sched)
> return mptcp_subflow_get_retrans(msk);
>
> - mptcp_sched_data_init(msk, &data);
> - msk->sched->get_subflow(msk, true, &data);
> + mptcp_sched_data_init(msk, true, &data);
> + msk->sched->get_subflow(msk, &data);
> +
> + for (i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
> + if (data.contexts[i] && READ_ONCE(data.contexts[i]->scheduled)) {
> + ssk = data.contexts[i]->tcp_sock;
> + msk->last_snd = ssk;
> + break;
> + }
> + }
>
> - msk->last_snd = data.sock;
> - return data.sock;
> + return ssk;
> }
> --
> 2.34.1
>
>
>
--
Mat Martineau
Intel
* [PATCH mptcp-next v4 04/10] Squash to "mptcp: add bpf_mptcp_sched_ops"
2022-05-31 9:09 [PATCH mptcp-next v4 00/10] BPF packet scheduler Geliang Tang
` (2 preceding siblings ...)
2022-05-31 9:09 ` [PATCH mptcp-next v4 03/10] Squash to "mptcp: add get_subflow wrappers" Geliang Tang
@ 2022-05-31 9:09 ` Geliang Tang
2022-05-31 9:09 ` [PATCH mptcp-next v4 05/10] Squash to "selftests/bpf: add bpf_first scheduler" Geliang Tang
` (5 subsequent siblings)
9 siblings, 0 replies; 22+ messages in thread
From: Geliang Tang @ 2022-05-31 9:09 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
Drop the access code for mptcp_sched_data.
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
net/mptcp/bpf.c | 18 ++++++++----------
1 file changed, 8 insertions(+), 10 deletions(-)
diff --git a/net/mptcp/bpf.c b/net/mptcp/bpf.c
index 338146d173f4..0529e70d53b1 100644
--- a/net/mptcp/bpf.c
+++ b/net/mptcp/bpf.c
@@ -42,29 +42,27 @@ static int bpf_mptcp_sched_btf_struct_access(struct bpf_verifier_log *log,
{
size_t end;
- if (atype == BPF_READ)
+ if (atype == BPF_READ) {
return btf_struct_access(log, btf, t, off, size, atype,
next_btf_id, flag);
+ }
if (t != mptcp_sched_type) {
- bpf_log(log, "only access to mptcp_sched_data is supported\n");
+ bpf_log(log, "only access to mptcp_subflow_context is supported\n");
return -EACCES;
}
switch (off) {
- case offsetof(struct mptcp_sched_data, sock):
- end = offsetofend(struct mptcp_sched_data, sock);
- break;
- case offsetof(struct mptcp_sched_data, call_again):
- end = offsetofend(struct mptcp_sched_data, call_again);
+ case offsetof(struct mptcp_subflow_context, scheduled):
+ end = offsetofend(struct mptcp_subflow_context, scheduled);
break;
default:
- bpf_log(log, "no write support to mptcp_sched_data at off %d\n", off);
+ bpf_log(log, "no write support to mptcp_subflow_context at off %d\n", off);
return -EACCES;
}
if (off + size > end) {
- bpf_log(log, "access beyond mptcp_sched_data at off %u size %u ended at %zu",
+ bpf_log(log, "access beyond mptcp_subflow_context at off %u size %u ended at %zu",
off, size, end);
return -EACCES;
}
@@ -144,7 +142,7 @@ static int bpf_mptcp_sched_init(struct btf *btf)
{
s32 type_id;
- type_id = btf_find_by_name_kind(btf, "mptcp_sched_data",
+ type_id = btf_find_by_name_kind(btf, "mptcp_subflow_context",
BTF_KIND_STRUCT);
if (type_id < 0)
return -EINVAL;
--
2.34.1
* [PATCH mptcp-next v4 05/10] Squash to "selftests/bpf: add bpf_first scheduler"
2022-05-31 9:09 [PATCH mptcp-next v4 00/10] BPF packet scheduler Geliang Tang
` (3 preceding siblings ...)
2022-05-31 9:09 ` [PATCH mptcp-next v4 04/10] Squash to "mptcp: add bpf_mptcp_sched_ops" Geliang Tang
@ 2022-05-31 9:09 ` Geliang Tang
2022-05-31 9:09 ` [PATCH mptcp-next v4 06/10] Squash to "selftests/bpf: add bpf_first test" Geliang Tang
` (4 subsequent siblings)
9 siblings, 0 replies; 22+ messages in thread
From: Geliang Tang @ 2022-05-31 9:09 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
Use the new get_subflow API.
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
tools/testing/selftests/bpf/progs/mptcp_bpf_first.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
index fd67b5f42964..5f866f51ac70 100644
--- a/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
@@ -17,10 +17,9 @@ void BPF_PROG(mptcp_sched_first_release, const struct mptcp_sock *msk)
}
void BPF_STRUCT_OPS(bpf_first_get_subflow, const struct mptcp_sock *msk,
- bool reinject, struct mptcp_sched_data *data)
+ struct mptcp_sched_data *data)
{
- data->sock = msk->first;
- data->call_again = 0;
+ data->contexts[0]->scheduled = 1;
}
SEC(".struct_ops")
--
2.34.1
* [PATCH mptcp-next v4 06/10] Squash to "selftests/bpf: add bpf_first test"
2022-05-31 9:09 [PATCH mptcp-next v4 00/10] BPF packet scheduler Geliang Tang
` (4 preceding siblings ...)
2022-05-31 9:09 ` [PATCH mptcp-next v4 05/10] Squash to "selftests/bpf: add bpf_first scheduler" Geliang Tang
@ 2022-05-31 9:09 ` Geliang Tang
2022-05-31 9:09 ` [PATCH mptcp-next v4 07/10] selftests/bpf: add bpf_bkup scheduler Geliang Tang
` (3 subsequent siblings)
9 siblings, 0 replies; 22+ messages in thread
From: Geliang Tang @ 2022-05-31 9:09 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
Add two helpers, add_veth() and cleanup().
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
.../testing/selftests/bpf/prog_tests/mptcp.c | 19 ++++++++++++++++++-
1 file changed, 18 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
index 8e9764275b07..eaea4105728d 100644
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
@@ -250,6 +250,20 @@ static void send_data(int lfd, int fd)
PTR_ERR(thread_ret));
}
+static void add_veth(void)
+{
+ system("ip link add veth1 type veth");
+ system("ip addr add 10.0.1.1/24 dev veth1");
+ system("ip link set veth1 up");
+}
+
+static void cleanup(void)
+{
+ system("sysctl -qw net.mptcp.scheduler=default");
+ system("ip mptcp endpoint flush");
+ system("ip link del veth1");
+}
+
static void test_first(void)
{
struct mptcp_bpf_first *first_skel;
@@ -266,15 +280,18 @@ static void test_first(void)
return;
}
+ add_veth();
+ system("ip mptcp endpoint add 10.0.1.1 subflow");
system("sysctl -qw net.mptcp.scheduler=bpf_first");
server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
client_fd = connect_to_fd(server_fd, 0);
send_data(server_fd, client_fd);
+ ASSERT_GT(system("ss -MOenita | grep '10.0.1.1' | grep 'bytes_sent:'"), 0, "ss");
close(client_fd);
close(server_fd);
- system("sysctl -qw net.mptcp.scheduler=default");
+ cleanup();
bpf_link__destroy(link);
mptcp_bpf_first__destroy(first_skel);
}
--
2.34.1
* [PATCH mptcp-next v4 07/10] selftests/bpf: add bpf_bkup scheduler
2022-05-31 9:09 [PATCH mptcp-next v4 00/10] BPF packet scheduler Geliang Tang
` (5 preceding siblings ...)
2022-05-31 9:09 ` [PATCH mptcp-next v4 06/10] Squash to "selftests/bpf: add bpf_first test" Geliang Tang
@ 2022-05-31 9:09 ` Geliang Tang
2022-06-01 1:11 ` Mat Martineau
2022-05-31 9:09 ` [PATCH mptcp-next v4 08/10] selftests/bpf: add bpf_backup test Geliang Tang
` (2 subsequent siblings)
9 siblings, 1 reply; 22+ messages in thread
From: Geliang Tang @ 2022-05-31 9:09 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
This patch implements the backup flag test scheduler, named bpf_bkup,
which picks the first non-backup subflow to send data.
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
tools/testing/selftests/bpf/bpf_tcp_helpers.h | 2 +
.../selftests/bpf/progs/mptcp_bpf_bkup.c | 43 +++++++++++++++++++
2 files changed, 45 insertions(+)
create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c
diff --git a/tools/testing/selftests/bpf/bpf_tcp_helpers.h b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
index 870deb5cf5ed..6fa496a65bef 100644
--- a/tools/testing/selftests/bpf/bpf_tcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
@@ -234,6 +234,8 @@ extern void tcp_cong_avoid_ai(struct tcp_sock *tp, __u32 w, __u32 acked) __ksym;
#define MPTCP_SUBFLOWS_MAX 8
struct mptcp_subflow_context {
+ __u32 padding : 12,
+ backup : 1;
bool scheduled;
} __attribute__((preserve_access_index));
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c
new file mode 100644
index 000000000000..ad2b2b4de8a5
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c
@@ -0,0 +1,43 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022, SUSE. */
+
+#include <linux/bpf.h>
+#include "bpf_tcp_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+SEC("struct_ops/mptcp_sched_bkup_init")
+void BPF_PROG(mptcp_sched_bkup_init, const struct mptcp_sock *msk)
+{
+}
+
+SEC("struct_ops/mptcp_sched_bkup_release")
+void BPF_PROG(mptcp_sched_bkup_release, const struct mptcp_sock *msk)
+{
+}
+
+void BPF_STRUCT_OPS(bpf_bkup_get_subflow, const struct mptcp_sock *msk,
+ struct mptcp_sched_data *data)
+{
+ int nr = 0;
+
+ for (int i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
+ if (!data->contexts[i])
+ break;
+
+ if (!data->contexts[i]->backup) {
+ nr = i;
+ break;
+ }
+ }
+
+ data->contexts[nr]->scheduled = 1;
+}
+
+SEC(".struct_ops")
+struct mptcp_sched_ops bkup = {
+ .init = (void *)mptcp_sched_bkup_init,
+ .release = (void *)mptcp_sched_bkup_release,
+ .get_subflow = (void *)bpf_bkup_get_subflow,
+ .name = "bpf_bkup",
+};
--
2.34.1
* Re: [PATCH mptcp-next v4 07/10] selftests/bpf: add bpf_bkup scheduler
2022-05-31 9:09 ` [PATCH mptcp-next v4 07/10] selftests/bpf: add bpf_bkup scheduler Geliang Tang
@ 2022-06-01 1:11 ` Mat Martineau
0 siblings, 0 replies; 22+ messages in thread
From: Mat Martineau @ 2022-06-01 1:11 UTC (permalink / raw)
To: Geliang Tang; +Cc: mptcp
On Tue, 31 May 2022, Geliang Tang wrote:
> This patch implements the backup flag test scheduler, named bpf_bkup,
> which picks the first non-backup subflow to send data.
>
> Signed-off-by: Geliang Tang <geliang.tang@suse.com>
> ---
> tools/testing/selftests/bpf/bpf_tcp_helpers.h | 2 +
> .../selftests/bpf/progs/mptcp_bpf_bkup.c | 43 +++++++++++++++++++
> 2 files changed, 45 insertions(+)
> create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c
>
> diff --git a/tools/testing/selftests/bpf/bpf_tcp_helpers.h b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
> index 870deb5cf5ed..6fa496a65bef 100644
> --- a/tools/testing/selftests/bpf/bpf_tcp_helpers.h
> +++ b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
> @@ -234,6 +234,8 @@ extern void tcp_cong_avoid_ai(struct tcp_sock *tp, __u32 w, __u32 acked) __ksym;
> #define MPTCP_SUBFLOWS_MAX 8
>
> struct mptcp_subflow_context {
> + __u32 padding : 12,
> + backup : 1;
Is the padding required? It looks like the BTF code might know how to
handle bitfield members.
- Mat
> bool scheduled;
> } __attribute__((preserve_access_index));
>
> diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c
> new file mode 100644
> index 000000000000..ad2b2b4de8a5
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c
> @@ -0,0 +1,43 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2022, SUSE. */
> +
> +#include <linux/bpf.h>
> +#include "bpf_tcp_helpers.h"
> +
> +char _license[] SEC("license") = "GPL";
> +
> +SEC("struct_ops/mptcp_sched_bkup_init")
> +void BPF_PROG(mptcp_sched_bkup_init, const struct mptcp_sock *msk)
> +{
> +}
> +
> +SEC("struct_ops/mptcp_sched_bkup_release")
> +void BPF_PROG(mptcp_sched_bkup_release, const struct mptcp_sock *msk)
> +{
> +}
> +
> +void BPF_STRUCT_OPS(bpf_bkup_get_subflow, const struct mptcp_sock *msk,
> + struct mptcp_sched_data *data)
> +{
> + int nr = 0;
> +
> + for (int i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
> + if (!data->contexts[i])
> + break;
> +
> + if (!data->contexts[i]->backup) {
> + nr = i;
> + break;
> + }
> + }
> +
> + data->contexts[nr]->scheduled = 1;
> +}
> +
> +SEC(".struct_ops")
> +struct mptcp_sched_ops bkup = {
> + .init = (void *)mptcp_sched_bkup_init,
> + .release = (void *)mptcp_sched_bkup_release,
> + .get_subflow = (void *)bpf_bkup_get_subflow,
> + .name = "bpf_bkup",
> +};
> --
> 2.34.1
>
>
>
--
Mat Martineau
Intel
* [PATCH mptcp-next v4 08/10] selftests/bpf: add bpf_backup test
2022-05-31 9:09 [PATCH mptcp-next v4 00/10] BPF packet scheduler Geliang Tang
` (6 preceding siblings ...)
2022-05-31 9:09 ` [PATCH mptcp-next v4 07/10] selftests/bpf: add bpf_bkup scheduler Geliang Tang
@ 2022-05-31 9:09 ` Geliang Tang
2022-05-31 9:24 ` Geliang Tang
2022-05-31 9:09 ` [PATCH mptcp-next v4 09/10] selftests/bpf: add bpf_rr scheduler Geliang Tang
2022-05-31 9:09 ` [PATCH mptcp-next v4 10/10] selftests/bpf: add bpf_rr test Geliang Tang
9 siblings, 1 reply; 22+ messages in thread
From: Geliang Tang @ 2022-05-31 9:09 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
This patch adds the backup BPF MPTCP scheduler test. Use sysctl to
set net.mptcp.scheduler to use this scheduler. Add a veth net device to
simulate the multiple-address case. Use the 'ip mptcp endpoint' command
to add this new endpoint to the PM netlink with the backup flag.
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
.../testing/selftests/bpf/prog_tests/mptcp.c | 35 +++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
index eaea4105728d..f3c73cd2c786 100644
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
@@ -7,6 +7,7 @@
#include "network_helpers.h"
#include "mptcp_sock.skel.h"
#include "mptcp_bpf_first.skel.h"
+#include "mptcp_bpf_bkup.skel.h"
#ifndef TCP_CA_NAME_MAX
#define TCP_CA_NAME_MAX 16
@@ -296,10 +297,44 @@ static void test_first(void)
mptcp_bpf_first__destroy(first_skel);
}
+static void test_bkup(void)
+{
+ struct mptcp_bpf_bkup *bkup_skel;
+ int server_fd, client_fd;
+ struct bpf_link *link;
+
+ bkup_skel = mptcp_bpf_bkup__open_and_load();
+ if (!ASSERT_OK_PTR(bkup_skel, "bpf_bkup__open_and_load"))
+ return;
+
+ link = bpf_map__attach_struct_ops(bkup_skel->maps.bkup);
+ if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops")) {
+ mptcp_bpf_bkup__destroy(bkup_skel);
+ return;
+ }
+
+ add_veth();
+ system("ip mptcp endpoint add 10.0.1.1 subflow backup");
+ system("sysctl -qw net.mptcp.scheduler=bpf_bkup");
+ server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
+ client_fd = connect_to_fd(server_fd, 0);
+
+ send_data(server_fd, client_fd);
+ ASSERT_GT(system("ss -MOenita | grep '10.0.1.1' | grep 'bytes_sent:'"), 0, "ss");
+
+ close(client_fd);
+ close(server_fd);
+ cleanup();
+ bpf_link__destroy(link);
+ mptcp_bpf_bkup__destroy(bkup_skel);
+}
+
void test_mptcp(void)
{
if (test__start_subtest("base"))
test_base();
if (test__start_subtest("first"))
test_first();
+ if (test__start_subtest("bkup"))
+ test_bkup();
}
--
2.34.1
* [PATCH mptcp-next v4 09/10] selftests/bpf: add bpf_rr scheduler
2022-05-31 9:09 [PATCH mptcp-next v4 00/10] BPF packet scheduler Geliang Tang
` (7 preceding siblings ...)
2022-05-31 9:09 ` [PATCH mptcp-next v4 08/10] selftests/bpf: add bpf_backup test Geliang Tang
@ 2022-05-31 9:09 ` Geliang Tang
2022-05-31 9:09 ` [PATCH mptcp-next v4 10/10] selftests/bpf: add bpf_rr test Geliang Tang
9 siblings, 0 replies; 22+ messages in thread
From: Geliang Tang @ 2022-05-31 9:09 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
This patch implements the round-robin BPF MPTCP scheduler, named bpf_rr,
which always picks the next available subflow to send data. If no such
subflow is available, it picks the first one.
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
tools/testing/selftests/bpf/bpf_tcp_helpers.h | 2 +
.../selftests/bpf/progs/mptcp_bpf_rr.c | 46 +++++++++++++++++++
2 files changed, 48 insertions(+)
create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
diff --git a/tools/testing/selftests/bpf/bpf_tcp_helpers.h b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
index 6fa496a65bef..462bf6ca2dda 100644
--- a/tools/testing/selftests/bpf/bpf_tcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
@@ -237,6 +237,7 @@ struct mptcp_subflow_context {
__u32 padding : 12,
backup : 1;
bool scheduled;
+ struct sock *tcp_sock; /* tcp sk backpointer */
} __attribute__((preserve_access_index));
struct mptcp_sched_data {
@@ -258,6 +259,7 @@ struct mptcp_sched_ops {
struct mptcp_sock {
struct inet_connection_sock sk;
+ struct sock *last_snd;
__u32 token;
struct sock *first;
char ca_name[TCP_CA_NAME_MAX];
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
new file mode 100644
index 000000000000..03c1032bc31a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
@@ -0,0 +1,46 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022, SUSE. */
+
+#include <linux/bpf.h>
+#include "bpf_tcp_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+SEC("struct_ops/mptcp_sched_rr_init")
+void BPF_PROG(mptcp_sched_rr_init, const struct mptcp_sock *msk)
+{
+}
+
+SEC("struct_ops/mptcp_sched_rr_release")
+void BPF_PROG(mptcp_sched_rr_release, const struct mptcp_sock *msk)
+{
+}
+
+void BPF_STRUCT_OPS(bpf_rr_get_subflow, const struct mptcp_sock *msk,
+ struct mptcp_sched_data *data)
+{
+ int nr = 0;
+
+ for (int i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
+ if (!msk->last_snd || !data->contexts[i])
+ break;
+
+ if (data->contexts[i]->tcp_sock == msk->last_snd) {
+ if (i + 1 == MPTCP_SUBFLOWS_MAX || !data->contexts[i + 1])
+ break;
+
+ nr = i + 1;
+ break;
+ }
+ }
+
+ data->contexts[nr]->scheduled = 1;
+}
+
+SEC(".struct_ops")
+struct mptcp_sched_ops rr = {
+ .init = (void *)mptcp_sched_rr_init,
+ .release = (void *)mptcp_sched_rr_release,
+ .get_subflow = (void *)bpf_rr_get_subflow,
+ .name = "bpf_rr",
+};
--
2.34.1
* [PATCH mptcp-next v4 10/10] selftests/bpf: add bpf_rr test
2022-05-31 9:09 [PATCH mptcp-next v4 00/10] BPF packet scheduler Geliang Tang
` (8 preceding siblings ...)
2022-05-31 9:09 ` [PATCH mptcp-next v4 09/10] selftests/bpf: add bpf_rr scheduler Geliang Tang
@ 2022-05-31 9:09 ` Geliang Tang
2022-05-31 9:21 ` selftests/bpf: add bpf_rr test: Build Failure MPTCP CI
2022-05-31 10:39 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
9 siblings, 2 replies; 22+ messages in thread
From: Geliang Tang @ 2022-05-31 9:09 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
This patch adds the round-robin BPF MPTCP scheduler test. Use sysctl to
set net.mptcp.scheduler to select this scheduler. Add a veth net device to
simulate the multiple-addresses case. Use the 'ip mptcp endpoint' command
to add this new endpoint to the PM netlink.
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
.../testing/selftests/bpf/prog_tests/mptcp.c | 35 +++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
index f3c73cd2c786..1ecc8a2b76b6 100644
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
@@ -8,6 +8,7 @@
#include "mptcp_sock.skel.h"
#include "mptcp_bpf_first.skel.h"
#include "mptcp_bpf_bkup.skel.h"
+#include "mptcp_bpf_rr.skel.h"
#ifndef TCP_CA_NAME_MAX
#define TCP_CA_NAME_MAX 16
@@ -329,6 +330,38 @@ static void test_bkup(void)
mptcp_bpf_bkup__destroy(bkup_skel);
}
+static void test_rr(void)
+{
+ struct mptcp_bpf_rr *rr_skel;
+ int server_fd, client_fd;
+ struct bpf_link *link;
+
+ rr_skel = mptcp_bpf_rr__open_and_load();
+ if (!ASSERT_OK_PTR(rr_skel, "bpf_rr__open_and_load"))
+ return;
+
+ link = bpf_map__attach_struct_ops(rr_skel->maps.rr);
+ if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops")) {
+ mptcp_bpf_rr__destroy(rr_skel);
+ return;
+ }
+
+ add_veth();
+ system("ip mptcp endpoint add 10.0.1.1 subflow");
+ system("sysctl -qw net.mptcp.scheduler=bpf_rr");
+ server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
+ client_fd = connect_to_fd(server_fd, 0);
+
+ send_data(server_fd, client_fd);
+ ASSERT_OK(system("ss -MOenita | grep '10.0.1.1' | grep -q 'bytes_sent:'"), "ss");
+
+ close(client_fd);
+ close(server_fd);
+ cleanup();
+ bpf_link__destroy(link);
+ mptcp_bpf_rr__destroy(rr_skel);
+}
+
void test_mptcp(void)
{
if (test__start_subtest("base"))
@@ -337,4 +370,6 @@ void test_mptcp(void)
test_first();
if (test__start_subtest("bkup"))
test_bkup();
+ if (test__start_subtest("rr"))
+ test_rr();
}
--
2.34.1
* Re: selftests/bpf: add bpf_rr test: Build Failure
2022-05-31 9:09 ` [PATCH mptcp-next v4 10/10] selftests/bpf: add bpf_rr test Geliang Tang
@ 2022-05-31 9:21 ` MPTCP CI
2022-05-31 10:39 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
1 sibling, 0 replies; 22+ messages in thread
From: MPTCP CI @ 2022-05-31 9:21 UTC (permalink / raw)
To: Geliang Tang; +Cc: mptcp
Hi Geliang,
Thank you for your modifications, that's great!
But sadly, our CI spotted some issues with it when trying to build it.
You can find more details there:
https://patchwork.kernel.org/project/mptcp/patch/31935db8f3291afbea6e5dc057f63032ed6f259a.1653987929.git.geliang.tang@suse.com/
https://github.com/multipath-tcp/mptcp_net-next/actions/runs/2413917761
Status: failure
Initiator: MPTCPimporter
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/c91606f830c4
Feel free to reply to this email if you cannot access logs, if you need
some support to fix the error, if this doesn't seem to be caused by your
modifications or if the error is a false positive one.
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)
* Re: selftests/bpf: add bpf_rr test: Tests Results
2022-05-31 9:09 ` [PATCH mptcp-next v4 10/10] selftests/bpf: add bpf_rr test Geliang Tang
2022-05-31 9:21 ` selftests/bpf: add bpf_rr test: Build Failure MPTCP CI
@ 2022-05-31 10:39 ` MPTCP CI
1 sibling, 0 replies; 22+ messages in thread
From: MPTCP CI @ 2022-05-31 10:39 UTC (permalink / raw)
To: Geliang Tang; +Cc: mptcp
Hi Geliang,
Thank you for your modifications, that's great!
Our CI did some validations and here is its report:
- KVM Validation: normal:
- Unstable: 1 failed test(s): selftest_simult_flows 🔴:
- Task: https://cirrus-ci.com/task/4799661683769344
- Summary: https://api.cirrus-ci.com/v1/artifact/task/4799661683769344/summary/summary.txt
- KVM Validation: debug:
- Unstable: 3 failed test(s): selftest_diag selftest_mptcp_join selftest_simult_flows 🔴:
- Task: https://cirrus-ci.com/task/5925561590611968
- Summary: https://api.cirrus-ci.com/v1/artifact/task/5925561590611968/summary/summary.txt
Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/c91606f830c4
If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:
$ cd [kernel source code]
$ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
--pull always mptcp/mptcp-upstream-virtme-docker:latest \
auto-debug
For more details:
https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)
* [PATCH mptcp-next v7 13/13] selftests/bpf: Add bpf_rr test
@ 2022-06-02 4:53 Geliang Tang
2022-06-02 6:36 ` selftests/bpf: Add bpf_rr test: Tests Results MPTCP CI
0 siblings, 1 reply; 22+ messages in thread
From: Geliang Tang @ 2022-06-02 4:53 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
This patch adds the round-robin BPF MPTCP scheduler test. Use sysctl to
set net.mptcp.scheduler to select this scheduler. Add a veth net device to
simulate the multiple-addresses case. Use the 'ip mptcp endpoint' command
to add this new endpoint to the PM netlink. Send data, then check the
bytes_sent field of the 'ss' output to make sure the data has been sent
on the new veth net device.
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
.../testing/selftests/bpf/prog_tests/mptcp.c | 35 +++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
index f3c73cd2c786..1ecc8a2b76b6 100644
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
@@ -8,6 +8,7 @@
#include "mptcp_sock.skel.h"
#include "mptcp_bpf_first.skel.h"
#include "mptcp_bpf_bkup.skel.h"
+#include "mptcp_bpf_rr.skel.h"
#ifndef TCP_CA_NAME_MAX
#define TCP_CA_NAME_MAX 16
@@ -329,6 +330,38 @@ static void test_bkup(void)
mptcp_bpf_bkup__destroy(bkup_skel);
}
+static void test_rr(void)
+{
+ struct mptcp_bpf_rr *rr_skel;
+ int server_fd, client_fd;
+ struct bpf_link *link;
+
+ rr_skel = mptcp_bpf_rr__open_and_load();
+ if (!ASSERT_OK_PTR(rr_skel, "bpf_rr__open_and_load"))
+ return;
+
+ link = bpf_map__attach_struct_ops(rr_skel->maps.rr);
+ if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops")) {
+ mptcp_bpf_rr__destroy(rr_skel);
+ return;
+ }
+
+ add_veth();
+ system("ip mptcp endpoint add 10.0.1.1 subflow");
+ system("sysctl -qw net.mptcp.scheduler=bpf_rr");
+ server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
+ client_fd = connect_to_fd(server_fd, 0);
+
+ send_data(server_fd, client_fd);
+ ASSERT_OK(system("ss -MOenita | grep '10.0.1.1' | grep -q 'bytes_sent:'"), "ss");
+
+ close(client_fd);
+ close(server_fd);
+ cleanup();
+ bpf_link__destroy(link);
+ mptcp_bpf_rr__destroy(rr_skel);
+}
+
void test_mptcp(void)
{
if (test__start_subtest("base"))
@@ -337,4 +370,6 @@ void test_mptcp(void)
test_first();
if (test__start_subtest("bkup"))
test_bkup();
+ if (test__start_subtest("rr"))
+ test_rr();
}
--
2.34.1
* [PATCH mptcp-next v6 11/11] selftests/bpf: add bpf_rr test
@ 2022-06-01 14:08 Geliang Tang
2022-06-01 16:02 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
0 siblings, 1 reply; 22+ messages in thread
From: Geliang Tang @ 2022-06-01 14:08 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
This patch adds the round-robin BPF MPTCP scheduler test. Use sysctl to
set net.mptcp.scheduler to select this scheduler. Add a veth net device to
simulate the multiple-addresses case. Use the 'ip mptcp endpoint' command
to add this new endpoint to the PM netlink. Send data, then check the
bytes_sent field of the 'ss' output to make sure the data has been sent
on the new veth net device.
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
.../testing/selftests/bpf/prog_tests/mptcp.c | 35 +++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
index f3c73cd2c786..1ecc8a2b76b6 100644
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
@@ -8,6 +8,7 @@
#include "mptcp_sock.skel.h"
#include "mptcp_bpf_first.skel.h"
#include "mptcp_bpf_bkup.skel.h"
+#include "mptcp_bpf_rr.skel.h"
#ifndef TCP_CA_NAME_MAX
#define TCP_CA_NAME_MAX 16
@@ -329,6 +330,38 @@ static void test_bkup(void)
mptcp_bpf_bkup__destroy(bkup_skel);
}
+static void test_rr(void)
+{
+ struct mptcp_bpf_rr *rr_skel;
+ int server_fd, client_fd;
+ struct bpf_link *link;
+
+ rr_skel = mptcp_bpf_rr__open_and_load();
+ if (!ASSERT_OK_PTR(rr_skel, "bpf_rr__open_and_load"))
+ return;
+
+ link = bpf_map__attach_struct_ops(rr_skel->maps.rr);
+ if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops")) {
+ mptcp_bpf_rr__destroy(rr_skel);
+ return;
+ }
+
+ add_veth();
+ system("ip mptcp endpoint add 10.0.1.1 subflow");
+ system("sysctl -qw net.mptcp.scheduler=bpf_rr");
+ server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
+ client_fd = connect_to_fd(server_fd, 0);
+
+ send_data(server_fd, client_fd);
+ ASSERT_OK(system("ss -MOenita | grep '10.0.1.1' | grep -q 'bytes_sent:'"), "ss");
+
+ close(client_fd);
+ close(server_fd);
+ cleanup();
+ bpf_link__destroy(link);
+ mptcp_bpf_rr__destroy(rr_skel);
+}
+
void test_mptcp(void)
{
if (test__start_subtest("base"))
@@ -337,4 +370,6 @@ void test_mptcp(void)
test_first();
if (test__start_subtest("bkup"))
test_bkup();
+ if (test__start_subtest("rr"))
+ test_rr();
}
--
2.34.1
* [PATCH mptcp-next v5 11/11] selftests/bpf: add bpf_rr test
@ 2022-06-01 6:46 Geliang Tang
2022-06-01 8:16 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
0 siblings, 1 reply; 22+ messages in thread
From: Geliang Tang @ 2022-06-01 6:46 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
This patch adds the round-robin BPF MPTCP scheduler test. Use sysctl to
set net.mptcp.scheduler to select this scheduler. Add a veth net device to
simulate the multiple-addresses case. Use the 'ip mptcp endpoint' command
to add this new endpoint to the PM netlink. Send data, then check the
bytes_sent field of the 'ss' output to make sure the data has been sent
on the new veth net device.
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
.../testing/selftests/bpf/prog_tests/mptcp.c | 35 +++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
index f3c73cd2c786..1ecc8a2b76b6 100644
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
@@ -8,6 +8,7 @@
#include "mptcp_sock.skel.h"
#include "mptcp_bpf_first.skel.h"
#include "mptcp_bpf_bkup.skel.h"
+#include "mptcp_bpf_rr.skel.h"
#ifndef TCP_CA_NAME_MAX
#define TCP_CA_NAME_MAX 16
@@ -329,6 +330,38 @@ static void test_bkup(void)
mptcp_bpf_bkup__destroy(bkup_skel);
}
+static void test_rr(void)
+{
+ struct mptcp_bpf_rr *rr_skel;
+ int server_fd, client_fd;
+ struct bpf_link *link;
+
+ rr_skel = mptcp_bpf_rr__open_and_load();
+ if (!ASSERT_OK_PTR(rr_skel, "bpf_rr__open_and_load"))
+ return;
+
+ link = bpf_map__attach_struct_ops(rr_skel->maps.rr);
+ if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops")) {
+ mptcp_bpf_rr__destroy(rr_skel);
+ return;
+ }
+
+ add_veth();
+ system("ip mptcp endpoint add 10.0.1.1 subflow");
+ system("sysctl -qw net.mptcp.scheduler=bpf_rr");
+ server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
+ client_fd = connect_to_fd(server_fd, 0);
+
+ send_data(server_fd, client_fd);
+ ASSERT_OK(system("ss -MOenita | grep '10.0.1.1' | grep -q 'bytes_sent:'"), "ss");
+
+ close(client_fd);
+ close(server_fd);
+ cleanup();
+ bpf_link__destroy(link);
+ mptcp_bpf_rr__destroy(rr_skel);
+}
+
void test_mptcp(void)
{
if (test__start_subtest("base"))
@@ -337,4 +370,6 @@ void test_mptcp(void)
test_first();
if (test__start_subtest("bkup"))
test_bkup();
+ if (test__start_subtest("rr"))
+ test_rr();
}
--
2.34.1
* [PATCH mptcp-next v3 10/10] selftests/bpf: add bpf_rr test
@ 2022-05-28 15:11 Geliang Tang
2022-05-28 17:05 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
0 siblings, 1 reply; 22+ messages in thread
From: Geliang Tang @ 2022-05-28 15:11 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
This patch adds the round-robin BPF MPTCP scheduler test. Use sysctl to
set net.mptcp.scheduler to select this scheduler. Add a veth net device to
simulate the multiple-addresses case. Use the 'ip mptcp endpoint' command
to add this new endpoint to the PM netlink.
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
.../testing/selftests/bpf/prog_tests/mptcp.c | 35 +++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
index 1f92d251a9ba..17536ea8dbab 100644
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
@@ -8,6 +8,7 @@
#include "mptcp_sock.skel.h"
#include "mptcp_bpf_first.skel.h"
#include "mptcp_bpf_backup.skel.h"
+#include "mptcp_bpf_rr.skel.h"
#ifndef TCP_CA_NAME_MAX
#define TCP_CA_NAME_MAX 16
@@ -329,6 +330,38 @@ static void test_backup(void)
mptcp_bpf_backup__destroy(backup_skel);
}
+static void test_rr(void)
+{
+ struct mptcp_bpf_rr *rr_skel;
+ int server_fd, client_fd;
+ struct bpf_link *link;
+
+ rr_skel = mptcp_bpf_rr__open_and_load();
+ if (!ASSERT_OK_PTR(rr_skel, "bpf_rr__open_and_load"))
+ return;
+
+ link = bpf_map__attach_struct_ops(rr_skel->maps.rr);
+ if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops")) {
+ mptcp_bpf_rr__destroy(rr_skel);
+ return;
+ }
+
+ add_veth();
+ system("ip mptcp endpoint add 10.0.1.1 subflow");
+ system("sysctl -qw net.mptcp.scheduler=bpf_rr");
+ server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
+ client_fd = connect_to_fd(server_fd, 0);
+
+ send_data(server_fd, client_fd);
+ ASSERT_OK(system("ss -MOenita | grep '10.0.1.1' | grep -q 'bytes_sent:'"), "ss");
+
+ close(client_fd);
+ close(server_fd);
+ cleanup();
+ bpf_link__destroy(link);
+ mptcp_bpf_rr__destroy(rr_skel);
+}
+
void test_mptcp(void)
{
if (test__start_subtest("base"))
@@ -337,4 +370,6 @@ void test_mptcp(void)
test_first();
if (test__start_subtest("backup"))
test_backup();
+ if (test__start_subtest("rr"))
+ test_rr();
}
--
2.34.1
* [PATCH mptcp-next v13 3/3] selftests/bpf: add bpf_rr test
@ 2022-05-09 14:56 Geliang Tang
2022-05-09 16:48 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
0 siblings, 1 reply; 22+ messages in thread
From: Geliang Tang @ 2022-05-09 14:56 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
This patch adds the round-robin BPF MPTCP scheduler test. Use sysctl to
set net.mptcp.scheduler to select this scheduler. Add a veth net device to
simulate the multiple-addresses case. Use the 'ip mptcp endpoint' command
to add this new endpoint to the PM netlink.
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
.../testing/selftests/bpf/prog_tests/mptcp.c | 38 +++++++++++++++++++
1 file changed, 38 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
index cbd9367c1e0f..5058daf15ce5 100644
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
@@ -6,6 +6,7 @@
#include "cgroup_helpers.h"
#include "network_helpers.h"
#include "mptcp_bpf_first.skel.h"
+#include "mptcp_bpf_rr.skel.h"
#ifndef TCP_CA_NAME_MAX
#define TCP_CA_NAME_MAX 16
@@ -365,10 +366,47 @@ static void test_first(void)
mptcp_bpf_first__destroy(first_skel);
}
+static void test_rr(void)
+{
+ struct mptcp_bpf_rr *rr_skel;
+ int server_fd, client_fd;
+ struct bpf_link *link;
+
+ rr_skel = mptcp_bpf_rr__open_and_load();
+ if (!ASSERT_OK_PTR(rr_skel, "bpf_rr__open_and_load"))
+ return;
+
+ link = bpf_map__attach_struct_ops(rr_skel->maps.rr);
+ if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops")) {
+ mptcp_bpf_rr__destroy(rr_skel);
+ return;
+ }
+
+ system("ip link add veth1 type veth");
+ system("ip addr add 10.0.1.1/24 dev veth1");
+ system("ip link set veth1 up");
+ system("ip mptcp endpoint add 10.0.1.1 subflow");
+ system("sysctl -qw net.mptcp.scheduler=bpf_rr");
+ server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
+ client_fd = connect_to_mptcp_fd(server_fd, 0);
+
+ send_data(server_fd, client_fd);
+
+ close(client_fd);
+ close(server_fd);
+ system("sysctl -qw net.mptcp.scheduler=default");
+ system("ip mptcp endpoint flush");
+ system("ip link del veth1");
+ bpf_link__destroy(link);
+ mptcp_bpf_rr__destroy(rr_skel);
+}
+
void test_mptcp(void)
{
if (test__start_subtest("base"))
test_base();
if (test__start_subtest("first"))
test_first();
+ if (test__start_subtest("rr"))
+ test_rr();
}
--
2.34.1
end of thread, other threads:[~2022-06-02 6:36 UTC | newest]
Thread overview: 22+ messages
2022-05-31 9:09 [PATCH mptcp-next v4 00/10] BPF packet scheduler Geliang Tang
2022-05-31 9:09 ` [PATCH mptcp-next v4 01/10] Squash to "mptcp: add struct mptcp_sched_ops" Geliang Tang
2022-06-01 0:38 ` Mat Martineau
2022-05-31 9:09 ` [PATCH mptcp-next v4 02/10] Squash to "mptcp: add sched in mptcp_sock" Geliang Tang
2022-05-31 9:09 ` [PATCH mptcp-next v4 03/10] Squash to "mptcp: add get_subflow wrappers" Geliang Tang
2022-06-01 0:59 ` Mat Martineau
2022-05-31 9:09 ` [PATCH mptcp-next v4 04/10] Squash to "mptcp: add bpf_mptcp_sched_ops" Geliang Tang
2022-05-31 9:09 ` [PATCH mptcp-next v4 05/10] Squash to "selftests/bpf: add bpf_first scheduler" Geliang Tang
2022-05-31 9:09 ` [PATCH mptcp-next v4 06/10] Squash to "selftests/bpf: add bpf_first test" Geliang Tang
2022-05-31 9:09 ` [PATCH mptcp-next v4 07/10] selftests/bpf: add bpf_bkup scheduler Geliang Tang
2022-06-01 1:11 ` Mat Martineau
2022-05-31 9:09 ` [PATCH mptcp-next v4 08/10] selftests/bpf: add bpf_backup test Geliang Tang
2022-05-31 9:24 ` Geliang Tang
2022-05-31 9:09 ` [PATCH mptcp-next v4 09/10] selftests/bpf: add bpf_rr scheduler Geliang Tang
2022-05-31 9:09 ` [PATCH mptcp-next v4 10/10] selftests/bpf: add bpf_rr test Geliang Tang
2022-05-31 9:21 ` selftests/bpf: add bpf_rr test: Build Failure MPTCP CI
2022-05-31 10:39 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
-- strict thread matches above, loose matches on Subject: below --
2022-06-02 4:53 [PATCH mptcp-next v7 13/13] selftests/bpf: Add bpf_rr test Geliang Tang
2022-06-02 6:36 ` selftests/bpf: Add bpf_rr test: Tests Results MPTCP CI
2022-06-01 14:08 [PATCH mptcp-next v6 11/11] selftests/bpf: add bpf_rr test Geliang Tang
2022-06-01 16:02 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
2022-06-01 6:46 [PATCH mptcp-next v5 11/11] selftests/bpf: add bpf_rr test Geliang Tang
2022-06-01 8:16 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
2022-05-28 15:11 [PATCH mptcp-next v3 10/10] selftests/bpf: add bpf_rr test Geliang Tang
2022-05-28 17:05 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
2022-05-09 14:56 [PATCH mptcp-next v13 3/3] selftests/bpf: add bpf_rr test Geliang Tang
2022-05-09 16:48 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI