* [PATCH mptcp-next v13 0/3] BPF round-robin scheduler
@ 2022-05-09 14:56 Geliang Tang
2022-05-09 14:56 ` [PATCH mptcp-next v13 1/3] mptcp: add subflows array in sched data Geliang Tang
` (3 more replies)
0 siblings, 4 replies; 11+ messages in thread
From: Geliang Tang @ 2022-05-09 14:56 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
v13:
- add !msk->last_snd check in patch 2
- use ASSERT_OK_PTR instead of CHECK in patch 3
- base-commit: export/20220509T115202
v12:
- init ssk from data->contexts[0], instead of msk->first.
- cycle through all the subflows, instead of the first two.
v11:
- rename array to contexts.
- drop number of subflows in mptcp_sched_data.
- set unused array elements to NULL.
- add MPTCP_SUBFLOWS_MAX check in mptcp_sched_data_init.
v10:
- init subflows array in mptcp_sched_data_init.
- for (int i = 0; i < data->subflows; i++) is not allowed in BPF, using
this instead:
for (int i = 0; i < MPTCP_SUBFLOWS_MAX && i < data->subflows; i++)
- depends on: "BPF packet scheduler" series v18.
v9:
- add subflows array in mptcp_sched_data
- depends on: "BPF packet scheduler" series v17 +
Squash to "mptcp: add struct mptcp_sched_ops v17".
v8:
- use struct mptcp_sched_data.
- depends on: "BPF packet scheduler" series v14.
v7:
- rename retrans to reinject.
- drop last_snd setting.
- depends on: "BPF packet scheduler" series v13.
v6:
- set call_me_again flag.
- depends on: "BPF packet scheduler" series v12.
v5:
- update patch 2, use temporary storage instead.
- update patch 3, use new helpers.
- depends on: "BPF packet scheduler" series v11.
v4:
- add retrans argument for get_subflow()
v3:
- add last_snd write access.
- keep msk->last_snd setting in get_subflow().
- depends on: "BPF packet scheduler" series v10.
v2:
- merge the squash-to patch.
- implement bpf_mptcp_get_subflows helper, instead of
bpf_mptcp_get_next_subflow.
- depends on: "BPF packet scheduler v9".
This patchset implements a round-robin scheduler using BPF. It addresses
some comments on the RFC version:
https://patchwork.kernel.org/project/mptcp/cover/cover.1631011068.git.geliangtang@xiaomi.com/
Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/75
Geliang Tang (3):
mptcp: add subflows array in sched data
selftests/bpf: add bpf_rr scheduler
selftests/bpf: add bpf_rr test
include/net/mptcp.h | 2 +
net/mptcp/sched.c | 14 ++++++
.../testing/selftests/bpf/bpf_mptcp_helpers.h | 8 ++++
.../testing/selftests/bpf/prog_tests/mptcp.c | 38 +++++++++++++++
.../selftests/bpf/progs/mptcp_bpf_rr.c | 47 +++++++++++++++++++
5 files changed, 109 insertions(+)
create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
--
2.34.1
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH mptcp-next v13 1/3] mptcp: add subflows array in sched data
2022-05-09 14:56 [PATCH mptcp-next v13 0/3] BPF round-robin scheduler Geliang Tang
@ 2022-05-09 14:56 ` Geliang Tang
2022-05-09 14:56 ` [PATCH mptcp-next v13 2/3] selftests/bpf: add bpf_rr scheduler Geliang Tang
` (2 subsequent siblings)
3 siblings, 0 replies; 11+ messages in thread
From: Geliang Tang @ 2022-05-09 14:56 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
This patch adds an array of subflow pointers in struct mptcp_sched_data.
The array is populated before get_subflow() is invoked, so that it can be
read from get_subflow() in the BPF context.
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
include/net/mptcp.h | 2 ++
net/mptcp/sched.c | 14 ++++++++++++++
tools/testing/selftests/bpf/bpf_mptcp_helpers.h | 2 ++
3 files changed, 18 insertions(+)
diff --git a/include/net/mptcp.h b/include/net/mptcp.h
index b596ba7a8494..345f27a68eaa 100644
--- a/include/net/mptcp.h
+++ b/include/net/mptcp.h
@@ -96,10 +96,12 @@ struct mptcp_out_options {
 };
 
 #define MPTCP_SCHED_NAME_MAX	16
+#define MPTCP_SUBFLOWS_MAX	8
 
 struct mptcp_sched_data {
 	struct sock	*sock;
 	bool		call_again;
+	struct mptcp_subflow_context *contexts[MPTCP_SUBFLOWS_MAX];
 };
 
 struct mptcp_sched_ops {
struct mptcp_sched_ops {
diff --git a/net/mptcp/sched.c b/net/mptcp/sched.c
index 3ceb721e6489..9ee2d30a6f19 100644
--- a/net/mptcp/sched.c
+++ b/net/mptcp/sched.c
@@ -91,9 +91,23 @@ void mptcp_release_sched(struct mptcp_sock *msk)
 
 static int mptcp_sched_data_init(struct mptcp_sock *msk,
 				 struct mptcp_sched_data *data)
 {
+	struct mptcp_subflow_context *subflow;
+	int i = 0;
+
 	data->sock = NULL;
 	data->call_again = 0;
 
+	mptcp_for_each_subflow(msk, subflow) {
+		if (i == MPTCP_SUBFLOWS_MAX) {
+			pr_warn_once("too many subflows");
+			break;
+		}
+		data->contexts[i++] = subflow;
+	}
+
+	for (; i < MPTCP_SUBFLOWS_MAX; i++)
+		data->contexts[i] = NULL;
+
 	return 0;
 }
diff --git a/tools/testing/selftests/bpf/bpf_mptcp_helpers.h b/tools/testing/selftests/bpf/bpf_mptcp_helpers.h
index 4b7812c335fc..7ecd0b14666a 100644
--- a/tools/testing/selftests/bpf/bpf_mptcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_mptcp_helpers.h
@@ -7,10 +7,12 @@
 #include "bpf_tcp_helpers.h"
 
 #define MPTCP_SCHED_NAME_MAX	16
+#define MPTCP_SUBFLOWS_MAX	8
 
 struct mptcp_sched_data {
 	struct sock	*sock;
 	bool		call_again;
+	struct mptcp_subflow_context *contexts[MPTCP_SUBFLOWS_MAX];
 };
 
 struct mptcp_sched_ops {
struct mptcp_sched_ops {
--
2.34.1
* [PATCH mptcp-next v13 2/3] selftests/bpf: add bpf_rr scheduler
2022-05-09 14:56 [PATCH mptcp-next v13 0/3] BPF round-robin scheduler Geliang Tang
2022-05-09 14:56 ` [PATCH mptcp-next v13 1/3] mptcp: add subflows array in sched data Geliang Tang
@ 2022-05-09 14:56 ` Geliang Tang
2022-05-09 14:56 ` [PATCH mptcp-next v13 3/3] selftests/bpf: add bpf_rr test Geliang Tang
2022-05-11 0:57 ` [PATCH mptcp-next v13 0/3] BPF round-robin scheduler Mat Martineau
3 siblings, 0 replies; 11+ messages in thread
From: Geliang Tang @ 2022-05-09 14:56 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
This patch implements the round-robin BPF MPTCP scheduler, named bpf_rr,
which always picks the next available subflow to send data on. If no next
subflow is available, it falls back to the first one.
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
.../testing/selftests/bpf/bpf_mptcp_helpers.h | 6 +++
.../selftests/bpf/progs/mptcp_bpf_rr.c | 47 +++++++++++++++++++
2 files changed, 53 insertions(+)
create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
diff --git a/tools/testing/selftests/bpf/bpf_mptcp_helpers.h b/tools/testing/selftests/bpf/bpf_mptcp_helpers.h
index 7ecd0b14666a..2d5109f459b4 100644
--- a/tools/testing/selftests/bpf/bpf_mptcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_mptcp_helpers.h
@@ -29,10 +29,16 @@ struct mptcp_sched_ops {
 
 struct mptcp_sock {
 	struct inet_connection_sock	sk;
+	struct sock	*last_snd;
 	__u32		token;
 	struct sock	*first;
 	struct mptcp_sched_ops	*sched;
 	char		ca_name[TCP_CA_NAME_MAX];
 } __attribute__((preserve_access_index));
 
+struct mptcp_subflow_context {
+	__u32	token;
+	struct sock	*tcp_sock;	/* tcp sk backpointer */
+} __attribute__((preserve_access_index));
+
 #endif
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
new file mode 100644
index 000000000000..2604e0783067
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
@@ -0,0 +1,47 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022, SUSE. */
+
+#include <linux/bpf.h>
+#include "bpf_mptcp_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+SEC("struct_ops/mptcp_sched_rr_init")
+void BPF_PROG(mptcp_sched_rr_init, const struct mptcp_sock *msk)
+{
+}
+
+SEC("struct_ops/mptcp_sched_rr_release")
+void BPF_PROG(mptcp_sched_rr_release, const struct mptcp_sock *msk)
+{
+}
+
+void BPF_STRUCT_OPS(bpf_rr_get_subflow, const struct mptcp_sock *msk,
+		    bool reinject, struct mptcp_sched_data *data)
+{
+	struct sock *ssk = data->contexts[0]->tcp_sock;
+
+	for (int i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
+		if (!msk->last_snd || !data->contexts[i])
+			break;
+
+		if (data->contexts[i]->tcp_sock == msk->last_snd) {
+			if (i + 1 == MPTCP_SUBFLOWS_MAX || !data->contexts[i + 1])
+				break;
+
+			ssk = data->contexts[i + 1]->tcp_sock;
+			break;
+		}
+	}
+
+	data->sock = ssk;
+	data->call_again = 0;
+}
+
+SEC(".struct_ops")
+struct mptcp_sched_ops rr = {
+	.init		= (void *)mptcp_sched_rr_init,
+	.release	= (void *)mptcp_sched_rr_release,
+	.get_subflow	= (void *)bpf_rr_get_subflow,
+	.name		= "bpf_rr",
+};
--
2.34.1
* [PATCH mptcp-next v13 3/3] selftests/bpf: add bpf_rr test
2022-05-09 14:56 [PATCH mptcp-next v13 0/3] BPF round-robin scheduler Geliang Tang
2022-05-09 14:56 ` [PATCH mptcp-next v13 1/3] mptcp: add subflows array in sched data Geliang Tang
2022-05-09 14:56 ` [PATCH mptcp-next v13 2/3] selftests/bpf: add bpf_rr scheduler Geliang Tang
@ 2022-05-09 14:56 ` Geliang Tang
2022-05-09 16:48 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
2022-05-11 0:57 ` [PATCH mptcp-next v13 0/3] BPF round-robin scheduler Mat Martineau
3 siblings, 1 reply; 11+ messages in thread
From: Geliang Tang @ 2022-05-09 14:56 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
This patch adds the round-robin BPF MPTCP scheduler test. Use sysctl to
set net.mptcp.scheduler to select this scheduler. Add a veth net device to
simulate the multiple-address case, and use the 'ip mptcp endpoint' command
to add the new endpoint to the PM netlink.
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
.../testing/selftests/bpf/prog_tests/mptcp.c | 38 +++++++++++++++++++
1 file changed, 38 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
index cbd9367c1e0f..5058daf15ce5 100644
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
@@ -6,6 +6,7 @@
 #include "cgroup_helpers.h"
 #include "network_helpers.h"
 #include "mptcp_bpf_first.skel.h"
+#include "mptcp_bpf_rr.skel.h"
 
 #ifndef TCP_CA_NAME_MAX
 #define TCP_CA_NAME_MAX	16
@@ -365,10 +366,47 @@ static void test_first(void)
 	mptcp_bpf_first__destroy(first_skel);
 }
 
+static void test_rr(void)
+{
+	struct mptcp_bpf_rr *rr_skel;
+	int server_fd, client_fd;
+	struct bpf_link *link;
+
+	rr_skel = mptcp_bpf_rr__open_and_load();
+	if (!ASSERT_OK_PTR(rr_skel, "bpf_rr__open_and_load"))
+		return;
+
+	link = bpf_map__attach_struct_ops(rr_skel->maps.rr);
+	if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops")) {
+		mptcp_bpf_rr__destroy(rr_skel);
+		return;
+	}
+
+	system("ip link add veth1 type veth");
+	system("ip addr add 10.0.1.1/24 dev veth1");
+	system("ip link set veth1 up");
+	system("ip mptcp endpoint add 10.0.1.1 subflow");
+	system("sysctl -qw net.mptcp.scheduler=bpf_rr");
+	server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
+	client_fd = connect_to_mptcp_fd(server_fd, 0);
+
+	send_data(server_fd, client_fd);
+
+	close(client_fd);
+	close(server_fd);
+	system("sysctl -qw net.mptcp.scheduler=default");
+	system("ip mptcp endpoint flush");
+	system("ip link del veth1");
+	bpf_link__destroy(link);
+	mptcp_bpf_rr__destroy(rr_skel);
+}
+
 void test_mptcp(void)
 {
 	if (test__start_subtest("base"))
 		test_base();
 	if (test__start_subtest("first"))
 		test_first();
+	if (test__start_subtest("rr"))
+		test_rr();
 }
--
2.34.1
* Re: selftests/bpf: add bpf_rr test: Tests Results
2022-05-09 14:56 ` [PATCH mptcp-next v13 3/3] selftests/bpf: add bpf_rr test Geliang Tang
@ 2022-05-09 16:48 ` MPTCP CI
0 siblings, 0 replies; 11+ messages in thread
From: MPTCP CI @ 2022-05-09 16:48 UTC (permalink / raw)
To: Geliang Tang; +Cc: mptcp
Hi Geliang,
Thank you for your modifications, that's great!
Our CI did some validations and here is its report:
- KVM Validation: normal:
- Success! ✅:
- Task: https://cirrus-ci.com/task/6702307382394880
- Summary: https://api.cirrus-ci.com/v1/artifact/task/6702307382394880/summary/summary.txt
- KVM Validation: debug:
- Unstable: 1 failed test(s): selftest_diag 🔴:
- Task: https://cirrus-ci.com/task/4556060684976128
- Summary: https://api.cirrus-ci.com/v1/artifact/task/4556060684976128/summary/summary.txt
Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/4c681daf367a
If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:
$ cd [kernel source code]
$ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
--pull always mptcp/mptcp-upstream-virtme-docker:latest \
auto-debug
For more details:
https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)
* Re: [PATCH mptcp-next v13 0/3] BPF round-robin scheduler
2022-05-09 14:56 [PATCH mptcp-next v13 0/3] BPF round-robin scheduler Geliang Tang
` (2 preceding siblings ...)
2022-05-09 14:56 ` [PATCH mptcp-next v13 3/3] selftests/bpf: add bpf_rr test Geliang Tang
@ 2022-05-11 0:57 ` Mat Martineau
3 siblings, 0 replies; 11+ messages in thread
From: Mat Martineau @ 2022-05-11 0:57 UTC (permalink / raw)
To: Geliang Tang; +Cc: mptcp
On Mon, 9 May 2022, Geliang Tang wrote:
> v13:
> - add !msk->last_snd check in patch 2
> - use ASSERT_OK_PTR instead of CHECK in patch 3
> - base-commit: export/20220509T115202
>
Thanks for the updates. Like the v12 review, I think we should wait until
this is rebased on the bpf header changes before applying the patches.
- Mat
> v12:
> - init ssk from data->contexts[0], instead of msk->first.
> - cycle through all the subflows, instead of the first two.
>
> v11:
> - rename array to contexts.
> - drop number of subflows in mptcp_sched_data.
> - set unused array elements to NULL.
> - add MPTCP_SUBFLOWS_MAX check in mptcp_sched_data_init.
>
> v10:
> - init subflows array in mptcp_sched_data_init.
> - for (int i = 0; i < data->subflows; i++) is not allowed in BPF, using
> this instead:
> for (int i = 0; i < MPTCP_SUBFLOWS_MAX && i < data->subflows; i++)
> - depends on: "BPF packet scheduler" series v18.
>
> v9:
> - add subflows array in mptcp_sched_data
> - depends on: "BPF packet scheduler" series v17 +
> Squash to "mptcp: add struct mptcp_sched_ops v17".
>
> v8:
> - use struct mptcp_sched_data.
> - depends on: "BPF packet scheduler" series v14.
>
> v7:
> - rename retrans to reinject.
> - drop last_snd setting.
> - depends on: "BPF packet scheduler" series v13.
>
> v6:
> - set call_me_again flag.
> - depends on: "BPF packet scheduler" series v12.
>
> v5:
> - update patch 2, use temporary storage instead.
> - update patch 3, use new helpers.
> - depends on: "BPF packet scheduler" series v11.
>
> v4:
> - add retrans argument for get_subflow()
>
> v3:
> - add last_snd write access.
> - keep msk->last_snd setting in get_subflow().
> - depends on: "BPF packet scheduler" series v10.
>
> v2:
> - merge the squash-to patch.
> - implement bpf_mptcp_get_subflows helper, instead of
> bpf_mptcp_get_next_subflow.
> - depends on: "BPF packet scheduler v9".
>
> This patchset implements a round-robin scheduler using BPF. It addresses
> some comments on the RFC version:
>
> https://patchwork.kernel.org/project/mptcp/cover/cover.1631011068.git.geliangtang@xiaomi.com/
>
> Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/75
>
> Geliang Tang (3):
> mptcp: add subflows array in sched data
> selftests/bpf: add bpf_rr scheduler
> selftests/bpf: add bpf_rr test
>
> include/net/mptcp.h | 2 +
> net/mptcp/sched.c | 14 ++++++
> .../testing/selftests/bpf/bpf_mptcp_helpers.h | 8 ++++
> .../testing/selftests/bpf/prog_tests/mptcp.c | 38 +++++++++++++++
> .../selftests/bpf/progs/mptcp_bpf_rr.c | 47 +++++++++++++++++++
> 5 files changed, 109 insertions(+)
> create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
>
> --
> 2.34.1
>
>
>
--
Mat Martineau
Intel
* Re: selftests/bpf: Add bpf_rr test: Tests Results
2022-06-02 4:53 [PATCH mptcp-next v7 13/13] selftests/bpf: Add bpf_rr test Geliang Tang
@ 2022-06-02 6:36 ` MPTCP CI
0 siblings, 0 replies; 11+ messages in thread
From: MPTCP CI @ 2022-06-02 6:36 UTC (permalink / raw)
To: Geliang Tang; +Cc: mptcp
Hi Geliang,
Thank you for your modifications, that's great!
Our CI did some validations and here is its report:
- KVM Validation: normal:
- Unstable: 1 failed test(s): selftest_mptcp_join 🔴:
- Task: https://cirrus-ci.com/task/6186355024723968
- Summary: https://api.cirrus-ci.com/v1/artifact/task/6186355024723968/summary/summary.txt
- KVM Validation: debug:
- Unstable: 3 failed test(s): packetdrill_add_addr selftest_diag selftest_mptcp_join 🔴:
- Task: https://cirrus-ci.com/task/5623405071302656
- Summary: https://api.cirrus-ci.com/v1/artifact/task/5623405071302656/summary/summary.txt
Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/09f1dd4b0a46
If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:
$ cd [kernel source code]
$ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
--pull always mptcp/mptcp-upstream-virtme-docker:latest \
auto-debug
For more details:
https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)
* Re: selftests/bpf: add bpf_rr test: Tests Results
2022-06-01 14:08 [PATCH mptcp-next v6 11/11] selftests/bpf: add bpf_rr test Geliang Tang
@ 2022-06-01 16:02 ` MPTCP CI
0 siblings, 0 replies; 11+ messages in thread
From: MPTCP CI @ 2022-06-01 16:02 UTC (permalink / raw)
To: Geliang Tang; +Cc: mptcp
Hi Geliang,
Thank you for your modifications, that's great!
Our CI did some validations and here is its report:
- KVM Validation: normal:
- Unstable: 1 failed test(s): selftest_mptcp_join 🔴:
- Task: https://cirrus-ci.com/task/6191871876661248
- Summary: https://api.cirrus-ci.com/v1/artifact/task/6191871876661248/summary/summary.txt
- KVM Validation: debug:
- Unstable: 1 failed test(s): selftest_diag 🔴:
- Task: https://cirrus-ci.com/task/4951362948562944
- Summary: https://api.cirrus-ci.com/v1/artifact/task/4951362948562944/summary/summary.txt
Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/4d61a8b660c4
If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:
$ cd [kernel source code]
$ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
--pull always mptcp/mptcp-upstream-virtme-docker:latest \
auto-debug
For more details:
https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)
* Re: selftests/bpf: add bpf_rr test: Tests Results
2022-06-01 6:46 [PATCH mptcp-next v5 11/11] selftests/bpf: add bpf_rr test Geliang Tang
@ 2022-06-01 8:16 ` MPTCP CI
0 siblings, 0 replies; 11+ messages in thread
From: MPTCP CI @ 2022-06-01 8:16 UTC (permalink / raw)
To: Geliang Tang; +Cc: mptcp
Hi Geliang,
Thank you for your modifications, that's great!
Our CI did some validations and here is its report:
- KVM Validation: normal:
- Success! ✅:
- Task: https://cirrus-ci.com/task/6448318602543104
- Summary: https://api.cirrus-ci.com/v1/artifact/task/6448318602543104/summary/summary.txt
- KVM Validation: debug:
- Unstable: 3 failed test(s): packetdrill_add_addr selftest_diag selftest_simult_flows 🔴:
- Task: https://cirrus-ci.com/task/5040943718989824
- Summary: https://api.cirrus-ci.com/v1/artifact/task/5040943718989824/summary/summary.txt
Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/f2e8aeb9dc5b
If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:
$ cd [kernel source code]
$ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
--pull always mptcp/mptcp-upstream-virtme-docker:latest \
auto-debug
For more details:
https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)
* Re: selftests/bpf: add bpf_rr test: Tests Results
2022-05-31 9:09 [PATCH mptcp-next v4 10/10] selftests/bpf: add bpf_rr test Geliang Tang
@ 2022-05-31 10:39 ` MPTCP CI
0 siblings, 0 replies; 11+ messages in thread
From: MPTCP CI @ 2022-05-31 10:39 UTC (permalink / raw)
To: Geliang Tang; +Cc: mptcp
Hi Geliang,
Thank you for your modifications, that's great!
Our CI did some validations and here is its report:
- KVM Validation: normal:
- Unstable: 1 failed test(s): selftest_simult_flows 🔴:
- Task: https://cirrus-ci.com/task/4799661683769344
- Summary: https://api.cirrus-ci.com/v1/artifact/task/4799661683769344/summary/summary.txt
- KVM Validation: debug:
- Unstable: 3 failed test(s): selftest_diag selftest_mptcp_join selftest_simult_flows 🔴:
- Task: https://cirrus-ci.com/task/5925561590611968
- Summary: https://api.cirrus-ci.com/v1/artifact/task/5925561590611968/summary/summary.txt
Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/c91606f830c4
If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:
$ cd [kernel source code]
$ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
--pull always mptcp/mptcp-upstream-virtme-docker:latest \
auto-debug
For more details:
https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)
* Re: selftests/bpf: add bpf_rr test: Tests Results
2022-05-28 15:11 [PATCH mptcp-next v3 10/10] selftests/bpf: add bpf_rr test Geliang Tang
@ 2022-05-28 17:05 ` MPTCP CI
0 siblings, 0 replies; 11+ messages in thread
From: MPTCP CI @ 2022-05-28 17:05 UTC (permalink / raw)
To: Geliang Tang; +Cc: mptcp
Hi Geliang,
Thank you for your modifications, that's great!
Our CI did some validations and here is its report:
- KVM Validation: normal:
- Unstable: 1 failed test(s): selftest_simult_flows 🔴:
- Task: https://cirrus-ci.com/task/5604026279526400
- Summary: https://api.cirrus-ci.com/v1/artifact/task/5604026279526400/summary/summary.txt
- KVM Validation: debug:
- Unstable: 2 failed test(s): selftest_diag selftest_mptcp_join 🔴:
- Task: https://cirrus-ci.com/task/6729926186369024
- Summary: https://api.cirrus-ci.com/v1/artifact/task/6729926186369024/summary/summary.txt
Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/982d66320a49
If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:
$ cd [kernel source code]
$ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
--pull always mptcp/mptcp-upstream-virtme-docker:latest \
auto-debug
For more details:
https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)
end of thread, other threads:[~2022-06-02 6:36 UTC | newest]
Thread overview: 11+ messages
2022-05-09 14:56 [PATCH mptcp-next v13 0/3] BPF round-robin scheduler Geliang Tang
2022-05-09 14:56 ` [PATCH mptcp-next v13 1/3] mptcp: add subflows array in sched data Geliang Tang
2022-05-09 14:56 ` [PATCH mptcp-next v13 2/3] selftests/bpf: add bpf_rr scheduler Geliang Tang
2022-05-09 14:56 ` [PATCH mptcp-next v13 3/3] selftests/bpf: add bpf_rr test Geliang Tang
2022-05-09 16:48 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
2022-05-11 0:57 ` [PATCH mptcp-next v13 0/3] BPF round-robin scheduler Mat Martineau
2022-05-28 15:11 [PATCH mptcp-next v3 10/10] selftests/bpf: add bpf_rr test Geliang Tang
2022-05-28 17:05 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
2022-05-31 9:09 [PATCH mptcp-next v4 10/10] selftests/bpf: add bpf_rr test Geliang Tang
2022-05-31 10:39 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
2022-06-01 6:46 [PATCH mptcp-next v5 11/11] selftests/bpf: add bpf_rr test Geliang Tang
2022-06-01 8:16 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
2022-06-01 14:08 [PATCH mptcp-next v6 11/11] selftests/bpf: add bpf_rr test Geliang Tang
2022-06-01 16:02 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
2022-06-02 4:53 [PATCH mptcp-next v7 13/13] selftests/bpf: Add bpf_rr test Geliang Tang
2022-06-02 6:36 ` selftests/bpf: Add bpf_rr test: Tests Results MPTCP CI