* [PATCH mptcp-next v13 0/9] BPF packet scheduler
@ 2022-04-21  6:22 Geliang Tang
  2022-04-21  6:22 ` [PATCH mptcp-next v13 1/9] mptcp: add struct mptcp_sched_ops Geliang Tang
                   ` (8 more replies)
  0 siblings, 9 replies; 12+ messages in thread
From: Geliang Tang @ 2022-04-21  6:22 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

v13:
 - rename retrans to reinject
 - drop "add last_snd write access" patch
 - change %lu to %zu to fix the build break on i386.

base-commit: export/20220420T152103

v12:
 - add call_me_again flag.

base-commit: export/20220419T055400

v11:
 - add retrans argument for get_subflow()

base-commit: export/20220408T100826

v10:
 - patch 5: keep msk->last_snd setting in get_subflow().
 - patch 6: add bpf_mptcp_sched_btf_struct_access().
 - patch 8: use MIN() in sys/param.h, instead of defining a new one.
 - update commit logs.

base-commit: export/20220406T054706

v9:
 - patch 2: add the missing mptcp_sched_init() call in
   mptcp_proto_init().
 - patch 5: set last_snd after invoking get_subflow().
 - patch 7: merge the squash-to patch.

v8:
 - use global sched_list instead of pernet sched_list.
 - drop synchronize_rcu() in mptcp_unregister_scheduler().
 - update mptcp_init_sched and mptcp_release_sched as Mat and Florian
   suggested.
 - fix the build break in patch 8.
 - depends on: "add skc_to_mptcp_sock" v14.
 - export/20220325T055307

v7:
 - add bpf_try_module_get in mptcp_init_sched.
 - add bpf_module_put in mptcp_release_sched.
 - rename bpf_first to mptcp_bpf_first.
 - update commit logs.

v6:
 - still use the pernet sched_list; use current->nsproxy->net_ns in the BPF
   context instead of init_net.
 - patch 1:
   - use rcu_read_lock instead of spin_lock in mptcp_sched_find as Florian suggested.
   - drop synchronize_rcu in sched_exit_net as Florian suggested.
   - keep synchronize_rcu in mptcp_unregister_scheduler; otherwise I got
     a workqueue lockup in my test.
   - update Makefile as Mat suggested.
 - patch 2:
   - add mptcp_sched_data_init to register default sched, instead of
     registering it in init_net.
 - patch 5:
   - move mptcp_sched_get_subflow to protocol.h as Mat suggested.
 - patch 6:
   - use current->nsproxy->net_ns instead of init_net.
 - patch 8:
   - add send_data to send more data, instead of send_byte.

v5:
 - patch 1: define a per-namespace sched_list (but only the init_net
   namespace is used. It is difficult to get 'net' in bpf_mptcp_sched_reg
   and bpf_mptcp_sched_unreg. I need some suggestions here.)
 - patch 2: skip mptcp_sched_default in mptcp_unregister_scheduler.
 - patch 8: add tests into mptcp.c, instead of bpf_tcp_ca.c.

v4:
 - set msk->sched to &mptcp_sched_default when the sched argument is NULL
   in mptcp_init_sched().

v3:
 - add mptcp_release_sched helper in patch 4.
 - rename mptcp_set_sched to mptcp_init_sched in patch 4.
 - add mptcp_sched_first_release in patch 7.
 - do some cleanups.

v2:
 - split into more small patches.
 - change all parameters of mptcp_sched_ops from sk to msk:
       void (*init)(struct mptcp_sock *msk);
       void (*release)(struct mptcp_sock *msk);
       struct sock *   (*get_subflow)(struct mptcp_sock *msk);
 - add tests in bpf_tcp_ca.c, instead of adding a new one.

v1:
 - Addressed the comments in the RFC version.

Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/75

Geliang Tang (9):
  mptcp: add struct mptcp_sched_ops
  mptcp: register default scheduler
  mptcp: add a new sysctl scheduler
  mptcp: add sched in mptcp_sock
  mptcp: add get_subflow wrapper
  mptcp: add bpf_mptcp_sched_ops
  mptcp: add call_me_again flag
  selftests: bpf: add bpf_first scheduler
  selftests: bpf: add bpf_first test

 Documentation/networking/mptcp-sysctl.rst     |   8 +
 include/net/mptcp.h                           |  13 ++
 kernel/bpf/bpf_struct_ops_types.h             |   4 +
 net/mptcp/Makefile                            |   2 +-
 net/mptcp/bpf.c                               | 145 ++++++++++++++++++
 net/mptcp/ctrl.c                              |  14 ++
 net/mptcp/protocol.c                          |  25 ++-
 net/mptcp/protocol.h                          |  22 +++
 net/mptcp/sched.c                             | 105 +++++++++++++
 .../testing/selftests/bpf/bpf_mptcp_helpers.h |  14 ++
 .../testing/selftests/bpf/prog_tests/mptcp.c  | 113 ++++++++++++++
 .../selftests/bpf/progs/mptcp_bpf_first.c     |  33 ++++
 12 files changed, 492 insertions(+), 6 deletions(-)
 create mode 100644 net/mptcp/sched.c
 create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_first.c

-- 
2.34.1


* [PATCH mptcp-next v13 1/9] mptcp: add struct mptcp_sched_ops
  2022-04-21  6:22 [PATCH mptcp-next v13 0/9] BPF packet scheduler Geliang Tang
@ 2022-04-21  6:22 ` Geliang Tang
  2022-04-21  6:22 ` [PATCH mptcp-next v13 2/9] mptcp: register default scheduler Geliang Tang
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 12+ messages in thread
From: Geliang Tang @ 2022-04-21  6:22 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch defines struct mptcp_sched_ops, which has three struct members
(name, owner and list) and three function pointers (init, release and
get_subflow).

Add the scheduler registering, unregistering and finding functions to add,
delete and find a packet scheduler on the global list mptcp_sched_list.
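
As an illustration of this API (not part of this series; the scheduler name,
function names and module boilerplate below are hypothetical), an additional
in-kernel scheduler could be plugged in roughly like this:

#include <linux/module.h>
#include <net/mptcp.h>

static struct sock *hypo_get_subflow(struct mptcp_sock *msk, bool reinject)
{
        /* a real scheduler would pick and return one of msk's subflows
         * here; returning NULL means no subflow is currently usable
         */
        return NULL;
}

static struct mptcp_sched_ops hypo_sched = {
        .get_subflow    = hypo_get_subflow,
        .name           = "hypothetical",
        .owner          = THIS_MODULE,
};

static int __init hypo_sched_register(void)
{
        return mptcp_register_scheduler(&hypo_sched);
}

static void __exit hypo_sched_unregister(void)
{
        mptcp_unregister_scheduler(&hypo_sched);
}

module_init(hypo_sched_register);
module_exit(hypo_sched_unregister);
MODULE_LICENSE("GPL");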

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 include/net/mptcp.h                           | 13 +++++
 net/mptcp/Makefile                            |  2 +-
 net/mptcp/protocol.h                          |  3 +
 net/mptcp/sched.c                             | 56 +++++++++++++++++++
 .../testing/selftests/bpf/bpf_mptcp_helpers.h | 12 ++++
 5 files changed, 85 insertions(+), 1 deletion(-)
 create mode 100644 net/mptcp/sched.c

diff --git a/include/net/mptcp.h b/include/net/mptcp.h
index 877077b53200..469a93da32a2 100644
--- a/include/net/mptcp.h
+++ b/include/net/mptcp.h
@@ -95,6 +95,19 @@ struct mptcp_out_options {
 #endif
 };
 
+#define MPTCP_SCHED_NAME_MAX 16
+
+struct mptcp_sched_ops {
+	struct sock *(*get_subflow)(struct mptcp_sock *msk, bool reinject);
+
+	char			name[MPTCP_SCHED_NAME_MAX];
+	struct module		*owner;
+	struct list_head	list;
+
+	void (*init)(struct mptcp_sock *msk);
+	void (*release)(struct mptcp_sock *msk);
+} ____cacheline_aligned_in_smp;
+
 #ifdef CONFIG_MPTCP
 extern struct request_sock_ops mptcp_subflow_request_sock_ops;
 
diff --git a/net/mptcp/Makefile b/net/mptcp/Makefile
index 4004347db47e..702b86e8ecb0 100644
--- a/net/mptcp/Makefile
+++ b/net/mptcp/Makefile
@@ -2,7 +2,7 @@
 obj-$(CONFIG_MPTCP) += mptcp.o
 
 mptcp-y := protocol.o subflow.o options.o token.o crypto.o ctrl.o pm.o diag.o \
-	   mib.o pm_netlink.o sockopt.o pm_userspace.o
+	   mib.o pm_netlink.o sockopt.o pm_userspace.o sched.o
 
 obj-$(CONFIG_SYN_COOKIES) += syncookies.o
 obj-$(CONFIG_INET_MPTCP_DIAG) += mptcp_diag.o
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index f542aeaa5b09..18f8739bbf9c 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -621,6 +621,9 @@ int mptcp_subflow_create_socket(struct sock *sk, struct socket **new_sock);
 void mptcp_info2sockaddr(const struct mptcp_addr_info *info,
 			 struct sockaddr_storage *addr,
 			 unsigned short family);
+struct mptcp_sched_ops *mptcp_sched_find(const char *name);
+int mptcp_register_scheduler(struct mptcp_sched_ops *sched);
+void mptcp_unregister_scheduler(struct mptcp_sched_ops *sched);
 
 static inline bool __mptcp_subflow_active(struct mptcp_subflow_context *subflow)
 {
diff --git a/net/mptcp/sched.c b/net/mptcp/sched.c
new file mode 100644
index 000000000000..c5d3bbafba71
--- /dev/null
+++ b/net/mptcp/sched.c
@@ -0,0 +1,56 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Multipath TCP
+ *
+ * Copyright (c) 2022, SUSE.
+ */
+
+#define pr_fmt(fmt) "MPTCP: " fmt
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/list.h>
+#include <linux/rculist.h>
+#include <linux/spinlock.h>
+#include "protocol.h"
+
+static DEFINE_SPINLOCK(mptcp_sched_list_lock);
+static LIST_HEAD(mptcp_sched_list);
+
+/* Must be called with rcu read lock held */
+struct mptcp_sched_ops *mptcp_sched_find(const char *name)
+{
+	struct mptcp_sched_ops *sched, *ret = NULL;
+
+	list_for_each_entry_rcu(sched, &mptcp_sched_list, list) {
+		if (!strcmp(sched->name, name)) {
+			ret = sched;
+			break;
+		}
+	}
+
+	return ret;
+}
+
+int mptcp_register_scheduler(struct mptcp_sched_ops *sched)
+{
+	if (!sched->get_subflow)
+		return -EINVAL;
+
+	spin_lock(&mptcp_sched_list_lock);
+	if (mptcp_sched_find(sched->name)) {
+		spin_unlock(&mptcp_sched_list_lock);
+		return -EEXIST;
+	}
+	list_add_tail_rcu(&sched->list, &mptcp_sched_list);
+	spin_unlock(&mptcp_sched_list_lock);
+
+	pr_debug("%s registered", sched->name);
+	return 0;
+}
+
+void mptcp_unregister_scheduler(struct mptcp_sched_ops *sched)
+{
+	spin_lock(&mptcp_sched_list_lock);
+	list_del_rcu(&sched->list);
+	spin_unlock(&mptcp_sched_list_lock);
+}
diff --git a/tools/testing/selftests/bpf/bpf_mptcp_helpers.h b/tools/testing/selftests/bpf/bpf_mptcp_helpers.h
index b5a43b108982..8587f951ccfe 100644
--- a/tools/testing/selftests/bpf/bpf_mptcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_mptcp_helpers.h
@@ -6,6 +6,18 @@
 
 #include "bpf_tcp_helpers.h"
 
+#define MPTCP_SCHED_NAME_MAX 16
+
+struct mptcp_sched_ops {
+	char name[MPTCP_SCHED_NAME_MAX];
+
+	void (*init)(struct mptcp_sock *msk);
+	void (*release)(struct mptcp_sock *msk);
+
+	struct sock *(*get_subflow)(struct mptcp_sock *msk, bool reinject);
+	void *owner;
+};
+
 struct mptcp_sock {
 	struct inet_connection_sock	sk;
 
-- 
2.34.1


* [PATCH mptcp-next v13 2/9] mptcp: register default scheduler
  2022-04-21  6:22 [PATCH mptcp-next v13 0/9] BPF packet scheduler Geliang Tang
  2022-04-21  6:22 ` [PATCH mptcp-next v13 1/9] mptcp: add struct mptcp_sched_ops Geliang Tang
@ 2022-04-21  6:22 ` Geliang Tang
  2022-04-21  6:22 ` [PATCH mptcp-next v13 3/9] mptcp: add a new sysctl scheduler Geliang Tang
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 12+ messages in thread
From: Geliang Tang @ 2022-04-21  6:22 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch defines the default packet scheduler, mptcp_sched_default, and
registers it in mptcp_sched_init(), which is invoked in mptcp_proto_init().
Skip deleting this default scheduler in mptcp_unregister_scheduler().

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 net/mptcp/protocol.c |  9 +++++++++
 net/mptcp/protocol.h |  2 ++
 net/mptcp/sched.c    | 14 ++++++++++++++
 3 files changed, 25 insertions(+)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 6d59bfdd6bbd..4a9a55c4e2b1 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2251,6 +2251,14 @@ static struct sock *mptcp_subflow_get_retrans(struct mptcp_sock *msk)
 	return min_stale_count > 1 ? backup : NULL;
 }
 
+struct sock *mptcp_get_subflow_default(struct mptcp_sock *msk, bool reinject)
+{
+	if (reinject)
+		return mptcp_subflow_get_retrans(msk);
+
+	return mptcp_subflow_get_send(msk);
+}
+
 static void mptcp_dispose_initial_subflow(struct mptcp_sock *msk)
 {
 	if (msk->subflow) {
@@ -3805,6 +3813,7 @@ void __init mptcp_proto_init(void)
 
 	mptcp_subflow_init();
 	mptcp_pm_init();
+	mptcp_sched_init();
 	mptcp_token_init();
 
 	if (proto_register(&mptcp_prot, MPTCP_USE_SLAB) != 0)
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 18f8739bbf9c..b71d430ec868 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -624,6 +624,8 @@ void mptcp_info2sockaddr(const struct mptcp_addr_info *info,
 struct mptcp_sched_ops *mptcp_sched_find(const char *name);
 int mptcp_register_scheduler(struct mptcp_sched_ops *sched);
 void mptcp_unregister_scheduler(struct mptcp_sched_ops *sched);
+struct sock *mptcp_get_subflow_default(struct mptcp_sock *msk, bool reinject);
+void mptcp_sched_init(void);
 
 static inline bool __mptcp_subflow_active(struct mptcp_subflow_context *subflow)
 {
diff --git a/net/mptcp/sched.c b/net/mptcp/sched.c
index c5d3bbafba71..bd0beff8cac8 100644
--- a/net/mptcp/sched.c
+++ b/net/mptcp/sched.c
@@ -13,6 +13,12 @@
 #include <linux/spinlock.h>
 #include "protocol.h"
 
+static struct mptcp_sched_ops mptcp_sched_default = {
+	.get_subflow    = mptcp_get_subflow_default,
+	.name           = "default",
+	.owner          = THIS_MODULE,
+};
+
 static DEFINE_SPINLOCK(mptcp_sched_list_lock);
 static LIST_HEAD(mptcp_sched_list);
 
@@ -50,7 +56,15 @@ int mptcp_register_scheduler(struct mptcp_sched_ops *sched)
 
 void mptcp_unregister_scheduler(struct mptcp_sched_ops *sched)
 {
+	if (sched == &mptcp_sched_default)
+		return;
+
 	spin_lock(&mptcp_sched_list_lock);
 	list_del_rcu(&sched->list);
 	spin_unlock(&mptcp_sched_list_lock);
 }
+
+void mptcp_sched_init(void)
+{
+	mptcp_register_scheduler(&mptcp_sched_default);
+}
-- 
2.34.1


* [PATCH mptcp-next v13 3/9] mptcp: add a new sysctl scheduler
  2022-04-21  6:22 [PATCH mptcp-next v13 0/9] BPF packet scheduler Geliang Tang
  2022-04-21  6:22 ` [PATCH mptcp-next v13 1/9] mptcp: add struct mptcp_sched_ops Geliang Tang
  2022-04-21  6:22 ` [PATCH mptcp-next v13 2/9] mptcp: register default scheduler Geliang Tang
@ 2022-04-21  6:22 ` Geliang Tang
  2022-04-21  6:22 ` [PATCH mptcp-next v13 4/9] mptcp: add sched in mptcp_sock Geliang Tang
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 12+ messages in thread
From: Geliang Tang @ 2022-04-21  6:22 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch adds a new sysctl, named scheduler, to support the selection of
different schedulers. Export the mptcp_get_scheduler() helper to read this
sysctl.
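
For example, once a scheduler named "bpf_first" (added later in this series)
is registered, it can be selected per namespace with something like:

    # sysctl -w net.mptcp.scheduler=bpf_first
    # sysctl net.mptcp.scheduler
    net.mptcp.scheduler = bpf_first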

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 Documentation/networking/mptcp-sysctl.rst |  8 ++++++++
 net/mptcp/ctrl.c                          | 14 ++++++++++++++
 net/mptcp/protocol.h                      |  1 +
 3 files changed, 23 insertions(+)

diff --git a/Documentation/networking/mptcp-sysctl.rst b/Documentation/networking/mptcp-sysctl.rst
index e263dfcc4b40..d9e69fdc7ea3 100644
--- a/Documentation/networking/mptcp-sysctl.rst
+++ b/Documentation/networking/mptcp-sysctl.rst
@@ -75,3 +75,11 @@ stale_loss_cnt - INTEGER
 	This is a per-namespace sysctl.
 
 	Default: 4
+
+scheduler - STRING
+	Select the scheduler of your choice.
+
+	Support for selection of different schedulers. This is a per-namespace
+	sysctl.
+
+	Default: "default"
diff --git a/net/mptcp/ctrl.c b/net/mptcp/ctrl.c
index ae20b7d92e28..c46c22a84d23 100644
--- a/net/mptcp/ctrl.c
+++ b/net/mptcp/ctrl.c
@@ -32,6 +32,7 @@ struct mptcp_pernet {
 	u8 checksum_enabled;
 	u8 allow_join_initial_addr_port;
 	u8 pm_type;
+	char scheduler[MPTCP_SCHED_NAME_MAX];
 };
 
 static struct mptcp_pernet *mptcp_get_pernet(const struct net *net)
@@ -69,6 +70,11 @@ int mptcp_get_pm_type(const struct net *net)
 	return mptcp_get_pernet(net)->pm_type;
 }
 
+const char *mptcp_get_scheduler(const struct net *net)
+{
+	return mptcp_get_pernet(net)->scheduler;
+}
+
 static void mptcp_pernet_set_defaults(struct mptcp_pernet *pernet)
 {
 	pernet->mptcp_enabled = 1;
@@ -77,6 +83,7 @@ static void mptcp_pernet_set_defaults(struct mptcp_pernet *pernet)
 	pernet->allow_join_initial_addr_port = 1;
 	pernet->stale_loss_cnt = 4;
 	pernet->pm_type = MPTCP_PM_TYPE_KERNEL;
+	strcpy(pernet->scheduler, "default");
 }
 
 #ifdef CONFIG_SYSCTL
@@ -128,6 +135,12 @@ static struct ctl_table mptcp_sysctl_table[] = {
 		.extra1       = SYSCTL_ZERO,
 		.extra2       = &mptcp_pm_type_max
 	},
+	{
+		.procname = "scheduler",
+		.maxlen	= MPTCP_SCHED_NAME_MAX,
+		.mode = 0644,
+		.proc_handler = proc_dostring,
+	},
 	{}
 };
 
@@ -149,6 +162,7 @@ static int mptcp_pernet_new_table(struct net *net, struct mptcp_pernet *pernet)
 	table[3].data = &pernet->allow_join_initial_addr_port;
 	table[4].data = &pernet->stale_loss_cnt;
 	table[5].data = &pernet->pm_type;
+	table[6].data = &pernet->scheduler;
 
 	hdr = register_net_sysctl(net, MPTCP_SYSCTL_PATH, table);
 	if (!hdr)
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index b71d430ec868..bfa37f6b9063 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -596,6 +596,7 @@ int mptcp_is_checksum_enabled(const struct net *net);
 int mptcp_allow_join_id0(const struct net *net);
 unsigned int mptcp_stale_loss_cnt(const struct net *net);
 int mptcp_get_pm_type(const struct net *net);
+const char *mptcp_get_scheduler(const struct net *net);
 void mptcp_subflow_fully_established(struct mptcp_subflow_context *subflow,
 				     struct mptcp_options_received *mp_opt);
 bool __mptcp_retransmit_pending_data(struct sock *sk);
-- 
2.34.1


* [PATCH mptcp-next v13 4/9] mptcp: add sched in mptcp_sock
  2022-04-21  6:22 [PATCH mptcp-next v13 0/9] BPF packet scheduler Geliang Tang
                   ` (2 preceding siblings ...)
  2022-04-21  6:22 ` [PATCH mptcp-next v13 3/9] mptcp: add a new sysctl scheduler Geliang Tang
@ 2022-04-21  6:22 ` Geliang Tang
  2022-04-21  6:22 ` [PATCH mptcp-next v13 5/9] mptcp: add get_subflow wrapper Geliang Tang
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 12+ messages in thread
From: Geliang Tang @ 2022-04-21  6:22 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch adds a new struct member, sched, to struct mptcp_sock, along
with two helpers, mptcp_init_sched() and mptcp_release_sched(), to
initialize and release it.

Initialize it with the sysctl scheduler in mptcp_init_sock(), copy the
scheduler from the parent in mptcp_sk_clone(), and release it in
__mptcp_destroy_sock().

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 net/mptcp/protocol.c                          |  7 ++++
 net/mptcp/protocol.h                          |  4 +++
 net/mptcp/sched.c                             | 34 +++++++++++++++++++
 .../testing/selftests/bpf/bpf_mptcp_helpers.h |  1 +
 4 files changed, 46 insertions(+)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 4a9a55c4e2b1..3a5189b2d9b8 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2665,6 +2665,11 @@ static int mptcp_init_sock(struct sock *sk)
 	if (ret)
 		return ret;
 
+	ret = mptcp_init_sched(mptcp_sk(sk),
+			       mptcp_sched_find(mptcp_get_scheduler(net)));
+	if (ret)
+		return ret;
+
 	/* fetch the ca name; do it outside __mptcp_init_sock(), so that clone will
 	 * propagate the correct value
 	 */
@@ -2824,6 +2829,7 @@ static void __mptcp_destroy_sock(struct sock *sk)
 	sk_stop_timer(sk, &sk->sk_timer);
 	mptcp_data_unlock(sk);
 	msk->pm.status = 0;
+	mptcp_release_sched(msk);
 
 	/* clears msk->subflow, allowing the following loop to close
 	 * even the initial subflow
@@ -3001,6 +3007,7 @@ struct sock *mptcp_sk_clone(const struct sock *sk,
 	msk->snd_una = msk->write_seq;
 	msk->wnd_end = msk->snd_nxt + req->rsk_rcv_wnd;
 	msk->setsockopt_seq = mptcp_sk(sk)->setsockopt_seq;
+	mptcp_init_sched(msk, mptcp_sk(sk)->sched);
 
 	if (mp_opt->suboptions & OPTIONS_MPTCP_MPC) {
 		msk->can_ack = true;
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index bfa37f6b9063..427a42e04ae8 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -298,6 +298,7 @@ struct mptcp_sock {
 	struct socket	*subflow; /* outgoing connect/listener/!mp_capable */
 	struct sock	*first;
 	struct mptcp_pm_data	pm;
+	struct mptcp_sched_ops	*sched;
 	struct {
 		u32	space;	/* bytes copied in last measurement window */
 		u32	copied; /* bytes copied in this measurement window */
@@ -627,6 +628,9 @@ int mptcp_register_scheduler(struct mptcp_sched_ops *sched);
 void mptcp_unregister_scheduler(struct mptcp_sched_ops *sched);
 struct sock *mptcp_get_subflow_default(struct mptcp_sock *msk, bool reinject);
 void mptcp_sched_init(void);
+int mptcp_init_sched(struct mptcp_sock *msk,
+		     struct mptcp_sched_ops *sched);
+void mptcp_release_sched(struct mptcp_sock *msk);
 
 static inline bool __mptcp_subflow_active(struct mptcp_subflow_context *subflow)
 {
diff --git a/net/mptcp/sched.c b/net/mptcp/sched.c
index bd0beff8cac8..8025dc51fbe9 100644
--- a/net/mptcp/sched.c
+++ b/net/mptcp/sched.c
@@ -68,3 +68,37 @@ void mptcp_sched_init(void)
 {
 	mptcp_register_scheduler(&mptcp_sched_default);
 }
+
+int mptcp_init_sched(struct mptcp_sock *msk,
+		     struct mptcp_sched_ops *sched)
+{
+	struct mptcp_sched_ops *sched_init = &mptcp_sched_default;
+
+	if (sched)
+		sched_init = sched;
+
+	if (!bpf_try_module_get(sched_init, sched_init->owner))
+		return -EBUSY;
+
+	msk->sched = sched_init;
+	if (msk->sched->init)
+		msk->sched->init(msk);
+
+	pr_debug("sched=%s", msk->sched->name);
+
+	return 0;
+}
+
+void mptcp_release_sched(struct mptcp_sock *msk)
+{
+	struct mptcp_sched_ops *sched = msk->sched;
+
+	if (!sched)
+		return;
+
+	msk->sched = NULL;
+	if (sched->release)
+		sched->release(msk);
+
+	bpf_module_put(sched, sched->owner);
+}
diff --git a/tools/testing/selftests/bpf/bpf_mptcp_helpers.h b/tools/testing/selftests/bpf/bpf_mptcp_helpers.h
index 8587f951ccfe..81a9c5d91aae 100644
--- a/tools/testing/selftests/bpf/bpf_mptcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_mptcp_helpers.h
@@ -23,6 +23,7 @@ struct mptcp_sock {
 
 	__u32		token;
 	struct sock	*first;
+	struct mptcp_sched_ops *sched;
 	char		ca_name[TCP_CA_NAME_MAX];
 } __attribute__((preserve_access_index));
 
-- 
2.34.1


* [PATCH mptcp-next v13 5/9] mptcp: add get_subflow wrapper
  2022-04-21  6:22 [PATCH mptcp-next v13 0/9] BPF packet scheduler Geliang Tang
                   ` (3 preceding siblings ...)
  2022-04-21  6:22 ` [PATCH mptcp-next v13 4/9] mptcp: add sched in mptcp_sock Geliang Tang
@ 2022-04-21  6:22 ` Geliang Tang
  2022-04-21  6:22 ` [PATCH mptcp-next v13 6/9] mptcp: add bpf_mptcp_sched_ops Geliang Tang
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 12+ messages in thread
From: Geliang Tang @ 2022-04-21  6:22 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch defines a new wrapper, mptcp_sched_get_subflow(), which invokes
the get_subflow() callback of msk->sched. Use the wrapper instead of calling
mptcp_subflow_get_send() or mptcp_subflow_get_retrans() directly.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 net/mptcp/protocol.c |  9 ++++-----
 net/mptcp/protocol.h | 11 +++++++++++
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 3a5189b2d9b8..00f65eb71104 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -1507,7 +1507,6 @@ static struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk)
 	subflow->avg_pacing_rate = div_u64((u64)subflow->avg_pacing_rate * wmem +
 					   READ_ONCE(ssk->sk_pacing_rate) * burst,
 					   burst + wmem);
-	msk->last_snd = ssk;
 	msk->snd_burst = burst;
 	return ssk;
 }
@@ -1567,7 +1566,7 @@ void __mptcp_push_pending(struct sock *sk, unsigned int flags)
 			int ret = 0;
 
 			prev_ssk = ssk;
-			ssk = mptcp_subflow_get_send(msk);
+			ssk = mptcp_sched_get_subflow(msk, false);
 
 			/* First check. If the ssk has changed since
 			 * the last round, release prev_ssk
@@ -1636,7 +1635,7 @@ static void __mptcp_subflow_push_pending(struct sock *sk, struct sock *ssk)
 			 * check for a different subflow usage only after
 			 * spooling the first chunk of data
 			 */
-			xmit_ssk = first ? ssk : mptcp_subflow_get_send(mptcp_sk(sk));
+			xmit_ssk = first ? ssk : mptcp_sched_get_subflow(mptcp_sk(sk), false);
 			if (!xmit_ssk)
 				goto out;
 			if (xmit_ssk != ssk) {
@@ -2481,7 +2480,7 @@ static void __mptcp_retrans(struct sock *sk)
 	mptcp_clean_una_wakeup(sk);
 
 	/* first check ssk: need to kick "stale" logic */
-	ssk = mptcp_subflow_get_retrans(msk);
+	ssk = mptcp_sched_get_subflow(msk, true);
 	dfrag = mptcp_rtx_head(sk);
 	if (!dfrag) {
 		if (mptcp_data_fin_enabled(msk)) {
@@ -3146,7 +3145,7 @@ void __mptcp_check_push(struct sock *sk, struct sock *ssk)
 		return;
 
 	if (!sock_owned_by_user(sk)) {
-		struct sock *xmit_ssk = mptcp_subflow_get_send(mptcp_sk(sk));
+		struct sock *xmit_ssk = mptcp_sched_get_subflow(mptcp_sk(sk), false);
 
 		if (xmit_ssk == ssk)
 			__mptcp_subflow_push_pending(sk, ssk);
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 427a42e04ae8..f31bc3271bcc 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -632,6 +632,17 @@ int mptcp_init_sched(struct mptcp_sock *msk,
 		     struct mptcp_sched_ops *sched);
 void mptcp_release_sched(struct mptcp_sock *msk);
 
+static inline struct sock *mptcp_sched_get_subflow(struct mptcp_sock *msk, bool reinject)
+{
+	struct sock *ssk = msk->sched ? INDIRECT_CALL_INET_1(msk->sched->get_subflow,
+							     mptcp_get_subflow_default,
+							     msk, reinject) :
+					mptcp_get_subflow_default(msk, reinject);
+
+	msk->last_snd = ssk;
+	return ssk;
+}
+
 static inline bool __mptcp_subflow_active(struct mptcp_subflow_context *subflow)
 {
 	struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
-- 
2.34.1


* [PATCH mptcp-next v13 6/9] mptcp: add bpf_mptcp_sched_ops
  2022-04-21  6:22 [PATCH mptcp-next v13 0/9] BPF packet scheduler Geliang Tang
                   ` (4 preceding siblings ...)
  2022-04-21  6:22 ` [PATCH mptcp-next v13 5/9] mptcp: add get_subflow wrapper Geliang Tang
@ 2022-04-21  6:22 ` Geliang Tang
  2022-04-21  6:22 ` [PATCH mptcp-next v13 7/9] mptcp: add call_me_again flag Geliang Tang
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 12+ messages in thread
From: Geliang Tang @ 2022-04-21  6:22 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch implements a new struct bpf_struct_ops, bpf_mptcp_sched_ops, and
registers and unregisters the BPF scheduler in .reg and .unreg.

This MPTCP BPF scheduler implementation is similar to the BPF TCP CC one;
net/ipv4/bpf_tcp_ca.c served as a reference for this patch.
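
From user space, a BPF scheduler implementing these ops is expected to be
attached like a BPF TCP CC struct_ops, e.g. with libbpf (a sketch based on
the selftest added later in this series, where "first" is the name of the
SEC(".struct_ops") map in the BPF program):

        struct bpf_link *link;

        link = bpf_map__attach_struct_ops(skel->maps.first);
        if (!link)
                return;         /* attach failed */

        /* ... run an MPTCP workload, scheduled by the BPF program ... */

        bpf_link__destroy(link);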

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 kernel/bpf/bpf_struct_ops_types.h |   4 +
 net/mptcp/bpf.c                   | 129 ++++++++++++++++++++++++++++++
 2 files changed, 133 insertions(+)

diff --git a/kernel/bpf/bpf_struct_ops_types.h b/kernel/bpf/bpf_struct_ops_types.h
index 5678a9ddf817..5a6b0c0d8d3d 100644
--- a/kernel/bpf/bpf_struct_ops_types.h
+++ b/kernel/bpf/bpf_struct_ops_types.h
@@ -8,5 +8,9 @@ BPF_STRUCT_OPS_TYPE(bpf_dummy_ops)
 #ifdef CONFIG_INET
 #include <net/tcp.h>
 BPF_STRUCT_OPS_TYPE(tcp_congestion_ops)
+#ifdef CONFIG_MPTCP
+#include <net/mptcp.h>
+BPF_STRUCT_OPS_TYPE(mptcp_sched_ops)
+#endif
 #endif
 #endif
diff --git a/net/mptcp/bpf.c b/net/mptcp/bpf.c
index 535602ba2582..e849fc3fb6c5 100644
--- a/net/mptcp/bpf.c
+++ b/net/mptcp/bpf.c
@@ -10,8 +10,137 @@
 #define pr_fmt(fmt) "MPTCP: " fmt
 
 #include <linux/bpf.h>
+#include <linux/bpf_verifier.h>
+#include <linux/btf.h>
+#include <linux/btf_ids.h>
 #include "protocol.h"
 
+extern struct bpf_struct_ops bpf_mptcp_sched_ops;
+extern struct btf *btf_vmlinux;
+
+static u32 optional_ops[] = {
+	offsetof(struct mptcp_sched_ops, init),
+	offsetof(struct mptcp_sched_ops, release),
+	offsetof(struct mptcp_sched_ops, get_subflow),
+};
+
+static const struct bpf_func_proto *
+bpf_mptcp_sched_get_func_proto(enum bpf_func_id func_id,
+			       const struct bpf_prog *prog)
+{
+	return bpf_base_func_proto(func_id);
+}
+
+static int bpf_mptcp_sched_btf_struct_access(struct bpf_verifier_log *log,
+					     const struct btf *btf,
+					     const struct btf_type *t, int off,
+					     int size, enum bpf_access_type atype,
+					     u32 *next_btf_id,
+					     enum bpf_type_flag *flag)
+{
+	const struct btf_type *state;
+	u32 type_id;
+
+	if (atype == BPF_READ)
+		return btf_struct_access(log, btf, t, off, size, atype,
+					 next_btf_id, flag);
+
+	type_id = btf_find_by_name_kind(btf, "mptcp_sock", BTF_KIND_STRUCT);
+	if (type_id < 0)
+		return -EINVAL;
+
+	state = btf_type_by_id(btf, type_id);
+	if (t != state) {
+		bpf_log(log, "only read is supported\n");
+		return -EACCES;
+	}
+
+	return NOT_INIT;
+}
+
+static const struct bpf_verifier_ops bpf_mptcp_sched_verifier_ops = {
+	.get_func_proto		= bpf_mptcp_sched_get_func_proto,
+	.is_valid_access	= bpf_tracing_btf_ctx_access,
+	.btf_struct_access	= bpf_mptcp_sched_btf_struct_access,
+};
+
+static int bpf_mptcp_sched_reg(void *kdata)
+{
+	return mptcp_register_scheduler(kdata);
+}
+
+static void bpf_mptcp_sched_unreg(void *kdata)
+{
+	mptcp_unregister_scheduler(kdata);
+}
+
+static int bpf_mptcp_sched_check_member(const struct btf_type *t,
+					const struct btf_member *member)
+{
+	return 0;
+}
+
+static bool is_optional(u32 member_offset)
+{
+	unsigned int i;
+
+	for (i = 0; i < ARRAY_SIZE(optional_ops); i++) {
+		if (member_offset == optional_ops[i])
+			return true;
+	}
+
+	return false;
+}
+
+static int bpf_mptcp_sched_init_member(const struct btf_type *t,
+				       const struct btf_member *member,
+				       void *kdata, const void *udata)
+{
+	const struct mptcp_sched_ops *usched;
+	struct mptcp_sched_ops *sched;
+	int prog_fd;
+	u32 moff;
+
+	usched = (const struct mptcp_sched_ops *)udata;
+	sched = (struct mptcp_sched_ops *)kdata;
+
+	moff = __btf_member_bit_offset(t, member) / 8;
+	switch (moff) {
+	case offsetof(struct mptcp_sched_ops, name):
+		if (bpf_obj_name_cpy(sched->name, usched->name,
+				     sizeof(sched->name)) <= 0)
+			return -EINVAL;
+		if (mptcp_sched_find(usched->name))
+			return -EEXIST;
+		return 1;
+	}
+
+	if (!btf_type_resolve_func_ptr(btf_vmlinux, member->type, NULL))
+		return 0;
+
+	/* Ensure bpf_prog is provided for compulsory func ptr */
+	prog_fd = (int)(*(unsigned long *)(udata + moff));
+	if (!prog_fd && !is_optional(moff))
+		return -EINVAL;
+
+	return 0;
+}
+
+static int bpf_mptcp_sched_init(struct btf *btf)
+{
+	return 0;
+}
+
+struct bpf_struct_ops bpf_mptcp_sched_ops = {
+	.verifier_ops	= &bpf_mptcp_sched_verifier_ops,
+	.reg		= bpf_mptcp_sched_reg,
+	.unreg		= bpf_mptcp_sched_unreg,
+	.check_member	= bpf_mptcp_sched_check_member,
+	.init_member	= bpf_mptcp_sched_init_member,
+	.init		= bpf_mptcp_sched_init,
+	.name		= "mptcp_sched_ops",
+};
+
 struct mptcp_sock *bpf_mptcp_sock_from_subflow(struct sock *sk)
 {
 	if (sk && sk_fullsock(sk) && sk->sk_protocol == IPPROTO_TCP && sk_is_mptcp(sk))
-- 
2.34.1


* [PATCH mptcp-next v13 7/9] mptcp: add call_me_again flag
  2022-04-21  6:22 [PATCH mptcp-next v13 0/9] BPF packet scheduler Geliang Tang
                   ` (5 preceding siblings ...)
  2022-04-21  6:22 ` [PATCH mptcp-next v13 6/9] mptcp: add bpf_mptcp_sched_ops Geliang Tang
@ 2022-04-21  6:22 ` Geliang Tang
  2022-04-22  0:55   ` Mat Martineau
  2022-04-21  6:22 ` [PATCH mptcp-next v13 8/9] selftests: bpf: add bpf_first scheduler Geliang Tang
  2022-04-21  6:22 ` [PATCH mptcp-next v13 9/9] selftests: bpf: add bpf_first test Geliang Tang
  8 siblings, 1 reply; 12+ messages in thread
From: Geliang Tang @ 2022-04-21  6:22 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

To support a "redundant" packet scheduler in the future, this patch adds a
flag to struct mptcp_sock, named call_me_again, to indicate that the
get_subflow() function needs to be called again.

Export it in bpf_mptcp_helpers.h, and add BPF write access to it.
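
To illustrate the intent (a purely hypothetical sketch; the helper functions
used below do not exist in this series), a future "redundant" BPF scheduler
could do something like:

        struct sock *BPF_STRUCT_OPS(bpf_red_get_subflow, struct mptcp_sock *msk,
                                    bool reinject)
        {
                /* hypothetical helpers: keep asking to be called again until
                 * every active subflow has been returned once for this data
                 */
                msk->call_me_again = hypo_more_subflows_left(msk);

                return hypo_next_subflow(msk);
        }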

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 net/mptcp/bpf.c                                 | 16 ++++++++++++++++
 net/mptcp/protocol.h                            |  1 +
 net/mptcp/sched.c                               |  1 +
 tools/testing/selftests/bpf/bpf_mptcp_helpers.h |  1 +
 4 files changed, 19 insertions(+)

diff --git a/net/mptcp/bpf.c b/net/mptcp/bpf.c
index e849fc3fb6c5..1611dbe63eb2 100644
--- a/net/mptcp/bpf.c
+++ b/net/mptcp/bpf.c
@@ -40,6 +40,7 @@ static int bpf_mptcp_sched_btf_struct_access(struct bpf_verifier_log *log,
 {
 	const struct btf_type *state;
 	u32 type_id;
+	size_t end;
 
 	if (atype == BPF_READ)
 		return btf_struct_access(log, btf, t, off, size, atype,
@@ -55,6 +56,21 @@ static int bpf_mptcp_sched_btf_struct_access(struct bpf_verifier_log *log,
 		return -EACCES;
 	}
 
+	switch (off) {
+	case offsetofend(struct mptcp_sock, sched):
+		end = offsetofend(struct mptcp_sock, sched) + sizeof(u8);
+		break;
+	default:
+		bpf_log(log, "no write support to mptcp_sock at off %d\n", off);
+		return -EACCES;
+	}
+
+	if (off + size > end) {
+		bpf_log(log, "access beyond mptcp_sock at off %u size %u ended at %zu",
+			off, size, end);
+		return -EACCES;
+	}
+
 	return NOT_INIT;
 }
 
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index f31bc3271bcc..13c6ad5fbade 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -299,6 +299,7 @@ struct mptcp_sock {
 	struct sock	*first;
 	struct mptcp_pm_data	pm;
 	struct mptcp_sched_ops	*sched;
+	u8		call_me_again:1;
 	struct {
 		u32	space;	/* bytes copied in last measurement window */
 		u32	copied; /* bytes copied in this measurement window */
diff --git a/net/mptcp/sched.c b/net/mptcp/sched.c
index 8025dc51fbe9..05ab45505f88 100644
--- a/net/mptcp/sched.c
+++ b/net/mptcp/sched.c
@@ -83,6 +83,7 @@ int mptcp_init_sched(struct mptcp_sock *msk,
 	msk->sched = sched_init;
 	if (msk->sched->init)
 		msk->sched->init(msk);
+	msk->call_me_again = 0;
 
 	pr_debug("sched=%s", msk->sched->name);
 
diff --git a/tools/testing/selftests/bpf/bpf_mptcp_helpers.h b/tools/testing/selftests/bpf/bpf_mptcp_helpers.h
index 81a9c5d91aae..dacee63455f5 100644
--- a/tools/testing/selftests/bpf/bpf_mptcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_mptcp_helpers.h
@@ -24,6 +24,7 @@ struct mptcp_sock {
 	__u32		token;
 	struct sock	*first;
 	struct mptcp_sched_ops *sched;
+	__u8		call_me_again:1;
 	char		ca_name[TCP_CA_NAME_MAX];
 } __attribute__((preserve_access_index));
 
-- 
2.34.1


* [PATCH mptcp-next v13 8/9] selftests: bpf: add bpf_first scheduler
  2022-04-21  6:22 [PATCH mptcp-next v13 0/9] BPF packet scheduler Geliang Tang
                   ` (6 preceding siblings ...)
  2022-04-21  6:22 ` [PATCH mptcp-next v13 7/9] mptcp: add call_me_again flag Geliang Tang
@ 2022-04-21  6:22 ` Geliang Tang
  2022-04-21  6:22 ` [PATCH mptcp-next v13 9/9] selftests: bpf: add bpf_first test Geliang Tang
  8 siblings, 0 replies; 12+ messages in thread
From: Geliang Tang @ 2022-04-21  6:22 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch implements the simplest MPTCP scheduler, named bpf_first, which
always picks the first subflow to send data. It is a sample MPTCP BPF
scheduler implementation.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 .../selftests/bpf/progs/mptcp_bpf_first.c     | 33 +++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_first.c

diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
new file mode 100644
index 000000000000..892be785dda2
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
@@ -0,0 +1,33 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022, SUSE. */
+
+#include <linux/bpf.h>
+#include "bpf_mptcp_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+SEC("struct_ops/mptcp_sched_first_init")
+void BPF_PROG(mptcp_sched_first_init, struct mptcp_sock *msk)
+{
+}
+
+SEC("struct_ops/mptcp_sched_first_release")
+void BPF_PROG(mptcp_sched_first_release, struct mptcp_sock *msk)
+{
+}
+
+struct sock *BPF_STRUCT_OPS(bpf_first_get_subflow, struct mptcp_sock *msk, bool reinject)
+{
+	struct sock *ssk = msk->first;
+
+	msk->call_me_again = 0;
+	return ssk;
+}
+
+SEC(".struct_ops")
+struct mptcp_sched_ops first = {
+	.init		= (void *)mptcp_sched_first_init,
+	.release	= (void *)mptcp_sched_first_release,
+	.get_subflow	= (void *)bpf_first_get_subflow,
+	.name		= "bpf_first",
+};
-- 
2.34.1


* [PATCH mptcp-next v13 9/9] selftests: bpf: add bpf_first test
  2022-04-21  6:22 [PATCH mptcp-next v13 0/9] BPF packet scheduler Geliang Tang
                   ` (7 preceding siblings ...)
  2022-04-21  6:22 ` [PATCH mptcp-next v13 8/9] selftests: bpf: add bpf_first scheduler Geliang Tang
@ 2022-04-21  6:22 ` Geliang Tang
  2022-04-21  7:56   ` selftests: bpf: add bpf_first test: Tests Results MPTCP CI
  8 siblings, 1 reply; 12+ messages in thread
From: Geliang Tang @ 2022-04-21  6:22 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch extends the MPTCP test base to support MPTCP packet scheduler
tests, and adds the bpf_first scheduler test to it. Use sysctl to set
net.mptcp.scheduler to select this scheduler.

Some code in send_data() is from prog_tests/bpf_tcp_ca.c.
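
Once built, the new subtest is expected to run with the existing MPTCP
subtests via the usual BPF selftest runner, e.g. something like:

    # ./test_progs -t mptcp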

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 .../testing/selftests/bpf/prog_tests/mptcp.c  | 113 ++++++++++++++++++
 1 file changed, 113 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
index 7e704f5aab05..377ebc0fbcbe 100644
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
@@ -1,9 +1,11 @@
 // SPDX-License-Identifier: GPL-2.0
 /* Copyright (c) 2020, Tessares SA. */
 
+#include <sys/param.h>
 #include <test_progs.h>
 #include "cgroup_helpers.h"
 #include "network_helpers.h"
+#include "mptcp_bpf_first.skel.h"
 
 #ifndef TCP_CA_NAME_MAX
 #define TCP_CA_NAME_MAX	16
@@ -19,6 +21,8 @@ struct mptcp_storage {
 };
 
 static char monitor_log_path[64];
+static const unsigned int total_bytes = 10 * 1024 * 1024;
+static int stop, duration;
 
 static int verify_tsk(int map_fd, int client_fd)
 {
@@ -251,8 +255,117 @@ void test_base(void)
 	close(cgroup_fd);
 }
 
+static void *server(void *arg)
+{
+	int lfd = (int)(long)arg, err = 0, fd;
+	ssize_t nr_sent = 0, bytes = 0;
+	char batch[1500];
+
+	fd = accept(lfd, NULL, NULL);
+	while (fd == -1) {
+		if (errno == EINTR)
+			continue;
+		err = -errno;
+		goto done;
+	}
+
+	if (settimeo(fd, 0)) {
+		err = -errno;
+		goto done;
+	}
+
+	while (bytes < total_bytes && !READ_ONCE(stop)) {
+		nr_sent = send(fd, &batch,
+			       MIN(total_bytes - bytes, sizeof(batch)), 0);
+		if (nr_sent == -1 && errno == EINTR)
+			continue;
+		if (nr_sent == -1) {
+			err = -errno;
+			break;
+		}
+		bytes += nr_sent;
+	}
+
+	CHECK(bytes != total_bytes, "send", "%zd != %u nr_sent:%zd errno:%d\n",
+	      bytes, total_bytes, nr_sent, errno);
+
+done:
+	if (fd >= 0)
+		close(fd);
+	if (err) {
+		WRITE_ONCE(stop, 1);
+		return ERR_PTR(err);
+	}
+	return NULL;
+}
+
+static void send_data(int lfd, int fd)
+{
+	ssize_t nr_recv = 0, bytes = 0;
+	pthread_t srv_thread;
+	void *thread_ret;
+	char batch[1500];
+	int err;
+
+	WRITE_ONCE(stop, 0);
+
+	err = pthread_create(&srv_thread, NULL, server, (void *)(long)lfd);
+	if (CHECK(err != 0, "pthread_create", "err:%d errno:%d\n", err, errno))
+		return;
+
+	/* recv total_bytes */
+	while (bytes < total_bytes && !READ_ONCE(stop)) {
+		nr_recv = recv(fd, &batch,
+			       MIN(total_bytes - bytes, sizeof(batch)), 0);
+		if (nr_recv == -1 && errno == EINTR)
+			continue;
+		if (nr_recv == -1)
+			break;
+		bytes += nr_recv;
+	}
+
+	CHECK(bytes != total_bytes, "recv", "%zd != %u nr_recv:%zd errno:%d\n",
+	      bytes, total_bytes, nr_recv, errno);
+
+	WRITE_ONCE(stop, 1);
+
+	pthread_join(srv_thread, &thread_ret);
+	CHECK(IS_ERR(thread_ret), "pthread_join", "thread_ret:%ld",
+	      PTR_ERR(thread_ret));
+}
+
+static void test_first(void)
+{
+	struct mptcp_bpf_first *first_skel;
+	int server_fd, client_fd;
+	struct bpf_link *link;
+
+	first_skel = mptcp_bpf_first__open_and_load();
+	if (CHECK(!first_skel, "bpf_first__open_and_load", "failed\n"))
+		return;
+
+	link = bpf_map__attach_struct_ops(first_skel->maps.first);
+	if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops")) {
+		mptcp_bpf_first__destroy(first_skel);
+		return;
+	}
+
+	system("sysctl -q net.mptcp.scheduler=bpf_first");
+	server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
+	client_fd = connect_to_mptcp_fd(server_fd, 0);
+
+	send_data(server_fd, client_fd);
+
+	close(client_fd);
+	close(server_fd);
+	bpf_link__destroy(link);
+	mptcp_bpf_first__destroy(first_skel);
+}
+
 void test_mptcp(void)
 {
 	if (test__start_subtest("base"))
 		test_base();
+	if (test__start_subtest("first"))
+		test_first();
 }
-- 
2.34.1


* Re: selftests: bpf: add bpf_first test: Tests Results
  2022-04-21  6:22 ` [PATCH mptcp-next v13 9/9] selftests: bpf: add bpf_first test Geliang Tang
@ 2022-04-21  7:56   ` MPTCP CI
  0 siblings, 0 replies; 12+ messages in thread
From: MPTCP CI @ 2022-04-21  7:56 UTC (permalink / raw)
  To: Geliang Tang; +Cc: mptcp

Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6700592348266496
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6700592348266496/summary/summary.txt

- KVM Validation: debug:
  - Critical: KMemLeak ❌:
  - Task: https://cirrus-ci.com/task/4554345650847744
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4554345650847744/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/96d2ca054d7e


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to have a stable test
suite when executed on a public CI like this one, it is possible that some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)

* Re: [PATCH mptcp-next v13 7/9] mptcp: add call_me_again flag
  2022-04-21  6:22 ` [PATCH mptcp-next v13 7/9] mptcp: add call_me_again flag Geliang Tang
@ 2022-04-22  0:55   ` Mat Martineau
  0 siblings, 0 replies; 12+ messages in thread
From: Mat Martineau @ 2022-04-22  0:55 UTC (permalink / raw)
  To: Geliang Tang; +Cc: mptcp

On Thu, 21 Apr 2022, Geliang Tang wrote:

> For supporting a "redundant" packet scheduler in the future, this patch
> adds a flag of struct mptcp_sock named call_me_again to indicate that
> get_subflow() function needs to be called again.
>
> Export it in bpf_mptcp_helpers.h, and add BPF write access to it.
>
> Signed-off-by: Geliang Tang <geliang.tang@suse.com>
> ---
> net/mptcp/bpf.c                                 | 16 ++++++++++++++++
> net/mptcp/protocol.h                            |  1 +
> net/mptcp/sched.c                               |  1 +
> tools/testing/selftests/bpf/bpf_mptcp_helpers.h |  1 +
> 4 files changed, 19 insertions(+)
>
> diff --git a/net/mptcp/bpf.c b/net/mptcp/bpf.c
> index e849fc3fb6c5..1611dbe63eb2 100644
> --- a/net/mptcp/bpf.c
> +++ b/net/mptcp/bpf.c
> @@ -40,6 +40,7 @@ static int bpf_mptcp_sched_btf_struct_access(struct bpf_verifier_log *log,
> {
> 	const struct btf_type *state;
> 	u32 type_id;
> +	size_t end;
>
> 	if (atype == BPF_READ)
> 		return btf_struct_access(log, btf, t, off, size, atype,
> @@ -55,6 +56,21 @@ static int bpf_mptcp_sched_btf_struct_access(struct bpf_verifier_log *log,
> 		return -EACCES;
> 	}
>
> +	switch (off) {
> +	case offsetofend(struct mptcp_sock, sched):
> +		end = offsetofend(struct mptcp_sock, sched) + sizeof(u8);
> +		break;
> +	default:
> +		bpf_log(log, "no write support to mptcp_sock at off %d\n", off);
> +		return -EACCES;
> +	}
> +
> +	if (off + size > end) {
> +		bpf_log(log, "access beyond mptcp_sock at off %u size %u ended at %zu",
> +			off, size, end);
> +		return -EACCES;
> +	}
> +
> 	return NOT_INIT;
> }
>
> diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
> index f31bc3271bcc..13c6ad5fbade 100644
> --- a/net/mptcp/protocol.h
> +++ b/net/mptcp/protocol.h
> @@ -299,6 +299,7 @@ struct mptcp_sock {
> 	struct sock	*first;
> 	struct mptcp_pm_data	pm;
> 	struct mptcp_sched_ops	*sched;
> +	u8		call_me_again:1;

This should be part of the get_subflow function call in mptcp_sched_ops, 
not part of the socket state. We need some BPF-compatible way to return 
both a subflow pointer and a boolean from the call to the scheduler.

Seems like it should work to add a new struct like this:

struct mptcp_sched_data {
 	struct sock *sock;
 	bool call_again;
};

void (*get_subflow)(struct mptcp_sock *msk, bool reinject, struct mptcp_sched_data *data);


and then modify bpf_mptcp_sched_btf_struct_access() to allow writing 
values? What do you think?


The other part we need to figure out is how to update the transmit code to
send the same data across multiple subflows when this call_again flag is
used. I'm not sure what to recommend for that yet.

- Mat

> 	struct {
> 		u32	space;	/* bytes copied in last measurement window */
> 		u32	copied; /* bytes copied in this measurement window */
> diff --git a/net/mptcp/sched.c b/net/mptcp/sched.c
> index 8025dc51fbe9..05ab45505f88 100644
> --- a/net/mptcp/sched.c
> +++ b/net/mptcp/sched.c
> @@ -83,6 +83,7 @@ int mptcp_init_sched(struct mptcp_sock *msk,
> 	msk->sched = sched_init;
> 	if (msk->sched->init)
> 		msk->sched->init(msk);
> +	msk->call_me_again = 0;
>
> 	pr_debug("sched=%s", msk->sched->name);
>
> diff --git a/tools/testing/selftests/bpf/bpf_mptcp_helpers.h b/tools/testing/selftests/bpf/bpf_mptcp_helpers.h
> index 81a9c5d91aae..dacee63455f5 100644
> --- a/tools/testing/selftests/bpf/bpf_mptcp_helpers.h
> +++ b/tools/testing/selftests/bpf/bpf_mptcp_helpers.h
> @@ -24,6 +24,7 @@ struct mptcp_sock {
> 	__u32		token;
> 	struct sock	*first;
> 	struct mptcp_sched_ops *sched;
> +	__u8		call_me_again:1;
> 	char		ca_name[TCP_CA_NAME_MAX];
> } __attribute__((preserve_access_index));
>
> -- 
> 2.34.1
>
>
>

--
Mat Martineau
Intel

end of thread

Thread overview: 12+ messages
2022-04-21  6:22 [PATCH mptcp-next v13 0/9] BPF packet scheduler Geliang Tang
2022-04-21  6:22 ` [PATCH mptcp-next v13 1/9] mptcp: add struct mptcp_sched_ops Geliang Tang
2022-04-21  6:22 ` [PATCH mptcp-next v13 2/9] mptcp: register default scheduler Geliang Tang
2022-04-21  6:22 ` [PATCH mptcp-next v13 3/9] mptcp: add a new sysctl scheduler Geliang Tang
2022-04-21  6:22 ` [PATCH mptcp-next v13 4/9] mptcp: add sched in mptcp_sock Geliang Tang
2022-04-21  6:22 ` [PATCH mptcp-next v13 5/9] mptcp: add get_subflow wrapper Geliang Tang
2022-04-21  6:22 ` [PATCH mptcp-next v13 6/9] mptcp: add bpf_mptcp_sched_ops Geliang Tang
2022-04-21  6:22 ` [PATCH mptcp-next v13 7/9] mptcp: add call_me_again flag Geliang Tang
2022-04-22  0:55   ` Mat Martineau
2022-04-21  6:22 ` [PATCH mptcp-next v13 8/9] selftests: bpf: add bpf_first scheduler Geliang Tang
2022-04-21  6:22 ` [PATCH mptcp-next v13 9/9] selftests: bpf: add bpf_first test Geliang Tang
2022-04-21  7:56   ` selftests: bpf: add bpf_first test: Tests Results MPTCP CI
