netdev.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH bpf-next v2 0/2] net/smc: Introduce BPF injection capability
@ 2023-02-21 12:18 D. Wythe
  2023-02-21 12:18 ` [PATCH bpf-next v2 1/2] net/smc: Introduce BPF injection capability for SMC D. Wythe
  2023-02-21 12:18 ` [PATCH bpf-next v2 2/2] bpf/selftests: add selftest for SMC bpf capability D. Wythe
  0 siblings, 2 replies; 14+ messages in thread
From: D. Wythe @ 2023-02-21 12:18 UTC (permalink / raw)
  To: kgraul, wenjia, jaka, ast, daniel, andrii
  Cc: kuba, davem, netdev, linux-s390, linux-rdma, bpf

From: "D. Wythe" <alibuda@linux.alibaba.com>

This PATCHes attempt to introduce BPF injection capability for SMC,
and add selftest to ensure code stability.

As we all know that the SMC protocol is not suitable for all scenarios,
especially for short-lived. However, for most applications, they cannot
guarantee that there are no such scenarios at all. Therefore, apps
may need some specific strategies to decide shall we need to use SMC
or not, for example, apps can limit the scope of the SMC to a specific
IP address or port.

Based on the consideration of transparent replacement, we hope that apps
can remain transparent even if they need to formulate some specific
strategies for SMC using. That is, do not need to recompile their code.

On the other hand, we need to ensure the scalability of strategies
implementation. Although it is simple to use socket options or sysctl,
it will bring more complexity to subsequent expansion.

Fortunately, BPF can solve these concerns very well, users can write
thire own strategies in eBPF to choose whether to use SMC or not.
And it's quite easy for them to modify their strategies in the future.

This PATCHes implement injection capability for SMC via struct_ops.
In that way, we can add new injection scenarios in the future.

D. Wythe (2):
  net/smc: Introduce BPF injection capability for SMC
  bpf/selftests: Test for SMC protocol negotiate

 include/linux/btf_ids.h                          |  15 ++
 include/net/smc.h                                | 254 ++++++++++++++++++
 kernel/bpf/bpf_struct_ops_types.h                |   4 +
 net/Makefile                                     |   5 +
 net/smc/af_smc.c                                 |  10 +-
 net/smc/bpf_smc_struct_ops.c                     | 146 +++++++++++
 net/smc/smc.h                                    | 220 ----------------
 tools/testing/selftests/bpf/prog_tests/bpf_smc.c |  39 +++
 tools/testing/selftests/bpf/progs/bpf_smc.c      | 315 +++++++++++++++++++++++
 9 files changed, 787 insertions(+), 221 deletions(-)
 create mode 100644 net/smc/bpf_smc_struct_ops.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/bpf_smc.c
 create mode 100644 tools/testing/selftests/bpf/progs/bpf_smc.c

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH bpf-next v2 1/2] net/smc: Introduce BPF injection capability for SMC
  2023-02-21 12:18 [PATCH bpf-next v2 0/2] net/smc: Introduce BPF injection capability D. Wythe
@ 2023-02-21 12:18 ` D. Wythe
  2023-02-22 21:40   ` Martin KaFai Lau
  2023-02-27  7:58   ` Wenjia Zhang
  2023-02-21 12:18 ` [PATCH bpf-next v2 2/2] bpf/selftests: add selftest for SMC bpf capability D. Wythe
  1 sibling, 2 replies; 14+ messages in thread
From: D. Wythe @ 2023-02-21 12:18 UTC (permalink / raw)
  To: kgraul, wenjia, jaka, ast, daniel, andrii
  Cc: kuba, davem, netdev, linux-s390, linux-rdma, bpf

From: "D. Wythe" <alibuda@linux.alibaba.com>

This PATCH attempts to introduce BPF injection capability for SMC.
As we all know that the SMC protocol is not suitable for all scenarios,
especially for short-lived. However, for most applications, they cannot
guarantee that there are no such scenarios at all. Therefore, apps
may need some specific strategies to decide shall we need to use SMC
or not, for example, apps can limit the scope of the SMC to a specific
IP address or port.

Based on the consideration of transparent replacement, we hope that apps
can remain transparent even if they need to formulate some specific
strategies for SMC using. That is, do not need to recompile their code.

On the other hand, we need to ensure the scalability of strategies
implementation. Although it is simple to use socket options or sysctl,
it will bring more complexity to subsequent expansion.

Fortunately, BPF can solve these concerns very well, users can write
thire own strategies in eBPF to choose whether to use SMC or not.
And it's quite easy for them to modify their strategies in the future.

This PATCH implement injection capability for SMC via struct_ops.
In that way, we can add new injection scenarios in the future.

Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
---
 include/linux/btf_ids.h           |  15 +++
 include/net/smc.h                 | 254 ++++++++++++++++++++++++++++++++++++++
 kernel/bpf/bpf_struct_ops_types.h |   4 +
 net/Makefile                      |   5 +
 net/smc/af_smc.c                  |  10 +-
 net/smc/bpf_smc_struct_ops.c      | 146 ++++++++++++++++++++++
 net/smc/smc.h                     | 220 ---------------------------------
 7 files changed, 433 insertions(+), 221 deletions(-)
 create mode 100644 net/smc/bpf_smc_struct_ops.c

diff --git a/include/linux/btf_ids.h b/include/linux/btf_ids.h
index 3a4f7cd..25eab1e 100644
--- a/include/linux/btf_ids.h
+++ b/include/linux/btf_ids.h
@@ -264,6 +264,21 @@ enum {
 MAX_BTF_TRACING_TYPE,
 };
 
+#if IS_ENABLED(CONFIG_SMC)
+#define BTF_SMC_TYPE_xxx		\
+	BTF_SMC_TYPE(BTF_SMC_TYPE_SOCK, smc_sock)		\
+	BTF_SMC_TYPE(BTF_SMC_TYPE_CONNECTION, smc_connection)	\
+	BTF_SMC_TYPE(BTF_SMC_TYPE_HOST_CURSOR, smc_host_cursor)
+
+enum {
+#define BTF_SMC_TYPE(name, type) name,
+BTF_SMC_TYPE_xxx
+#undef BTF_SMC_TYPE
+MAX_BTF_SMC_TYPE,
+};
+extern u32 btf_smc_ids[];
+#endif
+
 extern u32 btf_tracing_ids[];
 extern u32 bpf_cgroup_btf_id[];
 extern u32 bpf_local_storage_map_btf_id[];
diff --git a/include/net/smc.h b/include/net/smc.h
index 597cb93..912c269 100644
--- a/include/net/smc.h
+++ b/include/net/smc.h
@@ -11,13 +11,16 @@
 #ifndef _SMC_H
 #define _SMC_H
 
+#include <net/inet_connection_sock.h>
 #include <linux/device.h>
 #include <linux/spinlock.h>
 #include <linux/types.h>
 #include <linux/wait.h>
+#include <linux/bpf.h>
 #include "linux/ism.h"
 
 struct sock;
+struct smc_diag_conninfo;
 
 #define SMC_MAX_PNETID_LEN	16	/* Max. length of PNET id */
 
@@ -90,4 +93,255 @@ struct smcd_dev {
 	u8 going_away : 1;
 };
 
+#if IS_ENABLED(CONFIG_SMC)
+
+struct smc_wr_rx_hdr {	/* common prefix part of LLC and CDC to demultiplex */
+	union {
+		u8 type;
+#if defined(__BIG_ENDIAN_BITFIELD)
+		struct {
+			u8 llc_version:4,
+			   llc_type:4;
+		};
+#elif defined(__LITTLE_ENDIAN_BITFIELD)
+		struct {
+			u8 llc_type:4,
+			   llc_version:4;
+		};
+#endif
+	};
+} __aligned(1);
+
+struct smc_cdc_conn_state_flags {
+#if defined(__BIG_ENDIAN_BITFIELD)
+	u8	peer_done_writing : 1;	/* Sending done indicator */
+	u8	peer_conn_closed : 1;	/* Peer connection closed indicator */
+	u8	peer_conn_abort : 1;	/* Abnormal close indicator */
+	u8	reserved : 5;
+#elif defined(__LITTLE_ENDIAN_BITFIELD)
+	u8	reserved : 5;
+	u8	peer_conn_abort : 1;
+	u8	peer_conn_closed : 1;
+	u8	peer_done_writing : 1;
+#endif
+};
+
+struct smc_cdc_producer_flags {
+#if defined(__BIG_ENDIAN_BITFIELD)
+	u8	write_blocked : 1;	/* Writing Blocked, no rx buf space */
+	u8	urg_data_pending : 1;	/* Urgent Data Pending */
+	u8	urg_data_present : 1;	/* Urgent Data Present */
+	u8	cons_curs_upd_req : 1;	/* cursor update requested */
+	u8	failover_validation : 1;/* message replay due to failover */
+	u8	reserved : 3;
+#elif defined(__LITTLE_ENDIAN_BITFIELD)
+	u8	reserved : 3;
+	u8	failover_validation : 1;
+	u8	cons_curs_upd_req : 1;
+	u8	urg_data_present : 1;
+	u8	urg_data_pending : 1;
+	u8	write_blocked : 1;
+#endif
+};
+
+enum smc_urg_state {
+	SMC_URG_VALID	= 1,			/* data present */
+	SMC_URG_NOTYET	= 2,			/* data pending */
+	SMC_URG_READ	= 3,			/* data was already read */
+};
+
+/* in host byte order */
+union smc_host_cursor {	/* SMC cursor - an offset in an RMBE */
+	struct {
+		u16	reserved;
+		u16	wrap;		/* window wrap sequence number */
+		u32	count;		/* cursor (= offset) part */
+	};
+#ifdef ATOMIC64_INIT
+	atomic64_t		acurs;	/* for atomic processing */
+#else
+	u64			acurs;	/* for atomic processing */
+#endif
+} __aligned(8);
+
+/* in host byte order, except for flag bitfields in network byte order */
+struct smc_host_cdc_msg {		/* Connection Data Control message */
+	struct smc_wr_rx_hdr		common; /* .type = 0xFE */
+	u8				len;	/* length = 44 */
+	u16				seqno;	/* connection seq # */
+	u32				token;	/* alert_token */
+	union smc_host_cursor		prod;		/* producer cursor */
+	union smc_host_cursor		cons;		/* consumer cursor,
+							 * piggy backed "ack"
+							 */
+	struct smc_cdc_producer_flags	prod_flags;	/* conn. tx/rx status */
+	struct smc_cdc_conn_state_flags	conn_state_flags; /* peer conn. status*/
+	u8				reserved[18];
+} __aligned(8);
+
+struct smc_connection {
+	struct rb_node		alert_node;
+	struct smc_link_group	*lgr;		/* link group of connection */
+	struct smc_link		*lnk;		/* assigned SMC-R link */
+	u32			alert_token_local; /* unique conn. id */
+	u8			peer_rmbe_idx;	/* from tcp handshake */
+	int			peer_rmbe_size;	/* size of peer rx buffer */
+	atomic_t		peer_rmbe_space;/* remaining free bytes in peer
+						 * rmbe
+						 */
+	int			rtoken_idx;	/* idx to peer RMB rkey/addr */
+
+	struct smc_buf_desc	*sndbuf_desc;	/* send buffer descriptor */
+	struct smc_buf_desc	*rmb_desc;	/* RMBE descriptor */
+	int			rmbe_size_short;/* compressed notation */
+	int			rmbe_update_limit;
+						/* lower limit for consumer
+						 * cursor update
+						 */
+
+	struct smc_host_cdc_msg	local_tx_ctrl;	/* host byte order staging
+						 * buffer for CDC msg send
+						 * .prod cf. TCP snd_nxt
+						 * .cons cf. TCP sends ack
+						 */
+	union smc_host_cursor	local_tx_ctrl_fin;
+						/* prod crsr - confirmed by peer
+						 */
+	union smc_host_cursor	tx_curs_prep;	/* tx - prepared data
+						 * snd_max..wmem_alloc
+						 */
+	union smc_host_cursor	tx_curs_sent;	/* tx - sent data
+						 * snd_nxt ?
+						 */
+	union smc_host_cursor	tx_curs_fin;	/* tx - confirmed by peer
+						 * snd-wnd-begin ?
+						 */
+	atomic_t		sndbuf_space;	/* remaining space in sndbuf */
+	u16			tx_cdc_seq;	/* sequence # for CDC send */
+	u16			tx_cdc_seq_fin;	/* sequence # - tx completed */
+	spinlock_t		send_lock;	/* protect wr_sends */
+	atomic_t		cdc_pend_tx_wr; /* number of pending tx CDC wqe
+						 * - inc when post wqe,
+						 * - dec on polled tx cqe
+						 */
+	wait_queue_head_t	cdc_pend_tx_wq; /* wakeup on no cdc_pend_tx_wr*/
+	atomic_t		tx_pushing;     /* nr_threads trying tx push */
+	struct delayed_work	tx_work;	/* retry of smc_cdc_msg_send */
+	u32			tx_off;		/* base offset in peer rmb */
+
+	struct smc_host_cdc_msg	local_rx_ctrl;	/* filled during event_handl.
+						 * .prod cf. TCP rcv_nxt
+						 * .cons cf. TCP snd_una
+						 */
+	union smc_host_cursor	rx_curs_confirmed; /* confirmed to peer
+						    * source of snd_una ?
+						    */
+	union smc_host_cursor	urg_curs;	/* points at urgent byte */
+	enum smc_urg_state	urg_state;
+	bool			urg_tx_pend;	/* urgent data staged */
+	bool			urg_rx_skip_pend;
+						/* indicate urgent oob data
+						 * read, but previous regular
+						 * data still pending
+						 */
+	char			urg_rx_byte;	/* urgent byte */
+	bool			tx_in_release_sock;
+						/* flush pending tx data in
+						 * sock release_cb()
+						 */
+	atomic_t		bytes_to_rcv;	/* arrived data,
+						 * not yet received
+						 */
+	atomic_t		splice_pending;	/* number of spliced bytes
+						 * pending processing
+						 */
+#ifndef KERNEL_HAS_ATOMIC64
+	spinlock_t		acurs_lock;	/* protect cursors */
+#endif
+	struct work_struct	close_work;	/* peer sent some closing */
+	struct work_struct	abort_work;	/* abort the connection */
+	struct tasklet_struct	rx_tsklet;	/* Receiver tasklet for SMC-D */
+	u8			rx_off;		/* receive offset:
+						 * 0 for SMC-R, 32 for SMC-D
+						 */
+	u64			peer_token;	/* SMC-D token of peer */
+	u8			killed : 1;	/* abnormal termination */
+	u8			freed : 1;	/* normal termiation */
+	u8			out_of_sync : 1; /* out of sync with peer */
+};
+
+struct smc_sock {				/* smc sock container */
+	struct sock		sk;
+	struct socket		*clcsock;	/* internal tcp socket */
+	void			(*clcsk_state_change)(struct sock *sk);
+						/* original stat_change fct. */
+	void			(*clcsk_data_ready)(struct sock *sk);
+						/* original data_ready fct. */
+	void			(*clcsk_write_space)(struct sock *sk);
+						/* original write_space fct. */
+	void			(*clcsk_error_report)(struct sock *sk);
+						/* original error_report fct. */
+	struct smc_connection	conn;		/* smc connection */
+	struct smc_sock		*listen_smc;	/* listen parent */
+	struct work_struct	connect_work;	/* handle non-blocking connect*/
+	struct work_struct	tcp_listen_work;/* handle tcp socket accepts */
+	struct work_struct	smc_listen_work;/* prepare new accept socket */
+	struct list_head	accept_q;	/* sockets to be accepted */
+	spinlock_t		accept_q_lock;	/* protects accept_q */
+	bool			limit_smc_hs;	/* put constraint on handshake */
+	bool			use_fallback;	/* fallback to tcp */
+	int			fallback_rsn;	/* reason for fallback */
+	u32			peer_diagnosis; /* decline reason from peer */
+	atomic_t                queued_smc_hs;  /* queued smc handshakes */
+	struct inet_connection_sock_af_ops		af_ops;
+	const struct inet_connection_sock_af_ops	*ori_af_ops;
+						/* original af ops */
+	int			sockopt_defer_accept;
+						/* sockopt TCP_DEFER_ACCEPT
+						 * value
+						 */
+	u8			wait_close_tx_prepared : 1;
+						/* shutdown wr or close
+						 * started, waiting for unsent
+						 * data to be sent
+						 */
+	u8			connect_nonblock : 1;
+						/* non-blocking connect in
+						 * flight
+						 */
+	struct mutex            clcsock_release_lock;
+						/* protects clcsock of a listen
+						 * socket
+						 */
+};
+
+#define SMC_SOCK_CLOSED_TIMING	(0)
+
+/* BPF struct ops for smc protocol negotiator */
+struct smc_sock_negotiator_ops {
+	/* ret for negotiate */
+	int (*negotiate)(struct smc_sock *sk);
+
+	/* info gathering timing */
+	void (*collect_info)(struct smc_sock *sk, int timing);
+};
+
+/* Query if current sock should go with SMC protocol
+ * SK_PASS for yes, otherwise for no.
+ */
+int smc_sock_should_select_smc(const struct smc_sock *smc);
+
+/* At some specific points in time,
+ * let negotiator can perform info gathering
+ * on target sock.
+ */
+void smc_sock_perform_collecting_info(const struct smc_sock *smc, int timing);
+
+#else
+struct smc_sock {};
+struct smc_connection {};
+struct smc_sock_negotiator_ops {};
+union smc_host_cursor {};
+#endif /* CONFIG_SMC */
+
 #endif	/* _SMC_H */
diff --git a/kernel/bpf/bpf_struct_ops_types.h b/kernel/bpf/bpf_struct_ops_types.h
index 5678a9d..35cdd15 100644
--- a/kernel/bpf/bpf_struct_ops_types.h
+++ b/kernel/bpf/bpf_struct_ops_types.h
@@ -9,4 +9,8 @@
 #include <net/tcp.h>
 BPF_STRUCT_OPS_TYPE(tcp_congestion_ops)
 #endif
+#if IS_ENABLED(CONFIG_SMC)
+#include <net/smc.h>
+BPF_STRUCT_OPS_TYPE(smc_sock_negotiator_ops)
+#endif
 #endif
diff --git a/net/Makefile b/net/Makefile
index 0914bea..47a4c00 100644
--- a/net/Makefile
+++ b/net/Makefile
@@ -52,6 +52,11 @@ obj-$(CONFIG_TIPC)		+= tipc/
 obj-$(CONFIG_NETLABEL)		+= netlabel/
 obj-$(CONFIG_IUCV)		+= iucv/
 obj-$(CONFIG_SMC)		+= smc/
+ifneq ($(CONFIG_SMC),)
+ifeq ($(CONFIG_BPF_SYSCALL),y)
+obj-y				+= smc/bpf_smc_struct_ops.o
+endif
+endif
 obj-$(CONFIG_RFKILL)		+= rfkill/
 obj-$(CONFIG_NET_9P)		+= 9p/
 obj-$(CONFIG_CAIF)		+= caif/
diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
index d7a7420..98651b85 100644
--- a/net/smc/af_smc.c
+++ b/net/smc/af_smc.c
@@ -166,6 +166,9 @@ static bool smc_hs_congested(const struct sock *sk)
 	if (workqueue_congested(WORK_CPU_UNBOUND, smc_hs_wq))
 		return true;
 
+	if (!smc_sock_should_select_smc(smc))
+		return true;
+
 	return false;
 }
 
@@ -320,6 +323,9 @@ static int smc_release(struct socket *sock)
 	sock_hold(sk); /* sock_put below */
 	smc = smc_sk(sk);
 
+	/* trigger info gathering if needed.*/
+	smc_sock_perform_collecting_info(smc, SMC_SOCK_CLOSED_TIMING);
+
 	old_state = sk->sk_state;
 
 	/* cleanup for a dangling non-blocking connect */
@@ -1627,7 +1633,9 @@ static int smc_connect(struct socket *sock, struct sockaddr *addr,
 	}
 
 	smc_copy_sock_settings_to_clc(smc);
-	tcp_sk(smc->clcsock->sk)->syn_smc = 1;
+	tcp_sk(smc->clcsock->sk)->syn_smc = (smc_sock_should_select_smc(smc) == SK_PASS) ?
+		1 : 0;
+
 	if (smc->connect_nonblock) {
 		rc = -EALREADY;
 		goto out;
diff --git a/net/smc/bpf_smc_struct_ops.c b/net/smc/bpf_smc_struct_ops.c
new file mode 100644
index 0000000..a5989b6
--- /dev/null
+++ b/net/smc/bpf_smc_struct_ops.c
@@ -0,0 +1,146 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/kernel.h>
+#include <linux/bpf_verifier.h>
+#include <linux/btf_ids.h>
+#include <linux/bpf.h>
+#include <linux/btf.h>
+#include <net/sock.h>
+#include <net/smc.h>
+
+extern struct bpf_struct_ops smc_sock_negotiator_ops;
+
+DEFINE_RWLOCK(smc_sock_negotiator_ops_rwlock);
+struct smc_sock_negotiator_ops *negotiator;
+
+/* convert sk to smc_sock */
+static inline struct smc_sock *smc_sk(const struct sock *sk)
+{
+	return (struct smc_sock *)sk;
+}
+
+/* register ops */
+static inline void smc_reg_passive_sk_ops(struct smc_sock_negotiator_ops *ops)
+{
+	write_lock_bh(&smc_sock_negotiator_ops_rwlock);
+	negotiator = ops;
+	write_unlock_bh(&smc_sock_negotiator_ops_rwlock);
+}
+
+/* unregister ops */
+static inline void smc_unreg_passive_sk_ops(struct smc_sock_negotiator_ops *ops)
+{
+	write_lock_bh(&smc_sock_negotiator_ops_rwlock);
+	if (negotiator == ops)
+		negotiator = NULL;
+	write_unlock_bh(&smc_sock_negotiator_ops_rwlock);
+}
+
+int smc_sock_should_select_smc(const struct smc_sock *smc)
+{
+	int ret = SK_PASS;
+
+	read_lock_bh(&smc_sock_negotiator_ops_rwlock);
+	if (negotiator && negotiator->negotiate)
+		ret = negotiator->negotiate((struct smc_sock *)smc);
+	read_unlock_bh(&smc_sock_negotiator_ops_rwlock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(smc_sock_should_select_smc);
+
+void smc_sock_perform_collecting_info(const struct smc_sock *smc, int timing)
+{
+	read_lock_bh(&smc_sock_negotiator_ops_rwlock);
+	if (negotiator && negotiator->collect_info)
+		negotiator->collect_info((struct smc_sock *)smc, timing);
+	read_unlock_bh(&smc_sock_negotiator_ops_rwlock);
+}
+EXPORT_SYMBOL_GPL(smc_sock_perform_collecting_info);
+
+/* define global smc ID for smc_struct_ops */
+BTF_ID_LIST_GLOBAL(btf_smc_ids, MAX_BTF_SMC_TYPE)
+#define BTF_SMC_TYPE(name, type) BTF_ID(struct, type)
+BTF_SMC_TYPE_xxx
+#undef BTF_SMC_TYPE
+
+static int bpf_smc_passive_sk_init(struct btf *btf)
+{
+	return 0;
+}
+
+/* register ops by BPF */
+static int bpf_smc_passive_sk_ops_reg(void *kdata)
+{
+	struct smc_sock_negotiator_ops *ops = kdata;
+
+	/* at least one ops need implement */
+	if (!ops->negotiate || !ops->collect_info) {
+		pr_err("At least one ops need implement.\n");
+		return -EINVAL;
+	}
+
+	smc_reg_passive_sk_ops(ops);
+	/* always success now */
+	return 0;
+}
+
+/* unregister ops by BPF */
+static void bpf_smc_passive_sk_ops_unreg(void *kdata)
+{
+	smc_unreg_passive_sk_ops(kdata);
+}
+
+static int bpf_smc_passive_sk_ops_check_member(const struct btf_type *t,
+					       const struct btf_member *member,
+					       const struct bpf_prog *prog)
+{
+	return 0;
+}
+
+static int bpf_smc_passive_sk_ops_init_member(const struct btf_type *t,
+					      const struct btf_member *member,
+					      void *kdata, const void *udata)
+{
+	return 0;
+}
+
+static const struct bpf_func_proto *
+smc_passive_sk_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
+{
+	return bpf_base_func_proto(func_id);
+}
+
+static bool smc_passive_sk_ops_prog_is_valid_access(int off, int size, enum bpf_access_type type,
+						    const struct bpf_prog *prog,
+						    struct bpf_insn_access_aux *info)
+{
+	return bpf_tracing_btf_ctx_access(off, size, type, prog, info);
+}
+
+static int smc_passive_sk_ops_prog_struct_access(struct bpf_verifier_log *log,
+						 const struct bpf_reg_state *reg,
+						 int off, int size, enum bpf_access_type atype,
+						 u32 *next_btf_id, enum bpf_type_flag *flag)
+{
+	/* only allow read now*/
+	if (atype == BPF_READ)
+		return btf_struct_access(log, reg, off, size, atype, next_btf_id, flag);
+
+	return -EACCES;
+}
+
+static const struct bpf_verifier_ops bpf_smc_passive_sk_verifier_ops = {
+	.get_func_proto  = smc_passive_sk_prog_func_proto,
+	.is_valid_access = smc_passive_sk_ops_prog_is_valid_access,
+	.btf_struct_access = smc_passive_sk_ops_prog_struct_access
+};
+
+struct bpf_struct_ops bpf_smc_sock_negotiator_ops = {
+	.verifier_ops = &bpf_smc_passive_sk_verifier_ops,
+	.init = bpf_smc_passive_sk_init,
+	.check_member = bpf_smc_passive_sk_ops_check_member,
+	.init_member = bpf_smc_passive_sk_ops_init_member,
+	.reg = bpf_smc_passive_sk_ops_reg,
+	.unreg = bpf_smc_passive_sk_ops_unreg,
+	.name = "smc_sock_negotiator_ops",
+};
diff --git a/net/smc/smc.h b/net/smc/smc.h
index 5ed765e..349b193 100644
--- a/net/smc/smc.h
+++ b/net/smc/smc.h
@@ -57,232 +57,12 @@ enum smc_state {		/* possible states of an SMC socket */
 
 struct smc_link_group;
 
-struct smc_wr_rx_hdr {	/* common prefix part of LLC and CDC to demultiplex */
-	union {
-		u8 type;
-#if defined(__BIG_ENDIAN_BITFIELD)
-		struct {
-			u8 llc_version:4,
-			   llc_type:4;
-		};
-#elif defined(__LITTLE_ENDIAN_BITFIELD)
-		struct {
-			u8 llc_type:4,
-			   llc_version:4;
-		};
-#endif
-	};
-} __aligned(1);
-
-struct smc_cdc_conn_state_flags {
-#if defined(__BIG_ENDIAN_BITFIELD)
-	u8	peer_done_writing : 1;	/* Sending done indicator */
-	u8	peer_conn_closed : 1;	/* Peer connection closed indicator */
-	u8	peer_conn_abort : 1;	/* Abnormal close indicator */
-	u8	reserved : 5;
-#elif defined(__LITTLE_ENDIAN_BITFIELD)
-	u8	reserved : 5;
-	u8	peer_conn_abort : 1;
-	u8	peer_conn_closed : 1;
-	u8	peer_done_writing : 1;
-#endif
-};
-
-struct smc_cdc_producer_flags {
-#if defined(__BIG_ENDIAN_BITFIELD)
-	u8	write_blocked : 1;	/* Writing Blocked, no rx buf space */
-	u8	urg_data_pending : 1;	/* Urgent Data Pending */
-	u8	urg_data_present : 1;	/* Urgent Data Present */
-	u8	cons_curs_upd_req : 1;	/* cursor update requested */
-	u8	failover_validation : 1;/* message replay due to failover */
-	u8	reserved : 3;
-#elif defined(__LITTLE_ENDIAN_BITFIELD)
-	u8	reserved : 3;
-	u8	failover_validation : 1;
-	u8	cons_curs_upd_req : 1;
-	u8	urg_data_present : 1;
-	u8	urg_data_pending : 1;
-	u8	write_blocked : 1;
-#endif
-};
-
-/* in host byte order */
-union smc_host_cursor {	/* SMC cursor - an offset in an RMBE */
-	struct {
-		u16	reserved;
-		u16	wrap;		/* window wrap sequence number */
-		u32	count;		/* cursor (= offset) part */
-	};
-#ifdef KERNEL_HAS_ATOMIC64
-	atomic64_t		acurs;	/* for atomic processing */
-#else
-	u64			acurs;	/* for atomic processing */
-#endif
-} __aligned(8);
-
-/* in host byte order, except for flag bitfields in network byte order */
-struct smc_host_cdc_msg {		/* Connection Data Control message */
-	struct smc_wr_rx_hdr		common; /* .type = 0xFE */
-	u8				len;	/* length = 44 */
-	u16				seqno;	/* connection seq # */
-	u32				token;	/* alert_token */
-	union smc_host_cursor		prod;		/* producer cursor */
-	union smc_host_cursor		cons;		/* consumer cursor,
-							 * piggy backed "ack"
-							 */
-	struct smc_cdc_producer_flags	prod_flags;	/* conn. tx/rx status */
-	struct smc_cdc_conn_state_flags	conn_state_flags; /* peer conn. status*/
-	u8				reserved[18];
-} __aligned(8);
-
-enum smc_urg_state {
-	SMC_URG_VALID	= 1,			/* data present */
-	SMC_URG_NOTYET	= 2,			/* data pending */
-	SMC_URG_READ	= 3,			/* data was already read */
-};
-
 struct smc_mark_woken {
 	bool woken;
 	void *key;
 	wait_queue_entry_t wait_entry;
 };
 
-struct smc_connection {
-	struct rb_node		alert_node;
-	struct smc_link_group	*lgr;		/* link group of connection */
-	struct smc_link		*lnk;		/* assigned SMC-R link */
-	u32			alert_token_local; /* unique conn. id */
-	u8			peer_rmbe_idx;	/* from tcp handshake */
-	int			peer_rmbe_size;	/* size of peer rx buffer */
-	atomic_t		peer_rmbe_space;/* remaining free bytes in peer
-						 * rmbe
-						 */
-	int			rtoken_idx;	/* idx to peer RMB rkey/addr */
-
-	struct smc_buf_desc	*sndbuf_desc;	/* send buffer descriptor */
-	struct smc_buf_desc	*rmb_desc;	/* RMBE descriptor */
-	int			rmbe_size_short;/* compressed notation */
-	int			rmbe_update_limit;
-						/* lower limit for consumer
-						 * cursor update
-						 */
-
-	struct smc_host_cdc_msg	local_tx_ctrl;	/* host byte order staging
-						 * buffer for CDC msg send
-						 * .prod cf. TCP snd_nxt
-						 * .cons cf. TCP sends ack
-						 */
-	union smc_host_cursor	local_tx_ctrl_fin;
-						/* prod crsr - confirmed by peer
-						 */
-	union smc_host_cursor	tx_curs_prep;	/* tx - prepared data
-						 * snd_max..wmem_alloc
-						 */
-	union smc_host_cursor	tx_curs_sent;	/* tx - sent data
-						 * snd_nxt ?
-						 */
-	union smc_host_cursor	tx_curs_fin;	/* tx - confirmed by peer
-						 * snd-wnd-begin ?
-						 */
-	atomic_t		sndbuf_space;	/* remaining space in sndbuf */
-	u16			tx_cdc_seq;	/* sequence # for CDC send */
-	u16			tx_cdc_seq_fin;	/* sequence # - tx completed */
-	spinlock_t		send_lock;	/* protect wr_sends */
-	atomic_t		cdc_pend_tx_wr; /* number of pending tx CDC wqe
-						 * - inc when post wqe,
-						 * - dec on polled tx cqe
-						 */
-	wait_queue_head_t	cdc_pend_tx_wq; /* wakeup on no cdc_pend_tx_wr*/
-	atomic_t		tx_pushing;     /* nr_threads trying tx push */
-	struct delayed_work	tx_work;	/* retry of smc_cdc_msg_send */
-	u32			tx_off;		/* base offset in peer rmb */
-
-	struct smc_host_cdc_msg	local_rx_ctrl;	/* filled during event_handl.
-						 * .prod cf. TCP rcv_nxt
-						 * .cons cf. TCP snd_una
-						 */
-	union smc_host_cursor	rx_curs_confirmed; /* confirmed to peer
-						    * source of snd_una ?
-						    */
-	union smc_host_cursor	urg_curs;	/* points at urgent byte */
-	enum smc_urg_state	urg_state;
-	bool			urg_tx_pend;	/* urgent data staged */
-	bool			urg_rx_skip_pend;
-						/* indicate urgent oob data
-						 * read, but previous regular
-						 * data still pending
-						 */
-	char			urg_rx_byte;	/* urgent byte */
-	bool			tx_in_release_sock;
-						/* flush pending tx data in
-						 * sock release_cb()
-						 */
-	atomic_t		bytes_to_rcv;	/* arrived data,
-						 * not yet received
-						 */
-	atomic_t		splice_pending;	/* number of spliced bytes
-						 * pending processing
-						 */
-#ifndef KERNEL_HAS_ATOMIC64
-	spinlock_t		acurs_lock;	/* protect cursors */
-#endif
-	struct work_struct	close_work;	/* peer sent some closing */
-	struct work_struct	abort_work;	/* abort the connection */
-	struct tasklet_struct	rx_tsklet;	/* Receiver tasklet for SMC-D */
-	u8			rx_off;		/* receive offset:
-						 * 0 for SMC-R, 32 for SMC-D
-						 */
-	u64			peer_token;	/* SMC-D token of peer */
-	u8			killed : 1;	/* abnormal termination */
-	u8			freed : 1;	/* normal termiation */
-	u8			out_of_sync : 1; /* out of sync with peer */
-};
-
-struct smc_sock {				/* smc sock container */
-	struct sock		sk;
-	struct socket		*clcsock;	/* internal tcp socket */
-	void			(*clcsk_state_change)(struct sock *sk);
-						/* original stat_change fct. */
-	void			(*clcsk_data_ready)(struct sock *sk);
-						/* original data_ready fct. */
-	void			(*clcsk_write_space)(struct sock *sk);
-						/* original write_space fct. */
-	void			(*clcsk_error_report)(struct sock *sk);
-						/* original error_report fct. */
-	struct smc_connection	conn;		/* smc connection */
-	struct smc_sock		*listen_smc;	/* listen parent */
-	struct work_struct	connect_work;	/* handle non-blocking connect*/
-	struct work_struct	tcp_listen_work;/* handle tcp socket accepts */
-	struct work_struct	smc_listen_work;/* prepare new accept socket */
-	struct list_head	accept_q;	/* sockets to be accepted */
-	spinlock_t		accept_q_lock;	/* protects accept_q */
-	bool			limit_smc_hs;	/* put constraint on handshake */
-	bool			use_fallback;	/* fallback to tcp */
-	int			fallback_rsn;	/* reason for fallback */
-	u32			peer_diagnosis; /* decline reason from peer */
-	atomic_t                queued_smc_hs;  /* queued smc handshakes */
-	struct inet_connection_sock_af_ops		af_ops;
-	const struct inet_connection_sock_af_ops	*ori_af_ops;
-						/* original af ops */
-	int			sockopt_defer_accept;
-						/* sockopt TCP_DEFER_ACCEPT
-						 * value
-						 */
-	u8			wait_close_tx_prepared : 1;
-						/* shutdown wr or close
-						 * started, waiting for unsent
-						 * data to be sent
-						 */
-	u8			connect_nonblock : 1;
-						/* non-blocking connect in
-						 * flight
-						 */
-	struct mutex            clcsock_release_lock;
-						/* protects clcsock of a listen
-						 * socket
-						 * */
-};
-
 static inline struct smc_sock *smc_sk(const struct sock *sk)
 {
 	return (struct smc_sock *)sk;
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH bpf-next v2 2/2] bpf/selftests: add selftest for SMC bpf capability
  2023-02-21 12:18 [PATCH bpf-next v2 0/2] net/smc: Introduce BPF injection capability D. Wythe
  2023-02-21 12:18 ` [PATCH bpf-next v2 1/2] net/smc: Introduce BPF injection capability for SMC D. Wythe
@ 2023-02-21 12:18 ` D. Wythe
  2023-02-22 22:35   ` Martin KaFai Lau
  1 sibling, 1 reply; 14+ messages in thread
From: D. Wythe @ 2023-02-21 12:18 UTC (permalink / raw)
  To: kgraul, wenjia, jaka, ast, daniel, andrii
  Cc: kuba, davem, netdev, linux-s390, linux-rdma, bpf

From: "D. Wythe" <alibuda@linux.alibaba.com>

This PATCH adds a tiny selftest for SMC bpf capability,
making decisions on whether to use SMC by collecting
certain information from kernel smc sock.

Follow the steps below to run this test.

make -C tools/testing/selftests/bpf
cd tools/testing/selftests/bpf
sudo ./test_progs -t bpf_smc

Results shows:
18      bpf_smc:OK
Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
---
 tools/testing/selftests/bpf/prog_tests/bpf_smc.c |  39 +++
 tools/testing/selftests/bpf/progs/bpf_smc.c      | 315 +++++++++++++++++++++++
 2 files changed, 354 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/bpf_smc.c
 create mode 100644 tools/testing/selftests/bpf/progs/bpf_smc.c

diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_smc.c b/tools/testing/selftests/bpf/prog_tests/bpf_smc.c
new file mode 100644
index 0000000..b143932
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_smc.c
@@ -0,0 +1,39 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019 Facebook */
+
+#include <linux/err.h>
+#include <netinet/tcp.h>
+#include <test_progs.h>
+#include "bpf_smc.skel.h"
+
+void test_bpf_smc(void)
+{
+	struct bpf_smc *smc_skel;
+	struct bpf_link *link;
+	int err;
+
+	smc_skel = bpf_smc__open();
+	if (!ASSERT_OK_PTR(smc_skel, "skel_open"))
+		return;
+
+	err = bpf_map__set_type(smc_skel->maps.negotiator_map, BPF_MAP_TYPE_HASH);
+	if (!ASSERT_OK(err, "bpf_map__set_type"))
+		goto error;
+
+	err = bpf_map__set_max_entries(smc_skel->maps.negotiator_map, 1);
+	if (!ASSERT_OK(err, "bpf_map__set_type"))
+		goto error;
+
+	err =  bpf_smc__load(smc_skel);
+	if (!ASSERT_OK(err, "skel_load"))
+		goto error;
+
+	link = bpf_map__attach_struct_ops(smc_skel->maps.ops);
+	if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops"))
+		goto error;
+
+	bpf_link__destroy(link);
+error:
+	bpf_smc__destroy(smc_skel);
+}
+
diff --git a/tools/testing/selftests/bpf/progs/bpf_smc.c b/tools/testing/selftests/bpf/progs/bpf_smc.c
new file mode 100644
index 0000000..78c7976
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_smc.c
@@ -0,0 +1,315 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <linux/bpf.h>
+#include <linux/stddef.h>
+#include <linux/smc.h>
+#include <stdbool.h>
+#include <linux/types.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_core_read.h>
+#include <bpf/bpf_tracing.h>
+
+#define BPF_STRUCT_OPS(name, args...) \
+	SEC("struct_ops/"#name) \
+	BPF_PROG(name, args)
+
+#define SMC_LISTEN		(10)
+#define SMC_SOCK_CLOSED_TIMING	(0)
+extern unsigned long CONFIG_HZ __kconfig;
+#define HZ CONFIG_HZ
+
+char _license[] SEC("license") = "GPL";
+#define max(a, b) ((a) > (b) ? (a) : (b))
+
+struct sock_common {
+	unsigned char	skc_state;
+	__u16	skc_num;
+} __attribute__((preserve_access_index));
+
+struct sock {
+	struct sock_common	__sk_common;
+	int	sk_sndbuf;
+} __attribute__((preserve_access_index));
+
+struct inet_sock {
+	struct sock	sk;
+} __attribute__((preserve_access_index));
+
+struct inet_connection_sock {
+	struct inet_sock	icsk_inet;
+} __attribute__((preserve_access_index));
+
+struct tcp_sock {
+	struct inet_connection_sock	inet_conn;
+	__u32	rcv_nxt;
+	__u32	snd_nxt;
+	__u32	snd_una;
+	__u32	delivered;
+	__u8	syn_data:1,	/* SYN includes data */
+		syn_fastopen:1,	/* SYN includes Fast Open option */
+		syn_fastopen_exp:1,/* SYN includes Fast Open exp. option */
+		syn_fastopen_ch:1, /* Active TFO re-enabling probe */
+		syn_data_acked:1,/* data in SYN is acked by SYN-ACK */
+		save_syn:1,	/* Save headers of SYN packet */
+		is_cwnd_limited:1,/* forward progress limited by snd_cwnd? */
+		syn_smc:1;	/* SYN includes SMC */
+} __attribute__((preserve_access_index));
+
+struct socket {
+	struct sock *sk;
+} __attribute__((preserve_access_index));
+
+union smc_host_cursor {
+	struct {
+		__u16	reserved;
+		__u16	wrap;
+		__u32	count;
+	};
+} __attribute__((preserve_access_index));
+
+struct smc_connection {
+	union smc_host_cursor	tx_curs_sent;
+	union smc_host_cursor	rx_curs_confirmed;
+} __attribute__((preserve_access_index));
+
+struct smc_sock {
+	struct sock	sk;
+	struct socket	*clcsock;	/* internal tcp socket */
+	struct smc_connection	conn;
+	int use_fallback;
+} __attribute__((preserve_access_index));
+
+static __always_inline struct tcp_sock *tcp_sk(const struct sock *sk)
+{
+	return (struct tcp_sock *)sk;
+}
+
+static __always_inline struct smc_sock *smc_sk(struct sock *sk)
+{
+	return (struct smc_sock *)sk;
+}
+
+struct smc_prediction {
+	/* protection for smc_prediction */
+	struct bpf_spin_lock lock;
+	/* start of time slice */
+	__u64	start_tstamp;
+	/* delta of pacing */
+	__u64	pacing_delta;
+	/* N of closed connections determined as long connections
+	 * in current time slice
+	 */
+	__u32	closed_long_cc;
+	/* N of closed connections in this time slice */
+	__u32	closed_total_cc;
+	/* N of incoming connections determined as long connections
+	 * in current time slice
+	 */
+	__u32	incoming_long_cc;
+	/* last splice rate of long cc */
+	__u32	last_rate_of_lcc;
+};
+
+#define SMC_PREDICTION_MIN_PACING_DELTA                (1llu)
+#define SMC_PREDICTION_MAX_PACING_DELTA                (HZ << 3)
+#define SMC_PREDICTION_MAX_LONGCC_PER_SPLICE           (8)
+#define SMC_PREDICTION_MAX_PORT                        (64)
+#define SMC_PREDICTION_MAX_SPLICE_GAP                  (1)
+#define SMC_PREDICTION_LONGCC_RATE_THRESHOLD           (13189)
+#define SMC_PREDICTION_LONGCC_PACKETS_THRESHOLD        (100)
+#define SMC_PREDICTION_LONGCC_BYTES_THRESHOLD	\
+		(SMC_PREDICTION_LONGCC_PACKETS_THRESHOLD * 1024)
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, SMC_PREDICTION_MAX_PORT);
+	__type(key, __u16);
+	__type(value, struct smc_prediction);
+} negotiator_map SEC(".maps");
+
+
+static inline __u32 smc_prediction_calt_rate(struct smc_prediction *smc_predictor)
+{
+	if (!smc_predictor->closed_total_cc)
+		return smc_predictor->last_rate_of_lcc;
+
+	return (smc_predictor->closed_long_cc << 14) / smc_predictor->closed_total_cc;
+}
+
+static inline struct smc_prediction *smc_prediction_get(const struct smc_sock *smc,
+							const struct tcp_sock *tp, __u64 tstamp)
+{
+	struct smc_prediction zero = {}, *smc_predictor;
+	__u16 key;
+	__u32 gap;
+	int err;
+
+	err = bpf_core_read(&key, sizeof(__u16), &tp->inet_conn.icsk_inet.sk.__sk_common.skc_num);
+	if (err)
+		return NULL;
+
+	/* BAD key */
+	if (key == 0)
+		return NULL;
+
+	smc_predictor = bpf_map_lookup_elem(&negotiator_map, &key);
+	if (!smc_predictor) {
+		zero.start_tstamp = bpf_jiffies64();
+		zero.pacing_delta = SMC_PREDICTION_MIN_PACING_DELTA;
+		bpf_map_update_elem(&negotiator_map, &key, &zero, 0);
+		smc_predictor =  bpf_map_lookup_elem(&negotiator_map, &key);
+		if (!smc_predictor)
+			return NULL;
+	}
+
+	if (tstamp) {
+		bpf_spin_lock(&smc_predictor->lock);
+		gap = (tstamp - smc_predictor->start_tstamp) / smc_predictor->pacing_delta;
+		/* new splice */
+		if (gap > 0) {
+			smc_predictor->start_tstamp = tstamp;
+			smc_predictor->last_rate_of_lcc =
+				(smc_prediction_calt_rate(smc_predictor) * 7) >> (2 + gap);
+			smc_predictor->closed_long_cc = 0;
+			smc_predictor->closed_total_cc = 0;
+			smc_predictor->incoming_long_cc = 0;
+		}
+		bpf_spin_unlock(&smc_predictor->lock);
+	}
+	return smc_predictor;
+}
+
+/* BPF struct ops for smc protocol negotiator */
+struct smc_sock_negotiator_ops {
+	/* ret for negotiate */
+	int (*negotiate)(struct smc_sock *smc);
+
+	/* info gathering timing */
+	void (*collect_info)(struct smc_sock *smc, int timing);
+};
+
+int BPF_STRUCT_OPS(bpf_smc_negotiate, struct smc_sock *smc)
+{
+	struct smc_prediction *smc_predictor;
+	struct tcp_sock *tp;
+	struct sock *clcsk;
+	int ret = SK_DROP;
+	__u32 rate = 0;
+
+	/* Only make decison during listen */
+	if (smc->sk.__sk_common.skc_state != SMC_LISTEN)
+		return SK_PASS;
+
+	clcsk = BPF_CORE_READ(smc, clcsock, sk);
+	if (!clcsk)
+		goto error;
+
+	tp = tcp_sk(clcsk);
+	if (!tp)
+		goto error;
+
+	smc_predictor = smc_prediction_get(smc, tp, bpf_jiffies64());
+	if (!smc_predictor)
+		return SK_PASS;
+
+	bpf_spin_lock(&smc_predictor->lock);
+
+	if (smc_predictor->incoming_long_cc == 0)
+		goto out_locked_pass;
+
+	if (smc_predictor->incoming_long_cc > SMC_PREDICTION_MAX_LONGCC_PER_SPLICE) {
+		ret = 100;
+		goto out_locked_drop;
+	}
+
+	rate = smc_prediction_calt_rate(smc_predictor);
+	if (rate < SMC_PREDICTION_LONGCC_RATE_THRESHOLD) {
+		ret = 200;
+		goto out_locked_drop;
+	}
+out_locked_pass:
+	smc_predictor->incoming_long_cc++;
+	bpf_spin_unlock(&smc_predictor->lock);
+	return SK_PASS;
+out_locked_drop:
+	bpf_spin_unlock(&smc_predictor->lock);
+error:
+	return SK_DROP;
+}
+
+void BPF_STRUCT_OPS(bpf_smc_collect_info, struct smc_sock *smc, int timing)
+{
+	struct smc_prediction *smc_predictor;
+	int use_fallback, sndbuf, err;
+	struct tcp_sock *tp;
+	struct sock *clcsk;
+	__u16 wrap, count;
+	__u32 delivered;
+	bool match = false;
+
+	/* only fouces on closed */
+	if (timing != SMC_SOCK_CLOSED_TIMING)
+		return;
+
+	clcsk = BPF_CORE_READ(smc, clcsock, sk);
+	if (!clcsk)
+		goto error;
+
+	tp = tcp_sk(clcsk);
+	if (!tp)
+		goto error;
+
+	smc_predictor = smc_prediction_get(smc, tp, 0);
+	if (!smc_predictor)
+		goto error;
+
+	err = bpf_core_read(&use_fallback, sizeof(use_fallback), &smc->use_fallback);
+	if (err)
+		goto error;
+
+	if (use_fallback) {
+		err = bpf_core_read(&delivered, sizeof(delivered), &tp->delivered);
+		if (err)
+			goto error;
+
+		match = (delivered > SMC_PREDICTION_LONGCC_PACKETS_THRESHOLD);
+
+	} else {
+		delivered = 0;	/* tcp delivered */
+		err = bpf_core_read(&wrap, sizeof(__u16), &smc->conn.tx_curs_sent.wrap);
+		if (err)
+			goto error;
+		err = bpf_core_read(&count, sizeof(__u16), &smc->conn.tx_curs_sent.count);
+		if (err)
+			goto error;
+		err = bpf_core_read(&sndbuf, sizeof(int), &clcsk->sk_sndbuf);
+		if (err)
+			goto error;
+
+		match = (count + wrap * sndbuf) > SMC_PREDICTION_LONGCC_BYTES_THRESHOLD;
+	}
+	bpf_spin_lock(&smc_predictor->lock);
+	smc_predictor->closed_total_cc++;
+	if (match) {
+		/* increase stats */
+		smc_predictor->closed_long_cc++;
+		/* try more aggressive */
+		if (smc_predictor->pacing_delta > SMC_PREDICTION_MIN_PACING_DELTA) {
+			if (use_fallback) {
+				smc_predictor->pacing_delta = max(SMC_PREDICTION_MIN_PACING_DELTA,
+						(smc_predictor->pacing_delta * 3) >> 2);
+			}
+		}
+	} else if (!use_fallback) {
+		smc_predictor->pacing_delta <<= 1;
+	}
+	bpf_spin_unlock(&smc_predictor->lock);
+error:
+	return;
+}
+
+SEC(".struct_ops")
+struct smc_sock_negotiator_ops ops = {
+	.negotiate	= (void *)bpf_smc_negotiate,
+	.collect_info	= (void *)bpf_smc_collect_info,
+};
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH bpf-next v2 1/2] net/smc: Introduce BPF injection capability for SMC
  2023-02-21 12:18 ` [PATCH bpf-next v2 1/2] net/smc: Introduce BPF injection capability for SMC D. Wythe
@ 2023-02-22 21:40   ` Martin KaFai Lau
  2023-03-09 11:49     ` D. Wythe
  2023-02-27  7:58   ` Wenjia Zhang
  1 sibling, 1 reply; 14+ messages in thread
From: Martin KaFai Lau @ 2023-02-22 21:40 UTC (permalink / raw)
  To: D. Wythe
  Cc: kuba, davem, netdev, linux-s390, linux-rdma, bpf, kgraul, wenjia,
	jaka, ast, daniel, andrii

On 2/21/23 4:18 AM, D. Wythe wrote:
> From: "D. Wythe" <alibuda@linux.alibaba.com>
> 
> This PATCH attempts to introduce BPF injection capability for SMC.
> As we all know that the SMC protocol is not suitable for all scenarios,
> especially for short-lived. However, for most applications, they cannot
> guarantee that there are no such scenarios at all. Therefore, apps
> may need some specific strategies to decide shall we need to use SMC
> or not, for example, apps can limit the scope of the SMC to a specific
> IP address or port.
> 
> Based on the consideration of transparent replacement, we hope that apps
> can remain transparent even if they need to formulate some specific
> strategies for SMC using. That is, do not need to recompile their code.
> 
> On the other hand, we need to ensure the scalability of strategies
> implementation. Although it is simple to use socket options or sysctl,
> it will bring more complexity to subsequent expansion.
> 
> Fortunately, BPF can solve these concerns very well, users can write
> thire own strategies in eBPF to choose whether to use SMC or not.
> And it's quite easy for them to modify their strategies in the future.
> 
> This PATCH implement injection capability for SMC via struct_ops.
> In that way, we can add new injection scenarios in the future.

I have never used smc. I can only comment at its high level usage and details on 
the bpf side.

> 
> Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
> ---
>   include/linux/btf_ids.h           |  15 +++
>   include/net/smc.h                 | 254 ++++++++++++++++++++++++++++++++++++++
>   kernel/bpf/bpf_struct_ops_types.h |   4 +
>   net/Makefile                      |   5 +
>   net/smc/af_smc.c                  |  10 +-
>   net/smc/bpf_smc_struct_ops.c      | 146 ++++++++++++++++++++++
>   net/smc/smc.h                     | 220 ---------------------------------
>   7 files changed, 433 insertions(+), 221 deletions(-)
>   create mode 100644 net/smc/bpf_smc_struct_ops.c
> 
> diff --git a/include/linux/btf_ids.h b/include/linux/btf_ids.h
> index 3a4f7cd..25eab1e 100644
> --- a/include/linux/btf_ids.h
> +++ b/include/linux/btf_ids.h
> @@ -264,6 +264,21 @@ enum {
>   MAX_BTF_TRACING_TYPE,
>   };
>   
> +#if IS_ENABLED(CONFIG_SMC)
> +#define BTF_SMC_TYPE_xxx		\
> +	BTF_SMC_TYPE(BTF_SMC_TYPE_SOCK, smc_sock)		\
> +	BTF_SMC_TYPE(BTF_SMC_TYPE_CONNECTION, smc_connection)	\
> +	BTF_SMC_TYPE(BTF_SMC_TYPE_HOST_CURSOR, smc_host_cursor)
> +
> +enum {
> +#define BTF_SMC_TYPE(name, type) name,
> +BTF_SMC_TYPE_xxx
> +#undef BTF_SMC_TYPE
> +MAX_BTF_SMC_TYPE,
> +};
> +extern u32 btf_smc_ids[];

Do all these need to be in btf_ids.h?

> +#endif
> +
>   extern u32 btf_tracing_ids[];
>   extern u32 bpf_cgroup_btf_id[];
>   extern u32 bpf_local_storage_map_btf_id[];
> diff --git a/include/net/smc.h b/include/net/smc.h
> index 597cb93..912c269 100644
> --- a/include/net/smc.h
> +++ b/include/net/smc.h

It is not obvious to me why the header moving is needed (from net/smc/smc.h to 
include/net/smc.h ?). This can use some comment in the commit message and please 
break it out to another patch.

[ ... ]

> --- a/net/Makefile
> +++ b/net/Makefile
> @@ -52,6 +52,11 @@ obj-$(CONFIG_TIPC)		+= tipc/
>   obj-$(CONFIG_NETLABEL)		+= netlabel/
>   obj-$(CONFIG_IUCV)		+= iucv/
>   obj-$(CONFIG_SMC)		+= smc/
> +ifneq ($(CONFIG_SMC),)
> +ifeq ($(CONFIG_BPF_SYSCALL),y)
> +obj-y				+= smc/bpf_smc_struct_ops.o

This will ensure bpf_smc_struct_ops.c compiled as builtin even when smc is 
compiled as module?

> diff --git a/net/smc/bpf_smc_struct_ops.c b/net/smc/bpf_smc_struct_ops.c
> new file mode 100644
> index 0000000..a5989b6
> --- /dev/null
> +++ b/net/smc/bpf_smc_struct_ops.c
> @@ -0,0 +1,146 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <linux/kernel.h>
> +#include <linux/bpf_verifier.h>
> +#include <linux/btf_ids.h>
> +#include <linux/bpf.h>
> +#include <linux/btf.h>
> +#include <net/sock.h>
> +#include <net/smc.h>
> +
> +extern struct bpf_struct_ops smc_sock_negotiator_ops;
> +
> +DEFINE_RWLOCK(smc_sock_negotiator_ops_rwlock);
> +struct smc_sock_negotiator_ops *negotiator;

Is it sure one global negotiator (policy) will work for all smc_sock? or each sk 
should have its own negotiator and the negotiator is selected by setsockopt.

> +
> +/* convert sk to smc_sock */
> +static inline struct smc_sock *smc_sk(const struct sock *sk)
> +{
> +	return (struct smc_sock *)sk;
> +}
> +
> +/* register ops */
> +static inline void smc_reg_passive_sk_ops(struct smc_sock_negotiator_ops *ops)
> +{
> +	write_lock_bh(&smc_sock_negotiator_ops_rwlock);
> +	negotiator = ops;

What happens to the existing negotiator?

> +	write_unlock_bh(&smc_sock_negotiator_ops_rwlock);
> +}
> +
> +/* unregister ops */
> +static inline void smc_unreg_passive_sk_ops(struct smc_sock_negotiator_ops *ops)
> +{
> +	write_lock_bh(&smc_sock_negotiator_ops_rwlock);
> +	if (negotiator == ops)
> +		negotiator = NULL;
> +	write_unlock_bh(&smc_sock_negotiator_ops_rwlock);
> +}
> +
> +int smc_sock_should_select_smc(const struct smc_sock *smc)
> +{
> +	int ret = SK_PASS;
> +
> +	read_lock_bh(&smc_sock_negotiator_ops_rwlock);
> +	if (negotiator && negotiator->negotiate)
> +		ret = negotiator->negotiate((struct smc_sock *)smc);
> +	read_unlock_bh(&smc_sock_negotiator_ops_rwlock);
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(smc_sock_should_select_smc);
> +
> +void smc_sock_perform_collecting_info(const struct smc_sock *smc, int timing)
> +{
> +	read_lock_bh(&smc_sock_negotiator_ops_rwlock);
> +	if (negotiator && negotiator->collect_info)
> +		negotiator->collect_info((struct smc_sock *)smc, timing);
> +	read_unlock_bh(&smc_sock_negotiator_ops_rwlock);
> +}
> +EXPORT_SYMBOL_GPL(smc_sock_perform_collecting_info);
> +
> +/* define global smc ID for smc_struct_ops */
> +BTF_ID_LIST_GLOBAL(btf_smc_ids, MAX_BTF_SMC_TYPE)

How is btf_smc_ids used?

> +#define BTF_SMC_TYPE(name, type) BTF_ID(struct, type)
> +BTF_SMC_TYPE_xxx
> +#undef BTF_SMC_TYPE
> +



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH bpf-next v2 2/2] bpf/selftests: add selftest for SMC bpf capability
  2023-02-21 12:18 ` [PATCH bpf-next v2 2/2] bpf/selftests: add selftest for SMC bpf capability D. Wythe
@ 2023-02-22 22:35   ` Martin KaFai Lau
  2023-03-09 11:58     ` D. Wythe
  0 siblings, 1 reply; 14+ messages in thread
From: Martin KaFai Lau @ 2023-02-22 22:35 UTC (permalink / raw)
  To: D. Wythe
  Cc: kuba, davem, netdev, linux-s390, linux-rdma, bpf, kgraul, wenjia,
	jaka, ast, daniel, andrii

On 2/21/23 4:18 AM, D. Wythe wrote:
> From: "D. Wythe" <alibuda@linux.alibaba.com>
> 
> This PATCH adds a tiny selftest for SMC bpf capability,
> making decisions on whether to use SMC by collecting
> certain information from kernel smc sock.
> 
> Follow the steps below to run this test.
> 
> make -C tools/testing/selftests/bpf
> cd tools/testing/selftests/bpf
> sudo ./test_progs -t bpf_smc
> 
> Results shows:
> 18      bpf_smc:OK
> Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
> 
> Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
> ---
>   tools/testing/selftests/bpf/prog_tests/bpf_smc.c |  39 +++
>   tools/testing/selftests/bpf/progs/bpf_smc.c      | 315 +++++++++++++++++++++++
>   2 files changed, 354 insertions(+)
>   create mode 100644 tools/testing/selftests/bpf/prog_tests/bpf_smc.c
>   create mode 100644 tools/testing/selftests/bpf/progs/bpf_smc.c
> 
> diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_smc.c b/tools/testing/selftests/bpf/prog_tests/bpf_smc.c
> new file mode 100644
> index 0000000..b143932
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/bpf_smc.c
> @@ -0,0 +1,39 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2019 Facebook */

copy-and-paste left-over...

> diff --git a/tools/testing/selftests/bpf/progs/bpf_smc.c b/tools/testing/selftests/bpf/progs/bpf_smc.c
> new file mode 100644
> index 0000000..78c7976
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/bpf_smc.c
> @@ -0,0 +1,315 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +
> +#include <linux/bpf.h>
> +#include <linux/stddef.h>
> +#include <linux/smc.h>
> +#include <stdbool.h>
> +#include <linux/types.h>
> +#include <bpf/bpf_helpers.h>
> +#include <bpf/bpf_core_read.h>
> +#include <bpf/bpf_tracing.h>
> +
> +#define BPF_STRUCT_OPS(name, args...) \
> +	SEC("struct_ops/"#name) \
> +	BPF_PROG(name, args)
> +
> +#define SMC_LISTEN		(10)
> +#define SMC_SOCK_CLOSED_TIMING	(0)
> +extern unsigned long CONFIG_HZ __kconfig;
> +#define HZ CONFIG_HZ
> +
> +char _license[] SEC("license") = "GPL";
> +#define max(a, b) ((a) > (b) ? (a) : (b))
> +
> +struct sock_common {
> +	unsigned char	skc_state;
> +	__u16	skc_num;
> +} __attribute__((preserve_access_index));
> +
> +struct sock {
> +	struct sock_common	__sk_common;
> +	int	sk_sndbuf;
> +} __attribute__((preserve_access_index));
> +
> +struct inet_sock {
> +	struct sock	sk;
> +} __attribute__((preserve_access_index));
> +
> +struct inet_connection_sock {
> +	struct inet_sock	icsk_inet;
> +} __attribute__((preserve_access_index));
> +
> +struct tcp_sock {
> +	struct inet_connection_sock	inet_conn;
> +	__u32	rcv_nxt;
> +	__u32	snd_nxt;
> +	__u32	snd_una;
> +	__u32	delivered;
> +	__u8	syn_data:1,	/* SYN includes data */
> +		syn_fastopen:1,	/* SYN includes Fast Open option */
> +		syn_fastopen_exp:1,/* SYN includes Fast Open exp. option */
> +		syn_fastopen_ch:1, /* Active TFO re-enabling probe */
> +		syn_data_acked:1,/* data in SYN is acked by SYN-ACK */
> +		save_syn:1,	/* Save headers of SYN packet */
> +		is_cwnd_limited:1,/* forward progress limited by snd_cwnd? */
> +		syn_smc:1;	/* SYN includes SMC */
> +} __attribute__((preserve_access_index));
> +
> +struct socket {
> +	struct sock *sk;
> +} __attribute__((preserve_access_index));

All these tcp_sock, socket, inet_sock definitions can go away if it includes 
"vmlinux.h". tcp_ca_write_sk_pacing.c is a better example to follow. Try to 
define the "common" (eg. tcp, tc...etc) missing macros in bpf_tracing_net.h. The 
smc specific macros can stay in this file.

> +static inline struct smc_prediction *smc_prediction_get(const struct smc_sock *smc,
> +							const struct tcp_sock *tp, __u64 tstamp)
> +{
> +	struct smc_prediction zero = {}, *smc_predictor;
> +	__u16 key;
> +	__u32 gap;
> +	int err;
> +
> +	err = bpf_core_read(&key, sizeof(__u16), &tp->inet_conn.icsk_inet.sk.__sk_common.skc_num);
> +	if (err)
> +		return NULL;
> +
> +	/* BAD key */
> +	if (key == 0)
> +		return NULL;
> +
> +	smc_predictor = bpf_map_lookup_elem(&negotiator_map, &key);
> +	if (!smc_predictor) {
> +		zero.start_tstamp = bpf_jiffies64();
> +		zero.pacing_delta = SMC_PREDICTION_MIN_PACING_DELTA;
> +		bpf_map_update_elem(&negotiator_map, &key, &zero, 0);
> +		smc_predictor =  bpf_map_lookup_elem(&negotiator_map, &key);
> +		if (!smc_predictor)
> +			return NULL;
> +	}
> +
> +	if (tstamp) {
> +		bpf_spin_lock(&smc_predictor->lock);
> +		gap = (tstamp - smc_predictor->start_tstamp) / smc_predictor->pacing_delta;
> +		/* new splice */
> +		if (gap > 0) {
> +			smc_predictor->start_tstamp = tstamp;
> +			smc_predictor->last_rate_of_lcc =
> +				(smc_prediction_calt_rate(smc_predictor) * 7) >> (2 + gap);
> +			smc_predictor->closed_long_cc = 0;
> +			smc_predictor->closed_total_cc = 0;
> +			smc_predictor->incoming_long_cc = 0;
> +		}
> +		bpf_spin_unlock(&smc_predictor->lock);
> +	}
> +	return smc_predictor;
> +}
> +
> +/* BPF struct ops for smc protocol negotiator */
> +struct smc_sock_negotiator_ops {
> +	/* ret for negotiate */
> +	int (*negotiate)(struct smc_sock *smc);
> +
> +	/* info gathering timing */
> +	void (*collect_info)(struct smc_sock *smc, int timing);
> +};
> +
> +int BPF_STRUCT_OPS(bpf_smc_negotiate, struct smc_sock *smc)
> +{
> +	struct smc_prediction *smc_predictor;
> +	struct tcp_sock *tp;
> +	struct sock *clcsk;
> +	int ret = SK_DROP;
> +	__u32 rate = 0;
> +
> +	/* Only make decison during listen */
> +	if (smc->sk.__sk_common.skc_state != SMC_LISTEN)
> +		return SK_PASS;
> +
> +	clcsk = BPF_CORE_READ(smc, clcsock, sk);

Instead of using bpf_core_read here, why not directly gets the clcsk like the 
'smc->sk.__sk_common.skc_state' above.

> +	if (!clcsk)
> +		goto error;
> +
> +	tp = tcp_sk(clcsk);

There is a bpf_skc_to_tcp_sock(). Give it a try after changing the above 
BPF_CORE_READ.

> +	if (!tp)
> +		goto error;
> +
> +	smc_predictor = smc_prediction_get(smc, tp, bpf_jiffies64());
> +	if (!smc_predictor)
> +		return SK_PASS;
> +
> +	bpf_spin_lock(&smc_predictor->lock);
> +
> +	if (smc_predictor->incoming_long_cc == 0)
> +		goto out_locked_pass;
> +
> +	if (smc_predictor->incoming_long_cc > SMC_PREDICTION_MAX_LONGCC_PER_SPLICE) {
> +		ret = 100;
> +		goto out_locked_drop;
> +	}
> +
> +	rate = smc_prediction_calt_rate(smc_predictor);
> +	if (rate < SMC_PREDICTION_LONGCC_RATE_THRESHOLD) {
> +		ret = 200;
> +		goto out_locked_drop;
> +	}
> +out_locked_pass:
> +	smc_predictor->incoming_long_cc++;
> +	bpf_spin_unlock(&smc_predictor->lock);
> +	return SK_PASS;
> +out_locked_drop:
> +	bpf_spin_unlock(&smc_predictor->lock);
> +error:
> +	return SK_DROP;
> +}
> +
> +void BPF_STRUCT_OPS(bpf_smc_collect_info, struct smc_sock *smc, int timing)

Try to stay with SEC("struct_ops/...") void BPF_PROG(....)


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH bpf-next v2 1/2] net/smc: Introduce BPF injection capability for SMC
  2023-02-21 12:18 ` [PATCH bpf-next v2 1/2] net/smc: Introduce BPF injection capability for SMC D. Wythe
  2023-02-22 21:40   ` Martin KaFai Lau
@ 2023-02-27  7:58   ` Wenjia Zhang
  2023-02-28  8:50     ` D. Wythe
  1 sibling, 1 reply; 14+ messages in thread
From: Wenjia Zhang @ 2023-02-27  7:58 UTC (permalink / raw)
  To: D. Wythe, kgraul, jaka, ast, daniel, andrii
  Cc: kuba, davem, netdev, linux-s390, linux-rdma, bpf



On 21.02.23 13:18, D. Wythe wrote:
> From: "D. Wythe" <alibuda@linux.alibaba.com>
> 
> This PATCH attempts to introduce BPF injection capability for SMC.
> As we all know that the SMC protocol is not suitable for all scenarios,
> especially for short-lived. However, for most applications, they cannot
> guarantee that there are no such scenarios at all. Therefore, apps
> may need some specific strategies to decide shall we need to use SMC
> or not, for example, apps can limit the scope of the SMC to a specific
> IP address or port.
> 
> Based on the consideration of transparent replacement, we hope that apps
> can remain transparent even if they need to formulate some specific
> strategies for SMC using. That is, do not need to recompile their code.
> 
> On the other hand, we need to ensure the scalability of strategies
> implementation. Although it is simple to use socket options or sysctl,
> it will bring more complexity to subsequent expansion.
> 
> Fortunately, BPF can solve these concerns very well, users can write
> thire own strategies in eBPF to choose whether to use SMC or not.
> And it's quite easy for them to modify their strategies in the future.
> 
> This PATCH implement injection capability for SMC via struct_ops.
> In that way, we can add new injection scenarios in the future.
> 
> Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
> ---
>   include/linux/btf_ids.h           |  15 +++
>   include/net/smc.h                 | 254 ++++++++++++++++++++++++++++++++++++++
>   kernel/bpf/bpf_struct_ops_types.h |   4 +
>   net/Makefile                      |   5 +
>   net/smc/af_smc.c                  |  10 +-
>   net/smc/bpf_smc_struct_ops.c      | 146 ++++++++++++++++++++++
>   net/smc/smc.h                     | 220 ---------------------------------
>   7 files changed, 433 insertions(+), 221 deletions(-)
>   create mode 100644 net/smc/bpf_smc_struct_ops.c
> 
> diff --git a/include/linux/btf_ids.h b/include/linux/btf_ids.h
> index 3a4f7cd..25eab1e 100644
> --- a/include/linux/btf_ids.h
> +++ b/include/linux/btf_ids.h
> @@ -264,6 +264,21 @@ enum {
>   MAX_BTF_TRACING_TYPE,
>   };
>   
> +#if IS_ENABLED(CONFIG_SMC)
> +#define BTF_SMC_TYPE_xxx		\
> +	BTF_SMC_TYPE(BTF_SMC_TYPE_SOCK, smc_sock)		\
> +	BTF_SMC_TYPE(BTF_SMC_TYPE_CONNECTION, smc_connection)	\
> +	BTF_SMC_TYPE(BTF_SMC_TYPE_HOST_CURSOR, smc_host_cursor)
> +
> +enum {
> +#define BTF_SMC_TYPE(name, type) name,
> +BTF_SMC_TYPE_xxx
> +#undef BTF_SMC_TYPE
> +MAX_BTF_SMC_TYPE,
> +};
> +extern u32 btf_smc_ids[];
> +#endif
> +
>   extern u32 btf_tracing_ids[];
>   extern u32 bpf_cgroup_btf_id[];
>   extern u32 bpf_local_storage_map_btf_id[];
> diff --git a/include/net/smc.h b/include/net/smc.h
> index 597cb93..912c269 100644
> --- a/include/net/smc.h
> +++ b/include/net/smc.h
> @@ -11,13 +11,16 @@
>   #ifndef _SMC_H
>   #define _SMC_H
>   
> +#include <net/inet_connection_sock.h>
>   #include <linux/device.h>
>   #include <linux/spinlock.h>
>   #include <linux/types.h>
>   #include <linux/wait.h>
> +#include <linux/bpf.h>
>   #include "linux/ism.h"
>   
>   struct sock;
> +struct smc_diag_conninfo;
>   
>   #define SMC_MAX_PNETID_LEN	16	/* Max. length of PNET id */
>   
> @@ -90,4 +93,255 @@ struct smcd_dev {
>   	u8 going_away : 1;
>   };
>   
> +#if IS_ENABLED(CONFIG_SMC)
> +
> +struct smc_wr_rx_hdr {	/* common prefix part of LLC and CDC to demultiplex */
> +	union {
> +		u8 type;
> +#if defined(__BIG_ENDIAN_BITFIELD)
> +		struct {
> +			u8 llc_version:4,
> +			   llc_type:4;
> +		};
> +#elif defined(__LITTLE_ENDIAN_BITFIELD)
> +		struct {
> +			u8 llc_type:4,
> +			   llc_version:4;
> +		};
> +#endif
> +	};
> +} __aligned(1);
> +
> +struct smc_cdc_conn_state_flags {
> +#if defined(__BIG_ENDIAN_BITFIELD)
> +	u8	peer_done_writing : 1;	/* Sending done indicator */
> +	u8	peer_conn_closed : 1;	/* Peer connection closed indicator */
> +	u8	peer_conn_abort : 1;	/* Abnormal close indicator */
> +	u8	reserved : 5;
> +#elif defined(__LITTLE_ENDIAN_BITFIELD)
> +	u8	reserved : 5;
> +	u8	peer_conn_abort : 1;
> +	u8	peer_conn_closed : 1;
> +	u8	peer_done_writing : 1;
> +#endif
> +};
> +
> +struct smc_cdc_producer_flags {
> +#if defined(__BIG_ENDIAN_BITFIELD)
> +	u8	write_blocked : 1;	/* Writing Blocked, no rx buf space */
> +	u8	urg_data_pending : 1;	/* Urgent Data Pending */
> +	u8	urg_data_present : 1;	/* Urgent Data Present */
> +	u8	cons_curs_upd_req : 1;	/* cursor update requested */
> +	u8	failover_validation : 1;/* message replay due to failover */
> +	u8	reserved : 3;
> +#elif defined(__LITTLE_ENDIAN_BITFIELD)
> +	u8	reserved : 3;
> +	u8	failover_validation : 1;
> +	u8	cons_curs_upd_req : 1;
> +	u8	urg_data_present : 1;
> +	u8	urg_data_pending : 1;
> +	u8	write_blocked : 1;
> +#endif
> +};
> +
> +enum smc_urg_state {
> +	SMC_URG_VALID	= 1,			/* data present */
> +	SMC_URG_NOTYET	= 2,			/* data pending */
> +	SMC_URG_READ	= 3,			/* data was already read */
> +};
> +
> +/* in host byte order */
> +union smc_host_cursor {	/* SMC cursor - an offset in an RMBE */
> +	struct {
> +		u16	reserved;
> +		u16	wrap;		/* window wrap sequence number */
> +		u32	count;		/* cursor (= offset) part */
> +	};
> +#ifdef ATOMIC64_INIT
> +	atomic64_t		acurs;	/* for atomic processing */
> +#else
> +	u64			acurs;	/* for atomic processing */
> +#endif
> +} __aligned(8);
> +
> +/* in host byte order, except for flag bitfields in network byte order */
> +struct smc_host_cdc_msg {		/* Connection Data Control message */
> +	struct smc_wr_rx_hdr		common; /* .type = 0xFE */
> +	u8				len;	/* length = 44 */
> +	u16				seqno;	/* connection seq # */
> +	u32				token;	/* alert_token */
> +	union smc_host_cursor		prod;		/* producer cursor */
> +	union smc_host_cursor		cons;		/* consumer cursor,
> +							 * piggy backed "ack"
> +							 */
> +	struct smc_cdc_producer_flags	prod_flags;	/* conn. tx/rx status */
> +	struct smc_cdc_conn_state_flags	conn_state_flags; /* peer conn. status*/
> +	u8				reserved[18];
> +} __aligned(8);
> +
> +struct smc_connection {
> +	struct rb_node		alert_node;
> +	struct smc_link_group	*lgr;		/* link group of connection */
> +	struct smc_link		*lnk;		/* assigned SMC-R link */
> +	u32			alert_token_local; /* unique conn. id */
> +	u8			peer_rmbe_idx;	/* from tcp handshake */
> +	int			peer_rmbe_size;	/* size of peer rx buffer */
> +	atomic_t		peer_rmbe_space;/* remaining free bytes in peer
> +						 * rmbe
> +						 */
> +	int			rtoken_idx;	/* idx to peer RMB rkey/addr */
> +
> +	struct smc_buf_desc	*sndbuf_desc;	/* send buffer descriptor */
> +	struct smc_buf_desc	*rmb_desc;	/* RMBE descriptor */
> +	int			rmbe_size_short;/* compressed notation */
> +	int			rmbe_update_limit;
> +						/* lower limit for consumer
> +						 * cursor update
> +						 */
> +
> +	struct smc_host_cdc_msg	local_tx_ctrl;	/* host byte order staging
> +						 * buffer for CDC msg send
> +						 * .prod cf. TCP snd_nxt
> +						 * .cons cf. TCP sends ack
> +						 */
> +	union smc_host_cursor	local_tx_ctrl_fin;
> +						/* prod crsr - confirmed by peer
> +						 */
> +	union smc_host_cursor	tx_curs_prep;	/* tx - prepared data
> +						 * snd_max..wmem_alloc
> +						 */
> +	union smc_host_cursor	tx_curs_sent;	/* tx - sent data
> +						 * snd_nxt ?
> +						 */
> +	union smc_host_cursor	tx_curs_fin;	/* tx - confirmed by peer
> +						 * snd-wnd-begin ?
> +						 */
> +	atomic_t		sndbuf_space;	/* remaining space in sndbuf */
> +	u16			tx_cdc_seq;	/* sequence # for CDC send */
> +	u16			tx_cdc_seq_fin;	/* sequence # - tx completed */
> +	spinlock_t		send_lock;	/* protect wr_sends */
> +	atomic_t		cdc_pend_tx_wr; /* number of pending tx CDC wqe
> +						 * - inc when post wqe,
> +						 * - dec on polled tx cqe
> +						 */
> +	wait_queue_head_t	cdc_pend_tx_wq; /* wakeup on no cdc_pend_tx_wr*/
> +	atomic_t		tx_pushing;     /* nr_threads trying tx push */
> +	struct delayed_work	tx_work;	/* retry of smc_cdc_msg_send */
> +	u32			tx_off;		/* base offset in peer rmb */
> +
> +	struct smc_host_cdc_msg	local_rx_ctrl;	/* filled during event_handl.
> +						 * .prod cf. TCP rcv_nxt
> +						 * .cons cf. TCP snd_una
> +						 */
> +	union smc_host_cursor	rx_curs_confirmed; /* confirmed to peer
> +						    * source of snd_una ?
> +						    */
> +	union smc_host_cursor	urg_curs;	/* points at urgent byte */
> +	enum smc_urg_state	urg_state;
> +	bool			urg_tx_pend;	/* urgent data staged */
> +	bool			urg_rx_skip_pend;
> +						/* indicate urgent oob data
> +						 * read, but previous regular
> +						 * data still pending
> +						 */
> +	char			urg_rx_byte;	/* urgent byte */
> +	bool			tx_in_release_sock;
> +						/* flush pending tx data in
> +						 * sock release_cb()
> +						 */
> +	atomic_t		bytes_to_rcv;	/* arrived data,
> +						 * not yet received
> +						 */
> +	atomic_t		splice_pending;	/* number of spliced bytes
> +						 * pending processing
> +						 */
> +#ifndef KERNEL_HAS_ATOMIC64
> +	spinlock_t		acurs_lock;	/* protect cursors */
> +#endif
> +	struct work_struct	close_work;	/* peer sent some closing */
> +	struct work_struct	abort_work;	/* abort the connection */
> +	struct tasklet_struct	rx_tsklet;	/* Receiver tasklet for SMC-D */
> +	u8			rx_off;		/* receive offset:
> +						 * 0 for SMC-R, 32 for SMC-D
> +						 */
> +	u64			peer_token;	/* SMC-D token of peer */
> +	u8			killed : 1;	/* abnormal termination */
> +	u8			freed : 1;	/* normal termiation */
> +	u8			out_of_sync : 1; /* out of sync with peer */
> +};
> +
> +struct smc_sock {				/* smc sock container */
> +	struct sock		sk;
> +	struct socket		*clcsock;	/* internal tcp socket */
> +	void			(*clcsk_state_change)(struct sock *sk);
> +						/* original stat_change fct. */
> +	void			(*clcsk_data_ready)(struct sock *sk);
> +						/* original data_ready fct. */
> +	void			(*clcsk_write_space)(struct sock *sk);
> +						/* original write_space fct. */
> +	void			(*clcsk_error_report)(struct sock *sk);
> +						/* original error_report fct. */
> +	struct smc_connection	conn;		/* smc connection */
> +	struct smc_sock		*listen_smc;	/* listen parent */
> +	struct work_struct	connect_work;	/* handle non-blocking connect*/
> +	struct work_struct	tcp_listen_work;/* handle tcp socket accepts */
> +	struct work_struct	smc_listen_work;/* prepare new accept socket */
> +	struct list_head	accept_q;	/* sockets to be accepted */
> +	spinlock_t		accept_q_lock;	/* protects accept_q */
> +	bool			limit_smc_hs;	/* put constraint on handshake */
> +	bool			use_fallback;	/* fallback to tcp */
> +	int			fallback_rsn;	/* reason for fallback */
> +	u32			peer_diagnosis; /* decline reason from peer */
> +	atomic_t                queued_smc_hs;  /* queued smc handshakes */
> +	struct inet_connection_sock_af_ops		af_ops;
> +	const struct inet_connection_sock_af_ops	*ori_af_ops;
> +						/* original af ops */
> +	int			sockopt_defer_accept;
> +						/* sockopt TCP_DEFER_ACCEPT
> +						 * value
> +						 */
> +	u8			wait_close_tx_prepared : 1;
> +						/* shutdown wr or close
> +						 * started, waiting for unsent
> +						 * data to be sent
> +						 */
> +	u8			connect_nonblock : 1;
> +						/* non-blocking connect in
> +						 * flight
> +						 */
> +	struct mutex            clcsock_release_lock;
> +						/* protects clcsock of a listen
> +						 * socket
> +						 */
> +};
> +
> +#define SMC_SOCK_CLOSED_TIMING	(0)
> +
> +/* BPF struct ops for smc protocol negotiator */
> +struct smc_sock_negotiator_ops {
> +	/* ret for negotiate */
> +	int (*negotiate)(struct smc_sock *sk);
> +
> +	/* info gathering timing */
> +	void (*collect_info)(struct smc_sock *sk, int timing);
> +};
> +
> +/* Query if current sock should go with SMC protocol
> + * SK_PASS for yes, otherwise for no.
> + */
> +int smc_sock_should_select_smc(const struct smc_sock *smc);
> +
> +/* At some specific points in time,
> + * let negotiator can perform info gathering
> + * on target sock.
> + */
> +void smc_sock_perform_collecting_info(const struct smc_sock *smc, int timing);
> +
> +#else
> +struct smc_sock {};
> +struct smc_connection {};
> +struct smc_sock_negotiator_ops {};
> +union smc_host_cursor {};
> +#endif /* CONFIG_SMC */
> +
>   #endif	/* _SMC_H */
> diff --git a/kernel/bpf/bpf_struct_ops_types.h b/kernel/bpf/bpf_struct_ops_types.h
> index 5678a9d..35cdd15 100644
> --- a/kernel/bpf/bpf_struct_ops_types.h
> +++ b/kernel/bpf/bpf_struct_ops_types.h
> @@ -9,4 +9,8 @@
>   #include <net/tcp.h>
>   BPF_STRUCT_OPS_TYPE(tcp_congestion_ops)
>   #endif
> +#if IS_ENABLED(CONFIG_SMC)
> +#include <net/smc.h>
> +BPF_STRUCT_OPS_TYPE(smc_sock_negotiator_ops)
> +#endif
>   #endif
> diff --git a/net/Makefile b/net/Makefile
> index 0914bea..47a4c00 100644
> --- a/net/Makefile
> +++ b/net/Makefile
> @@ -52,6 +52,11 @@ obj-$(CONFIG_TIPC)		+= tipc/
>   obj-$(CONFIG_NETLABEL)		+= netlabel/
>   obj-$(CONFIG_IUCV)		+= iucv/
>   obj-$(CONFIG_SMC)		+= smc/
> +ifneq ($(CONFIG_SMC),)
> +ifeq ($(CONFIG_BPF_SYSCALL),y)
> +obj-y				+= smc/bpf_smc_struct_ops.o
> +endif
> +endif
>   obj-$(CONFIG_RFKILL)		+= rfkill/
>   obj-$(CONFIG_NET_9P)		+= 9p/
>   obj-$(CONFIG_CAIF)		+= caif/
> diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
> index d7a7420..98651b85 100644
> --- a/net/smc/af_smc.c
> +++ b/net/smc/af_smc.c
> @@ -166,6 +166,9 @@ static bool smc_hs_congested(const struct sock *sk)
>   	if (workqueue_congested(WORK_CPU_UNBOUND, smc_hs_wq))
>   		return true;
>   
> +	if (!smc_sock_should_select_smc(smc))
> +		return true;
> +
>   	return false;
>   }
>   
> @@ -320,6 +323,9 @@ static int smc_release(struct socket *sock)
>   	sock_hold(sk); /* sock_put below */
>   	smc = smc_sk(sk);
>   
> +	/* trigger info gathering if needed.*/
> +	smc_sock_perform_collecting_info(smc, SMC_SOCK_CLOSED_TIMING);
> +
>   	old_state = sk->sk_state;
>   
>   	/* cleanup for a dangling non-blocking connect */
> @@ -1627,7 +1633,9 @@ static int smc_connect(struct socket *sock, struct sockaddr *addr,
>   	}
>   
>   	smc_copy_sock_settings_to_clc(smc);
> -	tcp_sk(smc->clcsock->sk)->syn_smc = 1;
> +	tcp_sk(smc->clcsock->sk)->syn_smc = (smc_sock_should_select_smc(smc) == SK_PASS) ?
> +		1 : 0;
> +
>   	if (smc->connect_nonblock) {
>   		rc = -EALREADY;
>   		goto out;
> diff --git a/net/smc/bpf_smc_struct_ops.c b/net/smc/bpf_smc_struct_ops.c
> new file mode 100644
> index 0000000..a5989b6
> --- /dev/null
> +++ b/net/smc/bpf_smc_struct_ops.c
> @@ -0,0 +1,146 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <linux/kernel.h>
> +#include <linux/bpf_verifier.h>
> +#include <linux/btf_ids.h>
> +#include <linux/bpf.h>
> +#include <linux/btf.h>
> +#include <net/sock.h>
> +#include <net/smc.h>
> +
> +extern struct bpf_struct_ops smc_sock_negotiator_ops;
> +
> +DEFINE_RWLOCK(smc_sock_negotiator_ops_rwlock);
> +struct smc_sock_negotiator_ops *negotiator;
> +
> +/* convert sk to smc_sock */
> +static inline struct smc_sock *smc_sk(const struct sock *sk)
> +{
> +	return (struct smc_sock *)sk;
> +}
> +
> +/* register ops */
> +static inline void smc_reg_passive_sk_ops(struct smc_sock_negotiator_ops *ops)
> +{
> +	write_lock_bh(&smc_sock_negotiator_ops_rwlock);
> +	negotiator = ops;
> +	write_unlock_bh(&smc_sock_negotiator_ops_rwlock);
> +}
> +
> +/* unregister ops */
> +static inline void smc_unreg_passive_sk_ops(struct smc_sock_negotiator_ops *ops)
> +{
> +	write_lock_bh(&smc_sock_negotiator_ops_rwlock);
> +	if (negotiator == ops)
> +		negotiator = NULL;
> +	write_unlock_bh(&smc_sock_negotiator_ops_rwlock);
> +}
> +
> +int smc_sock_should_select_smc(const struct smc_sock *smc)
> +{
> +	int ret = SK_PASS;
> +
> +	read_lock_bh(&smc_sock_negotiator_ops_rwlock);
> +	if (negotiator && negotiator->negotiate)
> +		ret = negotiator->negotiate((struct smc_sock *)smc);
> +	read_unlock_bh(&smc_sock_negotiator_ops_rwlock);
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(smc_sock_should_select_smc);
> +
> +void smc_sock_perform_collecting_info(const struct smc_sock *smc, int timing)
> +{
> +	read_lock_bh(&smc_sock_negotiator_ops_rwlock);
> +	if (negotiator && negotiator->collect_info)
> +		negotiator->collect_info((struct smc_sock *)smc, timing);
> +	read_unlock_bh(&smc_sock_negotiator_ops_rwlock);
> +}
> +EXPORT_SYMBOL_GPL(smc_sock_perform_collecting_info);
> +
> +/* define global smc ID for smc_struct_ops */
> +BTF_ID_LIST_GLOBAL(btf_smc_ids, MAX_BTF_SMC_TYPE)
> +#define BTF_SMC_TYPE(name, type) BTF_ID(struct, type)
> +BTF_SMC_TYPE_xxx
> +#undef BTF_SMC_TYPE
> +
> +static int bpf_smc_passive_sk_init(struct btf *btf)
> +{
> +	return 0;
> +}
> +
> +/* register ops by BPF */
> +static int bpf_smc_passive_sk_ops_reg(void *kdata)
> +{
> +	struct smc_sock_negotiator_ops *ops = kdata;
> +
> +	/* at least one ops need implement */
> +	if (!ops->negotiate || !ops->collect_info) {
> +		pr_err("At least one ops need implement.\n");
> +		return -EINVAL;
> +	}
> +
> +	smc_reg_passive_sk_ops(ops);
> +	/* always success now */
> +	return 0;
> +}
> +
> +/* unregister ops by BPF */
> +static void bpf_smc_passive_sk_ops_unreg(void *kdata)
> +{
> +	smc_unreg_passive_sk_ops(kdata);
> +}
> +
> +static int bpf_smc_passive_sk_ops_check_member(const struct btf_type *t,
> +					       const struct btf_member *member,
> +					       const struct bpf_prog *prog)
> +{
> +	return 0;
> +}

Please check the right pointer type of check_member:

int (*check_member)(const struct btf_type *t,
		    const struct btf_member *member);

> +
> +static int bpf_smc_passive_sk_ops_init_member(const struct btf_type *t,
> +					      const struct btf_member *member,
> +					      void *kdata, const void *udata)
> +{
> +	return 0;
> +}
> +
> +static const struct bpf_func_proto *
> +smc_passive_sk_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> +{
> +	return bpf_base_func_proto(func_id);
> +}
> +
> +static bool smc_passive_sk_ops_prog_is_valid_access(int off, int size, enum bpf_access_type type,
> +						    const struct bpf_prog *prog,
> +						    struct bpf_insn_access_aux *info)
> +{
> +	return bpf_tracing_btf_ctx_access(off, size, type, prog, info);
> +}
> +
> +static int smc_passive_sk_ops_prog_struct_access(struct bpf_verifier_log *log,
> +						 const struct bpf_reg_state *reg,
> +						 int off, int size, enum bpf_access_type atype,
> +						 u32 *next_btf_id, enum bpf_type_flag *flag)
> +{
> +	/* only allow read now*/
> +	if (atype == BPF_READ)
> +		return btf_struct_access(log, reg, off, size, atype, next_btf_id, flag);
> +
> +	return -EACCES;
> +}
> +
> +static const struct bpf_verifier_ops bpf_smc_passive_sk_verifier_ops = {
> +	.get_func_proto  = smc_passive_sk_prog_func_proto,
> +	.is_valid_access = smc_passive_sk_ops_prog_is_valid_access,
> +	.btf_struct_access = smc_passive_sk_ops_prog_struct_access
> +};
> +
> +struct bpf_struct_ops bpf_smc_sock_negotiator_ops = {
> +	.verifier_ops = &bpf_smc_passive_sk_verifier_ops,
> +	.init = bpf_smc_passive_sk_init,
> +	.check_member = bpf_smc_passive_sk_ops_check_member,
> +	.init_member = bpf_smc_passive_sk_ops_init_member,
> +	.reg = bpf_smc_passive_sk_ops_reg,
> +	.unreg = bpf_smc_passive_sk_ops_unreg,
> +	.name = "smc_sock_negotiator_ops",
> +};
> diff --git a/net/smc/smc.h b/net/smc/smc.h
> index 5ed765e..349b193 100644
> --- a/net/smc/smc.h
> +++ b/net/smc/smc.h
> @@ -57,232 +57,12 @@ enum smc_state {		/* possible states of an SMC socket */
>   
>   struct smc_link_group;
>   
> -struct smc_wr_rx_hdr {	/* common prefix part of LLC and CDC to demultiplex */
> -	union {
> -		u8 type;
> -#if defined(__BIG_ENDIAN_BITFIELD)
> -		struct {
> -			u8 llc_version:4,
> -			   llc_type:4;
> -		};
> -#elif defined(__LITTLE_ENDIAN_BITFIELD)
> -		struct {
> -			u8 llc_type:4,
> -			   llc_version:4;
> -		};
> -#endif
> -	};
> -} __aligned(1);
> -
> -struct smc_cdc_conn_state_flags {
> -#if defined(__BIG_ENDIAN_BITFIELD)
> -	u8	peer_done_writing : 1;	/* Sending done indicator */
> -	u8	peer_conn_closed : 1;	/* Peer connection closed indicator */
> -	u8	peer_conn_abort : 1;	/* Abnormal close indicator */
> -	u8	reserved : 5;
> -#elif defined(__LITTLE_ENDIAN_BITFIELD)
> -	u8	reserved : 5;
> -	u8	peer_conn_abort : 1;
> -	u8	peer_conn_closed : 1;
> -	u8	peer_done_writing : 1;
> -#endif
> -};
> -
> -struct smc_cdc_producer_flags {
> -#if defined(__BIG_ENDIAN_BITFIELD)
> -	u8	write_blocked : 1;	/* Writing Blocked, no rx buf space */
> -	u8	urg_data_pending : 1;	/* Urgent Data Pending */
> -	u8	urg_data_present : 1;	/* Urgent Data Present */
> -	u8	cons_curs_upd_req : 1;	/* cursor update requested */
> -	u8	failover_validation : 1;/* message replay due to failover */
> -	u8	reserved : 3;
> -#elif defined(__LITTLE_ENDIAN_BITFIELD)
> -	u8	reserved : 3;
> -	u8	failover_validation : 1;
> -	u8	cons_curs_upd_req : 1;
> -	u8	urg_data_present : 1;
> -	u8	urg_data_pending : 1;
> -	u8	write_blocked : 1;
> -#endif
> -};
> -
> -/* in host byte order */
> -union smc_host_cursor {	/* SMC cursor - an offset in an RMBE */
> -	struct {
> -		u16	reserved;
> -		u16	wrap;		/* window wrap sequence number */
> -		u32	count;		/* cursor (= offset) part */
> -	};
> -#ifdef KERNEL_HAS_ATOMIC64
> -	atomic64_t		acurs;	/* for atomic processing */
> -#else
> -	u64			acurs;	/* for atomic processing */
> -#endif
> -} __aligned(8);
> -
> -/* in host byte order, except for flag bitfields in network byte order */
> -struct smc_host_cdc_msg {		/* Connection Data Control message */
> -	struct smc_wr_rx_hdr		common; /* .type = 0xFE */
> -	u8				len;	/* length = 44 */
> -	u16				seqno;	/* connection seq # */
> -	u32				token;	/* alert_token */
> -	union smc_host_cursor		prod;		/* producer cursor */
> -	union smc_host_cursor		cons;		/* consumer cursor,
> -							 * piggy backed "ack"
> -							 */
> -	struct smc_cdc_producer_flags	prod_flags;	/* conn. tx/rx status */
> -	struct smc_cdc_conn_state_flags	conn_state_flags; /* peer conn. status*/
> -	u8				reserved[18];
> -} __aligned(8);
> -
> -enum smc_urg_state {
> -	SMC_URG_VALID	= 1,			/* data present */
> -	SMC_URG_NOTYET	= 2,			/* data pending */
> -	SMC_URG_READ	= 3,			/* data was already read */
> -};
> -
>   struct smc_mark_woken {
>   	bool woken;
>   	void *key;
>   	wait_queue_entry_t wait_entry;
>   };
>   
> -struct smc_connection {
> -	struct rb_node		alert_node;
> -	struct smc_link_group	*lgr;		/* link group of connection */
> -	struct smc_link		*lnk;		/* assigned SMC-R link */
> -	u32			alert_token_local; /* unique conn. id */
> -	u8			peer_rmbe_idx;	/* from tcp handshake */
> -	int			peer_rmbe_size;	/* size of peer rx buffer */
> -	atomic_t		peer_rmbe_space;/* remaining free bytes in peer
> -						 * rmbe
> -						 */
> -	int			rtoken_idx;	/* idx to peer RMB rkey/addr */
> -
> -	struct smc_buf_desc	*sndbuf_desc;	/* send buffer descriptor */
> -	struct smc_buf_desc	*rmb_desc;	/* RMBE descriptor */
> -	int			rmbe_size_short;/* compressed notation */
> -	int			rmbe_update_limit;
> -						/* lower limit for consumer
> -						 * cursor update
> -						 */
> -
> -	struct smc_host_cdc_msg	local_tx_ctrl;	/* host byte order staging
> -						 * buffer for CDC msg send
> -						 * .prod cf. TCP snd_nxt
> -						 * .cons cf. TCP sends ack
> -						 */
> -	union smc_host_cursor	local_tx_ctrl_fin;
> -						/* prod crsr - confirmed by peer
> -						 */
> -	union smc_host_cursor	tx_curs_prep;	/* tx - prepared data
> -						 * snd_max..wmem_alloc
> -						 */
> -	union smc_host_cursor	tx_curs_sent;	/* tx - sent data
> -						 * snd_nxt ?
> -						 */
> -	union smc_host_cursor	tx_curs_fin;	/* tx - confirmed by peer
> -						 * snd-wnd-begin ?
> -						 */
> -	atomic_t		sndbuf_space;	/* remaining space in sndbuf */
> -	u16			tx_cdc_seq;	/* sequence # for CDC send */
> -	u16			tx_cdc_seq_fin;	/* sequence # - tx completed */
> -	spinlock_t		send_lock;	/* protect wr_sends */
> -	atomic_t		cdc_pend_tx_wr; /* number of pending tx CDC wqe
> -						 * - inc when post wqe,
> -						 * - dec on polled tx cqe
> -						 */
> -	wait_queue_head_t	cdc_pend_tx_wq; /* wakeup on no cdc_pend_tx_wr*/
> -	atomic_t		tx_pushing;     /* nr_threads trying tx push */
> -	struct delayed_work	tx_work;	/* retry of smc_cdc_msg_send */
> -	u32			tx_off;		/* base offset in peer rmb */
> -
> -	struct smc_host_cdc_msg	local_rx_ctrl;	/* filled during event_handl.
> -						 * .prod cf. TCP rcv_nxt
> -						 * .cons cf. TCP snd_una
> -						 */
> -	union smc_host_cursor	rx_curs_confirmed; /* confirmed to peer
> -						    * source of snd_una ?
> -						    */
> -	union smc_host_cursor	urg_curs;	/* points at urgent byte */
> -	enum smc_urg_state	urg_state;
> -	bool			urg_tx_pend;	/* urgent data staged */
> -	bool			urg_rx_skip_pend;
> -						/* indicate urgent oob data
> -						 * read, but previous regular
> -						 * data still pending
> -						 */
> -	char			urg_rx_byte;	/* urgent byte */
> -	bool			tx_in_release_sock;
> -						/* flush pending tx data in
> -						 * sock release_cb()
> -						 */
> -	atomic_t		bytes_to_rcv;	/* arrived data,
> -						 * not yet received
> -						 */
> -	atomic_t		splice_pending;	/* number of spliced bytes
> -						 * pending processing
> -						 */
> -#ifndef KERNEL_HAS_ATOMIC64
> -	spinlock_t		acurs_lock;	/* protect cursors */
> -#endif
> -	struct work_struct	close_work;	/* peer sent some closing */
> -	struct work_struct	abort_work;	/* abort the connection */
> -	struct tasklet_struct	rx_tsklet;	/* Receiver tasklet for SMC-D */
> -	u8			rx_off;		/* receive offset:
> -						 * 0 for SMC-R, 32 for SMC-D
> -						 */
> -	u64			peer_token;	/* SMC-D token of peer */
> -	u8			killed : 1;	/* abnormal termination */
> -	u8			freed : 1;	/* normal termiation */
> -	u8			out_of_sync : 1; /* out of sync with peer */
> -};
> -
> -struct smc_sock {				/* smc sock container */
> -	struct sock		sk;
> -	struct socket		*clcsock;	/* internal tcp socket */
> -	void			(*clcsk_state_change)(struct sock *sk);
> -						/* original stat_change fct. */
> -	void			(*clcsk_data_ready)(struct sock *sk);
> -						/* original data_ready fct. */
> -	void			(*clcsk_write_space)(struct sock *sk);
> -						/* original write_space fct. */
> -	void			(*clcsk_error_report)(struct sock *sk);
> -						/* original error_report fct. */
> -	struct smc_connection	conn;		/* smc connection */
> -	struct smc_sock		*listen_smc;	/* listen parent */
> -	struct work_struct	connect_work;	/* handle non-blocking connect*/
> -	struct work_struct	tcp_listen_work;/* handle tcp socket accepts */
> -	struct work_struct	smc_listen_work;/* prepare new accept socket */
> -	struct list_head	accept_q;	/* sockets to be accepted */
> -	spinlock_t		accept_q_lock;	/* protects accept_q */
> -	bool			limit_smc_hs;	/* put constraint on handshake */
> -	bool			use_fallback;	/* fallback to tcp */
> -	int			fallback_rsn;	/* reason for fallback */
> -	u32			peer_diagnosis; /* decline reason from peer */
> -	atomic_t                queued_smc_hs;  /* queued smc handshakes */
> -	struct inet_connection_sock_af_ops		af_ops;
> -	const struct inet_connection_sock_af_ops	*ori_af_ops;
> -						/* original af ops */
> -	int			sockopt_defer_accept;
> -						/* sockopt TCP_DEFER_ACCEPT
> -						 * value
> -						 */
> -	u8			wait_close_tx_prepared : 1;
> -						/* shutdown wr or close
> -						 * started, waiting for unsent
> -						 * data to be sent
> -						 */
> -	u8			connect_nonblock : 1;
> -						/* non-blocking connect in
> -						 * flight
> -						 */
> -	struct mutex            clcsock_release_lock;
> -						/* protects clcsock of a listen
> -						 * socket
> -						 * */
> -};
> -
>   static inline struct smc_sock *smc_sk(const struct sock *sk)
>   {
>   	return (struct smc_sock *)sk;

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH bpf-next v2 1/2] net/smc: Introduce BPF injection capability for SMC
  2023-02-27  7:58   ` Wenjia Zhang
@ 2023-02-28  8:50     ` D. Wythe
  2023-02-28  8:58       ` Wenjia Zhang
  0 siblings, 1 reply; 14+ messages in thread
From: D. Wythe @ 2023-02-28  8:50 UTC (permalink / raw)
  To: Wenjia Zhang, kgraul, jaka, ast, daniel, andrii
  Cc: kuba, davem, netdev, linux-s390, linux-rdma, bpf



On 2/27/23 3:58 PM, Wenjia Zhang wrote:
> 
> 
> On 21.02.23 13:18, D. Wythe wrote:
>> From: "D. Wythe" <alibuda@linux.alibaba.com>
>>
>> This PATCH attempts to introduce BPF injection capability for SMC.
>> As we all know that the SMC protocol is not suitable for all scenarios,
>> especially for short-lived. However, for most applications, they cannot
>> guarantee that there are no such scenarios at all. Therefore, apps
>> may need some specific strategies to decide shall we need to use SMC
>> or not, for example, apps can limit the scope of the SMC to a specific
>> IP address or port.

...

>> +static int bpf_smc_passive_sk_ops_check_member(const struct btf_type *t,
>> +                           const struct btf_member *member,
>> +                           const struct bpf_prog *prog)
>> +{
>> +    return 0;
>> +}
> 
> Please check the right pointer type of check_member:
> 
> int (*check_member)(const struct btf_type *t,
>              const struct btf_member *member);
> 

Hi Wenjia,

That's weird. the prototype of check_member on
latested net-next and bpf-next is:

struct bpf_struct_ops {
	const struct bpf_verifier_ops *verifier_ops;
	int (*init)(struct btf *btf);
	int (*check_member)(const struct btf_type *t,
			    const struct btf_member *member,
			    const struct bpf_prog *prog);
	int (*init_member)(const struct btf_type *t,
			   const struct btf_member *member,
			   void *kdata, const void *udata);
	int (*reg)(void *kdata);
	void (*unreg)(void *kdata);
	const struct btf_type *type;
	const struct btf_type *value_type;
	const char *name;
	struct btf_func_model func_models[BPF_STRUCT_OPS_MAX_NR_MEMBERS];
	u32 type_id;
	u32 value_id;
};

I wonder if there is any code out of sync?

And also I found that this patch is too complex and mixed with the code of two modules (smc & bpf).
I will split them out for easier review today.

Best wishes
D. Wythe


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH bpf-next v2 1/2] net/smc: Introduce BPF injection capability for SMC
  2023-02-28  8:50     ` D. Wythe
@ 2023-02-28  8:58       ` Wenjia Zhang
  0 siblings, 0 replies; 14+ messages in thread
From: Wenjia Zhang @ 2023-02-28  8:58 UTC (permalink / raw)
  To: D. Wythe, kgraul, jaka, ast, daniel, andrii
  Cc: kuba, davem, netdev, linux-s390, linux-rdma, bpf



On 28.02.23 09:50, D. Wythe wrote:
> 
> 
> On 2/27/23 3:58 PM, Wenjia Zhang wrote:
>>
>>
>> On 21.02.23 13:18, D. Wythe wrote:
>>> From: "D. Wythe" <alibuda@linux.alibaba.com>
>>>
>>> This PATCH attempts to introduce BPF injection capability for SMC.
>>> As we all know that the SMC protocol is not suitable for all scenarios,
>>> especially for short-lived. However, for most applications, they cannot
>>> guarantee that there are no such scenarios at all. Therefore, apps
>>> may need some specific strategies to decide shall we need to use SMC
>>> or not, for example, apps can limit the scope of the SMC to a specific
>>> IP address or port.
> 
> ...
> 
>>> +static int bpf_smc_passive_sk_ops_check_member(const struct btf_type 
>>> *t,
>>> +                           const struct btf_member *member,
>>> +                           const struct bpf_prog *prog)
>>> +{
>>> +    return 0;
>>> +}
>>
>> Please check the right pointer type of check_member:
>>
>> int (*check_member)(const struct btf_type *t,
>>              const struct btf_member *member);
>>
> 
> Hi Wenjia,
> 
> That's weird. the prototype of check_member on
> latested net-next and bpf-next is:
> 
> struct bpf_struct_ops {
>      const struct bpf_verifier_ops *verifier_ops;
>      int (*init)(struct btf *btf);
>      int (*check_member)(const struct btf_type *t,
>                  const struct btf_member *member,
>                  const struct bpf_prog *prog);
>      int (*init_member)(const struct btf_type *t,
>                 const struct btf_member *member,
>                 void *kdata, const void *udata);
>      int (*reg)(void *kdata);
>      void (*unreg)(void *kdata);
>      const struct btf_type *type;
>      const struct btf_type *value_type;
>      const char *name;
>      struct btf_func_model func_models[BPF_STRUCT_OPS_MAX_NR_MEMBERS];
>      u32 type_id;
>      u32 value_id;
> };
> 
> I wonder if there is any code out of sync?
> 
> And also I found that this patch is too complex and mixed with the code 
> of two modules (smc & bpf).
> I will split them out for easier review today.
> 
> Best wishes
> D. Wythe
> 

Good question, the base I used is the current torvalds tree, maybe some 
code there is still not up-to-date.

But it would be great if you can split them out for better review.

Thanks
Wenjia

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH bpf-next v2 1/2] net/smc: Introduce BPF injection capability for SMC
  2023-02-22 21:40   ` Martin KaFai Lau
@ 2023-03-09 11:49     ` D. Wythe
  2023-03-23 20:46       ` Martin KaFai Lau
  0 siblings, 1 reply; 14+ messages in thread
From: D. Wythe @ 2023-03-09 11:49 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: kuba, davem, netdev, linux-s390, linux-rdma, bpf, kgraul, wenjia,
	jaka, ast, daniel, andrii


On 2/23/23 5:40 AM, Martin KaFai Lau wrote:
> On 2/21/23 4:18 AM, D. Wythe wrote:
>> From: "D. Wythe" <alibuda@linux.alibaba.com>
>>
>> This PATCH attempts to introduce BPF injection capability for SMC.
>> As we all know that the SMC protocol is not suitable for all scenarios,
>> especially for short-lived. However, for most applications, they cannot
>> guarantee that there are no such scenarios at all. Therefore, apps
>> may need some specific strategies to decide shall we need to use SMC
>> or not, for example, apps can limit the scope of the SMC to a specific
>> IP address or port.
>>
>> Based on the consideration of transparent replacement, we hope that apps
>> can remain transparent even if they need to formulate some specific
>> strategies for SMC using. That is, do not need to recompile their code.
>>
>> On the other hand, we need to ensure the scalability of strategies
>> implementation. Although it is simple to use socket options or sysctl,
>> it will bring more complexity to subsequent expansion.
>>
>> Fortunately, BPF can solve these concerns very well, users can write
>> thire own strategies in eBPF to choose whether to use SMC or not.
>> And it's quite easy for them to modify their strategies in the future.
>>
>> This PATCH implement injection capability for SMC via struct_ops.
>> In that way, we can add new injection scenarios in the future.
>
> I have never used smc. I can only comment at its high level usage and 
> details on the bpf side.


Hi Martin,

Thank you very much for your comments and I'm very sorry for my mistakes.

>
>>
>> Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
>> ---
>>   include/linux/btf_ids.h           |  15 +++
>>   include/net/smc.h                 | 254 
>> ++++++++++++++++++++++++++++++++++++++
>>   kernel/bpf/bpf_struct_ops_types.h |   4 +
>>   net/Makefile                      |   5 +
>>   net/smc/af_smc.c                  |  10 +-
>>   net/smc/bpf_smc_struct_ops.c      | 146 ++++++++++++++++++++++
>>   net/smc/smc.h                     | 220 
>> ---------------------------------
>>   7 files changed, 433 insertions(+), 221 deletions(-)
>>   create mode 100644 net/smc/bpf_smc_struct_ops.c
>>
>> diff --git a/include/linux/btf_ids.h b/include/linux/btf_ids.h
>> index 3a4f7cd..25eab1e 100644
>> --- a/include/linux/btf_ids.h
>> +++ b/include/linux/btf_ids.h
>> @@ -264,6 +264,21 @@ enum {
>>   MAX_BTF_TRACING_TYPE,
>>   };
>>   +#if IS_ENABLED(CONFIG_SMC)
>> +#define BTF_SMC_TYPE_xxx        \
>> +    BTF_SMC_TYPE(BTF_SMC_TYPE_SOCK, smc_sock)        \
>> +    BTF_SMC_TYPE(BTF_SMC_TYPE_CONNECTION, smc_connection)    \
>> +    BTF_SMC_TYPE(BTF_SMC_TYPE_HOST_CURSOR, smc_host_cursor)
>> +
>> +enum {
>> +#define BTF_SMC_TYPE(name, type) name,
>> +BTF_SMC_TYPE_xxx
>> +#undef BTF_SMC_TYPE
>> +MAX_BTF_SMC_TYPE,
>> +};
>> +extern u32 btf_smc_ids[];
>
> Do all these need to be in btf_ids.h?

My original intention is to do some security checks via btf_smc_ids,

but since it is not implemented at present, so it is not necessary here.

>
>> +#endif
>> +
>>   extern u32 btf_tracing_ids[];
>>   extern u32 bpf_cgroup_btf_id[];
>>   extern u32 bpf_local_storage_map_btf_id[];
>> diff --git a/include/net/smc.h b/include/net/smc.h
>> index 597cb93..912c269 100644
>> --- a/include/net/smc.h
>> +++ b/include/net/smc.h
>
> It is not obvious to me why the header moving is needed (from 
> net/smc/smc.h to include/net/smc.h ?). This can use some comment in 
> the commit message and please break it out to another patch.

Got it, , I have finished the splitting.

>
> [ ... ]
>
>> --- a/net/Makefile
>> +++ b/net/Makefile
>> @@ -52,6 +52,11 @@ obj-$(CONFIG_TIPC)        += tipc/
>>   obj-$(CONFIG_NETLABEL)        += netlabel/
>>   obj-$(CONFIG_IUCV)        += iucv/
>>   obj-$(CONFIG_SMC)        += smc/
>> +ifneq ($(CONFIG_SMC),)
>> +ifeq ($(CONFIG_BPF_SYSCALL),y)
>> +obj-y                += smc/bpf_smc_struct_ops.o
>
> This will ensure bpf_smc_struct_ops.c compiled as builtin even when 
> smc is compiled as module?

Yes,  smc allow compiled as module.

We are also struggling here. If you have a better way, please let me 
know. 😁

>
>> diff --git a/net/smc/bpf_smc_struct_ops.c b/net/smc/bpf_smc_struct_ops.c
>> new file mode 100644
>> index 0000000..a5989b6
>> --- /dev/null
>> +++ b/net/smc/bpf_smc_struct_ops.c
>> @@ -0,0 +1,146 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +
>> +#include <linux/kernel.h>
>> +#include <linux/bpf_verifier.h>
>> +#include <linux/btf_ids.h>
>> +#include <linux/bpf.h>
>> +#include <linux/btf.h>
>> +#include <net/sock.h>
>> +#include <net/smc.h>
>> +
>> +extern struct bpf_struct_ops smc_sock_negotiator_ops;
>> +
>> +DEFINE_RWLOCK(smc_sock_negotiator_ops_rwlock);
>> +struct smc_sock_negotiator_ops *negotiator;
>
> Is it sure one global negotiator (policy) will work for all smc_sock? 
> or each sk should have its own negotiator and the negotiator is 
> selected by setsockopt.
>
This is really a good question,  we can really consider adding an 
independent negotiator for each sock.

But just like the TCP congestion control , the global negotiator can be 
used for sock without

special requirements.


>> +
>> +/* convert sk to smc_sock */
>> +static inline struct smc_sock *smc_sk(const struct sock *sk)
>> +{
>> +    return (struct smc_sock *)sk;
>> +}
>> +
>> +/* register ops */
>> +static inline void smc_reg_passive_sk_ops(struct 
>> smc_sock_negotiator_ops *ops)
>> +{
>> +    write_lock_bh(&smc_sock_negotiator_ops_rwlock);
>> +    negotiator = ops;
>
> What happens to the existing negotiator?

What if we return a failure when the negotiator already exists ?

>
>> + write_unlock_bh(&smc_sock_negotiator_ops_rwlock);
>> +}
>> +
>> +/* unregister ops */
>> +static inline void smc_unreg_passive_sk_ops(struct 
>> smc_sock_negotiator_ops *ops)
>> +{
>> +    write_lock_bh(&smc_sock_negotiator_ops_rwlock);
>> +    if (negotiator == ops)
>> +        negotiator = NULL;
>> +    write_unlock_bh(&smc_sock_negotiator_ops_rwlock);
>> +}
>> +
>> +int smc_sock_should_select_smc(const struct smc_sock *smc)
>> +{
>> +    int ret = SK_PASS;
>> +
>> +    read_lock_bh(&smc_sock_negotiator_ops_rwlock);
>> +    if (negotiator && negotiator->negotiate)
>> +        ret = negotiator->negotiate((struct smc_sock *)smc);
>> +    read_unlock_bh(&smc_sock_negotiator_ops_rwlock);
>> +    return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(smc_sock_should_select_smc);
>> +
>> +void smc_sock_perform_collecting_info(const struct smc_sock *smc, 
>> int timing)
>> +{
>> +    read_lock_bh(&smc_sock_negotiator_ops_rwlock);
>> +    if (negotiator && negotiator->collect_info)
>> +        negotiator->collect_info((struct smc_sock *)smc, timing);
>> +    read_unlock_bh(&smc_sock_negotiator_ops_rwlock);
>> +}
>> +EXPORT_SYMBOL_GPL(smc_sock_perform_collecting_info);
>> +
>> +/* define global smc ID for smc_struct_ops */
>> +BTF_ID_LIST_GLOBAL(btf_smc_ids, MAX_BTF_SMC_TYPE)
>
> How is btf_smc_ids used?

Yes, it is useless here for the time being. I will remove them in the 
new version.

>
>> +#define BTF_SMC_TYPE(name, type) BTF_ID(struct, type)
>> +BTF_SMC_TYPE_xxx
>> +#undef BTF_SMC_TYPE
>> +
>

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH bpf-next v2 2/2] bpf/selftests: add selftest for SMC bpf capability
  2023-02-22 22:35   ` Martin KaFai Lau
@ 2023-03-09 11:58     ` D. Wythe
  0 siblings, 0 replies; 14+ messages in thread
From: D. Wythe @ 2023-03-09 11:58 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: kuba, davem, netdev, linux-s390, linux-rdma, bpf, kgraul, wenjia,
	jaka, ast, daniel, andrii


On 2/23/23 6:35 AM, Martin KaFai Lau wrote:
> On 2/21/23 4:18 AM, D. Wythe wrote:
>> From: "D. Wythe" <alibuda@linux.alibaba.com>
>>
>> This PATCH adds a tiny selftest for SMC bpf capability,
>> making decisions on whether to use SMC by collecting
>> certain information from kernel smc sock.
>>
>> Follow the steps below to run this test.
>>
>> make -C tools/testing/selftests/bpf
>> cd tools/testing/selftests/bpf
>> sudo ./test_progs -t bpf_smc
>>
>> Results shows:
>> 18      bpf_smc:OK
>> Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
>>
>> Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
>> ---
>>   tools/testing/selftests/bpf/prog_tests/bpf_smc.c |  39 +++
>>   tools/testing/selftests/bpf/progs/bpf_smc.c      | 315 
>> +++++++++++++++++++++++
>>   2 files changed, 354 insertions(+)
>>   create mode 100644 tools/testing/selftests/bpf/prog_tests/bpf_smc.c
>>   create mode 100644 tools/testing/selftests/bpf/progs/bpf_smc.c
>>
>> diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_smc.c 
>> b/tools/testing/selftests/bpf/prog_tests/bpf_smc.c
>> new file mode 100644
>> index 0000000..b143932
>> --- /dev/null
>> +++ b/tools/testing/selftests/bpf/prog_tests/bpf_smc.c
>> @@ -0,0 +1,39 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/* Copyright (c) 2019 Facebook */
>
> copy-and-paste left-over...

Sorry for that, but it might be more appropriate to delete it here... 😂

>
>> diff --git a/tools/testing/selftests/bpf/progs/bpf_smc.c 
>> b/tools/testing/selftests/bpf/progs/bpf_smc.c
>> new file mode 100644
>> index 0000000..78c7976
>> --- /dev/null
>> +++ b/tools/testing/selftests/bpf/progs/bpf_smc.c
>> @@ -0,0 +1,315 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +
>> +#include <linux/bpf.h>
>> +#include <linux/stddef.h>
>> +#include <linux/smc.h>
>> +#include <stdbool.h>
>> +#include <linux/types.h>
>> +#include <bpf/bpf_helpers.h>
>> +#include <bpf/bpf_core_read.h>
>> +#include <bpf/bpf_tracing.h>
>> +
>> +#define BPF_STRUCT_OPS(name, args...) \
>> +    SEC("struct_ops/"#name) \
>> +    BPF_PROG(name, args)
>> +
>> +#define SMC_LISTEN        (10)
>> +#define SMC_SOCK_CLOSED_TIMING    (0)
>> +extern unsigned long CONFIG_HZ __kconfig;
>> +#define HZ CONFIG_HZ
>> +
>> +char _license[] SEC("license") = "GPL";
>> +#define max(a, b) ((a) > (b) ? (a) : (b))
>> +
>> +struct sock_common {
>> +    unsigned char    skc_state;
>> +    __u16    skc_num;
>> +} __attribute__((preserve_access_index));
>> +
>> +struct sock {
>> +    struct sock_common    __sk_common;
>> +    int    sk_sndbuf;
>> +} __attribute__((preserve_access_index));
>> +
>> +struct inet_sock {
>> +    struct sock    sk;
>> +} __attribute__((preserve_access_index));
>> +
>> +struct inet_connection_sock {
>> +    struct inet_sock    icsk_inet;
>> +} __attribute__((preserve_access_index));
>> +
>> +struct tcp_sock {
>> +    struct inet_connection_sock    inet_conn;
>> +    __u32    rcv_nxt;
>> +    __u32    snd_nxt;
>> +    __u32    snd_una;
>> +    __u32    delivered;
>> +    __u8    syn_data:1,    /* SYN includes data */
>> +        syn_fastopen:1,    /* SYN includes Fast Open option */
>> +        syn_fastopen_exp:1,/* SYN includes Fast Open exp. option */
>> +        syn_fastopen_ch:1, /* Active TFO re-enabling probe */
>> +        syn_data_acked:1,/* data in SYN is acked by SYN-ACK */
>> +        save_syn:1,    /* Save headers of SYN packet */
>> +        is_cwnd_limited:1,/* forward progress limited by snd_cwnd? */
>> +        syn_smc:1;    /* SYN includes SMC */
>> +} __attribute__((preserve_access_index));
>> +
>> +struct socket {
>> +    struct sock *sk;
>> +} __attribute__((preserve_access_index));
>
> All these tcp_sock, socket, inet_sock definitions can go away if it 
> includes "vmlinux.h". tcp_ca_write_sk_pacing.c is a better example to 
> follow. Try to define the "common" (eg. tcp, tc...etc) missing macros 
> in bpf_tracing_net.h. The smc specific macros can stay in this file.

Got it, i'll fix this.

>> +static inline struct smc_prediction *smc_prediction_get(const struct 
>> smc_sock *smc,
>> +                            const struct tcp_sock *tp, __u64 tstamp)
>> +{
>> +    struct smc_prediction zero = {}, *smc_predictor;
>> +    __u16 key;
>> +    __u32 gap;
>> +    int err;
>> +
>> +    err = bpf_core_read(&key, sizeof(__u16), 
>> &tp->inet_conn.icsk_inet.sk.__sk_common.skc_num);
>> +    if (err)
>> +        return NULL;
>> +
>> +    /* BAD key */
>> +    if (key == 0)
>> +        return NULL;
>> +
>> +    smc_predictor = bpf_map_lookup_elem(&negotiator_map, &key);
>> +    if (!smc_predictor) {
>> +        zero.start_tstamp = bpf_jiffies64();
>> +        zero.pacing_delta = SMC_PREDICTION_MIN_PACING_DELTA;
>> +        bpf_map_update_elem(&negotiator_map, &key, &zero, 0);
>> +        smc_predictor = bpf_map_lookup_elem(&negotiator_map, &key);
>> +        if (!smc_predictor)
>> +            return NULL;
>> +    }
>> +
>> +    if (tstamp) {
>> +        bpf_spin_lock(&smc_predictor->lock);
>> +        gap = (tstamp - smc_predictor->start_tstamp) / 
>> smc_predictor->pacing_delta;
>> +        /* new splice */
>> +        if (gap > 0) {
>> +            smc_predictor->start_tstamp = tstamp;
>> +            smc_predictor->last_rate_of_lcc =
>> +                (smc_prediction_calt_rate(smc_predictor) * 7) >> (2 
>> + gap);
>> +            smc_predictor->closed_long_cc = 0;
>> +            smc_predictor->closed_total_cc = 0;
>> +            smc_predictor->incoming_long_cc = 0;
>> +        }
>> +        bpf_spin_unlock(&smc_predictor->lock);
>> +    }
>> +    return smc_predictor;
>> +}
>> +
>> +/* BPF struct ops for smc protocol negotiator */
>> +struct smc_sock_negotiator_ops {
>> +    /* ret for negotiate */
>> +    int (*negotiate)(struct smc_sock *smc);
>> +
>> +    /* info gathering timing */
>> +    void (*collect_info)(struct smc_sock *smc, int timing);
>> +};
>> +
>> +int BPF_STRUCT_OPS(bpf_smc_negotiate, struct smc_sock *smc)
>> +{
>> +    struct smc_prediction *smc_predictor;
>> +    struct tcp_sock *tp;
>> +    struct sock *clcsk;
>> +    int ret = SK_DROP;
>> +    __u32 rate = 0;
>> +
>> +    /* Only make decison during listen */
>> +    if (smc->sk.__sk_common.skc_state != SMC_LISTEN)
>> +        return SK_PASS;
>> +
>> +    clcsk = BPF_CORE_READ(smc, clcsock, sk);
>
> Instead of using bpf_core_read here, why not directly gets the clcsk 
> like the 'smc->sk.__sk_common.skc_state' above.
>
>> +    if (!clcsk)
>> +        goto error;
>> +
>> +    tp = tcp_sk(clcsk);
>
> There is a bpf_skc_to_tcp_sock(). Give it a try after changing the 
> above BPF_CORE_READ.

Copy that!  thanks.

>
>> +    if (!tp)
>> +        goto error;
>> +
>> +    smc_predictor = smc_prediction_get(smc, tp, bpf_jiffies64());
>> +    if (!smc_predictor)
>> +        return SK_PASS;
>> +
>> +    bpf_spin_lock(&smc_predictor->lock);
>> +
>> +    if (smc_predictor->incoming_long_cc == 0)
>> +        goto out_locked_pass;
>> +
>> +    if (smc_predictor->incoming_long_cc > 
>> SMC_PREDICTION_MAX_LONGCC_PER_SPLICE) {
>> +        ret = 100;
>> +        goto out_locked_drop;
>> +    }
>> +
>> +    rate = smc_prediction_calt_rate(smc_predictor);
>> +    if (rate < SMC_PREDICTION_LONGCC_RATE_THRESHOLD) {
>> +        ret = 200;
>> +        goto out_locked_drop;
>> +    }
>> +out_locked_pass:
>> +    smc_predictor->incoming_long_cc++;
>> +    bpf_spin_unlock(&smc_predictor->lock);
>> +    return SK_PASS;
>> +out_locked_drop:
>> +    bpf_spin_unlock(&smc_predictor->lock);
>> +error:
>> +    return SK_DROP;
>> +}
>> +
>> +void BPF_STRUCT_OPS(bpf_smc_collect_info, struct smc_sock *smc, int 
>> timing)
>
> Try to stay with SEC("struct_ops/...") void BPF_PROG(....)

Got it.  I have finished this modification in v4.



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH bpf-next v2 1/2] net/smc: Introduce BPF injection capability for SMC
  2023-03-09 11:49     ` D. Wythe
@ 2023-03-23 20:46       ` Martin KaFai Lau
  2023-03-24  4:08         ` D. Wythe
  0 siblings, 1 reply; 14+ messages in thread
From: Martin KaFai Lau @ 2023-03-23 20:46 UTC (permalink / raw)
  To: D. Wythe
  Cc: kuba, davem, netdev, linux-s390, linux-rdma, bpf, kgraul, wenjia,
	jaka, ast, daniel, andrii

On 3/9/23 3:49 AM, D. Wythe wrote:
>>> --- /dev/null
>>> +++ b/net/smc/bpf_smc_struct_ops.c
>>> @@ -0,0 +1,146 @@
>>> +// SPDX-License-Identifier: GPL-2.0
>>> +
>>> +#include <linux/kernel.h>
>>> +#include <linux/bpf_verifier.h>
>>> +#include <linux/btf_ids.h>
>>> +#include <linux/bpf.h>
>>> +#include <linux/btf.h>
>>> +#include <net/sock.h>
>>> +#include <net/smc.h>
>>> +
>>> +extern struct bpf_struct_ops smc_sock_negotiator_ops;
>>> +
>>> +DEFINE_RWLOCK(smc_sock_negotiator_ops_rwlock);
>>> +struct smc_sock_negotiator_ops *negotiator;
>>
>> Is it sure one global negotiator (policy) will work for all smc_sock? or each 
>> sk should have its own negotiator and the negotiator is selected by setsockopt.
>>
> This is really a good question,  we can really consider adding an independent 
> negotiator for each sock.
> 
> But just like the TCP congestion control , the global negotiator can be used for 
> sock without
> 
> special requirements.

It is different from TCP congestion control (CC). TCP CC has a global default 
but each sk can select what tcp-cc to use and there can be multiple tcp-cc 
registered under different names.

It sounds like smc using tcp_sock should be able to have different negotiator 
also (eg. based on dst IP or some other tcp connection characteristic). The 
tcp-cc registration, per-sock selection and the rcu_read_lock+refcnt are well 
understood and there are other bpf infrastructure to support the per sock tcp-cc 
selection (like bpf_setsockopt).

For the network stack, there is little reason other af_* should not follow at 
the beginning considering the infrastructure has already been built. The one 
single global negotiator and reader/writer lock in this patch reads like an 
effort wanted to give it a try and see if it will be useful before implementing 
the whole thing. It is better to keep it off the tree for now until it is more 
ready.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH bpf-next v2 1/2] net/smc: Introduce BPF injection capability for SMC
  2023-03-23 20:46       ` Martin KaFai Lau
@ 2023-03-24  4:08         ` D. Wythe
  2023-03-24 23:27           ` Martin KaFai Lau
  0 siblings, 1 reply; 14+ messages in thread
From: D. Wythe @ 2023-03-24  4:08 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: kuba, davem, netdev, linux-s390, linux-rdma, bpf, kgraul, wenjia,
	jaka, ast, daniel, andrii



On 3/24/23 4:46 AM, Martin KaFai Lau wrote:
> On 3/9/23 3:49 AM, D. Wythe wrote:
>>>> --- /dev/null
>>>> +++ b/net/smc/bpf_smc_struct_ops.c
>>>> @@ -0,0 +1,146 @@
>>>> +// SPDX-License-Identifier: GPL-2.0
>>>> +
>>>> +#include <linux/kernel.h>
>>>> +#include <linux/bpf_verifier.h>
>>>> +#include <linux/btf_ids.h>
>>>> +#include <linux/bpf.h>
>>>> +#include <linux/btf.h>
>>>> +#include <net/sock.h>
>>>> +#include <net/smc.h>
>>>> +
>>>> +extern struct bpf_struct_ops smc_sock_negotiator_ops;
>>>> +
>>>> +DEFINE_RWLOCK(smc_sock_negotiator_ops_rwlock);
>>>> +struct smc_sock_negotiator_ops *negotiator;
>>>
>>> Is it sure one global negotiator (policy) will work for all 
>>> smc_sock? or each sk should have its own negotiator and the 
>>> negotiator is selected by setsockopt.
>>>
>> This is really a good question,  we can really consider adding an 
>> independent negotiator for each sock.
>>
>> But just like the TCP congestion control , the global negotiator can 
>> be used for sock without
>>
>> special requirements.
>
> It is different from TCP congestion control (CC). TCP CC has a global 
> default but each sk can select what tcp-cc to use and there can be 
> multiple tcp-cc registered under different names.
>
> It sounds like smc using tcp_sock should be able to have different 
> negotiator also (eg. based on dst IP or some other tcp connection 
> characteristic). The tcp-cc registration, per-sock selection and the 
> rcu_read_lock+refcnt are well understood and there are other bpf 
> infrastructure to support the per sock tcp-cc selection (like 
> bpf_setsockopt).
>
> For the network stack, there is little reason other af_* should not 
> follow at the beginning considering the infrastructure has already 
> been built. The one single global negotiator and reader/writer lock in 
> this patch reads like an effort wanted to give it a try and see if it 
> will be useful before implementing the whole thing. It is better to 
> keep it off the tree for now until it is more ready.

Hi Martin,

Thank you very much for your comments. I have indeed removed global 
negotiator from my latest implementation.

The latest design is that users can register a negotiator implementation 
indexed by name, smc_sock can use bpf_setsockopt to specify
whether a specific negotiation implementation is required via name. If 
there are no settings, there will be no negotiators.

What do you think?

In addition, I am very sorry that I have not issued my implementation 
for such a long time, and I have encountered some problems with the 
implementation because
the SMC needs to be built as kernel module, I have struggled with the 
bpf_setsockopt implementation, and there are some new self-testes that 
need to be written.

However, I believe that I can send a new version as soon as possible.


Best wishes
D. Wythe






^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH bpf-next v2 1/2] net/smc: Introduce BPF injection capability for SMC
  2023-03-24  4:08         ` D. Wythe
@ 2023-03-24 23:27           ` Martin KaFai Lau
  2023-04-03  8:21             ` D. Wythe
  0 siblings, 1 reply; 14+ messages in thread
From: Martin KaFai Lau @ 2023-03-24 23:27 UTC (permalink / raw)
  To: D. Wythe
  Cc: kuba, davem, netdev, linux-s390, linux-rdma, bpf, kgraul, wenjia,
	jaka, ast, daniel, andrii

On 3/23/23 9:08 PM, D. Wythe wrote:
> 
> The latest design is that users can register a negotiator implementation indexed 
> by name, smc_sock can use bpf_setsockopt to specify
> whether a specific negotiation implementation is required via name. If there are 
> no settings, there will be no negotiators.
> 
> What do you think?

tbh, bpf_setsockopt is many steps away. It needs to begin with a syscall 
setsockopt first. There is little reason it can only be done with a bpf prog. 
and how does the user know which negotiator a smc sock is using? Currently, ss 
can learn the tcp-cc of a sk.

~~~~~~~~

If this effort is serious, the code quality has to be much improved. The obvious 
bug and unused variables make this set at most a RFC.

 From the bpf perspective, it is ok-ish to start with a global negotiator first 
and skip the setsockopt details for now. However, it needs to be have a name. 
The new link_update 
(https://lore.kernel.org/bpf/20230323032405.3735486-1-kuifeng@meta.com/) has to 
work also. The struct_ops is rcu reader safe, so leverage it whenever it can 
instead of the read/write lock. It is how struct_ops work for tcp, so try to 
stay consistent as much as possible in the networking stack.

> 
> In addition, I am very sorry that I have not issued my implementation for such a 
> long time, and I have encountered some problems with the implementation because
> the SMC needs to be built as kernel module, I have struggled with the 
> bpf_setsockopt implementation, and there are some new self-testes that need to 
> be written.
> 

Regarding compiling as module,

+ifneq ($(CONFIG_SMC),)
+ifeq ($(CONFIG_BPF_SYSCALL),y)
+obj-y				+= smc/bpf_smc_struct_ops.o
+endif

struct_ops does not support module now. It is on the todo list. The 
bpf_smc_struct_ops.o above can only be used when CONFIG_SMC=y. Otherwise, the 
bpf_smc_struct_ops is always built in while most users will never load the smc 
module.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH bpf-next v2 1/2] net/smc: Introduce BPF injection capability for SMC
  2023-03-24 23:27           ` Martin KaFai Lau
@ 2023-04-03  8:21             ` D. Wythe
  0 siblings, 0 replies; 14+ messages in thread
From: D. Wythe @ 2023-04-03  8:21 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: kuba, davem, netdev, linux-s390, linux-rdma, bpf, kgraul, wenjia,
	jaka, ast, daniel, andrii


Hi Martin,

Sorry to have been responding so late,  I've been working on the 
link_update you mentioned in last week,
I have completed the support and testing of the related functions of it. 
and it is expected to be released in the
next few days.

As you mentioned, I do have much experience in kernel network 
development, so I plan to resend the PATCH in the form of RFC.
I really hope to receive your suggestions in next serials. Thank you.😉

Best wishes.
D. Wythe


On 3/25/23 7:27 AM, Martin KaFai Lau wrote:
> On 3/23/23 9:08 PM, D. Wythe wrote:
>>
>> The latest design is that users can register a negotiator 
>> implementation indexed by name, smc_sock can use bpf_setsockopt to 
>> specify
>> whether a specific negotiation implementation is required via name. 
>> If there are no settings, there will be no negotiators.
>>
>> What do you think?
>
> tbh, bpf_setsockopt is many steps away. It needs to begin with a 
> syscall setsockopt first. There is little reason it can only be done 
> with a bpf prog. and how does the user know which negotiator a smc 
> sock is using? Currently, ss can learn the tcp-cc of a sk.
>
> ~~~~~~~~
>
> If this effort is serious, the code quality has to be much improved. 
> The obvious bug and unused variables make this set at most a RFC.
>
> From the bpf perspective, it is ok-ish to start with a global 
> negotiator first and skip the setsockopt details for now. However, it 
> needs to be have a name. The new link_update 
> (https://lore.kernel.org/bpf/20230323032405.3735486-1-kuifeng@meta.com/) 
> has to work also. The struct_ops is rcu reader safe, so leverage it 
> whenever it can instead of the read/write lock. It is how struct_ops 
> work for tcp, so try to stay consistent as much as possible in the 
> networking stack.
>
>>
>> In addition, I am very sorry that I have not issued my implementation 
>> for such a long time, and I have encountered some problems with the 
>> implementation because
>> the SMC needs to be built as kernel module, I have struggled with the 
>> bpf_setsockopt implementation, and there are some new self-testes 
>> that need to be written.
>>
>
> Regarding compiling as module,
>
> +ifneq ($(CONFIG_SMC),)
> +ifeq ($(CONFIG_BPF_SYSCALL),y)
> +obj-y                += smc/bpf_smc_struct_ops.o
> +endif
>
> struct_ops does not support module now. It is on the todo list. The 
> bpf_smc_struct_ops.o above can only be used when CONFIG_SMC=y. 
> Otherwise, the bpf_smc_struct_ops is always built in while most users 
> will never load the smc module.


^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2023-04-03  8:21 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-02-21 12:18 [PATCH bpf-next v2 0/2] net/smc: Introduce BPF injection capability D. Wythe
2023-02-21 12:18 ` [PATCH bpf-next v2 1/2] net/smc: Introduce BPF injection capability for SMC D. Wythe
2023-02-22 21:40   ` Martin KaFai Lau
2023-03-09 11:49     ` D. Wythe
2023-03-23 20:46       ` Martin KaFai Lau
2023-03-24  4:08         ` D. Wythe
2023-03-24 23:27           ` Martin KaFai Lau
2023-04-03  8:21             ` D. Wythe
2023-02-27  7:58   ` Wenjia Zhang
2023-02-28  8:50     ` D. Wythe
2023-02-28  8:58       ` Wenjia Zhang
2023-02-21 12:18 ` [PATCH bpf-next v2 2/2] bpf/selftests: add selftest for SMC bpf capability D. Wythe
2023-02-22 22:35   ` Martin KaFai Lau
2023-03-09 11:58     ` D. Wythe

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).