bpf.vger.kernel.org archive mirror
* [PATCH bpf-next 0/8] PROG_TEST_RUN support for sk_lookup programs
From: Lorenz Bauer @ 2021-02-16 10:57 UTC
  To: ast, daniel, andrii, jakub; +Cc: kernel-team, bpf, netdev, Lorenz Bauer

We don't have PROG_TEST_RUN support for sk_lookup programs at the
moment. So far this hasn't been a problem, since we can run our
tests in a separate network namespace. For benchmarking it's nice
to have PROG_TEST_RUN, so I've gone and implemented it.

Multiple sk_lookup programs can be attached at once to the same
netns. This can't be expressed with the current PROG_TEST_RUN
API, so I'm proposing to extend it with an array of prog_fd.

Patches 1-2 are clean-ups. Patches 3-4 add the new UAPI and
implement PROG_TEST_RUN for sk_lookup. Patch 5 adds a new
libbpf function for testing multiple programs. Patches 6-8
add tests.

Andrii, for patch 5 I decided on the following API:

    int bpf_prog_test_run_array(__u32 *prog_fds, __u32 prog_fds_cnt,
                                struct bpf_test_run_opts *opts)

To be consistent with the rest of libbpf it would be better
to take int *prog_fds, but then the function would have to
copy and convert the array on platforms where

    sizeof(int) != sizeof(__u32)

Please let me know what your preference is.
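
For illustration, the int variant would need a copying shim roughly
like this (just a sketch; the __u32-taking helper at the end is
hypothetical):

    int bpf_prog_test_run_array(int *prog_fds, __u32 prog_fds_cnt,
                                struct bpf_test_run_opts *opts)
    {
        __u32 fds[64], i; /* 64 == max sk_lookup progs per netns */

        if (prog_fds_cnt > ARRAY_SIZE(fds))
            return -E2BIG;

        /* sizeof(int) may differ from sizeof(__u32), so copy
         * element by element instead of passing the array through.
         */
        for (i = 0; i < prog_fds_cnt; i++)
            fds[i] = prog_fds[i];

        /* hypothetical __u32-taking helper */
        return do_test_run_array(fds, prog_fds_cnt, opts);
    }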

Lorenz Bauer (8):
  bpf: consolidate shared test timing code
  bpf: add for_each_bpf_prog helper
  bpf: allow multiple programs in BPF_PROG_TEST_RUN
  bpf: add PROG_TEST_RUN support for sk_lookup programs
  tools: libbpf: allow testing program types with multi-prog semantics
  selftests: bpf: convert sk_lookup multi prog tests to PROG_TEST_RUN
  selftests: bpf: convert sk_lookup ctx access tests to PROG_TEST_RUN
  selftests: bpf: check that PROG_TEST_RUN repeats as requested

 include/linux/bpf-netns.h                     |   2 +
 include/linux/bpf.h                           |  24 +-
 include/linux/filter.h                        |   4 +-
 include/uapi/linux/bpf.h                      |  11 +-
 kernel/bpf/net_namespace.c                    |   2 +-
 kernel/bpf/syscall.c                          |  73 +++++-
 net/bpf/test_run.c                            | 230 +++++++++++++-----
 net/core/filter.c                             |   1 +
 tools/include/uapi/linux/bpf.h                |  11 +-
 tools/lib/bpf/bpf.c                           |  16 +-
 tools/lib/bpf/bpf.h                           |   3 +
 tools/lib/bpf/libbpf.map                      |   1 +
 .../selftests/bpf/prog_tests/prog_run_xattr.c |  51 +++-
 .../selftests/bpf/prog_tests/sk_lookup.c      | 172 +++++++++----
 .../selftests/bpf/progs/test_sk_lookup.c      |  62 +++--
 15 files changed, 499 insertions(+), 164 deletions(-)

-- 
2.27.0



* [PATCH bpf-next 1/8] bpf: consolidate shared test timing code
From: Lorenz Bauer @ 2021-02-16 10:57 UTC
  To: ast, daniel, andrii, jakub; +Cc: kernel-team, bpf, netdev, Lorenz Bauer

Share the timing / signal interruption logic between the different
implementations of PROG_TEST_RUN. There is one change in behaviour:
we now check the loop exit condition before checking for pending
signals. This resolves an edge case where a signal arrives during
the last iteration: instead of aborting with EINTR we return the
successful result to user space.
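
The shared logic boils down to a loop like this (simplified sketch
of the pattern from the diff below):

    struct test_timer t = { NO_MIGRATE };
    u32 duration = 0;
    int ret;

    while (t_check(&t, repeat, &ret, &duration)) {
        /* One test iteration. t_check enters/leaves the RCU and
         * no-migrate section, accumulates the timing, aborts on
         * pending signals and calls cond_resched() if needed.
         */
        *retval = BPF_PROG_RUN(prog, ctx);
    }
    /* ret is now 0 (with duration filled in) or -EINTR */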

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 net/bpf/test_run.c | 137 +++++++++++++++++++++++++--------------------
 1 file changed, 76 insertions(+), 61 deletions(-)

diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 58bcb8c849d5..33bd2f67e259 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -16,14 +16,82 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/bpf_test_run.h>
 
+struct test_timer {
+	enum { NO_PREEMPT, NO_MIGRATE } mode;
+	u32 i;
+	u64 time_start, time_spent;
+};
+
+static inline void t_enter(struct test_timer *t)
+{
+	rcu_read_lock();
+	if (t->mode == NO_PREEMPT)
+		preempt_disable();
+	else
+		migrate_disable();
+
+	t->time_start = ktime_get_ns();
+}
+
+static inline void t_leave(struct test_timer *t)
+{
+	t->time_spent += ktime_get_ns() - t->time_start;
+	t->time_start = 0;
+
+	if (t->mode == NO_PREEMPT)
+		preempt_enable();
+	else
+		migrate_enable();
+	rcu_read_unlock();
+}
+
+static inline bool t_check(struct test_timer *t, u32 repeat, int *err, u32 *duration)
+{
+	if (!t->time_start) {
+		/* Enter protected section before first iteration. */
+		t_enter(t);
+		return true;
+	}
+
+	t->i++;
+	if (t->i >= repeat) {
+		/* Leave the protected section after the last iteration. */
+		t_leave(t);
+		do_div(t->time_spent, t->i);
+		*duration = t->time_spent > U32_MAX ? U32_MAX : (u32)t->time_spent;
+		*err = 0;
+		goto reset;
+	}
+
+	if (signal_pending(current)) {
+		/* During iteration: we've been cancelled, abort. */
+		t_leave(t);
+		*err = -EINTR;
+		goto reset;
+	}
+
+	if (need_resched()) {
+		/* During iteration: we need to reschedule between runs. */
+		t_leave(t);
+		cond_resched();
+		t_enter(t);
+	}
+
+	/* Do another round. */
+	return true;
+
+reset:
+	t->time_spent = t->i = 0;
+	return false;
+}
+
 static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat,
 			u32 *retval, u32 *time, bool xdp)
 {
 	struct bpf_cgroup_storage *storage[MAX_BPF_CGROUP_STORAGE_TYPE] = { NULL };
+	struct test_timer t = { NO_MIGRATE };
 	enum bpf_cgroup_storage_type stype;
-	u64 time_start, time_spent = 0;
-	int ret = 0;
-	u32 i;
+	int ret;
 
 	for_each_cgroup_storage_type(stype) {
 		storage[stype] = bpf_cgroup_storage_alloc(prog, stype);
@@ -38,40 +106,14 @@ static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat,
 	if (!repeat)
 		repeat = 1;
 
-	rcu_read_lock();
-	migrate_disable();
-	time_start = ktime_get_ns();
-	for (i = 0; i < repeat; i++) {
+	while (t_check(&t, repeat, &ret, time)) {
 		bpf_cgroup_storage_set(storage);
 
 		if (xdp)
 			*retval = bpf_prog_run_xdp(prog, ctx);
 		else
 			*retval = BPF_PROG_RUN(prog, ctx);
-
-		if (signal_pending(current)) {
-			ret = -EINTR;
-			break;
-		}
-
-		if (need_resched()) {
-			time_spent += ktime_get_ns() - time_start;
-			migrate_enable();
-			rcu_read_unlock();
-
-			cond_resched();
-
-			rcu_read_lock();
-			migrate_disable();
-			time_start = ktime_get_ns();
-		}
 	}
-	time_spent += ktime_get_ns() - time_start;
-	migrate_enable();
-	rcu_read_unlock();
-
-	do_div(time_spent, repeat);
-	*time = time_spent > U32_MAX ? U32_MAX : (u32)time_spent;
 
 	for_each_cgroup_storage_type(stype)
 		bpf_cgroup_storage_free(storage[stype]);
@@ -674,18 +716,17 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
 				     const union bpf_attr *kattr,
 				     union bpf_attr __user *uattr)
 {
+	struct test_timer t = { NO_PREEMPT };
 	u32 size = kattr->test.data_size_in;
 	struct bpf_flow_dissector ctx = {};
 	u32 repeat = kattr->test.repeat;
 	struct bpf_flow_keys *user_ctx;
 	struct bpf_flow_keys flow_keys;
-	u64 time_start, time_spent = 0;
 	const struct ethhdr *eth;
 	unsigned int flags = 0;
 	u32 retval, duration;
 	void *data;
 	int ret;
-	u32 i;
 
 	if (prog->type != BPF_PROG_TYPE_FLOW_DISSECTOR)
 		return -EINVAL;
@@ -721,39 +762,13 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
 	ctx.data = data;
 	ctx.data_end = (__u8 *)data + size;
 
-	rcu_read_lock();
-	preempt_disable();
-	time_start = ktime_get_ns();
-	for (i = 0; i < repeat; i++) {
+	while (t_check(&t, repeat, &ret, &duration)) {
 		retval = bpf_flow_dissect(prog, &ctx, eth->h_proto, ETH_HLEN,
 					  size, flags);
-
-		if (signal_pending(current)) {
-			preempt_enable();
-			rcu_read_unlock();
-
-			ret = -EINTR;
-			goto out;
-		}
-
-		if (need_resched()) {
-			time_spent += ktime_get_ns() - time_start;
-			preempt_enable();
-			rcu_read_unlock();
-
-			cond_resched();
-
-			rcu_read_lock();
-			preempt_disable();
-			time_start = ktime_get_ns();
-		}
 	}
-	time_spent += ktime_get_ns() - time_start;
-	preempt_enable();
-	rcu_read_unlock();
 
-	do_div(time_spent, repeat);
-	duration = time_spent > U32_MAX ? U32_MAX : (u32)time_spent;
+	if (ret < 0)
+		goto out;
 
 	ret = bpf_test_finish(kattr, uattr, &flow_keys, sizeof(flow_keys),
 			      retval, duration);
-- 
2.27.0



* [PATCH bpf-next 2/8] bpf: add for_each_bpf_prog helper
From: Lorenz Bauer @ 2021-02-16 10:57 UTC
  To: ast, daniel, andrii, jakub; +Cc: kernel-team, bpf, netdev, Lorenz Bauer

Add a helper to iterate bpf_prog_arrays, which are a hybrid between
an array and a linked list. Hide the iteration behind a for-each macro.
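
Users of the macro all follow the same pattern (sketch, as in the
diff below):

    struct bpf_prog_array_item *item;
    struct bpf_prog *prog;

    for_each_bpf_prog(array, item, prog) {
        /* prog is fetched with READ_ONCE(item->prog); iteration
         * stops at the NULL sentinel that terminates the array.
         */
        ret &= func(prog, ctx);
    }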

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 include/linux/bpf.h    | 11 +++++------
 include/linux/filter.h |  4 +---
 2 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index cccaef1088ea..875f6bc4bf1d 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1070,6 +1070,9 @@ int bpf_prog_array_copy(struct bpf_prog_array *old_array,
 /* BPF program asks to set CN on the packet. */
 #define BPF_RET_SET_CN						(1 << 0)
 
+#define for_each_bpf_prog(_array, _item, _prog) \
+	for ((_item) = &(_array)->items[0]; ((_prog) = READ_ONCE((_item)->prog)); (_item)++)
+
 #define BPF_PROG_RUN_ARRAY_FLAGS(array, ctx, func, ret_flags)		\
 	({								\
 		struct bpf_prog_array_item *_item;			\
@@ -1080,13 +1083,11 @@ int bpf_prog_array_copy(struct bpf_prog_array *old_array,
 		migrate_disable();					\
 		rcu_read_lock();					\
 		_array = rcu_dereference(array);			\
-		_item = &_array->items[0];				\
-		while ((_prog = READ_ONCE(_item->prog))) {		\
+		for_each_bpf_prog(_array, _item, _prog) {		\
 			bpf_cgroup_storage_set(_item->cgroup_storage);	\
 			func_ret = func(_prog, ctx);			\
 			_ret &= (func_ret & 1);				\
 			*(ret_flags) |= (func_ret >> 1);			\
-			_item++;					\
 		}							\
 		rcu_read_unlock();					\
 		migrate_enable();					\
@@ -1104,11 +1105,9 @@ int bpf_prog_array_copy(struct bpf_prog_array *old_array,
 		_array = rcu_dereference(array);	\
 		if (unlikely(check_non_null && !_array))\
 			goto _out;			\
-		_item = &_array->items[0];		\
-		while ((_prog = READ_ONCE(_item->prog))) {		\
+		for_each_bpf_prog(_array, _item, _prog) {		\
 			bpf_cgroup_storage_set(_item->cgroup_storage);	\
 			_ret &= func(_prog, ctx);	\
-			_item++;			\
 		}					\
 _out:							\
 		rcu_read_unlock();			\
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 3b00fc906ccd..9ed20ff29d9a 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -1376,8 +1376,7 @@ extern struct static_key_false bpf_sk_lookup_enabled;
 		u32 _ret;						\
 									\
 		migrate_disable();					\
-		_item = &(array)->items[0];				\
-		while ((_prog = READ_ONCE(_item->prog))) {		\
+		for_each_bpf_prog(array, _item, _prog) {		\
 			/* restore most recent selection */		\
 			_ctx->selected_sk = _selected_sk;		\
 			_ctx->no_reuseport = _no_reuseport;		\
@@ -1390,7 +1389,6 @@ extern struct static_key_false bpf_sk_lookup_enabled;
 			} else if (_ret == SK_DROP && _all_pass) {	\
 				_all_pass = false;			\
 			}						\
-			_item++;					\
 		}							\
 		_ctx->selected_sk = _selected_sk;			\
 		_ctx->no_reuseport = _no_reuseport;			\
-- 
2.27.0



* [PATCH bpf-next 3/8] bpf: allow multiple programs in BPF_PROG_TEST_RUN
From: Lorenz Bauer @ 2021-02-16 10:57 UTC
  To: ast, daniel, andrii, jakub; +Cc: kernel-team, bpf, netdev, Lorenz Bauer

The sk_lookup hook allows installing multiple BPF programs
simultaneously and defines how their verdicts combine. We
therefore need to be able to test multiple programs with one
PROG_TEST_RUN call. Extend the UAPI with a prog_fds array to
enable this. Passing an array with a single fd falls back to
the current behaviour. Program types that allow multiple
programs have to provide a new test_run_array callback.
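
From user space the extension is used by pointing prog_fds at an
array of fds instead of filling in prog_fd (sketch; ptr_to_u64 and
sys_bpf are libbpf-internal helpers):

    union bpf_attr attr = {};
    __u32 fds[] = { fd1, fd2 };

    /* prog_fds and prog_fd are mutually exclusive */
    attr.test.prog_fds = ptr_to_u64(fds);
    attr.test.prog_fds_cnt = ARRAY_SIZE(fds);

    err = sys_bpf(BPF_PROG_TEST_RUN, &attr, sizeof(attr));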

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 include/linux/bpf-netns.h      |  2 +
 include/linux/bpf.h            |  3 ++
 include/uapi/linux/bpf.h       |  6 ++-
 kernel/bpf/net_namespace.c     |  2 +-
 kernel/bpf/syscall.c           | 73 ++++++++++++++++++++++++++++++----
 tools/include/uapi/linux/bpf.h |  6 ++-
 6 files changed, 81 insertions(+), 11 deletions(-)

diff --git a/include/linux/bpf-netns.h b/include/linux/bpf-netns.h
index 722f799c1a2e..f34800cd7017 100644
--- a/include/linux/bpf-netns.h
+++ b/include/linux/bpf-netns.h
@@ -5,6 +5,8 @@
 #include <linux/mutex.h>
 #include <uapi/linux/bpf.h>
 
+#define BPF_SK_LOOKUP_MAX_PROGS	64
+
 enum netns_bpf_attach_type {
 	NETNS_BPF_INVALID = -1,
 	NETNS_BPF_FLOW_DISSECTOR = 0,
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 875f6bc4bf1d..67c21c8ba7cc 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -26,6 +26,7 @@ struct bpf_verifier_env;
 struct bpf_verifier_log;
 struct perf_event;
 struct bpf_prog;
+struct bpf_prog_array;
 struct bpf_prog_aux;
 struct bpf_map;
 struct sock;
@@ -437,6 +438,8 @@ bpf_ctx_record_field_size(struct bpf_insn_access_aux *aux, u32 size)
 struct bpf_prog_ops {
 	int (*test_run)(struct bpf_prog *prog, const union bpf_attr *kattr,
 			union bpf_attr __user *uattr);
+	int (*test_run_array)(struct bpf_prog_array *progs, const union bpf_attr *kattr,
+			      union bpf_attr __user *uattr);
 };
 
 struct bpf_verifier_ops {
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 4c24daa43bac..b37a0f39b95f 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -589,7 +589,9 @@ union bpf_attr {
 	};
 
 	struct { /* anonymous struct used by BPF_PROG_TEST_RUN command */
-		__u32		prog_fd;
+		__u32		prog_fd;	/* input: program to test. mutually exclusive with
+						 *   prog_fds.
+						 */
 		__u32		retval;
 		__u32		data_size_in;	/* input: len of data_in */
 		__u32		data_size_out;	/* input/output: len of data_out
@@ -609,6 +611,8 @@ union bpf_attr {
 		__aligned_u64	ctx_out;
 		__u32		flags;
 		__u32		cpu;
+		__aligned_u64	prog_fds;
+		__u32		prog_fds_cnt;
 	} test;
 
 	struct { /* anonymous struct used by BPF_*_GET_*_ID */
diff --git a/kernel/bpf/net_namespace.c b/kernel/bpf/net_namespace.c
index 542f275bf252..61e4769f0110 100644
--- a/kernel/bpf/net_namespace.c
+++ b/kernel/bpf/net_namespace.c
@@ -411,7 +411,7 @@ static int netns_bpf_max_progs(enum netns_bpf_attach_type type)
 	case NETNS_BPF_FLOW_DISSECTOR:
 		return 1;
 	case NETNS_BPF_SK_LOOKUP:
-		return 64;
+		return BPF_SK_LOOKUP_MAX_PROGS;
 	default:
 		return 0;
 	}
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index c859bc46d06c..f8c7b9d86b3f 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3100,13 +3100,17 @@ static int bpf_prog_query(const union bpf_attr *attr,
 	}
 }
 
-#define BPF_PROG_TEST_RUN_LAST_FIELD test.cpu
+#define BPF_PROG_TEST_RUN_LAST_FIELD test.prog_fds_cnt
 
 static int bpf_prog_test_run(const union bpf_attr *attr,
 			     union bpf_attr __user *uattr)
 {
+	enum bpf_prog_type prog_type = BPF_PROG_TYPE_UNSPEC;
+	u32 prog_fds[BPF_SK_LOOKUP_MAX_PROGS];
+	struct bpf_prog_array *progs = NULL;
 	struct bpf_prog *prog;
-	int ret = -ENOTSUPP;
+	u32 prog_cnt;
+	int i, ret;
 
 	if (CHECK_ATTR(BPF_PROG_TEST_RUN))
 		return -EINVAL;
@@ -3119,14 +3123,67 @@ static int bpf_prog_test_run(const union bpf_attr *attr,
 	    (!attr->test.ctx_size_out && attr->test.ctx_out))
 		return -EINVAL;
 
-	prog = bpf_prog_get(attr->test.prog_fd);
-	if (IS_ERR(prog))
-		return PTR_ERR(prog);
+	if ((attr->test.prog_fds && !attr->test.prog_fds_cnt) ||
+	    (!attr->test.prog_fds && attr->test.prog_fds_cnt))
+		return -EINVAL;
 
-	if (prog->aux->ops->test_run)
-		ret = prog->aux->ops->test_run(prog, attr, uattr);
+	if (attr->test.prog_fds) {
+		u32 __user *uprog_fds = u64_to_user_ptr(attr->test.prog_fds);
 
-	bpf_prog_put(prog);
+		if (attr->test.prog_fds_cnt >= ARRAY_SIZE(prog_fds))
+			return -EINVAL;
+
+		if (attr->test.prog_fd)
+			return -EINVAL;
+
+		prog_cnt = attr->test.prog_fds_cnt;
+		if (copy_from_user(prog_fds, uprog_fds, prog_cnt * sizeof(prog_fds[0])))
+			return -EFAULT;
+	} else {
+		prog_cnt = 1;
+		prog_fds[0] = attr->test.prog_fd;
+	}
+
+	progs = bpf_prog_array_alloc(prog_cnt, GFP_KERNEL);
+	if (!progs)
+		return -ENOMEM;
+
+	for (i = 0; i < prog_cnt; i++) {
+		prog = bpf_prog_get(prog_fds[i]);
+		if (IS_ERR(prog)) {
+			ret = PTR_ERR(prog);
+			goto out;
+		}
+
+		progs->items[i].prog = prog;
+
+		if (prog_type && prog->type != prog_type) {
+			ret = -EINVAL;
+			goto out;
+		}
+
+		prog_type = prog->type;
+	}
+
+	prog = progs->items[0].prog;
+	if (prog->aux->ops->test_run_array) {
+		ret = prog->aux->ops->test_run_array(progs, attr, uattr);
+	} else if (prog->aux->ops->test_run) {
+		if (prog_cnt > 1) {
+			ret = -EOPNOTSUPP;
+			goto out;
+		}
+
+		ret = prog->aux->ops->test_run(progs->items[0].prog, attr, uattr);
+	} else {
+		ret = -ENOTSUPP;
+	}
+
+out:
+	for (i = 0; i < prog_cnt; i++)
+		if (progs->items[i].prog)
+			bpf_prog_put(progs->items[i].prog);
+	bpf_prog_array_free(progs);
 	return ret;
 }
 
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 4c24daa43bac..b37a0f39b95f 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -589,7 +589,9 @@ union bpf_attr {
 	};
 
 	struct { /* anonymous struct used by BPF_PROG_TEST_RUN command */
-		__u32		prog_fd;
+		__u32		prog_fd;	/* input: program to test. mutually exclusive with
+						 *   prog_fds.
+						 */
 		__u32		retval;
 		__u32		data_size_in;	/* input: len of data_in */
 		__u32		data_size_out;	/* input/output: len of data_out
@@ -609,6 +611,8 @@ union bpf_attr {
 		__aligned_u64	ctx_out;
 		__u32		flags;
 		__u32		cpu;
+		__aligned_u64	prog_fds;
+		__u32		prog_fds_cnt;
 	} test;
 
 	struct { /* anonymous struct used by BPF_*_GET_*_ID */
-- 
2.27.0



* [PATCH bpf-next 4/8] bpf: add PROG_TEST_RUN support for sk_lookup programs
From: Lorenz Bauer @ 2021-02-16 10:57 UTC
  To: ast, daniel, andrii, jakub; +Cc: kernel-team, bpf, netdev, Lorenz Bauer

Allow passing one or more sk_lookup programs to PROG_TEST_RUN.
User space provides the full bpf_sk_lookup struct as context.
Since the context includes a socket pointer that can't be exposed
to user space, we define that PROG_TEST_RUN returns the cookie of
the selected socket, or zero if none was selected, in place of the
socket pointer.

We don't support testing programs that select a reuseport socket,
since this would mean running another (unrelated) BPF program
from the sk_lookup test handler.
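
From user space a test run is driven entirely through ctx_in /
ctx_out (sketch with example addresses and ports, mirroring the
selftests later in the series):

    struct bpf_sk_lookup ctx = {};

    ctx.family = AF_INET;
    ctx.protocol = IPPROTO_TCP;
    /* local_* is the packet destination, remote_* the source */
    inet_pton(AF_INET, "127.0.0.1", &ctx.local_ip4);
    ctx.local_port = 7007;         /* host byte order */
    inet_pton(AF_INET, "127.0.0.2", &ctx.remote_ip4);
    ctx.remote_port = htons(8008); /* network byte order */

    /* After the run, ctx.cookie holds the cookie of the selected
     * socket, or zero if no socket was selected.
     */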

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 include/linux/bpf.h            | 10 ++++
 include/uapi/linux/bpf.h       |  5 +-
 net/bpf/test_run.c             | 93 ++++++++++++++++++++++++++++++++++
 net/core/filter.c              |  1 +
 tools/include/uapi/linux/bpf.h |  5 +-
 5 files changed, 112 insertions(+), 2 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 67c21c8ba7cc..d251db1354ec 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1472,6 +1472,9 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
 int bpf_prog_test_run_raw_tp(struct bpf_prog *prog,
 			     const union bpf_attr *kattr,
 			     union bpf_attr __user *uattr);
+int bpf_prog_test_run_sk_lookup(struct bpf_prog_array *progs,
+				const union bpf_attr *kattr,
+				union bpf_attr __user *uattr);
 bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		    const struct bpf_prog *prog,
 		    struct bpf_insn_access_aux *info);
@@ -1672,6 +1675,13 @@ static inline int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
 	return -ENOTSUPP;
 }
 
+static inline int bpf_prog_test_run_sk_lookup(struct bpf_prog_array *progs,
+					      const union bpf_attr *kattr,
+					      union bpf_attr __user *uattr)
+{
+	return -ENOTSUPP;
+}
+
 static inline void bpf_map_put(struct bpf_map *map)
 {
 }
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index b37a0f39b95f..078ad0b8d1a7 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -5210,7 +5210,10 @@ struct bpf_pidns_info {
 
 /* User accessible data for SK_LOOKUP programs. Add new fields at the end. */
 struct bpf_sk_lookup {
-	__bpf_md_ptr(struct bpf_sock *, sk); /* Selected socket */
+	union {
+		__bpf_md_ptr(struct bpf_sock *, sk); /* Selected socket */
+		__u64 cookie; /* Non-zero if socket was selected in PROG_TEST_RUN */
+	};
 
 	__u32 family;		/* Protocol family (AF_INET, AF_INET6) */
 	__u32 protocol;		/* IP protocol (IPPROTO_TCP, IPPROTO_UDP) */
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 33bd2f67e259..932c8e036b0a 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -10,8 +10,10 @@
 #include <net/bpf_sk_storage.h>
 #include <net/sock.h>
 #include <net/tcp.h>
+#include <net/net_namespace.h>
 #include <linux/error-injection.h>
 #include <linux/smp.h>
+#include <linux/sock_diag.h>
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/bpf_test_run.h>
@@ -781,3 +783,94 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
 	kfree(data);
 	return ret;
 }
+
+int bpf_prog_test_run_sk_lookup(struct bpf_prog_array *progs, const union bpf_attr *kattr,
+				union bpf_attr __user *uattr)
+{
+	struct test_timer t = { NO_PREEMPT };
+	struct bpf_sk_lookup_kern ctx = {};
+	u32 repeat = kattr->test.repeat;
+	struct bpf_sk_lookup *user_ctx;
+	u32 retval, duration;
+	int ret = -EINVAL;
+
+	if (bpf_prog_array_length(progs) >= BPF_SK_LOOKUP_MAX_PROGS)
+		return -E2BIG;
+
+	if (kattr->test.flags || kattr->test.cpu)
+		return -EINVAL;
+
+	if (kattr->test.data_in || kattr->test.data_size_in || kattr->test.data_out ||
+	    kattr->test.data_size_out)
+		return -EINVAL;
+
+	if (!repeat)
+		repeat = 1;
+
+	user_ctx = bpf_ctx_init(kattr, sizeof(*user_ctx));
+	if (IS_ERR(user_ctx))
+		return PTR_ERR(user_ctx);
+
+	if (!user_ctx)
+		return -EINVAL;
+
+	if (user_ctx->sk)
+		goto out;
+
+	if (!range_is_zero(user_ctx, offsetofend(typeof(*user_ctx), local_port), sizeof(*user_ctx)))
+		goto out;
+
+	if (user_ctx->local_port > U16_MAX || user_ctx->remote_port > U16_MAX) {
+		ret = -ERANGE;
+		goto out;
+	}
+
+	ctx.family = user_ctx->family;
+	ctx.protocol = user_ctx->protocol;
+	ctx.dport = user_ctx->local_port;
+	ctx.sport = user_ctx->remote_port;
+
+	switch (ctx.family) {
+	case AF_INET:
+		ctx.v4.daddr = user_ctx->local_ip4;
+		ctx.v4.saddr = user_ctx->remote_ip4;
+		break;
+
+#if IS_ENABLED(CONFIG_IPV6)
+	case AF_INET6:
+		ctx.v6.daddr = (struct in6_addr *)user_ctx->local_ip6;
+		ctx.v6.saddr = (struct in6_addr *)user_ctx->remote_ip6;
+		break;
+#endif
+
+	default:
+		ret = -EAFNOSUPPORT;
+		goto out;
+	}
+
+	while (t_check(&t, repeat, &ret, &duration)) {
+		ctx.selected_sk = NULL;
+		retval = BPF_PROG_SK_LOOKUP_RUN_ARRAY(progs, ctx, BPF_PROG_RUN);
+	}
+
+	if (ret < 0)
+		goto out;
+
+	user_ctx->cookie = 0;
+	if (ctx.selected_sk) {
+		if (ctx.selected_sk->sk_reuseport && !ctx.no_reuseport) {
+			ret = -EOPNOTSUPP;
+			goto out;
+		}
+
+		user_ctx->cookie = sock_gen_cookie(ctx.selected_sk);
+	}
+
+	ret = bpf_test_finish(kattr, uattr, NULL, 0, retval, duration);
+	if (!ret)
+		ret = bpf_ctx_finish(kattr, uattr, user_ctx, sizeof(*user_ctx));
+
+out:
+	kfree(user_ctx);
+	return ret;
+}
diff --git a/net/core/filter.c b/net/core/filter.c
index 7059cf604d94..978cea941268 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -10451,6 +10451,7 @@ static u32 sk_lookup_convert_ctx_access(enum bpf_access_type type,
 }
 
 const struct bpf_prog_ops sk_lookup_prog_ops = {
+	.test_run_array = bpf_prog_test_run_sk_lookup,
 };
 
 const struct bpf_verifier_ops sk_lookup_verifier_ops = {
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index b37a0f39b95f..078ad0b8d1a7 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -5210,7 +5210,10 @@ struct bpf_pidns_info {
 
 /* User accessible data for SK_LOOKUP programs. Add new fields at the end. */
 struct bpf_sk_lookup {
-	__bpf_md_ptr(struct bpf_sock *, sk); /* Selected socket */
+	union {
+		__bpf_md_ptr(struct bpf_sock *, sk); /* Selected socket */
+		__u64 cookie; /* Non-zero if socket was selected in PROG_TEST_RUN */
+	};
 
 	__u32 family;		/* Protocol family (AF_INET, AF_INET6) */
 	__u32 protocol;		/* IP protocol (IPPROTO_TCP, IPPROTO_UDP) */
-- 
2.27.0



* [PATCH bpf-next 5/8] tools: libbpf: allow testing program types with multi-prog semantics
From: Lorenz Bauer @ 2021-02-16 10:57 UTC
  To: ast, daniel, andrii, jakub; +Cc: kernel-team, bpf, netdev, Lorenz Bauer

Add a wrapper bpf_prog_test_run_array that allows testing
multiple programs for supported program types.
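
Usage mirrors bpf_prog_test_run_opts (sketch, as in the selftest
conversion later in the series):

    DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
        .ctx_in = &ctx,
        .ctx_size_in = sizeof(ctx),
        .ctx_out = &ctx,
        .ctx_size_out = sizeof(ctx),
    );
    __u32 prog_fds[2];

    prog_fds[0] = bpf_program__fd(prog1);
    prog_fds[1] = bpf_program__fd(prog2);

    err = bpf_prog_test_run_array(prog_fds, ARRAY_SIZE(prog_fds), &opts);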

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 tools/lib/bpf/bpf.c      | 16 +++++++++++++++-
 tools/lib/bpf/bpf.h      |  3 +++
 tools/lib/bpf/libbpf.map |  1 +
 3 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index bba48ff4c5c0..95ec1a3f0931 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -765,6 +765,14 @@ int bpf_prog_test_run_xattr(struct bpf_prog_test_run_attr *test_attr)
 }
 
 int bpf_prog_test_run_opts(int prog_fd, struct bpf_test_run_opts *opts)
+{
+	__u32 fd = prog_fd;
+
+	return bpf_prog_test_run_array(&fd, 1, opts);
+}
+
+int bpf_prog_test_run_array(__u32 *prog_fds, __u32 prog_fds_cnt,
+			       struct bpf_test_run_opts *opts)
 {
 	union bpf_attr attr;
 	int ret;
@@ -773,7 +781,6 @@ int bpf_prog_test_run_opts(int prog_fd, struct bpf_test_run_opts *opts)
 		return -EINVAL;
 
 	memset(&attr, 0, sizeof(attr));
-	attr.test.prog_fd = prog_fd;
 	attr.test.cpu = OPTS_GET(opts, cpu, 0);
 	attr.test.flags = OPTS_GET(opts, flags, 0);
 	attr.test.repeat = OPTS_GET(opts, repeat, 0);
@@ -787,6 +794,13 @@ int bpf_prog_test_run_opts(int prog_fd, struct bpf_test_run_opts *opts)
 	attr.test.data_in = ptr_to_u64(OPTS_GET(opts, data_in, NULL));
 	attr.test.data_out = ptr_to_u64(OPTS_GET(opts, data_out, NULL));
 
+	if (prog_fds_cnt == 1) {
+		attr.test.prog_fd = prog_fds[0];
+	} else {
+		attr.test.prog_fds = ptr_to_u64(prog_fds);
+		attr.test.prog_fds_cnt = prog_fds_cnt;
+	}
+
 	ret = sys_bpf(BPF_PROG_TEST_RUN, &attr, sizeof(attr));
 	OPTS_SET(opts, data_size_out, attr.test.data_size_out);
 	OPTS_SET(opts, ctx_size_out, attr.test.ctx_size_out);
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 875dde20d56e..47a05fdc9867 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -278,6 +278,9 @@ struct bpf_test_run_opts {
 LIBBPF_API int bpf_prog_test_run_opts(int prog_fd,
 				      struct bpf_test_run_opts *opts);
 
+LIBBPF_API int bpf_prog_test_run_array(__u32 *prog_fds, __u32 prog_fds_cnt,
+					  struct bpf_test_run_opts *opts);
+
 #ifdef __cplusplus
 } /* extern "C" */
 #endif
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 1c0fd2dd233a..bc3a0b2d645f 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -340,6 +340,7 @@ LIBBPF_0.2.0 {
 
 LIBBPF_0.3.0 {
 	global:
+		bpf_prog_test_run_array;
 		btf__base_btf;
 		btf__parse_elf_split;
 		btf__parse_raw_split;
-- 
2.27.0



* [PATCH bpf-next 6/8] selftests: bpf: convert sk_lookup multi prog tests to PROG_TEST_RUN
From: Lorenz Bauer @ 2021-02-16 10:57 UTC
  To: ast, daniel, andrii, jakub; +Cc: kernel-team, bpf, netdev, Lorenz Bauer

Make the tests for multi-program sk_lookup semantics use
bpf_prog_test_run_array. This simplifies the tests a bit and adds
coverage for the new libbpf function.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 .../selftests/bpf/prog_tests/sk_lookup.c      | 100 ++++++++++++------
 1 file changed, 65 insertions(+), 35 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/sk_lookup.c b/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
index 9ff0412e1fd3..a8e4a2044170 100644
--- a/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
+++ b/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
@@ -267,6 +267,17 @@ static int recv_byte(int fd)
 	return 0;
 }
 
+static __u64 socket_cookie(int fd)
+{
+	__u64 cookie;
+	socklen_t cookie_len = sizeof(cookie);
+
+	if (CHECK(getsockopt(fd, SOL_SOCKET, SO_COOKIE, &cookie, &cookie_len) < 0,
+		  "getsockopt(SO_COOKIE)", "%s\n", strerror(errno)))
+		return 0;
+	return cookie;
+}
+
 static int tcp_recv_send(int server_fd)
 {
 	char buf[1];
@@ -1128,17 +1139,27 @@ struct test_multi_prog {
 	struct bpf_program *prog2;
 	struct bpf_map *redir_map;
 	struct bpf_map *run_map;
-	int expect_errno;
+	enum sk_action result;
 	struct inet_addr listen_at;
+	bool redirect;
 };
 
 static void run_multi_prog_lookup(const struct test_multi_prog *t)
 {
-	struct sockaddr_storage dst = {};
-	int map_fd, server_fd, client_fd;
-	struct bpf_link *link1, *link2;
+	int map_fd, server_fd;
+	struct bpf_sk_lookup ctx = {};
 	int prog_idx, done, err;
+	__u32 prog_fds[2];
 
+	DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
+		.ctx_in = &ctx,
+		.ctx_size_in = sizeof(ctx),
+		.ctx_out = &ctx,
+		.ctx_size_out = sizeof(ctx),
+	);
+
+	prog_fds[0] = bpf_program__fd(t->prog1);
+	prog_fds[1] = bpf_program__fd(t->prog2);
 	map_fd = bpf_map__fd(t->run_map);
 
 	done = 0;
@@ -1151,33 +1172,37 @@ static void run_multi_prog_lookup(const struct test_multi_prog *t)
 	if (CHECK(err, "bpf_map_update_elem", "failed\n"))
 		return;
 
-	link1 = attach_lookup_prog(t->prog1);
-	if (!link1)
-		return;
-	link2 = attach_lookup_prog(t->prog2);
-	if (!link2)
-		goto out_unlink1;
-
 	server_fd = make_server(SOCK_STREAM, t->listen_at.ip,
 				t->listen_at.port, NULL);
 	if (server_fd < 0)
-		goto out_unlink2;
+		return;
 
 	err = update_lookup_map(t->redir_map, SERVER_A, server_fd);
 	if (err)
-		goto out_close_server;
-
-	client_fd = make_socket(SOCK_STREAM, EXT_IP4, EXT_PORT, &dst);
-	if (client_fd < 0)
-		goto out_close_server;
-
-	err = connect(client_fd, (void *)&dst, inetaddr_len(&dst));
-	if (CHECK(err && !t->expect_errno, "connect",
-		  "unexpected error %d\n", errno))
-		goto out_close_client;
-	if (CHECK(err && t->expect_errno && errno != t->expect_errno,
-		  "connect", "unexpected error %d\n", errno))
-		goto out_close_client;
+		goto out;
+
+	ctx.family = AF_INET;
+	ctx.protocol = IPPROTO_TCP;
+
+	err = bpf_prog_test_run_array(prog_fds, ARRAY_SIZE(prog_fds), &opts);
+	if (CHECK(err, "test_run_array", "failed with error %d\n", errno))
+		goto out;
+
+	if (CHECK(opts.retval != t->result, "test_run", "unexpected result %d\n", opts.retval))
+		goto out;
+
+	if (t->redirect) {
+		__u64 cookie = socket_cookie(server_fd);
+
+		if (!cookie)
+			goto out;
+
+		if (CHECK(ctx.cookie != cookie, "redirect",
+			  "selected sk:%llu instead of sk:%llu\n", ctx.cookie, cookie))
+			goto out;
+	} else if (CHECK(ctx.cookie, "redirect", "selected unexpected sk:%llu\n", ctx.cookie)) {
+		goto out;
+	}
 
 	done = 0;
 	prog_idx = PROG1;
@@ -1191,14 +1216,8 @@ static void run_multi_prog_lookup(const struct test_multi_prog *t)
 	CHECK(err, "bpf_map_lookup_elem", "failed\n");
 	CHECK(!done, "bpf_map_lookup_elem", "PROG2 !done\n");
 
-out_close_client:
-	close(client_fd);
-out_close_server:
+out:
 	close(server_fd);
-out_unlink2:
-	bpf_link__destroy(link2);
-out_unlink1:
-	bpf_link__destroy(link1);
 }
 
 static void test_multi_prog_lookup(struct test_sk_lookup *skel)
@@ -1209,57 +1228,68 @@ static void test_multi_prog_lookup(struct test_sk_lookup *skel)
 			.prog1		= skel->progs.multi_prog_pass1,
 			.prog2		= skel->progs.multi_prog_pass2,
 			.listen_at	= { EXT_IP4, EXT_PORT },
+			.result		= SK_PASS,
 		},
 		{
 			.desc		= "multi prog - drop, drop",
 			.prog1		= skel->progs.multi_prog_drop1,
 			.prog2		= skel->progs.multi_prog_drop2,
 			.listen_at	= { EXT_IP4, EXT_PORT },
-			.expect_errno	= ECONNREFUSED,
+			.result		= SK_DROP,
 		},
 		{
 			.desc		= "multi prog - pass, drop",
 			.prog1		= skel->progs.multi_prog_pass1,
 			.prog2		= skel->progs.multi_prog_drop2,
 			.listen_at	= { EXT_IP4, EXT_PORT },
-			.expect_errno	= ECONNREFUSED,
+			.result		= SK_DROP,
 		},
 		{
 			.desc		= "multi prog - drop, pass",
 			.prog1		= skel->progs.multi_prog_drop1,
 			.prog2		= skel->progs.multi_prog_pass2,
 			.listen_at	= { EXT_IP4, EXT_PORT },
-			.expect_errno	= ECONNREFUSED,
+			.result		= SK_DROP,
 		},
 		{
 			.desc		= "multi prog - pass, redir",
 			.prog1		= skel->progs.multi_prog_pass1,
 			.prog2		= skel->progs.multi_prog_redir2,
 			.listen_at	= { INT_IP4, INT_PORT },
+			.result		= SK_PASS,
+			.redirect	= true,
 		},
 		{
 			.desc		= "multi prog - redir, pass",
 			.prog1		= skel->progs.multi_prog_redir1,
 			.prog2		= skel->progs.multi_prog_pass2,
 			.listen_at	= { INT_IP4, INT_PORT },
+			.result		= SK_PASS,
+			.redirect	= true,
 		},
 		{
 			.desc		= "multi prog - drop, redir",
 			.prog1		= skel->progs.multi_prog_drop1,
 			.prog2		= skel->progs.multi_prog_redir2,
 			.listen_at	= { INT_IP4, INT_PORT },
+			.result		= SK_PASS,
+			.redirect	= true,
 		},
 		{
 			.desc		= "multi prog - redir, drop",
 			.prog1		= skel->progs.multi_prog_redir1,
 			.prog2		= skel->progs.multi_prog_drop2,
 			.listen_at	= { INT_IP4, INT_PORT },
+			.result		= SK_PASS,
+			.redirect	= true,
 		},
 		{
 			.desc		= "multi prog - redir, redir",
 			.prog1		= skel->progs.multi_prog_redir1,
 			.prog2		= skel->progs.multi_prog_redir2,
 			.listen_at	= { INT_IP4, INT_PORT },
+			.result		= SK_PASS,
+			.redirect	= true,
 		},
 	};
 	struct test_multi_prog *t;
-- 
2.27.0



* [PATCH bpf-next 7/8] selftests: bpf: convert sk_lookup ctx access tests to PROG_TEST_RUN
From: Lorenz Bauer @ 2021-02-16 10:57 UTC
  To: ast, daniel, andrii, jakub; +Cc: kernel-team, bpf, netdev, Lorenz Bauer

Convert the selftests for sk_lookup narrow context access to use
PROG_TEST_RUN instead of creating actual sockets. This ensures that
ctx is populated correctly when using PROG_TEST_RUN.

Assert concrete values since we now control remote_ip and remote_port.
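
For example, a narrow one-byte load from remote_port is now compared
against a known constant instead of just non-zero (pattern from the
diff below):

    /* SRC_PORT is stored in network byte order by the prog */
    if (LSB(ctx->remote_port, 0) != ((SRC_PORT >> 0) & 0xff) ||
        LSB(ctx->remote_port, 1) != ((SRC_PORT >> 8) & 0xff))
        return SK_DROP;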

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 .../selftests/bpf/prog_tests/sk_lookup.c      | 72 +++++++++++++++----
 .../selftests/bpf/progs/test_sk_lookup.c      | 62 ++++++++++------
 2 files changed, 98 insertions(+), 36 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/sk_lookup.c b/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
index a8e4a2044170..eb8906ace5a9 100644
--- a/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
+++ b/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
@@ -241,6 +241,37 @@ static int make_client(int sotype, const char *ip, int port)
 	return -1;
 }
 
+static int fill_sk_lookup_ctx(struct bpf_sk_lookup *ctx, const char *local_ip, __u16 local_port,
+			      const char *remote_ip, __u16 remote_port)
+{
+	void *local, *remote;
+	int err;
+
+	memset(ctx, 0, sizeof(*ctx));
+	ctx->local_port = local_port;
+	ctx->remote_port = htons(remote_port);
+
+	if (is_ipv6(local_ip)) {
+		ctx->family = AF_INET6;
+		local = &ctx->local_ip6[0];
+		remote = &ctx->remote_ip6[0];
+	} else {
+		ctx->family = AF_INET;
+		local = &ctx->local_ip4;
+		remote = &ctx->remote_ip4;
+	}
+
+	err = inet_pton(ctx->family, local_ip, local);
+	if (CHECK(err != 1, "inet_pton", "local_ip failed\n"))
+		return 1;
+
+	err = inet_pton(ctx->family, remote_ip, remote);
+	if (CHECK(err != 1, "inet_pton", "remote_ip failed\n"))
+		return 1;
+
+	return 0;
+}
+
 static int send_byte(int fd)
 {
 	ssize_t n;
@@ -1020,18 +1051,27 @@ static void test_drop_on_reuseport(struct test_sk_lookup *skel)
 
 static void run_sk_assign(struct test_sk_lookup *skel,
 			  struct bpf_program *lookup_prog,
-			  const char *listen_ip, const char *connect_ip)
+			  const char *remote_ip, const char *local_ip)
 {
-	int client_fd, peer_fd, server_fds[MAX_SERVERS] = { -1 };
-	struct bpf_link *lookup_link;
+	int server_fds[MAX_SERVERS] = { -1 };
+	struct bpf_sk_lookup ctx;
+	__u64 server_cookie;
 	int i, err;
 
-	lookup_link = attach_lookup_prog(lookup_prog);
-	if (!lookup_link)
+	DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
+		.ctx_in = &ctx,
+		.ctx_size_in = sizeof(ctx),
+		.ctx_out = &ctx,
+		.ctx_size_out = sizeof(ctx),
+	);
+
+	if (fill_sk_lookup_ctx(&ctx, local_ip, EXT_PORT, remote_ip, INT_PORT))
 		return;
 
+	ctx.protocol = IPPROTO_TCP;
+
 	for (i = 0; i < ARRAY_SIZE(server_fds); i++) {
-		server_fds[i] = make_server(SOCK_STREAM, listen_ip, 0, NULL);
+		server_fds[i] = make_server(SOCK_STREAM, local_ip, 0, NULL);
 		if (server_fds[i] < 0)
 			goto close_servers;
 
@@ -1041,23 +1081,25 @@ static void run_sk_assign(struct test_sk_lookup *skel,
 			goto close_servers;
 	}
 
-	client_fd = make_client(SOCK_STREAM, connect_ip, EXT_PORT);
-	if (client_fd < 0)
+	server_cookie = socket_cookie(server_fds[SERVER_B]);
+	if (!server_cookie)
+		return;
+
+	err = bpf_prog_test_run_opts(bpf_program__fd(lookup_prog), &opts);
+	if (CHECK(err, "test_run", "failed with error %d\n", errno))
+		goto close_servers;
+
+	if (CHECK(ctx.cookie == 0, "ctx.cookie", "no socket selected\n"))
 		goto close_servers;
 
-	peer_fd = accept(server_fds[SERVER_B], NULL, NULL);
-	if (CHECK(peer_fd < 0, "accept", "failed\n"))
-		goto close_client;
+	CHECK(ctx.cookie != server_cookie, "ctx.cookie",
+	      "selected sk %llu instead of %llu\n", ctx.cookie, server_cookie);
 
-	close(peer_fd);
-close_client:
-	close(client_fd);
 close_servers:
 	for (i = 0; i < ARRAY_SIZE(server_fds); i++) {
 		if (server_fds[i] != -1)
 			close(server_fds[i]);
 	}
-	bpf_link__destroy(lookup_link);
 }
 
 static void run_sk_assign_v4(struct test_sk_lookup *skel,
diff --git a/tools/testing/selftests/bpf/progs/test_sk_lookup.c b/tools/testing/selftests/bpf/progs/test_sk_lookup.c
index 1032b292af5b..ac6f7f205e25 100644
--- a/tools/testing/selftests/bpf/progs/test_sk_lookup.c
+++ b/tools/testing/selftests/bpf/progs/test_sk_lookup.c
@@ -64,6 +64,10 @@ static const int PROG_DONE = 1;
 static const __u32 KEY_SERVER_A = SERVER_A;
 static const __u32 KEY_SERVER_B = SERVER_B;
 
+static const __u16 SRC_PORT = bpf_htons(8008);
+static const __u32 SRC_IP4 = IP4(127, 0, 0, 2);
+static const __u32 SRC_IP6[] = IP6(0xfd000000, 0x0, 0x0, 0x00000002);
+
 static const __u16 DST_PORT = 7007; /* Host byte order */
 static const __u32 DST_IP4 = IP4(127, 0, 0, 1);
 static const __u32 DST_IP6[] = IP6(0xfd000000, 0x0, 0x0, 0x00000001);
@@ -398,11 +402,12 @@ int ctx_narrow_access(struct bpf_sk_lookup *ctx)
 	if (LSW(ctx->protocol, 0) != IPPROTO_TCP)
 		return SK_DROP;
 
-	/* Narrow loads from remote_port field. Expect non-0 value. */
-	if (LSB(ctx->remote_port, 0) == 0 && LSB(ctx->remote_port, 1) == 0 &&
-	    LSB(ctx->remote_port, 2) == 0 && LSB(ctx->remote_port, 3) == 0)
+	/* Narrow loads from remote_port field. Expect SRC_PORT. */
+	if (LSB(ctx->remote_port, 0) != ((SRC_PORT >> 0) & 0xff) ||
+	    LSB(ctx->remote_port, 1) != ((SRC_PORT >> 8) & 0xff) ||
+	    LSB(ctx->remote_port, 2) != 0 || LSB(ctx->remote_port, 3) != 0)
 		return SK_DROP;
-	if (LSW(ctx->remote_port, 0) == 0)
+	if (LSW(ctx->remote_port, 0) != SRC_PORT)
 		return SK_DROP;
 
 	/* Narrow loads from local_port field. Expect DST_PORT. */
@@ -415,11 +420,14 @@ int ctx_narrow_access(struct bpf_sk_lookup *ctx)
 
 	/* Narrow loads from IPv4 fields */
 	if (v4) {
-		/* Expect non-0.0.0.0 in remote_ip4 */
-		if (LSB(ctx->remote_ip4, 0) == 0 && LSB(ctx->remote_ip4, 1) == 0 &&
-		    LSB(ctx->remote_ip4, 2) == 0 && LSB(ctx->remote_ip4, 3) == 0)
+		/* Expect SRC_IP4 in remote_ip4 */
+		if (LSB(ctx->remote_ip4, 0) != ((SRC_IP4 >> 0) & 0xff) ||
+		    LSB(ctx->remote_ip4, 1) != ((SRC_IP4 >> 8) & 0xff) ||
+		    LSB(ctx->remote_ip4, 2) != ((SRC_IP4 >> 16) & 0xff) ||
+		    LSB(ctx->remote_ip4, 3) != ((SRC_IP4 >> 24) & 0xff))
 			return SK_DROP;
-		if (LSW(ctx->remote_ip4, 0) == 0 && LSW(ctx->remote_ip4, 1) == 0)
+		if (LSW(ctx->remote_ip4, 0) != ((SRC_IP4 >> 0) & 0xffff) ||
+		    LSW(ctx->remote_ip4, 1) != ((SRC_IP4 >> 16) & 0xffff))
 			return SK_DROP;
 
 		/* Expect DST_IP4 in local_ip4 */
@@ -448,20 +456,32 @@ int ctx_narrow_access(struct bpf_sk_lookup *ctx)
 
 	/* Narrow loads from IPv6 fields */
 	if (!v4) {
-		/* Expect non-:: IP in remote_ip6 */
-		if (LSB(ctx->remote_ip6[0], 0) == 0 && LSB(ctx->remote_ip6[0], 1) == 0 &&
-		    LSB(ctx->remote_ip6[0], 2) == 0 && LSB(ctx->remote_ip6[0], 3) == 0 &&
-		    LSB(ctx->remote_ip6[1], 0) == 0 && LSB(ctx->remote_ip6[1], 1) == 0 &&
-		    LSB(ctx->remote_ip6[1], 2) == 0 && LSB(ctx->remote_ip6[1], 3) == 0 &&
-		    LSB(ctx->remote_ip6[2], 0) == 0 && LSB(ctx->remote_ip6[2], 1) == 0 &&
-		    LSB(ctx->remote_ip6[2], 2) == 0 && LSB(ctx->remote_ip6[2], 3) == 0 &&
-		    LSB(ctx->remote_ip6[3], 0) == 0 && LSB(ctx->remote_ip6[3], 1) == 0 &&
-		    LSB(ctx->remote_ip6[3], 2) == 0 && LSB(ctx->remote_ip6[3], 3) == 0)
+		/* Expect SRC_IP6 in remote_ip6 */
+		if (LSB(ctx->remote_ip6[0], 0) != ((SRC_IP6[0] >> 0) & 0xff) ||
+		    LSB(ctx->remote_ip6[0], 1) != ((SRC_IP6[0] >> 8) & 0xff) ||
+		    LSB(ctx->remote_ip6[0], 2) != ((SRC_IP6[0] >> 16) & 0xff) ||
+		    LSB(ctx->remote_ip6[0], 3) != ((SRC_IP6[0] >> 24) & 0xff) ||
+		    LSB(ctx->remote_ip6[1], 0) != ((SRC_IP6[1] >> 0) & 0xff) ||
+		    LSB(ctx->remote_ip6[1], 1) != ((SRC_IP6[1] >> 8) & 0xff) ||
+		    LSB(ctx->remote_ip6[1], 2) != ((SRC_IP6[1] >> 16) & 0xff) ||
+		    LSB(ctx->remote_ip6[1], 3) != ((SRC_IP6[1] >> 24) & 0xff) ||
+		    LSB(ctx->remote_ip6[2], 0) != ((SRC_IP6[2] >> 0) & 0xff) ||
+		    LSB(ctx->remote_ip6[2], 1) != ((SRC_IP6[2] >> 8) & 0xff) ||
+		    LSB(ctx->remote_ip6[2], 2) != ((SRC_IP6[2] >> 16) & 0xff) ||
+		    LSB(ctx->remote_ip6[2], 3) != ((SRC_IP6[2] >> 24) & 0xff) ||
+		    LSB(ctx->remote_ip6[3], 0) != ((SRC_IP6[3] >> 0) & 0xff) ||
+		    LSB(ctx->remote_ip6[3], 1) != ((SRC_IP6[3] >> 8) & 0xff) ||
+		    LSB(ctx->remote_ip6[3], 2) != ((SRC_IP6[3] >> 16) & 0xff) ||
+		    LSB(ctx->remote_ip6[3], 3) != ((SRC_IP6[3] >> 24) & 0xff))
 			return SK_DROP;
-		if (LSW(ctx->remote_ip6[0], 0) == 0 && LSW(ctx->remote_ip6[0], 1) == 0 &&
-		    LSW(ctx->remote_ip6[1], 0) == 0 && LSW(ctx->remote_ip6[1], 1) == 0 &&
-		    LSW(ctx->remote_ip6[2], 0) == 0 && LSW(ctx->remote_ip6[2], 1) == 0 &&
-		    LSW(ctx->remote_ip6[3], 0) == 0 && LSW(ctx->remote_ip6[3], 1) == 0)
+		if (LSW(ctx->remote_ip6[0], 0) != ((SRC_IP6[0] >> 0) & 0xffff) ||
+		    LSW(ctx->remote_ip6[0], 1) != ((SRC_IP6[0] >> 16) & 0xffff) ||
+		    LSW(ctx->remote_ip6[1], 0) != ((SRC_IP6[1] >> 0) & 0xffff) ||
+		    LSW(ctx->remote_ip6[1], 1) != ((SRC_IP6[1] >> 16) & 0xffff) ||
+		    LSW(ctx->remote_ip6[2], 0) != ((SRC_IP6[2] >> 0) & 0xffff) ||
+		    LSW(ctx->remote_ip6[2], 1) != ((SRC_IP6[2] >> 16) & 0xffff) ||
+		    LSW(ctx->remote_ip6[3], 0) != ((SRC_IP6[3] >> 0) & 0xffff) ||
+		    LSW(ctx->remote_ip6[3], 1) != ((SRC_IP6[3] >> 16) & 0xffff))
 			return SK_DROP;
 		/* Expect DST_IP6 in local_ip6 */
 		if (LSB(ctx->local_ip6[0], 0) != ((DST_IP6[0] >> 0) & 0xff) ||
-- 
2.27.0



* [PATCH bpf-next 8/8] selftests: bpf: check that PROG_TEST_RUN repeats as requested
From: Lorenz Bauer @ 2021-02-16 10:57 UTC
  To: ast, daniel, andrii, jakub; +Cc: kernel-team, bpf, netdev, Lorenz Bauer

Extend a simple prog_run test to check that PROG_TEST_RUN performs
the requested number of repetitions. Convert it to use a BPF skeleton.
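
The check is based on the kernel's run time statistics (sketch of
the pattern used in the diff below):

    stats_fd = bpf_enable_stats(BPF_STATS_RUN_TIME);

    /* ... one or more test runs with tattr.repeat set ... */

    err = bpf_obj_get_info_by_fd(prog_fd, &info, &info_len);
    /* info.run_cnt must now equal the sum of all repeat values */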

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 .../selftests/bpf/prog_tests/prog_run_xattr.c | 51 +++++++++++++++----
 1 file changed, 42 insertions(+), 9 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/prog_run_xattr.c b/tools/testing/selftests/bpf/prog_tests/prog_run_xattr.c
index 935a294f049a..131d7f7eeb42 100644
--- a/tools/testing/selftests/bpf/prog_tests/prog_run_xattr.c
+++ b/tools/testing/selftests/bpf/prog_tests/prog_run_xattr.c
@@ -2,12 +2,31 @@
 #include <test_progs.h>
 #include <network_helpers.h>
 
-void test_prog_run_xattr(void)
+#include "test_pkt_access.skel.h"
+
+static const __u32 duration;
+
+static void check_run_cnt(int prog_fd, __u64 run_cnt)
 {
-	const char *file = "./test_pkt_access.o";
-	struct bpf_object *obj;
-	char buf[10];
+	struct bpf_prog_info info = {};
+	__u32 info_len = sizeof(info);
 	int err;
+
+	err = bpf_obj_get_info_by_fd(prog_fd, &info, &info_len);
+	if (CHECK(err, "get_prog_info", "failed to get bpf_prog_info for fd %d\n", prog_fd))
+		return;
+
+	CHECK(run_cnt != info.run_cnt, "run_cnt",
+	      "incorrect number of repetitions, want %llu have %llu\n", run_cnt, info.run_cnt);
+}
+
+void test_prog_run_xattr(void)
+{
+	struct test_pkt_access *skel;
+	int err, stats_fd = -1;
+	char buf[10] = {};
+	__u64 run_cnt = 0;
+
 	struct bpf_prog_test_run_attr tattr = {
 		.repeat = 1,
 		.data_in = &pkt_v4,
@@ -16,12 +35,15 @@ void test_prog_run_xattr(void)
 		.data_size_out = 5,
 	};
 
-	err = bpf_prog_load(file, BPF_PROG_TYPE_SCHED_CLS, &obj,
-			    &tattr.prog_fd);
-	if (CHECK_ATTR(err, "load", "err %d errno %d\n", err, errno))
+	stats_fd = bpf_enable_stats(BPF_STATS_RUN_TIME);
+	if (CHECK_ATTR(stats_fd < 0, "enable_stats", "failed %d\n", errno))
 		return;
 
-	memset(buf, 0, sizeof(buf));
+	skel = test_pkt_access__open_and_load();
+	if (CHECK_ATTR(!skel, "open_and_load", "failed\n"))
+		goto cleanup;
+
+	tattr.prog_fd = bpf_program__fd(skel->progs.test_pkt_access);
 
 	err = bpf_prog_test_run_xattr(&tattr);
 	CHECK_ATTR(err != -1 || errno != ENOSPC || tattr.retval, "run",
@@ -34,8 +56,12 @@ void test_prog_run_xattr(void)
 	CHECK_ATTR(buf[5] != 0, "overflow",
 	      "BPF_PROG_TEST_RUN ignored size hint\n");
 
+	run_cnt += tattr.repeat;
+	check_run_cnt(tattr.prog_fd, run_cnt);
+
 	tattr.data_out = NULL;
 	tattr.data_size_out = 0;
+	tattr.repeat = 2;
 	errno = 0;
 
 	err = bpf_prog_test_run_xattr(&tattr);
@@ -46,5 +72,12 @@ void test_prog_run_xattr(void)
 	err = bpf_prog_test_run_xattr(&tattr);
 	CHECK_ATTR(err != -EINVAL, "run_wrong_size_out", "err %d\n", err);
 
-	bpf_object__close(obj);
+	run_cnt += tattr.repeat;
+	check_run_cnt(tattr.prog_fd, run_cnt);
+
+cleanup:
+	if (skel)
+		test_pkt_access__destroy(skel);
+	if (stats_fd != -1)
+		close(stats_fd);
 }
-- 
2.27.0



* RE: [PATCH bpf-next 0/8] PROG_TEST_RUN support for sk_lookup programs
From: John Fastabend @ 2021-02-17 20:08 UTC
  To: Lorenz Bauer, ast, daniel, andrii, jakub
  Cc: kernel-team, bpf, netdev, Lorenz Bauer

Lorenz Bauer wrote:
> We don't have PROG_TEST_RUN support for sk_lookup programs at the
> moment. So far this hasn't been a problem, since we can run our
> tests in a separate network namespace. For benchmarking it's nice
> to have PROG_TEST_RUN, so I've gone and implemented it.
> 
> Multiple sk_lookup programs can be attached at once to the same
> netns. This can't be expressed with the current PROG_TEST_RUN
> API, so I'm proposing to extend it with an array of prog_fd.
> 
> Patches 1-2 are clean-ups. Patches 3-4 add the new UAPI and
> implement PROG_TEST_RUN for sk_lookup. Patch 5 adds a new
> libbpf function for testing multiple programs. Patches 6-8
> add tests.
> 
> Andrii, for patch 5 I decided on the following API:
> 
>     int bpf_prog_test_run_array(__u32 *prog_fds, __u32 prog_fds_cnt,
>                                 struct bpf_test_run_opts *opts)
> 
> To be consistent with the rest of libbpf it would be better
> to take int *prog_fds, but then the function would have to
> copy and convert the array on platforms where
> 
>     sizeof(int) != sizeof(__u32)
> 
> Please let me know what your preference is.

Seems reasonable to me. For the series,

Acked-by: John Fastabend <john.fastabend@gmail.com>


* Re: [PATCH bpf-next 4/8] bpf: add PROG_TEST_RUN support for sk_lookup programs
From: Alexei Starovoitov @ 2021-02-23  1:11 UTC
  To: Lorenz Bauer; +Cc: ast, daniel, andrii, jakub, kernel-team, bpf, netdev

On Tue, Feb 16, 2021 at 10:57:09AM +0000, Lorenz Bauer wrote:
> +	user_ctx = bpf_ctx_init(kattr, sizeof(*user_ctx));
> +	if (IS_ERR(user_ctx))
> +		return PTR_ERR(user_ctx);
> +
> +	if (!user_ctx)
> +		return -EINVAL;
> +
> +	if (user_ctx->sk)
> +		goto out;
> +
> +	if (!range_is_zero(user_ctx, offsetofend(typeof(*user_ctx), local_port), sizeof(*user_ctx)))
> +		goto out;
> +
> +	if (user_ctx->local_port > U16_MAX || user_ctx->remote_port > U16_MAX) {
> +		ret = -ERANGE;
> +		goto out;
> +	}
> +
> +	ctx.family = user_ctx->family;
> +	ctx.protocol = user_ctx->protocol;
> +	ctx.dport = user_ctx->local_port;
> +	ctx.sport = user_ctx->remote_port;
> +
> +	switch (ctx.family) {
> +	case AF_INET:
> +		ctx.v4.daddr = user_ctx->local_ip4;
> +		ctx.v4.saddr = user_ctx->remote_ip4;
> +		break;
> +
> +#if IS_ENABLED(CONFIG_IPV6)
> +	case AF_INET6:
> +		ctx.v6.daddr = (struct in6_addr *)user_ctx->local_ip6;
> +		ctx.v6.saddr = (struct in6_addr *)user_ctx->remote_ip6;
> +		break;
> +#endif
> +
> +	default:
> +		ret = -EAFNOSUPPORT;
> +		goto out;
> +	}
> +
> +	while (t_check(&t, repeat, &ret, &duration)) {
> +		ctx.selected_sk = NULL;
> +		retval = BPF_PROG_SK_LOOKUP_RUN_ARRAY(progs, ctx, BPF_PROG_RUN);
> +	}
> +
> +	if (ret < 0)
> +		goto out;
> +
> +	user_ctx->cookie = 0;
> +	if (ctx.selected_sk) {
> +		if (ctx.selected_sk->sk_reuseport && !ctx.no_reuseport) {
> +			ret = -EOPNOTSUPP;
> +			goto out;
> +		}
> +
> +		user_ctx->cookie = sock_gen_cookie(ctx.selected_sk);
> +	}

I'm struggling to come up with a case where running N sk_lookup progs
like this cannot be done by running them one by one.
It looks to me that this N prog_fds api is not really about running and
testing the progs, but about testing BPF_PROG_SK_LOOKUP_RUN_ARRAY()
SK_PASS vs SK_DROP logic.
So it's more about testing kernel infra than testing programs.
Are you suggesting that the sequence of sk_lookup progs is so delicate
that they are aware of each other and _have_ to be tested together
with the gluing logic that the macro provides?
But if so then testing the progs one by one would be better,
because test_run will be able to check each individual prog's return
code instead of the implicit BPF_PROG_SK_LOOKUP_RUN_ARRAY logic.
It feels less like a unit test and more like a full stack test,
but if so then the lack of a cookie on input is questionable.
The progs can only examine port/ip/family data.
So testing them individually would give a more accurate picture of
what the progs are doing, and potential bugs could be found sooner
than by testing the sequence of progs. In a sequence one prog could
have been buggy, but the final cookie came out correct.

Looking at patch 7 it seems the unit test framework will be comparing
the cookies for your production tests, but then the netns argument
in the cover letter is surprising. If the tests are run in the
init_netns then the selected sockets will be just as special as in
patch 7. So it's not a full stack kinda test.

In other words I'm struggling with the in-between state of the api.
test_run with N fds is not really a full test, but not a unit test either.


* Re: [PATCH bpf-next 0/8] PROG_TEST_RUN support for sk_lookup programs
From: Andrii Nakryiko @ 2021-02-23  7:29 UTC
  To: Lorenz Bauer
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Jakub Sitnicki, kernel-team, bpf, Networking

On Tue, Feb 16, 2021 at 2:58 AM Lorenz Bauer <lmb@cloudflare.com> wrote:
>
> We don't have PROG_TEST_RUN support for sk_lookup programs at the
> moment. So far this hasn't been a problem, since we can run our
> tests in a separate network namespace. For benchmarking it's nice
> to have PROG_TEST_RUN, so I've gone and implemented it.
>
> Multiple sk_lookup programs can be attached at once to the same
> netns. This can't be expressed with the current PROG_TEST_RUN
> API, so I'm proposing to extend it with an array of prog_fd.
>
> Patches 1-2 are clean ups. Patches 3-4 add the new UAPI and
> implement PROG_TEST_RUN for sk_lookup. Patch 5 adds a new
> function to libbpf to access multi prog tests. Patches 6-8 add
> tests.
>
> Andrii, for patch 4 I decided on the following API:
>
>     int bpf_prog_test_run_array(__u32 *prog_fds, __u32 prog_fds_cnt,
>                                 struct bpf_test_run_opts *opts)
>
> To be consistent with the rest of libbpf it would be better
> to take int *prog_fds, but I think then the function would have to
> convert the array to account for platforms where
>
>     sizeof(int) != sizeof(__u32)

Curious, is there any supported architecture where this is not the
case? I think it's fine to be consistent, tbh, and use int. Worst
case, in some obscure architecture we'd need to create a copy of an
array. Doesn't seem like a big deal (and highly unlikely anyways).
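
For the record, the copy is trivial. Something along these lines would
do (hypothetical shim, untested; bpf_prog_test_run_array() is the API
proposed in this series):

    #include <errno.h>
    #include <stdlib.h>
    #include <linux/types.h>
    #include <bpf/bpf.h>

    /* only needed on a platform where sizeof(int) != sizeof(__u32) */
    static int test_run_array_from_ints(const int *prog_fds, __u32 cnt,
                                        struct bpf_test_run_opts *opts)
    {
            __u32 *fds = calloc(cnt, sizeof(*fds));
            __u32 i;
            int err;

            if (!fds)
                    return -ENOMEM;
            for (i = 0; i < cnt; i++)
                    fds[i] = prog_fds[i];
            err = bpf_prog_test_run_array(fds, cnt, opts);
            free(fds);
            return err;
    }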

>
> Please let me know what your preference is.
>
> Lorenz Bauer (8):
>   bpf: consolidate shared test timing code
>   bpf: add for_each_bpf_prog helper
>   bpf: allow multiple programs in BPF_PROG_TEST_RUN
>   bpf: add PROG_TEST_RUN support for sk_lookup programs
>   tools: libbpf: allow testing program types with multi-prog semantics
>   selftests: bpf: convert sk_lookup multi prog tests to PROG_TEST_RUN
>   selftests: bpf: convert sk_lookup ctx access tests to PROG_TEST_RUN
>   selftests: bpf: check that PROG_TEST_RUN repeats as requested
>
>  include/linux/bpf-netns.h                     |   2 +
>  include/linux/bpf.h                           |  24 +-
>  include/linux/filter.h                        |   4 +-
>  include/uapi/linux/bpf.h                      |  11 +-
>  kernel/bpf/net_namespace.c                    |   2 +-
>  kernel/bpf/syscall.c                          |  73 +++++-
>  net/bpf/test_run.c                            | 230 +++++++++++++-----
>  net/core/filter.c                             |   1 +
>  tools/include/uapi/linux/bpf.h                |  11 +-
>  tools/lib/bpf/bpf.c                           |  16 +-
>  tools/lib/bpf/bpf.h                           |   3 +
>  tools/lib/bpf/libbpf.map                      |   1 +
>  .../selftests/bpf/prog_tests/prog_run_xattr.c |  51 +++-
>  .../selftests/bpf/prog_tests/sk_lookup.c      | 172 +++++++++----
>  .../selftests/bpf/progs/test_sk_lookup.c      |  62 +++--
>  15 files changed, 499 insertions(+), 164 deletions(-)
>
> --
> 2.27.0
>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 4/8] bpf: add PROG_TEST_RUN support for sk_lookup programs
  2021-02-23  1:11   ` Alexei Starovoitov
@ 2021-02-23 10:10     ` Lorenz Bauer
  2021-02-24  6:11       ` Alexei Starovoitov
  0 siblings, 1 reply; 16+ messages in thread
From: Lorenz Bauer @ 2021-02-23 10:10 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Jakub Sitnicki, kernel-team, bpf, Networking

On Tue, 23 Feb 2021 at 01:11, Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> I'm struggling to come up with the case where running N sk_lookup progs
> like this cannot be done by running them one by one.
> It looks to me that this N prog_fds API is not really about running and
> testing the progs, but about testing BPF_PROG_SK_LOOKUP_RUN_ARRAY()
> SK_PASS vs SK_DROP logic.

In a way that is true, yes. TBH I figured that my patch set would be
rejected if I just implemented single program test run, since it
doesn't allow exercising the full sk_lookup test run semantics.

> So it's more about testing kernel infra than testing programs.
> Are you suggesting that the sequence of sk_lookup progs is so delicate
> that they are aware of each other and _have_ to be tested together
> with the gluing logic that the macro provides?

We currently don't have a case like that.

> But if it is so then testing the progs one by one would be better,
> because test_run will be able to check each individual prog's return code
> instead of the implicit BPF_PROG_SK_LOOKUP_RUN_ARRAY logic.

That means emulating the subtle BPF_PROG_SK_LOOKUP_RUN_ARRAY logic
in user space, which isn't trivial and is a source of bugs.

For example we rely on having multiple programs attached when
"upgrading" from old to new BPF. Here we care mostly that we don't drop
lookups on the floor, and the behaviour is tightly coupled to the in-kernel
implementation. It's not much use to cobble together my own implementation of
SK_LOOKUP_RUN_ARRAY here; I would rather use multi progs to test this.
Of course we can also already spawn a netns and test it that way, so not
much is lost if there is no multi prog test run.
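
(For context, the upgrade window looks roughly like this on our side.
A sketch assuming libbpf's bpf_program__attach_netns(); names are
illustrative and error handling is elided:)

    #include <fcntl.h>
    #include <unistd.h>
    #include <bpf/libbpf.h>

    static struct bpf_link *upgrade_sk_lookup(struct bpf_program *old_prog,
                                              struct bpf_program *new_prog)
    {
            int netns_fd = open("/proc/self/ns/net", O_RDONLY);
            struct bpf_link *old_link, *new_link;

            /* both progs are attached, and run, at the same time */
            old_link = bpf_program__attach_netns(old_prog, netns_fd);
            new_link = bpf_program__attach_netns(new_prog, netns_fd);
            close(netns_fd);

            /* once the new prog is verified, drop the old one */
            bpf_link__destroy(old_link);
            return new_link;
    }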

> It feels less like a unit test and more like a full stack test,
> but if so then the lack of cookie on input is questionable.

I'm not sure what you mean by "the lack of cookie on input is
questionable"; can you rephrase?

> In other words I'm struggling with the in-between state of the API.
> test_run with N fds is not really a full test, but not a unit test either.

If I understand you correctly, a "full" API would expose the
intermediate results from individual programs as well as the final
selection? Sounds quite complicated, and as you point out most of the
benefits can be had from running single programs.

I'm happy to drop the multiple programs bit; like I mentioned, I did it
for completeness' sake. I care about being able to test or benchmark a
single sk_lookup program.

Lorenz

-- 
Lorenz Bauer  |  Systems Engineer
6th Floor, County Hall/The Riverside Building, SE1 7PB, UK

www.cloudflare.com

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 0/8] PROG_TEST_RUN support for sk_lookup programs
  2021-02-23  7:29 ` Andrii Nakryiko
@ 2021-02-23 10:12   ` Lorenz Bauer
  2021-02-24 21:37   ` Daniel Borkmann
  1 sibling, 0 replies; 16+ messages in thread
From: Lorenz Bauer @ 2021-02-23 10:12 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Jakub Sitnicki, kernel-team, bpf, Networking

On Tue, 23 Feb 2021 at 07:29, Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
>
> Curious, is there any supported architecture where this is not the
> case? I think it's fine to be consistent, tbh, and use int. Worst
> case, in some obscure architecture we'd need to create a copy of an
> array. Doesn't seem like a big deal (and highly unlikely anyways).

Ok, thanks! I'm not super familiar with C platform differences, so I wanted
to be on the safe side. I'll take this up depending on the outcome of the
conversation with Alexei; maybe I don't need to add this after all.

-- 
Lorenz Bauer  |  Systems Engineer
6th Floor, County Hall/The Riverside Building, SE1 7PB, UK

www.cloudflare.com

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 4/8] bpf: add PROG_TEST_RUN support for sk_lookup programs
  2021-02-23 10:10     ` Lorenz Bauer
@ 2021-02-24  6:11       ` Alexei Starovoitov
  0 siblings, 0 replies; 16+ messages in thread
From: Alexei Starovoitov @ 2021-02-24  6:11 UTC (permalink / raw)
  To: Lorenz Bauer
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Jakub Sitnicki, kernel-team, bpf, Networking

On Tue, Feb 23, 2021 at 10:10:44AM +0000, Lorenz Bauer wrote:
> On Tue, 23 Feb 2021 at 01:11, Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > I'm struggling to come up with the case where running N sk_lookup progs
> > like this cannot be done by running them one by one.
> > It looks to me that this N prog_fds API is not really about running and
> > testing the progs, but about testing BPF_PROG_SK_LOOKUP_RUN_ARRAY()
> > SK_PASS vs SK_DROP logic.
> 
> In a way that is true, yes. TBH I figured that my patch set would be
> rejected if I just implemented single program test run, since it
> doesn't allow exercising the full sk_lookup test run semantics.
> 
> > So it's more about testing kernel infra than testing programs.
> > Are you suggesting that the sequence of sk_lookup progs is so delicate
> > that they are aware of each other and _have_ to be tested together
> > with the gluing logic that the macro provides?
> 
> We currently don't have a case like that.
> 
> > But if it is so then testing the progs one by one would be better,
> > because test_run will be able to check each individual prog's return code
> > instead of the implicit BPF_PROG_SK_LOOKUP_RUN_ARRAY logic.
> 
> That means emulating the subtle BPF_PROG_SK_LOOKUP_RUN_ARRAY logic
> in user space, which isn't trivial and is a source of bugs.

I'm not at all suggesting to emulate it in user space.

> For example we rely on having multiple programs attached when
> "upgrading" from old to new BPF. Here we care mostly that we don't drop
> lookups on the floor, and the behaviour is tightly coupled to the in-kernel
> implementation. It's not much use to cobble together my own implementation of
> SK_LOOKUP_RUN_ARRAY here; I would rather use multi progs to test this.
> Of course we can also already spawn a netns and test it that way, so not
> much is lost if there is no multi prog test run.

I mean that to test the whole setup close to production, the netns is
probably needed because sockets would mess with init_netns.
But to test each individual bpf prog there is no need for RUN_ARRAY.
Each prog can be more accurately tested in isolation.
RUN_ARRAY adds, as you said, the subtle details of the RUN_ARRAY macro.

> > It feels less like a unit test and more like a full stack test,
> > but if so then the lack of cookie on input is questionable.
> 
> I'm not sure what you mean by "the lack of cookie on input is
> questionable"; can you rephrase?
> 
> > In other words I'm struggling with the in-between state of the API.
> > test_run with N fds is not really a full test, but not a unit test either.
> 
> If I understand you correctly, a "full" API would expose the
> intermediate results from individual programs as well as the final
> selection? Sounds quite complicated, and as you point out most of the
> benefits can be had from running single programs.

I'm not suggesting to return intermediate results either.
I'm looking at test_run as a facility to test one individual program
at a time. Like in tc, cgroups and tracing, we can have multiple progs
attached to one place, and the final verdict will depend on what
each prog returns. But there is no need to test them all together
through BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY.
Each prog is more accurately validated independently.
Hence I'm puzzled why sk_lookup's RUN_ARRAY is special.
Its drop/pass/selected sk logic has more or less the same complexity
as CGROUP_INET_EGRESS_RUN_ARRAY.
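
With single prog test_run the user space side would stay simple, e.g.
(a sketch using the existing bpf_prog_test_run_opts() libbpf API; using
struct bpf_sk_lookup as the test_run ctx, including the cookie field,
is specific to this series, and the helper name is illustrative):

    #include <linux/bpf.h>
    #include <bpf/bpf.h>

    static int test_progs_one_by_one(const int *prog_fds, int prog_cnt,
                                     struct bpf_sk_lookup *user_ctx)
    {
            DECLARE_LIBBPF_OPTS(bpf_test_run_opts, topts,
                    .ctx_in = user_ctx,
                    .ctx_size_in = sizeof(*user_ctx),
                    .ctx_out = user_ctx,
                    .ctx_size_out = sizeof(*user_ctx),
            );
            int i, err;

            for (i = 0; i < prog_cnt; i++) {
                    err = bpf_prog_test_run_opts(prog_fds[i], &topts);
                    if (err)
                            return err;
                    /* topts.retval is this prog's own SK_PASS/SK_DROP;
                     * user_ctx->cookie says which socket it selected
                     */
            }
            return 0;
    }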

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 0/8] PROG_TEST_RUN support for sk_lookup programs
  2021-02-23  7:29 ` Andrii Nakryiko
  2021-02-23 10:12   ` Lorenz Bauer
@ 2021-02-24 21:37   ` Daniel Borkmann
  1 sibling, 0 replies; 16+ messages in thread
From: Daniel Borkmann @ 2021-02-24 21:37 UTC (permalink / raw)
  To: Andrii Nakryiko, Lorenz Bauer
  Cc: Alexei Starovoitov, Andrii Nakryiko, Jakub Sitnicki, kernel-team,
	bpf, Networking

On 2/23/21 8:29 AM, Andrii Nakryiko wrote:
> On Tue, Feb 16, 2021 at 2:58 AM Lorenz Bauer <lmb@cloudflare.com> wrote:
>>
>> We don't have PROG_TEST_RUN support for sk_lookup programs at the
>> moment. So far this hasn't been a problem, since we can run our
>> tests in a separate network namespace. For benchmarking it's nice
>> to have PROG_TEST_RUN, so I've gone and implemented it.
>>
>> Multiple sk_lookup programs can be attached at once to the same
>> netns. This can't be expressed with the current PROG_TEST_RUN
>> API, so I'm proposing to extend it with an array of prog_fd.
>>
>> Patches 1-2 are clean ups. Patches 3-4 add the new UAPI and
>> implement PROG_TEST_RUN for sk_lookup. Patch 5 adds a new
>> function to libbpf to access multi prog tests. Patches 6-8 add
>> tests.
>>
>> Andrii, for patch 4 I decided on the following API:
>>
>>      int bpf_prog_test_run_array(__u32 *prog_fds, __u32 prog_fds_cnt,
>>                                  struct bpf_test_run_opts *opts)
>>
>> To be consistent with the rest of libbpf it would be better
>> to take int *prog_fds, but I think then the function would have to
>> convert the array to account for platforms where
>>
>>      sizeof(int) != sizeof(__u32)
> 
> Curious, is there any supported architecture where this is not the
> case? I think it's fine to be consistent, tbh, and use int. Worst
> case, in some obscure architecture we'd need to create a copy of an
> array. Doesn't seem like a big deal (and highly unlikely anyways).

Given __u32 is a kernel UAPI type exported to user space (e.g. used in
syscall APIs), you can check where / how it is defined. Mainly here:

   include/uapi/asm-generic/int-l64.h:27:typedef unsigned int __u32;
   include/uapi/asm-generic/int-ll64.h:27:typedef unsigned int __u32;
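
So on anything the kernel supports, __u32 is plain unsigned int. If one
wanted to be defensive anyway, the assumption is cheap to encode, e.g.
(a build-time check sketch):

    #include <linux/types.h>

    /* fails to compile on a (hypothetical) platform where the
     * sizes ever diverge
     */
    _Static_assert(sizeof(int) == sizeof(__u32),
                   "int and __u32 must have the same size");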

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2021-02-24 21:38 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-02-16 10:57 [PATCH bpf-next 0/8] PROG_TEST_RUN support for sk_lookup programs Lorenz Bauer
2021-02-16 10:57 ` [PATCH bpf-next 1/8] bpf: consolidate shared test timing code Lorenz Bauer
2021-02-16 10:57 ` [PATCH bpf-next 2/8] bpf: add for_each_bpf_prog helper Lorenz Bauer
2021-02-16 10:57 ` [PATCH bpf-next 3/8] bpf: allow multiple programs in BPF_PROG_TEST_RUN Lorenz Bauer
2021-02-16 10:57 ` [PATCH bpf-next 4/8] bpf: add PROG_TEST_RUN support for sk_lookup programs Lorenz Bauer
2021-02-23  1:11   ` Alexei Starovoitov
2021-02-23 10:10     ` Lorenz Bauer
2021-02-24  6:11       ` Alexei Starovoitov
2021-02-16 10:57 ` [PATCH bpf-next 5/8] tools: libbpf: allow testing program types with multi-prog semantics Lorenz Bauer
2021-02-16 10:57 ` [PATCH bpf-next 6/8] selftests: bpf: convert sk_lookup multi prog tests to PROG_TEST_RUN Lorenz Bauer
2021-02-16 10:57 ` [PATCH bpf-next 7/8] selftests: bpf: convert sk_lookup ctx access " Lorenz Bauer
2021-02-16 10:57 ` [PATCH bpf-next 8/8] selftests: bpf: check that PROG_TEST_RUN repeats as requested Lorenz Bauer
2021-02-17 20:08 ` [PATCH bpf-next 0/8] PROG_TEST_RUN support for sk_lookup programs John Fastabend
2021-02-23  7:29 ` Andrii Nakryiko
2021-02-23 10:12   ` Lorenz Bauer
2021-02-24 21:37   ` Daniel Borkmann

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox,
as well as URLs for NNTP newsgroup(s).