* [PATCH bpf-next v3 0/4] introduce dummy BPF STRUCT_OPS
@ 2021-10-22  7:55 Hou Tao
  2021-10-22  7:55 ` [PATCH bpf-next v3 1/4] bpf: factor out a helper to prepare trampoline for struct_ops prog Hou Tao
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Hou Tao @ 2021-10-22  7:55 UTC (permalink / raw)
  To: Alexei Starovoitov, Martin KaFai Lau
  Cc: Yonghong Song, Daniel Borkmann, Andrii Nakryiko, netdev, bpf, houtao1

Hi,

Currently the testing of BPF STRUCT_OPS depends on a specific bpf
implementation (e.g., tcp_congestion_ops), but it cannot cover all
basic functionalities (e.g., return value handling), so introduce
a dummy BPF STRUCT_OPS for testing purposes.

Instead of loading a userspace-implemented bpf_dummy_ops map into
the kernel and invoking the specific function by writing to a sysfs
file provided by bpf_testmod.ko, only the bpf_dummy_ops related progs
are loaded into the kernel and run through bpf_prog_test_run(). The
latter approach is more flexible and has no dependency on an extra
kernel module.

For now, return value handling is exercised by the test_1(...) op,
and passing multiple arguments is exercised by the test_2(...) op.
If more coverage is needed, test_x(...) ops can be added afterwards.
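
For reference, below is a minimal userspace sketch of how test_1() can be
driven through bpf_prog_test_run(). It mirrors the selftest added in patch 4
and assumes the dummy_st_ops.skel.h skeleton generated from the selftest
prog; the function name run_dummy_test_1() is only illustrative.

#include <bpf/bpf.h>
#include <bpf/libbpf.h>
#include "dummy_st_ops.skel.h"

/* keep consistent with the definition in include/linux/bpf.h */
struct bpf_dummy_ops_state {
	int val;
};

static int run_dummy_test_1(void)
{
	struct bpf_dummy_ops_state state = { .val = 0xbeef };
	/* args[0] carries the user pointer to state; 0 means a NULL state */
	__u64 args[1] = { (unsigned long)&state };
	struct bpf_prog_test_run_attr attr = {
		.ctx_size_in = sizeof(args),
		.ctx_in = args,
	};
	struct dummy_st_ops *skel;
	int err;

	skel = dummy_st_ops__open_and_load();
	if (!skel)
		return -1;

	attr.prog_fd = bpf_program__fd(skel->progs.test_1);
	err = bpf_prog_test_run_xattr(&attr);
	/* attr.retval now holds the value returned by test_1() */
	dummy_st_ops__destroy(skel);
	return err;
}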

Comments are always welcome.
Regards,
Hou

Change Log:
v3:
 * rebase on bpf-next
 * address comments from Martin, mainly: merge patches 3 & 4 of v2,
   fix names of btf ctx access check helpers, handle CONFIG_NET,
   fix leak in dummy_ops_init_args(), simplify bpf_dummy_init()
 * patch 4: use a loop to check args in test_dummy_multiple_args()

v2: https://www.spinics.net/lists/bpf/msg47948.html
 * rebase on bpf-next
 * add test_2(...) ops to test the passing of multiple arguments
 * a new patch (patch #2) is added to factor out ctx access helpers
 * address comments from Martin & Andrii

v1: https://www.spinics.net/lists/bpf/msg46787.html

RFC: https://www.spinics.net/lists/bpf/msg46117.html

Hou Tao (4):
  bpf: factor out a helper to prepare trampoline for struct_ops prog
  bpf: factor out helpers to check ctx access for BTF function
  bpf: add dummy BPF STRUCT_OPS for test purpose
  selftests/bpf: add test cases for struct_ops prog

 include/linux/bpf.h                           |  47 ++++
 kernel/bpf/bpf_struct_ops.c                   |  32 ++-
 kernel/bpf/bpf_struct_ops_types.h             |   3 +
 kernel/trace/bpf_trace.c                      |  16 +-
 net/bpf/Makefile                              |   3 +
 net/bpf/bpf_dummy_struct_ops.c                | 203 ++++++++++++++++++
 net/ipv4/bpf_tcp_ca.c                         |   9 +-
 .../selftests/bpf/prog_tests/dummy_st_ops.c   | 115 ++++++++++
 .../selftests/bpf/progs/dummy_st_ops.c        |  50 +++++
 9 files changed, 446 insertions(+), 32 deletions(-)
 create mode 100644 net/bpf/bpf_dummy_struct_ops.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c
 create mode 100644 tools/testing/selftests/bpf/progs/dummy_st_ops.c

-- 
2.29.2



* [PATCH bpf-next v3 1/4] bpf: factor out a helper to prepare trampoline for struct_ops prog
  2021-10-22  7:55 [PATCH bpf-next v3 0/4] introduce dummy BPF STRUCT_OPS Hou Tao
@ 2021-10-22  7:55 ` Hou Tao
  2021-10-23  0:14   ` Martin KaFai Lau
  2021-10-22  7:55 ` [PATCH bpf-next v3 2/4] bpf: factor out helpers to check ctx access for BTF function Hou Tao
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Hou Tao @ 2021-10-22  7:55 UTC (permalink / raw)
  To: Alexei Starovoitov, Martin KaFai Lau
  Cc: Yonghong Song, Daniel Borkmann, Andrii Nakryiko, netdev, bpf, houtao1

Factor out a helper, bpf_struct_ops_prepare_trampoline(), to prepare
the trampoline for a BPF_PROG_TYPE_STRUCT_OPS prog. It will be used
by the .test_run callback in a following patch.
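
For reference, below is a condensed sketch of the intended .test_run call
site. It is taken from the dummy struct_ops patch later in this series; the
wrapper name dummy_ops_prepare_image() is illustrative and the error-path
cleanup is trimmed for brevity.

static int dummy_ops_prepare_image(const struct bpf_struct_ops *st_ops,
				   struct bpf_prog *prog, u32 op_idx,
				   void **imagep)
{
	struct bpf_tramp_progs *tprogs;
	void *image;
	int err;

	tprogs = kcalloc(BPF_TRAMP_MAX, sizeof(*tprogs), GFP_KERNEL);
	image = bpf_jit_alloc_exec(PAGE_SIZE);
	if (!tprogs || !image)
		return -ENOMEM;
	set_vm_flush_reset_perms(image);

	/* build a trampoline which calls the single struct_ops prog and
	 * propagates its return value when the op has a non-void return type
	 */
	err = bpf_struct_ops_prepare_trampoline(tprogs, prog,
						&st_ops->func_models[op_idx],
						image, image + PAGE_SIZE);
	if (err < 0)
		return err;

	set_memory_ro((long)image, 1);
	set_memory_x((long)image, 1);
	*imagep = image;
	return 0;
}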

Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 include/linux/bpf.h         |  4 ++++
 kernel/bpf/bpf_struct_ops.c | 29 +++++++++++++++++++----------
 2 files changed, 23 insertions(+), 10 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 31421c74ba08..3d2cf94a72ce 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -999,6 +999,10 @@ bool bpf_struct_ops_get(const void *kdata);
 void bpf_struct_ops_put(const void *kdata);
 int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key,
 				       void *value);
+int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_progs *tprogs,
+				      struct bpf_prog *prog,
+				      const struct btf_func_model *model,
+				      void *image, void *image_end);
 static inline bool bpf_try_module_get(const void *data, struct module *owner)
 {
 	if (owner == BPF_MODULE_OWNER)
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index 9abcc33f02cf..44be101f2562 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -312,6 +312,20 @@ static int check_zero_holes(const struct btf_type *t, void *data)
 	return 0;
 }
 
+int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_progs *tprogs,
+				      struct bpf_prog *prog,
+				      const struct btf_func_model *model,
+				      void *image, void *image_end)
+{
+	u32 flags;
+
+	tprogs[BPF_TRAMP_FENTRY].progs[0] = prog;
+	tprogs[BPF_TRAMP_FENTRY].nr_progs = 1;
+	flags = model->ret_size > 0 ? BPF_TRAMP_F_RET_FENTRY_RET : 0;
+	return arch_prepare_bpf_trampoline(NULL, image, image_end,
+					   model, flags, tprogs, NULL);
+}
+
 static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 					  void *value, u64 flags)
 {
@@ -323,7 +337,7 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 	struct bpf_tramp_progs *tprogs = NULL;
 	void *udata, *kdata;
 	int prog_fd, err = 0;
-	void *image;
+	void *image, *image_end;
 	u32 i;
 
 	if (flags)
@@ -363,12 +377,12 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 	udata = &uvalue->data;
 	kdata = &kvalue->data;
 	image = st_map->image;
+	image_end = st_map->image + PAGE_SIZE;
 
 	for_each_member(i, t, member) {
 		const struct btf_type *mtype, *ptype;
 		struct bpf_prog *prog;
 		u32 moff;
-		u32 flags;
 
 		moff = btf_member_bit_offset(t, member) / 8;
 		ptype = btf_type_resolve_ptr(btf_vmlinux, member->type, NULL);
@@ -430,14 +444,9 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 			goto reset_unlock;
 		}
 
-		tprogs[BPF_TRAMP_FENTRY].progs[0] = prog;
-		tprogs[BPF_TRAMP_FENTRY].nr_progs = 1;
-		flags = st_ops->func_models[i].ret_size > 0 ?
-			BPF_TRAMP_F_RET_FENTRY_RET : 0;
-		err = arch_prepare_bpf_trampoline(NULL, image,
-						  st_map->image + PAGE_SIZE,
-						  &st_ops->func_models[i],
-						  flags, tprogs, NULL);
+		err = bpf_struct_ops_prepare_trampoline(tprogs, prog,
+							&st_ops->func_models[i],
+							image, image_end);
 		if (err < 0)
 			goto reset_unlock;
 
-- 
2.29.2



* [PATCH bpf-next v3 2/4] bpf: factor out helpers to check ctx access for BTF function
  2021-10-22  7:55 [PATCH bpf-next v3 0/4] introduce dummy BPF STRUCT_OPS Hou Tao
  2021-10-22  7:55 ` [PATCH bpf-next v3 1/4] bpf: factor out a helper to prepare trampoline for struct_ops prog Hou Tao
@ 2021-10-22  7:55 ` Hou Tao
  2021-10-23  0:18   ` Martin KaFai Lau
  2021-10-22  7:55 ` [PATCH bpf-next v3 3/4] bpf: add dummy BPF STRUCT_OPS for test purpose Hou Tao
  2021-10-22  7:55 ` [PATCH bpf-next v3 4/4] selftests/bpf: add test cases for struct_ops prog Hou Tao
  3 siblings, 1 reply; 11+ messages in thread
From: Hou Tao @ 2021-10-22  7:55 UTC (permalink / raw)
  To: Alexei Starovoitov, Martin KaFai Lau
  Cc: Yonghong Song, Daniel Borkmann, Andrii Nakryiko, netdev, bpf, houtao1

Factor out two helpers to check the ctx read access of a BTF
function. bpf_tracing_ctx_access() checks whether the read access
to an argument is valid, and bpf_tracing_btf_ctx_access() additionally
checks whether the BTF type of the argument is valid.
bpf_tracing_btf_ctx_access() will be used by a following patch.

Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 include/linux/bpf.h      | 27 +++++++++++++++++++++++++++
 kernel/trace/bpf_trace.c | 16 ++--------------
 net/ipv4/bpf_tcp_ca.c    |  9 +--------
 3 files changed, 30 insertions(+), 22 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 3d2cf94a72ce..0dd2de9eeed3 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1649,6 +1649,33 @@ bool bpf_prog_test_check_kfunc_call(u32 kfunc_id, struct module *owner);
 bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		    const struct bpf_prog *prog,
 		    struct bpf_insn_access_aux *info);
+
+/*
+ * The maximum number of BTF function arguments is MAX_BPF_FUNC_ARGS.
+ * And only aligned read is allowed.
+ */
+static inline bool bpf_tracing_ctx_access(int off, int size,
+					  enum bpf_access_type type)
+{
+	if (off < 0 || off >= sizeof(__u64) * MAX_BPF_FUNC_ARGS)
+		return false;
+	if (type != BPF_READ)
+		return false;
+	if (off % size != 0)
+		return false;
+	return true;
+}
+
+static inline bool bpf_tracing_btf_ctx_access(int off, int size,
+					      enum bpf_access_type type,
+					      const struct bpf_prog *prog,
+					      struct bpf_insn_access_aux *info)
+{
+	if (!bpf_tracing_ctx_access(off, size, type))
+		return false;
+	return btf_ctx_access(off, size, type, prog, info);
+}
+
 int btf_struct_access(struct bpf_verifier_log *log, const struct btf *btf,
 		      const struct btf_type *t, int off, int size,
 		      enum bpf_access_type atype,
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index cbcd0d6fca7c..7396488793ff 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1646,13 +1646,7 @@ static bool raw_tp_prog_is_valid_access(int off, int size,
 					const struct bpf_prog *prog,
 					struct bpf_insn_access_aux *info)
 {
-	if (off < 0 || off >= sizeof(__u64) * MAX_BPF_FUNC_ARGS)
-		return false;
-	if (type != BPF_READ)
-		return false;
-	if (off % size != 0)
-		return false;
-	return true;
+	return bpf_tracing_ctx_access(off, size, type);
 }
 
 static bool tracing_prog_is_valid_access(int off, int size,
@@ -1660,13 +1654,7 @@ static bool tracing_prog_is_valid_access(int off, int size,
 					 const struct bpf_prog *prog,
 					 struct bpf_insn_access_aux *info)
 {
-	if (off < 0 || off >= sizeof(__u64) * MAX_BPF_FUNC_ARGS)
-		return false;
-	if (type != BPF_READ)
-		return false;
-	if (off % size != 0)
-		return false;
-	return btf_ctx_access(off, size, type, prog, info);
+	return bpf_tracing_btf_ctx_access(off, size, type, prog, info);
 }
 
 int __weak bpf_prog_test_run_tracing(struct bpf_prog *prog,
diff --git a/net/ipv4/bpf_tcp_ca.c b/net/ipv4/bpf_tcp_ca.c
index 57709ac09fb2..2cf02b4d77fb 100644
--- a/net/ipv4/bpf_tcp_ca.c
+++ b/net/ipv4/bpf_tcp_ca.c
@@ -81,14 +81,7 @@ static bool bpf_tcp_ca_is_valid_access(int off, int size,
 				       const struct bpf_prog *prog,
 				       struct bpf_insn_access_aux *info)
 {
-	if (off < 0 || off >= sizeof(__u64) * MAX_BPF_FUNC_ARGS)
-		return false;
-	if (type != BPF_READ)
-		return false;
-	if (off % size != 0)
-		return false;
-
-	if (!btf_ctx_access(off, size, type, prog, info))
+	if (!bpf_tracing_btf_ctx_access(off, size, type, prog, info))
 		return false;
 
 	if (info->reg_type == PTR_TO_BTF_ID && info->btf_id == sock_id)
-- 
2.29.2



* [PATCH bpf-next v3 3/4] bpf: add dummy BPF STRUCT_OPS for test purpose
  2021-10-22  7:55 [PATCH bpf-next v3 0/4] introduce dummy BPF STRUCT_OPS Hou Tao
  2021-10-22  7:55 ` [PATCH bpf-next v3 1/4] bpf: factor out a helper to prepare trampoline for struct_ops prog Hou Tao
  2021-10-22  7:55 ` [PATCH bpf-next v3 2/4] bpf: factor out helpers to check ctx access for BTF function Hou Tao
@ 2021-10-22  7:55 ` Hou Tao
  2021-10-23  0:35   ` Martin KaFai Lau
  2021-10-22  7:55 ` [PATCH bpf-next v3 4/4] selftests/bpf: add test cases for struct_ops prog Hou Tao
  3 siblings, 1 reply; 11+ messages in thread
From: Hou Tao @ 2021-10-22  7:55 UTC (permalink / raw)
  To: Alexei Starovoitov, Martin KaFai Lau
  Cc: Yonghong Song, Daniel Borkmann, Andrii Nakryiko, netdev, bpf, houtao1

Currently the testing of BPF STRUCT_OPS depends on the specific bpf
implementation of tcp_congestion_ops, but it cannot cover all
basic functionalities (e.g., return value handling), so introduce
a dummy BPF STRUCT_OPS for testing purposes.

Loading a bpf_dummy_ops implementation from userspace is prohibited;
its only purpose is to run BPF_PROG_TYPE_STRUCT_OPS programs
through bpf(BPF_PROG_TEST_RUN). For now, programs for test_1() and
test_2() are supported. The following three cases are exercised in
bpf_dummy_struct_ops_test_run():

(1) test and check the value returned from the state arg in test_1(state)
The content of state is copied in from a userspace pointer and copied back
after calling test_1(state). The user pointer is saved in a u64 array
and the array address is passed through ctx_in.

(2) test and check the return value of test_1(NULL)
This simulates the case in which an invalid input argument is passed in.

(3) test passing multiple arguments in test_2(state, ...)
5 arguments are passed through ctx_in in the form of a u64 array. The
first element of the array is the userspace pointer to state and the
other 4 arguments follow; a layout sketch is shown below.
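
A minimal userspace sketch of the expected ctx_in layout for test_2() is
below; it mirrors the selftest in the next patch and assumes the same
includes, skeleton and local bpf_dummy_ops_state definition used there.

static int run_dummy_test_2(struct dummy_st_ops *skel)
{
	struct bpf_dummy_ops_state state = { .val = 0 };
	/* args[0]: user pointer to state (0 would mean a NULL state),
	 * args[1..4]: the remaining scalar arguments of test_2()
	 */
	__u64 args[5] = { (unsigned long)&state, -100, 0x8a5f, 'c',
			  0x1234567887654321ULL };
	struct bpf_prog_test_run_attr attr = {
		.prog_fd = bpf_program__fd(skel->progs.test_2),
		.ctx_in = args,
		.ctx_size_in = sizeof(args), /* must be 8 * number of args */
	};

	return bpf_prog_test_run_xattr(&attr);
}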

Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 include/linux/bpf.h               |  16 +++
 kernel/bpf/bpf_struct_ops.c       |   3 +
 kernel/bpf/bpf_struct_ops_types.h |   3 +
 net/bpf/Makefile                  |   3 +
 net/bpf/bpf_dummy_struct_ops.c    | 203 ++++++++++++++++++++++++++++++
 5 files changed, 228 insertions(+)
 create mode 100644 net/bpf/bpf_dummy_struct_ops.c

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 0dd2de9eeed3..1297af15ec1c 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1017,6 +1017,22 @@ static inline void bpf_module_put(const void *data, struct module *owner)
 	else
 		module_put(owner);
 }
+
+#ifdef CONFIG_NET
+/* Define it here to avoid the use of forward declaration */
+struct bpf_dummy_ops_state {
+	int val;
+};
+
+struct bpf_dummy_ops {
+	int (*test_1)(struct bpf_dummy_ops_state *cb);
+	int (*test_2)(struct bpf_dummy_ops_state *cb, int a1, unsigned short a2,
+		      char a3, unsigned long a4);
+};
+
+int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
+			    union bpf_attr __user *uattr);
+#endif
 #else
 static inline const struct bpf_struct_ops *bpf_struct_ops_find(u32 type_id)
 {
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index 44be101f2562..8ecfe4752769 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -93,6 +93,9 @@ const struct bpf_verifier_ops bpf_struct_ops_verifier_ops = {
 };
 
 const struct bpf_prog_ops bpf_struct_ops_prog_ops = {
+#ifdef CONFIG_NET
+	.test_run = bpf_struct_ops_test_run,
+#endif
 };
 
 static const struct btf_type *module_type;
diff --git a/kernel/bpf/bpf_struct_ops_types.h b/kernel/bpf/bpf_struct_ops_types.h
index 066d83ea1c99..5678a9ddf817 100644
--- a/kernel/bpf/bpf_struct_ops_types.h
+++ b/kernel/bpf/bpf_struct_ops_types.h
@@ -2,6 +2,9 @@
 /* internal file - do not include directly */
 
 #ifdef CONFIG_BPF_JIT
+#ifdef CONFIG_NET
+BPF_STRUCT_OPS_TYPE(bpf_dummy_ops)
+#endif
 #ifdef CONFIG_INET
 #include <net/tcp.h>
 BPF_STRUCT_OPS_TYPE(tcp_congestion_ops)
diff --git a/net/bpf/Makefile b/net/bpf/Makefile
index 1c0a98d8c28f..1ebe270bde23 100644
--- a/net/bpf/Makefile
+++ b/net/bpf/Makefile
@@ -1,2 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0-only
 obj-$(CONFIG_BPF_SYSCALL)	:= test_run.o
+ifeq ($(CONFIG_BPF_JIT),y)
+obj-$(CONFIG_BPF_SYSCALL)	+= bpf_dummy_struct_ops.o
+endif
diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c
new file mode 100644
index 000000000000..ac27836e55df
--- /dev/null
+++ b/net/bpf/bpf_dummy_struct_ops.c
@@ -0,0 +1,203 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021. Huawei Technologies Co., Ltd
+ */
+#include <linux/kernel.h>
+#include <linux/bpf_verifier.h>
+#include <linux/bpf.h>
+#include <linux/btf.h>
+
+extern struct bpf_struct_ops bpf_bpf_dummy_ops;
+
+/* A common type for test_N with return value in bpf_dummy_ops */
+typedef int (*dummy_ops_test_ret_fn)(struct bpf_dummy_ops_state *state, ...);
+
+struct bpf_dummy_ops_test_args {
+	u64 args[MAX_BPF_FUNC_ARGS];
+	struct bpf_dummy_ops_state state;
+};
+
+static struct bpf_dummy_ops_test_args *
+dummy_ops_init_args(const union bpf_attr *kattr, unsigned int nr)
+{
+	__u32 size_in;
+	struct bpf_dummy_ops_test_args *args;
+	void __user *ctx_in;
+	void __user *u_state;
+
+	if (!nr || nr > MAX_BPF_FUNC_ARGS)
+		return ERR_PTR(-EINVAL);
+
+	size_in = kattr->test.ctx_size_in;
+	if (size_in != sizeof(u64) * nr)
+		return ERR_PTR(-EINVAL);
+
+	args = kzalloc(sizeof(*args), GFP_KERNEL);
+	if (!args)
+		return ERR_PTR(-ENOMEM);
+
+	ctx_in = u64_to_user_ptr(kattr->test.ctx_in);
+	if (copy_from_user(args->args, ctx_in, size_in))
+		goto out;
+
+	/* args[0] is 0 means state argument of test_N will be NULL */
+	u_state = u64_to_user_ptr(args->args[0]);
+	if (u_state && copy_from_user(&args->state, u_state,
+				      sizeof(args->state)))
+		goto out;
+
+	return args;
+out:
+	kfree(args);
+	return ERR_PTR(-EFAULT);
+}
+
+static int dummy_ops_copy_args(struct bpf_dummy_ops_test_args *args)
+{
+	void __user *u_state;
+
+	u_state = u64_to_user_ptr(args->args[0]);
+	if (u_state && copy_to_user(u_state, &args->state, sizeof(args->state)))
+		return -EFAULT;
+
+	return 0;
+}
+
+static int dummy_ops_call_op(void *image, struct bpf_dummy_ops_test_args *args)
+{
+	dummy_ops_test_ret_fn test = (void *)image;
+	struct bpf_dummy_ops_state *state = NULL;
+
+	/* state needs to be NULL if args[0] is 0 */
+	if (args->args[0])
+		state = &args->state;
+	return test(state, args->args[1], args->args[2],
+		    args->args[3], args->args[4]);
+}
+
+int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
+			    union bpf_attr __user *uattr)
+{
+	const struct bpf_struct_ops *st_ops = &bpf_bpf_dummy_ops;
+	const struct btf_type *func_proto;
+	struct bpf_dummy_ops_test_args *args;
+	struct bpf_tramp_progs *tprogs;
+	void *image = NULL;
+	unsigned int op_idx;
+	int prog_ret;
+	int err;
+
+	if (prog->aux->attach_btf_id != st_ops->type_id)
+		return -EOPNOTSUPP;
+
+	func_proto = prog->aux->attach_func_proto;
+	args = dummy_ops_init_args(kattr, btf_type_vlen(func_proto));
+	if (IS_ERR(args))
+		return PTR_ERR(args);
+
+	tprogs = kcalloc(BPF_TRAMP_MAX, sizeof(*tprogs), GFP_KERNEL);
+	if (!tprogs) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	image = bpf_jit_alloc_exec(PAGE_SIZE);
+	if (!image) {
+		err = -ENOMEM;
+		goto out;
+	}
+	set_vm_flush_reset_perms(image);
+
+	op_idx = prog->expected_attach_type;
+	err = bpf_struct_ops_prepare_trampoline(tprogs, prog,
+						&st_ops->func_models[op_idx],
+						image, image + PAGE_SIZE);
+	if (err < 0)
+		goto out;
+
+	set_memory_ro((long)image, 1);
+	set_memory_x((long)image, 1);
+	prog_ret = dummy_ops_call_op(image, args);
+
+	err = dummy_ops_copy_args(args);
+	if (err)
+		goto out;
+	if (put_user(prog_ret, &uattr->test.retval))
+		err = -EFAULT;
+out:
+	kfree(args);
+	bpf_jit_free_exec(image);
+	kfree(tprogs);
+	return err;
+}
+
+static int bpf_dummy_init(struct btf *btf)
+{
+	return 0;
+}
+
+static bool bpf_dummy_ops_is_valid_access(int off, int size,
+					  enum bpf_access_type type,
+					  const struct bpf_prog *prog,
+					  struct bpf_insn_access_aux *info)
+{
+	return bpf_tracing_btf_ctx_access(off, size, type, prog, info);
+}
+
+static int bpf_dummy_ops_btf_struct_access(struct bpf_verifier_log *log,
+					   const struct btf *btf,
+					   const struct btf_type *t, int off,
+					   int size, enum bpf_access_type atype,
+					   u32 *next_btf_id)
+{
+	const struct btf_type *state;
+	s32 type_id;
+	int err;
+
+	type_id = btf_find_by_name_kind(btf, "bpf_dummy_ops_state",
+					BTF_KIND_STRUCT);
+	if (type_id < 0)
+		return -EINVAL;
+
+	state = btf_type_by_id(btf, type_id);
+	if (t != state) {
+		bpf_log(log, "only access to bpf_dummy_ops_state is supported\n");
+		return -EACCES;
+	}
+
+	err = btf_struct_access(log, btf, t, off, size, atype, next_btf_id);
+	if (err < 0)
+		return err;
+
+	return atype == BPF_READ ? err : NOT_INIT;
+}
+
+static const struct bpf_verifier_ops bpf_dummy_verifier_ops = {
+	.is_valid_access = bpf_dummy_ops_is_valid_access,
+	.btf_struct_access = bpf_dummy_ops_btf_struct_access,
+};
+
+static int bpf_dummy_init_member(const struct btf_type *t,
+				 const struct btf_member *member,
+				 void *kdata, const void *udata)
+{
+	return -EOPNOTSUPP;
+}
+
+static int bpf_dummy_reg(void *kdata)
+{
+	return -EOPNOTSUPP;
+}
+
+static void bpf_dummy_unreg(void *kdata)
+{
+}
+
+struct bpf_struct_ops bpf_bpf_dummy_ops = {
+	.verifier_ops = &bpf_dummy_verifier_ops,
+	.init = bpf_dummy_init,
+	.init_member = bpf_dummy_init_member,
+	.reg = bpf_dummy_reg,
+	.unreg = bpf_dummy_unreg,
+	.name = "bpf_dummy_ops",
+};
-- 
2.29.2



* [PATCH bpf-next v3 4/4] selftests/bpf: add test cases for struct_ops prog
  2021-10-22  7:55 [PATCH bpf-next v3 0/4] introduce dummy BPF STRUCT_OPS Hou Tao
                   ` (2 preceding siblings ...)
  2021-10-22  7:55 ` [PATCH bpf-next v3 3/4] bpf: add dummy BPF STRUCT_OPS for test purpose Hou Tao
@ 2021-10-22  7:55 ` Hou Tao
  2021-10-23  1:00   ` Martin KaFai Lau
  3 siblings, 1 reply; 11+ messages in thread
From: Hou Tao @ 2021-10-22  7:55 UTC (permalink / raw)
  To: Alexei Starovoitov, Martin KaFai Lau
  Cc: Yonghong Song, Daniel Borkmann, Andrii Nakryiko, netdev, bpf, houtao1

Run a BPF_PROG_TYPE_STRUCT_OPS prog for dummy_st_ops::test_N()
through bpf_prog_test_run(). Four test cases are added:
(1) attaching dummy_st_ops should fail
(2) the return value of bpf_dummy_ops::test_1() is handled as expected
(3) the pointer argument of bpf_dummy_ops::test_1() works as expected
(4) multiple arguments passed to bpf_dummy_ops::test_2() are correct

Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 .../selftests/bpf/prog_tests/dummy_st_ops.c   | 115 ++++++++++++++++++
 .../selftests/bpf/progs/dummy_st_ops.c        |  50 ++++++++
 2 files changed, 165 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c
 create mode 100644 tools/testing/selftests/bpf/progs/dummy_st_ops.c

diff --git a/tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c b/tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c
new file mode 100644
index 000000000000..cbaa44ffb8c6
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c
@@ -0,0 +1,115 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021. Huawei Technologies Co., Ltd */
+#include <test_progs.h>
+#include "dummy_st_ops.skel.h"
+
+/* Need to keep consistent with definition in include/linux/bpf.h */
+struct bpf_dummy_ops_state {
+	int val;
+};
+
+static void test_dummy_st_ops_attach(void)
+{
+	struct dummy_st_ops *skel;
+	struct bpf_link *link;
+
+	skel = dummy_st_ops__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "dummy_st_ops_load"))
+		return;
+
+	link = bpf_map__attach_struct_ops(skel->maps.dummy_1);
+	ASSERT_EQ(libbpf_get_error(link), -EOPNOTSUPP, "dummy_st_ops_attach");
+
+	dummy_st_ops__destroy(skel);
+}
+
+static void test_dummy_init_ret_value(void)
+{
+	__u64 args[1] = {0};
+	struct bpf_prog_test_run_attr attr = {
+		.ctx_size_in = sizeof(args),
+		.ctx_in = args,
+	};
+	struct dummy_st_ops *skel;
+	int fd, err;
+
+	skel = dummy_st_ops__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "dummy_st_ops_load"))
+		return;
+
+	fd = bpf_program__fd(skel->progs.test_1);
+	attr.prog_fd = fd;
+	err = bpf_prog_test_run_xattr(&attr);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(attr.retval, 0xf2f3f4f5, "test_ret");
+
+	dummy_st_ops__destroy(skel);
+}
+
+static void test_dummy_init_ptr_arg(void)
+{
+	int exp_retval = 0xbeef;
+	struct bpf_dummy_ops_state in_state = {
+		.val = exp_retval,
+	};
+	__u64 args[1] = {(unsigned long)&in_state};
+	struct bpf_prog_test_run_attr attr = {
+		.ctx_size_in = sizeof(args),
+		.ctx_in = args,
+	};
+	struct dummy_st_ops *skel;
+	int fd, err;
+
+	skel = dummy_st_ops__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "dummy_st_ops_load"))
+		return;
+
+	fd = bpf_program__fd(skel->progs.test_1);
+	attr.prog_fd = fd;
+	err = bpf_prog_test_run_xattr(&attr);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(in_state.val, 0x5a, "test_ptr_ret");
+	ASSERT_EQ(attr.retval, exp_retval, "test_ret");
+
+	dummy_st_ops__destroy(skel);
+}
+
+static void test_dummy_multiple_args(void)
+{
+	__u64 args[5] = {0, -100, 0x8a5f, 'c', 0x1234567887654321ULL};
+	struct bpf_prog_test_run_attr attr = {
+		.ctx_size_in = sizeof(args),
+		.ctx_in = args,
+	};
+	struct dummy_st_ops *skel;
+	int fd, err;
+	size_t i;
+	char name[8];
+
+	skel = dummy_st_ops__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "dummy_st_ops_load"))
+		return;
+
+	fd = bpf_program__fd(skel->progs.test_2);
+	attr.prog_fd = fd;
+	err = bpf_prog_test_run_xattr(&attr);
+	ASSERT_OK(err, "test_run");
+	for (i = 0; i < ARRAY_SIZE(args); i++) {
+		snprintf(name, sizeof(name), "arg %zu", i);
+		ASSERT_EQ(skel->bss->test_2_args[i], args[i], name);
+	}
+
+	dummy_st_ops__destroy(skel);
+}
+
+void test_dummy_st_ops(void)
+{
+	if (test__start_subtest("dummy_st_ops_attach"))
+		test_dummy_st_ops_attach();
+	if (test__start_subtest("dummy_init_ret_value"))
+		test_dummy_init_ret_value();
+	if (test__start_subtest("dummy_init_ptr_arg"))
+		test_dummy_init_ptr_arg();
+	if (test__start_subtest("dummy_multiple_args"))
+		test_dummy_multiple_args();
+}
diff --git a/tools/testing/selftests/bpf/progs/dummy_st_ops.c b/tools/testing/selftests/bpf/progs/dummy_st_ops.c
new file mode 100644
index 000000000000..ead87edb75e2
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/dummy_st_ops.c
@@ -0,0 +1,50 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021. Huawei Technologies Co., Ltd */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+struct bpf_dummy_ops_state {
+	int val;
+} __attribute__((preserve_access_index));
+
+struct bpf_dummy_ops {
+	int (*test_1)(struct bpf_dummy_ops_state *state);
+	int (*test_2)(struct bpf_dummy_ops_state *state, int a1, unsigned short a2,
+		      char a3, unsigned long a4);
+};
+
+char _license[] SEC("license") = "GPL";
+
+SEC("struct_ops/test_1")
+int BPF_PROG(test_1, struct bpf_dummy_ops_state *state)
+{
+	int ret;
+
+	if (!state)
+		return 0xf2f3f4f5;
+
+	ret = state->val;
+	state->val = 0x5a;
+	return ret;
+}
+
+__u64 test_2_args[5];
+
+SEC("struct_ops/test_2")
+int BPF_PROG(test_2, struct bpf_dummy_ops_state *state, int a1, unsigned short a2,
+	     char a3, unsigned long a4)
+{
+	test_2_args[0] = (unsigned long)state;
+	test_2_args[1] = a1;
+	test_2_args[2] = a2;
+	test_2_args[3] = a3;
+	test_2_args[4] = a4;
+	return 0;
+}
+
+SEC(".struct_ops")
+struct bpf_dummy_ops dummy_1 = {
+	.test_1 = (void *)test_1,
+	.test_2 = (void *)test_2,
+};
-- 
2.29.2



* Re: [PATCH bpf-next v3 1/4] bpf: factor out a helper to prepare trampoline for struct_ops prog
  2021-10-22  7:55 ` [PATCH bpf-next v3 1/4] bpf: factor out a helper to prepare trampoline for struct_ops prog Hou Tao
@ 2021-10-23  0:14   ` Martin KaFai Lau
  0 siblings, 0 replies; 11+ messages in thread
From: Martin KaFai Lau @ 2021-10-23  0:14 UTC (permalink / raw)
  To: Hou Tao
  Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann,
	Andrii Nakryiko, netdev, bpf

On Fri, Oct 22, 2021 at 03:55:08PM +0800, Hou Tao wrote:
> Factor out a helper, bpf_struct_ops_prepare_trampoline(), to prepare
> the trampoline for a BPF_PROG_TYPE_STRUCT_OPS prog. It will be used
> by the .test_run callback in a following patch.
Acked-by: Martin KaFai Lau <kafai@fb.com>


* Re: [PATCH bpf-next v3 2/4] bpf: factor out helpers to check ctx access for BTF function
  2021-10-22  7:55 ` [PATCH bpf-next v3 2/4] bpf: factor out helpers to check ctx access for BTF function Hou Tao
@ 2021-10-23  0:18   ` Martin KaFai Lau
  2021-10-24  9:27     ` Hou Tao
  0 siblings, 1 reply; 11+ messages in thread
From: Martin KaFai Lau @ 2021-10-23  0:18 UTC (permalink / raw)
  To: Hou Tao
  Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann,
	Andrii Nakryiko, netdev, bpf

On Fri, Oct 22, 2021 at 03:55:09PM +0800, Hou Tao wrote:
> Factor out two helpers to check the ctx read access of a BTF
> function. bpf_tracing_ctx_access() checks whether the read access
> to an argument is valid, and bpf_tracing_btf_ctx_access() additionally
> checks whether the BTF type of the argument is valid.
> bpf_tracing_btf_ctx_access() will be used by a following patch.
> 
> Signed-off-by: Hou Tao <houtao1@huawei.com>
> ---
>  include/linux/bpf.h      | 27 +++++++++++++++++++++++++++
>  kernel/trace/bpf_trace.c | 16 ++--------------
>  net/ipv4/bpf_tcp_ca.c    |  9 +--------
>  3 files changed, 30 insertions(+), 22 deletions(-)
> 
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 3d2cf94a72ce..0dd2de9eeed3 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1649,6 +1649,33 @@ bool bpf_prog_test_check_kfunc_call(u32 kfunc_id, struct module *owner);
>  bool btf_ctx_access(int off, int size, enum bpf_access_type type,
>  		    const struct bpf_prog *prog,
>  		    struct bpf_insn_access_aux *info);
> +
> +/*
> + * The maximum number of BTF function arguments is MAX_BPF_FUNC_ARGS.
> + * And only aligned read is allowed.
It is not always 'BTF' function arguments.  Let's remove this comment.
The function is short and its intention is clear.

Others lgtm.

Acked-by: Martin KaFai Lau <kafai@fb.com>

> + */
> +static inline bool bpf_tracing_ctx_access(int off, int size,
> +					  enum bpf_access_type type)
> +{
> +	if (off < 0 || off >= sizeof(__u64) * MAX_BPF_FUNC_ARGS)
> +		return false;
> +	if (type != BPF_READ)
> +		return false;
> +	if (off % size != 0)
> +		return false;
> +	return true;
> +}
> +
> +static inline bool bpf_tracing_btf_ctx_access(int off, int size,
> +					      enum bpf_access_type type,
> +					      const struct bpf_prog *prog,
> +					      struct bpf_insn_access_aux *info)
> +{
> +	if (!bpf_tracing_ctx_access(off, size, type))
> +		return false;
> +	return btf_ctx_access(off, size, type, prog, info);
> +}


* Re: [PATCH bpf-next v3 3/4] bpf: add dummy BPF STRUCT_OPS for test purpose
  2021-10-22  7:55 ` [PATCH bpf-next v3 3/4] bpf: add dummy BPF STRUCT_OPS for test purpose Hou Tao
@ 2021-10-23  0:35   ` Martin KaFai Lau
  2021-10-24 10:32     ` Hou Tao
  0 siblings, 1 reply; 11+ messages in thread
From: Martin KaFai Lau @ 2021-10-23  0:35 UTC (permalink / raw)
  To: Hou Tao
  Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann,
	Andrii Nakryiko, netdev, bpf

On Fri, Oct 22, 2021 at 03:55:10PM +0800, Hou Tao wrote:
> --- /dev/null
> +++ b/net/bpf/bpf_dummy_struct_ops.c
> @@ -0,0 +1,203 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2021. Huawei Technologies Co., Ltd
> + */
> +#include <linux/kernel.h>
> +#include <linux/bpf_verifier.h>
> +#include <linux/bpf.h>
> +#include <linux/btf.h>
> +
> +extern struct bpf_struct_ops bpf_bpf_dummy_ops;
> +
> +/* A common type for test_N with return value in bpf_dummy_ops */
> +typedef int (*dummy_ops_test_ret_fn)(struct bpf_dummy_ops_state *state, ...);
> +
> +struct bpf_dummy_ops_test_args {
> +	u64 args[MAX_BPF_FUNC_ARGS];
> +	struct bpf_dummy_ops_state state;
> +};
> +
> +static struct bpf_dummy_ops_test_args *
> +dummy_ops_init_args(const union bpf_attr *kattr, unsigned int nr)
> +{
> +	__u32 size_in;
> +	struct bpf_dummy_ops_test_args *args;
> +	void __user *ctx_in;
> +	void __user *u_state;
> +
> +	if (!nr || nr > MAX_BPF_FUNC_ARGS)
These checks are unnecessary and can be removed.  They have already been
checked by the verifier at bpf prog load time.

Others lgtm.

Acked-by: Martin KaFai Lau <kafai@fb.com>

> +		return ERR_PTR(-EINVAL);
> +
> +	size_in = kattr->test.ctx_size_in;
> +	if (size_in != sizeof(u64) * nr)
> +		return ERR_PTR(-EINVAL);
> +
> +	args = kzalloc(sizeof(*args), GFP_KERNEL);
> +	if (!args)
> +		return ERR_PTR(-ENOMEM);
> +
> +	ctx_in = u64_to_user_ptr(kattr->test.ctx_in);
> +	if (copy_from_user(args->args, ctx_in, size_in))
> +		goto out;
> +
> +	/* args[0] is 0 means state argument of test_N will be NULL */
> +	u_state = u64_to_user_ptr(args->args[0]);
> +	if (u_state && copy_from_user(&args->state, u_state,
> +				      sizeof(args->state)))
> +		goto out;
> +
> +	return args;
> +out:
> +	kfree(args);
> +	return ERR_PTR(-EFAULT);
> +}

[ ... ]

> +int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
> +			    union bpf_attr __user *uattr)
> +{
> +	const struct bpf_struct_ops *st_ops = &bpf_bpf_dummy_ops;
> +	const struct btf_type *func_proto;
> +	struct bpf_dummy_ops_test_args *args;
> +	struct bpf_tramp_progs *tprogs;
> +	void *image = NULL;
> +	unsigned int op_idx;
> +	int prog_ret;
> +	int err;
> +
> +	if (prog->aux->attach_btf_id != st_ops->type_id)
> +		return -EOPNOTSUPP;
> +
> +	func_proto = prog->aux->attach_func_proto;
> +	args = dummy_ops_init_args(kattr, btf_type_vlen(func_proto));
> +	if (IS_ERR(args))
> +		return PTR_ERR(args);


* Re: [PATCH bpf-next v3 4/4] selftests/bpf: add test cases for struct_ops prog
  2021-10-22  7:55 ` [PATCH bpf-next v3 4/4] selftests/bpf: add test cases for struct_ops prog Hou Tao
@ 2021-10-23  1:00   ` Martin KaFai Lau
  0 siblings, 0 replies; 11+ messages in thread
From: Martin KaFai Lau @ 2021-10-23  1:00 UTC (permalink / raw)
  To: Hou Tao
  Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann,
	Andrii Nakryiko, netdev, bpf

On Fri, Oct 22, 2021 at 03:55:11PM +0800, Hou Tao wrote:
> Run a BPF_PROG_TYPE_STRUCT_OPS prog for dummy_st_ops::test_N()
> through bpf_prog_test_run(). Four test cases are added:
> (1) attaching dummy_st_ops should fail
> (2) the return value of bpf_dummy_ops::test_1() is handled as expected
> (3) the pointer argument of bpf_dummy_ops::test_1() works as expected
> (4) multiple arguments passed to bpf_dummy_ops::test_2() are correct
Acked-by: Martin KaFai Lau <kafai@fb.com>


* Re: [PATCH bpf-next v3 2/4] bpf: factor out helpers to check ctx access for BTF function
  2021-10-23  0:18   ` Martin KaFai Lau
@ 2021-10-24  9:27     ` Hou Tao
  0 siblings, 0 replies; 11+ messages in thread
From: Hou Tao @ 2021-10-24  9:27 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann,
	Andrii Nakryiko, netdev, bpf

Hi Martin,

On 10/23/2021 8:18 AM, Martin KaFai Lau wrote:
> On Fri, Oct 22, 2021 at 03:55:09PM +0800, Hou Tao wrote:
>> @@ -1649,6 +1649,33 @@ bool bpf_prog_test_check_kfunc_call(u32 kfunc_id, struct module *owner);
>>  bool btf_ctx_access(int off, int size, enum bpf_access_type type,
>>  		    const struct bpf_prog *prog,
>>  		    struct bpf_insn_access_aux *info);
>> +
>> +/*
>> + * The maximum number of BTF function arguments is MAX_BPF_FUNC_ARGS.
>> + * And only aligned read is allowed.
> It is not always 'BTF' function arguments.  Lets remove this comment.
> The function is short and its intention is clear.
Yes, you are right, BTF is not necessary for a BPF_PROG_TYPE_RAW_TRACEPOINT program.
I will remove the inaccurate comment and update the commit message accordingly.
> Others lgtm.
>
> Acked-by: Martin KaFai Lau <kafai@fb.com>
Thanks for your detailed suggestions and careful review.

Regards,
Tao


* Re: [PATCH bpf-next v3 3/4] bpf: add dummy BPF STRUCT_OPS for test purpose
  2021-10-23  0:35   ` Martin KaFai Lau
@ 2021-10-24 10:32     ` Hou Tao
  0 siblings, 0 replies; 11+ messages in thread
From: Hou Tao @ 2021-10-24 10:32 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann,
	Andrii Nakryiko, netdev, bpf

Hi Martin,

On 10/23/2021 8:35 AM, Martin KaFai Lau wrote:
> On Fri, Oct 22, 2021 at 03:55:10PM +0800, Hou Tao wrote:
>
> +static struct bpf_dummy_ops_test_args *
> +dummy_ops_init_args(const union bpf_attr *kattr, unsigned int nr)
> +{
> +	__u32 size_in;
> +	struct bpf_dummy_ops_test_args *args;
> +	void __user *ctx_in;
> +	void __user *u_state;
> +
> +	if (!nr || nr > MAX_BPF_FUNC_ARGS)
> These checks are unnecessary and can be removed.  They had already been
> checked by the verifier during the bpf prog load time.
Yes. The check is done in bpf_struct_ops_init(). Will remove it in v4.
> Others lgtm.
>
> Acked-by: Martin KaFai Lau <kafai@fb.com>
Thanks again for your review.

Regards,
Tao


