* [PATCH v2 bpf-next 0/4] bpf: Populate bpffs with map and prog iterators
@ 2020-07-17  4:40 Alexei Starovoitov
  2020-07-17  4:40 ` [PATCH v2 bpf-next 1/4] bpf: Add bpf_prog iterator Alexei Starovoitov
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Alexei Starovoitov @ 2020-07-17  4:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, torvalds, netdev, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

v1->v2:
- changed names to 'progs.debug' and 'maps.debug' to hopefully better indicate
  the instability of the text output. Having a dot in the name also guarantees
  that these special files will not conflict with normal bpf objects pinned
  in bpffs, since a dot is disallowed for normal pins.
- moved link_name out of the core bpf code and into the UMD instead of
  hard-coding it.
- cleaned up error handling.
- addressed review comments from Yonghong and Andrii.

This patch set is the first real user of the user mode driver facility. The
general use case for a user mode driver is to ship vmlinux with preloaded BPF
programs. In this particular case the user mode driver populates a bpffs
instance with two BPF iterators. In several months the BPF_LSM project will
need to preload the kernel with its own set of BPF programs and attach them to
LSM hooks instead of bpffs. BPF iterators and BPF_LSM are unstable from a uapi
perspective. They are tracing based and peek into arbitrary kernel data
structures. One can ask why a kernel module cannot simply embed the BPF
programs. The reason is that libbpf is necessary to load them. First libbpf
loads the BPF Type Format, then creates BPF maps and populates them. Then it
relocates code sections inside the BPF programs, loads them, and finally
attaches them to events. Theoretically libbpf could be rewritten to work in
the kernel, but that is a massive undertaking. Maintaining an in-kernel libbpf
alongside the user space libbpf would be another challenge. Another obstacle
to embedding BPF programs into a kernel module is the sys_bpf API. Loading of
programs, BTF, and maps goes through the verifier, which validates and
optimizes the code. It's possible to provide an in-kernel API for all of the
sys_bpf commands (load progs, create maps, update maps, load BTF, etc.), but
that is a huge amount of work and a permanent maintenance headache.
Hence the decision is to ship vmlinux with user mode drivers that load
BPF programs. Just like kernel modules extend vmlinux, BPF programs are safe
extensions of the kernel, and some of them need to ship with vmlinux.
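
For illustration only, here is a condensed sketch of the user space flow
described above, written against the skeleton API that patch 3 generates
(the helper name load_iterators() is made up for this sketch; the real
implementation is patch 4's iterators/iterators.c):

  #include <bpf/libbpf.h>
  #include "iterators.skel.h"

  static int load_iterators(void)
  {
  	struct iterators_bpf *skel;
  	int err;

  	/* libbpf parses the embedded BTF, creates and populates the maps,
  	 * relocates the code and loads the two programs through sys_bpf
  	 */
  	skel = iterators_bpf__open_and_load();
  	if (!skel)
  		return -1;

  	/* creates the bpf_iter links for dump_bpf_map/dump_bpf_prog */
  	err = iterators_bpf__attach(skel);
  	if (err)
  		iterators_bpf__destroy(skel);
  	return err;
  }

Every step of this flow depends on libbpf, which is why the loading is
delegated to a user mode driver instead of being done in kernel context.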

This patch set adds a kernel module with user mode driver that populates bpffs
with two BPF iterators.

$ mount bpffs /my/bpffs/ -t bpf
$ ls -la /my/bpffs/
total 4
drwxrwxrwt  2 root root    0 Jul  2 00:27 .
drwxr-xr-x 19 root root 4096 Jul  2 00:09 ..
-rw-------  1 root root    0 Jul  2 00:27 maps.debug
-rw-------  1 root root    0 Jul  2 00:27 progs.debug

The user mode driver will load BPF Type Formats, create BPF maps, populate BPF
maps, load two BPF programs, attach them to BPF iterators, and finally send two
bpf_link IDs back to the kernel.
The kernel will pin the two bpf_links into the newly mounted bpffs instance
under the names "progs.debug" and "maps.debug". These two files become human
readable, as shown further below.

$ cat /my/bpffs/progs.debug
  id name            pages attached
  11 dump_bpf_map        1 bpf_iter_bpf_map
  12 dump_bpf_prog       1 bpf_iter_bpf_prog
  27 test_pkt_access     1
  32 test_main           1 test_pkt_access test_pkt_access
  33 test_subprog1       1 test_pkt_access_subprog1 test_pkt_access
  34 test_subprog2       1 test_pkt_access_subprog2 test_pkt_access
  35 test_subprog3       1 test_pkt_access_subprog3 test_pkt_access
  36 new_get_skb_len     1 get_skb_len test_pkt_access
  37 new_get_skb_ifi     1 get_skb_ifindex test_pkt_access
  38 new_get_constan     1 get_constant test_pkt_access

The BPF program dump_bpf_prog() in iterators.bpf.c prints this data about
all BPF programs currently loaded in the system. This information is unstable
and will change from kernel to kernel.

In some sense this output is similar to 'bpftool prog show', which uses a
stable API to retrieve information about BPF programs. The BPF subsystem grows
quickly and there is always demand to show as much info about BPF things as
possible. But we cannot expose all that info via the stable uapi of the bpf
syscall, since the details change so much. Right now a BPF program can be
attached to only one other BPF program. Folks are working on patches to enable
multi-attach, but for debugging it's necessary to see the current state. There
is no uapi for that, but the above output shows it:
  37 new_get_skb_ifi     1 get_skb_ifindex test_pkt_access
  38 new_get_constan     1 get_constant test_pkt_access
     [1]                   [2]          [3]
[1] is the name of the BPF prog.
[2] is the name of the function inside the target BPF prog.
[3] is the name of the target BPF prog.

[2] and [3] are not exposed via uapi, since they will change from single to
multi attach soon. There are many other cases where bpf internals are useful
for debugging, but shouldn't be exposed via uapi due to the high rate of change.

systemd mounts /sys/fs/bpf at startup, so this kernel module with the user mode
driver needs to be available early. BPF_LSM will most likely need to preload
BPF programs even earlier.

A few interesting observations:
- though bpffs comes with two human readable files "progs.debug" and
  "maps.debug", they can be removed. 'rm -f /sys/fs/bpf/progs.debug' will remove
  the bpf_link and the kernel will automatically unload the corresponding BPF
  progs, maps, and BTFs. In the future '-o remount' will be able to restore
  them; this is not implemented yet.

- 'ps aux|grep bpf_preload' shows nothing. The user mode driver loaded the BPF
  iterators and exited. Nothing is lingering in user space at this point.

- We can consider giving 0644 permissions to "progs.debug" and "maps.debug"
  to allow unprivileged users to see BPF things loaded in the system.
  We cannot do so with "bpftool prog show", since it uses cap_sys_admin
  parts of the bpf syscall.

- The functionality split between the core kernel, the bpf_preload kernel
  module and the user mode driver is very similar to the bpfilter style of
  interaction.

- Similar BPF iterators can be used as unstable extensions to /proc.
  For example, mounting /proc could prepopulate some subdirectory in there with
  a BPF iterator that prints QUIC sockets instead of tcp and udp.

Alexei Starovoitov (4):
  bpf: Add bpf_prog iterator
  bpf: Factor out bpf_link_get_by_id() helper.
  bpf: Add BPF program and map iterators as built-in BPF programs.
  bpf: Add kernel module with user mode driver that populates bpffs.

 include/linux/bpf.h                           |   2 +
 init/Kconfig                                  |   2 +
 kernel/bpf/Makefile                           |   3 +-
 kernel/bpf/inode.c                            |  86 ++++-
 kernel/bpf/map_iter.c                         |  13 +-
 kernel/bpf/preload/Kconfig                    |  18 +
 kernel/bpf/preload/Makefile                   |  21 +
 kernel/bpf/preload/bpf_preload.h              |  16 +
 kernel/bpf/preload/bpf_preload_kern.c         |  85 +++++
 kernel/bpf/preload/bpf_preload_umd_blob.S     |   7 +
 kernel/bpf/preload/iterators/.gitignore       |   2 +
 kernel/bpf/preload/iterators/Makefile         |  57 +++
 kernel/bpf/preload/iterators/README           |   4 +
 .../preload/iterators/bpf_preload_common.h    |  13 +
 kernel/bpf/preload/iterators/iterators.bpf.c  |  82 ++++
 kernel/bpf/preload/iterators/iterators.c      |  93 +++++
 kernel/bpf/preload/iterators/iterators.skel.h | 360 ++++++++++++++++++
 kernel/bpf/prog_iter.c                        |  97 +++++
 kernel/bpf/syscall.c                          |  65 +++-
 19 files changed, 995 insertions(+), 31 deletions(-)
 create mode 100644 kernel/bpf/preload/Kconfig
 create mode 100644 kernel/bpf/preload/Makefile
 create mode 100644 kernel/bpf/preload/bpf_preload.h
 create mode 100644 kernel/bpf/preload/bpf_preload_kern.c
 create mode 100644 kernel/bpf/preload/bpf_preload_umd_blob.S
 create mode 100644 kernel/bpf/preload/iterators/.gitignore
 create mode 100644 kernel/bpf/preload/iterators/Makefile
 create mode 100644 kernel/bpf/preload/iterators/README
 create mode 100644 kernel/bpf/preload/iterators/bpf_preload_common.h
 create mode 100644 kernel/bpf/preload/iterators/iterators.bpf.c
 create mode 100644 kernel/bpf/preload/iterators/iterators.c
 create mode 100644 kernel/bpf/preload/iterators/iterators.skel.h
 create mode 100644 kernel/bpf/prog_iter.c

-- 
2.23.0



* [PATCH v2 bpf-next 1/4] bpf: Add bpf_prog iterator
  2020-07-17  4:40 [PATCH v2 bpf-next 0/4] bpf: Populate bpffs with map and prog iterators Alexei Starovoitov
@ 2020-07-17  4:40 ` Alexei Starovoitov
  2020-07-17  4:40 ` [PATCH v2 bpf-next 2/4] bpf: Factor out bpf_link_get_by_id() helper Alexei Starovoitov
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: Alexei Starovoitov @ 2020-07-17  4:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, torvalds, netdev, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

It's mostly a copy-paste of commit 6086d29def80 ("bpf: Add bpf_map iterator")
and is used to implement bpf_seq_file operations to traverse all bpf programs.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
---
 include/linux/bpf.h    |  1 +
 kernel/bpf/Makefile    |  2 +-
 kernel/bpf/map_iter.c  | 13 ++----
 kernel/bpf/prog_iter.c | 97 ++++++++++++++++++++++++++++++++++++++++++
 kernel/bpf/syscall.c   | 19 +++++++++
 5 files changed, 122 insertions(+), 10 deletions(-)
 create mode 100644 kernel/bpf/prog_iter.c

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index c67c88ad35f8..4ede2b0298b3 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1112,6 +1112,7 @@ int  generic_map_delete_batch(struct bpf_map *map,
 			      const union bpf_attr *attr,
 			      union bpf_attr __user *uattr);
 struct bpf_map *bpf_map_get_curr_or_next(u32 *id);
+struct bpf_prog *bpf_prog_get_curr_or_next(u32 *id);
 
 extern int sysctl_unprivileged_bpf_disabled;
 
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 1131a921e1a6..e6eb9c0402da 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -2,7 +2,7 @@
 obj-y := core.o
 CFLAGS_core.o += $(call cc-disable-warning, override-init)
 
-obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o bpf_iter.o map_iter.o task_iter.o
+obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o bpf_iter.o map_iter.o task_iter.o prog_iter.o
 obj-$(CONFIG_BPF_SYSCALL) += hashtab.o arraymap.o percpu_freelist.o bpf_lru_list.o lpm_trie.o map_in_map.o
 obj-$(CONFIG_BPF_SYSCALL) += local_storage.o queue_stack_maps.o ringbuf.o
 obj-$(CONFIG_BPF_SYSCALL) += disasm.o
diff --git a/kernel/bpf/map_iter.c b/kernel/bpf/map_iter.c
index c69071e334bf..343c1df100cf 100644
--- a/kernel/bpf/map_iter.c
+++ b/kernel/bpf/map_iter.c
@@ -6,7 +6,7 @@
 #include <linux/kernel.h>
 
 struct bpf_iter_seq_map_info {
-	u32 mid;
+	u32 map_id;
 };
 
 static void *bpf_map_seq_start(struct seq_file *seq, loff_t *pos)
@@ -14,7 +14,7 @@ static void *bpf_map_seq_start(struct seq_file *seq, loff_t *pos)
 	struct bpf_iter_seq_map_info *info = seq->private;
 	struct bpf_map *map;
 
-	map = bpf_map_get_curr_or_next(&info->mid);
+	map = bpf_map_get_curr_or_next(&info->map_id);
 	if (!map)
 		return NULL;
 
@@ -25,16 +25,11 @@ static void *bpf_map_seq_start(struct seq_file *seq, loff_t *pos)
 static void *bpf_map_seq_next(struct seq_file *seq, void *v, loff_t *pos)
 {
 	struct bpf_iter_seq_map_info *info = seq->private;
-	struct bpf_map *map;
 
 	++*pos;
-	++info->mid;
+	++info->map_id;
 	bpf_map_put((struct bpf_map *)v);
-	map = bpf_map_get_curr_or_next(&info->mid);
-	if (!map)
-		return NULL;
-
-	return map;
+	return bpf_map_get_curr_or_next(&info->map_id);
 }
 
 struct bpf_iter__bpf_map {
diff --git a/kernel/bpf/prog_iter.c b/kernel/bpf/prog_iter.c
new file mode 100644
index 000000000000..b5824382eb7d
--- /dev/null
+++ b/kernel/bpf/prog_iter.c
@@ -0,0 +1,97 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2020 Facebook */
+#include <linux/bpf.h>
+#include <linux/fs.h>
+#include <linux/filter.h>
+#include <linux/kernel.h>
+
+struct bpf_iter_seq_prog_info {
+	u32 prog_id;
+};
+
+static void *bpf_prog_seq_start(struct seq_file *seq, loff_t *pos)
+{
+	struct bpf_iter_seq_prog_info *info = seq->private;
+	struct bpf_prog *prog;
+
+	prog = bpf_prog_get_curr_or_next(&info->prog_id);
+	if (!prog)
+		return NULL;
+
+	++*pos;
+	return prog;
+}
+
+static void *bpf_prog_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+	struct bpf_iter_seq_prog_info *info = seq->private;
+
+	++*pos;
+	++info->prog_id;
+	bpf_prog_put((struct bpf_prog *)v);
+	return bpf_prog_get_curr_or_next(&info->prog_id);
+}
+
+struct bpf_iter__bpf_prog {
+	__bpf_md_ptr(struct bpf_iter_meta *, meta);
+	__bpf_md_ptr(struct bpf_prog *, prog);
+};
+
+DEFINE_BPF_ITER_FUNC(bpf_prog, struct bpf_iter_meta *meta, struct bpf_prog *prog)
+
+static int __bpf_prog_seq_show(struct seq_file *seq, void *v, bool in_stop)
+{
+	struct bpf_iter__bpf_prog ctx;
+	struct bpf_iter_meta meta;
+	struct bpf_prog *prog;
+	int ret = 0;
+
+	ctx.meta = &meta;
+	ctx.prog = v;
+	meta.seq = seq;
+	prog = bpf_iter_get_info(&meta, in_stop);
+	if (prog)
+		ret = bpf_iter_run_prog(prog, &ctx);
+
+	return ret;
+}
+
+static int bpf_prog_seq_show(struct seq_file *seq, void *v)
+{
+	return __bpf_prog_seq_show(seq, v, false);
+}
+
+static void bpf_prog_seq_stop(struct seq_file *seq, void *v)
+{
+	if (!v)
+		(void)__bpf_prog_seq_show(seq, v, true);
+	else
+		bpf_prog_put((struct bpf_prog *)v);
+}
+
+static const struct seq_operations bpf_prog_seq_ops = {
+	.start	= bpf_prog_seq_start,
+	.next	= bpf_prog_seq_next,
+	.stop	= bpf_prog_seq_stop,
+	.show	= bpf_prog_seq_show,
+};
+
+static const struct bpf_iter_reg bpf_prog_reg_info = {
+	.target			= "bpf_prog",
+	.seq_ops		= &bpf_prog_seq_ops,
+	.init_seq_private	= NULL,
+	.fini_seq_private	= NULL,
+	.seq_priv_size		= sizeof(struct bpf_iter_seq_prog_info),
+	.ctx_arg_info_size	= 1,
+	.ctx_arg_info		= {
+		{ offsetof(struct bpf_iter__bpf_prog, prog),
+		  PTR_TO_BTF_ID_OR_NULL },
+	},
+};
+
+static int __init bpf_prog_iter_init(void)
+{
+	return bpf_iter_reg_target(&bpf_prog_reg_info);
+}
+
+late_initcall(bpf_prog_iter_init);
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 7ea9dfbebd8c..86df3daa13f6 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3036,6 +3036,25 @@ struct bpf_map *bpf_map_get_curr_or_next(u32 *id)
 	return map;
 }
 
+struct bpf_prog *bpf_prog_get_curr_or_next(u32 *id)
+{
+	struct bpf_prog *prog;
+
+	spin_lock_bh(&prog_idr_lock);
+again:
+	prog = idr_get_next(&prog_idr, id);
+	if (prog) {
+		prog = bpf_prog_inc_not_zero(prog);
+		if (IS_ERR(prog)) {
+			(*id)++;
+			goto again;
+		}
+	}
+	spin_unlock_bh(&prog_idr_lock);
+
+	return prog;
+}
+
 #define BPF_PROG_GET_FD_BY_ID_LAST_FIELD prog_id
 
 struct bpf_prog *bpf_prog_by_id(u32 id)
-- 
2.23.0



* [PATCH v2 bpf-next 2/4] bpf: Factor out bpf_link_get_by_id() helper.
  2020-07-17  4:40 [PATCH v2 bpf-next 0/4] bpf: Populate bpffs with map and prog iterators Alexei Starovoitov
  2020-07-17  4:40 ` [PATCH v2 bpf-next 1/4] bpf: Add bpf_prog iterator Alexei Starovoitov
@ 2020-07-17  4:40 ` Alexei Starovoitov
  2020-07-17  4:40 ` [PATCH v2 bpf-next 3/4] bpf: Add BPF program and map iterators as built-in BPF programs Alexei Starovoitov
  2020-07-17  4:40 ` [PATCH v2 bpf-next 4/4] bpf: Add kernel module with user mode driver that populates bpffs Alexei Starovoitov
  3 siblings, 0 replies; 8+ messages in thread
From: Alexei Starovoitov @ 2020-07-17  4:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, torvalds, netdev, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

Refactor the code a bit to extract bpf_link_get_by_id() helper.
It's similar to existing bpf_prog_by_id().

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
---
 include/linux/bpf.h  |  1 +
 kernel/bpf/syscall.c | 46 +++++++++++++++++++++++++++-----------------
 2 files changed, 29 insertions(+), 18 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 4ede2b0298b3..009e744897d5 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1319,6 +1319,7 @@ int btf_check_type_match(struct bpf_verifier_env *env, struct bpf_prog *prog,
 			 struct btf *btf, const struct btf_type *t);
 
 struct bpf_prog *bpf_prog_by_id(u32 id);
+struct bpf_link *bpf_link_by_id(u32 id);
 
 const struct bpf_func_proto *bpf_base_func_proto(enum bpf_func_id func_id);
 #else /* !CONFIG_BPF_SYSCALL */
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 86df3daa13f6..85ea56717368 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3975,40 +3975,50 @@ static int link_update(union bpf_attr *attr)
 	return ret;
 }
 
-static int bpf_link_inc_not_zero(struct bpf_link *link)
+static struct bpf_link *bpf_link_inc_not_zero(struct bpf_link *link)
 {
-	return atomic64_fetch_add_unless(&link->refcnt, 1, 0) ? 0 : -ENOENT;
+	return atomic64_fetch_add_unless(&link->refcnt, 1, 0) ? link : ERR_PTR(-ENOENT);
 }
 
-#define BPF_LINK_GET_FD_BY_ID_LAST_FIELD link_id
-
-static int bpf_link_get_fd_by_id(const union bpf_attr *attr)
+struct bpf_link *bpf_link_by_id(u32 id)
 {
 	struct bpf_link *link;
-	u32 id = attr->link_id;
-	int fd, err;
 
-	if (CHECK_ATTR(BPF_LINK_GET_FD_BY_ID))
-		return -EINVAL;
-
-	if (!capable(CAP_SYS_ADMIN))
-		return -EPERM;
+	if (!id)
+		return ERR_PTR(-ENOENT);
 
 	spin_lock_bh(&link_idr_lock);
-	link = idr_find(&link_idr, id);
 	/* before link is "settled", ID is 0, pretend it doesn't exist yet */
+	link = idr_find(&link_idr, id);
 	if (link) {
 		if (link->id)
-			err = bpf_link_inc_not_zero(link);
+			link = bpf_link_inc_not_zero(link);
 		else
-			err = -EAGAIN;
+			link = ERR_PTR(-EAGAIN);
 	} else {
-		err = -ENOENT;
+		link = ERR_PTR(-ENOENT);
 	}
 	spin_unlock_bh(&link_idr_lock);
+	return link;
+}
 
-	if (err)
-		return err;
+#define BPF_LINK_GET_FD_BY_ID_LAST_FIELD link_id
+
+static int bpf_link_get_fd_by_id(const union bpf_attr *attr)
+{
+	struct bpf_link *link;
+	u32 id = attr->link_id;
+	int fd;
+
+	if (CHECK_ATTR(BPF_LINK_GET_FD_BY_ID))
+		return -EINVAL;
+
+	if (!capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
+	link = bpf_link_by_id(id);
+	if (IS_ERR(link))
+		return PTR_ERR(link);
 
 	fd = bpf_link_new_fd(link);
 	if (fd < 0)
-- 
2.23.0



* [PATCH v2 bpf-next 3/4] bpf: Add BPF program and map iterators as built-in BPF programs.
  2020-07-17  4:40 [PATCH v2 bpf-next 0/4] bpf: Populate bpffs with map and prog iterators Alexei Starovoitov
  2020-07-17  4:40 ` [PATCH v2 bpf-next 1/4] bpf: Add bpf_prog iterator Alexei Starovoitov
  2020-07-17  4:40 ` [PATCH v2 bpf-next 2/4] bpf: Factor out bpf_link_get_by_id() helper Alexei Starovoitov
@ 2020-07-17  4:40 ` Alexei Starovoitov
  2020-07-17  4:40 ` [PATCH v2 bpf-next 4/4] bpf: Add kernel module with user mode driver that populates bpffs Alexei Starovoitov
  3 siblings, 0 replies; 8+ messages in thread
From: Alexei Starovoitov @ 2020-07-17  4:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, torvalds, netdev, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

The program and map iterators work similarly to seq_files.
Once a program is pinned in bpffs it can be read with the "cat" tool
to print human readable output, in this case about BPF programs and maps.
For example:
$ cat /sys/fs/bpf/progs.debug
  id name            pages attached
   5 dump_bpf_map        1 bpf_iter_bpf_map
   6 dump_bpf_prog       1 bpf_iter_bpf_prog
$ cat /sys/fs/bpf/maps.debug
  id name            pages
   3 iterator.rodata     2

To avoid a kernel build dependency on clang 10, separate bpf skeleton generation
into a manual "make" step and instead check in the generated .skel.h into git.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 kernel/bpf/preload/iterators/.gitignore       |   2 +
 kernel/bpf/preload/iterators/Makefile         |  57 +++
 kernel/bpf/preload/iterators/README           |   4 +
 kernel/bpf/preload/iterators/iterators.bpf.c  |  82 ++++
 kernel/bpf/preload/iterators/iterators.skel.h | 360 ++++++++++++++++++
 5 files changed, 505 insertions(+)
 create mode 100644 kernel/bpf/preload/iterators/.gitignore
 create mode 100644 kernel/bpf/preload/iterators/Makefile
 create mode 100644 kernel/bpf/preload/iterators/README
 create mode 100644 kernel/bpf/preload/iterators/iterators.bpf.c
 create mode 100644 kernel/bpf/preload/iterators/iterators.skel.h

diff --git a/kernel/bpf/preload/iterators/.gitignore b/kernel/bpf/preload/iterators/.gitignore
new file mode 100644
index 000000000000..ffdb70230c8b
--- /dev/null
+++ b/kernel/bpf/preload/iterators/.gitignore
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0-only
+/.output
diff --git a/kernel/bpf/preload/iterators/Makefile b/kernel/bpf/preload/iterators/Makefile
new file mode 100644
index 000000000000..28fa8c1440f4
--- /dev/null
+++ b/kernel/bpf/preload/iterators/Makefile
@@ -0,0 +1,57 @@
+# SPDX-License-Identifier: GPL-2.0
+OUTPUT := .output
+CLANG ?= clang
+LLC ?= llc
+LLVM_STRIP ?= llvm-strip
+DEFAULT_BPFTOOL := $(OUTPUT)/sbin/bpftool
+BPFTOOL ?= $(DEFAULT_BPFTOOL)
+LIBBPF_SRC := $(abspath ../../../../tools/lib/bpf)
+BPFOBJ := $(OUTPUT)/libbpf.a
+BPF_INCLUDE := $(OUTPUT)
+INCLUDES := -I$(OUTPUT) -I$(BPF_INCLUDE) -I$(abspath ../../../../tools/lib)        \
+       -I$(abspath ../../../../tools/include/uapi)
+CFLAGS := -g -Wall
+
+abs_out := $(abspath $(OUTPUT))
+ifeq ($(V),1)
+Q =
+msg =
+else
+Q = @
+msg = @printf '  %-8s %s%s\n' "$(1)" "$(notdir $(2))" "$(if $(3), $(3))";
+MAKEFLAGS += --no-print-directory
+submake_extras := feature_display=0
+endif
+
+.DELETE_ON_ERROR:
+
+.PHONY: all clean
+
+all: iterators.skel.h
+
+clean:
+	$(call msg,CLEAN)
+	$(Q)rm -rf $(OUTPUT) iterators
+
+iterators.skel.h: $(OUTPUT)/iterators.bpf.o | $(BPFTOOL)
+	$(call msg,GEN-SKEL,$@)
+	$(Q)$(BPFTOOL) gen skeleton $< > $@
+
+
+$(OUTPUT)/iterators.bpf.o: iterators.bpf.c $(BPFOBJ) | $(OUTPUT)
+	$(call msg,BPF,$@)
+	$(Q)$(CLANG) -g -O2 -target bpf $(INCLUDES)			      \
+		 -c $(filter %.c,$^) -o $@ &&				      \
+	$(LLVM_STRIP) -g $@
+
+$(OUTPUT):
+	$(call msg,MKDIR,$@)
+	$(Q)mkdir -p $(OUTPUT)
+
+$(BPFOBJ): $(wildcard $(LIBBPF_SRC)/*.[ch] $(LIBBPF_SRC)/Makefile) | $(OUTPUT)
+	$(Q)$(MAKE) $(submake_extras) -C $(LIBBPF_SRC)			       \
+		    OUTPUT=$(abspath $(dir $@))/ $(abspath $@)
+
+$(DEFAULT_BPFTOOL):
+	$(Q)$(MAKE) $(submake_extras) -C ../../../../tools/bpf/bpftool			      \
+		    prefix= OUTPUT=$(abs_out)/ DESTDIR=$(abs_out) install
diff --git a/kernel/bpf/preload/iterators/README b/kernel/bpf/preload/iterators/README
new file mode 100644
index 000000000000..7fd6d39a9ad2
--- /dev/null
+++ b/kernel/bpf/preload/iterators/README
@@ -0,0 +1,4 @@
+WARNING:
+If you change "iterators.bpf.c" do "make -j" in this directory to rebuild "iterators.skel.h".
+Make sure to have clang 10 installed.
+See Documentation/bpf/bpf_devel_QA.rst
diff --git a/kernel/bpf/preload/iterators/iterators.bpf.c b/kernel/bpf/preload/iterators/iterators.bpf.c
new file mode 100644
index 000000000000..81f7d4376ef9
--- /dev/null
+++ b/kernel/bpf/preload/iterators/iterators.bpf.c
@@ -0,0 +1,82 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2020 Facebook */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+struct seq_file;
+struct bpf_iter_meta {
+	struct seq_file *seq;
+	__u64 session_id;
+	__u64 seq_num;
+} __attribute__((preserve_access_index));
+
+struct bpf_map_memory {
+	__u32 pages;
+} __attribute__((preserve_access_index));
+
+struct bpf_map {
+	__u32 id;
+	struct bpf_map_memory memory;
+	char name[16];
+} __attribute__((preserve_access_index));
+
+struct bpf_iter__bpf_map {
+	struct bpf_iter_meta *meta;
+	struct bpf_map *map;
+} __attribute__((preserve_access_index));
+
+struct bpf_prog_aux {
+	__u32 id;
+	char name[16];
+	const char *attach_func_name;
+	struct bpf_prog *linked_prog;
+} __attribute__((preserve_access_index));
+
+struct bpf_prog {
+	struct bpf_prog_aux *aux;
+	__u16 pages;
+} __attribute__((preserve_access_index));
+
+struct bpf_iter__bpf_prog {
+	struct bpf_iter_meta *meta;
+	struct bpf_prog *prog;
+} __attribute__((preserve_access_index));
+
+SEC("iter/bpf_map")
+int dump_bpf_map(struct bpf_iter__bpf_map *ctx)
+{
+	struct seq_file *seq = ctx->meta->seq;
+	__u64 seq_num = ctx->meta->seq_num;
+	struct bpf_map *map = ctx->map;
+
+	if (!map)
+		return 0;
+
+	if (seq_num == 0)
+		BPF_SEQ_PRINTF(seq, "  id name             pages\n");
+
+	BPF_SEQ_PRINTF(seq, "%4u %-16s%6d\n", map->id, map->name, map->memory.pages);
+	return 0;
+}
+
+SEC("iter/bpf_prog")
+int dump_bpf_prog(struct bpf_iter__bpf_prog *ctx)
+{
+	struct seq_file *seq = ctx->meta->seq;
+	__u64 seq_num = ctx->meta->seq_num;
+	struct bpf_prog *prog = ctx->prog;
+	struct bpf_prog_aux *aux;
+
+	if (!prog)
+		return 0;
+
+	aux = prog->aux;
+	if (seq_num == 0)
+		BPF_SEQ_PRINTF(seq, "  id name             pages attached\n");
+
+	BPF_SEQ_PRINTF(seq, "%4u %-16s%6d %s %s\n", aux->id, aux->name, prog->pages,
+		       aux->attach_func_name, aux->linked_prog->aux->name);
+	return 0;
+}
+char LICENSE[] SEC("license") = "GPL";
diff --git a/kernel/bpf/preload/iterators/iterators.skel.h b/kernel/bpf/preload/iterators/iterators.skel.h
new file mode 100644
index 000000000000..6b9aefd682ea
--- /dev/null
+++ b/kernel/bpf/preload/iterators/iterators.skel.h
@@ -0,0 +1,360 @@
+/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
+
+/* THIS FILE IS AUTOGENERATED! */
+#ifndef __ITERATORS_BPF_SKEL_H__
+#define __ITERATORS_BPF_SKEL_H__
+
+#include <stdlib.h>
+#include <bpf/libbpf.h>
+
+struct iterators_bpf {
+	struct bpf_object_skeleton *skeleton;
+	struct bpf_object *obj;
+	struct {
+		struct bpf_map *rodata;
+	} maps;
+	struct {
+		struct bpf_program *dump_bpf_map;
+		struct bpf_program *dump_bpf_prog;
+	} progs;
+	struct {
+		struct bpf_link *dump_bpf_map;
+		struct bpf_link *dump_bpf_prog;
+	} links;
+	struct iterators_bpf__rodata {
+		char dump_bpf_map____fmt[29];
+		char dump_bpf_map____fmt_1[14];
+		char dump_bpf_prog____fmt[38];
+		char dump_bpf_prog____fmt_2[20];
+	} *rodata;
+};
+
+static void
+iterators_bpf__destroy(struct iterators_bpf *obj)
+{
+	if (!obj)
+		return;
+	if (obj->skeleton)
+		bpf_object__destroy_skeleton(obj->skeleton);
+	free(obj);
+}
+
+static inline int
+iterators_bpf__create_skeleton(struct iterators_bpf *obj);
+
+static inline struct iterators_bpf *
+iterators_bpf__open_opts(const struct bpf_object_open_opts *opts)
+{
+	struct iterators_bpf *obj;
+
+	obj = (typeof(obj))calloc(1, sizeof(*obj));
+	if (!obj)
+		return NULL;
+	if (iterators_bpf__create_skeleton(obj))
+		goto err;
+	if (bpf_object__open_skeleton(obj->skeleton, opts))
+		goto err;
+
+	return obj;
+err:
+	iterators_bpf__destroy(obj);
+	return NULL;
+}
+
+static inline struct iterators_bpf *
+iterators_bpf__open(void)
+{
+	return iterators_bpf__open_opts(NULL);
+}
+
+static inline int
+iterators_bpf__load(struct iterators_bpf *obj)
+{
+	return bpf_object__load_skeleton(obj->skeleton);
+}
+
+static inline struct iterators_bpf *
+iterators_bpf__open_and_load(void)
+{
+	struct iterators_bpf *obj;
+
+	obj = iterators_bpf__open();
+	if (!obj)
+		return NULL;
+	if (iterators_bpf__load(obj)) {
+		iterators_bpf__destroy(obj);
+		return NULL;
+	}
+	return obj;
+}
+
+static inline int
+iterators_bpf__attach(struct iterators_bpf *obj)
+{
+	return bpf_object__attach_skeleton(obj->skeleton);
+}
+
+static inline void
+iterators_bpf__detach(struct iterators_bpf *obj)
+{
+	return bpf_object__detach_skeleton(obj->skeleton);
+}
+
+static inline int
+iterators_bpf__create_skeleton(struct iterators_bpf *obj)
+{
+	struct bpf_object_skeleton *s;
+
+	s = (typeof(s))calloc(1, sizeof(*s));
+	if (!s)
+		return -1;
+	obj->skeleton = s;
+
+	s->sz = sizeof(*s);
+	s->name = "iterators_bpf";
+	s->obj = &obj->obj;
+
+	/* maps */
+	s->map_cnt = 1;
+	s->map_skel_sz = sizeof(*s->maps);
+	s->maps = (typeof(s->maps))calloc(s->map_cnt, s->map_skel_sz);
+	if (!s->maps)
+		goto err;
+
+	s->maps[0].name = "iterator.rodata";
+	s->maps[0].map = &obj->maps.rodata;
+	s->maps[0].mmaped = (void **)&obj->rodata;
+
+	/* programs */
+	s->prog_cnt = 2;
+	s->prog_skel_sz = sizeof(*s->progs);
+	s->progs = (typeof(s->progs))calloc(s->prog_cnt, s->prog_skel_sz);
+	if (!s->progs)
+		goto err;
+
+	s->progs[0].name = "dump_bpf_map";
+	s->progs[0].prog = &obj->progs.dump_bpf_map;
+	s->progs[0].link = &obj->links.dump_bpf_map;
+
+	s->progs[1].name = "dump_bpf_prog";
+	s->progs[1].prog = &obj->progs.dump_bpf_prog;
+	s->progs[1].link = &obj->links.dump_bpf_prog;
+
+	s->data_sz = 5768;
+	s->data = (void *)"\
+\x7f\x45\x4c\x46\x02\x01\x01\0\0\0\0\0\0\0\0\0\x01\0\xf7\0\x01\0\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\xc8\x12\0\0\0\0\0\0\0\0\0\0\x40\0\0\0\0\0\x40\0\x0f\0\
+\x0e\0\x79\x12\0\0\0\0\0\0\x79\x26\0\0\0\0\0\0\x79\x17\x08\0\0\0\0\0\x15\x07\
+\x1a\0\0\0\0\0\x79\x21\x10\0\0\0\0\0\x55\x01\x08\0\0\0\0\0\xbf\xa4\0\0\0\0\0\0\
+\x07\x04\0\0\xe8\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x02\0\0\0\0\0\0\0\0\0\0\0\
+\0\0\0\xb7\x03\0\0\x1d\0\0\0\xb7\x05\0\0\0\0\0\0\x85\0\0\0\x7e\0\0\0\x61\x71\0\
+\0\0\0\0\0\x7b\x1a\xe8\xff\0\0\0\0\xb7\x01\0\0\x08\0\0\0\xbf\x72\0\0\0\0\0\0\
+\x0f\x12\0\0\0\0\0\0\x7b\x2a\xf0\xff\0\0\0\0\x61\x71\x04\0\0\0\0\0\x7b\x1a\xf8\
+\xff\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xe8\xff\xff\xff\xbf\x61\0\0\0\0\0\
+\0\x18\x02\0\0\x1d\0\0\0\0\0\0\0\0\0\0\0\xb7\x03\0\0\x0e\0\0\0\xb7\x05\0\0\x18\
+\0\0\0\x85\0\0\0\x7e\0\0\0\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\x79\x12\0\0\0\0\
+\0\0\x79\x26\0\0\0\0\0\0\x79\x17\x08\0\0\0\0\0\x15\x07\x21\0\0\0\0\0\x79\x78\0\
+\0\0\0\0\0\x79\x21\x10\0\0\0\0\0\x55\x01\x08\0\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\
+\x04\0\0\xd8\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x02\0\0\x2b\0\0\0\0\0\0\0\0\0\
+\0\0\xb7\x03\0\0\x26\0\0\0\xb7\x05\0\0\0\0\0\0\x85\0\0\0\x7e\0\0\0\x61\x81\0\0\
+\0\0\0\0\x7b\x1a\xd8\xff\0\0\0\0\xb7\x01\0\0\x04\0\0\0\xbf\x82\0\0\0\0\0\0\x0f\
+\x12\0\0\0\0\0\0\x7b\x2a\xe0\xff\0\0\0\0\x69\x72\x08\0\0\0\0\0\x7b\x2a\xe8\xff\
+\0\0\0\0\x79\x82\x18\0\0\0\0\0\x7b\x2a\xf0\xff\0\0\0\0\x79\x82\x20\0\0\0\0\0\
+\x79\x22\0\0\0\0\0\0\x0f\x12\0\0\0\0\0\0\x7b\x2a\xf8\xff\0\0\0\0\xbf\xa4\0\0\0\
+\0\0\0\x07\x04\0\0\xd8\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x02\0\0\x51\0\0\0\0\
+\0\0\0\0\0\0\0\xb7\x03\0\0\x14\0\0\0\xb7\x05\0\0\x28\0\0\0\x85\0\0\0\x7e\0\0\0\
+\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\
+\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x70\x61\x67\x65\x73\x0a\0\x25\
+\x34\x75\x20\x25\x2d\x31\x36\x73\x25\x36\x64\x0a\0\x20\x20\x69\x64\x20\x6e\x61\
+\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x70\x61\x67\x65\
+\x73\x20\x61\x74\x74\x61\x63\x68\x65\x64\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\
+\x73\x25\x36\x64\x20\x25\x73\x20\x25\x73\x0a\0\x47\x50\x4c\0\x9f\xeb\x01\0\x18\
+\0\0\0\0\0\0\0\x90\x03\0\0\x90\x03\0\0\x04\x04\0\0\0\0\0\0\0\0\0\x02\x02\0\0\0\
+\x01\0\0\0\x02\0\0\x04\x10\0\0\0\x13\0\0\0\x03\0\0\0\0\0\0\0\x18\0\0\0\x04\0\0\
+\0\x40\0\0\0\0\0\0\0\0\0\0\x02\x08\0\0\0\0\0\0\0\0\0\0\x02\x0d\0\0\0\0\0\0\0\
+\x01\0\0\x0d\x06\0\0\0\x1c\0\0\0\x01\0\0\0\x20\0\0\0\0\0\0\x01\x04\0\0\0\x20\0\
+\0\x01\x24\0\0\0\x01\0\0\x0c\x05\0\0\0\xa3\0\0\0\x03\0\0\x04\x18\0\0\0\xb1\0\0\
+\0\x09\0\0\0\0\0\0\0\xb5\0\0\0\x0b\0\0\0\x40\0\0\0\xc0\0\0\0\x0b\0\0\0\x80\0\0\
+\0\0\0\0\0\0\0\0\x02\x0a\0\0\0\xc8\0\0\0\0\0\0\x07\0\0\0\0\xd1\0\0\0\0\0\0\x08\
+\x0c\0\0\0\xd7\0\0\0\0\0\0\x01\x08\0\0\0\x40\0\0\0\x92\x01\0\0\x03\0\0\x04\x18\
+\0\0\0\x9a\x01\0\0\x0e\0\0\0\0\0\0\0\x9d\x01\0\0\x10\0\0\0\x20\0\0\0\xa4\x01\0\
+\0\x12\0\0\0\x40\0\0\0\xa9\x01\0\0\0\0\0\x08\x0f\0\0\0\xaf\x01\0\0\0\0\0\x01\
+\x04\0\0\0\x20\0\0\0\xbc\x01\0\0\x01\0\0\x04\x04\0\0\0\xcb\x01\0\0\x0e\0\0\0\0\
+\0\0\0\xd1\x01\0\0\0\0\0\x01\x01\0\0\0\x08\0\0\x01\0\0\0\0\0\0\0\x03\0\0\0\0\
+\x11\0\0\0\x13\0\0\0\x10\0\0\0\xd6\x01\0\0\0\0\0\x01\x04\0\0\0\x20\0\0\0\0\0\0\
+\0\0\0\0\x02\x15\0\0\0\x41\x02\0\0\x02\0\0\x04\x10\0\0\0\x13\0\0\0\x03\0\0\0\0\
+\0\0\0\x54\x02\0\0\x16\0\0\0\x40\0\0\0\0\0\0\0\0\0\0\x02\x19\0\0\0\0\0\0\0\x01\
+\0\0\x0d\x06\0\0\0\x1c\0\0\0\x14\0\0\0\x59\x02\0\0\x01\0\0\x0c\x17\0\0\0\xa5\
+\x02\0\0\x02\0\0\x04\x10\0\0\0\xae\x02\0\0\x1a\0\0\0\0\0\0\0\xcb\x01\0\0\x1b\0\
+\0\0\x40\0\0\0\0\0\0\0\0\0\0\x02\x1d\0\0\0\xb2\x02\0\0\0\0\0\x08\x1c\0\0\0\xb8\
+\x02\0\0\0\0\0\x01\x02\0\0\0\x10\0\0\0\x1a\x03\0\0\x04\0\0\x04\x28\0\0\0\x9a\
+\x01\0\0\x0e\0\0\0\0\0\0\0\xa4\x01\0\0\x12\0\0\0\x20\0\0\0\x27\x03\0\0\x1e\0\0\
+\0\xc0\0\0\0\x38\x03\0\0\x16\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\x02\x1f\0\0\0\0\0\0\
+\0\0\0\0\x0a\x11\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x1f\0\0\0\x13\0\0\0\x1d\0\0\0\
+\x96\x03\0\0\0\0\0\x0e\x20\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x1f\0\0\0\
+\x13\0\0\0\x0e\0\0\0\xaa\x03\0\0\0\0\0\x0e\x22\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\
+\0\0\0\0\x1f\0\0\0\x13\0\0\0\x26\0\0\0\xc0\x03\0\0\0\0\0\x0e\x24\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\x03\0\0\0\0\x1f\0\0\0\x13\0\0\0\x14\0\0\0\xd5\x03\0\0\0\0\0\x0e\
+\x26\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x11\0\0\0\x13\0\0\0\x04\0\0\0\xec\
+\x03\0\0\0\0\0\x0e\x28\0\0\0\x01\0\0\0\xf4\x03\0\0\x04\0\0\x0f\0\0\0\0\x21\0\0\
+\0\0\0\0\0\x1d\0\0\0\x23\0\0\0\x1d\0\0\0\x0e\0\0\0\x25\0\0\0\x2b\0\0\0\x26\0\0\
+\0\x27\0\0\0\x51\0\0\0\x14\0\0\0\xfc\x03\0\0\x01\0\0\x0f\0\0\0\0\x29\0\0\0\0\0\
+\0\0\x04\0\0\0\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x5f\x62\x70\x66\x5f\x6d\
+\x61\x70\0\x6d\x65\x74\x61\0\x6d\x61\x70\0\x63\x74\x78\0\x69\x6e\x74\0\x64\x75\
+\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\x69\x74\x65\x72\x2f\x62\x70\x66\x5f\
+\x6d\x61\x70\0\x30\x3a\x30\0\x2f\x77\x2f\x6e\x65\x74\x2d\x6e\x65\x78\x74\x2f\
+\x6b\x65\x72\x6e\x65\x6c\x2f\x62\x70\x66\x2f\x70\x72\x65\x6c\x6f\x61\x64\x2f\
+\x69\x74\x65\x72\x61\x74\x6f\x72\x73\x2f\x69\x74\x65\x72\x61\x74\x6f\x72\x73\
+\x2e\x62\x70\x66\x2e\x63\0\x09\x73\x74\x72\x75\x63\x74\x20\x73\x65\x71\x5f\x66\
+\x69\x6c\x65\x20\x2a\x73\x65\x71\x20\x3d\x20\x63\x74\x78\x2d\x3e\x6d\x65\x74\
+\x61\x2d\x3e\x73\x65\x71\x3b\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x6d\x65\x74\
+\x61\0\x73\x65\x71\0\x73\x65\x73\x73\x69\x6f\x6e\x5f\x69\x64\0\x73\x65\x71\x5f\
+\x6e\x75\x6d\0\x73\x65\x71\x5f\x66\x69\x6c\x65\0\x5f\x5f\x75\x36\x34\0\x6c\x6f\
+\x6e\x67\x20\x6c\x6f\x6e\x67\x20\x75\x6e\x73\x69\x67\x6e\x65\x64\x20\x69\x6e\
+\x74\0\x30\x3a\x31\0\x09\x73\x74\x72\x75\x63\x74\x20\x62\x70\x66\x5f\x6d\x61\
+\x70\x20\x2a\x6d\x61\x70\x20\x3d\x20\x63\x74\x78\x2d\x3e\x6d\x61\x70\x3b\0\x09\
+\x69\x66\x20\x28\x21\x6d\x61\x70\x29\0\x30\x3a\x32\0\x09\x5f\x5f\x75\x36\x34\
+\x20\x73\x65\x71\x5f\x6e\x75\x6d\x20\x3d\x20\x63\x74\x78\x2d\x3e\x6d\x65\x74\
+\x61\x2d\x3e\x73\x65\x71\x5f\x6e\x75\x6d\x3b\0\x09\x69\x66\x20\x28\x73\x65\x71\
+\x5f\x6e\x75\x6d\x20\x3d\x3d\x20\x30\x29\0\x09\x09\x42\x50\x46\x5f\x53\x45\x51\
+\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x20\x20\x69\x64\x20\
+\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x70\x61\
+\x67\x65\x73\x5c\x6e\x22\x29\x3b\0\x62\x70\x66\x5f\x6d\x61\x70\0\x69\x64\0\x6d\
+\x65\x6d\x6f\x72\x79\0\x6e\x61\x6d\x65\0\x5f\x5f\x75\x33\x32\0\x75\x6e\x73\x69\
+\x67\x6e\x65\x64\x20\x69\x6e\x74\0\x62\x70\x66\x5f\x6d\x61\x70\x5f\x6d\x65\x6d\
+\x6f\x72\x79\0\x70\x61\x67\x65\x73\0\x63\x68\x61\x72\0\x5f\x5f\x41\x52\x52\x41\
+\x59\x5f\x53\x49\x5a\x45\x5f\x54\x59\x50\x45\x5f\x5f\0\x09\x42\x50\x46\x5f\x53\
+\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x25\x34\x75\
+\x20\x25\x2d\x31\x36\x73\x25\x36\x64\x5c\x6e\x22\x2c\x20\x6d\x61\x70\x2d\x3e\
+\x69\x64\x2c\x20\x6d\x61\x70\x2d\x3e\x6e\x61\x6d\x65\x2c\x20\x6d\x61\x70\x2d\
+\x3e\x6d\x65\x6d\x6f\x72\x79\x2e\x70\x61\x67\x65\x73\x29\x3b\0\x30\x3a\x31\x3a\
+\x30\0\x7d\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x5f\x62\x70\x66\x5f\x70\x72\
+\x6f\x67\0\x70\x72\x6f\x67\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\
+\x67\0\x69\x74\x65\x72\x2f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x09\x73\x74\x72\
+\x75\x63\x74\x20\x62\x70\x66\x5f\x70\x72\x6f\x67\x20\x2a\x70\x72\x6f\x67\x20\
+\x3d\x20\x63\x74\x78\x2d\x3e\x70\x72\x6f\x67\x3b\0\x09\x69\x66\x20\x28\x21\x70\
+\x72\x6f\x67\x29\0\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x61\x75\x78\0\x5f\x5f\x75\
+\x31\x36\0\x75\x6e\x73\x69\x67\x6e\x65\x64\x20\x73\x68\x6f\x72\x74\0\x09\x61\
+\x75\x78\x20\x3d\x20\x70\x72\x6f\x67\x2d\x3e\x61\x75\x78\x3b\0\x09\x09\x42\x50\
+\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\
+\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
+\x20\x20\x20\x70\x61\x67\x65\x73\x20\x61\x74\x74\x61\x63\x68\x65\x64\x5c\x6e\
+\x22\x29\x3b\0\x62\x70\x66\x5f\x70\x72\x6f\x67\x5f\x61\x75\x78\0\x61\x74\x74\
+\x61\x63\x68\x5f\x66\x75\x6e\x63\x5f\x6e\x61\x6d\x65\0\x6c\x69\x6e\x6b\x65\x64\
+\x5f\x70\x72\x6f\x67\0\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\
+\x46\x28\x73\x65\x71\x2c\x20\x22\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x25\x36\
+\x64\x20\x25\x73\x20\x25\x73\x5c\x6e\x22\x2c\x20\x61\x75\x78\x2d\x3e\x69\x64\
+\x2c\x20\x61\x75\x78\x2d\x3e\x6e\x61\x6d\x65\x2c\x20\x70\x72\x6f\x67\x2d\x3e\
+\x70\x61\x67\x65\x73\x2c\0\x30\x3a\x33\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\
+\x6d\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\
+\x6d\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x31\0\x64\x75\x6d\x70\x5f\x62\x70\
+\x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\0\x64\x75\x6d\x70\x5f\x62\
+\x70\x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x32\0\x4c\x49\x43\
+\x45\x4e\x53\x45\0\x2e\x72\x6f\x64\x61\x74\x61\0\x6c\x69\x63\x65\x6e\x73\x65\0\
+\x9f\xeb\x01\0\x20\0\0\0\0\0\0\0\x24\0\0\0\x24\0\0\0\x44\x01\0\0\x68\x01\0\0\
+\x34\x01\0\0\x08\0\0\0\x31\0\0\0\x01\0\0\0\0\0\0\0\x07\0\0\0\x67\x02\0\0\x01\0\
+\0\0\0\0\0\0\x18\0\0\0\x10\0\0\0\x31\0\0\0\x09\0\0\0\0\0\0\0\x42\0\0\0\x7b\0\0\
+\0\x1e\xc4\0\0\x08\0\0\0\x42\0\0\0\x7b\0\0\0\x24\xc4\0\0\x10\0\0\0\x42\0\0\0\
+\xf2\0\0\0\x1d\xcc\0\0\x18\0\0\0\x42\0\0\0\x13\x01\0\0\x06\xd4\0\0\x20\0\0\0\
+\x42\0\0\0\x22\x01\0\0\x1d\xc8\0\0\x28\0\0\0\x42\0\0\0\x47\x01\0\0\x06\xe0\0\0\
+\x38\0\0\0\x42\0\0\0\x5a\x01\0\0\x03\xe4\0\0\x70\0\0\0\x42\0\0\0\xea\x01\0\0\
+\x02\xec\0\0\xf0\0\0\0\x42\0\0\0\x3f\x02\0\0\x01\xf4\0\0\x67\x02\0\0\x0a\0\0\0\
+\0\0\0\0\x42\0\0\0\x7b\0\0\0\x1e\x08\x01\0\x08\0\0\0\x42\0\0\0\x7b\0\0\0\x24\
+\x08\x01\0\x10\0\0\0\x42\0\0\0\x75\x02\0\0\x1f\x10\x01\0\x18\0\0\0\x42\0\0\0\
+\x99\x02\0\0\x06\x1c\x01\0\x20\0\0\0\x42\0\0\0\xc7\x02\0\0\x0e\x28\x01\0\x28\0\
+\0\0\x42\0\0\0\x22\x01\0\0\x1d\x0c\x01\0\x30\0\0\0\x42\0\0\0\x47\x01\0\0\x06\
+\x2c\x01\0\x40\0\0\0\x42\0\0\0\xd9\x02\0\0\x03\x30\x01\0\x78\0\0\0\x42\0\0\0\
+\x44\x03\0\0\x02\x38\x01\0\x28\x01\0\0\x42\0\0\0\x3f\x02\0\0\x01\x44\x01\0\x10\
+\0\0\0\x31\0\0\0\x07\0\0\0\0\0\0\0\x02\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\0\0\x08\0\
+\0\0\x3e\0\0\0\0\0\0\0\x10\0\0\0\x02\0\0\0\xee\0\0\0\0\0\0\0\x20\0\0\0\x08\0\0\
+\0\x1e\x01\0\0\0\0\0\0\x70\0\0\0\x0d\0\0\0\x3e\0\0\0\0\0\0\0\x80\0\0\0\x0d\0\0\
+\0\x1e\x01\0\0\0\0\0\0\xa0\0\0\0\x0d\0\0\0\x39\x02\0\0\0\0\0\0\x67\x02\0\0\x0b\
+\0\0\0\0\0\0\0\x15\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\0\0\x08\0\0\0\x3e\0\0\0\0\0\0\
+\0\x10\0\0\0\x15\0\0\0\xee\0\0\0\0\0\0\0\x20\0\0\0\x19\0\0\0\x3e\0\0\0\0\0\0\0\
+\x28\0\0\0\x08\0\0\0\x1e\x01\0\0\0\0\0\0\x78\0\0\0\x1d\0\0\0\x3e\0\0\0\0\0\0\0\
+\x88\0\0\0\x1d\0\0\0\xee\0\0\0\0\0\0\0\xa8\0\0\0\x19\0\0\0\xee\0\0\0\0\0\0\0\
+\xb8\0\0\0\x1d\0\0\0\x1e\x01\0\0\0\0\0\0\xc8\0\0\0\x1d\0\0\0\x92\x03\0\0\0\0\0\
+\0\xd0\0\0\0\x19\0\0\0\x3e\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\0\xcf\0\0\0\0\0\x02\0\x70\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\xc1\0\0\0\0\0\x02\0\xf0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xc8\0\0\0\0\0\x03\0\x78\
+\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xba\0\0\0\0\0\x03\0\x28\x01\0\0\0\0\0\0\0\0\0\0\
+\0\0\0\0\x14\0\0\0\x01\0\x04\0\0\0\0\0\0\0\0\0\x1d\0\0\0\0\0\0\0\xed\0\0\0\x01\
+\0\x04\0\x1d\0\0\0\0\0\0\0\x0e\0\0\0\0\0\0\0\x28\0\0\0\x01\0\x04\0\x2b\0\0\0\0\
+\0\0\0\x26\0\0\0\0\0\0\0\xd6\0\0\0\x01\0\x04\0\x51\0\0\0\0\0\0\0\x14\0\0\0\0\0\
+\0\0\0\0\0\0\x03\0\x02\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\x03\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\x04\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\0\0\xb2\0\0\0\x11\0\x05\0\0\0\0\0\0\0\0\0\x04\0\0\0\0\0\0\0\x3d\0\0\0\x12\0\
+\x02\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\x5b\0\0\0\x12\0\x03\0\0\0\0\0\0\0\0\0\
+\x38\x01\0\0\0\0\0\0\x48\0\0\0\0\0\0\0\x01\0\0\0\x0b\0\0\0\xc8\0\0\0\0\0\0\0\
+\x01\0\0\0\x0b\0\0\0\x50\0\0\0\0\0\0\0\x01\0\0\0\x0b\0\0\0\0\x01\0\0\0\0\0\0\
+\x01\0\0\0\x0b\0\0\0\x64\x03\0\0\0\0\0\0\x0a\0\0\0\x0b\0\0\0\x70\x03\0\0\0\0\0\
+\0\x0a\0\0\0\x0b\0\0\0\x7c\x03\0\0\0\0\0\0\x0a\0\0\0\x0b\0\0\0\x88\x03\0\0\0\0\
+\0\0\x0a\0\0\0\x0b\0\0\0\xa0\x03\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\x2c\0\0\0\0\0\0\
+\0\0\0\0\0\x09\0\0\0\x3c\0\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x50\0\0\0\0\0\0\0\0\0\
+\0\0\x09\0\0\0\x60\0\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\x70\0\0\0\0\0\0\0\0\0\0\0\
+\x09\0\0\0\x80\0\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\x90\0\0\0\0\0\0\0\0\0\0\0\x09\0\
+\0\0\xa0\0\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\xb0\0\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\
+\xc0\0\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\xd0\0\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\xe8\0\
+\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\xf8\0\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x08\x01\0\0\
+\0\0\0\0\0\0\0\0\x0a\0\0\0\x18\x01\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x28\x01\0\0\0\
+\0\0\0\0\0\0\0\x0a\0\0\0\x38\x01\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x48\x01\0\0\0\0\
+\0\0\0\0\0\0\x0a\0\0\0\x58\x01\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x68\x01\0\0\0\0\0\
+\0\0\0\0\0\x0a\0\0\0\x78\x01\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x94\x01\0\0\0\0\0\0\
+\0\0\0\0\x09\0\0\0\xa4\x01\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\xb4\x01\0\0\0\0\0\0\0\
+\0\0\0\x09\0\0\0\xc4\x01\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\xd4\x01\0\0\0\0\0\0\0\0\
+\0\0\x09\0\0\0\xe4\x01\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\xf4\x01\0\0\0\0\0\0\0\0\0\
+\0\x09\0\0\0\x0c\x02\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x1c\x02\0\0\0\0\0\0\0\0\0\0\
+\x0a\0\0\0\x2c\x02\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x3c\x02\0\0\0\0\0\0\0\0\0\0\
+\x0a\0\0\0\x4c\x02\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x5c\x02\0\0\0\0\0\0\0\0\0\0\
+\x0a\0\0\0\x6c\x02\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x7c\x02\0\0\0\0\0\0\0\0\0\0\
+\x0a\0\0\0\x8c\x02\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x9c\x02\0\0\0\0\0\0\0\0\0\0\
+\x0a\0\0\0\xac\x02\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x3d\x3e\x30\x31\x32\x33\x3c\0\
+\x2e\x74\x65\x78\x74\0\x2e\x72\x65\x6c\x2e\x42\x54\x46\x2e\x65\x78\x74\0\x64\
+\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\0\x64\
+\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\0\
+\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\x2e\x72\x65\x6c\x69\x74\x65\
+\x72\x2f\x62\x70\x66\x5f\x6d\x61\x70\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\
+\x72\x6f\x67\0\x2e\x72\x65\x6c\x69\x74\x65\x72\x2f\x62\x70\x66\x5f\x70\x72\x6f\
+\x67\0\x2e\x6c\x6c\x76\x6d\x5f\x61\x64\x64\x72\x73\x69\x67\0\x6c\x69\x63\x65\
+\x6e\x73\x65\0\x2e\x73\x74\x72\x74\x61\x62\0\x2e\x73\x79\x6d\x74\x61\x62\0\x2e\
+\x72\x6f\x64\x61\x74\x61\0\x2e\x72\x65\x6c\x2e\x42\x54\x46\0\x4c\x49\x43\x45\
+\x4e\x53\x45\0\x4c\x42\x42\x31\x5f\x34\0\x4c\x42\x42\x30\x5f\x34\0\x4c\x42\x42\
+\x31\x5f\x33\0\x4c\x42\x42\x30\x5f\x33\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\
+\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x32\0\x64\x75\x6d\x70\x5f\x62\
+\x70\x66\x5f\x6d\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x31\0\0\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\x01\0\0\0\x06\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\x40\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x04\0\
+\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x4e\0\0\0\x01\0\0\0\x06\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\0\0\x40\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\x6d\0\0\0\x01\0\0\0\x06\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x40\x01\0\0\
+\0\0\0\0\x38\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\xa1\0\0\0\x01\0\0\0\x02\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x78\x02\0\0\0\0\0\0\x65\
+\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x89\0\0\0\x01\
+\0\0\0\x03\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xdd\x02\0\0\0\0\0\0\x04\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xad\0\0\0\x01\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\0\xe1\x02\0\0\0\0\0\0\xac\x07\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x0b\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\0\0\0\0\0\x8d\x0a\0\0\0\0\0\0\xbc\x02\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\x99\0\0\0\x02\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x50\
+\x0d\0\0\0\0\0\0\x68\x01\0\0\0\0\0\0\x0e\0\0\0\x0c\0\0\0\x08\0\0\0\0\0\0\0\x18\
+\0\0\0\0\0\0\0\x4a\0\0\0\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xb8\x0e\0\0\
+\0\0\0\0\x20\0\0\0\0\0\0\0\x08\0\0\0\x02\0\0\0\x08\0\0\0\0\0\0\0\x10\0\0\0\0\0\
+\0\0\x69\0\0\0\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xd8\x0e\0\0\0\0\0\0\
+\x20\0\0\0\0\0\0\0\x08\0\0\0\x03\0\0\0\x08\0\0\0\0\0\0\0\x10\0\0\0\0\0\0\0\xa9\
+\0\0\0\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xf8\x0e\0\0\0\0\0\0\x50\0\0\0\
+\0\0\0\0\x08\0\0\0\x06\0\0\0\x08\0\0\0\0\0\0\0\x10\0\0\0\0\0\0\0\x07\0\0\0\x09\
+\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x48\x0f\0\0\0\0\0\0\x70\x02\0\0\0\0\0\0\
+\x08\0\0\0\x07\0\0\0\x08\0\0\0\0\0\0\0\x10\0\0\0\0\0\0\0\x7b\0\0\0\x03\x4c\xff\
+\x6f\0\0\0\x80\0\0\0\0\0\0\0\0\0\0\0\0\xb8\x11\0\0\0\0\0\0\x07\0\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x91\0\0\0\x03\0\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\xbf\x11\0\0\0\0\0\0\x03\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0";
+
+	return 0;
+err:
+	bpf_object__destroy_skeleton(s);
+	return -1;
+}
+
+#endif /* __ITERATORS_BPF_SKEL_H__ */
-- 
2.23.0



* [PATCH v2 bpf-next 4/4] bpf: Add kernel module with user mode driver that populates bpffs.
  2020-07-17  4:40 [PATCH v2 bpf-next 0/4] bpf: Populate bpffs with map and prog iterators Alexei Starovoitov
                   ` (2 preceding siblings ...)
  2020-07-17  4:40 ` [PATCH v2 bpf-next 3/4] bpf: Add BPF program and map iterators as built-in BPF programs Alexei Starovoitov
@ 2020-07-17  4:40 ` Alexei Starovoitov
  2020-07-19 23:56     ` kernel test robot
  2020-07-22 22:16   ` Daniel Borkmann
  3 siblings, 2 replies; 8+ messages in thread
From: Alexei Starovoitov @ 2020-07-17  4:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, torvalds, netdev, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

Add kernel module with user mode driver that populates bpffs with
BPF iterators.

$ mount bpffs /my/bpffs/ -t bpf
$ ls -la /my/bpffs/
total 4
drwxrwxrwt  2 root root    0 Jul  2 00:27 .
drwxr-xr-x 19 root root 4096 Jul  2 00:09 ..
-rw-------  1 root root    0 Jul  2 00:27 maps.debug
-rw-------  1 root root    0 Jul  2 00:27 progs.debug

The user mode driver will load BPF Type Formats, create BPF maps, populate BPF
maps, load two BPF programs, attach them to BPF iterators, and finally send two
bpf_link IDs back to the kernel.
The kernel will pin the two bpf_links into the newly mounted bpffs instance
under the names "progs.debug" and "maps.debug". These two files become human
readable.

$ cat /my/bpffs/progs.debug
  id name            pages attached
  11 dump_bpf_map        1 bpf_iter_bpf_map
  12 dump_bpf_prog       1 bpf_iter_bpf_prog
  27 test_pkt_access     1
  32 test_main           1 test_pkt_access test_pkt_access
  33 test_subprog1       1 test_pkt_access_subprog1 test_pkt_access
  34 test_subprog2       1 test_pkt_access_subprog2 test_pkt_access
  35 test_subprog3       1 test_pkt_access_subprog3 test_pkt_access
  36 new_get_skb_len     1 get_skb_len test_pkt_access
  37 new_get_skb_ifi     1 get_skb_ifindex test_pkt_access
  38 new_get_constan     1 get_constant test_pkt_access

The BPF program dump_bpf_prog() in iterators.bpf.c prints this data about
all BPF programs currently loaded in the system. This information is unstable
and will change from kernel to kernel, as the ".debug" suffix conveys.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 init/Kconfig                                  |  2 +
 kernel/bpf/Makefile                           |  1 +
 kernel/bpf/inode.c                            | 86 ++++++++++++++++-
 kernel/bpf/preload/Kconfig                    | 18 ++++
 kernel/bpf/preload/Makefile                   | 21 +++++
 kernel/bpf/preload/bpf_preload.h              | 16 ++++
 kernel/bpf/preload/bpf_preload_kern.c         | 85 +++++++++++++++++
 kernel/bpf/preload/bpf_preload_umd_blob.S     |  7 ++
 .../preload/iterators/bpf_preload_common.h    | 13 +++
 kernel/bpf/preload/iterators/iterators.c      | 93 +++++++++++++++++++
 10 files changed, 339 insertions(+), 3 deletions(-)
 create mode 100644 kernel/bpf/preload/Kconfig
 create mode 100644 kernel/bpf/preload/Makefile
 create mode 100644 kernel/bpf/preload/bpf_preload.h
 create mode 100644 kernel/bpf/preload/bpf_preload_kern.c
 create mode 100644 kernel/bpf/preload/bpf_preload_umd_blob.S
 create mode 100644 kernel/bpf/preload/iterators/bpf_preload_common.h
 create mode 100644 kernel/bpf/preload/iterators/iterators.c

diff --git a/init/Kconfig b/init/Kconfig
index 0498af567f70..2adc1fa31fa1 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -2313,3 +2313,5 @@ config ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
 # <asm/syscall_wrapper.h>.
 config ARCH_HAS_SYSCALL_WRAPPER
 	def_bool n
+
+source "kernel/bpf/preload/Kconfig"
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index e6eb9c0402da..19e137aae40e 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -29,3 +29,4 @@ ifeq ($(CONFIG_BPF_JIT),y)
 obj-$(CONFIG_BPF_SYSCALL) += bpf_struct_ops.o
 obj-${CONFIG_BPF_LSM} += bpf_lsm.o
 endif
+obj-$(CONFIG_BPF_PRELOAD) += preload/
diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
index fb878ba3f22f..aadc703d18cf 100644
--- a/kernel/bpf/inode.c
+++ b/kernel/bpf/inode.c
@@ -20,6 +20,7 @@
 #include <linux/filter.h>
 #include <linux/bpf.h>
 #include <linux/bpf_trace.h>
+#include "preload/bpf_preload.h"
 
 enum bpf_type {
 	BPF_TYPE_UNSPEC	= 0,
@@ -369,9 +370,10 @@ static struct dentry *
 bpf_lookup(struct inode *dir, struct dentry *dentry, unsigned flags)
 {
 	/* Dots in names (e.g. "/sys/fs/bpf/foo.bar") are reserved for future
-	 * extensions.
+	 * extensions. That allows populate_bpffs() to create special files.
 	 */
-	if (strchr(dentry->d_name.name, '.'))
+	if ((dir->i_mode & S_IALLUGO) &&
+	    strchr(dentry->d_name.name, '.'))
 		return ERR_PTR(-EPERM);
 
 	return simple_lookup(dir, dentry, flags);
@@ -409,6 +411,27 @@ static const struct inode_operations bpf_dir_iops = {
 	.unlink		= simple_unlink,
 };
 
+/* pin iterator link into bpffs */
+static int bpf_iter_link_pin_kernel(struct dentry *parent,
+				    const char *name, struct bpf_link *link)
+{
+	umode_t mode = S_IFREG | S_IRUSR | S_IWUSR;
+	struct dentry *dentry;
+	int ret;
+
+	inode_lock(parent->d_inode);
+	dentry = lookup_one_len(name, parent, strlen(name));
+	if (IS_ERR(dentry)) {
+		inode_unlock(parent->d_inode);
+		return PTR_ERR(dentry);
+	}
+	ret = bpf_mkobj_ops(dentry, mode, link, &bpf_link_iops,
+			    &bpf_iter_fops);
+	dput(dentry);
+	inode_unlock(parent->d_inode);
+	return ret;
+}
+
 static int bpf_obj_do_pin(const char __user *pathname, void *raw,
 			  enum bpf_type type)
 {
@@ -638,6 +661,61 @@ static int bpf_parse_param(struct fs_context *fc, struct fs_parameter *param)
 	return 0;
 }
 
+struct bpf_preload_ops bpf_preload_ops = { .info.driver_name = "bpf_preload" };
+EXPORT_SYMBOL_GPL(bpf_preload_ops);
+
+static int populate_bpffs(struct dentry *parent)
+{
+	struct bpf_preload_info objs[BPF_PRELOAD_LINKS] = {};
+	struct bpf_link *links[BPF_PRELOAD_LINKS] = {};
+	int err = 0, i;
+
+	mutex_lock(&bpf_preload_ops.lock);
+	if (!bpf_preload_ops.do_preload) {
+		mutex_unlock(&bpf_preload_ops.lock);
+		request_module("bpf_preload");
+		mutex_lock(&bpf_preload_ops.lock);
+
+		if (!bpf_preload_ops.do_preload) {
+			pr_err("bpf_preload module is missing.\n"
+			       "bpffs will not have iterators.\n");
+			goto out;
+		}
+	}
+
+	if (!bpf_preload_ops.info.tgid) {
+		err = bpf_preload_ops.do_preload(objs);
+		if (err)
+			goto out;
+		for (i = 0; i < BPF_PRELOAD_LINKS; i++) {
+			links[i] = bpf_link_by_id(objs[i].link_id);
+			if (IS_ERR(links[i])) {
+				err = PTR_ERR(links[i]);
+				goto out;
+			}
+		}
+		for (i = 0; i < BPF_PRELOAD_LINKS; i++) {
+			err = bpf_iter_link_pin_kernel(parent,
+						       objs[i].link_name, links[i]);
+			if (err)
+				goto out;
+			/* do not unlink successfully pinned links even
+			 * if later link fails to pin
+			 */
+			links[i] = NULL;
+		}
+		err = bpf_preload_ops.do_finish();
+		if (err)
+			goto out;
+	}
+out:
+	mutex_unlock(&bpf_preload_ops.lock);
+	for (i = 0; i < BPF_PRELOAD_LINKS && err; i++)
+		if (!IS_ERR_OR_NULL(links[i]))
+			bpf_link_put(links[i]);
+	return err;
+}
+
 static int bpf_fill_super(struct super_block *sb, struct fs_context *fc)
 {
 	static const struct tree_descr bpf_rfiles[] = { { "" } };
@@ -654,8 +732,8 @@ static int bpf_fill_super(struct super_block *sb, struct fs_context *fc)
 	inode = sb->s_root->d_inode;
 	inode->i_op = &bpf_dir_iops;
 	inode->i_mode &= ~S_IALLUGO;
+	populate_bpffs(sb->s_root);
 	inode->i_mode |= S_ISVTX | opts->mode;
-
 	return 0;
 }
 
@@ -705,6 +783,8 @@ static int __init bpf_init(void)
 {
 	int ret;
 
+	mutex_init(&bpf_preload_ops.lock);
+
 	ret = sysfs_create_mount_point(fs_kobj, "bpf");
 	if (ret)
 		return ret;
diff --git a/kernel/bpf/preload/Kconfig b/kernel/bpf/preload/Kconfig
new file mode 100644
index 000000000000..b8ba5a9398ed
--- /dev/null
+++ b/kernel/bpf/preload/Kconfig
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: GPL-2.0-only
+menuconfig BPF_PRELOAD
+	bool "Preload BPF file system with kernel specific program and map iterators"
+	depends on BPF
+	help
+	  This builds kernel module with several embedded BPF programs that are
+	  pinned into BPF FS mount point as human readable files that are
+	  useful in debugging and introspection of BPF programs and maps.
+
+if BPF_PRELOAD
+config BPF_PRELOAD_UMD
+	tristate "bpf_preload kernel module with user mode driver"
+	depends on CC_CAN_LINK
+	depends on m || CC_CAN_LINK_STATIC
+	default m
+	help
+	  This builds bpf_preload kernel module with embedded user mode driver.
+endif
diff --git a/kernel/bpf/preload/Makefile b/kernel/bpf/preload/Makefile
new file mode 100644
index 000000000000..191d82209842
--- /dev/null
+++ b/kernel/bpf/preload/Makefile
@@ -0,0 +1,21 @@
+# SPDX-License-Identifier: GPL-2.0
+
+LIBBPF := $(srctree)/../../tools/lib/bpf
+userccflags += -I $(srctree)/tools/include/ -I $(srctree)/tools/include/uapi -I $(LIBBPF) \
+	-I $(srctree)/tools/lib/ \
+	-I $(srctree)/kernel/bpf/preload/iterators/ -Wno-int-conversion \
+	-DCOMPAT_NEED_REALLOCARRAY
+
+userprogs := bpf_preload_umd
+
+LIBBPF_O := $(LIBBPF)/bpf.o $(LIBBPF)/libbpf.o $(LIBBPF)/btf.o $(LIBBPF)/libbpf_errno.o \
+	$(LIBBPF)/str_error.o $(LIBBPF)/hashmap.o $(LIBBPF)/libbpf_probes.o
+
+bpf_preload_umd-objs := iterators/iterators.o $(LIBBPF_O)
+
+userldflags += -lelf -lz
+
+$(obj)/bpf_preload_umd_blob.o: $(obj)/bpf_preload_umd
+
+obj-$(CONFIG_BPF_PRELOAD_UMD) += bpf_preload.o
+bpf_preload-objs += bpf_preload_kern.o bpf_preload_umd_blob.o
diff --git a/kernel/bpf/preload/bpf_preload.h b/kernel/bpf/preload/bpf_preload.h
new file mode 100644
index 000000000000..c57cef812f96
--- /dev/null
+++ b/kernel/bpf/preload/bpf_preload.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _BPF_PRELOAD_H
+#define _BPF_PRELOAD_H
+
+#include <linux/usermode_driver.h>
+#include "iterators/bpf_preload_common.h"
+
+struct bpf_preload_ops {
+        struct umd_info info;
+        struct mutex lock;
+	int (*do_preload)(struct bpf_preload_info *);
+	int (*do_finish)(void);
+};
+extern struct bpf_preload_ops bpf_preload_ops;
+#define BPF_PRELOAD_LINKS 2
+#endif
diff --git a/kernel/bpf/preload/bpf_preload_kern.c b/kernel/bpf/preload/bpf_preload_kern.c
new file mode 100644
index 000000000000..cd10f291d6cd
--- /dev/null
+++ b/kernel/bpf/preload/bpf_preload_kern.c
@@ -0,0 +1,85 @@
+// SPDX-License-Identifier: GPL-2.0
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/pid.h>
+#include <linux/fs.h>
+#include <linux/sched/signal.h>
+#include "bpf_preload.h"
+
+extern char bpf_preload_umd_start;
+extern char bpf_preload_umd_end;
+
+static int do_preload(struct bpf_preload_info *obj)
+{
+	int magic = BPF_PRELOAD_START;
+	struct pid *tgid;
+	loff_t pos = 0;
+	int i, err;
+	ssize_t n;
+
+	err = fork_usermode_driver(&bpf_preload_ops.info);
+	if (err)
+		return err;
+	tgid = bpf_preload_ops.info.tgid;
+
+	/* send the start magic to let UMD proceed with loading BPF progs */
+	n = kernel_write(bpf_preload_ops.info.pipe_to_umh,
+			 &magic, sizeof(magic), &pos);
+	if (n != sizeof(magic))
+		return -EPIPE;
+
+	/* receive bpf_link IDs and names from UMD */
+	pos = 0;
+	for (i = 0; i < BPF_PRELOAD_LINKS; i++) {
+		n = kernel_read(bpf_preload_ops.info.pipe_from_umh,
+				&obj[i], sizeof(*obj), &pos);
+		if (n != sizeof(*obj))
+			return -EPIPE;
+	}
+	return 0;
+}
+
+static int do_finish(void)
+{
+	int magic = BPF_PRELOAD_END;
+	struct pid *tgid;
+	loff_t pos = 0;
+	ssize_t n;
+
+	/* send the last magic to UMD. It will do a normal exit. */
+	n = kernel_write(bpf_preload_ops.info.pipe_to_umh,
+			 &magic, sizeof(magic), &pos);
+	if (n != sizeof(magic))
+		return -EPIPE;
+	tgid = bpf_preload_ops.info.tgid;
+	wait_event(tgid->wait_pidfd, thread_group_exited(tgid));
+	bpf_preload_ops.info.tgid = NULL;
+	return 0;
+}
+
+static int __init load_umd(void)
+{
+	int err;
+
+	err = umd_load_blob(&bpf_preload_ops.info, &bpf_preload_umd_start,
+			    &bpf_preload_umd_end - &bpf_preload_umd_start);
+	if (err)
+		return err;
+	bpf_preload_ops.do_preload = do_preload;
+	bpf_preload_ops.do_finish = do_finish;
+	return err;
+}
+
+static void __exit fini_umd(void)
+{
+	bpf_preload_ops.do_preload = NULL;
+	bpf_preload_ops.do_finish = NULL;
+	/* kill UMD in case it's still there due to earlier error */
+	kill_pid(bpf_preload_ops.info.tgid, SIGKILL, 1);
+	bpf_preload_ops.info.tgid = NULL;
+	umd_unload_blob(&bpf_preload_ops.info);
+}
+late_initcall(load_umd);
+module_exit(fini_umd);
+MODULE_LICENSE("GPL");
diff --git a/kernel/bpf/preload/bpf_preload_umd_blob.S b/kernel/bpf/preload/bpf_preload_umd_blob.S
new file mode 100644
index 000000000000..d0fe58c0734a
--- /dev/null
+++ b/kernel/bpf/preload/bpf_preload_umd_blob.S
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+	.section .init.rodata, "a"
+	.global bpf_preload_umd_start
+bpf_preload_umd_start:
+	.incbin "bpf_preload_umd"
+	.global bpf_preload_umd_end
+bpf_preload_umd_end:
diff --git a/kernel/bpf/preload/iterators/bpf_preload_common.h b/kernel/bpf/preload/iterators/bpf_preload_common.h
new file mode 100644
index 000000000000..8464d1a48c05
--- /dev/null
+++ b/kernel/bpf/preload/iterators/bpf_preload_common.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _BPF_PRELOAD_COMMON_H
+#define _BPF_PRELOAD_COMMON_H
+
+#define BPF_PRELOAD_START 0x5555
+#define BPF_PRELOAD_END 0xAAAA
+
+struct bpf_preload_info {
+	char link_name[16];
+	int link_id;
+};
+
+#endif
diff --git a/kernel/bpf/preload/iterators/iterators.c b/kernel/bpf/preload/iterators/iterators.c
new file mode 100644
index 000000000000..de5e9b010ac7
--- /dev/null
+++ b/kernel/bpf/preload/iterators/iterators.c
@@ -0,0 +1,93 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2020 Facebook */
+#include <argp.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <bpf/libbpf.h>
+#include <bpf/bpf.h>
+#include <sys/mount.h>
+#include "iterators.skel.h"
+#include "bpf_preload_common.h"
+
+int to_kernel = -1;
+int from_kernel = 0;
+
+static int send_link_to_kernel(struct bpf_link *link, const char *link_name)
+{
+	struct bpf_preload_info obj = {};
+	struct bpf_link_info info = {};
+	__u32 info_len = sizeof(info);
+	int err;
+
+	err = bpf_obj_get_info_by_fd(bpf_link__fd(link), &info, &info_len);
+	if (err)
+		return err;
+	obj.link_id = info.id;
+	if (strlen(link_name) >= sizeof(obj.link_name))
+		return -E2BIG;
+	strcpy(obj.link_name, link_name);
+	if (write(to_kernel, &obj, sizeof(obj)) != sizeof(obj))
+		return -EPIPE;
+	return 0;
+}
+
+int main(int argc, char **argv)
+{
+	struct iterators_bpf *skel;
+	int err, magic;
+	int debug_fd;
+
+	debug_fd = open("/dev/console", O_WRONLY | O_NOCTTY | O_CLOEXEC);
+	if (debug_fd < 0)
+		return 1;
+	to_kernel = dup(1);
+	close(1);
+	dup(debug_fd);
+	/* now stdout points to /dev/console */
+
+	read(from_kernel, &magic, sizeof(magic));
+	if (magic != BPF_PRELOAD_START) {
+		printf("bad start magic %d\n", magic);
+		return 1;
+	}
+
+	/* libbpf opens BPF object and loads it into the kernel */
+	skel = iterators_bpf__open_and_load();
+	if (!skel) {
+		/* iterators.skel.h is little endian.
+		 * libbpf doesn't support automatic little->big conversion
+		 * of BPF bytecode yet.
+		 * The program load will fail in such case.
+		 */
+		printf("Failed load could be due to wrong endianness\n");
+		return 1;
+	}
+
+	err = iterators_bpf__attach(skel);
+	if (err)
+		goto cleanup;
+
+	/* send two bpf_link IDs with names to the kernel */
+	err = send_link_to_kernel(skel->links.dump_bpf_map, "maps.debug");
+	if (err)
+		goto cleanup;
+	err = send_link_to_kernel(skel->links.dump_bpf_prog, "progs.debug");
+	if (err)
+		goto cleanup;
+
+	/* The kernel will proceed with pinning the links in bpffs.
+	 * UMD will wait on read from pipe.
+	 */
+	read(from_kernel, &magic, sizeof(magic));
+	if (magic != BPF_PRELOAD_END) {
+		printf("bad final magic %d\n", magic);
+		err = -EINVAL;
+	}
+cleanup:
+	iterators_bpf__destroy(skel);
+
+	return err != 0;
+}
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH v2 bpf-next 4/4] bpf: Add kernel module with user mode driver that populates bpffs.
  2020-07-17  4:40 ` [PATCH v2 bpf-next 4/4] bpf: Add kernel module with user mode driver that populates bpffs Alexei Starovoitov
@ 2020-07-19 23:56     ` kernel test robot
  2020-07-22 22:16   ` Daniel Borkmann
  1 sibling, 0 replies; 8+ messages in thread
From: kernel test robot @ 2020-07-19 23:56 UTC (permalink / raw)
  To: Alexei Starovoitov, davem
  Cc: kbuild-all, daniel, torvalds, netdev, bpf, kernel-team

[-- Attachment #1: Type: text/plain, Size: 3160 bytes --]

Hi Alexei,

I love your patch! Yet something to improve:

[auto build test ERROR on bpf-next/master]

url:    https://github.com/0day-ci/linux/commits/Alexei-Starovoitov/bpf-Populate-bpffs-with-map-and-prog-iterators/20200717-124311
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
config: x86_64-allyesconfig (attached as .config)
compiler: gcc-9 (Debian 9.3.0-14) 9.3.0
reproduce (this is a W=1 build):
        # save the attached .config to linux build tree
        make W=1 ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All error/warnings (new ones prefixed by >>):

>> make[4]: *** No rule to make target 'kernel/bpf/preload/../../tools/lib/bpf/bpf.c', needed by 'kernel/bpf/preload/../../tools/lib/bpf/bpf.o'.
>> make[4]: *** No rule to make target 'kernel/bpf/preload/../../tools/lib/bpf/libbpf.c', needed by 'kernel/bpf/preload/../../tools/lib/bpf/libbpf.o'.
>> make[4]: *** No rule to make target 'kernel/bpf/preload/../../tools/lib/bpf/btf.c', needed by 'kernel/bpf/preload/../../tools/lib/bpf/btf.o'.
>> make[4]: *** No rule to make target 'kernel/bpf/preload/../../tools/lib/bpf/libbpf_errno.c', needed by 'kernel/bpf/preload/../../tools/lib/bpf/libbpf_errno.o'.
>> make[4]: *** No rule to make target 'kernel/bpf/preload/../../tools/lib/bpf/str_error.c', needed by 'kernel/bpf/preload/../../tools/lib/bpf/str_error.o'.
>> make[4]: *** No rule to make target 'kernel/bpf/preload/../../tools/lib/bpf/hashmap.c', needed by 'kernel/bpf/preload/../../tools/lib/bpf/hashmap.o'.
>> make[4]: *** No rule to make target 'kernel/bpf/preload/../../tools/lib/bpf/libbpf_probes.c', needed by 'kernel/bpf/preload/../../tools/lib/bpf/libbpf_probes.o'.
   make[4]: Target '__build' not remade because of errors.
--
   kernel/bpf/preload/bpf_preload_kern.c: In function 'do_preload':
>> kernel/bpf/preload/bpf_preload_kern.c:16:14: warning: variable 'tgid' set but not used [-Wunused-but-set-variable]
      16 |  struct pid *tgid;
         |              ^~~~

vim +/tgid +16 kernel/bpf/preload/bpf_preload_kern.c

    12	
    13	static int do_preload(struct bpf_preload_info *obj)
    14	{
    15		int magic = BPF_PRELOAD_START;
  > 16		struct pid *tgid;
    17		loff_t pos = 0;
    18		int i, err;
    19		ssize_t n;
    20	
    21		err = fork_usermode_driver(&bpf_preload_ops.info);
    22		if (err)
    23			return err;
    24		tgid = bpf_preload_ops.info.tgid;
    25	
    26		/* send the start magic to let UMD proceed with loading BPF progs */
    27		n = kernel_write(bpf_preload_ops.info.pipe_to_umh,
    28				 &magic, sizeof(magic), &pos);
    29		if (n != sizeof(magic))
    30			return -EPIPE;
    31	
    32		/* receive bpf_link IDs and names from UMD */
    33		pos = 0;
    34		for (i = 0; i < BPF_PRELOAD_LINKS; i++) {
    35			n = kernel_read(bpf_preload_ops.info.pipe_from_umh,
    36					&obj[i], sizeof(*obj), &pos);
    37			if (n != sizeof(*obj))
    38				return -EPIPE;
    39		}
    40		return 0;
    41	}
    42	
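
A trivial way to silence that warning, sketched here by the editor rather than
taken from a follow-up (the assignment may equally be a placeholder for later
use, in which case dropping it is not the right fix), is to remove the unused
local from do_preload():

static int do_preload(struct bpf_preload_info *obj)
{
	int magic = BPF_PRELOAD_START;
	loff_t pos = 0;
	int i, err;
	ssize_t n;

	err = fork_usermode_driver(&bpf_preload_ops.info);
	if (err)
		return err;

	/* send the start magic to let UMD proceed with loading BPF progs */
	n = kernel_write(bpf_preload_ops.info.pipe_to_umh,
			 &magic, sizeof(magic), &pos);
	if (n != sizeof(magic))
		return -EPIPE;

	/* receive bpf_link IDs and names from UMD */
	pos = 0;
	for (i = 0; i < BPF_PRELOAD_LINKS; i++) {
		n = kernel_read(bpf_preload_ops.info.pipe_from_umh,
				&obj[i], sizeof(*obj), &pos);
		if (n != sizeof(*obj))
			return -EPIPE;
	}
	return 0;
}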

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 75279 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v2 bpf-next 4/4] bpf: Add kernel module with user mode driver that populates bpffs.
  2020-07-17  4:40 ` [PATCH v2 bpf-next 4/4] bpf: Add kernel module with user mode driver that populates bpffs Alexei Starovoitov
  2020-07-19 23:56     ` kernel test robot
@ 2020-07-22 22:16   ` Daniel Borkmann
  1 sibling, 0 replies; 8+ messages in thread
From: Daniel Borkmann @ 2020-07-22 22:16 UTC (permalink / raw)
  To: Alexei Starovoitov, davem; +Cc: torvalds, netdev, bpf, kernel-team

On 7/17/20 6:40 AM, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@kernel.org>
> 
> Add kernel module with user mode driver that populates bpffs with
> BPF iterators.
> 
> $ mount bpffs /my/bpffs/ -t bpf
> $ ls -la /my/bpffs/
> total 4
> drwxrwxrwt  2 root root    0 Jul  2 00:27 .
> drwxr-xr-x 19 root root 4096 Jul  2 00:09 ..
> -rw-------  1 root root    0 Jul  2 00:27 maps.debug
> -rw-------  1 root root    0 Jul  2 00:27 progs.debug
> 
> The user mode driver will load BPF Type Formats, create BPF maps, populate BPF
> maps, load two BPF programs, attach them to BPF iterators, and finally send two
> bpf_link IDs back to the kernel.
> The kernel will pin two bpf_links into newly mounted bpffs instance under
> names "progs.debug" and "maps.debug". These two files become human readable.
> 
> $ cat /my/bpffs/progs.debug
>    id name            pages attached
>    11 dump_bpf_map        1 bpf_iter_bpf_map
>    12 dump_bpf_prog       1 bpf_iter_bpf_prog
>    27 test_pkt_access     1
>    32 test_main           1 test_pkt_access test_pkt_access
>    33 test_subprog1       1 test_pkt_access_subprog1 test_pkt_access
>    34 test_subprog2       1 test_pkt_access_subprog2 test_pkt_access
>    35 test_subprog3       1 test_pkt_access_subprog3 test_pkt_access
>    36 new_get_skb_len     1 get_skb_len test_pkt_access
>    37 new_get_skb_ifi     1 get_skb_ifindex test_pkt_access
>    38 new_get_constan     1 get_constant test_pkt_access
> 
> The BPF program dump_bpf_prog() in iterators.bpf.c is printing this data about
> all BPF programs currently loaded in the system. This information is unstable
> and will change from kernel to kernel as ".debug" suffix conveys.
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
[...]
>   static int bpf_obj_do_pin(const char __user *pathname, void *raw,
>   			  enum bpf_type type)
>   {
> @@ -638,6 +661,61 @@ static int bpf_parse_param(struct fs_context *fc, struct fs_parameter *param)
>   	return 0;
>   }
>   
> +struct bpf_preload_ops bpf_preload_ops = { .info.driver_name = "bpf_preload" };
> +EXPORT_SYMBOL_GPL(bpf_preload_ops);
> +
> +static int populate_bpffs(struct dentry *parent)
> +{
> +	struct bpf_preload_info objs[BPF_PRELOAD_LINKS] = {};
> +	struct bpf_link *links[BPF_PRELOAD_LINKS] = {};
> +	int err = 0, i;
> +
> +	mutex_lock(&bpf_preload_ops.lock);
> +	if (!bpf_preload_ops.do_preload) {
> +		mutex_unlock(&bpf_preload_ops.lock);
> +		request_module("bpf_preload");
> +		mutex_lock(&bpf_preload_ops.lock);
> +
> +		if (!bpf_preload_ops.do_preload) {
> +			pr_err("bpf_preload module is missing.\n"
> +			       "bpffs will not have iterators.\n");
> +			goto out;
> +		}
> +	}

Overall the set looks good. One thing that appears possible from staring at
the code: while we load the bpf_preload module here and below invoke the
module's preload ops callbacks, nothing seems to prevent the module from
being forcefully unloaded in parallel (e.g. no ref on the module is held).

It looks like the old bpfilter code prevented exactly this by holding the
bpfilter_ops.lock mutex during its {load,fini}_umh() module init/exit
functions.
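
One possible shape for such a guard, sketched by the editor and not taken
from the series (the "owner" field below is hypothetical): bpf_preload_kern.c
would set it to THIS_MODULE in load_umd(), and populate_bpffs() would pin the
module across the callbacks while still holding bpf_preload_ops.lock,
assuming fini_umd() clears the callbacks under the same lock:

struct bpf_preload_ops {
	struct umd_info info;
	struct mutex lock;
	struct module *owner;	/* hypothetical: set to THIS_MODULE by bpf_preload.ko */
	int (*do_preload)(struct bpf_preload_info *);
	int (*do_finish)(void);
};

/* in populate_bpffs(), once do_preload is known to be set and the lock is held */
if (!try_module_get(bpf_preload_ops.owner)) {
	err = -ENODEV;
	goto out;
}
err = bpf_preload_ops.do_preload(objs);
/* ... pin the returned links, then call bpf_preload_ops.do_finish() ... */
module_put(bpf_preload_ops.owner);

With the reference held, a parallel module unload cannot free the module text
while the kernel is still calling into it.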

Other than that, maybe it would be nice to have a test_progs selftest
extension which mounts multiple BPF fs instances and asserts that if one of
them has the {progs,maps}.debug files, the others have them as well, plus a
plain read of both (without parsing anything) just to make sure the dump
terminates .. at least to get some basic exercising of the code in there.
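
A rough standalone sketch of that kind of check, written by the editor and
not part of the series (mount points and the lack of test_progs plumbing are
assumptions):

// SPDX-License-Identifier: GPL-2.0
/* Mount two bpffs instances and verify both expose readable
 * progs.debug / maps.debug files whose dumps terminate.
 */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/mount.h>
#include <sys/stat.h>
#include <sys/types.h>

static int read_all(const char *path)
{
	char buf[4096];
	ssize_t n;
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return -1;
	/* plain read, no parsing; just make sure the dump terminates */
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		;
	close(fd);
	return n < 0 ? -1 : 0;
}

int main(void)
{
	const char *dirs[2] = { "/tmp/bpffs1", "/tmp/bpffs2" };
	const char *files[2] = { "progs.debug", "maps.debug" };
	char path[128];
	int i, j, err = 0;

	for (i = 0; i < 2; i++) {
		mkdir(dirs[i], 0700);
		if (mount("bpffs", dirs[i], "bpf", 0, NULL)) {
			perror("mount");
			return 1;
		}
	}
	/* every instance must expose the same pinned iterator files */
	for (i = 0; i < 2; i++)
		for (j = 0; j < 2; j++) {
			snprintf(path, sizeof(path), "%s/%s", dirs[i], files[j]);
			if (read_all(path)) {
				fprintf(stderr, "missing or unreadable %s\n", path);
				err = 1;
			}
		}
	for (i = 0; i < 2; i++)
		umount(dirs[i]);
	return err;
}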

Thanks,
Daniel

> +	if (!bpf_preload_ops.info.tgid) {
> +		err = bpf_preload_ops.do_preload(objs);
> +		if (err)
> +			goto out;
> +		for (i = 0; i < BPF_PRELOAD_LINKS; i++) {
> +			links[i] = bpf_link_by_id(objs[i].link_id);
> +			if (IS_ERR(links[i])) {
> +				err = PTR_ERR(links[i]);
> +				goto out;
> +			}
> +		}
> +		for (i = 0; i < BPF_PRELOAD_LINKS; i++) {
> +			err = bpf_iter_link_pin_kernel(parent,
> +						       objs[i].link_name, links[i]);
> +			if (err)
> +				goto out;
> +			/* do not unlink successfully pinned links even
> +			 * if later link fails to pin
> +			 */
> +			links[i] = NULL;
> +		}
> +		err = bpf_preload_ops.do_finish();
> +		if (err)
> +			goto out;
> +	}
> +out:
> +	mutex_unlock(&bpf_preload_ops.lock);
> +	for (i = 0; i < BPF_PRELOAD_LINKS && err; i++)
> +		if (!IS_ERR_OR_NULL(links[i]))
> +			bpf_link_put(links[i]);
> +	return err;
> +}
> +
>   static int bpf_fill_super(struct super_block *sb, struct fs_context *fc)
>   {
>   	static const struct tree_descr bpf_rfiles[] = { { "" } };
> @@ -654,8 +732,8 @@ static int bpf_fill_super(struct super_block *sb, struct fs_context *fc)
>   	inode = sb->s_root->d_inode;
>   	inode->i_op = &bpf_dir_iops;
>   	inode->i_mode &= ~S_IALLUGO;
> +	populate_bpffs(sb->s_root);
>   	inode->i_mode |= S_ISVTX | opts->mode;
> -
>   	return 0;
>   }
>   
> @@ -705,6 +783,8 @@ static int __init bpf_init(void)
[...]
> diff --git a/kernel/bpf/preload/bpf_preload_kern.c b/kernel/bpf/preload/bpf_preload_kern.c
> new file mode 100644
> index 000000000000..cd10f291d6cd
> --- /dev/null
> +++ b/kernel/bpf/preload/bpf_preload_kern.c
> @@ -0,0 +1,85 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +#include <linux/init.h>
> +#include <linux/module.h>
> +#include <linux/pid.h>
> +#include <linux/fs.h>
> +#include <linux/sched/signal.h>
> +#include "bpf_preload.h"
> +
> +extern char bpf_preload_umd_start;
> +extern char bpf_preload_umd_end;
> +
> +static int do_preload(struct bpf_preload_info *obj)
> +{
> +	int magic = BPF_PRELOAD_START;
> +	struct pid *tgid;
> +	loff_t pos = 0;
> +	int i, err;
> +	ssize_t n;
> +
> +	err = fork_usermode_driver(&bpf_preload_ops.info);
> +	if (err)
> +		return err;
> +	tgid = bpf_preload_ops.info.tgid;
> +
> +	/* send the start magic to let UMD proceed with loading BPF progs */
> +	n = kernel_write(bpf_preload_ops.info.pipe_to_umh,
> +			 &magic, sizeof(magic), &pos);
> +	if (n != sizeof(magic))
> +		return -EPIPE;
> +
> +	/* receive bpf_link IDs and names from UMD */
> +	pos = 0;
> +	for (i = 0; i < BPF_PRELOAD_LINKS; i++) {
> +		n = kernel_read(bpf_preload_ops.info.pipe_from_umh,
> +				&obj[i], sizeof(*obj), &pos);
> +		if (n != sizeof(*obj))
> +			return -EPIPE;
> +	}
> +	return 0;
> +}
> +
> +static int do_finish(void)
> +{
> +	int magic = BPF_PRELOAD_END;
> +	struct pid *tgid;
> +	loff_t pos = 0;
> +	ssize_t n;
> +
> +	/* send the last magic to UMD. It will do a normal exit. */
> +	n = kernel_write(bpf_preload_ops.info.pipe_to_umh,
> +			 &magic, sizeof(magic), &pos);
> +	if (n != sizeof(magic))
> +		return -EPIPE;
> +	tgid = bpf_preload_ops.info.tgid;
> +	wait_event(tgid->wait_pidfd, thread_group_exited(tgid));
> +	bpf_preload_ops.info.tgid = NULL;
> +	return 0;
> +}
> +
> +static int __init load_umd(void)
> +{
> +	int err;
> +
> +	err = umd_load_blob(&bpf_preload_ops.info, &bpf_preload_umd_start,
> +			    &bpf_preload_umd_end - &bpf_preload_umd_start);
> +	if (err)
> +		return err;
> +	bpf_preload_ops.do_preload = do_preload;
> +	bpf_preload_ops.do_finish = do_finish;
> +	return err;
> +}
> +
> +static void __exit fini_umd(void)
> +{
> +	bpf_preload_ops.do_preload = NULL;
> +	bpf_preload_ops.do_finish = NULL;
> +	/* kill UMD in case it's still there due to earlier error */
> +	kill_pid(bpf_preload_ops.info.tgid, SIGKILL, 1);
> +	bpf_preload_ops.info.tgid = NULL;
> +	umd_unload_blob(&bpf_preload_ops.info);
> +}
[...]

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2020-07-22 22:16 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-07-17  4:40 [PATCH v2 bpf-next 0/4] bpf: Populate bpffs with map and prog iterators Alexei Starovoitov
2020-07-17  4:40 ` [PATCH v2 bpf-next 1/4] bpf: Add bpf_prog iterator Alexei Starovoitov
2020-07-17  4:40 ` [PATCH v2 bpf-next 2/4] bpf: Factor out bpf_link_get_by_id() helper Alexei Starovoitov
2020-07-17  4:40 ` [PATCH v2 bpf-next 3/4] bpf: Add BPF program and map iterators as built-in BPF programs Alexei Starovoitov
2020-07-17  4:40 ` [PATCH v2 bpf-next 4/4] bpf: Add kernel module with user mode driver that populates bpffs Alexei Starovoitov
2020-07-19 23:56   ` kernel test robot
2020-07-19 23:56     ` kernel test robot
2020-07-22 22:16   ` Daniel Borkmann

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.